This repository was archived by the owner on Sep 30, 2024. It is now read-only.

Extended Support of Smart Video Workshop for IoT Devcloud #35

Open · wants to merge 1 commit into base: master
58 changes: 29 additions & 29 deletions README.md
@@ -1,52 +1,52 @@
# Optimized Inference at the Edge with Intel® Tools and Technologies
This workshop will walk you through a computer vision workflow using the latest Intel® technologies and comprehensive toolkits including support for deep learning algorithms that help accelerate smart video applications. You will learn how to optimize and improve performance with and without external accelerators and utilize tools to help you identify the best hardware configuration for your needs. This workshop will also outline the various frameworks and topologies supported by Intel® accelerator tools.

## How to Get Started
> :warning: For the in-class training, the hardware and software setup has already been done on the workshop hardware. In-class training participants should proceed directly to the Workshop Agenda section.

To use this workshop content, you will need to set up your hardware and install the Intel® Distribution of OpenVINO™ toolkit for running inference on your computer vision applications.
### 1. Hardware requirements
The hardware requirements are listed in the System Requirements section of the [install guide](https://software.intel.com/en-us/articles/OpenVINO-Install-Linux).

### 2. Operating System
These labs have been validated on Ubuntu* 16.04 OS.
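You can confirm which release your machine is running with:

    lsb_release -a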

### 3. Software installation steps
#### a). Install Intel® Distribution of OpenVINO™ toolkit
Use the steps described in the [install guide](https://software.intel.com/en-us/articles/OpenVINO-Install-Linux) to install the Intel® Distribution of OpenVINO™ toolkit, configure the Model Optimizer, run the demos, and complete the additional steps for installing the Intel® Media SDK and OpenCL™ mentioned in the guide.

#### b). Install required packages
    sudo apt install git
    sudo apt install python3-pip
    sudo apt install libgflags-dev
    sudo pip3 install opencv-python
    sudo pip3 install cogapp

#### c). Run the demo scripts and compile samples
Delete the $HOME/inference_engine_samples folder if it already exists.

    rm -rf $HOME/inference_engine_samples

Run the demo scripts (either one, or both if you want to run both demos), which will generate the folder $HOME/inference_engine_samples with the current Intel® Distribution of OpenVINO™ toolkit build.

    cd /opt/intel/openvino/deployment_tools/demo
    ./demo_squeezenet_download_convert_run.sh
    ./demo_security_barrier_camera.sh

    sudo chown -R username.username $HOME/inference_engine_samples_build
    cd $HOME/inference_engine_samples_build
    make
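Assuming the default build layout for this toolkit version, the compiled sample binaries land under intel64/Release inside the build folder; printing a demo's help text is a quick way to confirm the build succeeded (the demo named here is one of those built by the scripts above):

    $HOME/inference_engine_samples_build/intel64/Release/security_barrier_camera_demo -h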

#### d). Download models using the Model Downloader script in the Intel® Distribution of OpenVINO™ toolkit install folder
- Install python3 (version 3.5.2 or newer)
- Install the pyyaml and requests modules with the command:

        sudo -E pip3 install pyyaml requests

- Run the Model Downloader script to download example deep learning models:

        cd /opt/intel/openvino/deployment_tools/tools/model_downloader
        sudo python3 downloader.py --name mobilenet-ssd,ssd300,ssd512,squeezenet1.1,face-detection-retail-0004,face-detection-retail-0004-fp16,age-gender-recognition-retail-0013,age-gender-recognition-retail-0013-fp16,head-pose-estimation-adas-0001,head-pose-estimation-adas-0001-fp16,emotions-recognition-retail-0003,emotions-recognition-retail-0003-fp16,facial-landmarks-35-adas-0002,facial-landmarks-35-adas-0002-fp16
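If you want to check which model names the downloader accepts before starting the download (a quick sanity check; the exact output format can differ between toolkit versions), you can print its catalog:

        python3 downloader.py --print_all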

@@ -96,22 +96,22 @@ sudo chown username.username -R /opt/intel/workshop/

7. It opens in the default browser; locate the required Jupyter Notebook (.ipynb) file and double-click it to open and run it.

> :warning: This workshop content has been validated with Intel® Distribution of OpenVINO™ toolkit version R1 (openvino_toolkit_2019.1.094).

## Workshop Agenda
* **Smart Video/Computer Vision Tools Overview**
- Slides - [Introduction to Smart Video Tools](./presentations/01-Introduction-to-Intel-Smart-Video-Tools.pdf)

* **Training a Deep Learning Model**
- Slides - [Training a Deep Learning Model](./presentations/DL_training_model.pdf)
- Lab - Training a Deep Learning Model [[Default](./dl-model-training/README.md)] [[Python](./dl-model-training/Python/Deep_Learning_Tutorial.ipynb)]

* **Basic End to End Object Detection Inference Example**
- Slides - [Basic End to End Object Detection Example](./presentations/02-03_Basic-End-to-End-Object-Detection-Example.pdf)
- Lab Setup - [Lab Setup Instructions](./Lab_setup.md)
- Lab - Basic End to End Object Detection Example [[C++](./object-detection/README.md)] [[Python](./object-detection/Python/basic_end_to_end_object_detection.ipynb)] [[Devcloud](./object-detection/Devcloud/basic_end_to_end_object_detection.ipynb)]
- Lab - TensorFlow* example [[C++](./advanced-video-analytics/tensor_flow.md)] [[Python](./object-detection/Python/Tensor_Flow_example.ipynb)]
- Lab - [Object Detection with YOLOv3* model](./object-detection/README_yolov3.md)

@@ -120,15 +120,15 @@ sudo chown username.username -R /opt/intel/workshop/

* **HW Acceleration with Intel® Movidius™ Neural Compute Stick**
- Lab - HW Acceleration with Intel® Movidius™ Neural Compute Stick [[C++](./HW-Acceleration-with-Movidious-NCS/README.md)] [[Python](./HW-Acceleration-with-Movidious-NCS/Python/HW_Acceleration_with_Movidius_NCS.ipynb)]

* **FPGA Inference Accelerator**
- Slides - [HW Acceleration with Intel® FPGA](./presentations/FPGA.pdf)

* **Optimization Tools and Techniques**
- Slides - [Optimization Tools and Techniques](./presentations/04-05_Optimization_and_advanced_analytics.pdf)
- Lab 1 - Optimization Tools and Techniques [[C++](./optimization-tools-and-techniques/README.md)] [[Python](./optimization-tools-and-techniques/Python/optimization_tools_and_techniques.ipynb)]
- Lab 2 - [Intel® VTune™ Amplifier tutorial](./optimization-tools-and-techniques/README_VTune.md)

* **Advanced Video Analytics**
- Lab - Multiple models usage example [[C++](./advanced-video-analytics/multiple_models.md)] [[Python](./advanced-video-analytics/Python/advanced_video_analytics.ipynb)]
<!----
@@ -139,23 +139,23 @@ sudo chown username.username -R /opt/intel/workshop/
* **Implement Custom Layers for Inference on CPU and Integrated GPU**
- Slides - [Custom Layer](./presentations/custom_layer.pdf)
- Lab - [Custom Layer](./custom-layer/README.md)

* **Additional Examples - Reference Implementations**
- Industrial
- [Safety Gear Detection](./safety-gear-example/README.md)
- [Restricted Zone Notifier](https://github.com/intel-iot-devkit/restricted-zone-notifier-cpp)
- [Object Size Detector](https://github.com/intel-iot-devkit/object-size-detector-cpp)
- Retail
- [Store Traffic Monitor](https://github.com/intel-iot-devkit/store-traffic-monitor)
- [Shopper Gaze Monitor](https://github.com/intel-iot-devkit/shopper-gaze-monitor-cpp)
<!--
* **Workshop Survey**
- [Workshop Survey](https://idz.qualtrics.com/jfe/form/SV_a9GvOxtOrOziykB)
- [Custom Layer Tutorial Survey](https://intelemployee.az1.qualtrics.com/jfe/form/SV_1ZjOKaEIQUM5FpX)
- [Embedded Vision Summit Workshop Survey](https://intel.az1.qualtrics.com/jfe/form/SV_6RsCwmj6QGD3PAF)
-->
> #### Disclaimer

> Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

> *Other names and brands may be claimed as the property of others.
5 changes: 5 additions & 0 deletions object-detection/Devcloud/README.md
@@ -0,0 +1,5 @@
## Extend the support of Smart Video Workshop for IoT DevCloud
### Lab - Basic End to End Object Detection Example
1. Steps to run the Lab - Basic End to End Object Detection on the IoT DevCloud:
- Download the basic_end_to_end_object_detection.ipynb file and replace the existing file in the $HOME/Reference-samples/smart-video-workshop/object-detection/Python/ folder with it.
- Download the updated tutorial1.py and ROIviewer.py files and replace the existing Python files in the same folder with them.
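Once the replaced notebook is in place, it runs inference by submitting jobs to DevCloud edge nodes through the job queue instead of executing locally. A minimal sketch of that submission flow, assuming the usual IoT DevCloud qsub workflow (the job script name and node properties below are placeholders, not taken from this PR):

    # Submit a job script to an edge node with the requested properties;
    # -F passes arguments (here: an output directory and a target device) to the script.
    qsub object_detection_job.sh -l nodes=1:tank-870:i5-6500te -F "results/ CPU"
    # Poll the queue until the job leaves it.
    qstat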
139 changes: 139 additions & 0 deletions object-detection/Devcloud/ROIviewer.py
@@ -0,0 +1,139 @@
#!/usr/bin/env python
"""
Copyright (c) 2019 Intel Corporation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

import sys
import os
from argparse import ArgumentParser
import cv2
import logging as log
import collections



def build_argparser():
    parser = ArgumentParser()
    parser.add_argument("-i", "--input",
                        help="Path to video file or image. 'cam' for capturing video stream from camera",
                        required=True, type=str)
    parser.add_argument("-l", "--labels", help="Labels mapping file", required=True, type=str)
    parser.add_argument("--ROIfile", help="Path to ROI file.", default="ROIs.txt", type=str)
    parser.add_argument("-b", help="Batch number of the ROIs to display", default=0, type=int)
    parser.add_argument('-o', '--output_dir',
                        help='Location to store the results of the processing',
                        default=None, required=True, type=str)
    return parser

class ROI_data_type:
    framenum = ""
    labelnum = ""
    confidence = ""
    xmin = ""
    ymin = ""
    xmax = ""
    ymax = ""

def main():
    log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)
    args = build_argparser().parse_args()
    batch = args.b
    ROIs = collections.deque()
    assert os.path.isfile(args.ROIfile), "Specified ROI file doesn't exist"

    # Read every ROI record belonging to the requested batch into a queue.
    with open(args.ROIfile, 'r') as fin:
        for l in fin:
            R = ROI_data_type()
            batchnum, R.framenum, R.labelnum, R.confidence, R.xmin, R.ymin, R.xmax, R.ymax = l.split()
            if int(batchnum) == batch:
                ROIs.append(R)

    if args.input == 'cam':
        input_stream = 0
    else:
        input_stream = args.input
        assert os.path.isfile(args.input), "Specified input file doesn't exist"

    cap = cv2.VideoCapture(input_stream)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(cap.get(cv2.CAP_PROP_FPS))
    # 0x00000021 is the FOURCC value for the H.264 codec used for the .mp4 output.
    out = cv2.VideoWriter(os.path.join(args.output_dir, "cars_output.mp4"), 0x00000021, fps, (width, height))

    if not cap.isOpened():
        print("could not open input video file")
    framenum = 0
    if len(ROIs) > 1:
        R = ROIs[0]
    else:
        print("empty ROI file")
        return
    if args.labels:
        with open(args.labels, 'r') as f:
            labels_map = [x.strip() for x in f]
    else:
        labels_map = None

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        ncols = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
        nrows = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
        # Drop ROI records that belong to frames we have already passed.
        while int(R.framenum) < framenum:
            if len(ROIs) > 1:
                ROIs.popleft()
                R = ROIs[0]
            else:
                break
        # Draw every ROI recorded for the current frame.
        while int(R.framenum) == framenum:
            # ROI coordinates are stored normalized; scale them to pixel positions.
            xmin = int(float(R.xmin) * float(ncols))
            ymin = int(float(R.ymin) * float(nrows))
            xmax = int(float(R.xmax) * float(ncols))
            ymax = int(float(R.ymax) * float(nrows))

            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 4, 16, 0)

            if not labels_map:
                templabel = str(int(float(R.labelnum))) + ":" + str(int(float(R.confidence) * 100.0))
                print(templabel)
            else:
                templabel = str(labels_map[int(float(R.labelnum))]) + ":" + str(int(float(R.confidence) * 100.0))

            # Gray background strip behind the label text.
            cv2.rectangle(frame, (xmin, ymin + 32), (xmax, ymin), (155, 155, 155), -1, 0)
            cv2.putText(frame, templabel, (xmin, ymin + 24), cv2.FONT_HERSHEY_COMPLEX, 1.1, (0, 0, 0), 3)

            if len(ROIs) > 1:
                ROIs.popleft()
                R = ROIs[0]
            else:
                break
        out.write(frame)
        # cv2.imshow("Detection Results", frame)
        if cv2.waitKey(30) >= 0:
            break
        if len(ROIs) <= 1:
            break
        framenum += 1
    cap.release()
    out.release()


if __name__ == '__main__':
    main()
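For reference, a typical invocation of this viewer looks like the following (the video and label file names here are illustrative, not mandated by the script):

    python3 ROIviewer.py -i cars_1900.mp4 -l pascal_voc_classes.txt --ROIfile ROIs.txt -o results

It overlays the boxes recorded in the ROI file onto the input video and writes results/cars_output.mp4.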
