📚 A collection of Jupyter notebooks for learning and experimenting with OpenVINO 👓

Overview

📚 OpenVINO Notebooks

🚧 Notebooks are currently in beta. We plan to publish a stable release this summer. Please submit issues on GitHub, start a discussion or join our Unofficial Developer Discord Server* to stay in touch.

A collection of ready-to-run Python* notebooks for learning and experimenting with OpenVINO developer tools. The notebooks are meant to provide an introduction to OpenVINO basics and teach developers how to leverage our APIs for optimized deep learning inference in their applications.

💻 Getting Started

The notebooks are designed to run almost anywhere — your laptop, a cloud VM, or even a Docker container. Here's what you need to get started:

  • CPU (64-bit)
  • Windows*, Linux* or macOS*
  • Python* 3.6-3.8

Before you proceed to the Installation Guide, please review the detailed System Requirements below.

⚙️ System Requirements

The table below lists the supported operating systems and Python versions required to run the OpenVINO notebooks.

Supported Operating System                                 | Python* Version (64-bit)
Ubuntu* 18.04 LTS, 64-bit                                  | 3.6, 3.7, 3.8
Ubuntu* 20.04 LTS, 64-bit                                  | 3.6, 3.7, 3.8
Red Hat* Enterprise Linux* 8, 64-bit                       | 3.6, 3.8
CentOS* 7, 64-bit                                          | 3.6, 3.7, 3.8
macOS* 10.15.x versions                                    | 3.6, 3.7, 3.8
Windows 10*, 64-bit Pro, Enterprise or Education editions  | 3.6, 3.7, 3.8
Windows Server* 2016 or higher                             | 3.6, 3.7, 3.8

📝 Installation Guide

NOTE: If OpenVINO is installed globally, please do not run any of these commands in a terminal where setupvars.bat or setupvars.sh are sourced. For Windows, we recommend using Command Prompt (cmd.exe), not PowerShell.

Step 1: Clone the Repository

git clone https://github.com/openvinotoolkit/openvino_notebooks.git

Step 2: Create a Virtual Environment

# Linux and macOS may require typing python3 instead of python
cd openvino_notebooks
python -m venv openvino_env

Step 3: Activate the Environment

For Linux and macOS:

source openvino_env/bin/activate

For Windows:

openvino_env\Scripts\activate

Step 4: Install the Packages

This step installs OpenVINO and dependencies such as JupyterLab:

# Upgrade pip to the latest version.
# Use pip's legacy dependency resolver to avoid dependency conflicts
python -m pip install --upgrade pip
pip install -r requirements.txt --use-deprecated=legacy-resolver

Step 5: Install the virtualenv Kernel in Jupyter

python -m ipykernel install --user --name openvino_env

Step 6: Launch the Notebooks!

# To launch a single notebook
jupyter notebook <notebook_filename>

# To launch all notebooks in Jupyter Lab
jupyter lab notebooks

In Jupyter Lab, select a notebook from the file browser using the left sidebar. Each notebook is located in a subdirectory within the notebooks directory.

🧹 Cleaning Up

Shut Down Jupyter Kernel

To end your Jupyter session, press Ctrl-C in the terminal. When prompted with Shutdown this Jupyter server (y/[n])?, enter y and press Enter.

Deactivate Virtual Environment

To deactivate your virtualenv, simply run deactivate from the terminal window where you activated openvino_env.

To reactivate your environment, repeat Step 3 of the Installation Guide.

Delete Virtual Environment (Optional)

To remove your virtual environment, simply delete the openvino_env directory:

On Linux and macOS:

rm -rf openvino_env

On Windows:

rmdir /s openvino_env

Remove openvino_env Kernel from Jupyter

jupyter kernelspec remove openvino_env

⚠️ Troubleshooting

  • On Ubuntu, if you see the error "libpython3.7m.so.1.0: cannot open shared object file: No such file or directory", install the required package with apt install libpython3.7-dev.

  • If you get an ImportError, double-check that you installed the kernel in Step 5. If necessary, choose the openvino_env kernel from the Kernel -> Change Kernel menu in Jupyter.

  • On Linux and macOS you may need to type python3 instead of python when creating your virtual environment

  • On Linux and macOS you may need to install pip and/or python-venv (depending on your Linux distribution)

  • On Windows, if you have installed multiple versions of Python, use py -3.7 when creating your virtual environment to specify a supported version (in this case 3.7)

  • On Fedora*, Red Hat and Amazon* Linux you may need to install OpenGL (Open Graphics Library) libraries to use OpenCV. Please run yum install mesa-libGL before launching the notebooks.

  • For macOS systems with Apple* M1, please see community discussion about using Rosetta* 2.


* Other names and brands may be claimed as the property of others.

Comments
  • 406 Human Pose Estimation 3D

    3D Human Pose Estimation with OpenVINO

    This is a 3D multi-person pose estimation demo. The Intel OpenVINO™ backend can be used for fast inference on CPU. It is based on the Lightweight OpenPose and Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB papers.
    The implementation starts with the ideas I originally wrote about in my blog. There are two options in this pull request: one uses WebGL, which interacts with the browser; the other uses OpenCV, which has fewer dependencies and implements a basic 3D visualization library.

    406-human-pose-estimation-3d

    threejs: This demo allows you to use the mouse to change the angle from which you view an object.

    406-opencv-human-pose-estimation-3d

    OpenCV: This example allows you to use the keyboard to move the camera and press ESC to exit. (You need to set use_popup=True first.)

    new notebook 
    opened by spencergotowork 34
  • Added pose estimation live demo

    I fixed a lot of things, added documentation, and removed big files from the git history. Hence, I created a new PR.

    The picture has the proper license, as it comes from COCO: https://cocodataset.org/#explore?id=166392

    new notebook 
    opened by adrianboguszewski 22
  • 222 Image Colorization using OpenVINO model tutorial notebook

    This PR adds a demo notebook for grayscale image colorization using the colorization-v2 model from Open Model Zoo.

    Pending Tasks:

    • [x] - ~~Handle video input to colorize~~
    • [x] - Add explanation (markdown) to the notebook cells
    • [x] - Complete README.md
    • [x] - Follow up with suggestions and reviews
    gsoc wip 
    opened by Davidportlouis 18
  • Add comparison of INT8 and FP32 models

    Added the following features to the PyTorch and TensorFlow quantization-aware training notebooks:

    • fine-tuning of the FP32 model in the same way as the INT8 model is fine-tuned
    • accuracy comparison between the fine-tuned INT8 and fine-tuned FP32 models

    Note: nbval fails; however, it also seems to fail for the main branch.

    opened by nikita-savelyevv 15
  • Add PaddleGAN AnimeGAN notebook

    AnimeGAN notebook with model from https://github.com/PaddlePaddle/PaddleGAN

    Convert PaddleGAN model to ONNX and then to IR, and show inference results.

    PaddlePaddle requirements are installed in the notebook with !pip. This requires that users have activated the openvino_env environment and kernel, which they do if they follow our instructions.

    Converting this model was not completely straightforward. I added some steps to the notebook that show how to go about this, for example running predictor.run?? to show the source of the function and see how to preprocess and postprocess the model output.

    This is a Draft PR - a README should be added and the descriptions in the notebook should be updated before merging.

    The notebook currently fails in the CI for Windows. I'll look into that - it seems to be a resource issue. It works on my Windows laptop.

    opened by helena-intel 15
  • ssdlite_mobilenet_v2.xml cannot be opened!

    Describe the bug
    I followed notebook 401-object-detection and it works. Then I wanted to reuse the converted model within a Python script with the same code:

    ie_core = Core()
    model = ie_core.read_model(model=root + converted_model_path)

    where "root" is the path to openvino_notebooks.

    But I get: openvino_notebooks/notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.xml cannot be opened!
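
    A minimal sketch (not the notebook's code) that rules out working-directory issues by resolving the model path explicitly before reading it; the repository location below is an assumed placeholder:

    from pathlib import Path
    from openvino.runtime import Core

    # Assumed location of the cloned repository; adjust for your setup.
    root = Path.home() / "openvino_notebooks"
    model_xml = root / "notebooks/401-object-detection-webcam/model/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.xml"

    core = Core()
    print(model_xml.exists())  # quick check that the path actually resolves
    # read_model looks for the .bin weights next to the .xml by default.
    model = core.read_model(model=str(model_xml))
    compiled_model = core.compile_model(model, "CPU")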

    Expected behavior
    I hope I can reuse the converted model from my script.

    Screenshots
    If applicable, add screenshots to help explain your problem.

    Installation instructions (Please mark the checkbox)
    [x] I followed the installation guide at https://github.com/openvinotoolkit/openvino_notebooks#-installation-guide to install the notebooks. I did it twice!

    Environment information
    Pip version: 22.1
    OpenVINO source: /home/fenaux/openvino_env/lib/python3.9/site-packages/openvino
    OpenVINO IE version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
    OpenVINO environment activated: OK
    Jupyter kernel installed for openvino_env: NOT OK
    Python version: 3.9 OK
    OpenVINO pip package installed: OK
    OpenVINO import succeeds: OK
    OpenVINO development tools installed: OK
    OpenVINO not installed globally: OK
    No broken requirements: OK

    Thanks for your help

    opened by fenaux 13
  • Webcam Hello World

    Here is a webcam version of hello world. It uses the same model as 001-hello-world, but we use a webcam feed as the input. The main issue is how we can do CI with this at all, which is why I'm thinking we have to put this under the 4xx series, as it will have hardware dependencies.

    However, I will push a pull request here to see what we think about it, and so at least I have this somewhere. :)
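
    For context, a minimal webcam capture loop with OpenCV looks like the sketch below; the device index and ESC handling are assumptions, not the notebook's exact code:

    import cv2

    # Minimal webcam capture loop; device index 0 is an assumption.
    cap = cv2.VideoCapture(0)
    try:
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            # In the notebook, each frame would be preprocessed and passed to the model here.
            cv2.imshow("webcam", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # ESC to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()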

    opened by raymondlo84 12
  • 223-text-prediction

    Interactive Text Prediction with OpenVINO

    This is a demo for text prediction using the GPT-2 model. The complete pipeline of this demo's notebook is shown below.

    [pipeline diagram]


    This is an interactive demonstration in which the user can type text into the input bar and generate predicted text. This procedure can be repeated as many times as the user desires.
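
    A minimal sketch of that interaction loop; generate_text is a hypothetical placeholder for the notebook's GPT-2 inference pipeline (tokenize, run OpenVINO inference, decode):

    # Repeat-until-quit interaction; generate_text is a hypothetical callable.
    def interact(generate_text):
        while True:
            prompt = input("Enter text (empty line to quit): ")
            if not prompt:
                break
            print(generate_text(prompt))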

    gsoc wip 
    opened by dwipddalal 11
  • [GSOC] 226-yolo-v4-tf object detection notebook.

    A notebook that implements yolo-v4-tiny-tf and yolo-v4-tf. Compared to the 401 object detection notebook, changes had to be made to the output processing to find bounding boxes, and to resize the image while preserving the aspect ratio for improved performance.

    What's left: Some documentation / explanation.

    Something I found that needs to be confirmed: Cx, Cy (cell index) and w, h (bounding box width/height) from the documentation need to have their order changed to Cy, Cx and h, w respectively. The converted model's input documentation should also read B, H, W, C instead of B, C, H, W, which gives an error. Whether the input is BGR or RGB isn't too clear yet, considering the model goes by the original input for its dimensions.

    Edit: the new documentation is consistent with the current inputs I have, so BHWC is correct (using BGR).
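
    To make the layout discussion concrete, here is a minimal preprocessing sketch that produces a B, H, W, C input in BGR order while preserving the aspect ratio; the 416x416 input size and padding value are assumptions, not taken from the notebook:

    import cv2
    import numpy as np

    def preprocess(image_path: str, size: int = 416) -> np.ndarray:
        """Resize with the aspect ratio preserved and return a (1, H, W, C) BGR array."""
        image = cv2.imread(image_path)         # OpenCV loads images as BGR
        h, w = image.shape[:2]
        scale = size / max(h, w)               # keep the aspect ratio
        resized = cv2.resize(image, (int(w * scale), int(h * scale)))
        padded = np.full((size, size, 3), 114, dtype=np.uint8)  # gray padding (assumed value)
        padded[:resized.shape[0], :resized.shape[1]] = resized
        return np.expand_dims(padded, 0)       # B, H, W, C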

    gsoc wip 
    opened by thavens 11
  • Known Issues with OpenVINO 2022.3 + OpenVINO Notebooks

    Here is a list of known issues when using OpenVINO 2022.3 with OpenVINO Notebooks. You can build and obtain 2022.3 from here:

    https://github.com/openvinotoolkit/openvino/wiki

    Known issues (Ubuntu 22.04 + Python 3.10):

    1. Python 3.10 and the torch 1.8.1 dependency conflict. (ERROR: Could not find a version that satisfies the requirement torch==1.8.1+cpu)
    2. PaddlePaddle 2.2 is also conflicting/missing. (ERROR: Could not find a version that satisfies the requirement paddlepaddle==2.2.*)
    3. TensorFlow 2.5.3 is unavailable. (ERROR: Could not find a version that satisfies the requirement tensorflow==2.5.3)
    opened by raymondlo84 10
  • Fix Deprecation/Future Warnings in Notebook 211-Speech-to-Text

    In the committed version, imports are at the top of the notebook.

    • librosa.filters.mel in audio_to_mel

      FutureWarning: Pass sr=16000, n_fft=512 as keyword args. From version 0.10 passing these as positional arguments will result in an error.
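
      For reference, a minimal sketch of the keyword-argument call that avoids the warning; the n_mels value is an assumption, not taken from the notebook:

      import librosa

      # Positional sr/n_fft will become an error in librosa 0.10; pass them as keywords.
      mel_basis = librosa.filters.mel(sr=16000, n_fft=512, n_mels=64)
      print(mel_basis.shape)  # (n_mels, 1 + n_fft // 2) -> (64, 257)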

    Question

    The original pull request did NOT add librosa (the audio analysis package used here) to requirements.txt or .docker/Pipfile. Is that on purpose? Should I explain how to install it in the notebook?

    opened by YDX-2147483647 10
  • not able to read my custom model .xml file

    I'm using the 226 YOLOv7 optimization notebook.

    I trained my model using yolov7x.cfg, which has 40 classes, and made all the respective changes to the model.

    I am able to generate the .onnx and .xml files, and inference works, but when converting the model to INT8 format I'm not able to load it:

    from openvino.runtime import Core
    core = Core()
    # read converted model
    model = core.read_model('model/best_veh_withbgnew.xml')
    # load model on CPU device
    compiled_model = core.compile_model(model, 'CPU')
    
    

    I'm getting this error

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-10-db161e3ad74f> in <module>
          2 core = Core()
          3 # read converted model
    ----> 4 model = core.read_model('model/best_veh_withbgnew.xml')
          5 # load model on CPU device
          6 compiled_model = core.compile_model(model, 'CPU')
    
    RuntimeError: Check 'false' failed at C:\Jenkins\workspace\private-ci\ie\build-windows-vs2019\b\repos\openvino\src\frontends\common\src\frontend.cpp:54:
    Converting input model
    Incorrect weights in bin file!
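
    The error points at the weights (.bin) file. As a quick check (a sketch, not the notebook's code), read_model can be given the weights path explicitly so it is obvious which .bin file is paired with the .xml:

    from openvino.runtime import Core

    core = Core()
    # By default the .bin with the same name next to the .xml is loaded;
    # passing weights explicitly makes the pairing visible.
    model = core.read_model(model='model/best_veh_withbgnew.xml',
                            weights='model/best_veh_withbgnew.bin')
    compiled_model = core.compile_model(model, 'CPU')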
    
    
    opened by akashAD98 1
  • 226-yolov7-optimization on Ubuntu

    When I run this notebook on Ubuntu, with a successful setup of the virtual env and requirements.txt install, the kernel dies on my machine halfway through every time. Would you have any tips to try?

    It's this block of code towards the end. Where it does run, I can see the progress go from 0 to 100%, but after 100% is reached the kernel dies and I can't make it any further.

    mp, mr, map50, map, maps, num_images, labels = test(data=data, model=compiled_model, dataloader=dataloader, names=NAMES)
    # Print results
    s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Labels', 'Precision', 'Recall', 'mAP@.5', 'mAP@.5:.95')
    print(s)
    pf = '%20s' + '%12i' * 2 + '%12.3g' * 4  # print format
    print(pf % ('all', num_images, labels, mp, mr, map50, map))
    

    Any options to try greatly appreciated.

    opened by bbartling 22
  • Duplicated images in the repository

    I found there are many duplicated files in the repository, e.g. coco.jpg. This increases cloning time and space usage. It would be good to create a "central directory" with images and videos to use across all notebooks.

    I propose:

    1. Create the "data" dir in the root dir
    2. Move all images and videos from specific notebooks, remove duplicates
    3. Update links to media in all notebooks
    4. Update contributing guide
    enhancement 
    opened by adrianboguszewski 1