Easy to use Python camera interface for NVIDIA Jetson

JetCam

JetCam is an easy to use Python camera interface for NVIDIA Jetson.

  • Works with various USB and CSI cameras using Jetson's Accelerated GStreamer Plugins

  • Easily read images as numpy arrays with image = camera.read()

  • Set the camera to running = True to attach callbacks to new frames

JetCam makes it easy to prototype AI projects in Python, especially within the Jupyter Lab programming environment included in JetCard.

If you find an issue, please let us know!

Setup

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install

JetCam is tested against a system configured with the JetCard setup. Different system configurations may require additional steps.
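
To confirm the install, a quick import check like the one below can help. This is an illustrative sanity check, not part of the official setup:

import jetcam
print(jetcam.__name__)  # should print "jetcam" without raising ImportError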

Usage

Below we show some usage examples. You can find more in the notebooks.

Create CSI camera

Call CSICamera to use a compatible CSI camera. capture_width, capture_height, and capture_fps control the capture resolution and the rate at which images are acquired. width and height control the final output shape of the image returned by the read function.

from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)

Create USB camera

Call USBCamera to use a compatible USB camera. The same parameters as CSICamera apply, along with a capture_device parameter that indicates the device index. You can list the available device indices by running ls /dev/video*.

from jetcam.usb_camera import USBCamera

camera = USBCamera(capture_device=1)
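
The capture shape parameters shown for CSICamera can also be passed explicitly to USBCamera; the values below are illustrative and should match a mode your camera supports:

camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)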

Read

Call read() to read the latest image as a numpy.ndarray of data type np.uint8 and shape (224, 224, 3). The color format is BGR8.

image = camera.read()

The read function also updates the camera's internal value attribute.

camera.read()
image = camera.value
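
As a quick sanity check (illustrative, not from the original notebooks), you can confirm that the returned array matches the configured shape and data type:

image = camera.read()
print(image.shape)  # expected: (224, 224, 3)
print(image.dtype)  # expected: uint8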

Callback

You can also set the camera to running = True, which will spawn a thread that acquires images from the camera. These will update the camera's value attribute automatically. You can attach a callback to the value using the traitlets library; the callback will be called with both the new and the old camera value.

camera.running = True

def callback(change):
    new_image = change['new']
    # do some processing...

camera.observe(callback, names='value')
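
To stop receiving updates, detach the callback and stop the acquisition thread. This is a minimal sketch based on the traitlets observe/unobserve pattern:

camera.unobserve(callback, names='value')
camera.running = False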

Cameras

CSI Cameras

These cameras work with the CSICamera class. Try them out by following the example notebook.

Model | Infrared | FOV (degrees) | Resolution | Cost
Raspberry Pi Camera V2 | | 62.2 | 3280x2464 | $25
Raspberry Pi Camera V2 (NOIR) | x | 62.2 | 3280x2464 | $31
Arducam IMX219 CS lens mount | | | 3280x2464 | $65
Arducam IMX219 M12 lens mount | | | 3280x2464 | $60
LI-IMX219-MIPI-FF-NANO | | | 3280x2464 | $29
WaveShare IMX219-77 | | 77 | 3280x2464 | $19
WaveShare IMX219-77IR | x | 77 | 3280x2464 | $21
WaveShare IMX219-120 | | 120 | 3280x2464 | $20
WaveShare IMX219-160 | | 160 | 3280x2464 | $23
WaveShare IMX219-160IR | x | 160 | 3280x2464 | $25
WaveShare IMX219-200 | | 200 | 3280x2464 | $27

USB Cameras

These cameras work with the USBCamera class. Try them out by following the example notebook.

Model | Infrared | FOV (degrees) | Resolution | Cost
Logitech C270 | | 60 | 1280x720 | $18

See also

  • JetBot - An educational AI robot based on NVIDIA Jetson Nano

  • JetRacer - An educational AI racecar using NVIDIA Jetson Nano

  • JetCard - An SD card image for web programming AI projects with NVIDIA Jetson Nano

  • torch2trt - An easy to use PyTorch to TensorRT converter

Comments
  • Camera works, Jetcam does not

    I am trying to get a Raspberry Pi v2 camera module working on a Jetson Xavier NX with Jetpack 4.4 installed.

    (Specifically, I want to use Jetcam because one of your other projects, https://github.com/NVIDIA-AI-IOT/trt_pose uses Jetcam in its live demo.)

    I know my camera is connected properly and working because if I run:

    gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink
    

    ... I get a video image on screen immediately, no problem.

    However, running even the most basic example (csi_camera notebook), I always get errors:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         23             if not re:
    ---> 24                 raise RuntimeError('Could not read image from camera.')
         25         except:
    
    RuntimeError: Could not read image from camera.
    
    During handling of the above exception, another exception occurred:
    
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-2-4d23bcae2fae> in <module>
          1 from jetcam.csi_camera import CSICamera
          2 
    ----> 3 camera = CSICamera(width=224, height=224, capture_width=1980, capture_height=1080, capture_fps=30)
    
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         25         except:
         26             raise RuntimeError(
    ---> 27                 'Could not initialize camera.  Please see error trace.')
         28 
         29         atexit.register(self.cap.release)
    
    RuntimeError: Could not initialize camera.  Please see error trace
    

    I've even tried the fix (hack?) suggested in https://github.com/NVIDIA-AI-IOT/jetcam/issues/12 but this makes no difference.

    Any advice on what to look for or what the issue could be?

    opened by anselanza 3
  • remove duplicate comma

    This duplicate comma causes an error on JetPack 4.3 (OpenCV 4): error opening bin: could not parse caps "video/x-raw, , format=(string)BGR". Fix #17

    opened by borongyuan 3
  • Camera cannot initialize

    Python 3.6.9 (default, Oct 8 2020, 12:12:24)
    [GCC 8.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.csi_camera import CSICamera
    >>> camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 24, in __init__
    RuntimeError: Could not read image from camera.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 27, in __init__
    RuntimeError: Could not initialize camera. Please see error trace.

    opened by wangnan31415926 1
  • cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol

    [email protected]:/usr/lib$ python3
    Python 3.6.8 (default, Jan 14 2019, 11:02:34)
    [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.usb_camera import USBCamera
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/usb_camera.py", line 3, in <module>
    ImportError: /usr/local/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol: _ZTIN2cv3dnn14dnn4_v201809175LayerE
    >>>
    
    
    

    My HW is jetson nano and SW env is

    [email protected]:/usr/lib$ jetson-release
     - NVIDIA Jetson NANO/TX1
       * Jetpack 4.2 [L4T 32.1.0]
       * CUDA GPU architecture 5.3
     - Libraries:
       * CUDA 10.0.166
       * cuDNN 7.3.1.28-1+cuda10.0
       * TensorRT 5.0.6.3-1+cuda10.0
       * Visionworks 1.6.0.500n
       * OpenCV 4.0.1 compiled CUDA: YES
     - Jetson Performance: active
    [email protected]:/usr/lib$
    
    
    opened by hgnan 0
  • Install failure

    I run sudo python3 setup.py install

    I get the following:

    /usr/local/lib/python3.8/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    running bdist_egg
    running egg_info
    writing jetcam.egg-info/PKG-INFO
    writing dependency_links to jetcam.egg-info/dependency_links.txt
    writing top-level names to jetcam.egg-info/top_level.txt
    reading manifest file 'jetcam.egg-info/SOURCES.txt'
    adding license file 'LICENSE.md'
    writing manifest file 'jetcam.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-aarch64/egg
    running install_lib
    running build_py
    creating build/bdist.linux-aarch64/egg
    creating build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/csi_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/__init__.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/usb_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/utils.py -> build/bdist.linux-aarch64/egg/jetcam
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/csi_camera.py to csi_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/__init__.py to __init__.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/usb_camera.py to usb_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/camera.py to camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/utils.py to utils.cpython-38.pyc
    creating build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    zip_safe flag not set; analyzing archive contents...
    creating 'dist/jetcam-0.0.0-py3.8.egg' and adding 'build/bdist.linux-aarch64/egg' to it
    removing 'build/bdist.linux-aarch64/egg' (and everything under it)
    Processing jetcam-0.0.0-py3.8.egg
    Removing /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Copying jetcam-0.0.0-py3.8.egg to /usr/lib/python3.8/site-packages
    jetcam 0.0.0 is already the active version in easy-install.pth
    
    Installed /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Processing dependencies for jetcam==0.0.0
    Finished processing dependencies for jetcam==0.0.0
    

    import jetcam returns ModuleNotFoundError: No module named 'jetcam'

    What am I doing wrong?

    opened by master0v 1
  • Jetbot Camera Not Working - RuntimeError: Could not initialize camera. Please see error trace.

    Hello, for some reason I can't get my camera to work again. For context, I tried to use a custom dataset from Roboflow, but then my kernel kept dying after installing roboflow. I reconfigured the correct numpy and edited my .bashrc as suggested on NVIDIA's forum, but now the camera won't initialize. I know the camera works because it used to work before; I can also save a short video with it and call it from the terminal. But whenever I try to run a cell in Jupyter that requires the camera, it fails. I've tried restarting the camera too, but no luck :( Any help would be appreciated!

    opened by niiita 1
  • Camera ON LED continues to be on unless I restart the OS.

    Hi,

    How can I close the camera after camera.unobserve(update_image, names='value')? The camera ON LED continues to be on unless I restart the OS. I am using a Logitech C270 USB camera. Is there a command to close the camera?

    opened by jam244 0
  • jetcam thread race - read thread and processing thread

    With camera.running = True, jetcam spawns a thread that reads frames into camera.value.

    Now let's say we do new_image = change['new'] in the callback and do some processing. I guess Python only assigns a reference to the original image array to new_image, so new_image and camera.value effectively point to the same memory region. Suppose my processing thread takes a very long time; in the meantime, camera.value is updated by the jetcam thread. This can cause a thread race. Is that right?

    opened by PhilipsKoshy 0
  • Cannot query video position: status=0, value=-1, duration=-1

    I tried camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0) and the reply is:

    [ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

    opened by Chenhait 0
  • AttributeError: 'directional_link' object has no attribute 'link'

    The beginning steps are OK, but camera_link.link() fails to execute and I get an error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    ----> 1 camera_link.link()

    AttributeError: 'directional_link' object has no attribute 'link'

    I don't know what the reason is.

    opened by watershade 0
Releases
v0.0.0

Owner
NVIDIA AI IOT