Stream images from a connected camera over MQTT, view using Streamlit, record to file and sqlite

Overview

mqtt-camera-streamer

Summary: Publish frames from a connected camera or MJPEG/RTSP stream to an MQTT topic, and view the feed in a browser on another computer with Streamlit.

Long introduction: A typical task in IoT/science is that you have a camera connected to one computer and you want to view the camera feed on a second computer, and perhaps preprocess the images before saving them to disk. I have always found this to be more effort than expected. In particular, working with camera streams can get quite complicated, and may lead you to experiment with tools like GStreamer and FFmpeg that have a steep learning curve. In contrast, working with MQTT is very straightforward and will already be familiar to anyone with an interest in IoT. This repo, mqtt-camera-streamer, uses MQTT to send frames from a camera over a network at a low frame rate (FPS). A viewer is provided for viewing the camera stream on any computer on the network, and frames can be saved to disk for further processing. It is also possible to set up an image processing pipeline by linking MQTT topics together: an on_message(topic) callback performs some processing and sends the processed image downstream on another topic.

Note that this is not a high-FPS solution; in practice I achieve around 1 FPS, which is practical for IoT experiments and tasks such as preprocessing (cropping, rotating) images prior to viewing them. This code is written for simplicity and ease of use, not high performance.

Installation

Install system-wide on an RPi, or on another OS use a venv to isolate your environment, then install the required dependencies:

$ (base) python3 -m venv venv
$ (base) source venv/bin/activate
$ (venv) pip3 install -r requirements.txt

Listing cameras with OpenCV

The check-opencv-cameras.py script helps you discover which cameras OpenCV can connect to on your computer (it does not work with the RPi camera). If your laptop has a built-in webcam, this will generally be listed as VIDEO_SOURCE = 0. If you plug in an external USB webcam, it takes precedence over the built-in webcam: the external camera becomes VIDEO_SOURCE = 0 and the built-in webcam becomes VIDEO_SOURCE = 1.

To check which OpenCV cameras are detected run:

$ (venv) python3 scripts/check-opencv-cameras.py
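For reference, the idea behind the script can be sketched in a few lines of OpenCV. The snippet below is an illustration only (not the repo's check-opencv-cameras.py) and simply probes the first few source indices:

import cv2  # opencv-python

# Probe the first few OpenCV video source indices and report which
# ones return a frame. Illustrative sketch only.
for index in range(4):
    cap = cv2.VideoCapture(index)
    ok, _frame = cap.read()
    print(f"VIDEO_SOURCE = {index}: {'available' if ok else 'not available'}")
    cap.release()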

Configuration using config.yml

Use the config.yml file in the config directory to configure your system. If your desired camera is listed as source 0, set video_source: 0. Alternatively you can configure the video source as an MJPEG or RTSP stream; for example, for a commercial RTSP camera you might set video_source: "rtsp://admin:[email protected]:554/11". To use an RPi camera running the web_streaming.py example, set video_source: http://pi_ip:8000/stream.mjpg

Validate the config can be loaded by running:

$ (venv) python3 scripts/validate-config.py

Note that this script does not check the accuracy of any of the values in config.yml, only that the file can be found at the expected path and that its structure is valid.

By default scripts/opencv-camera-publish.py will look for the config file at ./config/config.yml, but an alternative path can be specified using the environment variable MQTT_CAMERA_CONFIG, e.g. export MQTT_CAMERA_CONFIG=/home/pi/github/mqtt-camera-streamer/config/config.yml
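Loading the config in your own code might look like the sketch below. This is illustrative rather than the repo's actual loader, and assumes PyYAML is installed; it honours MQTT_CAMERA_CONFIG and falls back to the default path:

import os
import yaml  # PyYAML

# Minimal sketch of config loading: use MQTT_CAMERA_CONFIG if set,
# otherwise fall back to the default path used by the scripts.
config_path = os.environ.get("MQTT_CAMERA_CONFIG", "./config/config.yml")
with open(config_path) as f:
    config = yaml.safe_load(f)
print(config)  # inspect the parsed structure, then read video_source etc.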

Publish camera frames

To publish camera frames with OpenCV over MQTT:

$ (venv) python3 scripts/opencv-camera-publish.py
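Conceptually the publisher grabs a frame with OpenCV, JPEG-encodes it, and publishes the bytes with paho-mqtt. The sketch below illustrates the idea; it is not the repo's opencv-camera-publish.py, it assumes paho-mqtt 1.x, and the broker address, topic and FPS are placeholder values:

import time

import cv2
import paho.mqtt.client as mqtt

MQTT_BROKER = "192.168.1.100"               # placeholder: your broker IP
MQTT_TOPIC = "homie/mac_webcam/capture"     # placeholder topic
VIDEO_SOURCE = 0
FPS = 1                                     # deliberately low frame rate

client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.connect(MQTT_BROKER, port=1883)
client.loop_start()

cap = cv2.VideoCapture(VIDEO_SOURCE)
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    ok, jpeg = cv2.imencode(".jpg", frame)  # JPEG-encode the frame
    if ok:
        client.publish(MQTT_TOPIC, jpeg.tobytes())
    time.sleep(1 / FPS)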

Camera display

To view the camera stream with Streamlit:

$ (venv) streamlit run scripts/viewer.py

Note: if Streamlit becomes unresponsive, press ctrl-z to pause it, then kill it with kill -9 %%. Also note that the viewer can be run on any machine on your network.
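The viewer does the reverse of the publisher: subscribe to the capture topic, decode the JPEG bytes back into an image array, and hand it to st.image. A rough sketch (not the repo's viewer.py; paho-mqtt 1.x assumed, broker and topic are placeholders) that you would launch with streamlit run:

import queue

import cv2
import numpy as np
import paho.mqtt.client as mqtt
import streamlit as st

MQTT_BROKER = "192.168.1.100"               # placeholder broker IP
MQTT_TOPIC = "homie/mac_webcam/capture"     # placeholder topic

frames = queue.Queue(maxsize=1)

def on_message(client, userdata, msg):
    # Decode the JPEG payload back into a BGR image array
    frame = cv2.imdecode(np.frombuffer(msg.payload, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is not None and not frames.full():
        frames.put(frame)

client = mqtt.Client()                      # paho-mqtt 1.x style
client.on_message = on_message
client.connect(MQTT_BROKER, port=1883)
client.subscribe(MQTT_TOPIC)
client.loop_start()

st.title("MQTT camera viewer (sketch)")
placeholder = st.empty()
while True:
    placeholder.image(frames.get(), channels="BGR")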

Save frames

To save frames to disk:

$ (venv) python3 scripts/save-captures.py
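The saving logic amounts to writing each received payload to a timestamped file. A minimal sketch (not the repo's save-captures.py; broker, topic and output directory are placeholder assumptions):

from datetime import datetime
from pathlib import Path

import paho.mqtt.client as mqtt

MQTT_BROKER = "192.168.1.100"               # placeholder broker IP
MQTT_TOPIC = "homie/mac_webcam/capture"     # placeholder topic
CAPTURES_DIR = Path("captures")             # assumed output directory
CAPTURES_DIR.mkdir(exist_ok=True)

def on_message(client, userdata, msg):
    # The payload is already JPEG bytes, so write it straight to disk
    filename = CAPTURES_DIR / f"{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.jpg"
    filename.write_bytes(msg.payload)
    print(f"Saved {filename}")

client = mqtt.Client()                      # paho-mqtt 1.x style
client.on_message = on_message
client.connect(MQTT_BROKER, port=1883)
client.subscribe(MQTT_TOPIC)
client.loop_forever()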

Save frames to db

Like save-captures.py, but in addition a thumbnail of each frame is saved to a SQLite db:

$ (venv) python3 scripts/db-recorder.py
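Conceptually this is the same subscriber, but a downsized thumbnail is stored as a BLOB in a SQLite table. The sketch below is illustrative only (not the repo's db-recorder.py; the schema, thumbnail size and paths are assumptions):

import sqlite3
from datetime import datetime
from pathlib import Path

import cv2
import numpy as np
import paho.mqtt.client as mqtt

MQTT_BROKER = "192.168.1.100"               # placeholder broker IP
MQTT_TOPIC = "homie/mac_webcam/capture"     # placeholder topic
DB_PATH = "captures/records.db"             # assumed db location

Path("captures").mkdir(exist_ok=True)
db = sqlite3.connect(DB_PATH, check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS captures (timestamp TEXT, thumbnail BLOB)")

def on_message(client, userdata, msg):
    frame = cv2.imdecode(np.frombuffer(msg.payload, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        return
    thumb = cv2.resize(frame, (160, 120))   # assumed thumbnail size
    ok, jpeg = cv2.imencode(".jpg", thumb)
    if ok:
        db.execute("INSERT INTO captures VALUES (?, ?)", (datetime.now().isoformat(), jpeg.tobytes()))
        db.commit()

client = mqtt.Client()                      # paho-mqtt 1.x style
client.on_message = on_message
client.connect(MQTT_BROKER, port=1883)
client.subscribe(MQTT_TOPIC)
client.loop_forever()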

The images can be viewed using a SQLite browser.

If you wish to run a server with a UI for browsing the images, then datasette with the datasette-render-images plugin can be used:

$ (venv) pip install datasette
$ (venv) pip install datasette-render-images
$ (venv) datasette captures/records.db

Image processing pipeline

To process a camera stream (the example rotates the image):

$ (venv) python3 scripts/processing.py
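A processing node simply subscribes on one topic, transforms the frame, and publishes the result on a downstream topic. The sketch below illustrates the idea with a rotation; it is not the repo's processing.py, paho-mqtt 1.x is assumed, and the topic names simply mirror the Home Assistant example further down:

import cv2
import numpy as np
import paho.mqtt.client as mqtt

MQTT_BROKER = "192.168.1.100"                       # placeholder broker IP
TOPIC_IN = "homie/mac_webcam/capture"               # upstream topic
TOPIC_OUT = "homie/mac_webcam/capture/rotated"      # downstream topic

def on_message(client, userdata, msg):
    frame = cv2.imdecode(np.frombuffer(msg.payload, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        return
    rotated = cv2.rotate(frame, cv2.ROTATE_180)     # the example transform
    ok, jpeg = cv2.imencode(".jpg", rotated)
    if ok:
        client.publish(TOPIC_OUT, jpeg.tobytes())   # send downstream

client = mqtt.Client()                              # paho-mqtt 1.x style
client.on_message = on_message
client.connect(MQTT_BROKER, port=1883)
client.subscribe(TOPIC_IN)
client.loop_forever()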

Home Assistant

You can view the camera feed in Home Assistant by configuring MQTT cameras. Add to your configuration.yaml:

camera:
  - platform: mqtt
    topic: homie/mac_webcam/capture
    name: mqtt_camera
  - platform: mqtt
    topic: homie/mac_webcam/capture/rotated
    name: mqtt_camera_rotated
  - platform: mjpeg # the raw mjpeg feed if using picamera
    name: picamera
    mjpeg_url: http://192.168.1.134:8000/stream.mjpg

MQTT

Need an MQTT broker? If you have Docker installed I recommend eclipse-mosquitto. A basic broker can be run with:

docker run -p 1883:1883 -d eclipse-mosquitto
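To sanity-check that the broker is reachable from another machine on the network, a few lines of paho-mqtt are enough (1.x style; the broker IP is a placeholder):

import paho.mqtt.client as mqtt

client = mqtt.Client()                      # paho-mqtt 1.x style
client.connect("192.168.1.100", port=1883)  # placeholder broker IP
client.publish("test/topic", "hello")
print("Connected to broker and published OK")
client.disconnect()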

Note that I have structured the MQTT topics following the homie MQTT convention. This is not necessary, but in my opinion it is best practice.

OpenCV & Streamlit on RPi

OpenCV is used to read the images from a connected camera or MJPEG/RTSP stream. On a Raspberry Pi (RPi) installing OpenCV can be troublesome, and I found it necessary to first install a number of system libraries:

sudo apt-get install libatlas-base-dev libjasper-dev libqtgui4 python3-pyqt5 libqt4-test libilmbase-dev libopenexr-dev libgstreamer1.0-dev libavcodec58 libavformat58 libswscale5

before installing OpenCV using the instructions below. Likewise, Streamlit can be challenging to install on an RPi; if you don't need it, remove it from requirements.txt. If you do wish to install Streamlit on the RPi, see this thread for the latest guidance. On 24/3/2021 I was able to install opencv-python==4.5.1.48, but not Streamlit, on a 32-bit RPi4.

RPi camera

Use an official RPi camera and ensure picamera is installed with pip3 install picamera. If you use the RPi in desktop mode, you can check the camera feed using raspistill -o image.jpg. Use the official web_streaming example, which creates an MJPEG stream at http://pi_ip:8000/stream.mjpg. This MJPEG stream can then be configured as a source in mqtt-camera-streamer, translating the MJPEG stream into an MQTT stream.
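Before pointing mqtt-camera-streamer at the stream, you can verify that OpenCV is able to read it. The quick check below is illustrative, with pi_ip as a placeholder for your RPi's address:

import cv2

# Quick check that OpenCV can read frames from the picamera MJPEG stream
cap = cv2.VideoCapture("http://pi_ip:8000/stream.mjpg")
ok, frame = cap.read()
print("Stream readable:", ok, "frame shape:", None if frame is None else frame.shape)
cap.release()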

RPi service

You can run any of the scripts as a service, which means they will automatically start on RPi boot, and can be easily started & stopped. Create the service file in the appropriate location on the RPi using:

sudo nano /etc/systemd/system/my_script.service

Enter the following, adapted for the location of your script file and its args (this assumes you are using the system python3):

[Unit]
Description=Service for mqtt-camera-publish
After=network.target

[Service]
ExecStart=/usr/bin/python3 -u opencv-camera-publish.py
WorkingDirectory=/home/pi/github/mqtt-camera-streamer/scripts
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi

[Install]
WantedBy=multi-user.target

Once this file has been created you can start the service using: sudo systemctl start my_script.service

View the status and logs with: sudo systemctl status my_script.service

Stop the service with: sudo systemctl stop my_script.service

Restart the service with: sudo systemctl restart my_script.service

You can have the service auto-start on RPi boot using: sudo systemctl enable my_script.service

You can disable auto-start using: sudo systemctl disable my_script.service


Comments
  • ImportError: libjasper.so.1: cannot open shared object file: No such file or directory on RPi

    Getting error on RPi: ImportError: libjasper.so.1: cannot open shared object file: No such file or directory. Try:

    sudo apt-get install libatlas-base-dev
    sudo apt-get install libjasper-dev
    sudo apt-get install libqtgui4
    sudo apt-get install python3-pyqt5
    sudo apt-get install libqt4-test
    

    Still getting the error. Ran sudo apt update --fix-missing and restarted the Pi. Still getting the error, so this is clearly a cv2 issue.

    opened by robmarkcole 8
  • opencv install on rpi4 32bit

    pip3 install opencv-python>=4.4.0.46

    Error ImportError: libwebp.so.6: cannot open shared object file: No such file or directory

    That was fixed with sudo apt-get install libwebp-dev

    opened by robmarkcole 3
  • Add display

    Using Flask requires a fair amount of code:

    • https://github.com/robsmall/flask-raspi-video-streamer/blob/master/simple-mjpeg-server.py
    • https://github.com/blakeblackshear/frigate/blob/master/detect_objects.py
    • https://blog.miguelgrinberg.com/post/video-streaming-with-flask

    Using HTTPServer requires much less code:

    • https://github.com/robmarkcole/simple_mjpeg_streamer_http_server
    opened by robmarkcole 2
  • Mqtt and streamlit in docker

    Hello,

    I am working on a project which requires publishing the results of some programs through MQTT onto Streamlit. However, I have to perform this in Docker, and I think it's not straightforward to access the webcam through Docker. I am getting the following error when I run the camera.py code in Docker:

    Traceback (most recent call last):
      File "camero.py", line 50, in <module>
        main()
      File "camero.py", line 28, in main
        client.connect(MQTT_BROKER, port=MQTT_PORT)
      File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 937, in connect
        return self.reconnect()
      File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 1071, in reconnect
        sock = self._create_socket_connection()
      File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 3522, in _create_socket_connection
        return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
      File "/usr/lib/python3.6/socket.py", line 724, in create_connection
        raise err
      File "/usr/lib/python3.6/socket.py", line 713, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused

    Can you please help me resolve this? Thank you.

    opened by MauryaShraddha 1
  • python3-opencv disappeared?

    ~/mqtt-camera-streamer $ sudo apt install python3-opencv
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package python3-opencv
    
    opened by robmarkcole 1
  • Raspberry pi error: No matching distribution found for opencv-python

    On a pi:

      Could not find a version that satisfies the requirement opencv-python (from -r requirements.txt (line 3)) (from versions: )
    No matching distribution found for opencv-python (from -r requirements.txt (line 3))
    

    Solution -> sudo apt install python3-opencv and don't use venv

    opened by robmarkcole 1
  • Add script to log thumbnails to sqlite db

    As per the title: a rudimentary recording/reviewing system, allowing review of captures when the files are stored on a server. Add a Streamlit UI with a date/time picker. Show datasette usage.

    opened by robmarkcole 0
  • Bump streamlit from 0.79.0 to 1.11.1

    Bumps streamlit from 0.79.0 to 1.11.1.

    opened by dependabot[bot] 0
  • How to transmit with sound

    Hi, I tried this program and it runs at 12~14 FPS in my environment, smoothly at both 360p and 480p, which is surprising. But I found that it splits the video into images for delivery and then recombines them for display, so the sound is lost. If I need to keep the sound, how should I do it?

    opened by Visoar 0
Releases: 0.8

Owner: Robin Cole (Physics PhD, python, data science, deep learning & space)