ObjectDetNet is an easy, flexible, open-source object detection framework

Overview

Getting started with the ObjectDetNet

ObjectDetNet is an easy, flexible, open-source object detection framework that lets you train, resume and prototype training sessions, run inference, and flexibly work with checkpoints in a production-grade environment.

Quick Start

Copy and paste this into your command line:

# run in docker
docker run --rm -it --init --runtime=nvidia --ipc=host -e NVIDIA_VISIBLE_DEVICES=0 buffalonoam/zazu-image:0.3 bash

mkdir data
cd data
git clone https://github.com/dataloop-ai/tiny_coco.git
cd ..
git clone https://github.com/dataloop-ai/ObjectDetNet.git
cd ObjectDetNet
python main.py --train

After training, just run:

python main.py --predict
# or, to predict a single item:
python main.py --predict_single

To change the data you run on or the parameters of your model, just update the example_checkpoint.pt file!

At the core of the ObjectDetNet framework is the checkpoint object. The checkpoint object is a json or pt styled file to be loaded into Python as a dictionary. Checkpoint objects aren't just used for training; they're also necessary for running inference. Below is an example of how a checkpoint object might look.

├── {} devices
│   ├── {} gpu_index
│       ├── 0
├── {} model_specs
│   ├── {} name
│       ├── retinanet
│   ├── {} training_configs
│       ├── {} depth
│           ├── 152
│       ├── {} input_size
│       ├── {} learning_rate
│   ├── {} data
│       ├── {} home_path
│       ├── {} annotation_type
│           ├── coco
│       ├── {} dataset_name
├── {} hp_values
│       ├── {} learning_rate
│       ├── {} tuner/epochs
│       ├── {} tuner/initial_epoch
├── {} labels
│       ├── {} 0
│           ├── Rodent
│       ├── {} 1
│       ├── {} 2
├── {} metrics
│       ├── {} val_accuracy
│           ├── 0.834
├── {} model
├── {} optimizer
├── {} scheduler
├── {} epoch
│       ├── 18
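
Since the checkpoint is just a dictionary, a .pt checkpoint can be opened and edited directly in Python. Here is a minimal sketch, assuming example_checkpoint.pt from the Quick Start was saved with torch.save and that PyTorch is installed; the home_path value below is purely illustrative:

import torch

# Load the checkpoint file into an ordinary Python dictionary.
# map_location='cpu' lets you inspect it without a GPU.
checkpoint = torch.load('example_checkpoint.pt', map_location='cpu')

# Top-level keys: devices, model_specs, hp_values, labels, metrics, ...
print(list(checkpoint.keys()))

# Point the run at different data, e.g. the tiny_coco clone from the Quick Start.
checkpoint['model_specs']['data']['home_path'] = '../data/tiny_coco'
torch.save(checkpoint, 'example_checkpoint.pt')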

For training, your checkpoint dictionary must have the following keys:

  • devices - gpu index on which all tensors are placed
  • model_specs - contains 3 fields
    1. name
    2. training_configs
    3. data

To resume training you'll also need the following keys (a short sketch follows the list):

  • model - contains state of model weights
  • optimizer - contains state of optimizer
  • scheduler - contains state of scheduler
  • epoch - to know what epoch to start from
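
As a rough illustration of how these resume fields map onto PyTorch state, here is a hedged sketch; the model, optimizer and scheduler below are generic stand-ins, not the ObjectDetNet model itself:

import torch
from torch import nn, optim

# Illustrative stand-ins for whatever you are actually training.
model = nn.Linear(4, 2)
optimizer = optim.Adam(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10)
epoch = 18

# The resume keys hold the state dicts plus the epoch to start from.
resume_fields = {
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
    'epoch': epoch,
}
torch.save(resume_fields, 'resume_fields.pt')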

To run inference, your checkpoint will need (a minimal example follows the list):

  • model_specs
  • labels

If you'd like to customize by adding your own model, check out Adding a Model.

Feel free to reach out with any questions:

WeChat: BuffaloNoam
Line: buffalonoam
WhatsApp: +972524226459

References

Thank you to the repositories whose code contributed to ObjectDetNet.
