Object Pose Estimation Demo

A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. The trained model is then deployed in a simulated robotic pick-and-place task.

Overview

This tutorial will go through the steps necessary to perform pose estimation with a UR3 robotic arm in Unity. You'll gain experience integrating ROS with Unity, importing URDF models, collecting labeled training data, and training and deploying a deep learning model. By the end of this tutorial, you will be able to perform pick-and-place with a robot arm in Unity, using computer vision to perceive the object the robot picks up.

Want to skip the tutorial and run the full demo? Check out our Quick Demo.

Want to skip the tutorial and focus on collecting training data for the deep learning model? Check out our Quick Data-Collection Demo.

Note: This project has been developed with Python 3 and ROS Noetic.

Table of Contents


Part 1: Create Unity Scene with Imported URDF

This part includes downloading and installing the Unity Editor, setting up a basic Unity scene, and importing a robot. We will import the UR3 robot arm using the URDF Importer package.


Part 2: Setup the Scene for Data Collection

This part focuses on setting up the scene for data collection using the Unity Computer Vision Perception Package. You will learn how to use Perception Package Randomizers to randomize aspects of the scene in order to create variety in the training data.

If you would like to learn more about Randomizers, and apply domain randomization to this scene more thoroughly, check out our further exercises for the reader here.
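To give a flavor of what these Randomizers do, here is a minimal sketch in plain Python: each data-collection iteration samples a fresh pose for the target object from fixed ranges, which is what produces the variety in the captured dataset. The ranges, names, and `Pose` class are illustrative only, not the Perception Package API.

```python
import random
from dataclasses import dataclass

@dataclass
class Pose:
    """Position (meters) and a yaw rotation (degrees) for the target cube."""
    x: float
    y: float
    z: float
    yaw: float

def sample_random_pose(rng: random.Random) -> Pose:
    # Each iteration draws a fresh pose from fixed ranges, mirroring what
    # a placement/rotation Randomizer does once per captured frame.
    return Pose(
        x=rng.uniform(-0.3, 0.3),    # lateral position on the table
        y=0.0,                       # cube rests on the table surface
        z=rng.uniform(0.2, 0.5),     # distance from the robot base
        yaw=rng.uniform(0.0, 360.0)  # orientation about the vertical axis
    )

rng = random.Random(42)
poses = [sample_random_pose(rng) for _ in range(1000)]
```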


Part 3: Data Collection and Model Training

This part includes running data collection with the Perception Package, and using that data to train a deep learning model. The training step can take some time. If you'd like, you can skip that step by using our pre-trained model.

To measure the success of grasping in simulation using our pre-trained model for pose estimation, we did 100 trials and got the following results:

                    Success   Failures   Percent Success
Without occlusion      82         5            94
With occlusion          7         6            54
All                    89        11            89

Note: Data for the above experiment was collected in Unity 2020.2.1f1.
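The Percent Success column is simply successes over total trials for each row, rounded to the nearest integer; a quick check in Python:

```python
def percent_success(successes: int, failures: int) -> int:
    """Success rate of a set of grasp trials, as a rounded percentage."""
    total = successes + failures
    return round(100 * successes / total)

# Rows of the table above: (successes, failures)
rows = {
    "Without occlusion": (82, 5),
    "With occlusion": (7, 6),
    "All": (89, 11),
}
rates = {name: percent_success(s, f) for name, (s, f) in rows.items()}
# rates == {"Without occlusion": 94, "With occlusion": 54, "All": 89}
```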


Part 4: Pick-and-Place

This part includes the preparation and setup necessary to run a pick-and-place task using MoveIt. Here, the cube pose is predicted by the trained deep learning model. Steps covered include:

  • Creating and invoking a motion planning service in ROS
  • Sending captured RGB images from our scene to the ROS Pose Estimation node for inference
  • Using a Python script to run inference on our trained deep learning model
  • Moving Unity Articulation Bodies based on a calculated trajectory
  • Controlling a gripping tool to successfully grasp and drop an object.
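One detail worth noting when feeding the model's output into the planner: the four orientation values come from an unconstrained regression head, so they should be normalized into a unit quaternion before being used in a pose goal. A minimal sketch (the helper name is illustrative, not this repository's exact code):

```python
import math
from typing import Tuple

def to_unit_quaternion(qx: float, qy: float, qz: float,
                       qw: float) -> Tuple[float, float, float, float]:
    """Normalize a raw 4-vector from the orientation head into a valid
    unit quaternion, as a pose goal requires."""
    norm = math.sqrt(qx * qx + qy * qy + qz * qz + qw * qw)
    if norm == 0.0:
        # Degenerate output: fall back to the identity rotation.
        return (0.0, 0.0, 0.0, 1.0)
    return (qx / norm, qy / norm, qz / norm, qw / norm)

# Raw network output is rarely unit-length; normalize before planning.
q = to_unit_quaternion(0.1, -0.2, 0.3, 2.0)
```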

Support

For general questions, feedback, or feature requests, connect directly with the Robotics team at [email protected].

For bugs or other issues, please file a GitHub issue and the Robotics team will investigate the issue as soon as possible.

More from Unity Robotics

Visit the Robotics Hub for more tutorials, tools, and information on robotics simulation in Unity!

License

Apache License 2.0

Comments
  • Do I have the possibility to run the training by google colab?


    Since I don't have a machine powerful enough to train the VGG16 CNN, I would like to run Part 3 of the tutorial on Google's GPUs. Is it possible to run it in that environment? If so, could you explain how?

    Thanks :)

    opened by RockStheff 14
  • Not compatible with latest perception sdk build 0.8.0-preview.3


    While importing the scene, PoseEstimationScenario.cs has errors:

    1. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(27,26): error CS0507: 'PoseEstimationScenario.isIterationComplete': cannot change access modifiers when overriding 'protected' inherited member 'ScenarioBase.isIterationComplete'
    2. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(28,26): error CS0507: 'PoseEstimationScenario.isScenarioComplete': cannot change access modifiers when overriding 'protected' inherited member 'ScenarioBase.isScenarioComplete'
    3. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(10,14): error CS0534: 'PoseEstimationScenario' does not implement inherited abstract member 'ScenarioBase.isScenarioReadyToStart.get'

    Modified the file to fix these errors and generated data. I noticed that metrics.json files are not being created, only captures.json. When starting training, it fails partway through while looking for metrics data. I had to revert to the 0.7.0-preview.2 build to regenerate data for training.

    Can this be updated to be compatible with the Perception 0.8.0-preview.3 (or latest) build? It is suspected that the previous builds had a bug that produced erroneous bounding box data, which led to poor training results for pose estimation.

    opened by arunabaijal 12
  • Problems in step 2 of the tutorial - Add and Set Up Randomizers


    Hi there, I was trying to follow the steps of the tutorial and came across an impasse. Specifically, in Part 2, when I search for the C# scripts to add them to the "Simulation Scenario" GameObject, the search bar shows "not found". At this stage:

    image

    I tried different versions of the Unity Editor and did not succeed. Starting from the "Domain Randomization" topic, some C# scripts are not recognized as components on a given GameObject. Could you steer me somehow? Thank you in advance.

    opened by RockStheff 6
  • fixed gpu error


    The #52 modification still causes the following error: "Error processing request: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first."

    So we need to convert the output tensor from GPU to CPU.
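    In code, the fix is to copy the tensor back to host memory before the NumPy conversion; a minimal sketch (tensor names are illustrative, not the script's exact variables):

```python
import torch

output = torch.tensor([[0.1, 0.2, 0.3]])  # stand-in for the model's output
# On a GPU machine this tensor would live on cuda:0; .detach().cpu()
# moves it to host memory so .numpy() succeeds on either device.
result = output.detach().cpu().numpy()
```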

    Note: If you use the Docker environment, please add the --gpus all option.

    docker run -it --rm --gpus all -p 10000:10000 -p 5005:5005 unity-robotics:pose-estimation /bin/bash
    

    You can also run the nvidia-smi command inside Docker to check whether the GPU is enabled.

    opened by adakoda 5
  • How to add custom messages to the ROS-Unity communication


    It would be great if you could briefly show us how to add custom ROS messages to the system. For example, I'm trying to stream camera images from Unity to ROS.

    opened by tensarflow 5
  • Pose Estimation not working correctly


    Describe the bug

    The pose estimation is not executed correctly. I get an error regarding model weights and input not being on the same device. When I change this line to this

        device = torch.device("cpu")
    

    it works fine.
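    Forcing the CPU works, but it gives up GPU inference; an alternative fix (a sketch, not the repository's exact code) is to move both the model and the input to the same device:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)  # stand-in for the pose-estimation model
image = torch.zeros(1, 4)      # stand-in for the preprocessed input

# Moving both to the same device avoids the
# "Input type ... and weight type ... should be the same" RuntimeError.
model = model.to(device)
image = image.to(device)
output = model(image)
```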

    To Reproduce

    Used the demo Unity project, and therefore did not do everything in the four READMEs.

    Console logs / stack traces

    [ERROR] [1640807467.034139]: Error processing request: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
    ['Traceback (most recent call last):\n', '  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_service.py", line 633, in _handle_request\n    response = convert_return_to_response(self.handler(request), self.response_class)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 96, in pose_estimation_main\n    est_position, est_rotation = _run_model(image_path)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 52, in _run_model\n    output = run_model_main(image_path, MODEL_PATH)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 138, in run_model_main\n    output_translation, output_orientation = model(torch.stack(image).reshape(-1, 3, 224, 224))\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 54, in forward\n    x = self.model_backbone(x)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torchvision/models/vgg.py", line 43, in forward\n    x = self.features(x)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 117, in forward\n    input = module(input)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 423, in 
forward\n    return self._conv_forward(input, self.weight)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 419, in _conv_forward\n    return F.conv2d(input, weight, self.bias, self.stride,\n', 'RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same\n']
    
    

    Expected behavior

    A working pose estimation.

    Environment (please complete the following information, where applicable):

    • Unity Version: Unity 2020.2.7f1 (the demo project was built with 2020.2.6f1, an older version)
    • Unity machine OS + version: Ubuntu 20.04
    • ROS machine OS + version: Ubuntu 20.04, ROS Noetic
    • ROS–Unity communication: I installed the ROS environment as described in Part 0
    • Package branches or versions: Version 0.8.0-preview.3 - March 24, 2021
    opened by tensarflow 5
  • Use TGS solver


    Proposed change(s)

    Ignore the collisions on the inner knuckles so that the TGS solver will work.

    Fix a bug related to Ubuntu package installation when building the docker image. [Issue]

    Types of change(s)

    • [x] Bug fix
    • [ ] New feature
    • [ ] Code refactor
    • [ ] Documentation update
    • [x] Other: enable use of the TGS solver

    Testing and Verification

    Tested the Pose Estimation Quick Demo with the TGS solver

    Test Configuration:

    • Unity Version: Unity 2020.2.6f1

    https://user-images.githubusercontent.com/56408141/120538136-f8b62a80-c39a-11eb-854d-b00a9acfc77e.mov

    Checklist

    • [x] Ensured this PR is up-to-date with the target branch
    • [x] Followed the style guidelines as described in the Contribution Guidelines
    • [x] Added tests that prove my fix is effective or that my feature works
    • [x] Updated the Changelog and described changes in the Unreleased section
    • [x] Updated the documentation as appropriate

    Other comments

    opened by peifeng-unity 5
  • Could NOT find ros_tcp_endpoint


    In Pick-and-Place with Object Pose Estimation: Quick Demo, Set Up the ROS Side, Step 2, running "docker build -t unity-robotics:pose-estimation -f docker/Dockerfile ." produces an error. What should I do? Thanks!

    E:\UnityProjects\2020\Robotics-Object-Pose-Estimation>docker build -t unity-robotics:pose-estimation -f docker/Dockerfile .
    [+] Building 14.2s (17/18)
     => [internal] load build definition from Dockerfile 0.1s
     => => transferring dockerfile: 1.41kB 0.0s
     => [internal] load .dockerignore 0.0s
     => => transferring context: 2B 0.0s
     => [internal] load metadata for docker.io/library/ros:noetic-ros-base 5.1s
     => [internal] load build context 2.0s
     => => transferring context: 110.51MB 1.9s
     => [ 1/14] FROM docker.io/library/ros:[email protected]:68085c6624824d5ad276450d21377d34dccdc75785707f244a9 0.0s
     => CACHED [ 2/14] RUN sudo apt-get update && sudo apt-get install -y vim iputils-ping net-tools python3-pip ros- 0.0s
     => CACHED [ 3/14] RUN sudo -H pip3 --no-cache-dir install rospkg numpy jsonpickle scipy easydict torch==1.7.1+cu 0.0s
     => CACHED [ 4/14] WORKDIR /catkin_ws 0.0s
     => CACHED [ 5/14] COPY ./ROS/src/moveit_msgs /catkin_ws/src/moveit_msgs 0.0s
     => CACHED [ 6/14] COPY ./ROS/src/robotiq /catkin_ws/src/robotiq 0.0s
     => CACHED [ 7/14] COPY ./ROS/src/ros_tcp_endpoint /catkin_ws/src/ros_tcp_endpoint 0.0s
     => CACHED [ 8/14] COPY ./ROS/src/universal_robot /catkin_ws/src/universal_robot 0.0s
     => [ 9/14] COPY ./ROS/src/ur3_moveit /catkin_ws/src/ur3_moveit 1.1s
     => [10/14] COPY ./docker/set-up-workspace /setup.sh 0.1s
     => [11/14] COPY docker/tutorial / 0.1s
     => [12/14] RUN /bin/bash -c "find /catkin_ws -type f -print0 | xargs -0 dos2unix" 1.0s
     => ERROR [13/14] RUN dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh 4.8s

    [13/14] RUN dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh:
    #17 0.402 dos2unix: converting file /tutorial to Unix format...
    #17 0.406 dos2unix: converting file /setup.sh to Unix format...
    #17 1.304 -- The C compiler identification is GNU 9.3.0
    #17 1.548 -- The CXX compiler identification is GNU 9.3.0
    #17 1.567 -- Check for working C compiler: /usr/bin/cc
    #17 1.694 -- Check for working C compiler: /usr/bin/cc -- works
    #17 1.696 -- Detecting C compiler ABI info
    #17 1.779 -- Detecting C compiler ABI info - done
    #17 1.799 -- Detecting C compile features
    #17 1.800 -- Detecting C compile features - done
    #17 1.806 -- Check for working CXX compiler: /usr/bin/c++
    #17 1.895 -- Check for working CXX compiler: /usr/bin/c++ -- works
    #17 1.897 -- Detecting CXX compiler ABI info
    #17 1.987 -- Detecting CXX compiler ABI info - done
    #17 2.007 -- Detecting CXX compile features
    #17 2.008 -- Detecting CXX compile features - done
    #17 2.376 -- Using CATKIN_DEVEL_PREFIX: /catkin_ws/devel
    #17 2.377 -- Using CMAKE_PREFIX_PATH: /opt/ros/noetic
    #17 2.377 -- This workspace overlays: /opt/ros/noetic
    #17 2.408 -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.5", minimum required is "3")
    #17 2.409 -- Using PYTHON_EXECUTABLE: /usr/bin/python3
    #17 2.409 -- Using Debian Python package layout
    #17 2.447 -- Found PY_em: /usr/lib/python3/dist-packages/em.py
    #17 2.447 -- Using empy: /usr/lib/python3/dist-packages/em.py
    #17 2.585 -- Using CATKIN_ENABLE_TESTING: ON
    #17 2.585 -- Call enable_testing()
    #17 2.588 -- Using CATKIN_TEST_RESULTS_DIR: /catkin_ws/build/test_results
    #17 3.003 -- Forcing gtest/gmock from source, though one was otherwise available.
    #17 3.003 -- Found gtest sources under '/usr/src/googletest': gtests will be built
    #17 3.003 -- Found gmock sources under '/usr/src/googletest': gmock will be built
    #17 3.033 -- Found PythonInterp: /usr/bin/python3 (found version "3.8.5")
    #17 3.036 -- Found Threads: TRUE
    #17 3.052 -- Using Python nosetests: /usr/bin/nosetests3
    #17 3.119 -- catkin 0.8.9
    #17 3.119 -- BUILD_SHARED_LIBS is on
    #17 3.289 -- BUILD_SHARED_LIBS is on
    #17 3.289 -- Using CATKIN_WHITELIST_PACKAGES: moveit_msgs;ros_tcp_endpoint;ur3_moveit;robotiq_2f_140_gripper_visualization;ur_description;ur_gazebo
    #17 4.211 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #17 4.211 -- ~~ traversing 1 packages in topological order:
    #17 4.211 -- ~~ - ur3_moveit
    #17 4.211 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #17 4.212 -- +++ processing catkin package: 'ur3_moveit'
    #17 4.212 -- ==> add_subdirectory(ur3_moveit)
    #17 4.771 -- Could NOT find ros_tcp_endpoint (missing: ros_tcp_endpoint_DIR)
    #17 4.771 -- Could not find the required component 'ros_tcp_endpoint'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
    #17 4.771 CMake Error at /opt/ros/noetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
    #17 4.771   Could not find a package configuration file provided by "ros_tcp_endpoint"
    #17 4.771   with any of the following names:
    #17 4.771
    #17 4.771     ros_tcp_endpointConfig.cmake
    #17 4.771     ros_tcp_endpoint-config.cmake
    #17 4.771
    #17 4.771   Add the installation prefix of "ros_tcp_endpoint" to CMAKE_PREFIX_PATH or
    #17 4.771   set "ros_tcp_endpoint_DIR" to a directory containing one of the above
    #17 4.771   files. If "ros_tcp_endpoint" provides a separate development package or
    #17 4.771   SDK, be sure it has been installed.
    #17 4.771 Call Stack (most recent call first):
    #17 4.771   ur3_moveit/CMakeLists.txt:13 (find_package)
    #17 4.771
    #17 4.772
    #17 4.775 -- Configuring incomplete, errors occurred!
    #17 4.775 See also "/catkin_ws/build/CMakeFiles/CMakeOutput.log".
    #17 4.775 See also "/catkin_ws/build/CMakeFiles/CMakeError.log".
    #17 4.782 Base path: /catkin_ws
    #17 4.782 Source space: /catkin_ws/src
    #17 4.782 Build space: /catkin_ws/build
    #17 4.782 Devel space: /catkin_ws/devel
    #17 4.782 Install space: /catkin_ws/install
    #17 4.782 Creating symlink "/catkin_ws/src/CMakeLists.txt" pointing to "/opt/ros/noetic/share/catkin/cmake/toplevel.cmake"
    #17 4.782 ####
    #17 4.782 #### Running command: "cmake /catkin_ws/src -DCATKIN_WHITELIST_PACKAGES=moveit_msgs;ros_tcp_endpoint;ur3_moveit;robotiq_2f_140_gripper_visualization;ur_description;ur_gazebo -DCATKIN_DEVEL_PREFIX=/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/catkin_ws/install -G Unix Makefiles" in "/catkin_ws/build"
    #17 4.782 ####
    #17 4.782 Invoking "cmake" failed

    executor failed running [/bin/sh -c dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh]: exit code: 1

    opened by JoSharon 5
  • System.Net.SocketException: Address already in use


    Hello Team,

    I'm getting the System.Net.SocketException: Address already in use error from the Unity console.

    The troubleshooting workaround of leaving the Override Unity IP Address blank and changing the ROS IP Address to the IP of the Docker container didn't fix the error.

    Docker IP configuration:

    [email protected]:/catkin_ws# ifconfig 
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.3  netmask 255.255.0.0  broadcast 172.17.255.255
            ether 02:42:ac:11:00:03  txqueuelen 0  (Ethernet)
            RX packets 179  bytes 24664 (24.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 61  bytes 4008 (4.0 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 53259  bytes 14479754 (14.4 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 53259  bytes 14479754 (14.4 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    Unity_IP_Configuration

    Regards, Jegathesan S

    opened by nullbyte91 5
  • Cube not rotating


    Hello, thank you for the very beneficial tutorial, I'm currently going through it. In part 2, I've followed the tutorial up to step 10 without errors. In step 10, the cube is overlaid with a green bounding box, however, it is not rotating. Any idea what could be the problem? I'm using Unity 2020.2.0f1

    The following is a screenshot of my editor. Screen Shot 2021-03-02 at 10 18 32 AM

    And if I continue to step 11, the same thing happens: the box moves to a position and then stops moving, as in the attached photo:

    Screen Shot 2021-03-02 at 10 37 34 AM
    opened by ZahraaBass 5
  • Error: arm/arm: Unable to sample any valid states for goal tree


    Hello there, I am trying to build the Robotics-Object-Pose-Estimation project on my local machine, but after running the ROS server and clicking the Pose Estimation button in Unity, it returns the error "Error: arm/arm: Unable to sample any valid states for goal tree". Any help? Thanks

    Console logs / stack traces

    [ERROR] [1663850579.670529300]: arm/arm: Unable to sample any valid states for goal tree

    Screenshots

    Screenshot (1)

    Environment (please complete the following information, where applicable):

    • Unity Version: [e.g. Unity 2021.3.9f1]
    • Unity machine OS + version: [e.g. Windows 11]
    • ROS machine OS + version: [e.g. Ubuntu 18.04, ROS Noetic]
    • ROS–Unity communication: [e.g. Docker]
    • Package branches or versions: [e.g. [email protected]]
    opened by waedbara 4
  • ROS failed when I changed the camera rotation


    Describe the bug

    MicrosoftTeams-image

    To Reproduce

    Steps to reproduce the behavior: just change the camera rotation as shown; the default value is 20. MicrosoftTeams-image (1)

    Additional context

    I don't know why it works fine with the default camera, but when I change its rotation, it fails.

    opened by BaoLocPham 0
  • Problems when building docker image


    I am getting this error when building the Docker image, on both Windows and Ubuntu. I am attaching a screenshot of the error. I have followed all the steps.

    Screenshot 2022-11-20 at 11 12 52 AM

    any suggestion on how to solve this issue?

    opened by dipinoch 0
  • A lot of pick-up errors


    Hi,

    In my build, the robot almost never succeeds in picking up the cube. Even though I get the shell message "You can start planning", I've noticed three errors in the Docker workspace:

    1. [controller_spawner-3]
    2. [ERROR] [1650563249.826889700]: Could not find the planner configuration 'None' on the param server
    3. [ERROR] [1650563266.917313200]: Action client not connected: /follow_joint_trajectory

    Are any of these possibly related?

    Thank you very much for your time.

    opened by andrecavalcante 1
  • The Cube label for data collection is misplaced in a weird way


    Describe the bug

    The Cube label is misplaced in a weird way.

    To Reproduce

    Steps to reproduce the behavior:

    Just running a Demo project with the Perception camera turned on (was trying to collect images for model training).

    Screenshots

    Screenshot 2022-01-19 at 02 00 33

    Environment:

    • Unity Version: e.g. Unity 2020.2.6f1 (As suggested)
    • Unity machine OS + version: MacOS 12.1
    • ROS machine OS + version: As suggested
    • ROS–Unity communication: Docker
    • Package branches or versions: As suggested
    stale 
    opened by nkdchck 5
Releases

v0.0.1

Owner

Unity Technologies