MetaDrive: Composing Diverse Scenarios for Generalizable Reinforcement Learning

Overview




MetaDrive is a driving simulator with the following key features:

  • Compositional: It supports generating an infinite number of scenes with various road maps and traffic settings for research on generalizable RL (see the sketch after this list).
  • Lightweight: It is easy to install and run, reaching up to 300 FPS on a standard PC.
  • Realistic: Accurate physics simulation and multiple sensory inputs, including Lidar, RGB images, top-down semantic maps, and first-person view images.
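
The compositional scene generation is exposed through the environment config. Below is a minimal sketch of sampling many distinct scenes; environment_num also appears in the Gym example later on this page, while start_seed, map, and traffic_density are assumed config keys shown here only for illustration.

import metadrive

# A minimal sketch of compositional scene sampling, not the definitive API.
env = metadrive.MetaDriveEnv(config=dict(
    environment_num=100,  # number of procedurally generated scenes to sample from
    start_seed=0,         # assumed: seeds in [start_seed, start_seed + environment_num) select scenes
    map=7,                # assumed: number of road blocks composing each map
    traffic_density=0.1,  # assumed: density of traffic vehicles
))
obs = env.reset()  # each reset may load a different generated scene
env.close()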

🛠 Quick Start

Install MetaDrive via:

git clone https://github.com/decisionforce/metadrive.git
cd metadrive
pip install -e .

or

pip install metadrive-simulator

Note that the program is tested on both Linux and Windows. Some control and display issues on macOS remain to be resolved.

You can verify the installation of MetaDrive by running the testing script:

# Run this in a folder that does not contain a sub-folder named metadrive
python -m metadrive.examples.profile_metadrive

Note: please do not run the above command in a folder that contains a sub-folder named ./metadrive.

🚕 Examples

Run the following command to launch a simple driving scenario with auto-drive mode on. Press W, A, S, D to drive the vehicle manually.

python -m metadrive.examples.drive_in_single_agent_env

Run the following command to launch a safe-driving scenario, which includes more complex obstacles and yields a cost signal in addition to the reward.

python -m metadrive.examples.drive_in_safe_metadrive_env

You can also launch an instance of the multi-agent scenario as follows:

python -m metadrive.examples.drive_in_multi_agent_env --env roundabout

or launch and render it in the Pygame front end:

python -m metadrive.examples.drive_in_multi_agent_env --pygame_render --env roundabout

The --env argument can be one of the following (a scripted construction of these environments is sketched after the list):

  • roundabout (default)
  • intersection
  • tollgate
  • bottleneck
  • parkinglot
  • pgmap
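
For reference, these multi-agent environments can also be constructed directly from their classes in a script. The sketch below is a minimal illustration: the import path and the MultiAgentRoundaboutEnv name are assumptions based on the example scripts, while the num_agents key and the per-agent dict interface follow the usage shown in the comments section below.

# A minimal sketch; the import path and class name are assumed.
from metadrive.envs.marl_envs import MultiAgentRoundaboutEnv

env = MultiAgentRoundaboutEnv(dict(num_agents=4))
obs = env.reset()                                     # a dict keyed by "agent0", "agent1", ...
actions = {agent_id: [0.0, 0.1] for agent_id in obs}  # [steering, throttle] per agent
obs, rewards, dones, infos = env.step(actions)        # each return value is a per-agent dict
env.close()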

Run the procedural generation example to generate a new map:

python -m metadrive.examples.procedural_generation

Note that the above four scripts cannot be run on a headless machine. Please refer to the installation guide in the documentation for more information on running MetaDrive on a headless machine.

Run the following command to draw the generated maps from procedural generation:

python -m metadrive.examples.draw_maps

To build the RL environment in a Python script, you can simply follow the OpenAI Gym format:

import metadrive  # Import this package to register the environment!
import gym

env = gym.make("MetaDrive-v0", config=dict(use_render=True))
# env = metadrive.MetaDriveEnv(config=dict(environment_num=100))  # Or build environment from class
env.reset()
for i in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # Use random policy
    env.render()
    if done:
        env.reset()
env.close()

🏫 Documentation

Find more details in the MetaDrive documentation.

📎 References

Work in progress!


Comments
  • Reproducibility Problem


    Hello,

    I am trying to create custom scenarios. For that, I created a custom map similar to vis_a_small_town.py, and I am using the drive_in_multi_agent_env.py example. The environment is defined as follows:

    env = envs[env_cls_name](
            {
                "use_render": True, # if not args.pygame_render else False,
                "manual_control": True,
                "crash_done": False,
                #"agent_policy": ManualControllableIDMPolicy, 
                "num_agents": total_agent_number,
                #"prefer_track_agent": "agent3",
                "show_fps": True, 
                "vehicle_config": {
                    "lidar": {"num_others": total_agent_number},
                    "show_lidar": False,    
                },
                "target_vehicle_configs": {"agent{}".format(i): {
                        #"spawn_lateral": i * 2,
                        "spawn_longitude": i * 10,
                        #"spawn_lane_index":0,
                        "vehicle_model": vehicle_model_list[i],
                        #"max_engine_force":1,
                        "max_speed":100,
                    }
                                               for i in range(5)}
            }
        )
    

    I have a list that consists of steering and throttle_brake values. The scenario consists of almost 1600 steps. I assign these values according to the step number in env.step:

    o, r, d, info=env.step({
                    'agent0': [agent0.steering, agent0.pedal], 
                    'agent1': [agent1.steering, agent1.pedal],
                    'agent2': [agent2.steering, agent2.pedal],
                    'agent3': [agent3.steering, agent3.pedal],
                    'agent4': [agent4.steering, agent4.pedal],
                    'agent5': [agent5.steering, agent5.pedal],
                    'agent6': [agent6.steering, agent6.pedal],
                    'agent7': [agent7.steering, agent7.pedal],
                    'agent8': [agent8.steering, agent8.pedal],
                    'agent9': [agent9.steering, agent9.pedal],
                    }
                )
    

    At the end of the vehicle-command list, the loop counter is reset to zero and the commands are reused. Also, the vehicles' locations, speeds, and headings are reset to the initial values stored in a dictionary:

    def initialize_vehicles(env):
        global total_agent_number

        for i in range(total_agent_number):
            agent_str = "agent" + str(i)
            env.vehicles[agent_str].set_heading_theta(vehicles_initial_values[agent_str]['initial_heading_theta'])
            env.vehicles[agent_str].set_position([vehicles_initial_values[agent_str]['initial_position_x'], vehicles_initial_values[agent_str]['initial_position_y']])  # x, y from the first block of the map
            env.vehicles[agent_str].set_velocity(env.vehicles[agent_str].velocity_direction, vehicles_initial_values[agent_str]['initial_velocity'])
    

    I want to reproduce the scenario and test my main algorithm. However, the problem is that the vehicles do not act the same way in every run of the scenario. I checked my vehicle commands using:

    env.vehicles["agent0"].steering,env.vehicles["agent0"].throttle_brake

    The vehicle commands are the same for each repetition of the scenario.

    When I don't use a loop and start MetaDrive from the terminal, I mostly see the same actions from the cars. I tested almost 10 times. But in the loop case, the cars start to act differently after the first loop.

    Reproducibility is a huge concern for me. Is it something about the physics engine? Are there any configuration parameters for the engine?
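
    For example, what I am hoping for is a fully seeded restart, something like the sketch below (force_seed is my guess at the kind of API I am looking for, not a confirmed parameter):

    # A sketch of the behavior I want; `force_seed` is an assumed reset argument.
    obs = env.reset(force_seed=0)  # restart the exact same scene deterministically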

    Thanks!!

    opened by BedirhanKeskin 9
  • Add more description for Waymo dataset


    What changes do you make in this PR?

    • Please describe why you create this PR

    Checklist

    • [ ] I have merged the latest main branch into current branch.
    • [ ] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 6
  • Constant FPS mode

    Is there a way to set a constant-FPS mode? I tried env.engine.force_fps.toggle(): env.engine.force_fps.fps then shows 50, but the visualization shows 10-16 FPS in the top-right corner. Is there any other way? Thanks in advance!

    opened by bbenja 6
  • What is neighbours_distance?

    Hello, what is neighbours_distance, and how does it differ from the distance defined inside the Lidar config? Both are inside MULTI_AGENT_METADRIVE_DEFAULT_CONFIG. I guess the unit is meters?

    opened by BedirhanKeskin 5
  • I encountered an error at an unknown location during runtime

    Hello,

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0'].
    Known pipe types:
      wglGraphicsPipe
    (all display modules loaded.)

    opened by shushushulian 4
  • about panda3d

    When I run python -m metadrive.examples.drive_in_safe_metadrive_env with use_render=True, the output is:

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0'].
    Known pipe types:
      glxGraphicsPipe
    (1 aux display modules not yet loaded.)

    opened by benicioolee 4
  • RGB Camera returns time-buffered grayscale images


    Hi, I am running a vanilla MetaDriveEnv with the rgb camera sensor.

    veh_config = dict(
        image_source="rgb_camera",
        rgb_camera=(IMG_DIM, IMG_DIM))
    

    I wanted to see the images the sensor was producing, so I saved a few of them using PIL:

    from PIL import Image
    import numpy as np

    action = np.array([0, 0])
    obs, reward, done, info = env.step(action)
    img = Image.fromarray(np.array(obs['image'] * 256, np.uint8))
    img.save("test.jpeg")
    

    I noticed that the images all looked grayscale, and upon further inspection I found the following behavior. Suppose we want (N,N) images, which should be represented as arrays of size (N,N,3). Then:

    Step 0: image[:,:,0] = zeros(N,N); image[:,:,1] = zeros(N,N); image[:,:,2] = zeros(N,N)
    Step 1: image[:,:,0] = zeros(N,N); image[:,:,1] = zeros(N,N); image[:,:,2] = m1
    Step 2: image[:,:,0] = zeros(N,N); image[:,:,1] = m1; image[:,:,2] = m2
    Step 3: image[:,:,0] = m1; image[:,:,1] = m2; image[:,:,2] = m3

    where m1, m2, m3 are (N,N) matrices.

    So, the images are in reality displaying 3 different timesteps, with the color channels carrying the time information (R=t-2, G=t-1, B=t). That is why the images look mostly gray: the values are identical almost everywhere, except around movement (contours), where the lines look colorful and strange.
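
    For what it's worth, under the stacking behavior described above the most recent frame can be recovered from the last channel. A minimal sketch, assuming obs['image'] holds values in [0, 1] as in my saving code above:

    import numpy as np
    from PIL import Image

    # Sketch assuming the channels hold frames (t-2, t-1, t):
    latest = obs['image'][:, :, -1]                      # most recent grayscale frame
    rgb_like = np.repeat(latest[:, :, None], 3, axis=2)  # replicate to 3 channels for saving
    Image.fromarray((rgb_like * 255).astype(np.uint8)).save("latest.jpeg")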

    Apologies if this is expected behavior, and I just had some configuration incorrect.


    opened by EdAlexAguilar 4
  • Fix close and reset issue


    What changes do you make in this PR?

    • Please describe why you create this PR

    close #191

    Checklist

    • [x] I have merged the latest main branch into current branch.
    • [x] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 4
  • Selection of parameter in Rllib training for SAC agent in MetaDriveEnv and SafeMetaDriveEnv


    What is the proper buffer size / batch size / entropy coefficient for SAC to reproduce the results? I find it hard to reproduce the results in SafeMetaDriveEnv. In https://arxiv.org/pdf/2109.12674.pdf, does the reported success rate of SAC in Table 1 refer to the training success rate (and no collision, i.e. safe_rl_env=True)?

    opened by HenryLHH 4
  • Suggestion to run multiple instance in parallel?


    First of all, I would like to express my gratitude for this great project. I really like the feature-rich and lightweight nature of MetaDrive as a driving simulator for reinforcement learning.

    I am wondering what the recommended way is to run multiple MetaDrive instances in parallel (each with a single ego-car agent)? This seems to be a common use case for reinforcement learning training. I am currently running a batch of MetaDrive simulators, each wrapped in its own process, which does seem to incur the overhead of extra resources and communication/synchronization.
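
    Concretely, my current per-process setup looks roughly like the sketch below (simplified; the worker function and pool size are only illustrative):

    import multiprocessing as mp

    import gym
    import metadrive  # noqa: F401  (importing registers the MetaDrive-v0 environment)

    def rollout(worker_id, num_steps=100):
        # Each worker constructs its own MetaDrive instance inside its own process.
        env = gym.make("MetaDrive-v0")
        env.reset()
        total_reward = 0.0
        for _ in range(num_steps):
            _, reward, done, _ = env.step(env.action_space.sample())
            total_reward += reward
            if done:
                env.reset()
        env.close()
        return total_reward

    if __name__ == "__main__":
        with mp.Pool(4) as pool:  # e.g. 4 parallel simulator instances
            returns = pool.map(rollout, range(4))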

    Another problem I encountered when running multiple instances (say, 60 on a single machine), each in its own process, is that I get a lot of warnings like this:

    ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection terminated

    I guess this has something to do with audio. It happens even though I am running the top-down environment, which should not involve sound. Do you see these warnings when running multiple instances as well?

    Also, is there a plan to have a vectorized batch version?

    Thanks!

    opened by breakds 4
  • Rendering FPS of example script is too low


    When I run python -m metadrive.examples.drive_in_single_agent_env, the frame rate is only about 4 FPS. I used the nvidia-smi command and found that my RTX 2060 GPU was not being utilized.

    I can also see some warnings: WARNING:root: It seems you don't install our cython utilities yet! Please reinstall MetaDrive via .........

    opened by feidieufo 4
  • Errors when running metadrive.tests.scripts.generate_video_for_image_obs


    In the metadrive directory, I ran python -m metadrive.tests.scripts.generate_video_for_image_obs, and it reported the error below:

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0'].
    :display(warning): Unable to load libpandagles2.so: No error.
    Known pipe types:
    (all display modules loaded.)
    Traceback (most recent call last):
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/queenie/Documents/metadrive/metadrive/tests/scripts/generate_video_for_image_obs.py", line 157, in <module>
        env.reset()
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 333, in reset
        self.lazy_init()  # it only works the first time when reset() is called to avoid the error when render
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 234, in lazy_init
        self.engine = initialize_engine(self.config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/engine_utils.py", line 11, in initialize_engine
        cls.singleton = cls(env_global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/base_engine.py", line 28, in __init__
        EngineCore.__init__(self, global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/core/engine_core.py", line 135, in __init__
        super(EngineCore, self).__init__(windowType=self.mode)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 339, in __init__
        self.openDefaultWindow(startDirect=False, props=props)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1021, in openDefaultWindow
        self.openMainWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1056, in openMainWindow
        self.openWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 766, in openWindow
        win = func()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 752, in <lambda>
        callbackWindowDict=callbackWindowDict)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 818, in _doOpenWindow
        self.makeDefaultPipe()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 648, in makeDefaultPipe
        "No graphics pipe is available!\n"
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/directnotify/Notifier.py", line 130, in error
        raise exception(errorString)
    Exception: No graphics pipe is available!
    Your Config.prc file must name at least one valid panda display library via load-display or aux-display.

    opened by YouSonicAI 2
  • opencv-python-headless in requirements seems to create conflict?


    It sometimes has a different version from opencv-python, which can cause issues. It is only used in top-down rendering. Can we change this dependency to opencv-python?

    opened by pengzhenghao 0