An offline deep reinforcement learning library

Overview

d3rlpy: An offline deep reinforcement learning library


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy

dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SAC()

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control
actions = sac.predict(x)  # x: a batch of observations as a NumPy array

key features

⚡ Most Practical RL Library Ever

  • offline RL: d3rlpy supports state-of-the-art offline RL algorithms. Offline RL is extremely powerful when online interaction is not feasible during training (e.g. robotics, medicine).
  • online RL: d3rlpy also supports conventional state-of-the-art online training algorithms without any compromise, which means you can solve any kind of RL problem with d3rlpy alone.
  • advanced engineering: d3rlpy is designed for fast and efficient training. For example, you can train on Atari environments with 4x less memory and as fast as the fastest RL libraries.

🔰 Easy-To-Use API

  • zero knowledge of DL libraries: d3rlpy provides many state-of-the-art algorithms through intuitive APIs. You can become an RL engineer even without knowing how to use deep learning libraries.
  • scikit-learn compatibility: d3rlpy is not only easy to use, but also compatible with the scikit-learn API, which means you can maximize your productivity with scikit-learn's useful utilities (see the sketch after this list).
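
A minimal sketch of that scikit-learn-style workflow (it mirrors the Atari example below; calling a scorer directly as scorer(algo, episodes) is an assumed convention, not shown in this README):

import d3rlpy
from sklearn.model_selection import train_test_split

# prepare dataset and split it with a scikit-learn utility
dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")
train_episodes, test_episodes = train_test_split(dataset, test_size=0.2)

# prepare algorithm and train with scorer-style evaluation metrics
sac = d3rlpy.algos.SAC()
sac.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=1,
        scorers={'td_error': d3rlpy.metrics.td_error_scorer})

# scorers can also be called directly, scikit-learn style (assumed convention)
td_error = d3rlpy.metrics.td_error_scorer(sac, test_episodes)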

🚀 Beyond State-Of-The-Art

  • distributional Q function: d3rlpy is the first library to support distributional Q functions in all algorithms. The distributional Q function is known to be a very powerful method for achieving state-of-the-art performance.
  • many tweak options: d3rlpy is also the first to support N-step TD backup and ensemble value functions in all algorithms, which can take you to places no one has reached yet (see the sketch after this list).
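
A hedged sketch of combining these options on a single algorithm: q_func_factory='qr' appears in the Atari example below, while the n_steps and n_critics parameter names are assumptions not shown in this README.

import d3rlpy

cql = d3rlpy.algos.CQL(
    q_func_factory='qr',  # distributional (quantile regression) Q function
    n_steps=3,            # N-step TD backup (assumed parameter name)
    n_critics=2,          # ensemble of value functions (assumed parameter name)
)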

installation

d3rlpy supports Linux, macOS and Windows.

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install -c conda-forge d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash

supported algorithms

| algorithm | discrete control | continuous control | offline RL? |
|---|---|---|---|
| Behavior Cloning (supervised learning) | ✅ | ✅ | |
| Deep Q-Network (DQN) | ✅ | ⛔ | |
| Double DQN | ✅ | ⛔ | |
| Deep Deterministic Policy Gradients (DDPG) | ⛔ | ✅ | |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | ⛔ | ✅ | |
| Soft Actor-Critic (SAC) | ✅ | ✅ | |
| Batch Constrained Q-learning (BCQ) | ✅ | ✅ | ✅ |
| Bootstrapping Error Accumulation Reduction (BEAR) | ⛔ | ✅ | ✅ |
| Advantage-Weighted Regression (AWR) | ✅ | ✅ | ✅ |
| Conservative Q-Learning (CQL) | ✅ | ✅ | ✅ |
| Advantage Weighted Actor-Critic (AWAC) | ⛔ | ✅ | ✅ |
| Critic Regularized Regression (CRR) | ⛔ | ✅ | ✅ |
| Policy in Latent Action Space (PLAS) | ⛔ | ✅ | ✅ |
| TD3+BC | ⛔ | ✅ | ✅ |

supported Q functions

other features

Basically, all features are available with every algorithm.

  • evaluation metrics in a scikit-learn scorer function style
  • export of the greedy policy as TorchScript or ONNX (see the sketch after this list)
  • parallel cross-validation with multiple GPUs
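
A minimal sketch of exporting the greedy policy; the save_policy method name and the extension-based format selection are assumptions, not confirmed by this README.

import d3rlpy

dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

sac = d3rlpy.algos.SAC()
sac.fit(dataset, n_steps=10000)

# export the greedy policy for deployment (assumed API)
sac.save_policy("policy.pt")    # TorchScript
sac.save_policy("policy.onnx")  # ONNX, format inferred from the extension (assumed)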

experimental features

benchmark results

d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.

examples

MuJoCo

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# train
cql.fit(dataset,
        eval_episodes=dataset,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer
        })

See more datasets at d4rl.
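
A minimal sketch of saving and reloading trained parameters; save_model is assumed here, while build_with_env and load_model follow the fine-tuning script quoted in the comments below.

import d3rlpy

dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# train and save the weights
cql = d3rlpy.algos.CQL(use_gpu=True)
cql.fit(dataset, n_steps=10000)
cql.save_model('cql_hopper.pt')  # assumed counterpart of load_model

# later: rebuild the networks from the environment spec, then load the weights
new_cql = d3rlpy.algos.CQL(use_gpu=True)
new_cql.build_with_env(env)
new_cql.load_model('cql_hopper.pt')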

Atari 2600

import d3rlpy
from sklearn.model_selection import train_test_split

# prepare dataset
dataset, env = d3rlpy.datasets.get_atari('breakout-expert-v0')

# split dataset
train_episodes, test_episodes = train_test_split(dataset, test_size=0.1)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQL(n_frames=4, q_func_factory='qr', scaler='pixel', use_gpu=True)

# start training
cql.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer
        })

See more Atari datasets at d4rl-atari.

PyBullet

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_pybullet('hopper-bullet-mixed-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# start training
cql.fit(dataset,
        eval_episodes=dataset,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer
        })

See more PyBullet datasets at d4rl-pybullet.

Online Training

import d3rlpy
import gym

# prepare environment
env = gym.make('HopperBulletEnv-v0')
eval_env = gym.make('HopperBulletEnv-v0')

# prepare algorithm
sac = d3rlpy.algos.SAC(use_gpu=True)

# prepare replay buffer
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
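
Exploration during online training can be customized by passing an explorer to fit_online. The sketch below follows the ConstantEpsilonGreedy usage from the fine-tuning script quoted in the comments further down; the CartPole/DQN combination is substituted purely for illustration.

import d3rlpy
import gym

# prepare a discrete-action environment
env = gym.make('CartPole-v0')

# prepare algorithm, replay buffer and an epsilon-greedy explorer
dqn = d3rlpy.algos.DQN()
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=100000, env=env)
explorer = d3rlpy.online.explorers.ConstantEpsilonGreedy(0.1)

# start training with exploration
dqn.fit_online(env, buffer, explorer, n_steps=100000)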

tutorials

Try a cartpole example on Google Colaboratory!

  • offline RL tutorial: Open In Colab
  • online RL tutorial: Open In Colab

contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

The release planning can be checked at milestones.

community

| Channel | Link |
|---|---|
| Chat | Gitter |
| Issues | GitHub Issues |

family projects

| Project | Description |
|---|---|
| d4rl-pybullet | Offline RL datasets of PyBullet tasks |
| d4rl-atari | A d4rl-style library of Google's Atari 2600 datasets |
| MINERVA | An out-of-the-box GUI tool for offline RL |

roadmap

The roadmap to the future release is available in ROADMAP.md.

citation

The paper is available here.

@InProceedings{seno2021d3rlpy,
  author = {Takuma Seno and Michita Imai},
  title = {d3rlpy: An Offline Deep Reinforcement Learning Library},
  booktitle = {NeurIPS 2021 Offline Reinforcement Learning Workshop},
  month = {December},
  year = {2021}
}

acknowledgement

This work is supported by Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program) in the fiscal year 2020.

Comments
  • Problem with loading trained model

    Problem with loading trained model

    I am trying to load a trained model with CQL.load_model(..full model [path). I first got an error that fname is missing, so I tried fname=..full_model_path. I then got an error that self is missing, so I added self. It still doesn't load the model: no attribute 'impl' ...

    bug 
    opened by hn2 21
  • Question regarding plotting Cumulative Reward graph on Tensorboard

    Question regarding plotting Cumulative Reward graph on Tensorboard

    I really enjoyed working with this repo. Thank you very much for your great work! I was just wondering how to plot the cumulative reward on Tensorboard for the deep Q-network algorithm.

    Thank you again!

    enhancement 
    opened by ajam74001 14
  • [BUG] gaussian likelihood computation

    [BUG] gaussian likelihood computation

    In dynamics.py:

        def _gaussian_likelihood(
            x: torch.Tensor, mu: torch.Tensor, logstd: torch.Tensor
        ) -> torch.Tensor:
            inv_std = torch.exp(-logstd)
            return (((mu - x) ** 2) * inv_std).mean(dim=1, keepdim=True)

    I think it should be:

        def _gaussian_likelihood(
            x: torch.Tensor, mu: torch.Tensor, logstd: torch.Tensor
        ) -> torch.Tensor:
            inv_std = torch.exp(-logstd)
            return 0.5 * (((mu - x) ** 2) * (inv_std ** 2)).sum(dim=1, keepdim=True)

    (A screenshot of the derivation was attached to the issue.)
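
    A quick, generic numerical check of the quadratic term of the Gaussian log-density (not d3rlpy code), which is where the 0.5 factor and the squared inverse standard deviation come from:

        import numpy as np
        from scipy.stats import norm

        x, mu, logstd = 1.3, 0.4, -0.2
        std = np.exp(logstd)

        # reference log-density from scipy
        ref = norm.logpdf(x, loc=mu, scale=std)

        # manual decomposition: quadratic term + normalization terms
        manual = -0.5 * ((x - mu) / std) ** 2 - logstd - 0.5 * np.log(2 * np.pi)

        assert np.isclose(ref, manual)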

    bug 
    opened by tominku 14
  • d4rlpy MDPDataset

    d4rlpy MDPDataset

    Hi @takuseno, firstly thanks a lot for such a high-quality repo for offline RL. I have a question about the method get_d4rl(): why are the rewards all shifted by one step?

        while cursor < dataset_size:
            # collect data for step=t
            observation = dataset["observations"][cursor]
            action = dataset["actions"][cursor]
            if episode_step == 0:
                reward = 0.0
            else:
                reward = dataset["rewards"][cursor - 1]

    Looking forward to your feedback.

    opened by cclvr 14
  • [BUG] Final observation not stored

    [BUG] Final observation not stored

    Hello,

    Describe the bug It seems that the final observation is not stored in the Episode object.

    Looking at the code, if an episode is only one step long, the Episode object should store:

    • initial observation
    • action, reward
    • final observation

    But it seems that the observations array has the same length as the actions and rewards arrays, which probably means that the final observation is not stored.

    Note: this would probably require some changes later on in the code as no action is taken after the final observation.

    Additional context The way it is handled in SB3, for instance, is to have a separate array that stores the next observation. Special treatment is also needed when using multiple envs at the same time, since they may reset automatically.

    See https://github.com/DLR-RM/stable-baselines3/blob/503425932f5dc59880f854c4f0db3255a3aa8c1e/stable_baselines3/common/off_policy_algorithm.py#L488 and https://github.com/DLR-RM/stable-baselines3/blob/503425932f5dc59880f854c4f0db3255a3aa8c1e/stable_baselines3/common/buffers.py#L267 (when using only one array)
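
    A generic illustration (not d3rlpy's or SB3's implementation) of the separate next-observation array mentioned above:

        import numpy as np

        class SimpleBuffer:
            """Stores s, a, r, s' with observations and next_observations kept separately."""

            def __init__(self):
                self.observations, self.actions = [], []
                self.rewards, self.next_observations = [], []

            def add(self, obs, action, reward, next_obs):
                self.observations.append(np.asarray(obs))
                self.actions.append(np.asarray(action))
                self.rewards.append(float(reward))
                self.next_observations.append(np.asarray(next_obs))

        # even a one-step episode keeps both the initial and the final observation
        buffer = SimpleBuffer()
        buffer.add(obs=[0.0], action=[1.0], reward=1.0, next_obs=[0.5])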

    cc @megan-klaiber

    bug 
    opened by araffin 12
  • ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    I get this error when loading a trained model. What does it mean?

    ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    bug 
    opened by hn2 11
  • [REQUEST] Save model less frequently than metrics

    [REQUEST] Save model less frequently than metrics

    Hello, when running fit_online I'd like to be able to save the metrics regularly (e.g., once every episode, which is 200 timesteps for the pendulum environment) without having to save the model .pt files at the same high frequency (because the model files are quite large).

    Put another way, I'd like to be able to write data to the evaluation.csv file without having to write a model_?????.pt file every time.

    I can't see how this is possible in the current code. If it's not possible, I'd like to request it as a feature. Thanks!

    enhancement 
    opened by pstansell 11
  • How to switch batch size during training?

    How to switch batch size during training?

    @takuseno, firstly thanks a lot for your clear and complete code base for offline RL. Recently I have been trying to build new algorithms on top of this code base, and I want to switch the batch size during the training process, but I don't know how to modify it with the smallest changes. Could you give me some clues? Looking forward to your reply.

    opened by cclvr 10
  • [REQUEST] Run time benchmarks,

    [REQUEST] Run time benchmarks,

    Hello dear @takuseno, thank you very much for sharing this amazing library. I am training CQL and DQN models for Breakout (Atari) on a V100 GPU. However, the training is very slow (it takes a day to run 50 episodes). I was wondering if you have benchmarks for run times?

    enhancement 
    opened by ajam74001 9
  • NaN in Predictions while online finetune

    NaN in Predictions while online finetune

    Hi @takuseno, first of all thanks again for your awesome work. I was able to train my agent in a custom environment with your help and have already increased the performance significantly! Nevertheless, I wanted to fine-tune the agent in an online environment. Unfortunately, this worked for only somewhere between 500 and 1000 steps (not fixed, seems arbitrary) until I get an AssertionError because NaN values are predicted. I get the following trace. Any idea where I could look / what to fix?

    Exception has occurred: ValueError       (note: full exception trace is shown but execution is paused at: _run_module_as_main)
    Expected parameter loc (Tensor of shape (1, 4)) of distribution Normal(loc: torch.Size([1, 4]), scale: torch.Size([1, 4])) to satisfy the constraint Real(), but found invalid values:
    tensor([[nan, nan, nan, nan]])
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/torch/distributions/distribution.py", line 55, in __init__
        raise ValueError(
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/torch/distributions/normal.py", line 54, in __init__
        super(Normal, self).__init__(batch_shape, validate_args=validate_args)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/models/torch/distributions.py", line 99, in __init__
        self._dist = Normal(self._mean, self._std)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/models/torch/policies.py", line 175, in dist
        return SquashedGaussianDistribution(mu, clipped_logstd.exp())
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/models/torch/policies.py", line 189, in forward
        dist = self.dist(x)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/models/torch/policies.py", line 245, in best_action
        action = self.forward(x, deterministic=True, with_log_prob=False)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/algos/torch/ddpg_impl.py", line 195, in _predict_best_action
        return self._policy.best_action(x)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/algos/torch/base.py", line 58, in predict_best_action
        action = self._predict_best_action(x)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/torch_utility.py", line 295, in wrapper
        return f(self, *tensors, **kwargs)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/torch_utility.py", line 305, in wrapper
        return f(self, *args, **kwargs)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/algos/base.py", line 127, in predict
        return self._impl.predict_best_action(x)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/online/explorers.py", line 50, in sample
        greedy_actions = algo.predict(x)
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/online/iterators.py", line 212, in train_single_env
        action = explorer.sample(algo, x, total_step)[0]
      File "/home/user/ws/d3/.venv/lib/python3.10/site-packages/d3rlpy/algos/base.py", line 251, in fit_online
        train_single_env(
      File "/home/user/ws/d3/simulation/examples/tune_d3rlpy.py", line 78, in <module>
        cql.fit_online(env, buffer, explorer, n_steps=1000)
      File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame)
        return _run_code(code, main_globals, None,
    

    I used the following script to initiate fine-tuning:

    cql = d3rlpy.algos.CQL(use_gpu=False, action_scaler=action_scaler, scaler=scaler)
    cql.build_with_env(env)
    cql.load_model("model_43596.pt")
    
    buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=100000, env=env)
    explorer = d3rlpy.online.explorers.ConstantEpsilonGreedy(0.1)
    cql.fit_online(env, buffer, explorer, n_steps=1000)
    
    opened by lettersfromfelix 9
  • Create a Generator version of fit as fitter

    Create a Generator version of fit as fitter

    This is just to start studying the change and discuss about it

    This provides many benefits such as monitoring, live changes to algo params etc

    This will also alleviate the need for doing complicated hierarchies of Callbacks mechanisms that are easier to solve with iterators and generators.

    At least for me it is very useful to have direct access to metrics, to have direct access to the algo object to change and query things every epoch and adjust things interactively instead of a programmatic callback way.

    opened by jamartinh 9
  • loss=nan

    loss=nan

    Hello, I'm trying to run offline RL where the state is formed by 75 or 100 variables (sampled from a Bayesian network). The collected samples are in a data frame called "data", and I run the following.

        import numpy as np
        import d3rlpy
        from d3rlpy.dataset import MDPDataset
        from sklearn.model_selection import train_test_split
        # (the scorer functions come from d3rlpy.metrics; "data" and "actions"
        #  are defined earlier in the script)

        observations_dwh = data[['disease', 'weight', 'heartattack']].to_numpy()
        rewards = data['variable74']

        m = len(actions)
        terminals = np.repeat(1, m)

        dataset_dwh = MDPDataset(observations_dwh, actions, rewards, terminals)
        train_episodes_dwh, test_episodes_dwh = train_test_split(dataset_dwh)

        q_func_dwh = d3rlpy.algos.DQN()
        q_func_dwh.fit(train_episodes_dwh,
                       test_episodes_dwh,
                       scorers={'advantage': discounted_sum_of_advantage_scorer,
                                'td_error': td_error_scorer,  # smaller is better
                                'value_scale': average_value_estimation_scorer})

    And it runs quite well, except that the loss is nan from the first step. Any idea why?

    Thanks.
    bug 
    opened by MauricioGS99 0
  • NameNotFound: Environment BreakoutNoFrameskip doesn't exist

    NameNotFound: Environment BreakoutNoFrameskip doesn't exist

    Hello,

    I am running the example code from the GitHub welcome page for Atari 2600 and Online Training. Both pieces of code raise an error saying that the environment cannot be found. Please see below.

    For Atari 2600, I just copied the code and pasted it into PyCharm on Windows 11.

    import d3rlpy
    from sklearn.model_selection import train_test_split
    
    # prepare dataset
    dataset, env = d3rlpy.datasets.get_atari('breakout-expert-v0')
    
    # split dataset
    train_episodes, test_episodes = train_test_split(dataset, test_size=0.1)
    
    # prepare algorithm
    cql = d3rlpy.algos.DiscreteCQL(
        n_frames=4,
        q_func_factory='qr',
        scaler='pixel',
        use_gpu=True,
    )
    
    # start training
    cql.fit(
        train_episodes,
        eval_episodes=test_episodes,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.td_error_scorer,
        },
    )
    

    And it fails with the error shown in the attached screenshot (NameNotFound: Environment BreakoutNoFrameskip doesn't exist).

    The same happens for Online Training; I just copied and pasted the code into PyCharm on Windows 11.

    import d3rlpy
    import gym
    
    # prepare environment
    env = gym.make('HopperBulletEnv-v0')
    eval_env = gym.make('HopperBulletEnv-v0')
    
    # prepare algorithm
    sac = d3rlpy.algos.SAC(use_gpu=True)
    
    # prepare replay buffer
    buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=1000000, env=env)
    
    # start training
    sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
    

    And it likewise fails with an environment-not-found error shown in the attached screenshot.

    Thank you!

    bug 
    opened by Zebin-Li 3
  • [REQUEST] Support Mildly Conservative Q-Learning (MCQ)

    [REQUEST] Support Mildly Conservative Q-Learning (MCQ)

    Hi

    Thank you for providing excellent code. I am using CQL for offline reinforcement learning. CQL is very useful, but we need to compensate for its weaknesses.

    So I found the following paper; would it be a valuable addition to this repository? https://arxiv.org/abs/2206.04745

    Unfortunately I don't have the capacity to implement this myself, so I am adding it here as an issue. Thank you.

    enhancement 
    opened by bakud 0
  • [REQUEST] Enable observation dictionary input.

    [REQUEST] Enable observation dictionary input.

    Is your feature request related to a problem? Please describe. Currently, the MDPDataset class asserts that the observation is an ndarray object. However, in the field of autonomous driving, the MDP observation cannot be represented by a simple ndarray. Typically, the observation space is composed of a BEV image and a speed profile, which is not yet supported by MDPDataset.

    Describe the solution you'd like I believe it would make the repo stronger to support dictionary observations, such as {"BEV": ndarray(C, W, H), "speed": (1,)}, for storage and training in MDPDataset (as well as in the Episode and Transition classes).

    enhancement 
    opened by Emiyalzn 0
  • [BUG] Pytorch module hooks are not executed

    [BUG] Pytorch module hooks are not executed

    Describe the bug I'm trying to debug some issues during online training (using fit_online) with PyTorch hooks, but these hooks are not being executed. Looking at the code, policies explicitly call self.forward() like this. Directly calling self.forward() doesn't execute any hooks (see this post), so __call__() should be used instead; that is, self.forward() should be replaced with self().
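
    A minimal, self-contained demonstration of the point above (plain PyTorch, not d3rlpy code): forward pre-hooks only fire when the module is invoked via __call__, not when .forward() is called directly.

        import torch
        import torch.nn as nn

        net = nn.Linear(4, 2)
        calls = []
        net.register_forward_pre_hook(lambda module, inputs: calls.append("hook fired"))

        x = torch.zeros(1, 4)
        net(x)           # goes through __call__, so the hook fires
        net.forward(x)   # bypasses __call__, so the hook does NOT fire

        print(calls)     # ['hook fired'] -- only one entry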

    To Reproduce

    1. Register a hook with the policy module, e.g. algo._impl.policy.register_module_forward_pre_hook(hook)
    2. Train with algo.fit_online(...)
    3. Observe that the hook is never invoked

    Expected behavior The registered hooks should be executed.

    Additional context N/A.

    bug 
    opened by abhaybd 0
  • TransitionMiniBatch object is NOT writable

    TransitionMiniBatch object is NOT writable

    For validating an idea, I want to modify rewards in a TransitionMiniBatch dynamically. However, it threw an exception: TransitionMiniBatch object is NOT writable. I checked the source code and found that TransitionMiniBatch is implemented in C. I wonder whether there is a way to modify a TransitionMiniBatch object. Thanks!

    enhancement 
    opened by XiudingCai 1