JORLDY, an open-source Reinforcement Learning (RL) framework provided by Kakao Enterprise

Overview

JORLDY (Beta)


Hello WoRLd!! Join Our Reinforcement Learning framework for Developing Yours (JORLDY) is an open-source Reinforcement Learning (RL) framework provided by Kakao Enterprise. It is named after Jordy, one of the Kakao Niniz characters. It provides various RL algorithms and environments, and they can be run with a single command. This repository is open to help RL researchers and students who study RL.

🔥 Features

  • 20+ RL algorithms and various RL environments are provided
  • Algorithms and environments are customizable
  • New algorithms and environments can be added
  • Distributed RL algorithms are provided using Ray
  • Benchmarks of the algorithms are conducted in many RL environments

Notification

Currently, JORLDY is a pre-release version. It only supports Linux natively, but you can use JORLDY with Docker on Windows and Mac. However, on a local environment in Windows and Mac, you can only use (single, sync_distributed)_train_nomp.py and eval.py. In WSL, there is an issue with algorithms that use a target network in the scripts that use the multiprocessing library. We will address these issues as soon as possible.

* (single, sync_distributed)_train_nomp.py: these scripts don't use the multiprocessing library. In detail, the manage process is folded into the main process, so they can be a bit slow (see the sketch below).
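
A minimal sketch of the difference, with hypothetical interact()/manage() loops standing in for JORLDY's internals (not the actual repository code):

    import multiprocessing as mp

    def interact(q):
        for step in range(3):          # stand-in for the interact loop
            q.put({"step": step})
        q.put(None)                    # sentinel: interaction finished

    def manage(q):
        while (result := q.get()) is not None:
            print("managed:", result)  # stand-in for logging/eval/saving

    if __name__ == "__main__":
        # *_train.py style: manage runs in its own process
        q = mp.Queue()
        p = mp.Process(target=manage, args=(q,))
        p.start()
        interact(q)
        p.join()

        # *_train_nomp.py style: the manage work stays in the main process,
        # so interaction and management cannot overlap (a bit slower)
        q = mp.Queue()
        interact(q)
        manage(q)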

⬇️ Installation

 $ git clone https://github.com/kakaoenterprise/JORLDY.git  
 $ cd JORLDY
 $ pip install -r requirements.txt

 # linux
 $ apt-get update 
 $ apt-get -y install libgl1-mesa-glx # for opencv
 $ apt-get -y install libglib2.0-0    # for opencv
 $ apt-get -y install gifsicle        # for gif optimize

🐳 To use docker

(customize if necessary)

 $ cd JORLDY

 # mac, linux
 $ docker build -t jorldy -f ./docker/Dockerfile .
 $ docker run -it --rm --name jorldy -v `pwd`:/JORLDY jorldy /bin/bash

 # windows
 > docker build -t jorldy -f .\docker\Dockerfile .
 > docker run -it --rm --name jorldy -v %cd%:/JORLDY jorldy /bin/bash

To use additional environments

(atari and super-mario-bros need to be installed manually due to licensing issues)

 # To use atari
 $ pip install --upgrade gym[atari,accept-rom-license]
 
 # To use super-mario-bros
 $ pip install gym-super-mario-bros

🚀 Getting started

$ cd jorldy

# Examples: python [script name] --config [config path]
$ python single_train.py --config config.dqn.cartpole
$ python single_train.py --config config.rainbow.atari --env.name assault

# Examples: python [script name] --config [config path] --[optional parameter key] [parameter value]
$ python single_train.py --config config.dqn.cartpole --agent.batch_size 64
$ python sync_distributed_train.py --config config.ppo.cartpole --train.num_worker 8 
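
The dotted optional parameters suggest that --section.key value pairs are mapped onto the nested config. A minimal sketch of that idea (hypothetical, not JORLDY's actual parser):

    def apply_overrides(config, argv):
        # apply --section.key value pairs onto a nested config dict
        it = iter(argv)
        for arg in it:
            if not arg.startswith("--"):
                continue
            *parents, leaf = arg[2:].split(".")  # "agent.batch_size" -> ["agent"], "batch_size"
            node = config
            for key in parents:
                node = node.setdefault(key, {})
            node[leaf] = next(it)                # the value follows the flag

    config = {"agent": {"batch_size": 32}}
    apply_overrides(config, ["--agent.batch_size", "64"])
    print(config)  # {'agent': {'batch_size': '64'}}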

🗂️ Release

Version | Release Date      | Source | Release Note
0.0.1   | November 03, 2021 | Source | Release Note

🔍 How to

📄 Documentation

👥 Contributors

📫 Contact: [email protected]


©️ License

Apache License 2.0

🚫 Disclaimer

Installing JORLDY and/or utilizing algorithms or environments not provided by KEP may involve the use of a third party's intellectual property. It is advisable that users obtain licenses or permissions from the right holder(s), if necessary, or take any other necessary measures to avoid infringement or misappropriation of a third party's intellectual property rights.

Comments
  • Ray memory issue when running rnd ppo

    Ray memory issue when running rnd ppo

    Describe the bug A Ray memory issue occurred when running RND PPO on Montezuma's Revenge in the Atari env.

    To Reproduce Run RND PPO on Montezuma's Revenge.

    Expected behavior Memory issue occurs.

    Screenshots Screenshot 2021-11-29, 3:13 PM (attached)

    Development Env. (OS, version, libraries): Linux Ubuntu, Python 3.8, requirements (jorldy 0.0.2)


    bug 
    opened by leonard-q 3
  • Modify train files, eval_manager

    Modify train files, eval_manager

    🌟 Hello! Thanks for contributing to JORLDY!

    Checklist

    Please check if you consider the following items.

    • [v] My code follows the style guidelines of this project
    • [v] My code follows the naming convention of documentation
    • [v] I have commented my code, particularly in hard-to-understand areas
    • [v] My changes generate no new warnings or errors

    Types of changes

    Bugfix

    Test Configuration

    • OS: Windows 10
    • Python version: 3.8
    • Additional libraries: None

    Description

    • Fixed #44

    The basic idea is that eval_manager in the child process should create its own env; a sketch of the idea follows. For now, the distributed_train.py process doesn't use the env after creating the agent config.
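
    A minimal sketch of that idea (hypothetical names, not the repository's code): pass only picklable arguments to the child process and construct the env there:

        import multiprocessing as mp

        def make_env(name):
            # hypothetical factory; live env handles are often not picklable
            return {"name": name}

        def eval_worker(env_name, q):
            env = make_env(env_name)  # the env is created inside the child process
            q.put(f"evaluated on {env['name']}")

        if __name__ == "__main__":
            q = mp.Queue()
            # pass the (picklable) name, never the live env object itself
            p = mp.Process(target=eval_worker, args=("cartpole", q))
            p.start()
            print(q.get())
            p.join()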

    opened by zenoengine 3
  • V-MPO atari performance issue

    V-MPO atari performance issue

    I tried running V-MPO on Atari Breakout, and it didn't seem to gain any momentum. Any reason why this might be? I tried changing some of the parameters in the config file and I still didn't get any improvement. Is this how it is supposed to be at the beginning of training?


    bug 
    opened by hlsafin 2
  • Leonard/multi modal

    Leonard/multi modal

    🌟 Hello! Thanks for contributing to JORLDY!

    Checklist

    Please check if you consider the following items.

    • [v] My code follows the style guidelines of this project
    • [v] My code follows the naming convention of documentation
    • [v] I have commented my code, particularly in hard-to-understand areas
    • [v] My changes generate no new warnings or errors

    Types of changes

    New feature

    Test Configuration

    • OS: Linux Ubuntu
    • Python version: 3.8
    • Additional libraries: None

    Description

    Envs which have multi-modal (image, vector) input can be applied to all agents.

    opened by leonard-q 2
  • Ray Out Of Memory Error

    Ray Out Of Memory Error


    To Reproduce

        python main.py --async --config config.r2d2.atari --env.name breakout
        python main.py --async --config config.muzero.atari --env.name qbert

    Expected behavior RayOutOfMemoryError

    Screenshots Two screenshots (2022-05-30, 6:46 PM and 5:07 PM) attached

    Development Env. (OS, version, libraries): Linux, Python 3.7.11, jorldy 0.3.0

    Additional context

    • https://stackoverflow.com/questions/60175137/out-of-memory-with-ray-python-framework
    • https://github.com/ray-project/ray/issues/5572

    It seems that GC for Ray shared memory doesn't work properly.
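
    One commonly suggested mitigation (an assumption, not a confirmed fix for this issue) is to cap the size of Ray's shared-memory object store at startup:

        import ray

        # cap the plasma object store at 4 GiB (the argument is in bytes);
        # a possible mitigation, not a confirmed fix for this issue
        ray.init(object_store_memory=4 * 1024 ** 3)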

    bug 
    opened by kan-s0 1
  • Non-episodic update of Multistep agent

    Non-episodic update of Multistep agent

    Describe the bug Samples of the Multistep agent carry garbage values for post-terminal states.
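
    A sketch of the usual fix (illustrative, not the repository's code): truncate the n-step return at the first terminal so post-terminal samples contribute nothing:

        def n_step_return(rewards, dones, bootstrap, gamma=0.99):
            # accumulate discounted rewards, stopping at the first terminal
            g, discount = 0.0, 1.0
            for reward, done in zip(rewards, dones):
                g += discount * reward
                if done:                     # terminal: mask everything after it
                    return g
                discount *= gamma
            return g + discount * bootstrap  # no terminal: bootstrap from the value net

        # the episode ends at step 2; the step-3 reward is post-terminal garbage
        print(n_step_return([1.0, 1.0, 1.0, 99.0], [False, False, True, False], 5.0))
        # -> 2.9701 (= 1 + 0.99 + 0.99**2); garbage reward and bootstrap excluded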


    bug 
    opened by erinn-lee 1
  • update put&timeout to put_nowait

    update put&timeout to put_nowait

    🌟 Hello! Thanks for contributing to JORLDY!

    Checklist

    Please check if you consider the following items.

    • [x] My code follows the style guidelines of this project
    • [x] My code follows the naming convention of documentation
    • [x] I have commented my code, particularly in hard-to-understand areas
    • [x] My changes generate no new warnings or errors

    Types of changes



    Description

    Optimize the put method: replace put with a timeout by put_nowait.

    opened by ramanuzan 1
  • memory size in test_r2d2_agent.py

    memory size in test_r2d2_agent.py

    Describe the bug agent.memory.size is not defined correctly.

    To Reproduce Run pytest after uncommenting agent.memory.size.


    Development Env. (OS, version, libraries): Linux Ubuntu


    bug 
    opened by leonard-q 1
  • Couldn't launch the "Server/DroneDelivery"

    Couldn't launch the "Server/DroneDelivery"

    Describe the bug

    mlagents_envs.exception.UnityEnvironmentException:
    
    Couldn't launch the ./core/env/mlagents/DroneDelivery/Server/DroneDelivery environment. 
    Provided filename does not match any environments.
    

    To Reproduce

    # docker
    docker build -t jorldy -f ./docker/Dockerfile .
    docker run -it --rm --name jorldy -v `pwd`:/JORLDY jorldy /bin/bash
    
    python sync_distributed_train.py --config=config.ppo.drone_delivery_mlagent
    


    Development Env. (OS, version, libraries): Ubuntu 18.04.5 LTS, mlagents-envs 0.26.0


    bug 
    opened by zenoengine 1
  • Errors when running Drone_Challenge

    Errors when running Drone_Challenge

    Describe the bug

    1. mlagents would not run until I installed hiredis
    2. DroneDelivery env error, I think it's corrupted.

    To Reproduce

        pip install -r requirements.txt
        python sync_distributed_train.py --config=config.ppo.drone_delivery_mlagent

    Expected behavior

    First, after I installed requirements.txt, I ran "python sync_distributed_train.py --config=config.ppo.drone_delivery_mlagent". Then I saw "redis-py works best with hiredis please consider installing". In my case it didn't cause any problem running mlagents, but one of my friends couldn't run it until he installed hiredis.

    Second, when I ran mlagents, I could barely see the drone and the destination points (please see the attached picture). By overwriting the files with this, I could solve the problem.

    Please check these errors. Thanks


    Development Env. (OS, version, libraries): Windows 10, Anaconda, Python 3.8.8

    bug 
    opened by pnltoen 1
  • pre-check discrete or continuous action by algorithms

    pre-check discrete or continuous action by algorithms

    Is your feature request related to a problem? Please describe. Hi, thank you for sharing this project. For now, it seems DQN doesn't check in advance whether the action space is discrete or continuous. When I change the dqn.cartpole config

    env = {
        "name":"cartpole",
        "render":False,
    }
    

    to

    env = {
        "name":"cartpole",
        "render":False,
        "mode":"continuous",
    }
    

    it doesn't give any errors but isn't trained well. Since DQN is an algorithm for discrete actions and the buffer stores integer actions, the continuous CartPole env only ever runs action = 1. (I didn't really look into whether other algorithms check the action type, but DQN doesn't.)

    Describe the solution you'd like It might be possible to insert an assert statement in each algorithm's code, as sketched below.
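
    A sketch of what such a guard could look like (hypothetical attribute names, not JORLDY's actual agent class):

        class DQNAgent:
            def __init__(self, action_type="discrete", **kwargs):
                # fail fast instead of silently training with action = 1
                assert action_type == "discrete", (
                    f"DQN supports discrete actions only, got {action_type!r}"
                )
                self.action_type = action_type

        DQNAgent(action_type="discrete")      # fine
        # DQNAgent(action_type="continuous")  # raises AssertionError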


    enhancement 
    opened by HanbumKo 1
  • Unavailable modules ['mlagent', 'mujoco', 'nes', 'procgen']

    Unavailable modules ['mlagent', 'mujoco', 'nes', 'procgen']

    Describe the bug Unavailable modules ['mlagent', 'mujoco', 'nes', 'procgen']

        module: mlagent
        error: Traceback (most recent call last):
          File "e:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\__init__.py", line 21, in <module>
            module = __import__(module_path, fromlist=[None])
          File "e:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\mlagent.py", line 1, in <module>
            from mlagents_envs.environment import UnityEnvironment, ActionTuple
        ModuleNotFoundError: No module named 'mlagents_envs'

    and:

        ModuleNotFoundError: No module named 'mujoco_py'
        ModuleNotFoundError: No module named 'nes_py'

    and:

        ImportError: cannot import name 'ProcgenEnv' from partially initialized module 'procgen' (most likely due to a circular import) (e:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\procgen.py)

    To Reproduce In main.py, set default_config_path = "config.ppo.pong_mlagent" and run.

    When I pip install mlagents-envs, I then get: Couldn't launch the ./core/env/mlagents/Pong/Windows/Pong environment. Provided filename does not match any environments. File "E:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\mlagent.py", line 37, in __init__: self.env = UnityEnvironment(

    I changed the mlagent code to

        rootPath = os.path.abspath(os.path.dirname(__file__)) + "/../../"
        env_path = rootPath + f"./core/env/mlagents/{env_name}/{match_build()}/{env_name}"
    

    and it runs.

    However, when the run ends, the program does not terminate when using async_distributed_train with an mlagent env.

    The last log: Interact process done.

    Expected behavior No errors; training runs successfully and terminates successfully.

    Development Env. (OS, version, libraries): windows 10

    bug 
    opened by xiezhipeng-git 0
  • R2D2 optimize and benchmark

    R2D2 optimize and benchmark

    Is your feature request related to a problem? Please describe. Currently, the state stored in an R2D2 transition is too large as float64, and if the sequence length is lengthened accordingly, the existing buffer size becomes too large.

    Describe the solution you'd like

    • Change the state type of the transition to uint8 (see the sketch below).
    • Reduce the buffer size of the config.
    • R2D2 atari benchmark

    Describe alternatives you've considered

    • Fixed size when adding state to _transition in agent interact callback.

    Additional context

    • R2D2 atari benchmark
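
    A sketch of the uint8 idea (illustrative, not the repository's code): store raw frames as uint8 and convert back to float only for the minibatch being trained on:

        import numpy as np

        def to_storage(state):
            # Atari frames are already 0-255, so uint8 is lossless
            # and 8x smaller than float64
            return np.asarray(state, dtype=np.uint8)

        def to_training(state):
            # convert (and normalize) only at training time
            return state.astype(np.float32) / 255.0

        frame = np.random.randint(0, 256, (4, 84, 84))
        stored = to_storage(frame)
        print(stored.nbytes, to_training(stored).dtype)  # 28224 float32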
    enhancement 
    opened by kan-s0 0
  • MuZero performance issue

    MuZero performance issue

    Describe the bug MuZero shows very good performance in some environments such as CartPole, Pong mlagent, and Atari Pong and Breakout. However, it shows bad performance in most of the other Atari environments (SpaceInvaders, Qbert, Enduro, Seaquest, ...).

    To Reproduce Try running the MuZero algorithm in Atari environments other than Pong and Breakout.

    Expected behavior It shows worse performance compared to other algorithms.


    Development Env. (OS, version, libraries): Linux, Python 3.8, jorldy 0.3.0 requirement


    bug 
    opened by leonard-q 0
  • Multi-GPU

    Multi-GPU


    Use Multi-GPU


    enhancement 
    opened by erinn-lee 0
  • Invalid probability value in tensor when running mpo

    Invalid probability value in tensor when running mpo

    Describe the bug RuntimeError when running mpo

    To Reproduce

    python main.py --config config.mpo.atari --env.name breakout --sync
    

    When config is modified with the values shown in the paper, it occurs faster and more frequently.

    Expected behavior

    • An error occurs when calling the multinomial method with pi from the Actor network.
    • RuntimeError: probability tensor contains either inf, nan or element < 0

    Screenshots (attached): training graph, error txt, and mpo generated agent code.

    • In the training graph, the default config (green) also causes an error at 7M steps.

    Development Env. (OS, version, libraries):

    • linux
    • V4XLARGE
    • python 3.7.11
    • jorldy:0.3.0

    Additional context

    • Even with default config, an error sometimes occurs after a lot of learning.
    • If you set the config to the value shown in the paper, you get a much higher score at the beginning, but an error quickly occurs.
    bug 
    opened by kan-s0 0
Releases (v0.5.0)
  • v0.5.0(Apr 18, 2022)

    ❗Important

    • JORLDY ArXiv Paper is published! (link)
    • Algorithm description is added! (#168) (link)

    🛠️ Fixes & Improvements

    • PPO continuous debugging is done (#157)
    • Initialize the actors' networks as the learner network (#165)

    🔩 Minor fix

    • Modify to reset rollout buffer stamp to 0 (#165)

    ⏰ Known Issues

    • R2D2 needs to be optimized
    • Debugging of IQN-based algorithms should be done
    • V-MPO performance is unstable (#164)

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.5.0: @leonard-q, @ramanuzan, @kan-s0, @erinn-lee
  • v0.4.0(Apr 4, 2022)

    🛠️ Fixes & Improvements

    • Update Pytorch version to 1.10 and other packages (#139)
    • ICM and RND debugging is done (#145)
    • APE-X debugging is done (#147)
    • SAC discrete implemented (#150)

    🔩 Minor fix

    • Update Readme (contributors) (#138)
    • Update distributed architecture flowchart and timeline (#143)
    • Learning rate decay can be set as optional (#151)
    • Split optimizer of ICM and RND from PPO (#152)
    • Modify async step calculation (#154)

    ⏰ Known Issues

    • R2D2 needs to be optimized
    • IQN-based algorithms have to be evaluated

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.4.0: @leonard-q, @ramanuzan, @kan-s0, @erinn-lee
  • v0.3.0(Mar 10, 2022)

    ❗Important

    • Integrate scripts into one main script (#125)
    • TD3 is implemented (#127)
    • R2D2 is implemented, but it needs to be optimized (#104)

    🛠️ Fixes & Improvements

    • Edit the stamp step calculation; reset to 0 → -= period step (#130); see the sketch after this list
    • Implement a gather thread to process gets from the queue (the manage process is updated with it) (#130)
    • Integrate the DQN network, deterministic policy actor, and critic (#129)
    • Add an LR scheduler to all RL algorithms (#108)
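
    The stamp fix in compact form (illustrative, not the repository's code): subtracting the period keeps any overshoot instead of discarding it:

        period, stamp = 1000, 0
        for step_size in (600, 600):    # steps rarely land exactly on the period
            stamp += step_size
            if stamp >= period:
                # before: stamp = 0 (drops the 200-step overshoot)
                stamp -= period         # after: the overshoot carries over
        print(stamp)  # 200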

    🔩 Minor fix

    • Delete unused variable in ddqn (#128)

    ⏰ Known Issues

    • ICM PPO and RND PPO performance degrades after the PPO modification; it needs to be fixed
    • R2D2 needs to be optimized
    • APE-X debugging has to be done
    • IQN-based algorithms have to be evaluated

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.3.0: @leonard-q, @ramanuzan, @kan-s0, @erinn-lee
  • v0.2.0(Jan 27, 2022)

    ❗Important

    • Atari wrapper is modified with reference to the openai baselines wrappers (#92)
      • EpisodicLifeEnv, MaxAndSkipEnv, ClipRewardEnv(sign) are applied (the reward clip is sketched below)
      • reference: https://github.com/openai/baselines/blob/master/baselines/common/atari_wrappers.py
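
    For reference, the sign-clipping wrapper in the baselines reference is essentially the following (using the gym API of that era):

        import gym
        import numpy as np

        class ClipRewardEnv(gym.RewardWrapper):
            def reward(self, reward):
                # keep only the sign of the reward: -1, 0, or +1
                return np.sign(reward)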

    🛠️ Fixes & Improvements

    • Error in Drone Delivery Env Mac build is fixed (#94)
    • Mujoco is supported in docker (#96)
    • PPO algorithm debugging is done (#103)
      • Implement value-clip
        • reference: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/ppo2/model.py#L133
      • Update the log calc to prevent gradient divergence; prob_tensor.log() → Categorical.log_prob() (sketched below, together with value-clip)
      • Change the advantage standardization order; before value calc → after value calc
      • Add custom LR scheduler (DQN, PPO) (#103)
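
    A sketch of the two PPO changes named above (illustrative, assuming a discrete policy; not JORLDY's actual code):

        import torch
        from torch.distributions import Categorical

        logits = torch.randn(8, 4)

        # log prob via the distribution instead of softmax().log(), which
        # diverges when a probability underflows to exactly 0
        dist = Categorical(logits=logits)
        actions = dist.sample()
        log_prob = dist.log_prob(actions)

        # value-clip, as in the baselines ppo2 reference: bound how far the
        # new value estimate may move from the old one inside the MSE loss
        value, old_value = torch.randn(8), torch.randn(8)
        returns, eps = torch.randn(8), 0.2
        value_clipped = old_value + torch.clamp(value - old_value, -eps, eps)
        value_loss = torch.max((value - returns) ** 2,
                               (value_clipped - returns) ** 2).mean()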

    ⏰ Known Issues

    • ICM PPO and RND PPO performance degrades after the PPO modification; it needs to be fixed

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.2.0: @leonard-q, @ramanuzan
  • v0.1.0(Dec 23, 2021)

    ❗Important

    • Unit test codes are implemented!
    • M-DQN, M-IQN are implemented! (#79)
    • Mujoco envs are supported! (#83)

    🛠️ Fixes & Improvements

    • RND code refactoring (#52) caused a fatal error → it is solved by changing a parameter name of RND (#71)
    • Change the default initialization method (Xavier → Orthogonal) (#81)
    • Change Softmax to exp(log_softmax) (#82)
    • Unit test for the Mujoco env is done (#93)

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.1.0: @leonard-q, @ramanuzan, @lkm2835

  • v0.0.3(Nov 23, 2021)

    • Important
      • Github Actions is applied for Python code style (PEP8). Please refer to the style guide in CONTRIBUTING.md
      • New environment: Drone Delivery ML-Agents Environment is added! 🛸
      • ML-Agents Server builds are removed! The Linux build with the no_graphics option can be run on the server. (#58)
    • Fixes & Improvements
      • JORLDY supports envs which provide multi-modal (image, vector) input
      • mlagents Windows issue
        • Issue #44 occurred when mlagents envs were run on Windows
        • #46 solved this problem (Thank you so much @zenoengine )
      • mlagents Linux build Issue
        • mlagents envs had errors because .gitignore contained *.so, which removed all the .so files in the mlagents envs. Therefore, all the .so files are restored and .gitignore is modified.
      • ICM, RND code refactoring is conducted because of the duplicated functions (#52)
      • ICM PPO bug fix: remove softmax before calc cross-entropy (#49)
      • *_timers.json files in mlagent envs caused conflicts when using git, so *_timers.json files are added to .gitignore (#59)
      • Benchmark is developed! → config, script, spec are added
    • Acknowledgement
      • Thanks to all who contributed to JORLDY v0.0.3: @zenoengine, @ramanuzan, @leonard-q
  • v0.0.2(Nov 6, 2021)

    📢 Important

    • Now JORLDY fully supports Windows, Mac and Linux!

    🛠️ Fixes & Improvements

    • README minor fix
      • Remove $, >
      • fixed typos
    • modify gitignore; add python gitignore template
    • supports WSL, Windows and Mac
      • change agent instantiation code #28
      • custom dict can be pickled
      • multiprocessing qsize() → empty, full (see the note below)
    • remove _nomp.py files
      • solve multiprocessing issue on all OS
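
    Context for the qsize() change (a note with a small sketch): multiprocessing.Queue.qsize() raises NotImplementedError on macOS because sem_getvalue() is not implemented there, so portable code checks empty()/full() instead:

        import multiprocessing as mp

        q = mp.Queue(maxsize=4)
        q.put(1)

        # q.qsize() raises NotImplementedError on macOS
        if not q.empty():   # portable across Linux, macOS, and Windows
            print(q.get())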

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.0.2: @zenoengine, @ramanuzan, @leonard-q
  • v0.0.1(Nov 3, 2021)

    Hello WoRLd! ✋ This is the first version of JORLDY, an open-source Reinforcement Learning (RL) framework provided by Kakao Enterprise! We expect that JORLDY will help researchers and students who study RL. The features of JORLDY are as follows ⭐.

    • 20+ RL algorithms and various RL environments are provided
    • Algorithms and environments can be added and customized
    • Running an RL algorithm on an environment takes a single command
    • Distributed RL algorithms are provided using Ray
    • Benchmarks of the algorithms are conducted in many RL environments

    🤖 The implemented algorithms are as follows:

    • Deep Q Network (DQN), Double DQN, Dueling DQN, Multistep DQN, Prioritized Experience Replay (PER), C51, Noisy Network, Rainbow (DQN, IQN), QR-DQN, IQN, Curiosity Driven Exploration (ICM), Random Network Distillation (RND), APE-X, REINFORCE, DDPG, PPO, SAC, MPO, V-MPO

    🌎 The provided environments are as follows

    • GYM classic control, Unity ML-Agents, Procgen
      • GYM Atari and Super Mario Bros are excluded from the requirements because of license issues. You should install these environments manually.
Owner
Kakao Enterprise Corp.