
DeepRL

This code implements standard deep Q-learning (DQN) and a dueling network, both with experience replay (a memory buffer), for playing simple games.

The DQN algorithm implemented in this code is from Google DeepMind's paper Playing Atari with Deep Reinforcement Learning [link].

The dueling network is from the paper Dueling Network Architectures for Deep Reinforcement Learning [link].
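The dueling architecture splits the network into a state-value stream V(s) and an advantage stream A(s,a), then recombines them with the mean-subtracted aggregation from the paper: Q(s,a) = V(s) + (A(s,a) - mean over a' of A(s,a')). The repository itself is Torch/Lua; the minimal sketch below uses Python only to show that aggregation step.

```python
def dueling_q(value, advantages):
    """Combine state value V(s) and advantages A(s,a) into Q-values
    using the mean-subtracted aggregation from the dueling paper:
    Q(s,a) = V(s) + (A(s,a) - mean_a' A(s,a'))."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

# With V(s)=1.0 and A=[0.5, -0.5], the mean advantage is 0,
# so Q = [1.5, 0.5].
print(dueling_q(1.0, [0.5, -0.5]))
```

Subtracting the mean advantage keeps the V/A decomposition identifiable: adding a constant to all advantages no longer changes the resulting Q-values.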

Requirements

DeepRL is implemented with Torch and packages from its ecosystem. The code works well on my Mac Pro with CPU (I haven't tested it on Linux or with a GPU). Install Torch7 first, then install the following packages with luarocks:

luarocks install nn
luarocks install image
luarocks install qt
luarocks install optim

Running

You can run this code by typing the following command in the project directory:

qlua main.lua

The results look like:

DQN: I got an accuracy of 93.2% (932 successes in 1000 epochs).

Dueling: I got an accuracy of 99.2% (992 successes in 1000 epochs).

Code

The envir.lua implements the environment in the reinforcement learning loop: it receives an action and produces the next state and a reward for the agent.
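The concrete game in envir.lua isn't described in this README, so the sketch below is a hypothetical environment (names and rules invented here) that exposes the same step contract: action in, next state / reward / done flag out. Python is used only for illustration; the repo's environment is Lua.

```python
import random

class GridEnv:
    """Toy 1-D environment illustrating the action-in, (state, reward,
    done)-out contract. Hypothetical: not the repo's actual game."""

    def __init__(self, size=5):
        self.size = size
        self.pos = 0

    def reset(self):
        # Start somewhere other than the goal cell.
        self.pos = random.randrange(self.size - 1)
        return self.pos

    def step(self, action):
        # Action 0 moves left, action 1 moves right, clamped to the grid.
        delta = 1 if action == 1 else -1
        self.pos = max(0, min(self.size - 1, self.pos + delta))
        done = self.pos == self.size - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done
```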

The agent.lua implements the agent, which receives the states and reward and produces an action directed by the policy network.
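DQN agents typically pick actions epsilon-greedily: mostly exploit the action with the highest Q-value, occasionally explore at random. A minimal Python sketch of that selection rule follows; whether the repo's Lua agent uses exactly this rule, and its epsilon schedule, are assumptions here.

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy policy: with probability epsilon pick a random
    action (explore), otherwise pick argmax Q (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon=0 this is purely greedy: argmax of [0.1, 0.9, 0.3] is 1.
print(select_action([0.1, 0.9, 0.3], 0.0))  # -> 1
```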

The learner.lua implements the DQN learning algorithm with experience replay.
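In outline, DQN with experience replay stores transitions (s, a, r, s', done) in a bounded buffer, samples random minibatches from it, and moves Q(s,a) toward the target r + gamma * max over a' of Q(s', a'). The Python sketch below uses a plain dict Q-table standing in for the policy network (the repo's learner uses Torch networks instead, and all names here are illustrative):

```python
import random
from collections import deque

def dqn_replay_update(q, buffer, batch_size, gamma, alpha):
    """One DQN-style update: sample a minibatch of transitions from the
    replay buffer and move Q(s,a) toward r + gamma * max_a' Q(s',a').
    `q` is a dict Q-table standing in for the policy network."""
    batch = random.sample(buffer, min(batch_size, len(buffer)))
    for s, a, r, s2, done in batch:
        if done:
            target = r
        else:
            target = r + gamma * max(q.get((s2, b), 0.0) for b in (0, 1))
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))

# Store transitions (s, a, r, s', done) in a bounded buffer, then update.
buffer = deque(maxlen=1000)
buffer.append((0, 1, 0.0, 1, False))  # step toward the goal, no reward yet
buffer.append((1, 1, 1.0, 2, True))   # terminal step with reward 1
q = {}
for _ in range(50):
    dqn_replay_update(q, buffer, batch_size=2, gamma=0.9, alpha=0.5)
```

After the updates, the terminal transition's Q-value converges to its reward (1.0), and the earlier state's Q-value converges to the discounted bootstrap 0.9 * 1.0, which is the core of the Q-learning target.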

MISC

I completed this code while I was an intern at Horizon Robotics. Many thanks to Andrej Karpathy's article and to other implementations: SeanNaren's code and EderSantana's gist.

LICENSE

MIT
