
mini-AlphaStar

Introduction

The mini-AlphaStar (mini-AS, or mAS) project is a mini-scale, non-official version of AlphaStar (AS). AlphaStar is the AI proposed by DeepMind to play StarCraft II.

The "mini-scale" means making the original AS's hyper-parameters adjustable so that mini-AS can be trained and running on a small scale. E.g., we can train this model in a single commercial server machine.

We followed the "Occam's Razor" principle when designing mini-AS: simple is sound. Therefore, we built mini-AS from scratch, and unless a feature significantly impacts speed or performance, we omit it.

Meanwhile, we also try not to use too many dependency packages, so that mini-AS depends only on PyTorch. In this way, we reduce the learning cost of mini-AS and keep its architecture relatively simple.

The Chinese readme provides a brief introduction in Chinese.

The four GIFs below show the performance of mini-AS trained on Simple64 with supervised learning on 50 expert replays.

Left: At the start of the game. Right: In the middle of the game.

Left: The agent's 1st attack. Right: The agent's 2nd attack.

Update

This release is version "v_1.07". In this version, we provide an agent whose win rate against the level-2 built-in bot grows from 0.016 to 0.5667 through reinforcement learning. Other improvements are shown below:

  • Use mimic_forward to replace forward in "rl_unroll", which increases the training accuracy;
  • RL training now supports multi-GPU;
  • RL training now supports multi-process training on top of multi-GPU;
  • Use a new architecture for the RL loss, which reduces GPU memory usage by 86%;
  • Use a new architecture for RL that increases the sampling speed by 6x;
  • Validate the UPGO and V-trace losses again;
  • With "multi-process plus multi-thread" training, increase the sampling speed by a further 197%;
  • Fix the GPU memory leak and reduce the CPU memory leak;
  • Increase the RL training win rate (without units loss) against level-2 to 0.57!

Hints

Warning: SC2 is extremely difficult, and AlphaStar is also very complex. Even though our project is a mini version of AlphaStar, it uses almost the same technologies as AS, and the training cost is also very high. We can hardly train mini-AS on a laptop. The recommended way is to use a commercial server with a GPU card and sufficiently large memory and disk space. If you are interested in this project for the first time, we recommend that you star this project and delve deeply into it when you have enough free time and training resources.

Location

We store the code and show the result videos in two places.

Codes location    Result video location    Usage
GitHub            YouTube                  for global users
Gitee             Bilibili                 for users in China

Contents

The table below shows the corresponding packages in the project.

Packages Content
alphastarmini.core.arch deep neural architecture
alphastarmini.core.sl supervised learning
alphastarmini.core.rl reinforcement learning
alphastarmini.core.ma multi-agent league training
alphastarmini.lib lib functions
alphastarmini.third third party functions

Requirements

PyTorch >= 1.5; for other packages, please see requirements.txt.

Install

The SCRIPT Guide gives some commands to install PyTorch by conda (this will automatically install CUDA and cudnn, which is convenient).

E.g., like (to install PyTorch 1.5 with accompanied CUDA and cudnn):

conda create -n th_1_5 python=3.7 pytorch=1.5 -c pytorch

Next, activate the conda environment, like:

conda activate th_1_5

Then you can install the other Python packages with pip, e.g., with the command below:

pip install -r requirements.txt

Usage

After all requirements are installed, run the program with the following command:

python run.py

You can comment and uncomment lines in "run.py" to select the training process you want.
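
For illustration, a rough sketch of how this stage selection in run.py might look (the exact contents of the file may differ, and the import path of the parameter module P is an assumption); uncomment the block for the stage you want and keep the others commented:

    # Illustrative sketch only -- the real run.py in the repository may differ.
    from alphastarmini.lib import param as P  # assumption: P is the project's parameter module

    # Stage 1: transform downloaded replays into trainable data
    # from alphastarmini.core.sl import transform_replay_data
    # transform_replay_data.test(on_server=P.on_server)

    # Stage 2: supervised learning on the transformed data
    from alphastarmini.core.sl import sl_train_by_tensor
    sl_train_by_tensor.test(on_server=P.on_server)

    # Stage 3: evaluate the SL model in the SC2 environment
    # from alphastarmini.core.rl import rl_eval_sl
    # rl_eval_sl.test(on_server=P.on_server)

    # Stage 4: reinforcement learning against the built-in bot
    # from alphastarmini.core.rl import rl_vs_inner_bot_mp
    # rl_vs_inner_bot_mp.test(on_server=P.on_server, replay_path=P.replay_path)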

The USAGE Guide provides answers to some problems and questions.

Follow the instructions below to get results similar to, or better than, those shown in the GIFs on the main page.

The processing sequence can be summarized as follows:

  1. Transform replays: download the replays for training, then use the script in mAS to transform the replays into trainable data;
  2. Supervised learning: use the trainable data to train an initial model by supervised learning;
  3. Evaluate the SL model: the trained SL model should be evaluated in the RL environment to make sure it behaves correctly;
  4. Reinforcement learning: use the trained SL model to do reinforcement learning in the SC2 environment, and watch the win rate start to grow.

We give detailed descriptions below.

Transform replays

In supervised learning, you first need to download SC2 replays.

The REPLAY Guide shows a guide to download these SC2 replays.

The ZHIHU Guide provides a guide for Chinese users who cannot conveniently access Battle.net (outside China) to download replays.

After downloading replays, you should move the replays to "./data/Replays/filtered_replays_1" (you can change the name in transform_replay_data.py).

Then use transform_replay_data.py to transform these replays to pickles or tensors (you can change the output type in the code of that file).

You don't need to run transform_replay_data.py directly; running "run.py" is enough. Make sure run.py has the following code

    # from alphastarmini.core.sl import transform_replay_data
    # transform_replay_data.test(on_server=P.on_server)

uncommented. Then you can directly run "run.py".

Note: To reproduce the trained agent shown in the GIFs, use the replays in Useful-Big-Resources. These replays were generated by our experts, so that the trained agent is able to beat the built-in bot.

Supervised learning

After getting the trainable data (we use tensor data), make sure run.py has the following code

    # from alphastarmini.core.sl import sl_train_by_tensor
    # sl_train_by_tensor.test(on_server=P.on_server)

uncommented. Then you can directly run "run.py" to do supervised learning.

The default learning rate is 1e-4, and the best number of training epochs is about 10 (more epochs may cause overfitting).
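
For reference, here is a minimal generic PyTorch loop illustrating this SL setup (learning rate 1e-4, about 10 epochs); the model, data, and loss below are placeholders for illustration only, not the actual mini-AS code in sl_train_by_tensor.py:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data and model standing in for the preprocessed tensors and the mini-AS network.
    features = torch.randn(256, 128)
    labels = torch.randint(0, 10, (256,))
    loader = DataLoader(TensorDataset(features, labels), batch_size=32, shuffle=True)

    model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # default SL learning rate
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(10):  # about 10 epochs; more may overfit
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()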

From version v_1.05, we support multi-GPU supervised learning training for mini-AS, which improves the training speed. Using multi-GPU training is straightforward, as follows:

python run_multi-gpu.py

Multi-GPU training has some instability (caused by PyTorch). If you find that your multi-GPU training hits instability errors, please switch to single-GPU training.

We currently support four types of supervised training, which all reside in the "alphastarmini.core.sl" package.

File Content
sl_train_by_pickle.py pickle (data not preprocessed) training: slow, but needs little disk space.
sl_train_by_tensor.py tensor (data preprocessed) training: fast, but costs a huge amount of disk space.
sl_multi_gpu_by_pickle.py multi-GPU, pickle training: requires large shared memory.
sl_multi_gpu_by_tensor.py multi-GPU, tensor training: needs both large memory and large shared memory.

You can use the load_pickle.py to transform the generated pickles (in "./data/replay_data") to tensors (in "./data/replay_data_tensor").
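
Conceptually, this conversion just loads each pickle and saves its contents as preprocessed tensors. Below is a minimal sketch of that idea, assuming each pickle holds a dict of numpy arrays (an assumption; the actual fields and logic in load_pickle.py may differ):

    import os
    import pickle
    import torch

    src_dir = "./data/replay_data"          # generated pickles
    dst_dir = "./data/replay_data_tensor"   # preprocessed tensors
    os.makedirs(dst_dir, exist_ok=True)

    for name in os.listdir(src_dir):
        if not name.endswith(".pickle"):
            continue
        with open(os.path.join(src_dir, name), "rb") as f:
            sample = pickle.load(f)  # assumed: a dict of numpy feature arrays
        tensors = {k: torch.as_tensor(v) for k, v in sample.items()}
        torch.save(tensors, os.path.join(dst_dir, name.replace(".pickle", ".pt")))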

Note: from v_1.06, we still recommend single-GPU training, and we provide the new training methods for the single-GPU setting. This is because multi-GPU training costs too much memory.

Evaluate SL model

After getting the supervised learning model, we should test its performance in the SC2 environment, because there is a domain shift between the SL data and the RL environment.

Make sure run.py has the following code

    # from alphastarmini.core.rl import rl_eval_sl
    # rl_eval_sl.test(on_server=P.on_server)

uncommented. Then you can directly run "run.py" to do an evaluation of the SL model.

The evaluation is similar to RL training, but learning is disabled and it runs single-threaded in a single process, so that randomness from multi-threading does not affect the evaluation.
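
In code terms, "learning is disabled" simply means the model runs in inference mode with gradients turned off and seeds fixed; a minimal sketch of that idea (the model below is a placeholder, and the SC2 environment loop is omitted):

    import torch

    # Placeholder network standing in for the trained SL agent loaded from a checkpoint.
    model = torch.nn.Linear(16, 4)
    model.eval()           # inference mode: dropout/batch-norm statistics are frozen
    torch.manual_seed(0)   # fix the seed so randomness does not skew the evaluation

    with torch.no_grad():  # no gradients, so no learning happens during evaluation
        obs = torch.randn(1, 16)            # placeholder observation
        action = model(obs).argmax(dim=-1)  # greedy action selection
        # ... step the SC2 environment with the chosen action and record wins ...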

Reinforcement learning

After making sure the supervised learning model is OK and suitable for RL training, we do RL training initialized from the learned supervised learning model.

Make sure run.py has the following code

    # from alphastarmini.core.rl import rl_vs_inner_bot_mp
    # rl_vs_inner_bot_mp.test(on_server=P.on_server, replay_path=P.replay_path)

uncommented. Then you can directly run "run.py" to do reinforcement learning.

Note: this training uses multi-process plus multi-thread RL training (to accelerate learning), so make sure to run this code on a high-performance computer.

E.g., on a commercial server we run 15 processes, each with 2 actor threads and 1 learner thread. If your computer is not that powerful, reduce the numbers of processes and threads.
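
As a rough sketch of this "multi-process plus multi-thread" layout (the function bodies and names below are purely illustrative and are not the actual code in rl_vs_inner_bot_mp.py):

    import multiprocessing as mp
    import threading
    import time

    def actor(pid, tid):
        # Illustrative actor: in mini-AS this would play SC2 episodes and collect trajectories.
        time.sleep(0.1)

    def learner(pid):
        # Illustrative learner: in mini-AS this would update the model from collected trajectories.
        time.sleep(0.1)

    def worker(pid, num_actor_threads=2):
        threads = [threading.Thread(target=actor, args=(pid, t)) for t in range(num_actor_threads)]
        threads.append(threading.Thread(target=learner, args=(pid,)))
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        num_processes = 15  # reduce this on weaker machines
        procs = [mp.Process(target=worker, args=(i,)) for i in range(num_processes)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()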

The learning rate should be very small (below 1e-5, because you are fine-tuning an already-trained model), and the training should run for as many iterations as possible (more iterations reduce the instability of RL training).

If the training does not behave as you expect, please open an issue to ask us or discuss with us (though we cannot guarantee a timely response or a solution to every problem).

Results

Below are some figures illustrating the SL training process:

SL training process

We can see that the losses (one primary loss and six argument losses) fall quickly.

The trained behavior of the agents can be seen in the gifs on this page.

A more detailed illustration of the experiments (such as the effects of the different hyper-parameters) will be provided in our later paper.

History

The HISTORY is the historical introduction of the previous versions of mini-AS.

Citing

If you find our repository useful, please cite our project or the below technical report:

@misc{liu2021mAS,
  author = {Ruo{-}Ze Liu and Wenhai Wang and Yang Yu and Tong Lu},
  title = {mini-AlphaStar},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/liuruoze/mini-AlphaStar}},
}

An Introduction of mini-AlphaStar is a technical report introducing mini-AS (not the full version).

@article{liu2021mASreport,
  author    = {Ruo{-}Ze Liu and
               Wenhai Wang and
               Yanjie Shen and
               Zhiqi Li and
               Yang Yu and
               Tong Lu},
  title     = {An Introduction of mini-AlphaStar},
  journal   = {CoRR},
  volume    = {abs/2104.06890},
  year      = {2021},
}

Rethinking

The Rethinking of AlphaStar presents our thoughts on the advantages and disadvantages of AlphaStar.

Paper

A paper (now under peer review) presenting detailed experiments and evaluations using mini-AS may be available in the future.

Owner
Ruo-Ze Liu
Think deep, work hard.