Automated Hyperparameter Optimization Competition

Overview

QQ Browser 2021 AI Algorithm Competition - Automated Hyperparameter Optimization Contest

ACM CIKM 2021 AnalyticCup

In feed-based recommendation scenarios, the performance of a model or strategy commonly depends on its hyperparameters, and those hyperparameters are typically tuned by hand based on experience, which is inefficient, expensive to maintain, and rarely reaches the best attainable results. This track therefore takes hyperparameter optimization as its theme: starting from real business problems, each team's optimization algorithm is evaluated on anonymized datasets. The task is a hyperparameter optimization, or black-box optimization, problem: given the value space of the hyperparameters, the algorithm observes the Reward of one set of hyperparameters per iteration and must find a set with as large a Reward as possible within a limited number of iterations; the final ranking is determined by the largest Reward found.
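
The loop below is a minimal, self-contained sketch of the black-box setting described above: in each of a limited number of iterations one set of hyperparameters is tried, its Reward is observed, and the best Reward seen so far is tracked. The search space, the sampling strategy, and the evaluate function are hypothetical stand-ins for illustration, not part of the THPO kit.

import random

def sample_candidate(search_space):
    """Pick one value for every hyperparameter (here: uniformly at random)."""
    return {name: random.choice(values) for name, values in search_space.items()}

def evaluate(candidate):
    """Stand-in for the unknown black-box evaluation function."""
    return -sum((v - 0.5) ** 2 for v in candidate.values())

search_space = {"p1": [0.0, 0.25, 0.5, 0.75, 1.0],
                "p2": [0.0, 0.25, 0.5, 0.75, 1.0]}

best_reward, best_params = float("-inf"), None
for _ in range(30):                       # limited iteration budget
    params = sample_candidate(search_space)
    reward = evaluate(params)             # one Reward observation per iteration
    if reward > best_reward:              # ranking uses the maximum Reward found
        best_reward, best_params = reward, params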

1. Resources

2. Repo structure

|--example_random_searcher        example submission using random search
|  `--searcher.py
|
|--example_bayesian_optimization  example submission using Bayesian optimization
|  |--requirements.txt            example of declaring extra packages for a submission
|  `--searcher.py
|
|--input                          data for the test evaluation functions
|  |--data-2
|  `--data-30
|
|--thpo                           thpo competition toolkit
|  |--__init__.py
|  |--abstract_searcher.py
|  |--common.py
|  |--evaluate_function.py
|  |--reward_calculation.py
|  |--run_search_one_time.py
|  `--run_search.py
|
|--main.py                        main test program
|--local_test.sh                  local test script
|--prepare_submission.sh          packaging script for code submission
|--environments.txt               packages already installed in the evaluation environment
`--requirements.txt               packages required by the demo programs

3. Quick start

3.1 Environment setup

The THPO-Kit toolkit is written in Python 3. Its dependencies are listed in requirements.txt and must be installed before the scripts can run. Install them with pip3:

pip3 install -r requirements.txt

3.2 Create a searcher

  1. Referring to example_random_searcher, create a new directory my_algo for your algorithm
  2. Create a new searcher.py file in the my_algo directory
  3. Implement your own Searcher class in searcher.py (the file name and class name must not be changed)
  4. Implement the __init__ and suggest functions
  5. Modify local_test.sh, setting SEARCHER to my_algo
  6. Execute the local_test.sh script to get the results of your algorithm

Step 1 - Step 2: [root folder]

|--my_algo
|  |--requirements.txt
|  `--searcher.py 
|--local_test.sh

Step 3 - Step 4: [searcher.py]

# Importing the AbstractSearcher base class is mandatory
from thpo.abstract_searcher import AbstractSearcher
from random import randint

class Searcher(AbstractSearcher):
    searcher_name = "RandomSearcher"

    def __init__(self, parameters_config, n_iter, n_suggestion):
        AbstractSearcher.__init__(self,
                                  parameters_config,
                                  n_iter,
                                  n_suggestion)

    def suggest(self, suggestion_history, n_suggestions=1):
        """Return n_suggestions new hyperparameter sets, each sampled uniformly
        at random from every parameter's candidate values ("coords")."""
        next_suggestions = []
        for _ in range(n_suggestions):
            next_suggest = {
                name: conf["coords"][randint(0, len(conf["coords"]) - 1)]
                for name, conf in self.parameters_config.items()
            }
            next_suggestions.append(next_suggest)
        return next_suggestions
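
To make the comprehension in suggest concrete, the snippet below builds one suggestion by hand from a toy parameters_config. It only assumes what the code above already shows, namely that every hyperparameter entry carries a "coords" list of candidate values; the real configs are loaded by the kit from input/, so the parameter names and values here are made up for illustration.

from random import randint

# Hypothetical parameters_config: each hyperparameter maps to a dict with a
# "coords" list of candidate values.
toy_parameters_config = {
    "p1": {"coords": [0.001, 0.01, 0.1, 1.0]},
    "p2": {"coords": [16, 32, 64, 128]},
}

# One suggestion, produced the same way as in Searcher.suggest above.
suggestion = {
    name: conf["coords"][randint(0, len(conf["coords"]) - 1)]
    for name, conf in toy_parameters_config.items()
}
print(suggestion)   # e.g. {'p1': 0.1, 'p2': 32}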

Step 5: [local_test.sh]

SEARCHER="my_algo"

3.3 Local test

Execute the local_test.sh script to run a local evaluation:

./local_test.sh

Execution output:

====================== run search result ========================
 err_code:  0  err_msg:  
========================= iters means ===========================
func: data-2 iteration best: [25.24271821 26.36435157 12.77928619 10.19180929 11.3147711  10.17430656
 12.77928619 27.79752169 26.36793589 11.12007615]
func: data-30 iteration best: [-0.95264345 -0.27725879 -0.36873091 -0.68088963 -0.28840479 -0.50006427
 -0.32088949 -0.78627201 -0.53204227 -0.98427191]
========================= fianl score ============================
example_bayesian_optimization final score:  0.47173337831255463
==================================================================

3.4 Submit your code

Package your searcher with the prepare_submission.sh script, then upload the resulting zip archive at the code submission entry on the competition website:

./prepare_submission.sh example_random_searcher

Execution output:

upload_example_random_searcher_08131917
  adding: requirements.txt (stored 0%)
  adding: searcher.py (deflated 66%)
----------------------------------------------------------------
Built achive for upload
Archive:  ./upload_example_random_searcher_08131917.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  08-13-2021 19:17   requirements.txt
     3767  08-13-2021 19:17   searcher.py
---------                     -------
     3767                     2 files
For scoring, upload upload_example_random_searcher_08131917.zip at address:
https://algo.browser.qq.com/

