- Please put the original data in
Data Augmentation Scheme
- Complex numbers are special: a+bj has the same squared similarity as -a-bj, -b+aj, and b-aj, i.e. multiplying by ±1 or ±j preserves the score.
- With the strategy above, you can quadruple the amount of training data compared to the raw dataset.
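The invariance behind this quadrupling can be checked numerically. A minimal sketch, assuming the standard squared-cosine-similarity score used in CSI feedback; the vector size and the `sq_similarity` helper are illustrative, not the repo's actual evaluation code:

```python
import numpy as np

def sq_similarity(h, h_hat):
    """Squared cosine similarity |h^H h_hat|^2 / (|h|^2 |h_hat|^2)."""
    num = np.abs(np.vdot(h, h_hat)) ** 2  # vdot conjugates the first argument
    return num / (np.linalg.norm(h) ** 2 * np.linalg.norm(h_hat) ** 2)

rng = np.random.default_rng(0)
h = rng.standard_normal(32) + 1j * rng.standard_normal(32)      # a channel vector
h_hat = h + 0.1 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

base = sq_similarity(h, h_hat)
# The four augmented copies h, -h, j*h, -j*h all score identically,
# so each one is a valid extra training sample.
for c in (1, -1, 1j, -1j):
    assert np.isclose(sq_similarity(c * h, c * h_hat), base)
```

Because the score only depends on |h^H ĥ|, any unit-modulus scalar factor cancels out.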
- We randomly scale the data by a factor drawn from [0.8, 1.2] and add Gaussian noise with mean 0 and std 1e-4.
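The scaling-plus-noise step above can be sketched as follows; the batch shape and the `augment` function name are illustrative assumptions, not the repo's API:

```python
import numpy as np

def augment(batch, rng):
    """Random scaling in [0.8, 1.2] plus additive Gaussian noise
    (mean 0, std 1e-4), as described above."""
    scale = rng.uniform(0.8, 1.2)
    noise = rng.normal(0.0, 1e-4, size=batch.shape)
    return batch * scale + noise

rng = np.random.default_rng(42)
x = rng.standard_normal((8, 2, 16, 32))  # e.g. (batch, real/imag, H, W)
y = augment(x, rng)
assert y.shape == x.shape
```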
- Autoencoder with reconstruction loss.
- ResNet18 as an Encoder.
- 3D Conv as a Decoder.
- Position Attention Module and Channel Attention Module are important.
- Normalization such as BatchNorm2d after the decoder is important.
- Latent Quantization.
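The exact quantizer is not specified here; a common choice in CSI feedback pipelines, sketched below as an assumption, is uniform B-bit quantization of a latent constrained to [0, 1]:

```python
import numpy as np

def quantize(z, bits=4):
    """Uniform quantizer: clip the latent to [0, 1] and snap it to
    2**bits - 1 evenly spaced steps (a stand-in for the repo's scheme)."""
    levels = 2 ** bits - 1
    return np.round(np.clip(z, 0.0, 1.0) * levels) / levels

z = np.array([0.0, 0.12, 0.5, 0.97, 1.2])
zq = quantize(z, bits=4)  # values outside [0, 1] are clipped first
assert zq.min() >= 0.0 and zq.max() <= 1.0
```

At training time this step is usually bypassed or approximated with a straight-through gradient, since rounding itself is non-differentiable.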
We provide several pretrained models in the folder of
- Sim : similarity score tested on the raw data.
- Multi : multi score tested on the raw data.
- Score : tested on the local raw data.
- Feel free to use the pretrained weights or to train from scratch.
- Modify train.py; you may need to choose a suitable GPU id.
- Online validation: only the models with the best scores so far are saved.
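The best-so-far checkpointing logic amounts to the small sketch below; `maybe_save` and the `save_fn` callback are hypothetical placeholders, not names from train.py:

```python
best_score = float("-inf")

def maybe_save(score, save_fn):
    """Invoke save_fn only when the validation score improves on the best so far."""
    global best_score
    if score > best_score:
        best_score = score
        save_fn()
        return True
    return False

# Only the first and third validations trigger a save.
saved = [maybe_save(s, lambda: None) for s in (0.70, 0.68, 0.72)]
# saved == [True, False, True]
```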
- Hints: a smaller batch size may result in a higher similarity score and a higher multi score.
- Epochs: we performed no ablation study on this parameter; you can just let it run.
- Benchmark : data1: local score approx 0.82~0.83
- Benchmark : data2: local score approx 0.76~0.77
- We use AdaBoost-style weights to ensemble several models for a performance gain.
- Without model ensembles, you can still easily achieve an online score of up to 0.72.
- You can simply use a single model without ensembles, which is much easier.
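If you do ensemble, the combination reduces to a weighted average of per-model reconstructions; the sketch below assumes the AdaBoost-style weights are already computed per model:

```python
import numpy as np

def ensemble(preds, w):
    """preds: (n_models, ...) stacked reconstructions; w: (n_models,) weights.
    Returns the normalized weighted sum over the model axis."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                        # normalize the model weights
    return np.tensordot(w, preds, axes=1)  # contract over the model axis

preds = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
out = ensemble(preds, [1.0, 1.0])  # equal weights -> plain mean = 2.0
assert np.allclose(out, 2.0)
```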
- Deep Residual Learning for Image Recognition (CVPR, 2016)
- Dual Attention Network for Scene Segmentation (CVPR, 2019)
- Deep Learning-based Implicit CSI Feedback in Massive MIMO (IEEE Transactions on Communications, 2021)