- 2022.07.03 : Our paper has been accepted by ECCV 2022.
- 2022.01.25 : Our work (https://arxiv.org/abs/2201.10419) is on arXiv.
Snapshot compressive imaging (SCI) records 3D information in a single 2D measurement, from which a reconstruction algorithm recovers the original 3D information. The reconstruction algorithm therefore plays a vital role in SCI. Recently, deep learning algorithms have shown outstanding ability, outperforming traditional algorithms, so improving their reconstruction accuracy is an important topic for SCI. Besides, deep learning algorithms are usually limited in scalability: a well-trained model generally cannot be applied to a new system without retraining. To address these problems, we develop ensemble learning priors to further improve reconstruction accuracy and propose scalable learning to give deep learning the same scalability as traditional algorithms. Our algorithm achieves state-of-the-art results, outperforming existing algorithms. Extensive results on both simulation and real datasets demonstrate the superiority of the proposed algorithm.
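As a minimal illustration of the video SCI forward model described above (a sketch with assumed notation and shapes, not the code from this repository): B video frames are modulated by per-frame binary masks and summed into a single 2D snapshot, which the reconstruction network must then invert.

```python
import numpy as np

# Hypothetical video SCI forward model: Y = sum_t (M_t * X_t).
# Shapes and mask statistics are illustrative, not from the paper.
rng = np.random.default_rng(0)
B, H, W = 8, 256, 256               # compression ratio (frames) and spatial size
X = rng.random((B, H, W))           # 3D scene: B video frames
M = rng.integers(0, 2, (B, H, W))   # per-frame binary modulation masks
Y = (M * X).sum(axis=0)             # single 2D snapshot measurement

assert Y.shape == (H, W)            # B frames compressed into one 2D frame
```

The reconstruction task is the ill-posed inverse problem of recovering the B frames X from the single measurement Y given the known masks M, which is what the deep unfolding network in this repository learns to solve.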
$ pip install torch==1.9.0
$ pip install tqdm
$ pip install wandb
$ pip install scipy

Note: random and argparse are part of the Python standard library and do not need to be installed with pip.
Download our trained model from Google Drive and place it under the log_dir (your path) folder.
cd ./ELP_Unfolding
python test.py or bash test.sh
Download our trained model from Google Drive and place it under the log_dir (your path) folder.
cd ./ELP_Unfolding/scalable
python test.py or bash test.sh
Download our trained model from Google Drive and place it under the traindata folder.
cd ./ELP_Unfolding
python train.py or bash train.sh
The default setting is for a GPU with limited memory. To obtain the same trained model as the one on Google Drive, first run the pretraining stage:
cd ./ELP_Unfolding
python train.py --init_channels 512 --pres_channels 512 --epochs 200 --lr 1e-4 --priors 1
Then run the second stage:
cd ./ELP_Unfolding
python train.py --init_channels 512 --pres_channels 512 --epochs 320 --lr 2e-5 --priors 6 --resume_training --use_first_stage
cd ./ELP_Unfolding/scalable
python train.py or bash train.sh
The default setting is for a GPU with limited memory. To obtain the same trained model as the one on Google Drive, first run the pretraining stage:
cd ./ELP_Unfolding/scalable
python train.py --init_channels 512 --pres_channels 512 --epochs 200 --lr 1e-4 --priors 1
Then run the second stage:
cd ./ELP_Unfolding/scalable
python train.py --init_channels 512 --pres_channels 512 --epochs 320 --lr 2e-5 --priors 6 --resume_training --use_first_stage
If you find the code helpful in your research or work, please cite the following paper.
@inproceedings{yang2022ensemble,
  title={Ensemble Learning Priors Driven Deep Unfolding for Scalable Video Snapshot Compressive Imaging},
  author={Chengshuai Yang and Shiyu Zhang and Xin Yuan},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022}
}