GLSTR (Global-Local Saliency Transformer)

This is the official implementation of the paper "Unifying Global-Local Representations in Salient Object Detection with Transformer" by Sucheng Ren, Qiang Wen, Nanxuan Zhao, Guoqiang Han, and Shengfeng He.

Prerequisites

The whole training process can be done on eight RTX 2080 Ti GPUs or four RTX 3090 GPUs.

  • PyTorch 1.6

Datasets

Training Set

We use the training set of DUTS (DUTS-TR) to train our model.

/path/to/DUTS-TR/
   img/
      img1.jpg
   label/
      label1.png
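
For reference, here is a minimal PyTorch Dataset sketch for this layout. It is an illustration, not code from this repository: the DUTSTrainSet name and the joint_transform hook are hypothetical, and it assumes each image in img/ has a mask with the same base name in label/.

import os
from PIL import Image
from torch.utils.data import Dataset

class DUTSTrainSet(Dataset):
    """Pair each image in img/ with the same-named mask in label/ (hypothetical helper)."""

    def __init__(self, root, joint_transform=None):
        self.img_dir = os.path.join(root, "img")
        self.label_dir = os.path.join(root, "label")
        # Assumes every image has a mask with the same base name.
        self.names = sorted(os.path.splitext(f)[0] for f in os.listdir(self.img_dir))
        self.joint_transform = joint_transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.img_dir, name + ".jpg")).convert("RGB")
        label = Image.open(os.path.join(self.label_dir, name + ".png")).convert("L")
        if self.joint_transform is not None:
            image, label = self.joint_transform(image, label)
        return image, label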

Testing Set

We test our model on the testing sets of DUTS (DUTS-TE), ECSSD, HKU-IS, PASCAL-S, DUT-OMRON, and SOD.

Training

Download the transformer backbone pretrained on ImageNet.

# set the paths to the training data and the pretrained backbone in train.sh
bash train.sh
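
If you need to load or inspect the pretrained backbone weights yourself, the sketch below shows a generic way to do it in PyTorch. It is an assumption, not this repository's code: train.sh handles this step in the actual pipeline, and the function name and checkpoint layout are hypothetical.

import torch
import torch.nn as nn

def load_pretrained_backbone(backbone: nn.Module, ckpt_path: str) -> nn.Module:
    """Load ImageNet-pretrained weights into a transformer backbone.

    Hypothetical helper; strict=False tolerates classification-head keys
    that a detection model does not use.
    """
    state_dict = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a "model" or "state_dict" key.
    for key in ("model", "state_dict"):
        if isinstance(state_dict, dict) and key in state_dict:
            state_dict = state_dict[key]
            break
    incompatible = backbone.load_state_dict(state_dict, strict=False)
    print("missing keys:", len(incompatible.missing_keys))
    print("unexpected keys:", len(incompatible.unexpected_keys))
    return backbone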

Testing

Download the pretrained model from Baidu Pan (code: uo0a) or Google Drive, and put it in ./ckpt/.

python test.py
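
Testing produces one saliency map per input image. A minimal sketch of what such an inference loop looks like in PyTorch; this is not the contents of test.py, and the function name, loader format, and output convention are all assumptions:

import os
import torch
from torchvision.utils import save_image

@torch.no_grad()
def predict_saliency(model, loader, out_dir, device="cuda"):
    """Save one grayscale saliency map per test image.

    Hypothetical sketch: assumes the loader yields (image, name) pairs with
    batch size 1 and the model outputs a single-channel logit map.
    """
    os.makedirs(out_dir, exist_ok=True)
    model.eval().to(device)
    for image, name in loader:
        pred = torch.sigmoid(model(image.to(device)))  # logits -> [0, 1]
        save_image(pred, os.path.join(out_dir, name[0] + ".png"))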

Evaluation

The precomputed saliency maps (DUTS-TE, ECSSD, HKU-IS, PASCAL-S, DUT-OMRON, and SOD) can be found at Baidu Pan (code: uo0a) or Google Drive.

After the paper submission, we retrained the model and the performance improved. Feel free to use either the results reported in the paper or the precomputed saliency maps.
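
If you want to score the precomputed maps yourself, below is a minimal sketch of two standard SOD metrics, MAE and adaptive-threshold F-measure. The function names are hypothetical, and the beta^2 = 0.3 / twice-the-mean threshold follows common convention in the SOD literature rather than necessarily this paper's exact protocol:

import numpy as np
from PIL import Image

def load_map(path):
    """Read a grayscale map and rescale it to [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0

def mae(pred, gt):
    """Mean Absolute Error between a saliency map and its ground truth."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3):
    """Adaptive-threshold F-measure with beta^2 = 0.3.

    Binarizes the prediction at twice its mean value, a common convention
    in the SOD literature.
    """
    binary = pred >= min(2.0 * pred.mean(), 1.0)
    gt_bin = gt >= 0.5
    tp = np.logical_and(binary, gt_bin).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt_bin.sum() + 1e-8)
    return (1.0 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)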

Contact

If you have any questions, feel free to email Sucheng Ren :) (oliverrensu@gmail.com)

Citation

Please cite our paper if you find the code or the paper helpful.

@article{ren2021unifying,
  title={Unifying Global-Local Representations in Salient Object Detection with Transformer},
  author={Ren, Sucheng and Wen, Qiang and Zhao, Nanxuan and Han, Guoqiang and He, Shengfeng},
  journal={arXiv preprint arXiv:2108.02759},
  year={2021}
}
