Spatially-Correlative Loss

arXiv | website


We provide the PyTorch implementation of "The Spatially-Correlative Loss for Various Image Translation Tasks". Based on the inherent self-similarity of objects, we propose a new structure-preserving loss for one-sided unsupervised I2I networks. The new loss deals only with the spatial relationships among repeated signals, regardless of their original absolute values.

The Spatially-Correlative Loss for Various Image Translation Tasks
Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai
NTU and Monash University
In CVPR 2021

ToDo

  • A simple example of using the proposed loss (an unofficial sketch is given below)
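
To make the idea concrete before the official example lands, here is a minimal, unofficial sketch (it is not the repository's implementation; the function names, the 7x7 patch size, and the L1 comparison are our assumptions): compute a local self-similarity map for the source features and for the translated features, then penalize the difference between the two maps.

import torch
import torch.nn.functional as F

def spatial_similarity(feat, patch_size=7, use_norm=True):
    """Similarity between each location's feature and every feature in its
    local patch_size x patch_size neighbourhood."""
    b, c, h, w = feat.shape
    if use_norm:
        feat = F.normalize(feat, dim=1)  # cosine similarity instead of a raw dot-product score
    # gather local neighbourhoods: (b, c * k*k, h*w), then expose the patch axis
    patches = F.unfold(feat, kernel_size=patch_size, padding=patch_size // 2)
    patches = patches.view(b, c, patch_size * patch_size, h * w)
    query = feat.view(b, c, 1, h * w)
    return (query * patches).sum(dim=1)  # (b, k*k, h*w) self-similarity map

def spatially_correlative_loss(feat_src, feat_trans):
    """Compare the self-similarity structure of the source and translated
    features; absolute feature values are never compared directly."""
    return F.l1_loss(spatial_similarity(feat_src), spatial_similarity(feat_trans))

# toy usage with features from any encoder
f_x = torch.randn(2, 256, 64, 64)  # features of the input image
f_y = torch.randn(2, 256, 64, 64)  # features of the translated image
loss = spatially_correlative_loss(f_x, f_y)

Because only similarity maps are compared, the loss is agnostic to appearance changes between domains and penalizes structural changes only.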

Example Results

Unpaired Image-to-Image Translation

Single Image Translation

Getting Started

Installation

This code was tested with PyTorch 1.7.0, CUDA 10.2, and Python 3.7.

  • Install the Python dependencies:
pip install visdom dominate
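
If you are starting from a clean environment, the tested PyTorch version can be pinned explicitly (the torchvision pairing is our assumption; choose the build matching your CUDA 10.2 setup):

pip install torch==1.7.0 torchvision==0.8.1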
  • Clone this repo:
git clone https://github.com/lyndonzheng/F-LSeSim
cd F-LSeSim

Please refer to the original CUT and CycleGAN to download datasets and learn how to create your own datasets.
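
For reference, unpaired datasets follow the CycleGAN/CUT folder convention, with one subfolder per domain and split (the dataset name here is illustrative):

datasets/horse2zebra/
    trainA/   # training images from the source domain
    trainB/   # training images from the target domain
    testA/    # test images from the source domain
    testB/    # test images from the target domain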

Training

  • Train the single-modal I2I translation model:
sh ./scripts/train_sc.sh 
  • Set --use_norm for a cosine similarity map; the default similarity is a dot-product attention score. Set --learned_attn and --augment for the learned self-similarity (an example invocation is given after this list).

  • To view training results and loss plots, run python -m visdom.server and open the URL http://localhost:port in your browser.

  • Trained models will be saved under the checkpoints folder.

  • More training options can be found in the options folder.

  • Train the single-image translation model:

sh ./scripts/train_sinsc.sh 
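
For the single-modal model, a direct invocation might look like the following (train.py and the --dataroot/--name options follow the CUT/CycleGAN codebase this repository builds on; the dataset and experiment names are illustrative, and scripts/train_sc.sh shows the exact argument set used in the paper):

python train.py --dataroot ./datasets/horse2zebra --name horse2zebra_sc --use_norm --learned_attn --augment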

As the multi-modal I2I translation model was built on MUNIT, we do not plan to merge that code into this repository. If you wish to obtain multi-modal results, please contact us at chuanxia001@e.ntu.edu.sg.

Testing

  • Test the single-modal I2I translation model:
sh ./scripts/test_sc.sh
  • Test the single-image translation model:
sh ./scripts/test_sinsc.sh
  • Test the FID score for all training epochs:
sh ./scripts/test_fid.sh
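
The script above sweeps all saved epochs. To compute a single FID between two image folders, the pytorch-fid tool credited in the acknowledgments can also be run directly (install it with pip install pytorch-fid if it is not already present):

python -m pytorch_fid path/to/real_images path/to/generated_images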

Pretrained Models

Download the pre-trained models (will be released soon) using the following links and put them under the checkpoints/ directory.

Citation

@inproceedings{zheng2021spatiallycorrelative,
  title={The Spatially-Correlative Loss for Various Image Translation Tasks},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Acknowledgments

Our code is developed based on CUT and CycleGAN. We also thank pytorch-fid for FID computation, LPIPS for diversity score, and D&C for density and coverage evaluation.
