
Pseudo Numerical Methods for Diffusion Models on Manifolds (PNDM, PLMS | ICLR2022)


This repo is the official PyTorch implementation of the paper Pseudo Numerical Methods for Diffusion Models on Manifolds (PNDM, PLMS | ICLR2022) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao (Zhejiang University).

What does this code do?

This code is not only the official implementation of PNDM, but also a generic framework for DDIM-like models; the methods, schedules and models it supports are listed in the table below.

Structure

This code contains three main objects: method, schedule and model. The following table shows the options supported for each object and the role it plays.

| Object   | Option                        | Role                                          |
|----------|-------------------------------|-----------------------------------------------|
| method   | DDIM, S-PNDM, F-PNDM, FON, PF | the numerical method used to generate samples |
| schedule | linear, quad, cosine          | the schedule of adding noise to images        |
| model    | DDIM, iDDPM, PF, PF_deep      | the neural network used to fit noise          |

All of them can be combined at will, so this code provides at least 5×3×4 = 60 choices for generating samples.
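As a quick illustration, here is a minimal Python sketch (not part of this repository) that enumerates these combinations from the table above:

from itertools import product

# the options listed in the table above
methods = ["DDIM", "S-PNDM", "F-PNDM", "FON", "PF"]
schedules = ["linear", "quad", "cosine"]
models = ["DDIM", "iDDPM", "PF", "PF_deep"]

# every (method, schedule, model) combination can be used for sampling
combinations = list(product(methods, schedules, models))
print(len(combinations))  # 5 * 3 * 4 = 60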

Integration with 🤗 Diffusers library

PNDM is now also available in 🧨 Diffusers and accessible via the PNDMPipeline. Diffusers allows you to test PNDM in PyTorch with just a couple of lines of code.

You can install diffusers as follows:

pip install diffusers torch accelerate

And then try out the sampler/scheduler with just a couple of lines of code:

from diffusers import PNDMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler
pndm = PNDMPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
image = pndm(num_inference_steps=50).images[0]

# save image
image.save("pndm_generated_image.png")

The PNDM scheduler can also be used with more powerful diffusion models such as Stable Diffusion.

You simply need to accept the license on the Hub, log in with huggingface-cli login and install transformers:

pip install transformers

Then you can run:

from diffusers import StableDiffusionPipeline, PNDMScheduler

# load the PNDM scheduler and plug it into the Stable Diffusion pipeline
pndm = PNDMScheduler.from_config("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", scheduler=pndm)

# generate and save an image
image = pipeline("An astronaut riding a horse.").images[0]
image.save("astronaut_riding_a_horse.png")

How to run the code

Dependencies

Run the following to install the Python packages required by our code.

pip install -r requirements.txt

Tip: mpi4py can speed up the generation process by using multiple GPUs. It is not necessary and can be removed from the requirements freely.

Usage

Evaluate our models through main.py.

python main.py --runner sample --method F-PNDM --sample_speed 50 --device cuda --config ddim_cifar10.yml --image_path temp/results --model_path temp/models/ddim/ema_cifar10.ckpt
  • runner (train|sample): choose the mode of the runner
  • method (DDIM|FON|S-PNDM|F-PNDM|PF): choose the numerical method
  • sample_speed: control the total number of generation steps
  • device (cpu|cuda:0): choose the device to use
  • config: choose the config file
  • image_path: choose the path to save images
  • model_path: choose the path of the model checkpoint

Train our models through main.py.

python main.py --runner train --device cuda --config ddim_cifar10.yml --train_path temp/train
  • train_path: choose the path to save the training status

Checkpoints & statistics

All checkpoints of models and precalculated statistics for FID are provided in this Google Drive.

References

If you find the code useful for your research, please consider citing:

@inproceedings{liu2022pseudo,
    title={Pseudo Numerical Methods for Diffusion Models on Manifolds},
    author={Luping Liu and Yi Ren and Zhijie Lin and Zhou Zhao},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=PlKWVd2yBkY}
}

This work is built upon some previous papers which might also interest you:

  • Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33 (2020): 6840-6851.
  • Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. International Conference on Learning Representations. 2021.
  • Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. International Conference on Learning Representations. 2021.
