Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt

Overview

Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt. This is done by training a model that takes a text prompt as input and returns the corresponding VQGAN latent, which is then decoded into an RGB image. The model is trained on a dataset of text prompts and can be used on unseen text prompts. The loss function minimizes the distance between the CLIP features of the generated image and the CLIP features of the input text. Additionally, a diversity loss can be used to increase the diversity of the images generated for the same prompt.
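
To make the objective concrete, here is a minimal sketch of that loss, with generic stand-ins: clip_model for the CLIP encoder, vqgan for the VQGAN decoder, and net for the feed-forward text-to-latent mapper. This is not the repository's actual training code, and CLIP's input normalization is omitted for brevity.

import torch
import torch.nn.functional as F

def training_loss(tokens, clip_model, vqgan, net, diversity_weight=0.0):
    # CLIP features of the input prompts
    text_feats = clip_model.encode_text(tokens).float()
    # Feed-forward prediction of the VQGAN latent, decoded into RGB images
    z = net(text_feats)
    images = vqgan.decode(z)
    # Resize for the CLIP image encoder and extract image features
    images = F.interpolate(images, size=(224, 224), mode="bilinear", align_corners=False)
    image_feats = clip_model.encode_image(images).float()
    # Main loss: distance between generated-image features and input-text features
    loss = (1 - F.cosine_similarity(image_feats, text_feats, dim=-1)).mean()
    # Optional diversity term (illustrative form): reward pairwise distance between latents
    if diversity_weight > 0 and z.size(0) > 1:
        loss = loss - diversity_weight * torch.pdist(z.flatten(1)).mean()
    return loss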

Open In Colab

How to install?

Download the ImageNet VQGAN with a 16384-entry codebook (f=16)


Install dependencies.

conda

conda create -n ff_vqgan_clip_env python=3.8
conda activate ff_vqgan_clip_env
# Install pytorch/torchvision - See https://pytorch.org/get-started/locally/ for more info.
(ff_vqgan_clip_env) conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
(ff_vqgan_clip_env) pip install -r requirements.txt

pip/venv

conda deactivate # Make sure to use your system python3
python3 -m venv ./ff_vqgan_clip_venv  # venv ships with Python 3, no separate install needed
source ./ff_vqgan_clip_venv/bin/activate
(ff_vqgan_clip_venv) pip install -r requirements.txt

How to use?

(Optional) Pre-tokenize Text

(ff_vqgan_clip_venv) python main.py tokenize data/list_of_captions.txt cembeds 128

Train

Modify configs/example.yaml as needed.

(ff_vqgan_clip_venv) python main.py train configs/example.yaml

Tensorboard:

Loss is logged for TensorBoard.

# in a new terminal/session
(ff_vqgan_clip_venv) pip install tensorboard
(ff_vqgan_clip_venv) tensorboard --logdir results

Pre-trained models

Name | Type | Size | Dataset | Link | Author
cc12m_8x128 | VitGAN | 12.1MB | Conceptual Captions 12M | Download | @mehdidc
cc12m_16x256 | VitGAN | 60.1MB | Conceptual Captions 12M | Download | @mehdidc
cc12m_32x512 | VitGAN | 408.4MB | Conceptual Captions 12M | Download | @mehdidc
cc12m_32x1024 | VitGAN | 1.55GB | Conceptual Captions 12M | Download | @mehdidc
cc12m_64x1024 | VitGAN | 3.05GB | Conceptual Captions 12M | Download | @mehdidc
bcaptmod_8x128 | VitGAN | 11.2MB | Modified blog captions | Download | @afiaka87
bcapt_16x128 | MLPMixer | 168.8MB | Blog captions | Download | @mehdidc

You can also access them from here

NB: cc12m_AxB means a model trained on Conceptual Captions 12M with depth A and hidden state dimension B; for example, cc12m_32x1024 has depth 32 and hidden dimension 1024.

After downloading a model or finishing training your own model, you can test it with new prompts, e.g.,

python -u main.py test pretrained_models/cc12m_32x1024/model.th "an armchair in the shape of an avocado"
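
To call a trained model from Python instead of through main.py, the flow mirrors training: encode the prompt with CLIP, map it to a VQGAN latent with the trained network, and decode. The sketch below makes assumptions (in particular, load_vqgan is a hypothetical placeholder, not a function provided by this repository); the supported entry point remains main.py test.

import clip
import torch
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"
perceptor, _ = clip.load("ViT-B/16", device=device, jit=False)  # use the CLIP variant the checkpoint was trained with
# Checkpoints are stored as full pickled models, so torch.load returns the mapper directly
net = torch.load("pretrained_models/cc12m_32x1024/model.th", map_location=device)
vqgan = load_vqgan("vqgan_imagenet_f16_16384").to(device)  # hypothetical VQGAN loader

toks = clip.tokenize(["an armchair in the shape of an avocado"]).to(device)
with torch.no_grad():
    H = perceptor.encode_text(toks).float()  # CLIP text features
    z = net(H)                               # predicted VQGAN latent
    img = vqgan.decode(z)                    # decoded RGB tensor, roughly in [-1, 1]
save_image((img.clamp(-1, 1) + 1) / 2, "out.png")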

You can also try it in the Colab Notebook. Using the notebook, you can generate images from pre-trained models and do interpolations between text prompts to create videos; see for instance video 1, video 2, or video 3.

Acknowledgements

Comments
  • Models are broken in the new `torch` version

    PyTorch introduced an approximate GELU, which breaks the MLP-Mixer models. The fix is to save pre-trained models as state dicts rather than complete pickled objects.
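
    For context, the difference is roughly the following (model and build_model are illustrative placeholders, not names from this repository):

    import torch

    # Saving the whole module pickles its classes/attributes, so loading can break when
    # the library changes (e.g. the newer GELU gaining an `approximate` attribute).
    torch.save(model, "model.th")
    model = torch.load("model.th")

    # Saving only the state dict stores the weights; the architecture is rebuilt in code,
    # which keeps checkpoints robust to such library changes.
    torch.save(model.state_dict(), "model_weights.th")
    model = build_model()  # re-create the architecture (placeholder)
    model.load_state_dict(torch.load("model_weights.th"))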

    opened by neverix 12
  • Allow different models in replicate.ai interface

    @CJWBW Thanks again for providing an interface to the model in replicate.ai. I would like now to allow the user to select between different models. I modified predict.py and download-weights.sh accordingly.

    I would like to update the image on https://replicate.ai/mehdidc/feed_forward_vqgan_clip/ . Is cog push r8.im/mehdidc/feed_forward_vqgan_clip the correct way to do it, or should it be done on your side? I tried the command but got "docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]." since I don't have an NVIDIA GPU on my local machine (assuming that's the reason it failed).

    opened by mehdidc 12
  • Goal?

    Hey!

    Is the idea here to use CLIP embeds through a transformer similar to alstroemeria's CLIP Decision Transformer?

    edit: https://github.com/crowsonkb/cond_transformer_2

    opened by afiaka87 10
  • Error in Load Model

    Two issues found:

    (1) A Restart Runtime occurs on !pip install requirements.txt . This, in turn, resets the current directory to /current. But even after manually updating the current directory....

    (2) Under Load Model: ImportError: /usr/local/lib/python3.7/dist-packages/torchtext/_torchtext.so: undefined symbol: _ZN3c106ivalue6Future15extractDataPtrsERKNS_6IValueE

    opened by metaphorz 9
  • Unavailable and broken links

    When I run the notebook, some links seem unavailable. I don't know why this happens, because it seems that I can manually download the files in my web browser.

    Unavailable links

    Moreover, the links in the README are broken.

    Broken links

    opened by woctezuma 7
  • Observations training with different modifying words/phrases

    Searching for a more photo-realistic output - I've found that training on certain words is likely to bias the output heavily.

    "illustration"/"cartoon" biases heavily towards a complete lack of photorealism in favor of very abstract interpretations that are often too simple in fact.

    Here is an example from training on the blog post captions with the word "minimalist" prepended to each caption (and with all mannequin captions removed, which make up about 1/16 of all the captions):

    progress_0000019700

    In the EleutherAI discord, a user @kingdomakrillic posted a very useful link https://imgur.com/a/SnSIQRu showing the effect a starting caption/modifier caption has on various other words when generating an image using the VQGAN+CLIP method.

    With those captions, I decided to randomly prepend the modifying words/phrases which produced a (subjectively) photo-realistic output to the blog post captions (sketched briefly after the list below):

            "8k resolution",
            "Flickr",
            "Ambient occlusion",
            "filmic",
            "global illumination",
            "Photo taken with Nikon D750",
            "DSLR",
            "20 megapixels",
            "photo taken with Ektachrome",
            "photo taken with Fugifilm Superia",
            "photo taken with Provia",
            "criterion collection",
            "National Geographic photo ",
            "Associated Press photo",
            "detailed",
            "shot on 70mm",
            "3840x2160",
            "ISO 200",
            "Tri-X 400 TX",
            "Ilford HPS",
            "matte photo",
            "Kodak Gold 200",
            "Kodak Ektar",
            "Kodak Portra",
            "geometric",
    
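    A minimal sketch of that augmentation, using a hypothetical MODIFIERS list and augment helper rather than this repository's data pipeline:

    import random

    MODIFIERS = ["8k resolution", "Flickr", "DSLR", "detailed", "Kodak Portra"]  # subset of the list above

    def augment(caption: str) -> str:
        # Prepend a randomly chosen photo-realism modifier to a training caption.
        return f"{random.choice(MODIFIERS)} {caption}"

    print(augment("photo of richmond district, san francisco"))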

    With this in place, outputs tend to be much more photorealistic (similar caption to above, less than 1 epoch trained): <|startoftext|>2 0 megapixels photo of richmond district , san francisco , from a tall vantage point in the morning <|endoftext|> progress_0000005100

    None of this is very principled however and my next attempts were indeed going to be either "add noise to the captions" or "train on image-text pairs as well" - both of which seem to be in the codebase already! So I'm going to have a try with that.

    In the meantime, here is a checkpoint from the first round of captions (prepend "minimalist" to every blog caption, removing all captions containing "mannequin"). I trained it using the VitGAN for 8 epochs, with dim 128, depth 8, ViT-B/16, and 32 cutn. The loss was perhaps still going down at this point, but with very diminished returns.

    model.th.zip

    opened by afiaka87 6
  • Support new CLIP models (back to old install)

    Wasn't expecting an update from OpenAI so soon, but I think we have to do this (unfortunately) again until rom1504's branch for the clip-anytorch package is even with main.

    opened by afiaka87 4
  • VQGAN - blended models

    I want to take a film (say The Shining):

    • caption it using Amazon AI label detection (maybe 1 in every 100 frames)
    • throw these image + text pairs into training
    • then take the trained model and have the neural nets spit out something in the style of the movie....

    Is it possible? In the nerdyrodent/VQGAN-CLIP repo - there's a style transfer

    • but I'm inquiring how to merge the model layers so that the content is skewed to a certain style / aesthetic.

    @norod + @justinpinkney were successful in blending models together (the FFHQ + cartoon designs); could it be achieved in this VQGAN domain? They kind of perform some neural surgery / hack the layers to force the results. https://github.com/justinpinkney/toonify

    Does the VQGAN give us some access to hack these layers?

    UPDATE: @JCBrouwer seems to have combined style transfer with video here: https://github.com/JCBrouwer/maua-style

    fyi @nerdyrodent

    opened by johndpope 3
  • How to condition model output z so that it looks like it came from a standard normal distribution?

    Hi, this is a nice repo and I'm trying to reimplement something similar for StyleGAN2. Using a list of texts, I'm trying to map CLIP text embeddings to StyleGAN2 latent vectors, which are fed to the StyleGAN2 generator to generate images, and then optimize this MLP mapper model using a CLIP loss. However, I'm quickly getting blown-out images for entire batches. I suspect this is because the output of the MLP is not conditioned to look like it came from a standard normal distribution. I wonder if you could point me in the right direction on how to do this.

    opened by xiankgx 2
  • Add Docker environment & web demo

    Hey @mehdidc! 👋

    We find your model really cool: it generates images from prompts ultra fast!

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/mehdidc/feed_forward_vqgan_clip

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 1
  • How to get more variation in the null image

    I've been generating images using this model, which is delightfully fast, but I've noticed that it produces images that are all alike. I tried generating the "null" image by doing:

    H = perceptor.encode_text(toks.to(device)).float()
    z = net(0 * H)
    

    This resulted in:

    base image

    And indeed, everything I generated kind of matched that: you can see the fleshly protrusion on the left in "gold coin":

    gold-coin--0 0

    The object and matching mini-object in "tent":

    tent-0 5

    And it always seems to try to caption the image with nonsense lettering ("lion"):

    lion--0 0

    So I'm wondering if there's a way to "prime" the model and suggest it use a different zero image for each run. Is there a variable I can set, or is this deeply ingrained in training data?

    Any advice would be appreciated, thank you!

    (Apologies if this is the same as #8, but it sounded like #8 was solved by using priors which doesn't seem to help with this.)

    opened by kchodorow 0
  • training GPU configuration

    Thanks for your excellent repo.

    When training cc12m_32x1024 with type VitGAN or MLP Mixer, what kind of GPU environment do you use? A Tesla V100 with 32GB memory, or something else?

    Thanks

    opened by CrossLee1 1
  • Slow Training Speed

    Hi, first of all, great work! I really loved it. To replicate, I tried training on the Conceptual 12M dataset with the same depth and dims as the pretrained models, but training was too slow: even after 4 days it was still going through the first (or 0th) epoch. I'm training it on an NVIDIA Quadro RTX A6000, which I don't think is that slow. Any suggestions to improve the training speed? I have multi-GPU access, but it seems that isn't supported right now. Thanks!

    opened by s13kman 3
  • clarifying differences between available models

    Hi @mehdidc 👋🏼 I'm a new team member at @replicate.

    I was trying out your model on replicate.ai and noticed that the names of the models are a bit cryptic, so it's hard to know what differences to expect when using each:

    Screen Shot 2021-09-23 at 6 21 40 PM

    Here's where those are declared:

    https://github.com/mehdidc/feed_forward_vqgan_clip/blob/dd640c0ee5f023ddf83379e6b3906529511ce025/predict.py#L10-L14

    Looking at the source for cog's Input class it looks like options can be a list of anything:

    options: Optional[List[Any]] = None
    

    I'm not sure if this is right, but maybe this means that each model could be declared as a tuple with an accompanying label:

    MODELS = [
        ("cc12m_32x1024_vitgan_v0.1.th", "This model does x"),
        ("cc12m_32x1024_vitgan_v0.2.th", "This model does y"),
        ("cc12m_32x1024_mlp_mixer_v0.2.th", "This model does z"),
    ]
    

    We could then display those labels on the model form on replicate.ai to make the available options more clear to users.

    Curious to hear your thoughts!

    cc @CJWBW @bfirsh @andreasjansson

    opened by zeke 2
  • How to improve so we could get results closer to the "regular" VQGAN+CLIP?

    Hi! I really love this idea and think that this concept solves the main bottleneck of the current VQGAN+CLIP approach, which is the optimisation for each prompt. I love how instantaneous this approach is at generating new images. However, results with the different CC12M or blog-captions models fall short in comparison to the most recent VQGAN+CLIP optimisation approaches.

    I am wondering where it could potentially be improved. I guess one thing could be trying to incorporate the MSE-regularised and z+quantize variants of the most recent VQGAN+CLIP approaches. The other is that I am wondering whether a bigger training dataset would improve the quality. Would it make sense to train it on ImageNet captions or maybe even a bigger 100M+ caption dataset? (maybe [email protected]?)

    As you can see, I can't actually contribute much (but I could help with a bigger dataset training effort) but I'm cheering for this project to not die!

    opened by apolinario 2
  • Finetuning CLIP to improve domain-specific performance

    It's quite easy to finetune one of the OpenAI CLIP checkpoints with this codebase:

    https://github.com/Zasder3/train-CLIP-FT

    Uses pytorch-lightning. May be worth pursuing

    opened by afiaka87 1