Feed-forward VQGAN-CLIP model that eliminates the need to optimize the VQGAN latent space for each input prompt

Overview

Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt. This is done by training a model that takes a text prompt as input and returns as output the corresponding VQGAN latent code, which is then decoded into an RGB image. The model is trained on a dataset of text prompts and can be used on unseen text prompts. The loss function minimizes the distance between the CLIP features of the generated image and the CLIP features of the input text. Additionally, a diversity loss can be used to increase the diversity of the generated images given the same prompt.
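
The objective can be sketched roughly as follows. This is an illustrative sketch only, not the repository's actual training loop: text_to_latent, vqgan, and perceptor stand in for the trained mapper, the frozen VQGAN, and the frozen CLIP model, image preprocessing/cutouts are omitted, and the noise injection and exact form of the diversity term are assumptions for illustration.

import torch
import torch.nn.functional as F

def feed_forward_clip_loss(text_tokens, text_to_latent, vqgan, perceptor,
                           div_weight=0.0, noise_scale=0.0):
    # Encode the prompt once with the frozen CLIP text encoder.
    with torch.no_grad():
        text_feats = perceptor.encode_text(text_tokens).float()        # (B, D)
    # Optional noise on the conditioning so the same prompt can yield different latents.
    h = text_feats + noise_scale * torch.randn_like(text_feats)
    z = text_to_latent(h)                  # predicted VQGAN latent
    images = vqgan.decode(z)               # decoded RGB images (CLIP preprocessing omitted)
    image_feats = perceptor.encode_image(images).float()               # (B, D)
    # CLIP loss: pull each image embedding toward its text embedding.
    clip_loss = 1 - F.cosine_similarity(image_feats, text_feats, dim=-1).mean()
    # Optional diversity term: encourage different latents across the batch.
    div_loss = -torch.cdist(z.flatten(1), z.flatten(1)).mean() if div_weight > 0 else 0.0
    return clip_loss + div_weight * div_loss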

Open In Colab

How to install?

Download the ImageNet VQGAN (f=16, 16384-entry codebook)

Links:

Install dependencies.

conda

conda create -n ff_vqgan_clip_env python=3.8
conda activate ff_vqgan_clip_env
# Install pytorch/torchvision - See https://pytorch.org/get-started/locally/ for more info.
(ff_vqgan_clip_env) conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
(ff_vqgan_clip_env) pip install -r requirements.txt

pip/venv

conda deactivate # Make sure to use your global python3
# venv is part of the Python 3 standard library; no separate install is needed
python3 -m venv ./ff_vqgan_clip_venv
source ./ff_vqgan_clip_venv/bin/activate
(ff_vqgan_clip_venv) python -m pip install -r requirements.txt

How to use?

(Optional) Pre-tokenize Text

(ff_vqgan_clip_venv) python main.py tokenize data/list_of_captions.txt cembeds 128

Train

Modify configs/example.yaml as needed.

(ff_vqgan_clip_venv) python main.py train configs/example.yaml

Tensorboard:

The training loss is logged for TensorBoard.

# in a new terminal/session
(ff_vqgan_clip_venv) pip install tensorboard
(ff_vqgan_clip_venv) tensorboard --logdir results

Pre-trained models

Name | Type | Size | Dataset | Link | Author
cc12m_8x128 | VitGAN | 12.1MB | Conceptual captions 12M | Download | @mehdidc
cc12m_16x256 | VitGAN | 60.1MB | Conceptual captions 12M | Download | @mehdidc
cc12m_32x512 | VitGAN | 408.4MB | Conceptual captions 12M | Download | @mehdidc
cc12m_32x1024 | VitGAN | 1.55GB | Conceptual captions 12M | Download | @mehdidc
cc12m_64x1024 | VitGAN | 3.05GB | Conceptual captions 12M | Download | @mehdidc
bcaptmod_8x128 | VitGAN | 11.2MB | Modified blog captions | Download | @afiaka87
bcapt_16x128 | MLPMixer | 168.8MB | Blog captions | Download | @mehdidc

You can also access them from here

NB: cc12m_AxB denotes a model trained on Conceptual Captions 12M with depth A and hidden state dimension B; e.g., cc12m_32x1024 has depth 32 and hidden dimension 1024.

After downloading a model or finishing training your own model, you can test it with new prompts, e.g.,

python -u main.py test pretrained_models/cc12m_32x1024/model.th "an armchair in the shape of an avocado"

You can also try it in the Colab Notebook. Using the notebook you can generate images from pre-trained models and interpolate between text prompts to create videos; see for instance video 1, video 2, or video 3.
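
The interpolation videos boil down to linearly blending the CLIP text embeddings of two prompts and decoding one frame per step. A rough sketch of the idea (the names net, vqgan, perceptor, and tokenize are placeholders for the loaded mapper, VQGAN, and CLIP tokenizer/model, not the notebook's exact API):

import torch

@torch.no_grad()
def interpolate_prompts(prompt_a, prompt_b, net, vqgan, perceptor, tokenize,
                        n_frames=60, device="cuda"):
    # Encode both prompts with the frozen CLIP text encoder.
    feats_a = perceptor.encode_text(tokenize([prompt_a]).to(device)).float()
    feats_b = perceptor.encode_text(tokenize([prompt_b]).to(device)).float()
    frames = []
    for t in torch.linspace(0, 1, n_frames):
        h = (1 - t) * feats_a + t * feats_b    # blend the two text embeddings
        z = net(h)                             # predicted VQGAN latent
        frames.append(vqgan.decode(z).cpu())   # decode one RGB frame per step
    return frames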

Acknowledgements

Comments
  • Models are broken in the new `torch` version


    PyTorch introduced an approximate GELU, which breaks the MLP-Mixer models. The fix is to save pre-trained models as weight dicts rather than as complete pickled objects.
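
    A sketch of that fix (illustrative only, not the repository's exact checkpointing code):

    import torch

    def save_weights(model, path):
        # Save only the weight dict, not the pickled module object, so newer
        # torch versions (e.g. with the added approximate GELU) can still load it.
        torch.save(model.state_dict(), path)

    def load_weights(model, path):
        # Rebuild the architecture in code first, then restore the weights.
        model.load_state_dict(torch.load(path, map_location="cpu"))
        return model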

    opened by neverix 12
  • Allow different models in replicate.ai interface


    @CJWBW Thanks again for providing an interface to the model on replicate.ai. I would now like to allow the user to select between different models. I modified predict.py and download-weights.sh accordingly.

    I would like to update the image on https://replicate.ai/mehdidc/feed_forward_vqgan_clip/ . Is cog push r8.im/mehdidc/feed_forward_vqgan_clip the correct way to do it, or should it be done on your side? I tried the command but got "docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]." - presumably because I don't have an NVIDIA GPU on my local machine.

    opened by mehdidc 12
  • Goal?


    Hey!

    Is the idea here to use CLIP embeds through a transformer similar to alstroemeria's CLIP Decision Transformer?

    edit: https://github.com/crowsonkb/cond_transformer_2

    opened by afiaka87 10
  • Error in Load Model


    Two issues found:

    (1) A Restart Runtime occurs on !pip install requirements.txt. This, in turn, resets the current directory to /current. But even after manually updating the current directory...

    (2) Under Load Model: ImportError: /usr/local/lib/python3.7/dist-packages/torchtext/_torchtext.so: undefined symbol: _ZN3c106ivalue6Future15extractDataPtrsERKNS_6IValueE

    opened by metaphorz 9
  • Unavailable and broken links


    When I run the notebook, some links seem unavailable. I don't know why this happens, because it seems that I can manually download the files in my web browser.

    Unavailable links

    Moreover, the links in the README are broken.

    Broken links

    opened by woctezuma 7
  • Observations training with different modifying words/phrases


    Searching for a more photo-realistic output - I've found that training on certain words is likely to bias the output heavily.

    "illustration"/"cartoon" biases heavily towards a complete lack of photorealism in favor of very abstract interpretations that are often too simple in fact.

    Here is an example from training on the blog post captions with the word "minimalist" prepended to each caption (and with all mannequin captions removed, which make up about 1/16 of all the captions):

    progress_0000019700

    In the EleutherAI Discord, a user @kingdomakrillic posted a very useful link https://imgur.com/a/SnSIQRu showing the effect a starting caption/modifier caption has on various other words when generating an image using the VQGAN+CLIP method.

    With those captions in hand, I decided to randomly prepend to the blog post captions all the modifying words/phrases that produced a (subjectively) photo-realistic output; a sketch of this prepending follows the list below.

            "8k resolution",
            "Flickr",
            "Ambient occlusion",
            "filmic",
            "global illumination",
            "Photo taken with Nikon D750",
            "DSLR",
            "20 megapixels",
            "photo taken with Ektachrome",
            "photo taken with Fugifilm Superia",
            "photo taken with Provia",
            "criterion collection",
            "National Geographic photo ",
            "Associated Press photo",
            "detailed",
            "shot on 70mm",
            "3840x2160",
            "ISO 200",
            "Tri-X 400 TX",
            "Ilford HPS",
            "matte photo",
            "Kodak Gold 200",
            "Kodak Ektar",
            "Kodak Portra",
            "geometric",
    

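    A minimal sketch of that random prepending (a hypothetical helper, not the code actually used):

    import random

    MODIFIERS = ["8k resolution", "Flickr", "DSLR", "photo taken with Provia"]  # subset of the list above

    def add_modifier(caption):
        # Randomly prepend one photo-realism modifier to a caption.
        return f"{random.choice(MODIFIERS)} {caption}"

    captions = [add_modifier(line.strip()) for line in open("data/list_of_captions.txt")]
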
    With this in place, outputs tend to be much more photorealistic (similar caption to above, less than 1 epoch trained): <|startoftext|>2 0 megapixels photo of richmond district , san francisco , from a tall vantage point in the morning <|endoftext|> progress_0000005100

    None of this is very principled, however, and my next attempts were going to be either "add noise to the captions" or "train on image-text pairs as well" - both of which seem to be in the codebase already! So I'm going to have a try with that.

    In the meantime, here is a checkpoint from the first round of captions (prepending "minimalist" to every blog caption and removing all captions containing "mannequin"). I trained it using the VitGAN for 8 epochs, 128 dim, 8 depth, ViT-B16, 32 cutn. The loss was perhaps still going down at this point, but with very diminished returns.

    model.th.zip

    opened by afiaka87 6
  • Support new CLIP models (back to old install)


    Wasn't expecting an update from openai so soon but I think we have to do this (unfortunately) again until rom1504's branch for the clip-anytorch package is even with main.

    opened by afiaka87 4
  • VQGAN - blended models


    I want to take a film (say The Shining):

    • caption it using amazon ai label detection (maybe 1 every 100 frames)
    • throw these image + text pairs into training -
    • then take the trained model and have the neural nets spit out something in the style of the movie....

    Is it possible? In the nerdyrodent/VQGAN-CLIP repo - there's a style transfer

    • but I'm inquiring how to merge the model layers so that the content is skewed to a certain style / aesthetic.

    @norod + @justinpinkney were successful in blending models together (FFHQ + cartoon designs) - could that be achieved in this VQGAN domain? They essentially perform some neural surgery / hack the layers to force the results. https://github.com/justinpinkney/toonify

    Does the VQGAN give us some access to hack these layers?

    UPDATE: @JCBrouwer seems to have combined style transfer with video here: https://github.com/JCBrouwer/maua-style

    fyi @nerdyrodent

    opened by johndpope 3
  • How to condition model output z so that it looks like it came from a standard normal distribution?


    Hi, this is a nice repo and I'm trying to reimplement something similar for StyleGAN2. Using a list of texts, I'm trying to map CLIP text embeddings to StyleGAN2 latent vectors, which are fed to the StyleGAN2 generator to produce images, and then optimize this MLP mapper model using a CLIP loss. However, I'm quickly getting blown-out images for entire batches. I suspect this is because the output of the MLP is not conditioned to look like it came from a standard normal distribution. I wonder if you could point me in the right direction on how to do this.
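
    One general trick (a sketch of the idea only, not something from this repo) is to standardize the mapper's output explicitly, or to add a soft penalty that pulls its batch statistics toward zero mean and unit variance:

    import torch

    def standardize(w, eps=1e-8):
        # Force each predicted latent to zero mean / unit variance so it
        # resembles a draw from a standard normal before entering the generator.
        return (w - w.mean(dim=-1, keepdim=True)) / (w.std(dim=-1, keepdim=True) + eps)

    def moment_penalty(w):
        # Softer alternative: penalize deviation of the batch statistics from N(0, 1).
        return w.mean().pow(2) + (w.std() - 1.0).pow(2)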

    opened by xiankgx 2
  • Add Docker environment & web demo


    Hey @mehdidc! πŸ‘‹

    We find your model so cool that it generates images from prompts ultra fast!

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/mehdidc/feed_forward_vqgan_clip

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 1
  • How to get more variation in the null image


    I've been generating images using this model, which is delightfully fast, but I've noticed that it produces images that are all alike. I tried generating the "null" image by doing:

    H = perceptor.encode_text(toks.to(device)).float()
    z = net(0 * H)
    

    This resulted in:

    base image

    And indeed, everything I generated kind of matched that: you can see the fleshly protrusion on the left in "gold coin":

    gold-coin--0 0

    The object and matching mini-object in "tent":

    tent-0 5

    And it always seems to try to caption the image with nonsense lettering ("lion"):

    lion--0 0

    So I'm wondering if there's a way to "prime" the model and suggest it use a different zero image for each run. Is there a variable I can set, or is this deeply ingrained in training data?

    Any advice would be appreciated, thank you!

    (Apologies if this is the same as #8, but it sounded like #8 was solved by using priors which doesn't seem to help with this.)
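
    (One thing to try, sketched here only as a suggestion rather than a supported feature: perturb the text embedding with a small amount of noise before mapping it, so each run starts from a slightly different latent.)

    # Hypothetical variation on the snippet above; the noise scale is a knob to tune.
    H = perceptor.encode_text(toks.to(device)).float()
    H = H + 0.1 * torch.randn_like(H)   # small perturbation of the conditioning
    z = net(H)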

    opened by kchodorow 0
  • training GPU configuration


    Thanks for your excellent repo.

    When training cc12m_32x1024 with type VitGAN or MLP Mixer, what kind of GPU environment do you use? A Tesla V100 with 32GB memory, or something else?

    Thanks

    opened by CrossLee1 1
  • Slow Training Speed


    Hi, first of all, great work! I really loved it. To replicate, I tried training on the Conceptual 12M dataset with the same depth and dims as the pretrained models, but training was too slow: even after 4 days it was still in the first (or 0th) epoch. I'm training on an NVIDIA Quadro RTX A6000, which I don't think is that slow. Any suggestions to improve the training speed? I have multi-GPU access, but it seems that isn't supported right now. Thanks!

    opened by s13kman 3
  • clarifying differences between available models


    Hi @mehdidc πŸ‘‹πŸΌ I'm a new team member at @replicate.

    I was trying out your model on replicate.ai and noticed that the names of the models are a bit cryptic, so it's hard to know what differences to expect when using each:

    Screen Shot 2021-09-23 at 6 21 40 PM

    Here's where those are declared:

    https://github.com/mehdidc/feed_forward_vqgan_clip/blob/dd640c0ee5f023ddf83379e6b3906529511ce025/predict.py#L10-L14

    Looking at the source for cog's Input class it looks like options can be a list of anything:

    options: Optional[List[Any]] = None
    

    I'm not sure if this is right, but maybe this means that each model could be declared as a tuple with an accompanying label:

    MODELS = [
        ("cc12m_32x1024_vitgan_v0.1.th", "This model does x"),
        ("cc12m_32x1024_vitgan_v0.2.th", "This model does y"),
        ("cc12m_32x1024_mlp_mixer_v0.2.th", "This model does z"),
    ]
    

    We could then display those labels on the model form on replicate.ai to make the available options more clear to users.

    Curious to hear your thoughts!

    cc @CJWBW @bfirsh @andreasjansson

    opened by zeke 2
  • How to improve so we could get results closer to the "regular" VQGAN+CLIP?


    Hi! I really love this idea and think that this concept solves the main bottleneck of the current VQGAN+CLIP approach, which is the per-prompt optimisation. I love how instantaneous this approach is at generating new images. However, results with the different CC12M or blog-caption models fall short in comparison to the most recent VQGAN+CLIP optimisation approaches.

    I am wondering where it could potentially be improved. I guess one thing could be trying to incorporate the MSE-regularised and z+quantize variants of the most recent VQGAN+CLIP approaches. The other is that I am wondering whether a bigger training dataset would improve the quality. Would it make sense to train it on ImageNet captions or maybe even a bigger 100M+ caption dataset? (maybe [email protected]?)

    As you can see, I can't actually contribute much (but I could help with a bigger dataset training effort) but I'm cheering for this project to not die!

    opened by apolinario 2
  • Finetuning CLIP to improve domain-specific performance


    It's quite easy to finetune one of the OpenAI CLIP checkpoints with this codebase:

    https://github.com/Zasder3/train-CLIP-FT

    Uses pytorch-lightning. May be worth pursuing

    opened by afiaka87 1
Owner
Mehdi Cherti, Deep Learning Researcher