clip-text-decoder

Generate text captions for images from their CLIP embeddings. Includes PyTorch model code and example training script.

Example Predictions

Example captions were computed with the pretrained model mentioned below.

"A man riding a wave on top of a surfboard."
[Image: a surfer riding a wave]

"A baseball player is swinging a bat at a ball."
[Image: a baseball player]

"A dog running across a field with a frisbee."
[Image: a dog with a frisbee]

Installation

Install for easier access to the following objects/classes:

  • clip_text_decoder.datasets.ClipCocoCaptionsDataset
  • clip_text_decoder.model.ClipDecoder
  • clip_text_decoder.model.ClipDecoderInferenceModel
  • clip_text_decoder.tokenizer.Tokenizer

The train.py script will not be available in the installed package, since it's located in the root directory. To train new models, either clone this repository or recreate train.py locally.

Using pip:

pip install clip-text-decoder

From source:

git clone https://github.com/fkodom/clip-text-decoder.git
cd clip-text-decoder
pip install .
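
After installing (by either method), the classes listed above can be imported directly. A quick sanity check, using the module paths from the list above:

from clip_text_decoder.datasets import ClipCocoCaptionsDataset
from clip_text_decoder.model import ClipDecoder, ClipDecoderInferenceModel
from clip_text_decoder.tokenizer import Tokenizer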

NOTE: You'll also need to install openai/CLIP to encode images. ClipCocoCaptionsDataset requires it as well, to build the captions dataset on first use (the result is cached for subsequent calls).

pip install "clip @ git+https://github.com/openai/CLIP.git"

The CLIP dependency can't be declared in the PyPI package, because CLIP itself isn't published on PyPI.
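
For reference, a minimal sketch of building the captions dataset on first use. The no-argument constructor and the (embedding, caption) item format shown here are assumptions, not a confirmed API:

from clip_text_decoder.datasets import ClipCocoCaptionsDataset

# First call builds the dataset (downloads captions, encodes images with CLIP)
# and caches it; later calls load the cached copy.
dataset = ClipCocoCaptionsDataset()
embedding, caption = dataset[0]  # assumed: (CLIP embedding, caption string) pairs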

Training

Open In Colab

Launch your own training session using the provided script (train.py):

python train.py --max-epochs 5

Training CLI arguments, along with their default values:

--max-epochs 5  # (int)
--num-layers 6  # (int)
--dim-feedforward 256  # (int)
--precision 16  # (16 or 32)
--seed 0  # (int)
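
For example, a run that overrides several defaults at once (flag names and types as listed above; the values are arbitrary):

python train.py \
    --max-epochs 10 \
    --num-layers 6 \
    --dim-feedforward 256 \
    --precision 16 \
    --seed 42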

Inference

The training script produces a model.zip archive containing the Tokenizer and the trained model parameters. To perform inference with it:

import clip
from PIL import Image
import torch

from clip_text_decoder.model import ClipDecoderInferenceModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ClipDecoderInferenceModel.load("path/to/model.zip").to(device)
clip_model, clip_preprocessor = clip.load("ViT-B/32", device=device, jit=False)

# Create a blank dummy image
dummy_image = Image.new("RGB", (224, 224))
preprocessed = clip_preprocessor(dummy_image).to(device)
# Add a batch dimension using '.unsqueeze(0)'
encoded = clip_model.encode_image(preprocessed.unsqueeze(0))
text = model(encoded)

print(text)
# Probably some nonsense, because we used a dummy image.
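
To caption a real image instead, load it with PIL and reuse the same pipeline. The file path below is a placeholder:

# Encode an image from disk and decode a caption for it.
image = Image.open("path/to/photo.jpg").convert("RGB")
with torch.no_grad():
    encoded = clip_model.encode_image(clip_preprocessor(image).unsqueeze(0).to(device))
print(model(encoded))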

Pretrained Models

A pretrained CLIP decoder is hosted in my Google Drive, and can be downloaded with:

from clip_text_decoder.model import ClipDecoderInferenceModel

model = ClipDecoderInferenceModel.download_pretrained()

To cache the pretrained model locally, so that it's not re-downloaded each time:

model = ClipDecoderInferenceModel.download_pretrained("/path/to/model.zip")
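
The cached call above is a drop-in replacement for ClipDecoderInferenceModel.load in the Inference example, e.g.:

model = ClipDecoderInferenceModel.download_pretrained("/path/to/model.zip").to(device)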

Shortcomings

  • Only works well with COCO-style images. If you go outside the distribution of COCO objects, you'll get nonsense text captions.
  • Relatively short training time. Even within the COCO domain, you'll occasionally see incorrect captions, and quite a few will have bad grammar or repetitive descriptors.

Comments
  • Decoding Text Embeddings Coded Using Hugging Face ClipTextModel

    Suppose I have text embeddings created with Hugging Face's CLIPTextModel, as follows:

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    class_list = [
        "i love going home and playing with my wife and kids",
        "i love going home",
        "playing with my wife and kids",
        "family",
        "war",
        "writing",
    ]

    model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

    inputs = tokenizer(class_list, padding=True, return_tensors="pt")
    outputs = model(**inputs)
    hidden_state = outputs.last_hidden_state  # per-token embeddings
    embeddings = outputs.pooler_output  # pooled sentence embeddings
    

    Questions:

    1. Is it possible to use the clip-text-decoder to convert the embeddings back to text?
    2. If it is indeed possible to do so, could you provide an example of how?

    Looking forward to receiving your feedback.

    opened by mbdzi 6
  • Fix string error when loading clip models.

    Error

    The model name string (VIT-xxx) in the check_vision_backbone function does not match the model name string (ViT-xxx) used by the CLIP repository, which causes an error either in check_vision_backbone or when loading the CLIP model.

    Solution

    This PR changes the model name string in check_vision_backbone to ViT-xxx, making it compatible with the CLIP repository.

    opened by Adenialzz 1
  • BLIP vision backbone

    • Added blip backbone; still cleaning up last pieces
    • Bug fixes for training script, and remove debug code.
    • Fix dependencies in test workflow; update README statistics
    • Fix test issue with CUDA device
    • Update unit tests for newer Python, torch versions
    • Test up to Python 3.10
    • Test up to Python 3.9
    • Install lavis first
    opened by fkodom 0
  • Feature: Beam Search

    • Add beam search, clip dependency to setup.py
    • Fix installation instructions
    • Remove main clause
    • Add '--beam-size' option to 'train.py' script.
    • Update README; propagate the '--beam-size' arg through eval functions
    • Update setup.cfg, add pre-commit hooks
    • Reformat images
    • Remove fixed image width
    • Add detail to README; comments to call method for beam search
    • Updated README headline
    opened by fkodom 0
  • Bug Fixes for Broken Tests

    • Cache the old fashioned way :)
    • Fix silly typo in test for image caption model
    • Apply black and isort formatting
    • Install latest version of 'black', reapply formatting
    • Fix flake8 issue (duplicate function definition), and install latest patch version of pytorch for tests.
    • Skip slow tests by default, add 'slow' marker to inference model tests.
    opened by fkodom 0
  • GPT2 Decoder

    • Update model to use DistilGPT2 as a pre-trained decoder.
    • Removed tokenizer (no longer used), fixed bugs in Model source file, and updated model unit tests.
    • Backwards compatibility for 'gdown.download' method.
    • Update installation requirements, caption examples in README
    opened by fkodom 0
  • Upgrade CodeSee workflow to version 2

    CodeSee is a code visibility platform.

    This change updates the CodeSee workflow file to the latest version for security, maintenance, and support improvements (see changelog below).

    That workflow file:

    • Runs CodeSee's code analysis on every PR push and merge.
    • Uploads that analysis to CodeSee.
    • Does not transmit your code.

    The code analysis is used to generate maps and insights about this codebase.

    CodeSee workflow changelog:

    • Improved security: Updates permission to be read-only.
    • Improved future maintenance: Replaces the body of the workflow with a single github action: codesee-action. This makes it significantly easier for CodeSee to introduce future improvements and fixes without requiring another PR like this.
    • Improved Python support: The action now properly supports Python 3.11, and will continue to support new Python versions as they are released.
    opened by codesee-maps[bot] 1
  • Incompatible checksum error

    I see the following error when trying to load the pretrained model.

        tokenizer=pickle.loads(tokenizer_buffer.read()),
      File "stringsource", line 6, in spacy.pipeline.trainable_pipe.__pyx_unpickle_TrainablePipe
    _pickle.PickleError: Incompatible checksums (102742709 vs 0x417ddeb = (cfg, model, name, vocab))
    

    Am I missing something?

    opened by dapurv5 0
Releases
  • 1.4.4(Nov 7, 2022)

    What's Changed

    • Fix string error when loading clip models. by @Adenialzz in https://github.com/fkodom/clip-text-decoder/pull/12

    New Contributors

    • @Adenialzz made their first contribution in https://github.com/fkodom/clip-text-decoder/pull/12

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.3...1.4.4

  • 1.4.3(Nov 7, 2022)

    What's Changed

    • Refactor Dataset by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/11

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.2...1.4.3

  • 1.4.2(Oct 26, 2022)

    What's Changed

    • Huggingface Evaluate by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/9

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.1...1.4.2

  • 1.4.1(Oct 26, 2022)

    What's Changed

    • Datapipes by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/8

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.0...1.4.1

  • 1.4.0(Oct 23, 2022)

    What's Changed

    • BLIP vision backbone by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/7

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.3.0...1.4.0

  • 1.3.0(Oct 2, 2022)

    What's Changed

    • Feature: Beam Search by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/5
    • Bug Fix: PyPI Release by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/6

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.2.0...1.3.0

  • 1.2.0(Jan 29, 2022)

    What's Changed

    • Cache CLIP embeddings for the dataset, rather than recomputing them each time.
    • Reduce model file sizes by storing weights at lower precision.
    • Add an ImageCaptionInferenceModel class for easier out-of-the-box use.
    • Fix some broken unit tests.
    • Better Data Caching by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/3
    • Bug Fixes for Broken Tests by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/4

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.1.0...1.2.0

  • 1.1.0(Dec 22, 2021)

    What's Changed

    • GPT2 Decoder by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/2

    New Contributors

    • @fkodom made their first contribution in https://github.com/fkodom/clip-text-decoder/pull/2

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.0.0...1.1.0

  • 0.1.1(Nov 14, 2021)

  • 0.1.0(Nov 14, 2021)

Owner
Frank Odom
Director of Innovation at Plainsight. I like neural nets, and neural nets like me.