Code and pre-trained models for MultiMAE: Multi-modal Multi-task Masked Autoencoders

Overview

MultiMAE: Multi-modal Multi-task Masked Autoencoders

Roman Bachmann*, David Mizrahi*, Andrei Atanov, Amir Zamir

Website | arXiv | BibTeX

Open in Colab | Hugging Face Spaces

Official PyTorch implementation and pre-trained models for MultiMAE: Multi-modal Multi-task Masked Autoencoders.

We introduce Multi-modal Multi-task Masked Autoencoders (MultiMAE), an efficient and effective pre-training strategy for Vision Transformers. Given a small random sample of visible patches from multiple modalities, the MultiMAE pre-training objective is to reconstruct the masked-out regions. Once pre-trained, a single MultiMAE encoder can then be used for both single-modal and multi-modal downstream transfer, yielding results that are competitive with or significantly better than the baselines.
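
For intuition, the sketch below illustrates this masking idea in plain PyTorch: patch tokens from all modalities are concatenated, a small random subset is kept visible for the encoder, and everything else is marked for reconstruction. This is a simplified, hypothetical sketch (uniform sampling and the function name are stand-ins, not the actual MultiMAE implementation; see multimae/multimae.py and PRETRAINING.md for the real thing).

```python
import torch

def sample_visible_tokens(tokens_per_modality: dict, num_visible: int):
    """Illustrative sketch: keep a small random subset of patch tokens, drawn
    jointly across all modalities (e.g. RGB, depth, semseg), and mark the rest
    as masked so they can be reconstructed by task-specific decoders.

    tokens_per_modality: {modality_name: tensor of shape [B, N_m, D]}
    Returns: visible tokens [B, num_visible, D] and a [B, N] mask
             (1 = masked / to reconstruct, 0 = visible).
    """
    all_tokens = torch.cat(list(tokens_per_modality.values()), dim=1)  # [B, N, D]
    B, N, D = all_tokens.shape
    # Uniform random sampling here is a simplification of MultiMAE's sampling scheme.
    ids_shuffle = torch.rand(B, N, device=all_tokens.device).argsort(dim=1)
    ids_keep = ids_shuffle[:, :num_visible]
    visible = torch.gather(all_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=all_tokens.device)
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask

# Hypothetical usage with already-embedded patch tokens per modality:
# visible, mask = sample_visible_tokens(
#     {'rgb': rgb_tokens, 'depth': depth_tokens, 'semseg': semseg_tokens},
#     num_visible=98)
# The encoder only processes `visible`; the decoders reconstruct the masked tokens.
```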

Catalog

  • Pre-trained models
  • MultiMAE pre-training code
  • ImageNet-1K classification fine-tuning code
  • Semantic segmentation fine-tuning code (single-modal & multi-modal)
  • Depth estimation fine-tuning code
  • Taskonomy fine-tuning code
  • Colab & Hugging Face demos

Pre-trained models

We provide the weights of our pre-trained MultiMAE ViT-B model, in MultiViT (multi-modal) format and timm (RGB-only) format.

For comparison, we also provide the weights of a MAE ViT-B model that we pre-trained using the official MAE codebase following the recommended settings.

| Method   | Arch. | Pre-training modalities | Pre-training epochs | Weights (MultiViT) | Weights (timm) | Config  |
|----------|-------|-------------------------|---------------------|--------------------|----------------|---------|
| MAE      | ViT-B | RGB                     | 1600                | download           | download       | See MAE |
| MultiMAE | ViT-B | RGB+D+S                 | 1600                | download           | download       | link    |

These pre-trained models can then be fine-tuned using this codebase to reach the following performance:

| Method      | Classif. (@1) ImageNet-1K (RGB) | Sem. Seg. (mIoU) ADE20K (RGB) | Sem. Seg. (mIoU) Hypersim (RGB / D / RGB+D) | Sem. Seg. (mIoU) NYUv2 (RGB / D / RGB+D) | Depth (δ1) NYUv2 (RGB) |
|-------------|------|------|--------------------|--------------------|------|
| Sup. (DeiT) | 81.8 | 45.8 | 33.9 / - / -       | 50.1 / - / -       | 80.7 |
| MAE         | 83.3 | 46.2 | 36.5 / - / -       | 50.8 / - / -       | 85.1 |
| MultiMAE    | 83.3 | 46.2 | 37.0 / 38.5 / 47.6 | 52.0 / 41.4 / 56.0 | 86.4 |

Model formats

We provide pre-trained weights in two different formats: the single-modal ViT / timm format, which is compatible with other popular ViT repositories (e.g., timm, DINO, MAE), and the multi-modal MultiMAE / MultiViT format, which is used throughout this codebase for multi-modal pre-training and fine-tuning. See multimae/multimae.py for the documentation and implementation of MultiMAE / MultiViT.

You can convert between these formats using the provided vit2multimae_converter.py and multimae2vit_converter.py scripts.
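
For instance, the timm-format (RGB-only) weights can be dropped into a standard ViT-B/16 from timm. The snippet below is a minimal sketch, not an official loading script: the checkpoint path is a placeholder, and the 'model' key used to unwrap the state dict is an assumption that may not match the actual file layout.

```python
import timm
import torch

# Minimal sketch: load RGB-only (timm-format) MultiMAE weights into a timm ViT-B/16.
# The path is a placeholder; the 'model' key is an assumption about the checkpoint layout.
ckpt = torch.load('path/to/multimae_timm_checkpoint.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt) if isinstance(ckpt, dict) else ckpt

model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=0)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('Missing keys:', missing)        # e.g. a classification head added during fine-tuning
print('Unexpected keys:', unexpected)
```

For multi-modal fine-tuning, use the MultiViT-format weights with this codebase instead, as described in FINETUNING.md.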

Usage

Set-up

See SETUP.md for set-up instructions.

Pre-training

See PRETRAINING.md for pre-training instructions.

Fine-tuning

See FINETUNING.md for fine-tuning instructions.

Demo & visualizations

For interactive demos, please see our website. Open our Colab notebook to play around with the visualization code, or simply upload an image to our Hugging Face Spaces demo.

Acknowledgement

This repository is built using the timm, DeiT, DINO, MoCo v3, BEiT, MAE-priv, and MAE repositories.

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.

Citation

If you find this repository helpful, please consider citing our work:

@article{bachmann2022multimae,
  author    = {Roman Bachmann and David Mizrahi and Andrei Atanov and Amir Zamir},
  title     = {{MultiMAE}: Multi-modal Multi-task Masked Autoencoders},
  journal   = {arXiv preprint arXiv:2204.01678},
  year      = {2022},
}
Issues
  • add web demo/model to Huggingface


    Hi, would you be interested in adding MultiMAE to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models, datasets, and Spaces (web demos) can be added to a user account or organization, similar to GitHub.

    Examples from other organizations:
    Keras: https://huggingface.co/keras-io
    Microsoft: https://huggingface.co/microsoft
    Facebook: https://huggingface.co/facebook

    Example Spaces with repos:
    BLIP: https://github.com/salesforce/BLIP (Space: https://huggingface.co/spaces/salesforce/BLIP)
    Omnivore: https://github.com/facebookresearch/omnivore (Space: https://huggingface.co/spaces/akhaliq/omnivore)

    And here are guides for adding Spaces, models, and datasets to your org:
    How to add a Space: https://huggingface.co/blog/gradio-spaces
    How to add models: https://huggingface.co/docs/hub/adding-a-model
    Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested. If you have any questions, we can also help with the technical implementation.

    opened by AK391 4
  • Linear probing results


    Hey, thank you for providing the code for the paper. The paper is really interesting and the project page is very well done!

    I was wondering whether you've tested the performance of linear probing on the RGB image when trained with all 3 modalities. The linear probing results in the original MAE paper were not very good; it would be interesting to understand whether the additional supervision creates better representations that translate into better linear probing scores.

    Thanks, Eliahu

    opened by eliahuhorwitz 4
  • Query about semseg domain in pre-training


    Hi, I have successfully created the pseudo labels and trained the 'rgb' in/out-domain MultiMAE model.

    But when I trained the model with 'rgb-semseg' in/out-domain, I hit an error in multimae/input_adapters.py line 232:

    # Create patches [B, C, H, W] -> [B, (H*W), C]
    x_patch = rearrange(self.proj(x), 'b d nh nw -> b (nh nw) d')
    

    The full log is in log.txt. x.size() is [batchsize, 64, 56, 56] before line 232, and I can't figure out what's wrong.

    What's more, I don't understand why the pseudo semseg label image is resized to 1/4 of the input size (that is, 224*224 -> 56*56) in utils/datasets.py line 105:

    # Convert to Tensor
    for task in task_dict:
        if task in ['depth']:
            img = torch.Tensor(np.array(task_dict[task]) / 2 ** 16)
            img = img.unsqueeze(0)  # 1 x H x W
        elif task in ['rgb']:
            img = TF.to_tensor(task_dict[task])
            img = TF.normalize(img, mean=self.rgb_mean, std=self.rgb_std)
        elif task in ['semseg', 'semseg_coco']:
            # TODO: add this to a config instead
            # Rescale to 0.25x size (stride 4)
            scale_factor = 0.25
            img = task_dict[task].resize((int(self.input_size * scale_factor), int(self.input_size * scale_factor)))
            # Using pil_to_tensor keeps it in uint8, to_tensor converts it to float (rescaled to [0, 1])
            img = TF.pil_to_tensor(img).to(torch.long).squeeze(0)
    

    and then projected with nn.Conv2d in multimae/input_adapters.py line 198:

    if self.interpolate_class_emb:
        self.proj = nn.Sequential(
            nn.Upsample(scale_factor=(1 / self.P_H, 1 / self.P_W),
                        mode='bilinear'),  # Actually a downsample operation
            nn.Conv2d(in_channels=self.dim_class_emb, out_channels=self.dim_tokens,
                        kernel_size=1, stride=1),
        )
    else:
        self.proj = nn.Conv2d(
            in_channels=self.dim_class_emb, out_channels=self.dim_tokens,
            kernel_size=(self.P_H, self.P_W), stride=(self.P_H, self.P_W)
        )
    

    Thank you for any help.

    opened by Chianghui-Wong 3
  • Example usage of regular MAE Weights


    Hey, awesome work! I am trying to figure out how to modify the demo notebook to use the regular MAE instead of MultiMAE. In particular, I comment out all depth and semseg info, but the resulting image infilling looks corrupted. Could you by chance share an example of proper usage of the regular MAE weights? Thanks so much for the help!

    opened by mhamilton723 2
  • Some doubts about pseudo labels


    Hi, I am pseudo-labeling ImageNet-1K and encountering some difficulties.

    First, what would happen if there are more than 255 semseg classes? How can a single-channel 8-bit PNG image represent them? (Although the COCO dataset has only 80 classes, ImageNet has more than 255 classes when fine-tuning.)

    Second, in the Colab notebook example, the DPT rgb2depth model cannot take ImageNet images of arbitrary size. How can we save all the pseudo labels before the data augmentation crops them to 224*224? We need to keep the original images aligned with the pseudo-labeled images, don't we?

    Thank you for any help.

    opened by Chianghui-Wong 2
  • Query about data preparation for finetuning for nyuv2-depth


    Hi, I think the following correction holds: line 357 in https://github.com/EPFL-VILAB/MultiMAE/blob/main/run_finetuning_depth.py should be dataset_train = build_regression_dataset(args, data_path=args.train_data_path, transform=train_transform) instead of dataset_train = build_regression_dataset(args, data_path=args.data_path, transform=train_transform), or the argument train_data_path should be changed to data_path.

    Apart from that, I am trying to recreate your results on NYUv2 for depth, but the dataset preparation is not clear from the instructions in SETUP. Given the folder structure explained there, where should the ground truth be when fine-tuning for depth and evaluating? And is mask_valid required for fine-tuning? I get: RuntimeError: Found 0 logs in subfolders of: /tmp-network/user/varora/multimae/multimae_data/train/mask_valid

    opened by AntiLibrary5 1
  • Query regarding the output adapter heads


    Hi, thank you for the interesting work and the extensive experiments. In the paper, your depth results are based on the DPT head. In the Colab, you use the spatial adapter head for inference. I was wondering whether your fine-tuning results with the spatial adapter head were better or worse than with the DPT head. Was the intention of implementing this spatial head mainly to test a pure transformer-based head (compared to DPT's convolution-based, RefineNet-like approach)?

    Thank you.

    opened by AntiLibrary5 1
  • about run_finetuning_semseg.py


    Hi, I find that:

    in MultiMAE/run_finetuning_semseg.py line 735

    seg_pred_argmax = seg_pred[:num_classes].argmax(dim=1) 
    

    I think it should be

    seg_pred_argmax = seg_pred[:,:num_classes,:,:].argmax(dim=1) 
    
    opened by Chianghui-Wong 1
Owner

VILAB: Visual Intelligence & Learning Lab, Swiss Federal Institute of Technology (EPFL)