Awesome Transformer with Computer Vision (CV)

Overview

Awesome Visual-Transformer

A collection of papers on Transformers for computer vision (CV).

If you find any overlooked papers, please open an issue or, preferably, a pull request.

Papers

Transformer original paper

  • Attention Is All You Need (Vaswani et al., NeurIPS 2017)
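
For quick reference, a minimal sketch of the scaled dot-product attention at the core of the original Transformer is given below; the function name, shapes, and toy data are illustrative assumptions, not code from any paper in this list.

```python
# Minimal sketch of scaled dot-product attention (Vaswani et al., 2017).
# Names, shapes, and the toy example are illustrative, not taken from a specific repo.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # attention-weighted sum of values

# Toy self-attention over 4 tokens with 8-dimensional embeddings.
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```

Multi-head attention runs several such attentions over linearly projected Q/K/V and concatenates the outputs; the vision papers below mainly differ in how they build and restrict this token sequence for images.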

Technical blog

  • [Chinese Blog] A 30,000-character long-form introduction to vision Transformers [Link]
  • [Chinese Blog] Vision Transformer explained in detail (theory analysis + code walkthrough) [Link]

Survey

  • A Survey of Visual Transformers [paper] - 2021.11.30
  • Transformers in Vision: A Survey [paper] - 2021.02.22
  • A Survey on Visual Transformer [paper] - 2021.01.30
  • A Survey of Transformers [paper] - 2020.06.09

arXiv papers

  • [Discrete ViT] Discrete Representations Strengthen Vision Transformer Robustness [paper]
  • [StyleSwin] StyleSwin: Transformer-based GAN for High-resolution Image Generation [paper][code]
  • [SReT] Sliced Recursive Transformer [paper] [code]
  • Fast Point Transformer [paper]
  • Dynamic Token Normalization Improves Vision Transformer [paper]
  • TokenLearner: What Can 8 Learned Tokens Do for Images and Videos? [paper] [code]
  • Swin Transformer V2: Scaling Up Capacity and Resolution [paper]
  • [Restormer] Restormer: Efficient Transformer for High-Resolution Image Restoration [paper]
  • [MAE] Masked Autoencoders Are Scalable Vision Learners [paper]
  • Improved Robustness of Vision Transformer via PreLayerNorm in Patch Embedding [paper]
  • [ORViT] Object-Region Video Transformers [paper] [code]
  • Adaptively Multi-view and Temporal Fusing Transformer for 3D Human Pose Estimation [paper] [code]
  • [NViT] NViT: Vision Transformer Compression and Parameter Redistribution [paper]
  • 6D-ViT: Category-Level 6D Object Pose Estimation via Transformer-based Instance Representation Learning [paper]
  • Adversarial Token Attacks on Vision Transformers [paper]
  • Contextual Transformer Networks for Visual Recognition [paper] [code]
  • [TranSalNet] TranSalNet: Visual saliency prediction using transformers [paper]
  • [MobileViT] MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer [paper]
  • A free lunch from ViT: Adaptive Attention Multi-scale Fusion Transformer for Fine-grained Visual Recognition [paper]
  • [3D-Transformer] 3D-Transformer: Molecular Representation with Transformer in 3D Space [paper]
  • [CCTrans] CCTrans: Simplifying and Improving Crowd Counting with Transformer [paper]
  • [UFO-ViT] UFO-ViT: High Performance Linear Vision Transformer without Softmax [paper]
  • Sparse Spatial Transformers for Few-Shot Learning [paper]
  • Vision Transformer Hashing for Image Retrieval [paper]
  • [OH-Former] OH-Former: Omni-Relational High-Order Transformer for Person Re-Identification [paper]
  • [Pix2seq] Pix2seq: A Language Modeling Framework for Object Detection [paper]
  • [CoAtNet] CoAtNet: Marrying Convolution and Attention for All Data Sizes [paper]
  • [LOTR] LOTR: Face Landmark Localization Using Localization Transformer [paper]
  • Transformer-Unet: Raw Image Processing with Unet [paper]
  • [GraFormer] GraFormer: Graph Convolution Transformer for 3D Pose Estimation [paper]
  • [CDTrans] CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [paper]
  • PQ-Transformer: Jointly Parsing 3D Objects and Layouts from Point Clouds [paper] [code]
  • Anchor DETR: Query Design for Transformer-Based Detector [paper] [code]
  • [ESRT] Efficient Transformer for Single Image Super-Resolution [paper]
  • [MaskFormer] MaskFormer: Per-Pixel Classification is Not All You Need for Semantic Segmentation [paper] [code]
  • [SwinIR] SwinIR: Image Restoration Using Swin Transformer [paper] [code]
  • [Trans4Trans] Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance [paper]
  • Do Vision Transformers See Like Convolutional Neural Networks? [paper]
  • Boosting Salient Object Detection with Transformer-based Asymmetric Bilateral U-Net [paper]
  • Light Field Image Super-Resolution with Transformers [paper] [code]
  • Focal Self-attention for Local-Global Interactions in Vision Transformers [paper] [code]
  • Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers [paper] [code]
  • Mobile-Former: Bridging MobileNet and Transformer [paper]
  • [TriTransNet] TriTransNet: RGB-D Salient Object Detection with a Triplet Transformer Embedding Network [paper]
  • [PSViT] PSViT: Better Vision Transformer via Token Pooling and Attention Sharing [paper]
  • Boosting Few-shot Semantic Segmentation with Transformers [paper] [code]
  • Congested Crowd Instance Localization with Dilated Convolutional Swin Transformer [paper]
  • Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer [paper]
  • [CrossFormer] CrossFormer: A Versatile Vision Transformer Based on Cross-scale Attention [paper] [code]
  • [Styleformer] Styleformer: Transformer based Generative Adversarial Networks with Style Vector [paper] [code]
  • [CMT] CMT: Convolutional Neural Networks Meet Vision Transformers [paper]
  • [TransAttUnet] TransAttUnet: Multi-level Attention-guided U-Net with Transformer for Medical Image Segmentation [paper]
  • TransClaw U-Net: Claw U-Net with Transformers for Medical Image Segmentation [paper]
  • [ViTGAN] ViTGAN: Training GANs with Vision Transformers [paper]
  • What Makes for Hierarchical Vision Transformer? [paper]
  • CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows [paper] [code]
  • [Trans4Trans] Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World [paper]
  • [FFVT] Feature Fusion Vision Transformer for Fine-Grained Visual Categorization [paper]
  • [TransformerFusion] TransformerFusion: Monocular RGB Scene Reconstruction using Transformers [paper]
  • Escaping the Big Data Paradigm with Compact Transformers [paper]
  • How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers [paper]
  • Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks [paper]
  • [XCiT] XCiT: Cross-Covariance Image Transformers [paper] [code]
  • Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer [paper] [code]
  • Video Swin Transformer [paper] [code]
  • [VOLO] VOLO: Vision Outlooker for Visual Recognition [paper] [code]
  • Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images [paper]
  • [P2T] P2T: Pyramid Pooling Transformer for Scene Understanding [paper]
  • End-to-end Temporal Action Detection with Transformer [paper] [code]
  • Efficient Self-supervised Vision Transformers for Representation Learning [paper]
  • Space-time Mixing Attention for Video Transformer [paper]
  • Transformed CNNs: recasting pre-trained convolutional layers with self-attention [paper]
  • [CAT] CAT: Cross Attention in Vision Transformer [paper]
  • Scaling Vision Transformers [paper]
  • [DETReg] DETReg: Unsupervised Pretraining with Region Priors for Object Detection [paper] [code]
  • Chasing Sparsity in Vision Transformers: An End-to-End Exploration [paper]
  • [MViT] MViT: Mask Vision Transformer for Facial Expression Recognition in the wild [paper]
  • Demystifying Local Vision Transformer: Sparse Connectivity, Weight Sharing, and Dynamic Weight [paper]
  • On Improving Adversarial Transferability of Vision Transformers [paper]
  • Fully Transformer Networks for Semantic Image Segmentation [paper]
  • Visual Transformer for Task-aware Active Learning [paper] [code]
  • Efficient Training of Visual Transformers with Small-Size Datasets [paper]
  • Reveal of Vision Transformers Robustness against Adversarial Attacks [paper]
  • Person Re-Identification with a Locally Aware Transformer [paper]
  • [Refiner] Refiner: Refining Self-attention for Vision Transformers [paper]
  • [ViTAE] ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias [paper]
  • Video Instance Segmentation using Inter-Frame Communication Transformers [paper]
  • Transformer in Convolutional Neural Networks [paper] [code]
  • [Uformer] Uformer: A General U-Shaped Transformer for Image Restoration [paper] [code]
  • Patch Slimming for Efficient Vision Transformers [paper]
  • [RegionViT] RegionViT: Regional-to-Local Attention for Vision Transformers [paper]
  • Associating Objects with Transformers for Video Object Segmentation [paper] [code]
  • Few-Shot Segmentation via Cycle-Consistent Transformer [paper]
  • Glance-and-Gaze Vision Transformer [paper] [code]
  • Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [paper]
  • [DynamicViT] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification [paper] [code]
  • When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations [paper] [code]
  • Unsupervised Out-of-Domain Detection via Pre-trained Transformers [paper]
  • [TransMIL] TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification [paper]
  • [TransVOS] TransVOS: Video Object Segmentation with Transformers [paper]
  • [KVT] KVT: k-NN Attention for Boosting Vision Transformers [paper]
  • [MSG-Transformer] MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens [paper] [code]
  • [SegFormer] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers [paper] [code]
  • [SDNet] SDNet: multi-branch for single image deraining using Swin [paper] [code]
  • [DVT] Not All Images are Worth 16x16 Words: Dynamic Vision Transformers with Adaptive Sequence Length [paper]
  • [GazeTR] Gaze Estimation using Transformer [paper] [code]
  • Transformer-Based Deep Image Matching for Generalizable Person Re-identification [paper]
  • Less is More: Pay Less Attention in Vision Transformers [paper]
  • [FoveaTer] FoveaTer: Foveated Transformer for Image Classification [paper]
  • [TransDA] Transformer-Based Source-Free Domain Adaptation [paper] [code]
  • An Attention Free Transformer [paper]
  • [PTNet] PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer [paper]
  • [ResT] ResT: An Efficient Transformer for Visual Recognition [paper] [code]
  • [CogView] CogView: Mastering Text-to-Image Generation via Transformers [paper]
  • [NesT] Aggregating Nested Transformers [paper]
  • [TAPG] Temporal Action Proposal Generation with Transformers [paper]
  • Boosting Crowd Counting with Transformers [paper]
  • [COTR] COTR: Convolution in Transformer Network for End to End Polyp Detection [paper]
  • [TransVOD] End-to-End Video Object Detection with Spatial-Temporal Transformers [paper] [code]
  • Intriguing Properties of Vision Transformers [paper] [code]
  • Combining Transformer Generators with Convolutional Discriminators [paper]
  • Rethinking the Design Principles of Robust Vision Transformer [paper]
  • Vision Transformers are Robust Learners [paper] [code]
  • Manipulation Detection in Satellite Images Using Vision Transformer [paper]
  • [Swin-Unet] Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [paper] [code]
  • Self-Supervised Learning with Swin Transformers [paper] [code]
  • [SCTN] SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [paper]
  • [RelationTrack] RelationTrack: Relation-aware Multiple Object Tracking with Decoupled Representation [paper]
  • [VGTR] Visual Grounding with Transformers [paper]
  • [PST] Visual Composite Set Detection Using Part-and-Sum Transformers [paper]
  • [TrTr] TrTr: Visual Tracking with Transformer [paper] [code]
  • [MOTR] MOTR: End-to-End Multiple-Object Tracking with TRansformer [paper] [code]
  • Attention for Image Registration (AiR): an unsupervised Transformer approach [paper]
  • [TransHash] TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval [paper]
  • [ISTR] ISTR: End-to-End Instance Segmentation with Transformers [paper] [code]
  • [CAT] CAT: Cross-Attention Transformer for One-Shot Object Detection [paper]
  • [CoSformer] CoSformer: Detecting Co-Salient Object with Transformers [paper]
  • End-to-End Attention-based Image Captioning [paper]
  • [PMTrans] Pyramid Medical Transformer for Medical Image Segmentation [paper]
  • [HandsFormer] HandsFormer: Keypoint Transformer for Monocular 3D Pose Estimation of Hands and Object in Interaction [paper]
  • [GasHis-Transformer] GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification [paper]
  • Emerging Properties in Self-Supervised Vision Transformers [paper]
  • [InTra] Inpainting Transformer for Anomaly Detection [paper]
  • [Twins] Twins: Revisiting Spatial Attention Design in Vision Transformers [paper] [code]
  • [MLMSPT] Point Cloud Learning with Transformer [paper]
  • Medical Transformer: Universal Brain Encoder for 3D MRI Analysis [paper]
  • [ConTNet] ConTNet: Why not use convolution and transformer at the same time? [paper] [code]
  • [DTNet] Dual Transformer for Point Cloud Analysis [paper]
  • Improve Vision Transformers Training by Suppressing Over-smoothing [paper] [code]
  • Transformer Meets DCFAM: A Novel Semantic Segmentation Scheme for Fine-Resolution Remote Sensing Images [paper]
  • [M3DeTR] M3DeTR: Multi-representation, Multi-scale, Mutual-relation 3D Object Detection with Transformers [paper] [code]
  • [Skeletor] Skeletor: Skeletal Transformers for Robust Body-Pose Estimation [paper]
  • [FaceT] Learning to Cluster Faces via Transformer [paper]
  • [MViT] Multiscale Vision Transformers [paper] [code]
  • [VATT] VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text [paper]
  • [So-ViT] So-ViT: Mind Visual Tokens for Vision Transformer [paper] [code]
  • Token Labeling: Training a 85.5% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet [paper] [code]
  • [TransRPPG] TransRPPG: Remote Photoplethysmography Transformer for 3D Mask Face Presentation Attack Detection [paper]
  • [VideoGPT] VideoGPT: Video Generation using VQ-VAE and Transformers [paper]
  • [M2TR] M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [paper]
  • Transformer Transforms Salient Object Detection and Camouflaged Object Detection [paper]
  • [TransCrowd] TransCrowd: Weakly-Supervised Crowd Counting with Transformer [paper] [code]
  • Visual Transformer Pruning [paper]
  • Self-supervised Video Retrieval Transformer Network [paper]
  • Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification [paper]
  • [TransGAN] TransGAN: Two Transformers Can Make One Strong GAN [paper] [code]
  • Geometry-Free View Synthesis: Transformers and no 3D Priors [paper] [code]
  • [CoaT] Co-Scale Conv-Attentional Image Transformers [paper] [code]
  • [LocalViT] LocalViT: Bringing Locality to Vision Transformers [paper] [code]
  • [CIT] Cloth Interactive Transformer for Virtual Try-On [paper] [code]
  • Handwriting Transformers [paper]
  • [SiT] SiT: Self-supervised vIsion Transformer [paper] [code]
  • On the Robustness of Vision Transformers to Adversarial Examples [paper]
  • An Empirical Study of Training Self-Supervised Visual Transformers [paper]
  • A Video Is Worth Three Views: Trigeminal Transformers for Video-based Person Re-identification [paper]
  • [AOT-GAN] Aggregated Contextual Transformations for High-Resolution Image Inpainting [paper] [code]
  • Deepfake Detection Scheme Based on Vision Transformer and Distillation [paper]
  • [ATAG] Augmented Transformer with Adaptive Graph for Temporal Action Proposal Generation [paper]
  • [TubeR] TubeR: Tube-Transformer for Action Detection [paper]
  • [AAformer] AAformer: Auto-Aligned Transformer for Person Re-Identification [paper]
  • [TFill] TFill: Image Completion via a Transformer-Based Architecture [paper]
  • Group-Free 3D Object Detection via Transformers [paper] [code]
  • [STGT] Spatial-Temporal Graph Transformer for Multiple Object Tracking [paper]
  • Going deeper with Image Transformers [paper]
  • [Meta-DETR] Meta-DETR: Few-Shot Object Detection via Unified Image-Level Meta-Learning [paper] [code]
  • [DA-DETR] DA-DETR: Domain Adaptive Detection Transformer by Hybrid Attention [paper]
  • Robust Facial Expression Recognition with Convolutional Visual Transformers [paper]
  • Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers [paper]
  • Spatiotemporal Transformer for Video-based Person Re-identification [paper]
  • [TransUNet] TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [paper] [code]
  • [CvT] CvT: Introducing Convolutions to Vision Transformers [paper] [code]
  • [TFPose] TFPose: Direct Human Pose Estimation with Transformers [paper]
  • [TransCenter] TransCenter: Transformers with Dense Queries for Multiple-Object Tracking [paper]
  • Face Transformer for Recognition [paper]
  • On the Adversarial Robustness of Visual Transformers [paper]
  • Understanding Robustness of Transformers for Image Classification [paper]
  • Lifting Transformer for 3D Human Pose Estimation in Video [paper]
  • [GSA-Net] Global Self-Attention Networks for Image Recognition [paper]
  • High-Fidelity Pluralistic Image Completion with Transformers [paper] [code]
  • [DPT] Vision Transformers for Dense Prediction [paper] [code]
  • [TransFG] TransFG: A Transformer Architecture for Fine-grained Recognition [paper]
  • [TimeSformer] Is Space-Time Attention All You Need for Video Understanding? [paper]
  • Multi-view 3D Reconstruction with Transformer [paper]
  • Can Vision Transformers Learn without Natural Images? [paper] [code]
  • End-to-End Trainable Multi-Instance Pose Estimation with Transformers [paper]
  • Instance-level Image Retrieval using Reranking Transformers [paper] [code]
  • [BossNAS] BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search [paper] [code]
  • [CeiT] Incorporating Convolution Designs into Visual Transformers [paper]
  • [DeepViT] DeepViT: Towards Deeper Vision Transformer [paper]
  • Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training [paper]
  • 3D Human Pose Estimation with Spatial and Temporal Transformers [paper] [code]
  • [UNETR] UNETR: Transformers for 3D Medical Image Segmentation [paper]
  • Scalable Visual Transformers with Hierarchical Pooling [paper]
  • [ConViT] ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases [paper]
  • [TransMed] TransMed: Transformers Advance Multi-modal Medical Image Classification [paper]
  • [U-Transformer] U-Net Transformer: Self and Cross Attention for Medical Image Segmentation [paper]
  • [SpecTr] SpecTr: Spectral Transformer for Hyperspectral Pathology Image Segmentation [paper] [code]
  • [TransBTS] TransBTS: Multimodal Brain Tumor Segmentation Using Transformer [paper] [code]
  • [SSTN] SSTN: Self-Supervised Domain Adaptation Thermal Object Detection for Autonomous Driving [paper]
  • Transformer is All You Need: Multimodal Multitask Learning with a Unified Transformer [paper] [code]
  • [CPVT] Do We Really Need Explicit Position Encodings for Vision Transformers? [paper] [code]
  • Deepfake Video Detection Using Convolutional Vision Transformer [paper]
  • Training Vision Transformers for Image Retrieval [paper]
  • [VTN] Video Transformer Network [paper]
  • [T2T-ViT] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [paper] [code]
  • [BoTNet] Bottleneck Transformers for Visual Recognition [paper]
  • [CPTR] CPTR: Full Transformer Network for Image Captioning [paper]
  • Learn to Dance with AIST++: Music Conditioned 3D Dance Generation [paper] [code]
  • [Trans2Seg] Segmenting Transparent Object in the Wild with Transformer [paper] [code]
  • Investigating the Vision Transformer Model for Image Retrieval Tasks [paper]
  • [Trear] Trear: Transformer-based RGB-D Egocentric Action Recognition [paper]
  • [VisualSparta] VisualSparta: Sparse Transformer Fragment-level Matching for Large-scale Text-to-Image Search [paper]
  • [TrackFormer] TrackFormer: Multi-Object Tracking with Transformers [paper]
  • [LETR] Line Segment Detection Using Transformers without Edges [paper]
  • [TAPE] Transformer Guided Geometry Model for Flow-Based Unsupervised Visual Odometry [paper]
  • [TRIQ] Transformer for Image Quality Assessment [paper] [code]
  • [TransTrack] TransTrack: Multiple-Object Tracking with Transformer [paper] [code]
  • [DeiT] Training data-efficient image transformers & distillation through attention [paper] [code]
  • [Pointformer] 3D Object Detection with Pointformer [paper]
  • [ViT-FRCNN] Toward Transformer-Based Object Detection [paper]
  • [Taming-transformers] Taming Transformers for High-Resolution Image Synthesis [paper] [code]
  • [SceneFormer] SceneFormer: Indoor Scene Generation with Transformers [paper]
  • [PCT] PCT: Point Cloud Transformer [paper]
  • [METRO] End-to-End Human Pose and Mesh Reconstruction with Transformers [paper]
  • [PED] DETR for Pedestrian Detection [paper]
  • [C-Tran] General Multi-label Image Classification with Transformers [paper]
  • [TSP-FCOS] Rethinking Transformer-based Set Prediction for Object Detection [paper]
  • [ACT] End-to-End Object Detection with Adaptive Clustering Transformer [paper]
  • [STTR] Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers [paper] [code]

2021

NeurIPS

  • Augmented Shortcuts for Vision Transformers [paper]
  • [YOLOS] You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection [paper] [code]
  • [CATs] Semantic Correspondence with Transformers [paper] [code]
  • [Moment-DETR] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries [paper] [code]
  • Dual-stream Network for Visual Recognition [paper] [code]
  • [Container] Container: Context Aggregation Network [paper] [code]
  • [TNT] Transformer in Transformer [paper] [code]
  • T6D-Direct: Transformers for Multi-Object 6D Pose Direct Regression [paper]

ICCV

  • Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (Marr Prize) [paper] [code]
  • [PoinTr] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers (oral) [paper] [code]
  • Paint Transformer: Feed Forward Neural Painting with Stroke Prediction (oral) [paper] [code]
  • 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds [paper]
  • [THUNDR] THUNDR: Transformer-Based 3D Human Reconstruction With Markers [paper]
  • Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding [paper]
  • [PVT] Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions [paper] [code]
  • Spatial-Temporal Transformer for Dynamic Scene Graph Generation [paper]
  • [GLiT] GLiT: Neural Architecture Search for Global and Local Image Transformer [paper]
  • [TRAR] TRAR: Routing the Attention Spans in Transformer for Visual Question Answering [paper]
  • [UniT] UniT: Multimodal Multitask Learning With a Unified Transformer [paper] [code]
  • Stochastic Transformer Networks With Linear Competing Units: Application To End-to-End SL Translation [paper]
  • Transformer-Based Dual Relation Graph for Multi-Label Image Recognition [paper]
  • [LocalTrans] LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution Homography Estimation [paper]
  • A Latent Transformer for Disentangled Face Editing in Images and Videos [paper] [code]
  • [GroupFormer] GroupFormer: Group Activity Recognition With Clustered Spatial-Temporal Transformer [paper]
  • Unified Questioner Transformer for Descriptive Question Generation in Goal-Oriented Visual Dialogue [paper]
  • [WB-DETR] WB-DETR: Transformer-Based Detector Without Backbone [paper]
  • The Animation Transformer: Visual Correspondence via Segment Matching [paper]
  • Relaxed Transformer Decoders for Direct Action Proposal Generation [paper]
  • [PPT-Net] Pyramid Point Cloud Transformer for Large-Scale Place Recognition [paper] [code]
  • Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images [paper]
  • Uncertainty-Guided Transformer Reasoning for Camouflaged Object Detection [paper]
  • Image Harmonization With Transformer [paper] [code]
  • [COTR] COTR: Correspondence Transformer for Matching Across Images [paper]
  • [MUSIQ] MUSIQ: Multi-Scale Image Quality Transformer [paper]
  • Episodic Transformer for Vision-and-Language Navigation [paper]
  • [CrackFormer] CrackFormer: Transformer Network for Fine-Grained Crack Detection [paper]
  • [HiT] HiT: Hierarchical Transformer With Momentum Contrast for Video-Text Retrieval [paper]
  • Event-Based Video Reconstruction Using Transformer [paper]
  • [STVGBert] STVGBert: A Visual-Linguistic Transformer Based Framework for Spatio-Temporal Video Grounding [paper]
  • [HiFT] HiFT: Hierarchical Feature Transformer for Aerial Tracking [paper] [code]
  • [DocFormer] DocFormer: End-to-End Transformer for Document Understanding [paper]
  • [LeViT] LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference [paper] [code]
  • [SignBERT] SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition [paper]
  • [VidTr] VidTr: Video Transformer Without Convolutions [paper]
  • [ACTOR] Action-Conditioned 3D Human Motion Synthesis with Transformer VAE [paper]
  • [Segmenter] Segmenter: Transformer for Semantic Segmentation [paper] [code]
  • [Visformer] Visformer: The Vision-friendly Transformer [paper] [code]
  • [PnP-DETR] PnP-DETR: Towards Efficient Visual Analysis with Transformers [paper] [code]
  • [VoTr] Voxel Transformer for 3D Object Detection [paper]
  • [TransVG] TransVG: End-to-End Visual Grounding with Transformers [paper]
  • [3DETR] An End-to-End Transformer Model for 3D Object Detection [paper] [code]
  • [Eformer] Eformer: Edge Enhancement based Transformer for Medical Image Denoising [paper]
  • [TransFER] TransFER: Learning Relation-aware Facial Expression Representations with Transformers [paper]
  • [Oriented RCNN] Oriented Object Detection with Transformer [paper]
  • [ViViT] ViViT: A Video Vision Transformer [paper]
  • [Stark] Learning Spatio-Temporal Transformer for Visual Tracking [paper] [code]
  • [CT3D] Improving 3D Object Detection with Channel-wise Transformer [paper]
  • [VST] Visual Saliency Transformer [paper]
  • [PiT] Rethinking Spatial Dimensions of Vision Transformers [paper] [code]
  • [CrossViT] CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification [paper] [code]
  • [PointTransformer] Point Transformer [paper]
  • [TS-CAM] TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization [paper] [code]
  • [VTs] Visual Transformers: Token-based Image Representation and Processing for Computer Vision [paper]
  • [TransDepth] Transformer-Based Attention Networks for Continuous Pixel-Wise Prediction [paper] [code]
  • [Conditional DETR] Conditional DETR for Fast Training Convergence [paper] [code]
  • [PIT] PIT: Position-Invariant Transform for Cross-FoV Domain Adaptation [paper] [code]
  • [SOTR] SOTR: Segmenting Objects with Transformers [paper] [code]
  • [SnowflakeNet] SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer [paper] [code]
  • [TransPose] TransPose: Keypoint Localization via Transformer [paper] [code]
  • [TransReID] TransReID: Transformer-based Object Re-Identification [paper] [code]
  • [CWT] Simpler is Better: Few-shot Semantic Segmentation with Classifier Weight Transformer [paper] [code]
  • Anticipative Video Transformer [paper] [code]
  • Rethinking and Improving Relative Position Encoding for Vision Transformer [paper] [code]
  • Vision Transformer with Progressive Sampling [paper] [code]
  • [SMCA] Fast Convergence of DETR with Spatially Modulated Co-Attention [paper] [code]
  • [AutoFormer] AutoFormer: Searching Transformers for Visual Recognition [paper] [code]

CVPR

  • Diverse Part Discovery: Occluded Person Re-identification with Part-Aware Transformer [paper]
  • [HOTR] HOTR: End-to-End Human-Object Interaction Detection with Transformers (oral) [paper]
  • [TransFuser] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [paper] [code]
  • Pose Recognition with Cascade Transformers [paper]
  • Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning [paper]
  • [LoFTR] LoFTR: Detector-Free Local Feature Matching with Transformers [paper] [code]
  • Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers [paper]
  • [SETR] Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [paper] [code]
  • [TransT] Transformer Tracking [paper] [code]
  • Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking (oral) [paper]
  • [VisTR] End-to-End Video Instance Segmentation with Transformers [paper]
  • Transformer Interpretability Beyond Attention Visualization [paper] [code]
  • [IPT] Pre-Trained Image Processing Transformer [paper]
  • [UP-DETR] UP-DETR: Unsupervised Pre-training for Object Detection with Transformers [paper]
  • [IQT] Perceptual Image Quality Assessment with Transformers (workshop) [paper]
  • High-Resolution Complex Scene Synthesis with Transformers (workshop) [paper]

ICML

  • Generative Video Transformer: Can Objects be the Words? [paper]
  • [GANsformer] Generative Adversarial Transformers [paper] [code]

ICRA

  • [NDT-Transformer] NDT-Transformer: Large-Scale 3D Point Cloud Localisation using the Normal Distribution Transform Representation [paper]

ICLR

  • [VTNet] VTNet: Visual Transformer Network for Object Goal Navigation [paper]
  • [Vision Transformer] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [paper] [code] (a minimal patch-embedding sketch follows this list)
  • [Deformable DETR] Deformable DETR: Deformable Transformers for End-to-End Object Detection [paper] [code]
  • [LambdaNetworks] LambdaNetworks: Modeling Long-Range Interactions Without Attention [paper] [code]
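
Most backbones in this list start from the ViT recipe above: the image is split into fixed-size patches, each patch is linearly projected to a token, position embeddings are added, and the token sequence is fed to a standard Transformer encoder. Below is a minimal, hedged sketch of the patch-embedding step; the function name, the 16x16 patch size, and the random projection matrix are illustrative assumptions, not the reference implementation.

```python
# Illustrative ViT-style patch embedding; names and sizes are assumptions,
# not the official ViT code.
import numpy as np

def patchify(image, patch_size=16):
    """Split an (H, W, C) image into (num_patches, patch_size*patch_size*C) vectors."""
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    return (image
            .reshape(H // patch_size, patch_size, W // patch_size, patch_size, C)
            .transpose(0, 2, 1, 3, 4)                # bring the two patch-grid axes to the front
            .reshape(-1, patch_size * patch_size * C))

img = np.random.rand(224, 224, 3)                        # toy "image"
tokens = patchify(img)                                   # (196, 768): 14x14 patches of 16x16x3
W_proj = 0.02 * np.random.randn(tokens.shape[1], 768)    # stand-in for the learned linear projection
embeddings = tokens @ W_proj                             # patch tokens fed to the Transformer encoder
print(tokens.shape, embeddings.shape)                    # (196, 768) (196, 768)
```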

ACM MM

  • Video Transformer for Deepfake Detection with Incremental Learning [paper]
  • [HAT] HAT: Hierarchical Aggregation Transformers for Person Re-identification [paper]
  • Token Shift Transformer for Video Classification [paper] [code]
  • [DPT] DPT: Deformable Patch-based Transformer for Visual Recognition [paper] [code]

MICCAI

  • [UTNet] UTNet: A Hybrid Transformer Architecture for Medical Image Segmentation [paper]
  • [MedT] Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [paper] [code]
  • [MCTrans] Multi-Compound Transformer for Accurate Biomedical Image Segmentation [paper]
  • [PNS-Net] Progressively Normalized Self-Attention Network for Video Polyp Segmentation [paper] [code]
  • [MBT-Net] A Multi-Branch Hybrid Transformer Network for Corneal Endothelial Cell Segmentation [paper]

ISIE

  • VT-ADL: A Vision Transformer Network for Image Anomaly Detection and Localization [paper]

CoRL

  • [DETR3D] DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries [paper]

IJCAI

  • Medical Image Segmentation using Squeeze-and-Expansion Transformers [paper]

IROS

  • [YOGO] You Only Group Once: Efficient Point-Cloud Processing with Token Representation and Relation Inference Module [paper] [code]
  • [PTT] PTT: Point-Track-Transformer Module for 3D Single Object Tracking in Point Clouds [paper] [code]

WACV

  • [LSTR] End-to-end Lane Shape Prediction with Transformers [paper] [code]

ICDAR

  • Vision Transformer for Fast and Efficient Scene Text Recognition [paper]

2020

  • [DETR] End-to-End Object Detection with Transformers (ECCV) [paper] [code]
  • [FPT] Feature Pyramid Transformer (CVPR) [paper] [code]
  • [TTSR] Learning Texture Transformer Network for Image Super-Resolution (CVPR) [paper] [code]
  • [STTN] Learning Joint Spatial-Temporal Transformations for Video Inpainting (ECCV) [paper] [code]

Acknowledgement

Thanks to Awesome-Crowd-Counting for the README template.

Owner

dkliang — a Master's student in the VLR group at HUST.