Implementation of the Swin Transformer in PyTorch.

Swin Transformer - PyTorch

Implementation of the Swin Transformer architecture. This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (86.4 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.
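
To make the shifted-window scheme concrete, here is a minimal sketch of the two core operations (window partition and cyclic shift); it is illustrative only and not the exact code of this repository:

import torch
from einops import rearrange

x = torch.randn(1, 56, 56, 96)  # (batch, height, width, channels), e.g. stage-1 resolution
window_size = 7

# Partition into non-overlapping 7x7 windows; self-attention is computed
# independently inside each window, which is what makes the cost linear in image size.
windows = rearrange(x, 'b (nh w1) (nw w2) c -> (b nh nw) (w1 w2) c',
                    w1=window_size, w2=window_size)
print(windows.shape)  # torch.Size([64, 49, 96]): 8x8 windows of 49 patches each

# Cyclic shift by floor(window_size / 2) before the next block, so the following
# window partition creates windows that straddle the previous window borders.
shift = window_size // 2
x_shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))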

This is NOT the official repository of the Swin Transformer. At the time of writing, the authors' official code is not yet available; once released, it can be found at: https://github.com/microsoft/Swin-Transformer.

All credits go to the authors Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin and Baining Guo.

Install

$ pip install swin-transformer-pytorch

or (if you clone the repository)

$ pip install -r requirements.txt

Usage

import torch
from swin_transformer_pytorch import SwinTransformer

net = SwinTransformer(
    hidden_dim=96,
    layers=(2, 2, 6, 2),
    heads=(3, 6, 12, 24),
    channels=3,
    num_classes=3,
    head_dim=32,
    window_size=7,
    downscaling_factors=(4, 2, 2, 2),
    relative_pos_embedding=True
)
dummy_x = torch.randn(1, 3, 224, 224)
logits = net(dummy_x)  # (1,3)
print(net)
print(logits)

Parameters

  • hidden_dim: int.
    What hidden dimension you want to use for the architecture, denoted C in the original paper.
  • layers: 4-tuple of ints divisible by 2.
    How many layers in each stage to apply. Every int should be divisible by two because we are always applying a regular and a shifted SwinBlock together.
  • heads: 4-tuple of ints
    How many heads in each stage to apply.
  • channels: int.
    Number of channels of the input.
  • num_classes: int.
    Number of classes the output should have.
  • head_dim: int.
    What dimension each head should have.
  • window_size: int.
    What window size to use; make sure that after each downscaling the image dimensions are still divisible by the window size (see the sketch after this list).
  • downscaling_factors: 4-tuple of ints.
    What downscaling factor to use in each stage. Make sure image dimension is large enough for downscaling factors.
  • relative_pos_embedding: bool.
    Whether to use learnable relative position embedding (2M-1)x(2M-1) or full positional embeddings (M²xM²).
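
Since window_size has to divide the feature-map side at every stage, a small sanity check can catch bad configurations early. A minimal sketch with a made-up helper name, assuming the default parameters above:

def check_window_size(image_size, window_size=7, downscaling_factors=(4, 2, 2, 2)):
    """Assert that every stage resolution is divisible by the window size."""
    size = image_size
    for i, factor in enumerate(downscaling_factors):
        assert size % factor == 0, f"stage {i}: {size} not divisible by factor {factor}"
        size //= factor
        assert size % window_size == 0, \
            f"stage {i}: resolution {size} not divisible by window_size {window_size}"

check_window_size(224)    # fine: stage resolutions are 56, 28, 14, 7
# check_window_size(768)  # fails at stage 0: 192 is not divisible by 7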

TODO

  • Adjust code for and validate on ImageNet-1K and COCO 2017

References

Part of the code is adapted from the PyTorch VisionTransformer repository https://github.com/lucidrains/vit-pytorch, which provides a very clean VisionTransformer implementation to start from.

Citations

@misc{liu2021swin,
      title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, 
      author={Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo},
      year={2021},
      eprint={2103.14030},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • about window-size

    about window-size

    Dear Sir, thank you very much for your great work. I would like to ask if you have any suggestions on how to set the window size. For a 224x224 input, a window size of 7 is reasonable because every stage resolution is divisible by 7, but for other sizes, such as 768x768 in Cityscapes, 7 will undoubtedly raise an error, since 768 / 32 = 24. So the window setting looks very subtle. The closest value is 8, but does the window size behave like a convolution kernel, where odd numbers work better? Also, is it possible to set different window sizes at different stages? That seems feasible for non-regular image sizes. Since the window size is a very critical hyperparameter that determines the receptive field and the amount of computation, I would like to request your opinion, thanks!
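
    For reference, the feasible window sizes can be enumerated directly: with the default downscaling factors the stage resolutions for a 768x768 input are 192, 96, 48 and 24, so a valid window size must divide 24 (plain arithmetic, not repository code):

    resolutions = [768 // 4, 768 // 8, 768 // 16, 768 // 32]  # [192, 96, 48, 24]
    valid = [w for w in range(2, 25) if all(r % w == 0 for r in resolutions)]
    print(valid)  # [2, 3, 4, 6, 8, 12, 24]

    Unlike convolution kernels, windows tile the feature map exactly rather than being centered on a pixel, so odd sizes are not required; 8 is a natural choice here.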

    opened by huixiancheng 9
  • relative pos embedding errs out with

    relative pos embedding errs out with "IndexError: tensors used as indices must be long, byte or bool tensors"

    Very big thanks for making this implementation! I just upgraded to the relative pos embedding update from an hour ago, and when trying to train I get this type error.

    ---> 32         y_pred = model(images)
         33         #print(f" y_pred = {y_pred}")
         34         #print(f" y_pred shape = {y_pred.shape}")
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, img)
        229 
        230     def forward(self, img):
    --> 231         x = self.stage1(img)
        232         x = self.stage2(x)
        233         x = self.stage3(x)
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x)
        189         x = self.patch_partition(x)
        190         for regular_block, shifted_block in self.layers:
    --> 191             x = regular_block(x)
        192             x = shifted_block(x)
        193         return x.permute(0, 3, 1, 2)
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x)
        148 
        149     def forward(self, x):
    --> 150         x = self.attention_block(x)
        151         x = self.mlp_block(x)
        152         return x
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x, **kwargs)
         21 
         22     def forward(self, x, **kwargs):
    ---> 23         return self.fn(x, **kwargs) + x
         24 
         25 
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x, **kwargs)
         31 
         32     def forward(self, x, **kwargs):
    ---> 33         return self.fn(self.norm(x), **kwargs)
         34 
         35 
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x)
        116 
        117         if self.relative_pos_embedding:
    --> 118             dots += self.pos_embedding[self.relative_indices[:, :, 0], self.relative_indices[:, :, 1]]
        119         else:
        120             dots += self.pos_embedding
    
    IndexError: tensors used as indices must be long, byte or bool tensors
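
    The indexing tensor is apparently stored as a float buffer. A likely fix, sketched against the structure visible in the traceback (the helper name is assumed), is to cast it to an integer dtype once, where it is created:

    # in WindowAttention.__init__, when relative_pos_embedding is enabled (sketch)
    self.relative_indices = get_relative_distances(window_size) + window_size - 1
    self.relative_indices = self.relative_indices.long()  # indices must be long, byte or bool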
    
    opened by lessw2020 8
  • fail to run the code

    fail to run the code

    Hi, I'm interested in your code! But when I run the example, I get:

    Traceback (most recent call last):
      File "D:/Code/Pytorch/swin-transformer-pytorch-0.4/example.py", line 16, in <module>
        logits = net(dummy_x)  # (1,3)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 219, in forward
        x = self.stage1(img)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 190, in forward
        x = regular_block(x)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 149, in forward
        x = self.attention_block(x)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 22, in forward
        return self.fn(x, **kwargs) + x
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 32, in forward
        return self.fn(self.norm(x), **kwargs)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 117, in forward
        dots += self.pos_embedding[self.relative_indices[:, :, 0], self.relative_indices[:, :, 1]]
    IndexError: tensors used as indices must be long, byte or bool tensors

    And when I change the type to long, the code raises another error.

    Traceback (most recent call last):
      File "D:/Code/Pytorch/swin-transformer-pytorch-0.4/example.py", line 16, in <module>
        logits = net(dummy_x)  # (1,3)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 219, in forward
        x = self.stage1(img)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 188, in forward
        x = self.patch_partition(x)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 164, in forward
        x = self.patch_merge(x).view(b, -1, new_h, new_w).permute(0, 2, 3, 1)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\fold.py", line 295, in forward
        self.padding, self.stride)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\functional.py", line 4313, in unfold
        return torch._C._nn.im2col(input, _pair(kernel_size), _pair(dilation), _pair(padding), _pair(stride))
    RuntimeError: "im2col_out_cpu" not implemented for 'Long'
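
    The second error suggests the cast was applied to the input rather than the indices: im2col (used by nn.Unfold for patch merging) has no kernel for Long tensors, so the image must stay floating point. Only the index buffer needs the cast, e.g. self.relative_indices = self.relative_indices.long().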

    opened by ShujinW 5
  • Cyclic shift with masking

    Cyclic shift with masking

    Hello sir, I'm trying to understand the "efficient batch computation" which the authors suggest. Probably because of my limited knowledge, it was hard to grasp how it works. Your implementation really helped me to understand its mechanism, thanks a lot!

    Here's my question: it seems the masked area of q * k / sqrt(d) vanishes during the computation of self-attention. I'm not sure that I understood the code correctly, but is this originally intended in the paper? I'm wondering whether each sub-window's self-attention might need to be computed before reversing.

    Apologies if I misunderstood something, and thanks again!
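
    For context, the vanishing appears to be intended: the mask adds -inf to the attention logits between patches that belong to different sub-windows after the cyclic shift, so the softmax assigns those pairs zero weight and each sub-window effectively attends only to itself, without splitting the batch into more windows. A minimal sketch of such an upper/lower boundary mask, assuming window_size=7:

    import torch

    window_size, displacement = 7, 3  # displacement = window_size // 2

    mask = torch.zeros(window_size ** 2, window_size ** 2)
    # patches that wrapped around from the top must not attend to the original
    # bottom patches, and vice versa
    mask[-displacement * window_size:, :-displacement * window_size] = float('-inf')
    mask[:-displacement * window_size, -displacement * window_size:] = float('-inf')

    # attn = (dots + mask).softmax(dim=-1)  # masked logits -> zero attention weight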

    opened by Hayoung93 4
  • Shifting attention-calculating windows

    Shifting attention-calculating windows

    Hello, sir. A question popped up again, unfortunately.

    I've followed your shifting code, and it seems to differ from (my comprehension of) the paper. I understood the original paper's window shifting as the black arrow in the image below (self-attention is calculated between elements inside the bold lines). The left red arrow points to the result of patch-wise rolling, and the right red arrow points to the result of rolling the entire feature map. In my opinion, self-attention should be computed according to the right-top figure; therefore, the boxes at the right-bottom should be used (the green dotted line separates sub-windows), which preserve each region of the right-top figure.

    Please let me know if I misunderstood your code or something in the paper. Thanks a lot!

    Additionally, this is how I mimicked your code:

    import torch
    from einops import rearrange
    A = torch.Tensor(list(range(1, 17))).view(1, 4, 4)
    A_patched = A.view(4, 2, 2).permute(1, 2, 0).view(1, 2, 2, 4)
    A_patched_rolled = torch.roll(A_patched, shifts=(-1, -1), dims=(1, 2))
    A_rearranged = rearrange(A, 'a (b c) (d e)->a (b d) (c e)', b=2, d=2)
    A_rearranged_rolled = torch.roll(A_rearranged, shifts=(-1, -1), dims=(1, 2))
    A_rearranged_rolled2 = torch.roll(A_rearranged, shifts=(1, 1), dims=(1, 2))
    

    where A can be considered a 4x4 feature map (though the element order does not match the image above), A_patched is a divided version of A, and A_patched_rolled is a patch-wise shifted version of A_patched, following torch.roll(x, shifts=(self.displacement, self.displacement), dims=(1, 2)) in your code. A_rearranged is rearranged to match the image above.


    >>> A
    tensor([[[ 1.,  2.,  3.,  4.],
             [ 5.,  6.,  7.,  8.],
             [ 9., 10., 11., 12.],
             [13., 14., 15., 16.]]])
    >>> A_patched
    tensor([[[[ 1.,  5.,  9., 13.],
              [ 2.,  6., 10., 14.]],
    
             [[ 3.,  7., 11., 15.],
              [ 4.,  8., 12., 16.]]]])
    >>> A_patched_rolled
    tensor([[[[ 4.,  8., 12., 16.],
              [ 3.,  7., 11., 15.]],
    
             [[ 2.,  6., 10., 14.],
              [ 1.,  5.,  9., 13.]]]])
    >>> A_rearranged
    tensor([[[ 1.,  2.,  5.,  6.],
             [ 3.,  4.,  7.,  8.],
             [ 9., 10., 13., 14.],
             [11., 12., 15., 16.]]])
    >>> A_rearranged_rolled
    tensor([[[ 4.,  7.,  8.,  3.],
             [10., 13., 14.,  9.],
             [12., 15., 16., 11.],
             [ 2.,  5.,  6.,  1.]]])
    >>> A_rearranged_rolled2
    tensor([[[16., 11., 12., 15.],
             [ 6.,  1.,  2.,  5.],
             [ 8.,  3.,  4.,  7.],
             [14.,  9., 10., 13.]]])
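
    For reference, after the patch partition in this implementation the tensor is laid out as (B, H, W, C), so torch.roll(x, shifts=(displacement, displacement), dims=(1, 2)) rolls the whole feature map spatially, i.e. it corresponds to the A_rearranged_rolled / A_rearranged_rolled2 variants above (depending on the sign of displacement), which is the paper's cyclic shift; the windows are only formed afterwards, inside the attention rearrange.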
    
    opened by Hayoung93 2
  • How to use for generation work

    How to use for generation work

    Thanks for your great work. I work on image generation tasks. In my opinion, the current Swin Transformer is an encoder structure. Is there a corresponding Swin Transformer that can be used as a decoder?

    opened by yinyiyu 2
  • Runtime error

    Runtime error

    I'm running into an error in your code at line 117:

        dots += self.pos_embedding[self.relative_indices[:, :, 0], self.relative_indices[:, :, 1]]

    IndexError: tensors used as indices must be long, byte or bool tensors

    opened by QinchengZhang 1
  • A question about qk_scale

    A question about qk_scale

    Hello @berniwal , I have a question about this: https://github.com/berniwal/swin-transformer-pytorch/blob/c921ebf914c6ea9734bb260ada395e3746c85402/swin_transformer_pytorch/swin_transformer.py#L76

    What is the function of the scale? I can't understand why this is done.

    Best regards
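
    For reference, this is the standard scaled dot-product attention factor: head_dim ** -0.5 keeps the variance of the q.k logits near 1 regardless of the head dimension, so the softmax does not saturate. A quick check:

    import torch
    d = 32  # head_dim
    q, k = torch.randn(1000, d), torch.randn(1000, d)
    print((q @ k.T).var())                # roughly d: unscaled logit variance grows with head_dim
    print(((q @ k.T) * d ** -0.5).var())  # roughly 1: scaled logits keep the softmax well-behaved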

    opened by Sample-design-alt 0
  • Issues related to patch merging implementation

    Issues related to patch merging implementation

    In this repository, patch merging is implemented with nn.Unfold, but it appears to behave differently from the official implementation.

    https://github.com/microsoft/Swin-Transformer/blob/6ded2577413b68cbbd89f08391465788ed73030e/models/swin_transformer.py#L291

    Is there something I'm missing out on?
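
    For comparison: the official PatchMerging concatenates each 2x2 neighborhood into a 4C vector, applies LayerNorm and then a Linear(4C, 2C) reduction, whereas the nn.Unfold-based version here gathers the same 2x2 neighborhoods but orders the concatenated features differently and has no LayerNorm, so the two should match in receptive field but not be weight-compatible.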

    opened by lee-gwang 1
  • why is the create_mask function 49*49?

    why is the create_mask function 49*49?

    def create_mask(window_size, displacement, upper_lower, left_right):
        mask = torch.zeros(window_size ** 2, window_size ** 2)

    It is 49*49 throughout the whole Swin network. Why?
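
    For what it's worth, 49 is just window_size ** 2: with window_size=7 each window contains 7 x 7 = 49 patches, and the mask scores every patch in a window against every other patch in the same window, hence 49 x 49. A different window_size changes the mask shape accordingly.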

    opened by henbucuoshanghai 1
  • apply to other dataset

    apply to other dataset

    Hello, thanks very much for the work you have done. I have a question: how can I apply this code to train a ViT model on another dataset, and how should I adjust the parameters?

    opened by jieweilai 0
  • deeplabv3 + swintransformer

    deeplabv3 + swintransformer

    I tried this Swin Transformer as the backbone for DeepLabV3 (https://github.com/VainF/DeepLabV3Plus-Pytorch), and the following errors occurred:

    Exception has occurred: EinopsError
    Error while processing rearrange-reduction pattern "b (nw_h w_h) (nw_w w_w) (h d) -> b h (nw_h nw_w) (w_h w_w) d".
    Input tensor shape: torch.Size([1, 104, 104, 96]). Additional info: {'h': 3, 'w_h': 7, 'w_w': 7}.
    Shape mismatch, can't divide axis of length 104 in chunks of 7

    During handling of the above exception, another exception occurred:

    File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 111, in lambda t: rearrange(t, 'b (nw_h w_h) (nw_w w_w) (h d) -> b h (nw_h nw_w) (w_h w_w) d', File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 110, in forward q, k, v = map( File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 32, in forward return self.fn(self.norm(x), **kwargs) File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 22, in forward return self.fn(x, **kwargs) + x File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 149, in forward

    Thank you for your answer.
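
    For reference, this looks like the window-size divisibility constraint from the parameters section: the 104 in the error is the stage-1 resolution (416 / 4), and 104 is not divisible by the window size 7. With factors (4, 2, 2, 2) the stage resolutions are 104, 52, 26 and 13, so for window_size=7 the input would need to be padded or resized to a multiple of 7 * 32 = 224 (for example 448x448).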

    opened by TangYong1975 1