A fastai/PyTorch package for unpaired image-to-image translation.

Overview

Unpaired image-to-image translation

A fastai/PyTorch package for unpaired image-to-image translation, currently featuring CycleGAN, DualGAN, and GANILLA implementations.

This is a package for training and testing unpaired image-to-image translation models. It currently includes the CycleGAN, DualGAN, and GANILLA models; other models will be implemented in the future.

This package uses fastai to accelerate deep learning experimentation. Additionally, nbdev was used to develop the package and produce documentation based on a series of notebooks.

Install

To install, use pip:

pip install git+https://github.com/tmabraham/UPIT.git

The package uses torch 1.7.1, torchvision 0.8.2, and fastai 2.3.0 (and their dependencies). nbdev 1.1.13 is also required if you would like to add features to the package. Finally, gradio 1.1.6 is used for creating a web-app model interface.
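If you need to reproduce this environment exactly, pinning the versions listed above should work (a sketch; adjust for your own CUDA/Python setup):

pip install torch==1.7.1 torchvision==0.8.2 fastai==2.3.0 nbdev==1.1.13 gradio==1.1.6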

How to use

Training a CycleGAN model is easy with UPIT! Given trainA_path and trainB_path, the paths to the folders of images from the two domains, you can do the following:

#cuda
from upit.data.unpaired import *
from upit.models.cyclegan import *
from upit.train.cyclegan import *
dls = get_dls(trainA_path, trainB_path)  # unpaired DataLoaders from the two image folders
cycle_gan = CycleGAN(3, 3, 64)  # 3 input channels, 3 output channels, 64 filters
learn = cycle_learner(dls, cycle_gan, opt_func=partial(Adam, mom=0.5, sqr_mom=0.999))
learn.fit_flat_lin(100, 100, 2e-4)  # flat LR of 2e-4, then linear decay (100 + 100 epochs)

The GANILLA model differs only in its generator architecture (which is meant to strike a better balance between style and content), so the same cycle_learner can be used.

#cuda
from upit.models.ganilla import *
ganilla = GANILLA(3, 3, 64)
learn = cycle_learner(dls, ganilla, opt_func=partial(Adam, mom=0.5, sqr_mom=0.999))
learn.fit_flat_lin(100, 100, 2e-4)

Finally, we provide separate functions/classes for the DualGAN model and its training:

#cuda
from upit.models.dualgan import *
from upit.train.dualgan import *
dual_gan = DualGAN(3, 64, 3)
learn = dual_learner(dls, dual_gan, opt_func=RMSProp)
learn.fit_flat_lin(100, 100, 2e-4)

Additionally, we provide metrics for quantitative evaluation of the models, as well as experiment tracking with Weights and Biases. Check the documentation for more information!
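For example, one way to hook up Weights and Biases tracking is through fastai's generic WandbCallback, since cycle_learner returns a regular fastai Learner (as the examples above show). This is a minimal sketch; the project name is hypothetical, and UPIT's own documented integration may differ:

#cuda
import wandb
from fastai.callback.wandb import WandbCallback

wandb.init(project="upit-cyclegan")  # hypothetical project name
learn = cycle_learner(dls, cycle_gan, opt_func=partial(Adam, mom=0.5, sqr_mom=0.999))
learn.add_cb(WandbCallback(log_preds=False))  # skip fastai's default prediction logging, which assumes paired targets
learn.fit_flat_lin(100, 100, 2e-4)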

Citing UPIT

If you use UPIT in your research, please use the following BibTeX entry:

@Misc{UPIT,
    author =       {Tanishq Mathew Abraham},
    title =        {UPIT - A fastai/PyTorch package for unpaired image-to-image translation.},
    howpublished = {Github},
    year =         {2021},
    url =          {https://github.com/tmabraham/UPIT}
}
Comments
  • AttributeError: 'Learner' object has no attribute 'pred'

    AttributeError: 'Learner' object has no attribute 'pred'

    Hi. I am getting the following error:

    from upit.data.unpaired import *
    from upit.models.cyclegan import *
    from upit.train.cyclegan import *
    from fastai.vision.all import *
    
    horse2zebra = untar_data('https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/horse2zebra.zip')
    
    
    folders = horse2zebra.ls().sorted()
    trainA_path = folders[2]
    trainB_path = folders[3]
    testA_path = folders[0]
    testB_path = folders[1]
    
    dls = get_dls(trainA_path, trainB_path,num_A=100)
    cycle_gan = CycleGAN(3,3,64)
    learn = cycle_learner(dls, cycle_gan,show_img_interval=1)
    learn.show_training_loop()
    learn.lr_find()
    
    AttributeError: 'Learner' object has no attribute 'pred'
    
    opened by turgut090 8
  • 'str' object has no attribute

    'str' object has no attribute "__stored_args__" (ISSUE #7)

I opened an issue (#7) on 03/09/2020. The problem and the solution are described below, along with the Colab notebooks.

On 03/09/2020, a commit was made to fastai/fastcore's utils.py (https://github.com/fastai/fastcore/commit/ea1c33f1c3543e6e4403b1d3b7702a98471f3515, line 86) which means we no longer need to pass "self" when calling store_attr (the attribute names are simply passed as comma-separated names).

I made a small change in upit/train/cyclegan.py so that it follows the updated fastcore/utils.py structure: I removed the self argument from the store_attr calls in cyclegan.py.

    The error can be seen here

    Screenshot from 2020-09-04 22-27-26

    The colab link: https://colab.research.google.com/drive/1lbXhX-bWvTsQcLb2UUFI0UkRpvx_WK2Y?usp=sharing

    After the change, the error does not occur as shown here.

Screenshot from 2020-09-04 22-50-17

The Colab link: https://colab.research.google.com/drive/13rrNLBgDulgeHclovxJNtQVbVYcGMKm0?usp=sharing

    opened by lohithmunakala 7
  • Expose tfms in dataset generation

    Expose tfms in dataset generation

    Hey there,

I think it would be a good idea to expose the tfms of the Datasets in the get_dls function, so users can set their own. https://github.com/tmabraham/UPIT/blob/c6c769cf8cedeec42865deddba26c2e413772303/upit/data/unpaired.py#L30

The same goes for the dataloaders' batch_tfms (e.g., if I want to disable flipping, I currently have to rewrite the dataloaders).

    https://github.com/tmabraham/UPIT/blob/c6c769cf8cedeec42865deddba26c2e413772303/upit/data/unpaired.py#L33

    opened by hno2 4
  • How do I turn fake_A and fake_B into images and save them?

    How do I turn fake_A and fake_B into images and save them?

    I would like to see what fake_A and fake_B look like at this step in the process saved as images. I can't seem to figure out how to convert them properly.

def forward(self, output, target):
    """
    Forward function of the CycleGAN loss function. The generated images are
    passed in as output (which comes from the model) and the generator loss is
    returned.
    """
    fake_A, fake_B, idt_A, idt_B = output
    # Save and look at png images of fake_A and fake_B here
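One generic way to dump those tensors as PNGs at that point is sketched below (it assumes the generator outputs are normalized to [-1, 1], as in the usual CycleGAN setup; save_fake is a hypothetical helper, not part of UPIT):

import torchvision.transforms as T

to_pil = T.ToPILImage()

def save_fake(img_tensor, path):
    # img_tensor: [C, H, W] float tensor in [-1, 1]; rescale to [0, 1] and save
    img = img_tensor.detach().cpu() / 2 + 0.5
    to_pil(img.clamp(0, 1)).save(path)

# e.g. inside forward(), for the first image of the batch:
# save_fake(fake_A[0], "fake_A.png")
# save_fake(fake_B[0], "fake_B.png")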

    opened by rbunn80110 3
  • Possible typo in the loss

    Possible typo in the loss

    https://github.com/tmabraham/UPIT/blob/1f272eac299181348c31988289c1420936cb580b/upit/train/cyclegan.py#L132

    Was this intended or was this line supposed to be: self.learn.loss_func.D_B_loss = loss_D_B.detach().cpu() ?

    opened by many-hats 2
  • Inference - Can not load state_dict

    Inference - Can not load state_dict

    Hey,

    me again. Sorry to bother you again!

So I am trying to do inference on a trained model (with default values). I exported the generator with the export_generator function. Now I am trying to load my generator as shown in the Web App Example.

But I get errors when loading the state_dict. The state dict seems to have extra keys for nine extra layers, if I understand the error message correctly:

    Error Message

    model.load_state_dict(torch.load("generator.pth", map_location=device))
      File "/Applications/Utilities/miniconda3/envs/ml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for Sequential:
            Missing key(s) in state_dict: "10.conv_block.5.weight", "10.conv_block.5.bias", "11.conv_block.5.weight", "11.conv_block.5.bias", "12.conv_block.5.weight", "12.conv_block.5.bias", "13.conv_block.5.weight", "13.conv_block.5.bias", "14.conv_block.5.weight", "14.conv_block.5.bias", "15.conv_block.5.weight", "15.conv_block.5.bias", "16.conv_block.5.weight", "16.conv_block.5.bias", "17.conv_block.5.weight", "17.conv_block.5.bias", "18.conv_block.5.weight", "18.conv_block.5.bias". 
            Unexpected key(s) in state_dict: "10.conv_block.6.weight", "10.conv_block.6.bias", "11.conv_block.6.weight", "11.conv_block.6.bias", "12.conv_block.6.weight", "12.conv_block.6.bias", "13.conv_block.6.weight", "13.conv_block.6.bias", "14.conv_block.6.weight", "14.conv_block.6.bias", "15.conv_block.6.weight", "15.conv_block.6.bias", "16.conv_block.6.weight", "16.conv_block.6.bias", "17.conv_block.6.weight", "17.conv_block.6.bias", "18.conv_block.6.weight", "18.conv_block.6.bias".
    

    Minimal Working Example

    
    import torch
    from upit.models.cyclegan import resnet_generator
    import torchvision.transforms
    from PIL import Image
    
    
    device = torch.device("cpu")
    model = resnet_generator(ch_in=3, ch_out=3)
    model.load_state_dict(torch.load("generator.pth", map_location=device))
    model.eval()
    
    
    totensor = torchvision.transforms.ToTensor()
    normalize_fn = torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    topilimage = torchvision.transforms.ToPILImage()
    
    
    def predict(input):
        im = normalize_fn(totensor(input))
        print(im.shape)
        preds = model(im.unsqueeze(0)) / 2 + 0.5
        print(preds.shape)
        return topilimage(preds.squeeze(0).detach().cpu())
    
    
    im = predict(Image.open("test.jpg"))
    im.save("out.jpg")
    

    Thanks again for your support!

    opened by hno2 2
  • Disable Identity Loss

    Disable Identity Loss

Hey, thanks for your awesome work. If I want to set l_idt of the CycleGANLoss to zero, how would I do this? Can I pass some argument to the cycle_learner? On a quick look this seems to be hardcoded in the cycle_learner, right? So I would have to write "my own" cycle_learner?

    Thanks for the answers to my - most likely - stupid questions!

    opened by hno2 2
  • How to show images after fit?

    How to show images after fit?

Hi. The learner has a method learn.progress.show_cycle_gan_imgs. However, how do I plot it with matplotlib's plt.show() if I am using the Python REPL? There is an argument event_name in learn.progress.show_cycle_gan_imgs. I would like to do it after fit:

    >>> learn.fit_flat_lin(1,1,2e-4)
    epoch     train_loss  id_loss_A  id_loss_B  gen_loss_A  gen_loss_B  cyc_loss_A  cyc_loss_B  D_A_loss  D_B_loss  time    
    /home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastprogress/fastprogress.py:74: UserWarning: Your generator is empty.
      warn("Your generator is empty.")
    0         10.809340   1.621755   1.690260   0.420364    0.452442    3.359353    3.504819    0.370692  0.370692  00:08     
    1         9.847495    1.283465   1.510985   0.353303    0.349454    2.682495    3.135504    0.253919  0.253919  00:07 
    

    https://github.com/tmabraham/UPIT/blob/020f8e2d8dbab6824cd4bef2690ea93e3ff69a6e/upit/train/cyclegan.py#L167-L176

    opened by turgut090 2
  • How to make predictions?

    How to make predictions?

With an image classifier I usually do:

    test_dl = object.dls.test_dl("n02381460_1052.jpg") # object is model/learner
    predictions = object.get_preds(dl = test_dl)
    

    However, it throws:

      TypeError: 'NoneType' object cannot be interpreted as an integer 
    
    Traceback (most recent call last):
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
        try:       self(f'before_{event_type}')       ;f()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 161, in all_batches
        for o in enumerate(self.dl): self.one_batch(*o)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 179, in one_batch
        self._with_events(self._do_one_batch, 'batch', CancelBatchException)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
        try:       self(f'before_{event_type}')       ;f()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 133, in __call__
        def __call__(self, event_name): L(event_name).map(self._call_one)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 226, in map
        def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 537, in map_ex
        return list(res)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 527, in __call__
        return self.fn(*fargs, **kwargs)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in _call_one
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in <listcomp>
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 44, in __call__
        if self.run and _run: res = getattr(self, event_name, noop)()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/upit/train/cyclegan.py", line 112, in before_batch
        self.learn.xb = (self.learn.xb[0],self.learn.yb[0]),
    IndexError: tuple index out of range
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/torch_core.py", line 268, in to_concat
        try:    return retain_type(torch.cat(xs, dim=dim), xs[0])
    TypeError: expected Tensor as element 0 in argument 0, but got NoneType
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<string>", line 2, in <module>
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 235, in get_preds
        self._do_epoch_validate(dl=dl)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 188, in _do_epoch_validate
        with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 157, in _with_events
        finally:   self(f'after_{event_type}')        ;final()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 133, in __call__
        def __call__(self, event_name): L(event_name).map(self._call_one)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 226, in map
        def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 537, in map_ex
        return list(res)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 527, in __call__
        return self.fn(*fargs, **kwargs)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in _call_one
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in <listcomp>
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 44, in __call__
        if self.run and _run: res = getattr(self, event_name, noop)()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 123, in after_validate
        if not self.save_preds: self.preds   = detuplify(to_concat(self.preds, dim=self.concat_dim))
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/torch_core.py", line 270, in to_concat
        for i in range_of(o_)) for o_ in xs], L())
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/torch_core.py", line 270, in <listcomp>
        for i in range_of(o_)) for o_ in xs], L())
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 425, in range_of
        return list(range(a,b,step) if step is not None else range(a,b) if b is not None else range(a))
    TypeError: 'NoneType' object cannot be interpreted as an integer
    
    opened by turgut090 2
  • 'str' object has no attribute

    'str' object has no attribute "__stored_args__" (ISSUE #7)

I opened an issue (https://github.com/tmabraham/UPIT/issues/7) on 03/09/2020. The problem and the solution are described below, along with the Colab notebooks.

On 03/09/2020, a commit was made to fastai/fastcore's utils.py (https://github.com/fastai/fastcore/commit/ea1c33f1c3543e6e4403b1d3b7702a98471f3515, line 86) which means we no longer need to pass "self" when calling store_attr (the attribute names are simply passed as comma-separated names).

I made a small change in upit/train/cyclegan.py so that it follows the updated fastcore/utils.py structure: I removed the self argument from the store_attr calls in cyclegan.py.

    The error can be seen here

    Screenshot from 2020-09-04 22-27-26

The Colab link: https://colab.research.google.com/drive/1lbXhX-bWvTsQcLb2UUFI0UkRpvx_WK2Y?usp=sharing

    After the change, the error does not occur as shown here.

    Screenshot from 2020-09-04 22-50-17

    The Colab link: https://colab.research.google.com/drive/13rrNLBgDulgeHclovxJNtQVbVYcGMKm0?usp=sharing

    opened by lohithmunakala 2
  • Add HuggingFace Hub integration

    Add HuggingFace Hub integration

Pretrained models should be made available on the HuggingFace Hub, and users should be able to save their own models to the HuggingFace Hub.

    Relevant links:

    • https://huggingface.co/docs/hub/adding-a-model
    • https://huggingface.co/docs/hub/adding-a-model#using-the-huggingface_hub-client-library
    enhancement 
    opened by tmabraham 1
  • Validate with Existing Model Trained on Both Classes

    Validate with Existing Model Trained on Both Classes

This is really great work! I noticed you appear to be using this code for creating new examples of pathology images. I am doing something similar, just not with pathology. Let's say I already have a separate model trained to classify these types of images, and I want to run it against a validation set every epoch to determine how well the CycleGAN is performing at generating new examples that fool the existing classifier. I'm trying to figure out where the best place might be to add code that does that. I can see a few places where it would probably work, but I was curious whether you have already thought about adding this functionality to monitor training progress?

    Thanks,

    Bob

    opened by rbunn80110 4
  • Add more unit tests

    Add more unit tests

    Here are some tests of interest:

    • [ ] Test the loss (example fake and real images for the reconstruction loss and discriminator)

    • [ ] Check the batch independence and the model parameter updates

    • [ ] Test successful overfitting on a single batch

    • [ ] Test for rotation invariance and other invariance properties

    enhancement 
    opened by tmabraham 0
  • multi-GPU support

    multi-GPU support

I need to check whether fastai's multi-GPU support works with my package, and if not, what needs to be modified to get it to work. Additionally, I may need to add a simpler interface for DDP, or at least clear examples/documentation. This will enable quicker model training on multi-GPU servers, like those at USF.

    enhancement 
    opened by tmabraham 1
  • Add metrics and test model tracking callbacks

    Add metrics and test model tracking callbacks

I want to add support for metrics, and potentially include some common ones, like FID, mi-FID, KID, and segmentation metrics (for paired translation).

Additionally, with the losses and metrics being monitored, I want to be able to use fastai's built-in callbacks for saving the best model, early stopping, and reducing the LR on plateau.

This shouldn't be too hard to include. A major part of this feature is finding good PyTorch/numpy implementations of some of these metrics and getting them to work.

    enhancement 
    opened by tmabraham 5
Releases (0.2.2)