Overview

Introduction

Transfer-Learn is an open-source and well-documented library for Transfer Learning. It is based on pure PyTorch, with high performance and a friendly API. Our code is pythonic, and the design is consistent with torchvision. You can easily develop new algorithms or readily apply existing ones.
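For example, applying an existing algorithm usually amounts to importing its loss module and dropping it into a standard PyTorch training loop. Below is a minimal sketch using the domain adversarial loss behind DANN; the module paths follow the older dalib releases and may differ in the current package, so treat it as an illustration rather than the definitive API.

import torch
from dalib.modules.domain_discriminator import DomainDiscriminator
from dalib.adaptation.dann import DomainAdversarialLoss

# A small domain discriminator on top of 1024-dimensional backbone features,
# wrapped by the adversarial transfer loss.
discriminator = DomainDiscriminator(in_feature=1024, hidden_size=1024)
dann_loss = DomainAdversarialLoss(discriminator, reduction='mean')

# Features extracted from one source-domain batch and one target-domain batch.
f_s, f_t = torch.randn(20, 1024), torch.randn(20, 1024)

# During training, this transfer loss is simply added to the usual
# cross-entropy classification loss before back-propagation.
transfer_loss = dann_loss(f_s, f_t)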

The currently supported algorithms include:

Domain Adaptation for Classification
  • Domain-Adversarial Training of Neural Networks (DANN, ICML 2015)
  • Learning Transferable Features with Deep Adaptation Networks (DAN, ICML 2015)
  • Deep Transfer Learning with Joint Adaptation Networks (JAN, ICML 2017)
  • Conditional Adversarial Domain Adaptation (CDAN, NIPS 2018)
  • Maximum Classifier Discrepancy for Unsupervised Domain Adaptation (MCD, CVPR 2018)
  • Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation (AFN, ICCV 2019)
  • Bridging Theory and Algorithm for Domain Adaptation (MDD, ICML 2019)
  • Minimum Class Confusion for Versatile Domain Adaptation (MCC, ECCV 2020)
Partial Domain Adaptation
  • Partial Adversarial Domain Adaptation (PADA, ECCV 2018)
  • Importance Weighted Adversarial Nets for Partial Domain Adaptation (IWAN, CVPR 2018)
Open-set Domain Adaptation
  • Open Set Domain Adaptation by Backpropagation (OSBP, ECCV 2018)
Domain Adaptation for Segmentation
  • Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks (CycleGAN, ICCV 2017)
  • CyCADA: Cycle-Consistent Adversarial Domain Adaptation (ICML 2018)
  • ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation (CVPR 2019)
  • FDA: Fourier Domain Adaptation for Semantic Segmentation (CVPR 2020)
Domain Adaptation for Keypoint Detection
  • Regressive Domain Adaptation for Unsupervised Keypoint Detection (RegDA, CVPR 2021)
Finetune for Classification
  • DEep Learning Transfer using Feature Map with Attention for convolutional networks (DELTA, ICLR 2019)
  • Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning (BSS, NIPS 2019)
  • Stochastic Normalization (StochNorm, NIPS 2020)
  • Co-Tuning for Transfer Learning (Co-Tuning, NIPS 2020)

We are planning to add:

  • Domain Generalization
  • Multi-task Learning
  • DA for Object Detection
  • Universal Domain Adaptation

The performance of these algorithms has been fairly evaluated in this benchmark.

Installation

For flexible use and modification, please clone the library with git.
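A typical sequence of commands might look like the following; the repository URL is the one given in the citation section below, and installing the Python requirements listed in the repository may additionally be needed:

git clone https://github.com/thuml/Transfer-Learning-Library.git
cd Transfer-Learning-Library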

Documentation

You can find the tutorial and API documentation on the website: Documentation (please open in Firefox or Safari). Note that this link is only for temporary use. You can also build the documentation yourself by following the instructions at http://170.106.108.162/get_started/faq.html.
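If you build the documentation locally, a typical Sphinx workflow is sketched below; the directory name and packages here are assumptions, so follow the FAQ page above for the authoritative steps:

pip install sphinx sphinx_rtd_theme
cd docs
make html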

Also, we have examples in the directory examples. A typical usage is

# Train DANN on the Office-31 Amazon -> Webcam task using ResNet-50.
# Assume you have put the datasets under the path `data/office31`,
# or that you are willing to let them be downloaded automatically from the Internet to this path.
python dann.py data/office31 -d Office31 -s A -t W -a resnet50 --epochs 20

In the examples directory, you can find all the running scripts needed to reproduce the benchmarks with the specified hyper-parameters.

Contributing

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have licenses to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Contact

If you have any problems with our code or any suggestions, including future feature requests, feel free to contact us

or describe them in Issues.

For Q&A in Chinese, you can ask questions here before sending an email: 迁移学习算法库答疑专区 (Transfer Learning Library Q&A).

Citation

If you use this toolbox or benchmark in your research, please cite this project.

@misc{dalib,
  author = {Junguang Jiang and Baixu Chen and Bo Fu and Mingsheng Long},
  title = {Transfer-Learning-library},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/thuml/Transfer-Learning-Library}},
}

Acknowledgment

We would like to thank the School of Software, Tsinghua University, and the National Engineering Laboratory for Big Data Software for providing such an excellent ML research platform.

Comments
  • fogy_cityscape dataset convert error: the total number of indices is not correct, for adaptive object detection

    Hello, thanks for developing this library. I generated the VOC-style dataset for cityscape/fogy_cityscape with prepare_cityscapes_to_voc.py, but ImageSets/Main/test.txt only contains 493 indices, whereas the paper reports 500.

    question object detection 
    opened by zyfone 9
  • Can't run correctly

    I've installed dalib and run experiments following the GitHub example. The problem is that it doesn't run correctly: it always gets stuck at the very beginning. It seems the dataset is not being read, because no GPU memory is used. I've also checked the dataset format as described in the API documentation, so I'd like to get some help. Thanks.

    opened by WHUzhusihan96 8
  • The download URL used in the vision dataset is dead

    Right now, when I clone the latest repository and run the tutorial, I get an error (screenshot attached in the issue). The referenced URL seems to be dead: https://cloud.tsinghua.edu.cn/f/1f5646f39aeb4d7389b9/?dl=1

    opened by TaiseiYamana 7
  • Question about the format of the ImageNet-R dataset

    Hello, and thank you for providing such a good open-source transfer learning project. My problem concerns preparing the ImageNet-R dataset for domain adaptation in image_classification. I downloaded imagenet-r.tar from the link you provided, created an ImageNetR directory under the image_classification directory, copied imagenet-r.tar into it, and extracted it with tar -xvf imagenet-r.tar. I then launched the script with CUDA_VISIBLE_DEVICES=0 python dann.py data/ImageNetR -d ImageNetR -s IN -t INR -a resnet50 --epochs 30 -i 2500 -p 500 --seed 0 --log logs/dann/ImageNet_IN2INR, but it fails with: FileNotFoundError: [Errno 2] No such file or directory: 'data/ImageNetR/train/n09835506/n09835506_14973.JPEG'. My initial guess is that the dataset is not prepared in the expected format: the error implies a train/val split, but the extracted archive contains no such split. Your documentation says to check imagenet-r.py for the expected layout, but I could not find the relevant code in that file. Could you explain how this dataset should be prepared? I am testing on Linux, although I don't think the bug is system-related. Looking forward to your reply.

    opened by fycfycfyc 6
  • 'utils' module

    Hello, I tried to run the code from dann.py, but received the error that module 'utils' has no attributes 'get_dataset_names', 'get_train_transform', 'get_val_transform', etc. I also could not find these attributes in the 'utils' folder. Could you please help resolve this problem? Perhaps I am doing something wrong. Thank you very much!

    opened by EvgeniyS99 6
  • Attribute 'thing_classes' does not exist in the metadata of dataset: metadata is empty

    Hello,

    I am testing your examples/domain_adaptation/object_detection/d_adapt/d_adapt.py method on my custom dataset (30 classes), which I converted to VOC format. Initially, I trained it with source_only.py successfully, but when trying to run d_adapt.py, I receive the following error.

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/opt/rh/rh-python38/root/usr/local/lib64/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/scratch/project_2005695/detectron2/detectron2/engine/launch.py", line 126, in _distributed_worker
        main_func(*args)
      File "/scratch/project_2005695/Transfer-Learning-Library/examples/domain_adaptation/object_detection/d_adapt/d_adapt.py", line 272, in main
        train(model, logger, cfg, args, args_cls, args_box)
      File "/scratch/project_2005695/Transfer-Learning-Library/examples/domain_adaptation/object_detection/d_adapt/d_adapt.py", line 131, in train
        classes = MetadataCatalog.get(args.targets[0]).thing_classes
      File "/scratch/project_2005695/detectron2/detectron2/data/catalog.py", line 131, in __getattr__
        raise AttributeError(
    AttributeError: Attribute 'thing_classes' does not exist in the metadata of dataset '.._datasets_TLESS_real_dataset_trainval': metadata is empty.
    

    I have registered the base class in tllib/vision/datasets/object_detection/__init__.py in the same way as the provided CityScapesBase class:

    class TLessBase:
        class_names = ('Model 1', 'Model 2', 'Model 3', 'Model 4', 'Model 5',
                    'Model 6', 'Model 7', 'Model 8', 'Model 9', 'Model 10', 'Model 11',
                    'Model 12', 'Model 13', 'Model 14', 'Model 15', 'Model 16', 'Model 17',
                    'Model 18', 'Model 19', 'Model 20', 'Model 21', 'Model 22', 'Model 23',
                    'Model 24', 'Model 25', 'Model 26', 'Model 27', 'Model 28', 'Model 29', 'Model 30'
                    )
    
        def __init__(self, root, split="trainval", year=2007, ext='.jpg'):
            self.name = "{}_{}".format(root, split)
            self.name = self.name.replace(os.path.sep, "_")
            if self.name not in MetadataCatalog.keys():
                register_pascal_voc(self.name, root, split, year, class_names=self.class_names, ext=ext,
                                    bbox_zero_based=True)
                MetadataCatalog.get(self.name).evaluator_type = "pascal_voc"
    

    And then the target and the test classes inherit from it.

    Could you please suggest what I am missing?

    bug object detection 
    opened by darkhan-s 6
  • Reproducing MDD on the VisDA dataset

    In the original MDD paper (Bridging Theory and Algorithm for Domain Adaptation), Table 3 reports an accuracy above 74% on the VisDA dataset. Could you share the hyper-parameter settings, e.g. the initial LR, lr_gamma, and lr_decay? I have also been working on DA recently, but I cannot reproduce the result with the default settings of this library's open-source code, so I am hesitant to cite the numbers directly. Many thanks!

    opened by dingning97 6
  • Office31 fails to download

    I tried to use your library today with the example command for DANN on Office31, but the dataset fails to download. Could you please check whether the download link is still up to date?

    python examples/dann.py data/office31 -d Office31 -s A -t W -a resnet50  --epochs 20
    Namespace(arch='resnet50', batch_size=32, data='Office31', epochs=20, iters_per_epoch=1000, lr=0.01, momentum=0.9, print_freq=100, root='data/office31', seed=None, source='A', target='W', trade_off=1.0, weight_decay=0.001, workers=2)
    Downloading amazon
    Downloading https://cloud.tsinghua.edu.cn/f/05640442cd904c39ad60/?dl=1 to data/office31/amazon.tgz
     64%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏                                                               | 13164544/20448768 [01:24<01:30, 80083.97it/s]Failed download. Trying https -> http instead. Downloading http://cloud.tsinghua.edu.cn/f/05640442cd904c39ad60/?dl=1 to data/office31/amazon.tgz
     64%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋                                                               | 13180928/20448768 [01:32<00:51, 141740.80it/s]
    Extracting data/office31/amazon.tgz to data/office31                                                                                                                                                          | 0/7692 [00:06<?, ?it/s]
    Fail to download amazon.tgz from url link https://cloud.tsinghua.edu.cn/f/05640442cd904c39ad60/?dl=1
    Please check you internet connection or reinstall DALIB by 'pip install --upgrade dalib'
    
    
    opened by mstoelzle 6
  • For SYNTHIA, where can I find the synthia_mapped_to_cityscapes directory?

    data/synthia
    ├── RGB
    ├── synthia_mapped_to_cityscapes
    └── ...
    

    I downloaded SYNTHIA-RAND-CITYSCAPES (CVPR16) from the official website, but it does not contain this directory. The networks I ran previously used that download directly, yet the SYNTHIA README under the UDA examples requires this directory.

    opened by yuheyuan 4
  • Failed to import mmdetection

    When I try to run source_only.py I get the error message "No module named 'tlllib.vision.models.object_detection.backbone.mmdetection'".

    If I try to import "tlllib.vision.models.object_detection", it seems that the error comes from the file backbone/vgg.py, in "from .mmdetection.vgg import VGG". Any clue how I can fix this?

    help wanted 
    opened by SeucheAchat9115 4
  • Help on training with a custom dataset

    Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

    Describe the solution you'd like A clear and concise description of what you want to happen.

    Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered.

    Additional context Add any other context or screenshots about the feature request here.

    question 
    opened by Mininggamer 4
  • torchvision should be upgraded

    Describe the bug: As in issue #114, the bug is not fixed yet.

    To Reproduce: The cycada example cannot be run successfully; it raises an exception.

    Additional context:

    Traceback (most recent call last):
      File "/home/studio-lab-user/sagemaker-studiolab-notebooks/Transfer-Learning-Library/cycada.py", line 27, in <module>
        import tllib.vision.models.segmentation as models
      File "<frozen zipimport>", line 259, in load_module
      File "/home/studio-lab-user/.conda/envs/studiolab/lib/python3.9/site-packages/tllib-0.4-py3.9.egg/tllib/vision/models/segmentation/__init__.py", line 1, in <module>
      File "<frozen zipimport>", line 259, in load_module
      File "/home/studio-lab-user/.conda/envs/studiolab/lib/python3.9/site-packages/tllib-0.4-py3.9.egg/tllib/vision/models/segmentation/deeplabv2.py", line 6, in <module>
    ModuleNotFoundError: No module named 'torchvision.models.utils'
    
    bug 
    opened by hetan697 1
  • Hello, a question about the coral function

    Dear authors: thank you very much for this great work, which makes it easy for non-experts like me to learn about transfer learning quickly. While studying CORAL, I have the following questions:

    1. In Transfer-Learning-Library/tree/master/examples/domain_generalization/image_classification/coral.py, lines 182-192:

    for domain_i in range(n_domains_per_batch):
        # cls loss
        y_i, labels_i = y_all[domain_i], labels_all[domain_i]
        loss_ce += F.cross_entropy(y_i, labels_i)
        # update acc
        cls_acc += accuracy(y_i, labels_i)[0] / n_domains_per_batch
        # correlation alignment loss
        for domain_j in range(domain_i + 1, n_domains_per_batch):
            f_i = f_all[domain_i]
            f_j = f_all[domain_j]
            loss_penalty += correlation_alignment_loss(f_i, f_j)

    Why does the CORAL loss need to be computed for each sample separately? Can it be computed directly over all the samples in one step?

    2. At line 196, loss_penalty /= n_domains_per_batch * (n_domains_per_batch - 1) / 2: why does loss_penalty need to be divided by n_domains_per_batch * (n_domains_per_batch - 1) / 2?

    Looking forward to your reply.

    question 
    opened by SCXCLY 3
  • Large difference in experimental results

    Hello, why are the source_only results I get so different from yours? Do you take the result recorded in the final-model round as the final accuracy? Also, when running on the watercolor, comic, etc. datasets, do you use, e.g., -t WaterColor and --test WaterColorTest? I would be very grateful if you could answer!

    Evaluating datasets_watercolor_train using 2007 metric.
    +---------+---------------------+
    | AP      | 0.0129179003324974  |
    | AP50    | 0.04594659669912493 |
    | AP75    | 0.00946969696969697 |
    | bicycle | 0.0                 |
    | bird    | 0.12804097311139565 |
    | car     | 0.00989756025139803 |
    | cat     | 0.0                 |
    | dog     | 0.13774104683195593 |
    | person  | 0.0                 |
    +---------+---------------------+

    Evaluating datasets_watercolor_test using 2007 metric.
    +---------+----------------------+
    | AP      | 0.08883031642337483  |
    | AP50    | 0.2204288930733166   |
    | AP75    | 0.05140623869850683  |
    | bicycle | 0.0                  |
    | bird    | 0.11188811188811189  |
    | car     | 0.007665184730952015 |
    | cat     | 0.19342359767891684  |
    | dog     | 0.18315018315018317  |
    | person  | 0.8264462809917356   |
    +---------+----------------------+

    question 
    opened by anranbixin 8
  • Hello, I ran into some bugs when using the d_adapt.py program

    Traceback (most recent call last):
      File "d_adapt.py", line 348, in <module>
        args=(args, args_cls, args_box),
      File "/home/shishijie/anaconda3/envs/detectron-na/lib/python3.6/site-packages/detectron2/engine/launch.py", line 82, in launch
        main_func(*args)
      File "d_adapt.py", line 277, in main
        train(model, logger, cfg, args, args_cls, args_box)
      File "d_adapt.py", line 163, in train
        bbox_adaptor.fit(data_loader_source, data_loader_target, data_loader_validation)
      File "/home/shishijie/shishijie_projects/domain_adapatation/Transfer-Learning-Library-master/examples/domain_adaptation/object_detection/d_adapt/bbox_adaptation.py", line 354, in fit
        x_s, labels_s = next(iter_source)
      File "/home/shishijie/shishijie_projects/domain_adapatation/Transfer-Learning-Library-master/examples/domain_adaptation/object_detection/d_adapt/tllib/utils/data.py", line 55, in __next__
        data = next(self.iter)
      File "/home/shishijie/anaconda3/envs/detectron-na/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
        data = self._next_data()
      File "/home/shishijie/anaconda3/envs/detectron-na/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1176, in _next_data
        raise StopIteration
    StopIteration

    I have been debugging this for a whole day. My current understanding is that the d_adapt program has four training phases: source-domain pre-training, category adaptation, bounding-box adaptation, and target-domain pseudo-label training. Source-domain pre-training works fine, and I use its weights as the pre-trained weights for the later stages; the 10-epoch category adaptation also trains without problems, but bounding-box adaptation fails.

    The failure occurs in the call to bbox_adaptor.fit(data_loader_source, data_loader_target, data_loader_validation) made by examples/domain_adaptation/object_detection/d_adapt/d_adapt.py: in examples/domain_adaptation/object_detection/d_adapt/bbox_adaptation.py, iter_source cannot fetch any labelled data.

    (PS: this happens both with my own dataset and with the official VOC2007 and Clipart datasets. My training command: CUDA_VISIBLE_DEVICES=0 python d_adapt.py --config-file config/retinanet_R_101_FPN_voc.yaml -s VOC2007 ../datasets/VOC2007 -t Clipart ../datasets/clipart --test Clipart ../datasets/clipart --finetune --bbox-refine OUTPUT_DIR logs/retinanet_R_101_FPN_voc/voc2clipart/phase2)

    I would greatly appreciate any guidance from the authors.

    object detection 
    opened by shishi-jie 3
Releases
  • v0.4

Owner
THUML @ Tsinghua University (Machine Learning Group, School of Software, Tsinghua University)