OpenDelta - An Open-Source Framework for Parameter-Efficient Tuning.

Overview

OpenDelta is a toolkit for parameter-efficient tuning methods (which we dub delta tuning), with which users can flexibly assign (or add) a small number of parameters to update while keeping most parameters frozen. With OpenDelta, users can easily implement prefix tuning, adapters, LoRA, or any other type of delta tuning with their preferred PTMs.

Our repo is tested on Python 3.8 and PyTorch 1.9.0. Lower versions may also work.

A demo of using OpenDelta to modify a PLM (e.g., BART), showing how the PLM changes under delta tuning.
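
For a concrete feel of that workflow, here is a minimal sketch of attaching a LoRA delta to a backbone and freezing everything else. The modified_modules names are illustrative and depend on the backbone's module naming; the exclude keys follow the patterns used in the examples and issues below.

from transformers import AutoModelForSeq2SeqLM
from opendelta import LoraModel

# Load a pretrained backbone (BART, as in the demo figure above).
bart = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Attach LoRA modules to the attention projections; the module names are illustrative.
delta_model = LoraModel(backbone_model=bart, modified_modules=["q_proj", "v_proj"])

# Freeze everything except the newly added delta parameters.
delta_model.freeze_module(exclude=["deltas"], set_state_dict=True)

# Show how few parameters remain trainable.
delta_model.log(delta_ratio=True, trainable_ratio=True, visualization=True)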

Installation

Create a conda environment (optional)

conda create -n opendelta_env python=3.8
conda activate opendelta_env

Using Pip

Install OpenDelta using pip as follows:

pip install opendelta

To play with the latest features, you can also install OpenDelta from the source.

Build from Source

git clone https://github.com/thunlp/OpenDelta.git
cd OpenDelta

Option 1: If you won't modify the code, run

python setup.py install

Option 2: If you want to modify the code, run

python setup.py develop

Must Try

from transformers import AutoModelForSeq2SeqLM
from opendelta import AutoDeltaModel

# Load a pretrained T5 backbone, then attach a LoRA delta fine-tuned on MRPC from DeltaHub.
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)

# Print a summary of the delta modules and the (mostly frozen) backbone parameters.
delta.log()
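
After tuning your own delta, you can save and re-load just the delta parameters. Below is a rough sketch, assuming save_finetuned mirrors the from_finetuned call above; check the docs for the exact signature, and treat the local path as a placeholder.

from transformers import AutoModelForSeq2SeqLM
from opendelta import LoraModel, AutoDeltaModel

t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Attach a fresh LoRA delta and freeze the backbone.
delta = LoraModel(backbone_model=t5)
delta.freeze_module(exclude=["deltas"], set_state_dict=True)

# ... train with your favourite trainer, updating only the delta parameters ...

# Save only the delta weights, then re-attach them to a freshly loaded backbone.
delta.save_finetuned("./lora_t5-base_custom")
t5_fresh = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
delta_reloaded = AutoDeltaModel.from_finetuned("./lora_t5-base_custom", backbone_model=t5_fresh)
delta_reloaded.log()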

Verified Supported Models

  • You can try OpenDelta on any PyTorch-based backbone model.

  • However, there is a small chance that the interface of a backbone model's submodules is not supported. We have therefore verified some commonly used models that OpenDelta is sure to support (a quick way to check your own backbone is sketched after this list).

  • We will keep testing more and more emerging models.

  • Pull requests are welcome when you successfully apply OpenDelta to your own backbone model.
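
As a quick sanity check before applying a delta to an unverified backbone, you can print its module tree and confirm the submodule names you plan to modify. Visualization is part of OpenDelta; the backbone name below is just an example.

from transformers import AutoModel
from opendelta import Visualization

# Print the named-module tree of the backbone so you can pick valid
# modified_modules entries before attaching any delta module.
backbone = AutoModel.from_pretrained("roberta-base")
Visualization(backbone).structure_graph()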

Delta methods: LoRA, Bias Tuning, Adapter (Houlsby), Adapter (Pfeiffer), AdapterDrop, Low-Rank Adapter, Compacter, Prefix Tuning, Prompt Tuning

Verified backbone models: T5, GPT-2, BART, DistilBERT, RoBERTa, BERT, T5-3b (parallel), DeBERTa-v2, CTRL, ViT

Performance Checked Combination

Google sheet here

Subject to change at any moment.

Comments
  • Could you provide some example code like the OpenPrompt project does?

    There are some places that are unclear to me while using the library; it would be great to have some detailed reference code. Thanks!

    The problem I am currently running into: when using PrefixModel, the reparams parameters differ between specifying modified_modules=["0.layer.0"] and not passing the modified_modules argument at all. Am I using it incorrectly?

    With modified_modules=["0.layer.0"]: reparams.control_trans.2: weight:[3072, 512] bias:[3072], and model.generate raises: The size of tensor a (2) must match the size of tensor b (12) at non-singleton dimension 3.

    Without passing that argument: reparams.control_trans.2: weight:[36864, 512] bias:[36864], and generate works normally.

    The backbone model is T5.
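
    A minimal reproduction sketch of the two configurations being compared, assuming PrefixModel accepts the same backbone_model / modified_modules arguments as the other delta classes in this repo:

    from transformers import AutoModelForSeq2SeqLM
    from opendelta import PrefixModel

    t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

    # Case 1: restrict the prefix to a single self-attention block, as reported above.
    delta = PrefixModel(backbone_model=t5, modified_modules=["0.layer.0"])

    # Case 2: the default modified_modules (comment out Case 1 to try this),
    # which yields the larger reparams.control_trans weights and generates normally.
    # delta = PrefixModel(backbone_model=t5)

    delta.log()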

    question 
    opened by fade-color 4
  • Update basemodel.py

    Do not use _pseudo_data_to_instantiate, so that complex models can be modified as well, not only pretrained models from Hugging Face. OpenDelta currently cannot create the complex inputs that such models require, so it reports an error. The Lora model does not use _pseudo_data_to_instantiate, and we can use Lora on our model; by skipping _pseudo_data_to_instantiate we simply modify the model directly.

    opened by CaffreyR 3
  • Is it possible to extract the Visualization module as an independent python packages?

    Visualization(model).structure_graph() is especially useful for viewing large language models, and sometimes I would like to use it in other scenarios.

    So instead of installing the whole OpenDelta package, is it possible to isolate the Visualization functionality from OpenDelta, so that it becomes more lightweight and easier to install?

    enhancement 
    opened by Dounm 2
  • `index.html` is not included in the package if installing from PyPI

    Thanks for the excellent package.

    Problem

    The index.html file in opendelta/utils/interactive/templates/ is a static file, and it will not be included in the distributed package file (like the wheel file) unless you add the package data manually in setup.py.

    Reproduce

    On a clean environment,

    $ pip install opendelta
    $ python examples/tutorial/0_interactive.py
    
    opened by Spico197 2
  • Differences between Houlsby and Pfeiffer adapters

    Thanks for providing such a great work here! There are structural differences between Houlsby and Pfeiffer adapters (Houlsby et al. places two adapters sequentially within one layer of the transformer, one after the multi-head attention and one after the FFN sub-layer, while Pfeiffer et al. adapter is inserted only after the FFN “add & layer norm” sub-layer), which seems to be missed in the code.
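
    For reference, a rough schematic (not OpenDelta's actual code; residual wiring and layer-norm placement are simplified) of where each variant inserts its bottleneck adapters inside one transformer layer:

    def houlsby_layer(x, self_attn, ffn, adapter_attn, adapter_ffn, norm1, norm2):
        # Houlsby et al.: one adapter after multi-head attention and one after the FFN.
        h = norm1(x + self_attn(x))
        h = h + adapter_attn(h)
        out = norm2(h + ffn(h))
        return out + adapter_ffn(out)

    def pfeiffer_layer(x, self_attn, ffn, adapter_ffn, norm1, norm2):
        # Pfeiffer et al.: a single adapter, only after the FFN "add & layer norm" sub-layer.
        h = norm1(x + self_attn(x))
        out = norm2(h + ffn(h))
        return out + adapter_ffn(out)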

    question 
    opened by ImKeTT 2
  • Prefix tuning for T5-small

    Hi, I met an error when using Prefix tuning with T5-small.

    File "/home/user/anaconda3/lib/python3.7/site-packages/OpenDelta/opendelta/basemodel.py", line 502, in _caller
        args, kwargs = delta_module.pre_forward(*args, **kwargs)
      File "/home/user/anaconda3/lib/python3.7/site-packages/OpenDelta/opendelta/delta_models/prefix.py", line 68, in pre_forward
        kwargs['past_key_value'] = (expand_batchsize(past_key), expand_batchsize(past_value))
      File "/home/user/anaconda3/lib/python3.7/site-packages/OpenDelta/opendelta/delta_models/prefix.py", line 60, in expand_batchsize
        x = x.reshape(self.prefix_token_num, self.num_heads, -1).transpose(0,1)
    RuntimeError: shape '[6, 6, -1]' is invalid for input of size 2048
    

    In T5-small, with 6 heads, it does not seem possible to divide 2048 evenly by 6, no matter what num_prefix_token is.

    def expand_batchsize(x):
                x = x.reshape(self.prefix_token_num, self.num_heads, -1).transpose(0,1)
                x = x.unsqueeze(0).expand(batch_size, *x.shape)
                return x
    

    Could you help me with this? Thank you!

    bug 
    opened by chengjiali 2
  • What is the difference between OpenDelta and adapter-transformers?

    Hi team, recently I was investigating the method of fine-tuning the PTMs using an adapter(delta) model. I found the functions implemented by OpenDelta and adapter-transformers are similar. Is there any difference between them? Thanks!

    opened by fighterhit 1
  • compatibility with pytorch

    Hi, here is another problem. I use OpenDelta and PyTorch Lightning to fine-tune my model with LoRA. But when I try to load the checkpoint, it goes wrong: there seem to be state-dict keys missing. Apparently the LoRA weights were not saved. @ShengdingHu

    
    from opendelta import LoraModel
    from pytorch_lightning import LightningModule

    def opendelta_modify_with_lora(transformer, config):
        LoraModel(backbone_model=transformer, modified_modules=['[r](\d).SelfAttention.[q,v,o,k]'])
        LoraModel(backbone_model=transformer, modified_modules=['[r](\d).EncDecAttention.[q,v,o,k]'])
        delta_model = LoraModel(backbone_model=transformer, modified_modules=['[r](\d).DenseReluDense.w[o,i]'])
    
        delta_model.freeze_module(exclude=["layer_norm", "lora_A", "lora_B"])
        # delta_model.log(delta_ratio=True, trainable_ratio=True, visualization=True)
        # Visualization(transformer).structure_graph();
        return transformer
    
    class EncoderDecoder(LightningModule):
        """
        Encoder Decoder
        """
    
        def __init__(self, config, tokenizer, transformer, dataset_reader):
            """
            :param config
            """
            super().__init__()
            self.config = config
            self.tokenizer = tokenizer
            self.model = transformer
            self.dataset_reader = dataset_reader
    
            self.use_deepspeed = self.config.compute_strategy.startswith("deepspeed")
            self.use_ddp = self.config.compute_strategy.startswith("ddp")
            self.load_model()
    
            self._last_global_step_saved = -1
    
            if self.config.fishmask_mode is not None:
                fishmask_plugin_on_init(self)
    
    model= EncoderDecoder.load_from_checkpoints("my file path")
    

    opened by CaffreyR 1
  • RuntimeError: This is a delta model, which should be attached to a backbone model and can't forward any data by itself. Please using the backbone model's forward function after attach the delta model to the backbone. / The batch received was empty, your model won't be able to train on it. Double-check that your training dataset contains keys expected by the model: args, kwargs, label_ids, label.

    I used a BERT model to train on the RAFT dataset; the original model trained fine. But when I tried to add a LowRankAdapterModel for fine-tuning, it went wrong. I simply applied the code below. @ShengdingHu

    #!/usr/bin/env python
    # coding: utf-8
    
    # In[1]:
    
    
    import datasets
    
    datasets.logging.set_verbosity_error()
    
    
    # In[2]:
    
    
    from datasets import get_dataset_config_names
    
    RAFT_TASKS = get_dataset_config_names("ought/raft")
    RAFT_TASKS
    
    
    # In[3]:
    
    
    from datasets import load_dataset
    
    TASK = "ade_corpus_v2"
    raft_dataset = load_dataset("ought/raft", name=TASK)
    raft_dataset
    
    
    # In[4]:
    
    
    from transformers import AutoTokenizer,Seq2SeqTrainingArguments, TrainerCallback
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    
    from sklearn.model_selection import train_test_split
    X = raft_dataset["train"]['Sentence']
    y = raft_dataset["train"]['Label']
    
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
    X_train_tokenized = tokenizer(X_train, padding=True, truncation=True, max_length=512)
    X_val_tokenized = tokenizer(X_val, padding=True, truncation=True, max_length=512)
    
    
    # In[5]:
    
    
    # X_train_tokenized
    
    
    # In[19]:
    
    
    item={}
    for key, val in X_train_tokenized.items():
        if key == 'input_ids':
            item['label_ids']=torch.tensor(val[idx])
        else:
            item[key]=torch.tensor(val[idx])
            
    item
            
    
    
    # In[6]:
    
    
    import torch
    class Dataset(torch.utils.data.Dataset):
        def __init__(self, encodings, labels=None):
            self.encodings = encodings
            self.labels = labels
    
        def __getitem__(self, idx):
    #         item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
            item={}
            for key, val in self.encodings.items():
                if key == 'input_ids':
                    item['label_ids']=torch.tensor(val[idx])
                else:
                    item[key]=torch.tensor(val[idx])
            if self.labels:
                item["label"] = torch.tensor(self.labels[idx]-1)
            return item
    
        def __len__(self):
            return len(self.encodings["input_ids"])
    
    train_dataset = Dataset(X_train_tokenized, y_train)
    val_dataset = Dataset(X_val_tokenized, y_val)
    
    
    # In[7]:
    
    
    train_dataset[0]
    
    
    # In[8]:
    
    
    from transformers import TrainingArguments, Trainer
    from transformers import AutoModelForSequenceClassification,EarlyStoppingCallback
    
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    
    
    # In[9]:
    
    
    from opendelta import Visualization
    Visualization(model).structure_graph();
    
    
    # In[13]:
    
    
    from opendelta import LowRankAdapterModel
    delta_model1 = LowRankAdapterModel(backbone_model=model, modified_modules=['LayerNorm'])
    # delta_model1.freeze_module(set_state_dict = True)
    delta_model1.log(delta_ratio=True, trainable_ratio=True, visualization=True)
    
    from opendelta import LoraModel
    delta_model2 = LoraModel(backbone_model=model, modified_modules=['dense'])
    # delta_model2.freeze_module(set_state_dict = True)
    delta_model2.log(delta_ratio=True, trainable_ratio=True, visualization=True)

    from opendelta import CompacterModel
    delta_model3 = CompacterModel(backbone_model=model, modified_modules=['dense'])
    # delta_model2.freeze_module(set_state_dict = True)
    delta_model3.log(delta_ratio=True, trainable_ratio=True, visualization=True)
    # In[14]:
    
    
    import numpy as np
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    def compute_metrics(p):
        pred, labels = p
        pred = np.argmax(pred, axis=1)
    
        accuracy = accuracy_score(y_true=labels, y_pred=pred)
        recall = recall_score(y_true=labels, y_pred=pred)
        precision = precision_score(y_true=labels, y_pred=pred)
        f1 = f1_score(y_true=labels, y_pred=pred)
    
        return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
    
    # Define Trainer
    args = TrainingArguments(
        output_dir="output",
        evaluation_strategy="steps",
        eval_steps=500,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        num_train_epochs=3,
        seed=0,
        load_best_model_at_end=True,
    )
    trainer = Trainer(
        model=delta_model1,
    #     model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=val_dataset,
        compute_metrics=compute_metrics,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    
    # Train pre-trained model
    trainer.train()
    
    
    # TrainOutput(global_step=15, training_loss=0.5652575810750325, metrics={'train_runtime': 11.1754, 'train_samples_per_second': 10.738, 'train_steps_per_second': 1.342, 'total_flos': 4563332366400.0, 'train_loss': 0.5652575810750325, 'epoch': 3.0})
    
    
    

    RuntimeError: This is a delta model, which should be attached to a backbone model and can't forward any data by itself. Please using the backbone model's forward function after attach the delta model to the backbone.

    opened by CaffreyR 1
  • LowRankAdapter not working with Bert models

    OK, I am trying to use LowRankAdapterModel with bert-base-uncased and bert-large-uncased, and I am getting the following error. Please look into it.


    KeyError                                  Traceback (most recent call last)
    <ipython-input> in <module>()
          1 from opendelta import LowRankAdapterModel
    ----> 2 delta_model1 = LowRankAdapterModel(backbone_model=model)
          3 delta_model1.freeze_module(set_state_dict = True)
          4 delta_model1.log(delta_ratio=True, trainable_ratio=True, visualization=True)

    5 frames
    /usr/local/lib/python3.7/dist-packages/opendelta/delta_models/low_rank_adapter.py in __init__(self, backbone_model, reduction_factor, non_linearity, low_rank_w_init, low_rank_rank, modified_modules, exclude_modules, unfrozen_modules, common_structure, interactive_modify)
        167             unfrozen_modules=unfrozen_modules,
        168             common_structure=common_structure,
    --> 169             interactive_modify=interactive_modify,
        170         )
        171         arg_names = get_arg_names_inside_func(self.__init__)

    /usr/local/lib/python3.7/dist-packages/opendelta/basemodel.py in __init__(self, backbone_model, modified_modules, exclude_modules, unfrozen_modules, interactive_modify, common_structure)
        130         self.common_structure = common_structure
        131         if self.common_structure:
    --> 132             self.structure_mapping = CommonStructureMap.load(self.backbone_model)
        133         else:
        134             self.structure_mapping = None

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in load(cls, backbone_model, strict, warining, visualize)
        317         if backbone_class not in cls.Mappings:
        318             raise KeyError(backbone_class)
    --> 319         mapping = cls.Mappings[backbone_class]
        320         if visualize:
        321             logger.info("Since you are using the common structure mapping, draw the transformed parameter structure for checking.")

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in __getitem__(self, key)
        279             raise KeyError(key)
        280         value = self._mapping_string[key]
    --> 281         self._mapping[key] = eval(value)
        282         return self._mapping[key]
        283

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in <module>()

    /usr/local/lib/python3.7/dist-packages/opendelta/utils/structure_mapping.py in mapping_for_SequenceClassification(mapping, type)
        252     }
        253     elif type == "bert":
    --> 254         mapping.pop("lm_head")
        255         mapping["classifier"] = {"name": "classifier"}
        256     elif type == "deberta":

    KeyError: 'lm_head'

    This is how the model is defined:

    config = AutoConfig.from_pretrained(
        "bert-base-uncased",
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    config.dropout_rate = 0.0
    tokenizer = AutoTokenizer.from_pretrained(
        "bert-base-uncased",
        cache_dir=model_args.cache_dir,
        use_fast=model_args.use_fast_tokenizer,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    model.resize_token_embeddings(len(tokenizer))

    opened by zuluzazu 1
  • `sequential` parameter is not used in `AdapterModel`

    Hi,

    Thanks for the awesome tool! I noticed that the sequential: Optional[str]=True parameter in AdapterModel is not used, so the user cannot actually insert the adapter in a parallel manner for the AdapterModel class by setting sequential=False. I think this is a little confusing for the user. Maybe you could add an insert_parallel_module() function to the AdapterModel class, or simply not let the user set the sequential parameter when initializing the AdapterModel class.

    opened by alvin870203 1
  • Installing via setup.py reports: error: aiohttp 4.0.0a0 is installed but aiohttp!=4.0.0a0,!=4.0.0a1 is required by {'fsspec'}

    Installed /home/bmxm/anaconda3/envs/cpm-ant-plus/lib/python3.8/site-packages/opendelta-0.3.2-py3.8.egg
    Processing dependencies for opendelta==0.3.2
    error: aiohttp 4.0.0a0 is installed but aiohttp!=4.0.0a0,!=4.0.0a1 is required by {'fsspec'}

    opened by daliang0222 1
  • Example of multi-task

    Hi,

    I saw on the documentation page there is a page for multi-task training: https://opendelta.readthedocs.io/en/latest/notes/pluginunplug.html.

    However, I think it is not entirely clear how this modelling approach would work in practice.

    Are there any examples of using OpenDelta for multi-task training, with the training code etc.?

    Thanks in advance

    Best,

    Niall
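
    Not a full training script, but the plug/unplug idea from that docs page roughly looks like the sketch below; the attach()/detach() calls are assumed from the linked page, and the second delta path is a placeholder.

    from transformers import AutoModelForSeq2SeqLM
    from opendelta import AutoDeltaModel

    backbone = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

    # One delta per task, all sharing the same frozen backbone.
    delta_a = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=backbone)
    # delta_b = AutoDeltaModel.from_finetuned("path/to/second_task_delta", backbone_model=backbone)

    # Plug in the delta for the current task, run it, then unplug before switching.
    delta_a.attach()
    # ... forward/evaluate task-A batches through `backbone` ...
    delta_a.detach()
    # delta_b.attach(); ... run task B ...; delta_b.detach()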

    opened by NtaylorOX 2
  • tutorial doc bug

    Hi, I noticed that there are some bugs in the BMTrain tutorial file; would you mind fixing them in a future update?

    argument bug

    returns: 2_with_bmtrain.py: error: unrecognized arguments: --delta_type low_rank_adapter

    delta model visualization bug

    returns:

    File "./2_with_bmtrain.py", line 132, in get_model
    od.Visualization(model).structure_graph()
    AttributeError: module 'opendelta' has no attribute 'Visualization'
    

    In order to reproduce it, I worked with OpenDelta 0.3.2.

    lowrank adapter with bert

    when using bert with lowrankadapter, returns

    AttributeError: str(forward() got an unexpected keyword argument 'output_pooler_output')
            The LowRankAdapterModel requires a dummy_inputs to be passed through the model to understand the dimensionality of each tensor in the computation graph. 
             The BertModel Class has no dummy_inputs, and automatically created dummy_inputs failed.
             Refer to `https://opendelta.readthedocs.io/en/latest/notes/faq.html` for detail.
    

    lora with bert

    Traceback (most recent call last):
      File "./2_with_bmtrain.py", line 371, in <module>
        main()
      File "./2_with_bmtrain.py", line 360, in main
        tokenizer, model, optimizer, lr_scheduler = setup_model_and_optimizer(args)
      File "./2_with_bmtrain.py", line 204, in setup_model_and_optimizer
        model = get_model(args)
      File "./2_with_bmtrain.py", line 135, in get_model
        delta_model = LoraModel(backbone_model=model, modified_modules=['project_q', 'project_k'], backend='bmt')
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/delta_models/lora.py", line 136, in __init__
        self.add_all_delta_to_backbone(self.backbone_model,
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/basemodel.py", line 213, in add_all_delta_to_backbone
        self.update_module(backbone, key)
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/delta_models/lora.py", line 143, in update_module
        parallel_module = self.new_module_like(child_module=child_ref)
      File "/root/miniconda3/lib/python3.8/site-packages/opendelta/delta_models/lora.py", line 151, in new_module_like
        in_features, out_features = child_module.in_features, child_module.out_features
      File "/root/miniconda3/lib/python3.8/site-packages/bmtrain-0.1.8-py3.8-linux-x86_64.egg/bmtrain/layer.py", line 12, in __getattr__
        ret = super().__getattr__(name)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'Linear' object has no attribute 'in_features'
    

    incorrect installation commands

    [email protected]:~/OpenDelta/examples/tutorial# pip install [email protected]:OpenBMB/ModelCenter.git
    ERROR: Invalid requirement: '[email protected]:OpenBMB/ModelCenter.git'
    Hint: It looks like a path. File '[email protected]:OpenBMB/ModelCenter.git' does not exist.
    

    Thanks for your contribution to the open-source community; if you get some time in the future, it would be great to update the tutorial. Regards, Jiajun

    opened by zhujiajunbryan 1
  • Feature Request: Add Support for "Aside Modules"

    Injecting additional trainable modules that connect the unfrozen modules in parameter-efficient fine-tuning can improve the gradient flow and significantly improve convergence speed and performance (at least when fine-tuning models for information retrieval); see https://arxiv.org/pdf/2208.09847.pdf.

    enhancement 
    opened by ethankim00 1
  • does opendelta support gradient_checkpointing?

    Thank you for the awesome work. I ran into problems when using OpenDelta with gradient_checkpointing; it throws: "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn". By the way, the code works fine when gradient_checkpointing is disabled.

    so does opendelta support gradient_checkpointing?

    opened by hmzo 3
Releases (v0.3.2)

Owner: THUNLP (Natural Language Processing Lab at Tsinghua University)