NLP library designed for reproducible experimentation management

Overview

Welcome to the Transfer NLP library, a framework built on top of PyTorch to promote reproducible experimentation and Transfer Learning in NLP

You can get an overview of the high-level API in this Colab Notebook, which shows how to use the framework on several examples. All DL-based examples in these notebooks embed in-cell Tensorboard training monitoring!

For an example of pre-trained model finetuning, we provide a short executable tutorial on BertClassifier finetuning on this Colab Notebook

Set up your environment

mkvirtualenv transfernlp
workon transfernlp

git clone https://github.com/feedly/transfer-nlp.git
cd transfer-nlp
pip install -r requirements.txt

To use Transfer NLP as a library:

# to install the experiment builder only
pip install transfernlp
# to install Transfer NLP with PyTorch and Transfer Learning in NLP support
pip install transfernlp[torch]

or

pip install git+https://github.com/feedly/transfer-nlp.git

to get the latest state before new releases.

To use Transfer NLP with associated examples:

git clone https://github.com/feedly/transfer-nlp.git
pip install -r requirements.txt

Documentation

API documentation and an overview of the library can be found here

Reproducible Experiment Manager

The core of the library is an experiment builder: you define the objects your experiment needs, and the configuration loader builds them for you. For reproducible research and easy ablation studies, the library enforces the use of configuration files for experiments. As people have different tastes for what constitutes a good experiment file, the library accepts experiments defined in several formats:

  • Python Dictionary
  • JSON
  • YAML
  • TOML

In Transfer-NLP, an experiment config file contains all the information needed to fully define the experiment. This is where you list the names of the components your experiment will use, along with their hyperparameters. Transfer-NLP makes use of the Inversion of Control pattern, which lets you declare any class / method / function you need: the ExperimentConfig class will build a dictionary and instantiate your objects accordingly.

To use your own classes inside Transfer-NLP, you need to register them with the @register_plugin decorator. Instead of a different registry for each kind of component (models, data loaders, vectorizers, optimizers, ...), a single registry is used, in order to allow total customization.

If you use Transfer NLP as a dev dependency only, you might prefer to use it declaratively and call register_plugin() on the objects you want to use at experiment run time.
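
Both styles are one-liners; here is a minimal sketch (MyVectorizer mirrors the vectorizer used in the YAML example below, and registering torch.optim.Adam is just an illustration of the declarative style on a third-party class):

from torch.optim import Adam

from transfer_nlp.plugins.config import register_plugin

# Decorator style: register your own class when you define it
@register_plugin
class MyVectorizer:
    def __init__(self, vectorizer_parameter: str):
        self.vectorizer_parameter = vectorizer_parameter

# Declarative style: register a third-party class at experiment run time
register_plugin(Adam)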

Here is an example of how you can define an experiment in a YAML file:

data_loader:
  _name: MyDataLoader
  data_parameter: foo
  data_vectorizer:
    _name: MyVectorizer
    vectorizer_parameter: bar

model:
  _name: MyModel
  model_hyper_param: 100
  data: $data_loader

trainer:
  _name: MyTrainer
  model: $model
  data: $data_loader
  loss:
    _name: PyTorchLoss
  tensorboard_logs: $HOME/path/to/tensorboard/logs
  metrics:
    accuracy:
      _name: Accuracy

Any object can be defined through a class, method or function, given a _name parameter followed by its own parameters. Experiments are then loaded and instantiated using ExperimentConfig(experiment=experiment_path_or_dict).

Some considerations:

  • Default parameters can be skipped in the experiment file.

  • If an object is used in different places, you can refer to it using the $ symbol, for example here the trainer object uses the data_loader instantiated elsewhere. No ordering of objects is required.

  • For paths, you might want to use environment variables so that other machines can also run your experiments. In the previous example, you would run e.g. ExperimentConfig(experiment=yaml_path, HOME=Path.home()) to instantiate the experiment and replace $HOME with your machine's home path (see the sketch after this list).

  • The config instantiation supports arbitrarily complex settings with nested dicts / lists
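
For instance, the YAML experiment above could be loaded and its objects retrieved as follows (a minimal sketch; ExperimentConfig is assumed to be importable from transfer_nlp.plugins.config alongside register_plugin, and the dict-style access to instantiated objects is an assumption):

from pathlib import Path

from transfer_nlp.plugins.config import ExperimentConfig  # import path assumed

experiment = ExperimentConfig(experiment='my_experiment.yaml', HOME=Path.home())
trainer = experiment['trainer']  # dict-style access to the instantiated objects (assumed)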

You can have a look at the tests for examples of experiment settings the config loader can build. Additionally we provide runnable experiments in experiments/.

Transfer Learning in NLP: flexible PyTorch Trainers

For deep learning experiments, we provide a BaseIgniteTrainer in transfer_nlp.plugins.trainers.py. This basic trainer takes a model and some data as input and runs a whole training pipeline. We make use of the PyTorch-Ignite library to monitor events during training (logging metrics, manipulating learning rates, checkpointing models, etc.). Tensorboard logs are also available as an option: you just have to specify a tensorboard_logs path parameter in the config file. Then run tensorboard --logdir=path/to/logs in a terminal and you can monitor your experiment while it's training! Tensorboard comes with very nice utilities to keep track of the norms of your model weights, histograms, distributions, embedding visualizations, etc., so we really recommend using it.

We provide a SingleTaskTrainer class which you can use for any supervised setting dealing with a single task. We are working on a MultiTaskTrainer class for multi-task settings, and a SingleTaskFineTuner for large-model fine-tuning settings.
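
The model handed to these trainers is just a registered PyTorch Module; with the configuration approach above, its constructor arguments correspond to the keys of its config entry. A minimal sketch mirroring the MyModel entry of the YAML example above (class and parameter names are illustrative):

import torch.nn as nn

from transfer_nlp.plugins.config import register_plugin

@register_plugin
class MyModel(nn.Module):
    def __init__(self, model_hyper_param: int, data):
        super().__init__()
        self.data = data  # the $data_loader object built elsewhere in the config
        self.fc = nn.Linear(model_hyper_param, 2)

    def forward(self, x):
        return self.fc(x)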

Use cases

Here are a few use cases for Transfer NLP:

  • You have all your classes / methods / functions ready: Transfer NLP gives you a clean way to centralize loading and running your experiments
  • You have all your classes but you would like to benchmark multiple configuration settings: the ExperimentRunner class lets you run your sets of experiments sequentially and generates personalized reporting (you only need to implement the report method in a custom ReporterABC subclass; see the sketch after this list)
  • You want to experiment with training deep learning models but you feel overwhelmed by all the boilerplate code in SOTA model GitHub projects. Transfer NLP encourages separation of important objects so that you can focus on the PyTorch Module implementation and let the trainers deal with the training part (while still controlling most of the training parameters through the experiment file)
  • You want to experiment with more advanced training strategies, but you are more interested in the ideas than the implementation details. We are working on improving the advanced trainers so that it becomes easier to try new ideas for multi-task settings, fine-tuning strategies or model adaptation schemes.
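
As mentioned in the list above, benchmarking several configurations only requires a custom reporter. A minimal sketch, assuming ReporterABC can be imported from transfer_nlp.runner.experiment_runner and that report receives the experiment name, the instantiated experiment and a report directory (the exact signature may differ in your version):

from pathlib import Path

from transfer_nlp.plugins.config import register_plugin
from transfer_nlp.runner.experiment_runner import ReporterABC  # import path assumed

@register_plugin
class MyReporter(ReporterABC):
    def report(self, name, experiment, report_dir):  # signature is an assumption
        # Write whatever metrics or artifacts you need from the instantiated experiment
        (Path(report_dir) / f'{name}.txt').write_text(f'finished experiment {name}')

ExperimentRunner.run_all(...) then drives the sequential runs; the run_all signature quoted in the issues below shows the available parameters.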

Slack integration

While experimenting with your own models / data, training might take some time. To get notified when your training finishes or crashes, you can use the simple library knockknock by the folks at HuggingFace, which adds a simple decorator to your training function to notify you via Slack, e-mail, etc.
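
For example, with knockknock's Slack notifier (the webhook URL and channel are placeholders; train stands for whatever function launches your experiment):

from knockknock import slack_sender

@slack_sender(webhook_url="https://hooks.slack.com/services/...", channel="#experiments")
def train():
    # Build the ExperimentConfig and run the trainer here; a Slack message is
    # sent when this function returns or raises an exception
    ...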

Some objectives to reach:

  • Include examples using state of the art pre-trained models
  • Incorporate linguistic properties into models
  • Experiment with RL for sequential tasks
  • Include probing tasks to try to understand the properties that are learned by the models

Acknowledgment

The library was inspired by the book "Natural Language Processing with PyTorch" by Delip Rao and Brian McMahan. The experiments in experiments/, the Vocabulary building block and the embeddings nearest-neighbors utilities are taken or adapted from the code provided in the book.

Comments
  • Pytorch Lightning as a back-end

    Hi! Check out Pytorch Lightning as an option for your backend! We're looking for awesome projects implemented in Lightning.

    https://github.com/williamFalcon/pytorch-lightning

    opened by williamFalcon 3
  • have the possibility to build object with a function instead of a class

    When you want to experiment with someone else's code, you don't want to copy-paste their code.

    If you want to use a class AwesomeClass from an awesome github repo, you can do:

    from transfer_nlp.plugins.config import register_plugin
    from awesome_repo.module import AwesomeClass
    
    register_plugin(AwesomeClass)
    

    and then use it in your experiments.

    However, when reusing complex objects, it might be complicated to configure them. An example is the pre-trained model from the pytorch-pretrained-bert repo, where you can build complex models with nice one-liners such as model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4)

    It's possible to encapsulate these into other classes and have Transfer NLP build them, but it can feel awkward and adds unnecessary complexity / lines of code compared to the initial one-liner. An alternative is to build these objects with a method, in the previous example we would only write:

    @register_function
    def bert_classifier(bert_version: str='bert-base-uncased', num_labels: int=4):
        return BertForSequenceClassification.from_pretrained(pretrained_model_name_or_path=bert_version, num_labels=num_labels)
    

    and we could then use functions just like methods in the config loading.

    opened by petermartigny 2
  • caching objects in experiment runner

    Some read-only objects can take a while to load in experiments (embeddings, datasets, etc.). The current ExperimentRunner always recreates the entire experiment. It would be nice if we could keep some objects in memory...

    Proposal

    add an experiment_cache parameter to run_all

        def run_all(experiment: Union[str, Path, Dict],
                    experiment_cache: Union[str, Path, Dict],
                    experiment_config: Union[str, Path],
                    report_dir: Union[str, Path],
                    trainer_config_name: str = 'trainer',
                    reporter_config_name: str = 'reporter',
                    **env_vars) -> None:
    

    The cache is just another experiment JSON. It would be loaded only once, at the very beginning, using only the env_vars. Any resulting objects would then be added to env_vars when running each experiment. Objects can optionally implement a Resettable interface with a reset method that would be called once before each experiment.

    Incorrect usage of this feature could lead to non-reproducibility issues, but through docs we could make it clear this should only be used for read-only objects. I think it would be worth doing...

    opened by kireet 1
  • cleanup config tests, also fixes #28

    I wanted to make the config tests a bit more sane: minimize the number of temporary classes we need to create and improve naming. Also found issue #28 and fixed it.

    opened by kireet 1
  • unsubstituted parameter doesn't cause an error

    Something like this won't raise an error:

    { 
       "item": {
           "_name": "foo",
           "param":"$bar"
        }
    }
    

    even if we don't set a value for bar. This can easily lead to misconfigured objects.

    opened by kireet 1
  • Ioc refactor

    • Refactor the basic trainer in an IoC pattern, with a single registry for all registrable classes, allowing for maximum customization
    • Separate the example experiments from the library
    • Adapt the examples to the new logic
    • Set cuda as optional in the config file
    opened by petermartigny 1
  • TPU + 16 bit

    hey!

    Not sure if you've seen: https://github.com/williamFalcon/pytorch-lightning.

    The fastest growing PyTorch front-end project.

    We're also now venture funded so we have a fulltime team working on this and will be around for a very long time :)

    https://medium.com/pytorch/pytorch-lightning-0-7-1-release-and-venture-funding-dd12b2e75fb3?postPublishedType=repub

    opened by williamFalcon 0
  • Optional torch imports for trainers

    We import torch modules in the __init__.py of trainers. This PR makes these imports optional, for the case where torch is not installed but we still want to use the base TrainerABC class.

    opened by petermartigny 0
  • move trainerABC to separate file

    This PR moves the TrainerABC class to a separate file. Therefore, someone who wants to use the experiment runner class can do so without having to install torch.

    opened by petermartigny 0
  • Refactor/experiment config

    This PR does the refactoring defined in #76 to have a more easily maintainable configuration logic.

    Also, we remove the pytorch modules that were included in the registry by default. This allows non-DL projects to use the config part of the library.

    opened by petermartigny 0
  • simplify configs reporting

    This PR does a few things:

    • Get rid of saving ini .cfg files
    • Before doing the sequential experiments, we copy the configs, experiment and cache files to a global-reporting directory.
    • This global-reporting directory will also host the outputs from the reporter's report_globally() call
    opened by petermartigny 0
  • [ExperimentRunner] Default value of experiment_cache cause run_all to fail

    The ExperimentRunner.run_all fails if experiment_cache is None.

    The issue comes from line 109, where the default value for the experiment cache (None) is not handled correctly: https://github.com/feedly/transfer-nlp/blob/master/transfer_nlp/runner/experiment_runner.py#L109

    opened by Mathieu4141 0
  • Check that all registrables are registered

    Currently, objects are built one by one, and when one fails an error is thrown.

    It would be great to have a quick pass before instantiating objects to check that all registrable names / aliases are actually registered, and throw an error at that point.

    opened by petermartigny 0
  • Downloader Plugin

    From the talk today, one good point was that reproducibility problems often stem from data inconsistencies. To that end, I think we should have a DataDownloader component that can download data from URLs and save it locally to disk.

    • If the files exist, the downloader can skip the download
    • the downloader should calculate checksums for downloaded files. it should produce a checksums.cfg file to simplify reusing these in configuration later
    • the downloader should allow checksums to be configured in the experiment file. when set, the downloader would verify the downloaded file is the same as the one specified in the experiment.

    so an example json config could be:

    {
      "_name": "Downloader",
      "local_dir": "$my_path",
      "checksums": "$WORK_DIR/checksums_2019_05_23.cfg", <-- produced by a previous download 
      "sentences.txt.gz": {
        "url": "$BASE_URL/sentences.txt.gz",
        "decompress": true
      },
      "word_embeddings.npy": {
        "url": "$BASE_URL/word_embeddings.npy"
      }
    }
    
    opened by kireet 1
Releases(v0.1.6)
  • v0.1.5(Jun 25, 2019)

  • v0.1.3(May 29, 2019)

  • v0.1.2(May 28, 2019)

  • v0.1.1(May 28, 2019)

  • v0.1(May 28, 2019)

    This is a first stable version for Transfer NLP, allowing users to:

    • Keep track of experiments and enforce reproducible research
    • Combine custom and open-source code into controlled experiments

    Here are a few features available in the release:

    • Configuring all objects from an experiment using a json file
    • Running sequential jobs for the same experiment using different sets of parameters (parameter tuning, ablation studies...)
    • Keep track of your experiments and make them reproducible / incrementally improvable
    • Allow dynamic re-creation of any instantiated object during training through object factories
    • Use several basic building blocks: Vocabulary class, PyTorch optimizer, Predictors...
    • Transfer Learning: use the BasicTrainer to fine-tune pre-trained models to your custom downstream tasks.

Owner
Feedly