Sequence-to-Sequence Framework in PyTorch

Overview

nmtpytorch

License: MIT · Python 3.7

nmtpytorch allows training of various end-to-end neural architectures, including but not limited to neural machine translation, image captioning and automatic speech recognition systems. The initial codebase was written in Theano and was inspired by the famous dl4mt-tutorial codebase.

nmtpytorch received valuable contributions from the Grounded Sequence-to-sequence Transduction Team of the Frederick Jelinek Memorial Summer Workshop 2018:

Loic Barrault, Ozan Caglayan, Amanda Duarte, Desmond Elliott, Spandana Gella, Nils Holzenberger, Chirag Lala, Jasmine (Sun Jae) Lee, Jindřich Libovický, Pranava Madhyastha, Florian Metze, Karl Mulligan, Alissa Ostapenko, Shruti Palaskar, Ramon Sanabria, Lucia Specia and Josiah Wang.

If you use nmtpytorch, you may want to cite the following paper:

@article{nmtpy2017,
  author    = {Ozan Caglayan and
               Mercedes Garc\'{i}a-Mart\'{i}nez and
               Adrien Bardet and
               Walid Aransa and
               Fethi Bougares and
               Lo\"{i}c Barrault},
  title     = {NMTPY: A Flexible Toolkit for Advanced Neural Machine Translation Systems},
  journal   = {Prague Bull. Math. Linguistics},
  volume    = {109},
  pages     = {15--28},
  year      = {2017},
  url       = {https://ufal.mff.cuni.cz/pbml/109/art-caglayan-et-al.pdf},
  doi       = {10.1515/pralin-2017-0035},
  timestamp = {Tue, 12 Sep 2017 10:01:08 +0100}
}

Installation

You may want to install NVIDIA's Apex extensions. As of February 2020, the only use we make of Apex is to monkey-patch nn.LayerNorm with Apex's fused implementation when the library is installed and found.
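
A minimal sketch of what this monkey-patching amounts to (illustrative, not the exact code in nmtpytorch):

import torch.nn as nn

try:
    # Apex ships a fused CUDA LayerNorm with the same constructor signature
    from apex.normalization import FusedLayerNorm
    nn.LayerNorm = FusedLayerNorm  # later nn.LayerNorm(...) calls now use Apex
except ImportError:
    # Apex is not installed or not found; keep the stock PyTorch implementation
    pass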

pip

You can install nmtpytorch from PyPI using pip (or pip3 depending on your operating system and environment):

$ pip install nmtpytorch

conda

We provide an environment.yml file in the repository that you can use to create a ready-to-use anaconda environment for nmtpytorch:

$ conda update --all
$ git clone https://github.com/lium-lst/nmtpytorch.git
$ conda env create -f nmtpytorch/environment.yml

IMPORTANT: After installing nmtpytorch, you need to run nmtpy-install-extra to download METEOR-related files into your ${HOME}/.nmtpy folder. This step is only required once.
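
Concretely:

$ nmtpy-install-extra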

Development Mode

For continuous development and testing, it is sufficient to run python setup.py develop in the root folder of your Git checkout. From then on, all modifications to the source tree are taken into account immediately, without requiring reinstallation.
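
For example, using the repository URL from the conda instructions above:

$ git clone https://github.com/lium-lst/nmtpytorch.git
$ cd nmtpytorch
$ python setup.py develop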

Documentation

We currently only provide some preliminary documentation in our wiki.

Release Notes

See NEWS.md.

Comments
  • Error when trying to start training

    When I run the configuration file I get this error and I don't know how to solve it. Could you help me?

    Traceback (most recent call last):
      File "/nmtpytorch/pytorch/bin/nmtpy", line 6, in <module>
        exec(compile(open(__file__).read(), __file__, 'exec'))
      File "/nmtpytorch/bin/nmtpy", line 120, in <module>
        model = getattr(models, opts.train['model_type'])(opts=opts, logger=log)
      File "/nmtpytorch/nmtpytorch/models/nmt.py", line 44, in __init__
        self.vocabs[lang] = Vocabulary(opts.vocabulary[lang])
      File "nmtpytorch/nmtpytorch/vocabulary.py", line 29, in __init__
        self._map = json.load(open(self.vocab))
      File "/usr/lib/python3.5/json/__init__.py", line 268, in load
        parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
      File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
        return _default_decoder.decode(s)
      File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
        raise JSONDecodeError("Expecting value", s, err.value) from None
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

    opened by bertanunez 7
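
    A likely cause is that the vocabulary file referenced by the configuration (opts.vocabulary in the traceback) is empty or not valid JSON. A quick check, assuming a hypothetical vocabulary path:

      import json

      # Load the file the same way Vocabulary does; an empty or non-JSON
      # file raises the same "Expecting value" JSONDecodeError as above.
      with open('data/train.vocab.en.json') as f:  # hypothetical path
          vocab = json.load(f)
      print(len(vocab), 'entries')
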
  • BrokenPipeError: [Errno 32] Broken pipe

    Hi, I encountered an issue when I use nmtpy to train a model. Whenever it reaches the evaluation step, it always raises a BrokenPipeError:

    Traceback (most recent call last):
      File "/home/zmykevin/software/miniconda2/envs/nmtpy_pytorch/bin/nmtpy", line 6, in <module>
        exec(compile(open(__file__).read(), __file__, 'exec'))
      File "/home/zmykevin/machine_translation_vision/code/mtv_kevin/nmtpytorch/bin/nmtpy", line 132, in <module>
        loop()
      File "/home/zmykevin/machine_translation_vision/code/mtv_kevin/nmtpytorch/nmtpytorch/mainloop.py", line 246, in __call__
        while self.train_epoch():
      File "/home/zmykevin/machine_translation_vision/code/mtv_kevin/nmtpytorch/nmtpytorch/mainloop.py", line 146, in train_epoch
        self.do_validation()
      File "/home/zmykevin/machine_translation_vision/code/mtv_kevin/nmtpytorch/nmtpytorch/mainloop.py", line 217, in do_validation
        results.extend(self.evaluator.score(hyps))
      File "/home/zmykevin/machine_translation_vision/code/mtv_kevin/nmtpytorch/nmtpytorch/evaluator.py", line 38, in score
        scorer.compute(self.refs, hyps, **self.kwargs[key]))
      File "/home/zmykevin/machine_translation_vision/code/mtv_kevin/nmtpytorch/nmtpytorch/metrics/meteor.py", line 55, in compute
        proc.stdin.write(line + '\n')
    BrokenPipeError: [Errno 32] Broken pipe

    I wonder if you have any idea what causes this problem? Thanks.

    opened by zmykevin 5
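
    For context, the METEOR scorer pipes hypotheses into a long-running external process (meteor.py in the traceback); a BrokenPipeError on stdin.write() means that process already died, e.g. because the METEOR files were never downloaded with nmtpy-install-extra or Java is unavailable. A minimal way to reproduce the failing pattern (Linux, illustrative):

      import subprocess
      import time

      # A child that exits immediately stands in for a scorer that died at startup
      proc = subprocess.Popen(['true'], stdin=subprocess.PIPE,
                              universal_newlines=True)
      time.sleep(0.1)  # give the child time to terminate

      # Writing to the dead child's stdin raises
      # BrokenPipeError: [Errno 32] Broken pipe, as in the traceback above
      proc.stdin.write('hypothesis line\n')
      proc.stdin.flush()
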
  • Error when trying example

    evaluator.py", line 14, in init self.refs = list(refs.parent.glob(refs.name)) AttributeError: 'str' object has no attribute 'parent'

    when i trying use mmt-task-en-fr-nmt.conf,the error occurs

    opened by xiang-xiang-zhu 4
  • AttentiveMNMTFeatures model

    When trying to train an AMNMTF model I get this error:

    File "/home/usuaris/veu/tfgveu12/anaconda/envs/tfg/lib/python3.6/site-packages /nmtpytorch-1.4.0-py3.6.egg/nmtpytorch/models/amnmtfeats.py", line 85, in encode feats = batch['image'].view( File "/home/usuaris/veu/tfgveu12/anaconda/envs/tfg/lib/python3.6/collections/_ init_.py", line 991, in getitem raise KeyError(key) KeyError: 'image'

    I guess that I am missing something inside the config file but I don't know what it is.

    opened by bertanunez 3
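
    For reference, AttentiveMNMTFeatures reads its visual features from batch['image'], so the configuration must define a data source named image. A hypothetical sketch (exact keys, paths and direction syntax depend on the nmtpytorch version):

      [model]
      direction: en, image -> fr

      [data]
      train_set: {'en': './train.en', 'image': './train-feats.npy', 'fr': './train.fr'}
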
  • Unchanged training result

    Hi, I trained and tested the MNMT German model on Multi30k several times, but every run gave me the same BLEU down to 0.001 precision, and there is no fluctuation in train_loss at any given epoch across the training logs (for example, the loss after epoch 28 is the same across all 5 runs). Is this normal?

    opened by sampalomad 3
  • When trying the example mmt-task-en-fr-nmt.conf

    Traceback (most recent call last):
      File "/home/hx/anaconda3/envs/torch1.8/bin/nmtpy", line 164, in <module>
        loop()
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/mainloop.py", line 313, in __call__
        while self.train_epoch():
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/mainloop.py", line 238, in train_epoch
        self.do_validation()
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/mainloop.py", line 268, in do_validation
        hyps = beam_search([self.net], self.beam_iterator,
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/search.py", line 130, in beam_search
        *[f_next(cd, dec.get_emb(idxs, tstep), h_t[tile]) for
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/search.py", line 130, in <listcomp>
        *[f_next(cd, dec.get_emb(idxs, tstep), h_t[tile]) for
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/layers/decoders/conditional.py", line 200, in f_next
        txt_alpha_t, txt_z_t = self.att(
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/layers/attention/mlp.py", line 52, in forward
        inner_sum = self.ctx2ctx(ctx) + self.hid2ctx(hid)
    RuntimeError: The size of tensor a (31) must match the size of tensor b (32) at non-singleton dimension 1

    I don't know how to solve it. It seems that two tensor sizes are not equal, but I don't know which tensors.

    opened by xiang-xiang-zhu 2
  • How2 Run?

    From the how2-dataset README.md:

    How2 Run

    The results in the dataset paper can be reproduced using nmtpytorch. We provide instructions and configuration files to reproduce three baselines on multi-modal speech-to-text, multi-modal machine translation, and multi-modal summarization.

    But I cannot find any instructions or configuration files; could you update the README.md instructions in nmtpytorch or how2-dataset?

    opened by Eurus-Holmes 1
  • Error when trying the example

    When I run mmt-task-en-fr-encdecinit.conf I get this error and I don't know how to solve it. Could you help me?

    File "/home/ew/.conda/envs/nmtpy/bin/nmtpy", line 145, in model = getattr(models, opts.train['model_type'])(opts=opts) AttributeError: module 'nmtpytorch.models' has no attribute 'MultimodalNMT'

    opened by hsuanlyh1997 1
  • Enhancement: tutorials about training on self-defined datasets

    Thanks for this wonderful library! It would be much more intuitive for users to get started if you provided a simple but clear walkthrough of the training process on a self-defined dataset.

    opened by jinfagang 1
  • Some bugs here are caused by the version of PyTorch

    Can you please update this repository's code? When I run this project, there are some problems due to the PyTorch version.

    For example:

    Traceback (most recent call last):
      File "/home/hx/anaconda3/envs/torch1.8/bin/nmtpy", line 174, in <module>
        loop()
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/mainloop.py", line 330, in __call__
        while self.train_epoch():
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/mainloop.py", line 256, in train_epoch
        self.do_validation()
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/mainloop.py", line 286, in do_validation
        hyps = self.net.beam_search(
      File "/home/hx/anaconda3/envs/torch1.8/lib/python3.8/site-packages/nmtpytorch/models/nmt.py", line 433, in beam_search
        beam[:tstep] = beam[:tstep].gather(2, pdxs.repeat(tstep, 1, 1))
    RuntimeError: gather_out_cuda(): Expected dtype int64 for index

    This problem arises when I try to run the example. I saw on the Internet that it may be caused by PyTorch version differences (1.4 -> 1.7). I run the code with torch==1.8.0+cu111 and Python 3.7.0.

    opened by xiang-xiang-zhu 0
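
    For reference, newer PyTorch versions require int64 indices for gather(). A hedged one-line workaround is to cast the back-pointer indices explicitly in the nmt.py line quoted above:

      # pdxs holds beam back-pointers; newer PyTorch insists they are int64
      beam[:tstep] = beam[:tstep].gather(2, pdxs.long().repeat(tstep, 1, 1))
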
  • TPU + 16 bit + more

    hey!

    Not sure if you've seen: https://github.com/williamFalcon/pytorch-lightning.

    The fastest growing PyTorch front-end project.

    We're also now venture-funded, so we have a full-time team working on this and will be around for a very long time :)

    https://medium.com/pytorch/pytorch-lightning-0-7-1-release-and-venture-funding-dd12b2e75fb3?postPublishedType=repub

    (in fact, Huggingface has started using it for their models)

    opened by williamFalcon 0
Releases (v4.0.0)
  • v4.0.0 (Dec 18, 2018)

    This release supports PyTorch >= 0.4.1, including the recent 1.0 release. The setup.py and environment.yml files default to installing 1.0.0.

    • Critical: NumpyDataset now returns tensors of shape (H*W, N, C) for 3D/4D convolutional features and (1, N, C) for 2D feature files. Models should be adjusted to this new shaping (see the sketch after this list).
    • An order_file per split (ord: path to a text file with one integer per line) can be given in the configuration to reorder the features of numpy tensors, e.g. to flexibly revert, shuffle or tile them.
    • Better dimension checking to catch shape inconsistencies early.
    • Added LabelDataset for single label input/outputs with associated Vocabulary for integer mapping.
    • Added handle_oom=(True|False) argument for the [train] section to recover from GPU out-of-memory (OOM) errors during training. This is disabled by default; you need to enable it from the experiment configuration file. Note that it is still possible to get an OOM during validation perplexity computation. If you hit that, reduce the eval_batch_size parameter.
    • Added de-hyphen post-processing filter to stitch back the aggressive hyphen splitting of Moses during early-stopping evaluations.
    • Added optional projection layer and layer normalization to TextEncoder.
    • Added enc_lnorm and sched_sampling options to NMT to enable layer normalization for the encoder and scheduled sampling at a given probability.
    • ConditionalDecoder can now be initialized with max-pooled encoder states or the last state as well.
    • You can now experiment with different decoders for NMT by changing the dec_variant option.
    • Collect all attention weights in self.history dictionary of the decoders.
    • Added n-best output to nmtpy translate with the argument -N.
    • Changed the way -S works for nmtpy translate. You now always give the split name with -s, while -S overrides the input data sources defined for that split in the configuration file.
    • Removed decoder-initialized multimodal NMT MNMTDecInit. Same functionality exists within the NMT model by using the model option dec_init=feats.
    • New model MultimodalNMT: supports encoder initialization, decoder initialization, both at once, and concatenation, prepending or appending of visual features to the embeddings. This model covers almost all the models from LIUM-CVC's WMT17 multimodal systems except the multiplicative interaction variants such as trgmul.
    • New model MultimodalASR: encoder-decoder-initialized ASR model. See the paper.
    • New model AttentiveCaptioning: similar to, but not an exact reproduction of, show-attend-and-tell; it uses feature files instead of raw images.
    • New model AttentiveMNMTFeaturesFA: LIUM-CVC's WMT18 multimodal system, i.e. filtered attention.
    • New (experimental) model NLI: A simple LSTM-based NLI baseline for SNLI dataset:
      • direction should be defined as direction: pre:Text, hyp:Text -> lb:Label
      • pre, hyp and lb keys point to plain text files with one sentence per line. A vocabulary should be constructed even for the labels to fit the nmtpy architecture.
      • acc should be added to eval_metrics to compute accuracy.
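
    A minimal sketch of the new NumpyDataset shaping from the first item above (sizes and variable names are illustrative):

      import torch

      batch_size, channels = 32, 2048

      # 3D/4D convolutional feature files now yield (H*W, N, C),
      # e.g. an 8x8 spatial grid flattened into 64 positions
      conv_feats = torch.zeros(8 * 8, batch_size, channels)

      # 2D feature files (one global vector per sample) yield (1, N, C)
      pooled_feats = torch.zeros(1, batch_size, channels)
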
  • v2.0.0 (Sep 26, 2018)

    • Ability to install through pip.
    • Advanced layers are now organized into subfolders.
    • New basic layers: Convolution over sequence, MaxMargin.
    • New attention layers: Co-attention, multi-head attention, hierarchical attention.
    • New encoders: Arbitrary sequence-of-vectors encoder, BiLSTMp speech feature encoder.
    • New decoders: Multi-source decoder, switching decoder, vector decoder.
    • New datasets: Kaldi dataset (.ark/.scp reader), Shelve dataset, Numpy sequence dataset.
    • Added learning rate annealing: See lr_decay* options in config.py.
    • Removed subword-nmt and METEOR files from the repository. We now depend on the pip package for subword-nmt; for METEOR, nmtpy-install-extra should be run after installation.
    • More multi-task and multi-input/output translation and training regimes.
    • New early-stopping metrics: Character and word error rate (cer,wer) and ROUGE (rouge).
    • Curriculum learning option for the BucketBatchSampler, i.e. length-ordered batches.
    • New models:
      • ASR: Listen-attend-and-spell like automatic speech recognition
      • Multitask*: Experimental multi-tasking & scheduling between many inputs/outputs.
  • v1.4.0 (May 9, 2018)

    • Add different environment.yml files for easy installation using conda. You can now create a ready-to-use conda environment by just calling conda env create -f environment-cuda<VER>.yml.
    • Make NumpyDataset memory efficient by keeping float16 arrays as they are until batch creation time.
    • Rename Multi30kRawDataset to Multi30kDataset which now supports both raw image files and pre-extracted visual features file stored as .npy.
    • Add CNN feature extraction script under scripts/.
    • Add doubly stochastic attention to ShowAttendAndTell and multimodal NMT.
    • New model MNMTDecInit to initialize the decoder with auxiliary features.
    • New model AMNMTFeatures, which is the attentive MMT but with feature files instead of the memory-hungry end-to-end feature extraction.
  • v1.3.2 (May 2, 2018)

  • v1.3.1 (May 1, 2018)

  • v1.3.0 (Apr 30, 2018)

    • Added Multi30kRawDataset for training end-to-end systems from raw images as input.
    • Added NumpyDataset to read .npy/.npz tensor files as input features.
    • You can now pass -S to nmtpy train to produce shorter experiment filenames that do not include all the hyperparameters.
    • New post-processing filter option de-spm for Google SentencePiece (SPM) processed files.
    • sacrebleu is now a dependency as it is now accepted as an early-stopping metric. It only makes sense to use it with SPM processed files since they are detokenized once post-processed.
    • Added sklearn as a dependency for some metrics.
    • Added momentum and nesterov parameters to [train] section for SGD.
    • ImageEncoder layer is improved in many ways. Please see the code for further details.
    • Added unmerged upstream PR for ModuleDict() support.
    • METEOR will now fall back to English if the language cannot be detected from file suffixes.
    • -f now produces a separate numpy file for token frequencies when building vocabulary files with nmtpy-build-vocab.
    • Added new command nmtpy test for non beam-search inference modes.
    • Removed the nmtpy resume command and added a pretrained_file option for [train] to initialize model weights from a checkpoint.
    • Added a freeze_layers option for [train] to give a comma-separated list of layer-name prefixes to freeze (see the snippet after this list).
    • Improved seeding: the seed is now printed so that results can be reproduced.
    • Added IPython notebook for attention visualization.
    • Layers
      • New shallow SimpleGRUDecoder layer.
      • TextEncoder: Ability to set maxnorm and gradscale of embeddings and work with or without sorted-length batches.
      • ConditionalDecoder: Make it work with GRU/LSTM, allow setting maxnorm/gradscale for embeddings.
      • ConditionalMMDecoder: Same as above.
    • nmtpy translate
      • --avoid-double and --avoid-unk removed for now.
      • Added Google's length penalty normalization switch --lp-alpha.
      • Added ensembling which is enabled automatically if you give more than 1 model checkpoints.
    • New machine learning metric wrappers in utils/ml_metrics.py:
      • Label-ranking average precision lrap
      • Coverage error
      • Mean reciprocal rank
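
    A hypothetical [train] snippet combining the pretrained_file and freeze_layers options above (the path and layer-name prefixes are illustrative):

      [train]
      # initialize model weights from a checkpoint (replaces nmtpy resume)
      pretrained_file: /path/to/experiment.best.bleu.ckpt
      # comma-separated layer-name prefixes to freeze (hypothetical prefixes)
      freeze_layers: enc,dec.emb
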
  • v1.2.0 (Feb 20, 2018)

    • You can now use $HOME and $USER in your configuration files.
    • Fixed an overflow error that would cause NMT with more than 255 tokens to fail.
    • METEOR worker process is now correctly killed after validations.
    • Many runs of an experiment are now suffixed with a unique random string instead of incremental integers to avoid race conditions in cluster setups.
    • Replaced utils.nn.get_network_topology() with a new Topology class that parses the direction string of the model in a smarter way.
    • If CUDA_VISIBLE_DEVICES is set, the GPUManager will always honor it.
    • Dropped creation of temporary/advisory lock files under /tmp for GPU reservation.
    • Time measurements during training are now structured into batch overhead, training and evaluation timings.
    • Datasets
      • Added TextDataset for standalone text file reading.
      • Added OneHotDataset, a variant of TextDataset where the sequences are not prefixed/suffixed with <bos> and <eos> respectively.
      • Added experimental MultiParallelDataset that merges an arbitrary number of parallel datasets together.
    • nmtpy translate
      • .nodbl and .nounk suffixes are now added to output files for --avoid-double and --avoid-unk arguments respectively.
      • A reasonably model-agnostic beam_search() is now separated out into its own file, nmtpytorch/search.py.
      • max_len default is increased to 200.
  • v1.1.0 (Jan 25, 2018)

    • New experimental Multi30kDataset and ImageFolderDataset classes
    • torchvision dependency added for CNN support
    • nmtpy-coco-metrics now computes one METEOR without norm=True
    • Mainloop mechanism is completely refactored with backward-incompatible configuration option changes for [train] section:
      • patience_delta option is removed
      • Added eval_batch_size to define batch size for GPU beam-search during training
      • eval_freq now defaults to 3000, i.e. validation every 3000 minibatches
      • eval_metrics now defaults to loss. As before, you can provide a list of metrics like bleu,meteor,loss to compute all of them and early-stop based on the first
      • Added eval_zero (default: False) which tells to evaluate the model once on dev set right before the training starts. Useful for sanity checking if you fine-tune a model initialized with pre-trained weights
      • Removed save_best_n: we no longer save the best N models on dev set w.r.t. early-stopping metric
      • Added save_best_metrics (default: True) which will save the best models on the dev set w.r.t. each metric provided in eval_metrics. This partly compensates for the removal of save_best_n
      • checkpoint_freq now defaults to 5000, i.e. a checkpoint every 5000 minibatches.
      • Added n_checkpoints (default: 5) to define the number of last checkpoints that will be kept if checkpoint_freq > 0 i.e. checkpointing enabled
    • Added ExtendedInterpolation support to configuration files:
      • You can now define intermediate variables in .conf files to avoid typing the same paths again and again. A variable can be referenced from within its own section using the tensorboard_dir: ${save_path}/tb notation. Cross-section references are also possible: ${data:root} will be replaced by the value of the root variable defined in the [data] section (see the sketch after this list).
    • Added -p/--pretrained to nmtpy train to initialize the weights of the model using another checkpoint .ckpt.
    • Improved input/output handling for nmtpy translate:
      • -s accepts a comma-separated list of test sets defined in the configuration file of the experiment to translate them at once. Example: -s val,newstest2016,newstest2017
      • The mutually exclusive counterpart of -s is -S which receives a single input file of source sentences.
      • For both cases, an output prefix should now be provided with -o. In the case of multiple test sets, the name of the test set and the beam size will be appended to the output prefix. If you just provide a single file with -S, the final output name will only reflect the beam size information.
    • Two new arguments for nmtpy-build-vocab:
      • -f: Stores frequency counts as well inside the final json vocabulary
      • -x: Does not add special markers <eos>,<bos>,<unk>,<pad> into the vocabulary
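
    A short sketch of ExtendedInterpolation in a .conf file, as described above (all keys except tensorboard_dir are illustrative):

      [data]
      # intermediate variable
      root: /data/multi30k
      # same-section reference
      train_src: ${root}/train.en

      [train]
      save_path: /tmp/experiments
      # notation from the item above
      tensorboard_dir: ${save_path}/tb
      # cross-section reference to the root variable in [data]
      eval_file: ${data:root}/val.en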

    Layers/Architectures

    • Added Fusion() layer to concat, sum or mul an arbitrary number of inputs
    • Added experimental ImageEncoder() layer to seamlessly plug a VGG or ResNet CNN using torchvision pretrained models
    • Attention layer arguments improved. You can now select the bottleneck dimensionality for MLP attention with att_bottleneck. The dot attention is still not tested and probably broken.

    Changes in NMT

    • dec_init defaults to mean_ctx, i.e. the decoder will be initialized with the mean context computed from the source encoder
    • enc_lnorm, which was just a placeholder, is now removed since we do not provide layer normalization for now
    • Beam search now runs completely on the GPU
Owner
LIUM (Laboratory of Informatics of Le Mans University)