Scikit-learn style model finetuning for NLP

Overview

Finetune is a library that allows users to leverage state-of-the-art pretrained NLP models for a wide variety of downstream tasks.

Finetune currently supports TensorFlow implementations of the following models:

  1. BERT, from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
  2. RoBERTa, from "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
  3. GPT, from "Improving Language Understanding by Generative Pre-Training"
  4. GPT2, from "Language Models are Unsupervised Multitask Learners"
  5. TextCNN, from "Convolutional Neural Networks for Sentence Classification"
  6. Temporal Convolution Network, from "An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling"
  7. DistilBERT, from "Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT"
Section | Description
------- | -----------
API Tour | Base models, configurables, and more
Installation | How to install using pip or directly from source
Finetune with Docker | Finetune and inference within a Docker container
Documentation | Full API documentation

Finetune API Tour

Finetuning the base language model is as easy as calling Classifier.fit:

model = Classifier()               # Load base model
model.fit(trainX, trainY)          # Finetune base model on custom data
model.save(path)                   # Serialize the model to disk
...
model = Classifier.load(path)      # Reload models from disk at any time
predictions = model.predict(testX) # [{'class_1': 0.23, 'class_2': 0.54, ..}, ..]

Choose your desired base model from finetune.base_models:

from finetune.base_models import BERT, RoBERTa, GPT, GPT2, TextCNN, TCN
model = Classifier(base_model=BERT)

Optimize your model with a variety of configurables. A detailed list of all config items can be found in the finetune docs.

model = Classifier(low_memory_mode=True, lr_schedule="warmup_linear", max_length=512, l2_reg=0.01, oversample=True, ...)

The library supports finetuning for a number of tasks. A detailed description of all target models can be found in the finetune API reference.

from finetune import *
models = (Classifier, MultiLabelClassifier, MultiFieldClassifier, MultipleChoice, # Classify one or more inputs into one or more classes
          Regressor, OrdinalRegressor, MultifieldRegressor,                       # Regress on one or more inputs
          SequenceLabeler, Association,                                           # Extract tokens from a given class, or infer relationships between them
          Comparison, ComparisonRegressor, ComparisonOrdinalRegressor,            # Compare two documents for a given task
          LanguageModel, MultiTask,                                               # Further pretrain your base models
          DeploymentModel                                                         # Wrapper to optimize your serialized models for a production environment
          )

For example usage of each of these target types, see the finetune/datasets directory. For simplicity and runtime, these examples use smaller versions of the published datasets.
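
The example below is a quick, hedged sketch of one of these target types, SequenceLabeler. It is illustrative only: the span-dict annotation format (character offsets plus label and text) is an assumption based on the finetune docs, so check the API reference for the exact schema.

from finetune import SequenceLabeler
from finetune.base_models import BERT

texts = ["Alice moved to Berlin."]
labels = [[{"start": 0, "end": 5, "label": "person", "text": "Alice"},        # character-offset spans
           {"start": 15, "end": 21, "label": "location", "text": "Berlin"}]]  # (format assumed, see docs)

model = SequenceLabeler(base_model=BERT)  # any base model from finetune.base_models
model.fit(texts, labels)                  # finetune on the annotated spans
predictions = model.predict(texts)        # predicted span dicts for each document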

If you have large amounts of unlabeled training data and only a small amount of labeled training data, you can finetune in two steps for best performance.

model = Classifier()               # Load base model
model.fit(unlabeledX)              # Finetune base model on unlabeled training data
model.fit(trainX, trainY)          # Continue finetuning with a smaller amount of labeled data
predictions = model.predict(testX) # [{'class_1': 0.23, 'class_2': 0.54, ..}, ..]
model.save(path)                   # Serialize the model to disk

Installation

Finetune can be installed directly from PyPI using pip:

pip3 install finetune

or installed directly from source:

git clone -b master https://github.com/IndicoDataSolutions/finetune && cd finetune
python3 setup.py develop              # symlinks the git directory to your python path
pip3 install tensorflow-gpu --upgrade # or tensorflow-cpu
python3 -m spacy download en          # download spacy tokenizer

To run finetune on your host, you'll need a working copy of tensorflow-gpu >= 1.14.0 and an up-to-date NVIDIA driver.

You can optionally run the provided test suite to ensure installation completed successfully.

pip3 install pytest
pytest

Docker

If you'd prefer, you can also run finetune in a Docker container. The provided bash scripts assume you have a functional install of docker and nvidia-docker.

git clone https://github.com/IndicoDataSolutions/finetune && cd finetune

# For usage with NVIDIA GPUs
./docker/build_gpu_docker.sh      # builds a docker image
./docker/start_gpu_docker.sh      # starts a docker container in the background, forwards $PWD to /finetune

docker exec -it finetune bash # starts a bash session in the docker container

For CPU-only usage:

./docker/build_cpu_docker.sh
./docker/start_cpu_docker.sh

Documentation

Full documentation and an API Reference for finetune is available at finetune.indico.io.

Comments
  • Very slow inference in 0.5.11

    After training a default classifier, then saving and loading it, model.predict("lorem ipsum") and model.predict_prob take on average 14 seconds, even on a hefty server such as an AWS p3.16xlarge.

    opened by dimidd 17
  • Out of Memory on Small Dataset

    Describe the bug
    When attempting to train a classifier on a small dataset of 8,000 documents, I get an out-of-memory error and the script stops running.

    Minimum Reproducible Example
    Version of finetune = 0.4.1
    Version of tensorflow-gpu = 1.8.0
    Version of cuda = release 9.0, V9.0.176
    Windows 10 Pro

    Load a dataset of documents (X_train) and labels (Y_train), where each document and label is simply a string.

    model = finetune.Classifier(max_length=256, batch_size=1)  # tried reducing the memory footprint
    model.fit(X_train, Y_train)

    Expected behavior
    I expected the model to train, but it doesn't manage to start training.

    Additional context
    I get the following warnings in the jupyter notebook:

    C:\Users...\Python35\site-packages\finetune\encoding.py:294: UserWarning: Some examples are longer than the max_length. Please trim documents or increase max_length. Fallback behaviour is to use the first 254 byte-pair encoded tokens
      "Fallback behaviour is to use the first {} byte-pair encoded tokens".format(max_length - 2)
    C:\Users...\Python35\site-packages\finetune\encoding.py:233: UserWarning: Document is longer than max length allowed, trimming document to 256 tokens.
      max_length
    C:\Users...\tensorflow\python\ops\gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
      "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
    WARNING:tensorflow:From C:\Users...\tensorflow\python\util\tf_should_use.py:118: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
    Instructions for updating:
    Use tf.variables_initializer instead.

    And then I get the following diagnostic info showing up in the command prompt:

    2018-10-04 17:26:36.920118: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2018-10-04 17:26:37.716883: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties: name: Quadro M1200 major: 5 minor: 0 memoryClockRate(GHz): 1.148 pciBusID: 0000:01:00.0 totalMemory: 4.00GiB freeMemory: 3.35GiB
    2018-10-04 17:26:37.725637: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
    2018-10-04 17:26:38.412484: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
    2018-10-04 17:26:38.417413: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
    2018-10-04 17:26:38.419392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
    2018-10-04 17:26:38.421353: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/device:GPU:0 with 3083 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
    [I 17:28:26.081 NotebookApp] Saving file at /projects/language-models/Finetune Package.ipynb
    2018-10-04 17:29:14.118663: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
    2018-10-04 17:29:14.123595: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
    2018-10-04 17:29:14.127649: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
    2018-10-04 17:29:14.135411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
    2018-10-04 17:29:14.138698: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3083 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
    2018-10-04 17:30:06.881174: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:275] Allocator (GPU_0_bfc) ran out of memory trying to allocate 9.00MiB. Current allocation summary follows.
    2018-10-04 17:30:06.900550: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (256): Total Chunks: 60, Chunks in use: 60. 15.0KiB allocated for chunks. 15.0KiB in use in bin. 312B client-requested in use in bin.
    2018-10-04 17:30:06.929551: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (512): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:06.964647: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1024): Total Chunks: 2, Chunks in use: 2. 2.5KiB allocated for chunks. 2.5KiB in use in bin. 2.0KiB client-requested in use in bin.
    2018-10-04 17:30:06.995394: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (2048): Total Chunks: 532, Chunks in use: 532. 1.56MiB allocated for chunks. 1.56MiB in use in bin. 1.56MiB client-requested in use in bin.
    2018-10-04 17:30:07.031613: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (4096): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.061013: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (8192): Total Chunks: 137, Chunks in use: 137. 1.39MiB allocated for chunks. 1.39MiB in use in bin. 1.39MiB client-requested in use in bin.
    2018-10-04 17:30:07.093603: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (16384): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.130530: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (32768): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.170321: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (65536): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.212730: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (131072): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.246329: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (262144): Total Chunks: 2, Chunks in use: 2. 512.0KiB allocated for chunks. 512.0KiB in use in bin. 512.0KiB client-requested in use in bin.
    2018-10-04 17:30:07.288640: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (524288): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.303248: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1048576): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.332990: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (2097152): Total Chunks: 71, Chunks in use: 71. 159.75MiB allocated for chunks. 159.75MiB in use in bin. 159.75MiB client-requested in use in bin.
    2018-10-04 17:30:07.364897: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (4194304): Total Chunks: 69, Chunks in use: 68. 466.99MiB allocated for chunks. 459.00MiB in use in bin. 459.00MiB client-requested in use in bin.
    2018-10-04 17:30:07.396862: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (8388608): Total Chunks: 140, Chunks in use: 140. 1.23GiB allocated for chunks. 1.23GiB in use in bin. 1.23GiB client-requested in use in bin.
    2018-10-04 17:30:07.428029: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (16777216): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.464813: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (33554432): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.494067: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (67108864): Total Chunks: 10, Chunks in use: 10. 1.17GiB allocated for chunks. 1.17GiB in use in bin. 1.17GiB client-requested in use in bin.
    2018-10-04 17:30:07.524156: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.550345: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (268435456): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
    2018-10-04 17:30:07.578392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:646] Bin for 9.00MiB was 8.00MiB, Chunk State:
    2018-10-04 17:30:07.600123: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980000 of size 1280
    2018-10-04 17:30:07.629493: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980500 of size 1280
    2018-10-04 17:30:07.649189: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000801980A00 of size 125144064
    2018-10-04 17:30:07.676965: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 00000008090D9600 of size 7077888
    2018-10-04 17:30:07.699245: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 0000000809799600 of size 3072
    2018-10-04 17:30:07.718738: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:665] Chunk at 000000080979A200 of size 3072

    ...and so on. This is, in my opinion, a pretty small dataset, and I've made the max length pretty small, so I don't think this is a hardware limitation but a bug.
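
    For what it's worth, the configurables shown at the top of this README include a low_memory_mode flag; the sketch below is an untested guess at a lower-footprint setup, and actual memory behaviour will depend on the GPU and library version.

    from finetune import Classifier

    # Untested sketch: shorter sequences, batch size 1, and low_memory_mode
    # (a config flag shown earlier in this README) to shrink activation memory.
    model = Classifier(low_memory_mode=True, max_length=256, batch_size=1)
    model.fit(X_train, Y_train)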

    opened by stevesmit 12
  • interpolate_pos_embed on MultiTask

    Hi, it looks like this parameter from the MultiTask model is not being properly inherited by the input_pipeline.

    from finetune import Classifier, MultiTask
    
    MAX_LENGTH = 300
    finetune_config = {'batch_size': 4,
                       'interpolate_pos_embed': True,
                       'n_epochs': 1,  # default 3
                       'train_embeddings': False,
                       'num_layers_trained': 3,
                       'max_length': MAX_LENGTH}
    multi_model = MultiTask({"sentiment": Classifier,
                             "tags": Classifier},
                            **finetune_config)
    

    The code above builds the MultiTask object.

    The following code finetunes it without problems:

    multi_model.finetune(X={"sentiment": X_train.regex_text.values,
                            "tags": X_train.regex_text.values},
                         Y={"sentiment": y_train.sentiment,
                            "tags": y_train.full_topic},
                         batch_size=4)
    

    I have also verified that multi_model.input_pipeline.config['interpolate_pos_embed'] is True.

    But when prediction time comes:

    y_pred = multi_model.predict({"sentiment": X_train.regex_text.values,
                                  "tags": X_train.regex_text.values})
    

    It fails with:

    ValueError: Max Length cannot be greater than 300 if interpolate_pos_embed is turned off
    

    I do not know if I am missing something in the setup or if there is a conflict between the parameters of the distinct objects.

    Thanks very much, Madison, for the great job! The MultiTask model is a fantastic tool for unevenly labeled multi-objective data.

    opened by Guillermogsjc 11
  • Loading a model from 0.4.1 in 0.5.11

    Describe the bug
    After saving a model on 0.5.10 using Classifier.save("my_model.bin") and upgrading to 0.5.11, loading it with Classifier.load("my_model.bin") results in KeyError: 'base_model_path'.

    opened by dimidd 11
  • A different way of doing the similarity/comparison task?

    Hey! Thanks for the awesome work. I was wondering if I could use and extend finetune to do the following:

    Instead of using (Start, Text1, Delim, Text2, Extract) and (Start, Text2, Delim, Text1, Extract) as in the paper, can we use (Start, Text1, Extract) and (Start, Text2, Extract) separately through the transformer?

    This could be thought of as obtaining sentence/document embeddings for Text1 and Text2 separately. Upon doing that, I would like to compare their similarity using a distance metric such as cosine distance (i.e. train the transformer as a siamese network).

    Would you suggest I build such a model on top of a fork of finetune?
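
    For prototyping, one rough option that avoids forking is to embed each text separately with featurize (which, per the finetune docs, returns one feature vector per input) and compare the vectors with a cosine metric. This is an untested sketch, and note the transformer is not actually trained with a siamese objective here:

    import numpy as np
    from finetune import Classifier

    model = Classifier()
    # featurize returns one embedding per input document
    emb1, emb2 = model.featurize(["first document", "second document"])
    cosine_sim = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))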

    opened by chaitjo 11
  • Support for pre-training the language model

    Is your feature request related to a problem? Please describe.
    In order to use the classifier on different languages / specific domains, it would be useful to be able to pretrain the language model.

    Describe the solution you'd like
    Calling .fit on a corpus (i.e. no labels) should train the language model.

    model.fit(corpus)
    

    Describe alternatives you've considered
    Use the original repo, which doesn't have a simple-to-use interface.

    enhancement 
    opened by elyase 11
  • ValueError: Couldn't find trained model at /tmp/Finetune14yvac9b.

    Describe the bug
    INFO:finetune:Saving tensorboard output to /tmp/Finetune14yvac9b


    ValueError                                Traceback (most recent call last)
    in
          6     inputs = {"x": tf.placeholder(shape=xshapes, dtype=xtypes)}
          7     return tf.estimator.export.ServingInputReceiver(inputs, inputs)
    ----> 8 estimator.export_saved_model(export_dir_base='saved_model', serving_input_receiver_fn=serving_input_receiver_fn)

    ~/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py in export_saved_model(self, export_dir_base, serving_input_receiver_fn, assets_extra, as_text, checkpoint_path, experimental_mode)
        730         as_text=as_text,
        731         checkpoint_path=checkpoint_path,
    --> 732         strip_default_attrs=True)
        733
        734     def experimental_export_all_saved_models(

    ~/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py in _export_all_saved_models(self, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path, strip_default_attrs)
        825         else:
        826             raise ValueError("Couldn't find trained model at {}.".format(
    --> 827                 self._model_dir))
        828
        829         export_dir = export_lib.get_timestamped_export_dir(export_dir_base)

    ValueError: Couldn't find trained model at /tmp/Finetune14yvac9b.

    Minimum Reproducible Example

    import tensorflow as tf  # needed for tf.placeholder below
    from finetune import MultiLabelClassifier

    model = MultiLabelClassifier.load('comp_gpt2.model')
    estimator, hooks = model.get_estimator()
    (xtypes, ytypes), (xshapes, yshapes) = model.input_pipeline.feed_shape_type_def()

    def serving_input_receiver_fn():
        inputs = {"x": tf.placeholder(shape=xshapes, dtype=xtypes)}
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)

    estimator.export_saved_model(export_dir_base='saved_model', serving_input_receiver_fn=serving_input_receiver_fn)

    opened by emtropyml 10
  • getting much lower accuracy with new release of finetune library

    Describe the bug
    I updated my finetune library to the latest version two days ago. As a sanity check, I loaded my fine-tuned and saved models from the previous version. I get totally different training and test accuracies: in the previous version, my train and test accuracies were 90% and 82%; now, with this new release, the same fine-tuned model, and the same datasets, I am getting 34% on the training set and 16% on the test set. This is a huge difference. I assume there is a bug, or something else going on?

    My code for fine-tuning:

    import time
    start = time.time()
    model = Classifier(n_epochs=2, base_model=GPT2Model, tensorboard_folder='/workspace/checkpoints',
                       max_length=1024, val_size=1000, chunk_long_sequences=False, keep_best_model=True)
    model.fit(trainX, trainY)
    print("total training time:", time.time() - start)
    

    for testing:

    import numpy as np

    # Load the saved model
    model = Classifier.load('./checkpoints/2epochs_GPT2')
    # Test accuracy for the test set
    pred_test = model.predict(testX)
    accuracy = np.mean(pred_test == testY)
    print('Test Accuracy: {:0.3f}'.format(accuracy))
    
    opened by rnyak 9
  • Can I use to generate text?

    Hi, this seems like great work by the team. According to the documentation, I understand that every model uses a pre-trained language model. Can I use it for the following scenarios, and if yes, how?

    1. Fine-tune the pre-trained language model on my own text corpus and then generate(sample) text.
    2. Fine-tune the pre-trained language model on my own text corpus and then score any given text/sentence.

    Thanks.
    opened by abubakar-ucr 9
  • Slow unsupervised training

    Thank you for your library; the supervised finetuning works very well. However, when I try to train on unlabelled data ( model.fit(unlabeledX) ), training is much slower (9 s/it) than supervised training (1.7 s/it). This is on one K80 GPU. I am not sure why unsupervised training is slower; doesn't supervised training tune the language model as well?

    opened by chiayewken 9
  •  eval_acc parameter

    Describe the bug
    I set eval_acc = True and val_size = 1000. I am fine-tuning the Classifier model for 3 epochs. I get 90% training and 82% test set accuracies, but when I check the TensorBoard accuracy plot, I see the val accuracy is 49%. That does not seem correct to me.

    I am not sure if the eval_acc is calculated correctly.

    Expected behavior
    How can we print out the val accuracy during fine-tuning, at least once per epoch?

    opened by rnyak 8
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1 to latest

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.cpu

    We recommend upgrading to tensorflow/tensorflow:latest, as this image has only 29 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1-gpu to latest-gpu

    This PR was automatically created by Snyk using the credentials of a real user.


    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.gpu

    We recommend upgrading to tensorflow/tensorflow:latest-gpu, as this image has only 49 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | medium severity | 514 | CVE-2022-32221 SNYK-UBUNTU2004-CURL-3070971 | No Known Exploit |
    | medium severity | 514 | Arbitrary Code Injection SNYK-UBUNTU2004-GNUPG2-2940666 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-GZIP-2442549 | No Known Exploit |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by ashmuck 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1 to 2.11.0

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.cpu

    We recommend upgrading to tensorflow/tensorflow:2.11.0, as this image has only 29 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0
  • [Snyk] Security upgrade tensorflow/tensorflow from 2.7.1-gpu to 2.11.0-gpu

    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • docker/Dockerfile.gpu

    We recommend upgrading to tensorflow/tensorflow:2.11.0-gpu, as this image has only 49 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | low severity | 536 | Improper Check for Dropped Privileges SNYK-UBUNTU2004-BASH-581100 | Mature |
    | high severity | 614 | Loop with Unreachable Exit Condition ('Infinite Loop') SNYK-UBUNTU2004-OPENSSL-2426343 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387723 | No Known Exploit |
    | medium severity | 514 | Files or Directories Accessible to External Parties SNYK-UBUNTU2004-UTILLINUX-2387728 | No Known Exploit |
    | medium severity | 514 | Improper Input Validation SNYK-UBUNTU2004-XZUTILS-2442551 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0