Transformers4Rec is a flexible and efficient library for sequential and session-based recommendation, available for both PyTorch and TensorFlow.

Overview


It works as a bridge between NLP and recommender systems by integrating with one of the most popular NLP frameworks, HuggingFace Transformers, making state-of-the-art Transformer architectures available to RecSys researchers and industry practitioners.

Sequential and Session-based recommendation with Transformers4Rec

Transformers4Rec supports multiple input features and provides configurable building blocks that can be easily combined for custom architectures.

You can build a fully GPU-accelerated pipeline for sequential and session-based recommendation with Transformers4Rec thanks to its smooth integration with other components of NVIDIA Merlin: NVTabular for preprocessing and Triton Inference Server for deployment.

Highlights

  • Winning and SOTA solution: We have leveraged and evolved the Transformers4Rec library to win two recent session-based recommendation competitions: the WSDM WebTour Workshop Challenge 2021, organized by Booking.com, and the SIGIR eCommerce Workshop Data Challenge 2021, organized by Coveo. Furthermore, we have carried out an extensive empirical evaluation of Transformers4Rec for session-based recommendation, which achieved higher accuracy than baseline algorithms, as published in our ACM RecSys'21 paper.

  • Flexibility: The building blocks are modularized and compatible with vanilla PyTorch modules and TF Keras layers. You can create custom architectures, for example with multiple towers, multiple heads/tasks, and losses.

  • Production-ready: Trained models can be exported and served with Triton Inference Server in a single pipeline that includes online feature preprocessing and model inference.

  • Leverages cutting-edge NLP research: Through the integration with HuggingFace Transformers, more than 64 Transformer architectures (and counting) are available to evaluate for your sequential and session-based recommendation task.

  • Support for multiple input features: HF Transformers supports only sequences of token IDs as input, since it was originally designed for NLP. Because RecSys datasets contain rich features, Transformers4Rec enables the use of HF Transformers with any type of sequential tabular data. The library uses a schema format to configure the input features and automatically creates the necessary layers (e.g. embedding tables, projection layers, and output layers based on the target) without requiring code changes to include new features. Interaction-level and sequence-level input features can be normalized and combined in configurable ways.

  • Seamless preprocessing and feature engineering: The integration with NVTabular provides common preprocessing ops for session-based recommendation and exports a dataset schema compatible with Transformers4Rec, so that input features can be configured automatically (see the preprocessing sketch after this list).
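
The snippet below is a minimal, hypothetical sketch of such an NVTabular workflow; the column names (session_id, item_id, timestamp), the aggregations, and the file paths are illustrative assumptions, not values taken from this README.

import nvtabular as nvt

# Encode item ids and group click-level interactions into per-session sequences.
# Hypothetical column names; adapt to your dataset.
item_id = ["item_id"] >> nvt.ops.Categorify()
groupby_features = (item_id + ["session_id", "timestamp"]) >> nvt.ops.Groupby(
    groupby_cols=["session_id"],
    sort_cols=["timestamp"],
    aggs={"item_id": ["list", "count"], "timestamp": ["first"]},
    name_sep="-",
)

workflow = nvt.Workflow(groupby_features)
workflow.fit_transform(nvt.Dataset("interactions.parquet")).to_parquet("./processed")
# The output directory contains the processed parquet files along with a schema file
# describing the features, which Transformers4Rec can consume to configure its inputs.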

GPU-accelerated pipeline for Sequential and Session-based recommendation using NVIDIA Merlin components

Quick tour

To train a model on a dataset, the first step is to provide the schema and use it to construct an input module. For session-based recommendation problems you typically want to use TabularSequenceFeatures, which merges context features with sequential features. Next, you provide the prediction task(s) (the tasks we provide out of the box are listed in the documentation). Then all that's left is to construct a transformer body and convert it into a model.

Here is the PyTorch version:

from transformers4rec import torch as tr

schema: tr.Schema = tr.data.tabular_sequence_testing_data.schema
# Or read schema from disk: tr.Schema().from_json(SCHEMA_PATH)
max_sequence_length, d_model = 20, 64

# Define input module to process tabular input-features
input_module = tr.TabularSequenceFeatures.from_schema(
    schema,
    max_sequence_length=max_sequence_length,
    continuous_projection=d_model,
    aggregation="concat",
    masking="causal",
)
# Define one or multiple prediction-tasks
prediction_tasks = tr.NextItemPredictionTask()

# Define a transformer-config, like the XLNet architecture
transformer_config = tr.XLNetConfig.build(
    d_model=d_model, n_head=4, n_layer=2, total_seq_length=max_sequence_length
)
model: tr.Model = transformer_config.to_torch_model(input_module, prediction_tasks)
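
From here the model can be trained with the library's HuggingFace-style Trainer. The following is only a minimal sketch: the training-argument values and the train.parquet / valid.parquet paths are placeholders, not values taken from this README.

from transformers4rec.config.trainer import T4RecTrainingArguments
from transformers4rec.torch import Trainer

# Placeholder training arguments; tune batch size, epochs, etc. for your data
training_args = T4RecTrainingArguments(
    output_dir="./tmp",
    max_sequence_length=max_sequence_length,
    per_device_train_batch_size=128,
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=training_args, schema=schema, compute_metrics=True)
trainer.train_dataset_or_path = "train.parquet"  # placeholder path
trainer.eval_dataset_or_path = "valid.parquet"   # placeholder path
trainer.train()
eval_metrics = trainer.evaluate()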

And here is the equivalent model definition for TensorFlow:

from transformers4rec import tf as tr

schema: tr.Schema = tr.data.tabular_sequence_testing_data.schema
# Or read schema from disk: tr.Schema().from_json(SCHEMA_PATH)
max_sequence_length, d_model = 20, 64

# Define input module to process tabular input-features
input_module = tr.TabularSequenceFeatures.from_schema(
    schema,
    max_sequence_length=max_sequence_length,
    continuous_projection=d_model,
    aggregation="concat",
    masking="causal",
)
# Define one or multiple prediction-tasks
prediction_tasks = tr.NextItemPredictionTask()

# Define a transformer-config, like the XLNet architecture
transformer_config = tr.XLNetConfig.build(
    d_model=d_model, n_head=4, n_layer=2, total_seq_length=max_sequence_length
)
model: tr.Model = transformer_config.to_tf_model(input_module, prediction_tasks)
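
On the TensorFlow side the resulting model is a regular Keras model, so it can be compiled and fit as usual. A minimal sketch, assuming train_ds is a placeholder for a dataloader or tf.data.Dataset that yields batches of the tabular sequence features described by the schema:

# train_ds is assumed to yield batches of the schema's tabular sequence features
model.compile(optimizer="adam", run_eagerly=True)
history = model.fit(train_ds, epochs=3)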

Use cases

Sequential and Session-based recommendation

Traditional recommendation algorithms usually ignore the temporal dynamics and the sequence of interactions when trying to model user behaviour. Generally, the next user interaction is related to the sequence of the user's previous choices; in some cases, it might even be a repeated purchase or song play. User preferences are also subject to interest drift, as they can change over time. These challenges are addressed by the sequential recommendation task. A special case of sequential recommendation is the session-based recommendation task, where you only have access to the short sequence of interactions within the current session. This is very common in online services like e-commerce, news, and media portals, where the user might choose to browse anonymously (and, due to GDPR compliance, no cookies are collected) or might simply be a new user. This task is also relevant for scenarios where users' interests change considerably over time depending on their context or intent, so leveraging the current session's interactions is more promising than older interactions for providing relevant recommendations.

To deal with sequential and session-based recommendation, many sequence learning algorithms previously applied in machine learning and NLP research have been explored for RecSys, based on k-Nearest Neighbors, Frequent Pattern Mining, Hidden Markov Models, Recurrent Neural Networks and, more recently, neural architectures using the self-attention mechanism and Transformer architectures.

Unlike Transformers4Rec, existing frameworks for such tasks are generally research-focused, accept only sequences of item IDs as input, and do not provide a modularized, scalable implementation for production usage.

Installation

Installing with pip

Transformers4Rec comes in two flavors: PyTorch and TensorFlow. It can optionally use the GPU-accelerated NVTabular dataloader, which is highly recommended. These components can be installed as optional extras of the pip package.

  • All
    pip install transformers4rec[all]
  • PyTorch
    pip install transformers4rec[torch,nvtabular]
  • TensorFlow
    pip install transformers4rec[tensorflow,nvtabular]

Installing with conda

conda install -c nvidia transformers4rec

Installing with Docker

The Transformers4Rec library comes pre-installed in the NVIDIA Merlin Docker containers, which are available from the NVIDIA NGC container registry in three variants:

| Container name | Container location | Functionality |
| --- | --- | --- |
| merlin-tensorflow-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-tensorflow-training | Transformers4Rec, NVTabular, TensorFlow, and the HugeCTR TensorFlow embedding plugin |
| merlin-pytorch-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-pytorch-training | Transformers4Rec, NVTabular, and PyTorch |
| merlin-inference | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-inference | Transformers4Rec, NVTabular, PyTorch, and Triton Inference Server |

To use these Docker containers, you'll first need to install the NVIDIA Container Toolkit to provide GPU support for Docker. You can use the NGC links referenced in the table above to obtain more information about how to launch and run these containers.

Feedback and Support

If you'd like to contribute to the library directly, see the CONTRIBUTING.md. We're particularly interested in contributions or feature requests for our feature engineering and preprocessing operations. To further advance our Merlin Roadmap, we encourage you to share all the details regarding your recommender system pipeline in this survey.

If you're interested in learning more about how Transformers4Rec works, see the Transformers4Rec documentation. We also provide API documentation that outlines the specifics of the available modules and classes within the Transformers4Rec library.

Comments
  • Multi-GPU training with DP and DDP documentation

    Fixes #492 Fixes #488

    Goals :soccer:

    • Document how to use DataParallel and DistributedDataParallel multi-GPU training.
    • Demonstrate the potential performance benefit of DataParallel and DistributedDataParallel multi-GPU training by presenting experiment results.

    Implementation Details :construction:

    • Experiments were done on a node with 2 Tesla V100-SXM2-32GB-LS GPUs using T4Rec after changes included in #496. The scripts used were the ones included in ci/test_integration.sh.

    Testing Details :mag:

    • Refer to the documentation content to use DataParallel or DistributedDataParallel mode.
    documentation status/needs-review 
    opened by nzarif 24
  • Add MRR to ranking metrics module

    Fixes https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/86

    Goals :soccer:

    Add support for mean reciprocal rank

    Implementation Details :construction:

    I followed the template of NDCG to implement this.

    Testing Details :mag:

    Single basic unit test.

    opened by murphp15 24
  • Standardize prediction tasks' outputs

    Fixes #544

    • All prediction tasks return the same output format:
      • During training and evaluation: the output is a dictionary with three elements: {"loss":torch.tensor, "labels": torch.tensor, "predictions": torch.tensor}

      • During inference: The output is the tensor of predictions.

    Goals :soccer:

    This refactoring includes 4 parts:

    • Update base PredictionTask class:
    • Update Head and Model classes to support the new convention in their forward method call + calculate_metrics
    • Update the fit method [in progress]: the loss is computed inside the forward call + add a flag compute_metrics=True to control whether to compute metrics during training. Replace the compute_loss call loss = self.compute_loss(x, y) with:
    outputs = self(x, y, training=True)
    loss = outputs['loss']
    if compute_metrics:
        self.calculate_metrics(outputs['predictions'], outputs['labels'], mode='train', forward=False, call_body=False)
    
    • Update the failing unit tests

    Testing Details :mag:

    Run the torch unit tests and you will see they pass. You can also try with integration tests.

    enhancement area/api breaking 
    opened by nzarif 18
  • [FEA] Model inference locally

    🚀 Feature request

    I wanted to check if it is currently possible to perform model inference locally instead of setting up Triton Server. If yes, where is the relevant documentation currently available?

    Motivation

    • Currently, whenever I build a new model, I have to deploy it on Triton Server. Just to check the model's performance on a batch of 10-20 sessions for testing, I wanted to check if there is a way to see results by running inference locally.
    • Secondly, to set up Triton Server, there's a need to download and set up the nvcr.io/nvidia/merlin/merlin-inference:21.09 Docker image. Is there a workaround to run inference locally?
    status/needs-triage 
    opened by Ahanmr 17
  • [QST] anyone got success using custom container on vertex ai?

    ❓ Questions & Help

    Details

    I tried creating a custom container on Vertex AI to run my T4R notebooks.

    My Dockerfile is as follows:

    FROM nvcr.io/nvidia/merlin/merlin-inference:21.11
    EXPOSE 8080
    EXPOSE 8888
    ENTRYPOINT jupyter lab --ip 0.0.0.0 --port 8080 --allow-root --no-browser --NotebookApp.token='' --NotebookApp.password=''

    I am getting a "jupyter status unhealthy" error and couldn't run the notebooks. Has anyone had any success creating a custom container to run the T4R library notebooks? Any hints or help much appreciated :)

    status/needs-triage 
    opened by arunslb123 16
  • Support to pre-trained embeddings initializer (trainable or not)

    Fixes #267 Relates to #471, #475 , #485 , and RMP #211

    Goals :soccer:

    Introduces the PretrainedEmbeddingsInitializer, which allows initializing an embedding table matrix with pre-trained weights and making it trainable or not. In collaboration with @angmc

    Implementation Details :construction:

    The signature is PretrainedEmbeddingsInitializer(weights_matrix, trainable=False). The weights_matrix argument expects a 2D matrix (numpy array, list, or torch tensor).

    Testing Details :mag:

    A test demonstrates how to use PretrainedEmbeddingsInitializer with trainable True and False.

    pre_trained_item_embeddings = np.random.rand(item_id_cardinality, embedding_dim)
    
    emb_module = tr.EmbeddingFeatures.from_schema(
            schema,
            embedding_dims={"item_id/list": embedding_dim},
            embeddings_initializers={
                "item_id/list": tr.PretrainedEmbeddingsInitializer(
                    pre_trained_item_embeddings, trainable=trainable
                ),
            },
        )
    
    enhancement 
    opened by gabrielspmoreira 14
  • T4rec refactor(Part 1)

    Goals :soccer:

    The final goal of these changes is to:

    1. Be able to use HF Trainer for training all types of tasks (i.e. BinaryClassificationTask and RegressionTask as well as NextItemPredictionTask) to leverage the multi-gpu support already implemented in HF Trainer.
    2. Standardize inference to help with integration with Triton.

    Implementation Details :construction:

    • Standardize the output of forward() functions within T4Rec. All Tasks, Heads and Models will return:
    1. A dictionary of {"loss":Tensor, "labels": Union[Tensor,Dict{"task_name":Tensor}], "predictions":Union[Tensor,Dict{"task_name":Tensor}]} during training and evaluation
    2. A Tensor during inference. To learn more, refer to this diagram.
    • Refactor T4Rec flags used within forward() functions of tasks, Heads and Models:
    1. Removed hf_format flag as it was not clear and not needed after standardizing the output format.
    2. Added testing and training flags to indicate whether we are doing training, evaluation, or inference. To see how these flags work, refer to the top half of this diagram.
    3. Removed ignore_masking flag because knowing the mode (train, eval or inference) with the help of training and testing flags means we can know if masking must be applied or ignored.
    • targets was added as an input argument to the forward() functions that will be called for BinaryClassificationTask and RegressionTask. So from now on, we can call forward() from the HF Trainer class to train these tasks, and there is no longer a need to call a custom compute_loss() function.

    Testing Details :mag:

    Go to CI integration tests and run the tests included there using the dataset already downloaded and unzipped by the shell script.

    enhancement status/work-in-progress 
    opened by nzarif 13
  • [QST]Stuck at DataLoader Step

    ❓ Questions & Help

    Details

    Hi,

    What does the input data look like? Currently, I have session-based data in a Pandas DataFrame. I am following the RecSys 21 tutorial and am currently stuck at the 8th cell of this notebook, at the data loading step. How do I go from having data in a Pandas DataFrame to being able to load it into the library for training?

    Note: At the moment, I don't have GPU access and I can't get cudf installed on my SageMaker Studio notebook. I want to execute Transformers4Rec on CPU.

    status/needs-triage 
    opened by sumitsidana 13
  • [DOC] Add Pytorch local inference to example notebooks

    Report incorrect documentation

    Only the Triton Inference Server option is explained in the end-to-end-session-based example notebook. However, we are also aware that PyTorch TIS has an open issue here. There is a closed issue related to local inference here, but it's more of a discussion than clean documentation with an example.

    Suggested fix for documentation

    Modify the end-to-end-session-based example here and include a valid/tested local inference/prediction approach with PyTorch.

    Describe the documentation you'd like

    How to use Transformers4Rec locally without the Triton Inference Server (TIS), to be able to work around bugs and issues with TIS until they are fixed.

    Steps taken to search for needed documentation

    Searched repo docs, examples, issues

    documentation question P2 
    opened by hosseinkalbasi 12
  • [BUG] Cannot install with conda/pip on Ubuntu 20.04

    Bug description

    Running conda install -c nvidia transformers4rec produces an error, while running the alternative command pip install transformers4rec[pytorch,nvtabular] produces a different error.

    Steps/Code to reproduce bug

    1. conda create -n trans4rec python=3.8
    2. conda activate trans4rec
    3. conda install -c nvidia transformers4rec
    4. pip install transformers4rec[pytorch,nvtabular]

    Expected behavior

    Transformers4Rec should be installed successfully.

    Environment details

    • Transformers4Rec version: N/A
    • Platform: Ubuntu 20.04
    • Python version: Python 3.8.13

    Additional context

    bug P1
    opened by future-xy 12
  • [QST] How/Where Is the Schema Generated?

    ❓ Questions & Help

    I'm curious whether the schema.pb file is generated manually or automatically. I have seen them in the demos, and it's not entirely clear to me how they are created (whether they are created in the notebooks).

    Details

    For example, how is the schema here generated?

    status/needs-triage 
    opened by zanussbaum 12
  • [BUG] Running example throws "RuntimeError: grad can be implicitly created only for scalar outputs"

    Bug description

    Trying to run the example throws the following exception when training is started

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    File <timed exec>:18
    
    File /usr/local/lib/python3.8/dist-packages/transformers/trainer.py:1316, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
       1314         tr_loss_step = self.training_step(model, inputs)
       1315 else:
    -> 1316     tr_loss_step = self.training_step(model, inputs)
       1318 if (
       1319     args.logging_nan_inf_filter
       1320     and not is_torch_tpu_available()
       1321     and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
       1322 ):
       1323     # if loss is nan or inf simply add the average of previous logged losses
       1324     tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
    
    File /usr/local/lib/python3.8/dist-packages/transformers/trainer.py:1867, in Trainer.training_step(self, model, inputs)
       1865     loss = self.deepspeed.backward(loss)
       1866 else:
    -> 1867     loss.backward()
       1869 return loss.detach()
    
    File /usr/local/lib/python3.8/dist-packages/torch/_tensor.py:402, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
        393 if has_torch_function_unary(self):
        394     return handle_torch_function(
        395         Tensor.backward,
        396         (self,),
       (...)
        400         create_graph=create_graph,
        401         inputs=inputs)
    --> 402 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    
    File /usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py:184, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
        180 inputs = (inputs,) if isinstance(inputs, torch.Tensor) else \
        181     tuple(inputs) if inputs is not None else tuple()
        183 grad_tensors_ = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))
    --> 184 grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
        185 if retain_graph is None:
        186     retain_graph = create_graph
    
    File /usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py:85, in _make_grads(outputs, grads, is_grads_batched)
         83 if out.requires_grad:
         84     if out.numel() != 1:
    ---> 85         raise RuntimeError("grad can be implicitly created only for scalar outputs")
         86     new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
         87 else:
    
    RuntimeError: grad can be implicitly created only for scalar outputs
    

    Steps/Code to reproduce bug

    1. docker pull nvcr.io/nvidia/merlin/merlin-pytorch:22.11
    2. Start docker container as described here: https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples
    3. Clone the repo into the docker container
    4. in the 2nd notebook comment out the data_loader_engine='merlin' because this doesn't work
    5. execute the first 2 notebooks in examples/getting-started-session-based

    Expected behavior

    The example runs through without crashing

    Environment details

    • Transformers4Rec version: 0.1.15
    • Platform: docker / ubuntu
    • Python version: 3.8.10
    • Huggingface Transformers version: 4.12.0
    • PyTorch version: 1.12.1 with GPU
    • Tensorflow version (GPU?): -
    • CUDA version: 11.7

    Additional context

    I commented out the data_loader_engine='merlin' param in the config because this throws an error that merlin was never registered with pytorch

    bug status/needs-triage 
    opened by LMKight 1
  • Fix multi-gpu documentation

    This PR fixes user warning and readme documentation about data partitions for multi GPU training. This addresses https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/550.

    documentation 
    opened by bbozkaya 1
  • Add docstrings and the parameter to `row_groups_per_part ` to the MerlinDataLoader class

    Fixes #550

    @bbozkaya ran different tests of repartitioning a parquet file (using pandas or cudf), and it seems that MerlinDataLoader always loads the dataset files with 1 partition even though we partition into multiple row groups when saving the parquet file (as recommended here). To take these partitions into account, we should pass the parameter row_groups_per_part=True to merlin.io.Dataset.

    Goals :soccer:

    • Add the parameter row_groups_per_part to MerlinDataLoader so as to load the dataset with the correct partitions.
    • Add docstrings to the MerlinDataLoader to explain the different parameters.
    • Add a user warning to ensure that the dataset's number of partitions is divisible by the number of GPUs for DDP training. This is needed to ensure optimal performance by distributing the data equally among the available GPUs.
    enhancement Multi-GPU 
    opened by sararb 1
  • [BUG] Bugs in examples/tutorial

    Bug description

    Bug 1: In examples/tutorial/03-Session-based-recsys.ipynb, section "3.2.4 Train XLNET with Side Information for Next Item Prediction", the cell that runs training fails.

    Log with stack trace
    ***** Running training *****
      Num examples = 112128
      Num Epochs = 3
      Instantaneous batch size per device = 256
      Total train batch size (w. parallel, distributed & accumulation) = 256
      Gradient Accumulation steps = 1
      Total optimization steps = 1314
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    File <timed exec>:15
    
    File /usr/local/lib/python3.8/dist-packages/transformers/trainer.py:1316, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
       1314         tr_loss_step = self.training_step(model, inputs)
       1315 else:
    -> 1316     tr_loss_step = self.training_step(model, inputs)
       1318 if (
       1319     args.logging_nan_inf_filter
       1320     and not is_torch_tpu_available()
       1321     and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
       1322 ):
       1323     # if loss is nan or inf simply add the average of previous logged losses
       1324     tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
    
    File /usr/local/lib/python3.8/dist-packages/transformers/trainer.py:1849, in Trainer.training_step(self, model, inputs)
       1847         loss = self.compute_loss(model, inputs)
       1848 else:
    -> 1849     loss = self.compute_loss(model, inputs)
       1851 if self.args.n_gpu > 1:
       1852     loss = loss.mean()  # mean() to average on multi-gpu parallel training
    
    File /usr/local/lib/python3.8/dist-packages/transformers/trainer.py:1881, in Trainer.compute_loss(self, model, inputs, return_outputs)
       1879 else:
       1880     labels = None
    -> 1881 outputs = model(**inputs)
       1882 # Save past state if it exists
       1883 # TODO: this needs to be fixed and made cleaner later.
       1884 if self.args.past_index >= 0:
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/trainer.py:830, in HFWrapper.forward(self, *args, **kwargs)
        828 def forward(self, *args, **kwargs):
        829     inputs = kwargs
    --> 830     return self.wrapper_module(inputs, *args)
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/model/base.py:553, in Model.forward(self, inputs, training, **kwargs)
        550 outputs = {}
        551 for head in self.heads:
        552     outputs.update(
    --> 553         head(inputs, call_body=True, training=training, always_output_dict=True, **kwargs)
        554     )
        556 if len(outputs) == 1:
        557     outputs = outputs[list(outputs.keys())[0]]
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/model/base.py:398, in Head.forward(self, body_outputs, training, call_body, always_output_dict, ignore_masking, **kwargs)
        395 outputs = {}
        397 if call_body:
    --> 398     body_outputs = self.body(body_outputs, training=training, ignore_masking=ignore_masking)
        400 for name, task in self.prediction_task_dict.items():
        401     outputs[name] = task(
        402         body_outputs, ignore_masking=ignore_masking, training=training, **kwargs
        403     )
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/config/schema.py:50, in SchemaMixin.__call__(self, *args, **kwargs)
         47 def __call__(self, *args, **kwargs):
         48     self.check_schema()
    ---> 50     return super().__call__(*args, **kwargs)
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/block/base.py:152, in SequentialBlock.forward(self, input, training, ignore_masking, **kwargs)
        150 elif "training" in inspect.signature(module.forward).parameters:
        151     if "ignore_masking" in inspect.signature(module.forward).parameters:
    --> 152         input = module(input, training=training, ignore_masking=ignore_masking)
        153     else:
        154         input = module(input, training=training)
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/config/schema.py:50, in SchemaMixin.__call__(self, *args, **kwargs)
         47 def __call__(self, *args, **kwargs):
         48     self.check_schema()
    ---> 50     return super().__call__(*args, **kwargs)
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/tabular/base.py:390, in TabularModule.__call__(self, inputs, pre, post, merge_with, aggregation, *args, **kwargs)
        387 inputs = self.pre_forward(inputs, transformations=pre)
        389 # This will call the `forward` method implemented by the super class.
    --> 390 outputs = super().__call__(inputs, *args, **kwargs)  # noqa
        392 if isinstance(outputs, dict):
        393     outputs = self.post_forward(
        394         outputs, transformations=post, merge_with=merge_with, aggregation=aggregation
        395     )
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/features/sequence.py:257, in TabularSequenceFeatures.forward(self, inputs, training, ignore_masking, **kwargs)
        254     outputs = self.aggregation(outputs)
        256 if self.projection_module:
    --> 257     outputs = self.projection_module(outputs)
        259 if self.masking and (not ignore_masking or training):
        260     outputs = self.masking(
        261         outputs, item_ids=self.to_merge["categorical_module"].item_seq, training=training
        262     )
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/config/schema.py:50, in SchemaMixin.__call__(self, *args, **kwargs)
         47 def __call__(self, *args, **kwargs):
         48     self.check_schema()
    ---> 50     return super().__call__(*args, **kwargs)
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/block/base.py:148, in SequentialBlock.forward(self, input, training, ignore_masking, **kwargs)
        146 if i == len(self) - 1:
        147     filtered_kwargs = filter_kwargs(kwargs, module, filter_positional_or_keyword=False)
    --> 148     input = module(input, **filtered_kwargs)
        150 elif "training" in inspect.signature(module.forward).parameters:
        151     if "ignore_masking" in inspect.signature(module.forward).parameters:
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/config/schema.py:50, in SchemaMixin.__call__(self, *args, **kwargs)
         47 def __call__(self, *args, **kwargs):
         48     self.check_schema()
    ---> 50     return super().__call__(*args, **kwargs)
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/transformers4rec/torch/block/base.py:156, in SequentialBlock.forward(self, input, training, ignore_masking, **kwargs)
        154             input = module(input, training=training)
        155     else:
    --> 156         input = module(input)
        158 return input
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
       1182 # If we don't have any hooks, we want to skip the rest of the logic in
       1183 # this function, and just call forward.
       1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1185         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1186     return forward_call(*input, **kwargs)
       1187 # Do not call functions when jit is used
       1188 full_backward_hooks, non_full_backward_hooks = [], []
    
    File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
        113 def forward(self, input: Tensor) -> Tensor:
    --> 114     return F.linear(input, self.weight, self.bias)
    
    RuntimeError: expected scalar type Float but found Double
    

    I believe this is because the product_recency_days_log_norm-list_seq created in the prior notebook (02-ETL-with-NVTabular) is float64 rather than float32. I was able to get things to run by adding >> nvt.ops.ReduceDtypeSize() to the cell where that feature is defined in the prior notebook, section 5.3. I'm not sure if this is the correct fix though.

    Bug 2

    The XLNet-MLM with side information accuracy results that get written to results.txt in 03-Session-based-recsys should have the metric name and value separated by : rather than a space. Metrics from the other two models trained in the notebook are written correctly. This causes the call to create_bar_chart('results.txt') to fail.

    Easy fix,

    with open("results.txt", 'a') as f:
        f.write('\n')
        f.write('XLNet-MLM with side information accuracy results:')
        f.write('\n')
        for key, value in  model.compute_metrics().items(): 
            f.write('%s %s\n' % (key, value.item()))
    

    should have f.write('%s:%s\n' % (key, value.item())) in the last line.

    Steps/Code to reproduce bug

    Run the tutorial notebooks.

    Expected behavior

    Environment details

    Google Cloud Workbench managed notebook with image version nvcr.io/nvidia/merlin/merlin-pytorch:22.11

    Machine info: a2-highgpu-1g (Accelerator Optimized: 1 NVIDIA Tesla A100 GPU, 12 vCPUs, 85GB RAM)

    I'm using the versions of the example notebooks that are available in the image.

    • Transformers4Rec version: 0.1.15
    • Platform: Google Cloud Workbench managed notebook, image nvcr.io/nvidia/merlin/merlin-pytorch:22.11, Machine type a2-highgpu-1g (Accelerator Optimized: 1 NVIDIA Tesla A100 GPU, 12 vCPUs, 85GB RAM),
    • Python version: 3.8.10
    • Huggingface Transformers version: 4.12.0
    • PyTorch version (GPU?): 1.13.0a0+d321be6
    • Tensorflow version (GPU?):

    Additional context

    bug status/needs-triage 
    opened by lendle 0
  • Small fixes in getting-started ETL and training notebooks and fix tuple error in serving notebook

    This PR:

    • fixes/sets data paths and env variables for certain args in the notebooks
    • fixes the error due to returned tuple from Merlin dataloader in the 03 notebook.
    • adds a unit test for the 01 and 02 notebooks using testbook.

    Note: if we want a unit test for the serving notebook, we would need Merlin Systems here. Currently we don't couple TF4Rec to Systems, which is why I removed that part from the unit test code. The decision is to do that in the Merlin repo instead of the TF4Rec repo.

    P0 area/tests chore 
    opened by rnyak 2
  • [QST]IndexError: too many indices for tensor of dimension 2

    ❓ Questions & Help

    As part of an exploration, I'm running the code for the Transformers4Rec model built on synthetic data [https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/getting-started-session-based/02-session-based-XLNet-with-PyT.ipynb]

    It throws an exception while running the code shown in the attached screenshots. Based on my understanding, the error occurs at the evaluation step, where the input file is not in the required dimension.

    Details

    This is my path for the train, valid, and test files (see attached screenshot).

    status/needs-triage 
    opened by DilipKumar3 1
Releases(v0.1.15)
  • v0.1.15(Nov 22, 2022)

    What's Changed

    🐜 Bug Fixes

    • Fix failing ci error related to sparse_names containing features that are not part of the model's schema @sararb (#541)
    • Fix dtype mismatch in CLM masking class due to new data loader changes @sararb (#539)
    • Fix CI test based on the requirements of the new merlin loader @sararb (#536)
    • quick fix: apply masking when training next item prediction @nzarif (#514)

    🚀 Features

    • Add save/load & input/output schema methods to T4Rec Model class @sararb (#507)

    📄 Documentation

    • Add docs requirements to extras list in setup.py @oliverholworthy (#533)
    • Add multi-gpu training example for T4Rec PyTorch @bbozkaya (#521)

    🔧 Maintenance

    • Add lint workflow to run pre-commit on all files @oliverholworthy (#545)
    • Specify packages to look for in setup.py to avoid publishing tests @oliverholworthy (#529)
    • Cleanup tensorflow dependencies @oliverholworthy (#530)
    • Fix failing ci error related to sparse_names containing features that are not part of the model's schema @sararb (#541)
    • Fix CI test based on the requirements of the new merlin loader @sararb (#536)
    • Add docs requirements to extras list in setup.py @oliverholworthy (#533)
    • Remove stale documentation reviews @mikemckiernan (#531)
    • run github action tests and lint via tox, with upstream deps installed @nv-alaiacano (#527)
    • Specify output dtype for Normalize op in ETL example to match model expectations @oliverholworthy (#523)
    • Fix name and bug in MeanReciprocalRankAt @rnyak (#522)
    • Update mypy version to match version in pre-commit-config @oliverholworthy (#517)
  • v0.1.14(Oct 24, 2022)

  • v0.1.13(Sep 26, 2022)

  • v0.1.12(Sep 6, 2022)

    What's Changed

    🚀 Features

    • Update nv logo in the notebooks @rnyak (#482)
    • Update getting started ETL notebook to generate schema file from nvt and training nb to read the schema file @rnyak (#471)
    • Make the model traceable with Torchscript @edknv (#469)
    • Fix sparse_max dict in the export_pytorch_ensemble() func @rnyak (#468)
    • fix tutorial ETL pipeline @rnyak (#467)
    • Small fixes in the example notebooks @rnyak (#462)

    📄 Documentation

    • Update nv logo in the notebooks @rnyak (#482)
    • Second pass for removing mention of TensorFlow @mikemckiernan (#479)
    • Remove mention of TensorFlow @mikemckiernan (#474)

    🔧 Maintenance

    • Update processing csv file text in tutorial nb @rnyak (#481)
    • Update versioneer from 0.20 to 0.23 @oliverholworthy (#472)
  • v0.1.11(Jul 19, 2022)

    What's Changed

    🐜 Bug Fixes

    • Change the metric names prefix to align with the HF trainer code. @sararb (#454)

    🚀 Features

    • Add the support of prediction step to the Trainer class @sararb (#436)
    • Add PostContextFusion block to support Latent Cross technique @sararb (#444)
    • support sequential binary task @sararb (#434)

    📄 Documentation

    • Update the conda install command @mikemckiernan (#445)

    🔧 Maintenance

    • Integration test data path replacement @jperez999 (#457)
    • Remove unnecessary docs dependencies @benfred (#458)
    • Don't git pull origin main in unit and integration tests, use container version @karlhigley (#455)
    • Move Tensorflow code to tensorflow branch @karlhigley (#448)
    • Update requirement on nvidia-dllogger to follow install instructions @karlhigley (#450)
    • Set INPUT_DATA_DIR env var to /tmp/data in notebook tests @karlhigley (#449)
  • v0.1.10(Jun 16, 2022)

  • v0.1.9(Jun 15, 2022)

    What's Changed

    🐜 Bug Fixes

    • fix: Enable tests to succeed @mikemckiernan (#416)

    📄 Documentation

    • Add common release-drafter configuration @mikemckiernan (#415)
    • Improve TOC navigation @mikemckiernan (#413)

    🔧 Maintenance

    • Add a GA workflow that requires labels on PR's @benfred (#431)
    • Use shared implementation of triage workflow @benfred (#430)
    • Request that PRs are labeled @mikemckiernan (#419)
  • v0.1.8(May 10, 2022)

  • v0.1.7(Apr 6, 2022)

    What's Changed

    • Update reqs by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/368
    • Add MRR to ranking metrics module by @murphp15 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/354
    • Updates Container testing by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/379
    • chore: Add docs preview to PRs by @mikemckiernan in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/382
    • fixes for workflow and model triton config creation by @jperez999 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/388
    • docs: Add nightly multi-version build by @mikemckiernan in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/390
    • Set click<8.1.0 by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/396
    • Fix test by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/397
    • docs: Add a redirect page by @mikemckiernan in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/394
    • Allow the test_schocastic_swap_noise tests to fail by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/401
    • Remove pinned keras version by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/398
    • Automate pushing package to pypi by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/402

    New Contributors

    • @murphp15 made their first contribution in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/354
    • @mikemckiernan made their first contribution in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/382

    Full Changelog: https://github.com/NVIDIA-Merlin/Transformers4Rec/compare/v0.1.6...v0.1.7

  • v0.1.6(Mar 3, 2022)

    What's Changed

    • Exit tests if error by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/366

    Full Changelog: https://github.com/NVIDIA-Merlin/Transformers4Rec/compare/v0.1.4...v0.1.6

  • v0.1.5(Feb 2, 2022)

    What's Changed

    • fix tf import dependency by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/344
    • Ci fix by @jperez999 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/346
    • Add new issues to the backlog project by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/347
    • Initial Blossom CI by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/349
    • Update requirements by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/351

    Full Changelog: https://github.com/NVIDIA-Merlin/Transformers4Rec/compare/v0.1.3...v0.1.5

  • v0.1.4(Jan 11, 2022)

    What's Changed

    • fix tf import dependency by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/344
    • Ci fix by @jperez999 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/346
    • Add new issues to the backlog project by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/347
    • Initial Blossom CI by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/349
    • Update requirements by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/351

    Full Changelog: https://github.com/NVIDIA-Merlin/Transformers4Rec/compare/v0.1.3...v0.1.4

  • v0.1.3(Dec 7, 2021)

    What's Changed

    • Refactor RankingMetric to fix serialization and graph-mode by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/308
    • Adding missing DLLogger requirement for the paper reproducibility example by @gabrielspmoreira in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/315
    • Add codespell to CI by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/316
    • Disable stochastic swap noise during eval by @gabrielspmoreira in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/311
    • CI working by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/326
    • Fix a bug in GPU evaluation with PyArrow dataloader by @WoosukKwon in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/325
    • Fix save/load tf4rec model by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/317
    • Quick fix of the value returned by fit_and_evaluate by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/334
    • Fixes the OOM error when running all unit tests in CI env and GPU enabled machine by @gabrielspmoreira in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/336
    • Getting-started notebook with CLM task by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/332
    • Tf end-to-end example notebook by @rnyak in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/341
    • Integration testing infrastructure by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/343
    • Ci fix by @jperez999 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/345

    New Contributors

    • @WoosukKwon made their first contribution in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/325
    • @jperez999 made their first contribution in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/345

    Full Changelog: https://github.com/NVIDIA-Merlin/Transformers4Rec/compare/v0.1.2...v0.1.3

  • v0.1.2(Nov 4, 2021)

    What's Changed

    • Fix conda version by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/253
    • Fix badges in README by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/254
    • Updating paper references in the documentation by @gabrielspmoreira in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/256
    • Fix pip install command for torch by @rnyak in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/260
    • Quick fix of compute_loss to be able to use fit method in tensorflow by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/275
    • Quick fix to be able to read schema outputted by NVTabular by @marcromeyn in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/274
    • Fix README link by @zanussbaum in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/259
    • Update nvtabular links to point to github.com/NVIDIA-Merlin by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/276
    • Add "continuous" tag in schema.pb in the examples folder by @rnyak in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/285
    • Training/eval fixes/improvements for RecSys paper reproducibility with the new API by @gabrielspmoreira in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/279
    • Spelling fixes by @benfred in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/287
    • New Example: Transformers4Rec paper reproducibility with the released Transformers4Rec PyTorch API by @gabrielspmoreira in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/289
    • add wipe_memory to example_utils and update notebooks by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/294
    • fix broken links in the examples README by @rnyak in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/296
    • Adds notebooks unit tests by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/293
    • Skips test if not torch by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/300
    • Hf update by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/301
    • Fixes notebook unittest by @albert17 in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/303
    • Add to_tf_model to T4RecConfig class by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/302
    • Refactor MaskSequence classes to fix serialization and graph-mode by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/307
    • refactor TransformerBlock for serialization by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/306
    • Refactor NextItremPredictionTask to fix serialization and graph-mode by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/309
    • fix error related to compatibility between keras and tf by @sararb in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/312

    New Contributors

    • @zanussbaum made their first contribution in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/259
    • @albert17 made their first contribution in https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/293

    Full Changelog: https://github.com/NVIDIA-Merlin/Transformers4Rec/compare/v0.1.1...v0.1.2
