private-transformers

Make differentially private training of transformers easy for everyone.

Overview

This codebase facilitates fast experimentation with differentially private training of Hugging Face transformers.


What is this? Why an extra codebase?

  • This codebase provides a privacy engine that builds on Opacus but integrates much more smoothly with Hugging Face's transformers library.
  • Additionally, we support the ghost clipping technique (see Section 4 of the preprint for how it works), which allows privately training large transformers with considerably reduced memory cost -- in many cases, almost as light as non-private training -- at a modest run-time overhead.
  • With this codebase, we have fine-tuned very large pretrained models, yielding some of the best-performing differentially private NLP models to date. Some of these models match the performance of strong non-private baselines. We see strong empirical evidence that highly performant DP NLP models can be built on modest datasets.

Installation

Make sure you have python>=3.8; run the following command:

pip install git+https://github.com/lxuechen/private-transformers.git

To check that the package is installed properly, run the test suite (requires pytest and a GPU) with the following command:

pytest -s tests

Usage

Basic usage

Privately training Hugging Face transformers with our codebase simply consists of 4 steps:

  1. Create your favourite transformer model and optimizer; attach this optimizer to a PrivacyEngine
  2. Compute a per-example loss (1-D tensor) for a mini-batch of data
  3. Pass the loss to optimizer.step or optimizer.virtual_step as a keyword argument
  4. Repeat from step 2

Below is a quick example:

import transformers, torch
from private_transformers import PrivacyEngine
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = transformers.GPT2LMHeadModel.from_pretrained('distilgpt2').to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
privacy_engine = PrivacyEngine(
    model,
    batch_size=10,
    sample_size=50000,
    epochs=3,
    max_grad_norm=0.1,
    target_epsilon=3,
)
privacy_engine.attach(optimizer)

batch_size, seq_len = 10, 20
# Inputs are in batch-first format, i.e., the first dimension of tensors must be the batch dimension.
input_ids = torch.randint(size=[batch_size, seq_len], low=0, high=100, device=device)
# Calling `.train()` is very important; otherwise underlying forward and backward hooks don't run.
model.train()
outputs = model(input_ids=input_ids, return_dict=True)
# Shift labels and logits for next-token prediction; `F.cross_entropy` expects the class dim at index 1.
labels = input_ids[:, 1:]
logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
# `loss` is a 1-D tensor of shape (batch_size,).
loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
# This step is different from existing workflows: 
#   Don't call `loss.backward`; leave it to `optimizer.step` to handle backward.
optimizer.step(loss=loss)

The biggest differences compared to Opacus are:

  • We require the per-example loss (a 1-D tensor) to be passed into optimizer.step (or optimizer.virtual_step for gradient accumulation; a sketch follows this list).
  • The per-example loss must be passed in as a keyword argument.
  • loss.backward() shouldn't be called on the user end; it's called internally in optimizer.step (or optimizer.virtual_step).
  • Inputs should be in batch-first format; there isn't a toggle to switch between different formats in the engine.
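
When the logical batch doesn't fit into memory, the per-example losses of smaller micro-batches can be fed to optimizer.virtual_step, with optimizer.step reserved for the final micro-batch. Below is a minimal gradient-accumulation sketch; it assumes the model, optimizer, privacy engine, and input_ids from the basic example above, and the split into two micro-batches is purely illustrative.

micro_batches = input_ids.chunk(2, dim=0)  # split the logical batch into micro-batches
for i, micro_ids in enumerate(micro_batches):
    outputs = model(input_ids=micro_ids, return_dict=True)
    labels = micro_ids[:, 1:]
    logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
    loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
    if i + 1 < len(micro_batches):
        # Accumulate clipped per-example gradients; no parameter update yet.
        optimizer.virtual_step(loss=loss)
    else:
        # Final micro-batch: the actual (noised) update happens here.
        optimizer.step(loss=loss)

Here the engine's batch_size (10 in the example) is taken to be the full logical batch, not the micro-batch size.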

Ghost clipping: memory-saving differentially private learning

Turning on ghost clipping requires changing only one line. You should notice a drastic reduction in peak GPU memory usage once it is turned on, at the potential cost of slower training. This is especially useful when you are constrained to older GPUs with limited VRAM or are fitting very large models. (A rough way to check the memory savings yourself follows the snippet below.)

import transformers, torch
from private_transformers import PrivacyEngine

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = transformers.GPT2LMHeadModel.from_pretrained('distilgpt2').to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
privacy_engine = PrivacyEngine(
    model,
    batch_size=10,
    sample_size=50000,
    epochs=3,
    max_grad_norm=0.1,
    target_epsilon=3,
    ghost_clipping=True,  # The only change you need to make!
)
privacy_engine.attach(optimizer)
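
To gauge the memory savings on your own setup, you can compare peak GPU memory with and without ghost_clipping=True using PyTorch's built-in memory statistics. The sketch below is a rough check rather than a rigorous benchmark; it assumes a CUDA device and reuses the model, optimizer, input_ids, and loss computation from the basic example above, and the exact numbers depend on the model, batch size, and GPU.

torch.cuda.reset_peak_memory_stats(device)
model.train()
outputs = model(input_ids=input_ids, return_dict=True)
labels = input_ids[:, 1:]
logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
optimizer.step(loss=loss)
# Peak memory for this step; rerun with ghost_clipping toggled to compare.
print(f"peak GPU memory: {torch.cuda.max_memory_allocated(device) / 2 ** 20:.0f} MiB")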

We ran stringent numerical tests to ensure the double-backward implementation is correct. Check out files in the tests folder for more on this.

Examples

Code in the examples folder roughly reproduces our results for the table-to-text and classification tasks. There may be some minor discrepancies, since hyperparameters there aren't exactly what's used in the paper. Nevertheless, it should be sufficient to get things started. Detailed instructions are in the readme file of each subfolder.

Currently supported Hugging Face models

Not all models in the Hugging Face library are supported. The main additional work here is to

  1. support per-example gradients for bespoke modules (e.g., T5LayerNorm; a toy illustration of what this means follows this list), and
  2. ensure position_ids are repeated.
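
To make point 1 concrete, here is a toy sketch of what a per-example gradient means for a T5-style layer norm: the gradient of each example's loss with respect to the module's shared weight. The module below is written for illustration only, and the per-example loop is the naive reference; it is not the library's internal implementation, which computes these quantities efficiently via hooks.

import torch
import torch.nn as nn

class T5StyleLayerNorm(nn.Module):
    """RMS norm without mean centering or bias, in the style of T5LayerNorm (illustrative)."""
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x):
        variance = x.pow(2).mean(dim=-1, keepdim=True)
        return self.weight * x * torch.rsqrt(variance + self.eps)

layer = T5StyleLayerNorm(hidden_size=8)
x = torch.randn(4, 5, 8)  # (batch, seq_len, hidden)

# Naive reference: one autograd call per example gives the gradient of that
# example's loss w.r.t. the shared weight, i.e., the per-example gradient.
per_example_grads = []
for i in range(x.shape[0]):
    loss_i = layer(x[i:i + 1]).sum()
    (grad_i,) = torch.autograd.grad(loss_i, layer.weight)
    per_example_grads.append(grad_i)
per_example_grads = torch.stack(per_example_grads)  # shape: (batch_size, hidden_size)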

We plan to support more models in the future if there's a need. Feel free to open an issue if you want to try out specific models that aren't in the current list.

FAQ

I wrote some answers to potential questions here.

Acknowledgements

It would have been impossible to develop this codebase without cool past works and existing codebases. We roughly follow the PrivacyEngine design in Opacus==0.13.0. We directly use an off-the-shelf package for tightly tracking tradeoff functions while composing multiple private mechanisms.

Disclaimer

  • This codebase is not yet production-grade, e.g., cryptographically secure PRNGs are required for sampling noise -- our codebase currently does not use these strong PRNGs. This codebase also isn't immune to floating point representation attacks.
  • This codebase was born out of the need to experiment with various things for differentially private NLP in rapid succession. I've tried my best to write clean code, though parts of this codebase may be less tidy than I had hoped given the extremely tight timeline.

Citation

If you found this codebase useful in your research, please consider citing:

@misc{li2021large,
      title={Large Language Models Can Be Strong Differentially Private Learners}, 
      author={Xuechen Li and Florian Tramèr and Percy Liang and Tatsunori Hashimoto},
      year={2021},
      eprint={2110.05679},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • Support BART model

    Hi, I'm trying to apply your code in BART model. But I got the error like the below:

    ValueError: Ghost clipping does not support parameter sharing. Parameter sharing may be due to default parameter sharing between lm_head and embedding.Please use a model without parameter sharing for ghost clipping.
    

    Does it not support the BART model yet?

    opened by SeolhwaLee 7
  • Set another seed won't change the result

    Hi Xuechen,

    I have another issue, this time with the training seed. I would like to vary the random seed so that I can get some statistical results. I tried many different ways, but even after commenting out the set_seed() function, the eval acc is the same down to the last digit. May I ask how to vary the random seed? I'm doing experiments on examples/classification.

    Thanks!

    opened by JunyiZhu-AI 6
  • Customize loss function / adding regularizer under privacy setting?

    Hi, thanks again for the great work and the codebase!

    I have a question -- how would I customize the loss function in this codebase? I've been trying to do that, e.g., adding a per-example L1 regularization term to vector_loss in the trainer, but I didn't manage to get it running after several attempts.

    There's a related discussion/PR in Opacus codebase https://github.com/pytorch/opacus/issues/249.

    However, there are a few tricky things I can see:

    • In private-transformers, backward() behavior is not managed on the user end.
    • Also, a 1-D vector_loss is required for the private gradient update (optimizer.step or optimizer.virtual_step).

    My intuition is that I can add to vector_loss (per-example loss) at this line before the loss gets passed to the privacy engine.
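
    A hypothetical sketch of the idea described above, reusing the names from the basic usage example (l1_coef is made up, and whether such a penalty interacts correctly with the privacy accounting is exactly the open question in this thread):

    l1_coef = 1e-4  # made-up coefficient, for illustration only
    ce_per_example = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)  # shape (batch_size,)
    l1_penalty = sum(p.abs().sum() for p in model.parameters())  # scalar, data-independent
    vector_loss = ce_per_example + l1_coef * l1_penalty  # broadcasts the same penalty to every example
    optimizer.step(loss=vector_loss)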

    However, I am afraid privacy is also a concern. I am aware that private-transformers overrides compute_loss() in the HF trainer to exclude regularization terms that might mess up the privacy accounting.

    Sorry my question is not super detailed, but I hope this makes sense, and I'd really appreciate any comments.

    Thank you!

    opened by shi-kejian 4
  • Using dataloader with fixed batch size

    Hi, thanks for providing this codebase!

    So for a while I've been using Opacus to experiment with DP-SGD and RoBERTa, but I wanted to check out your PrivacyEngine, mainly because of the training speed and memory optimizations. With Opacus, I always trained with their UniformWithReplacementSampler for accurate RDP accounting, and as far as I can tell, you're training with fixed-size batches in your examples. I'm wondering if there's a reason the UniformWithReplacementSampler isn't needed in your codebase anymore, and whether the uniform sampler is compatible with your modified PrivacyEngine, given that the optimizer would need to be able to deal with variations in batch size?

    opened by xplip 4
  • How to set max_compositions

    Hi Chen, do you know how to set the max_compositions/steps param? The default at https://github.com/lxuechen/private-transformers/blob/684e27fcd9978539fbabc357c7ea506c0353c771/private_transformers/privacy_utils/privacy_engine.py#L148 is 0, but that raises an error:

    private-transformers/private_transformers/privacy_utils/accounting/gdp_accounting.py:33: RuntimeWarning: invalid value encountered in double_scalars
        return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)
    rv_accountant/accountant.py:55: RuntimeWarning: divide by zero encountered in double_scalars
        mesh_size = 2*eps_error / np.sqrt(2*max_compositions*np.log(2/eta0))
    
    opened by hlzhang109 3
  • Setting small target epsilon like 0.1 fails training

    Hi, @lxuechen I tried to set epsilon to 0.1 on SST-2, but it results in a large noise_multiplier (20853.95) and the training fails, with accuracy near 0.5. However, setting epsilon to 1 works well. Any idea about this?

    opened by LinkToPast1900 3
  • Private gradient seemingly has been overwritten by non-private gradient.

    Hi Xuechen, thanks for providing this codebase!

    I tried tweaking the code in examples/classification, but the network does not perform as expected. In particular, I tried zeroing out all gradients with this line in the _step() and _ghost_step() functions in privacy_engine.py:

    param.grad /= self.batch_size
    param.grad.mul_(0)
    

    After adding this multiplication, the network still trains as normal. And with the same seed, it even trains to the same eval acc. Could you reproduce this result in your private repo? If it behaves like this, then I suppose the private gradient has been overwritten by the non-private one.

    opened by JunyiZhu-AI 3
  • Questions about sigma search and epsilon from composed tradeoff functions

    (Making a new issue for this because you probably weren't notified of my comment in the closed original issue)

    Sorry for having to reopen this, but I do have two more (perhaps related) questions after all and would really appreciate it if you could help clarify them.

    1. When using the automated sigma search (based on a specified target epsilon and N epochs), the final epsilon computed by the PrivacyEngine after training for N epochs is always much higher than the target epsilon, so it seems that the sigma chosen by get_sigma_from_rdp is too high. This also happens when I run the sentence classification and the table2text examples in the repo. E.g., instead of my target epsilon 8, I will end up with something like epsilon 10-11. How did you get your final epsilon to match the target epsilon in the experiments in your paper?

    2. How do you compute the converted epsilon from composed tradeoff functions when, let's say, training SST-2 with the default hyperparameters from the examples? Do you reduce the num_compositions=1000 in _eps_from_glw to something way lower than 1000 because the script only runs for ~400 optimization steps and would otherwise always throw the "Numerical composition of tradeoff functions failed! Double check privacy parameters." error?

    Originally posted by @xplip in https://github.com/lxuechen/private-transformers/issues/7#issuecomment-987020758

    opened by xplip 3
  • What is the best way to handle large models?

    Hi all, I was trying to fine-tune GPT-J 6B, but I encounter out-of-memory errors if I use a single GPU. For non-private training I managed to solve them by using DeepSpeed, but it seems that I cannot use that with Opacus or with this codebase. Do you know how I could solve this problem? Thank you in advance :)

    opened by Pier297 2
  • No such file or directory

    I want to fine-tune on QQP, and I get the following error:

    File "/private-transformers-main/examples/classification/run_classification.py", line 545, in main train_dataset = FewShotDataset(data_args, tokenizer=tokenizer, mode="train", use_demo=use_demo) File "/private-transformers-main/examples/classification/src/dataset.py", line 377, in init with FileLock(lock_path): File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_api.py", line 214, in enter self.acquire() File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_api.py", line 170, in acquire self._acquire() File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_unix.py", line 35, in _acquire fd = os.open(self._lock_file, open_mode) FileNotFoundError: [Errno 2] No such file or directory: 'classification/data/original/QQP/cached_train_RobertaTokenizer_256_qqp_few_shot.lock'

    How can I get this file? Thanks.

    opened by trestad 2
  • [DistilBERT] RuntimeError: stack expects each tensor to be equal size

    Hi, @lxuechen, thanks for your repo.

    I met the following problem when I tried to fine-tune DistilBERT. Both BERT and RoBERTa work well. Any idea about this? Thanks!

    Traceback (most recent call last):
      ...
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 360, in step
        self._ghost_step(loss=kwargs.pop("loss"))
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 261, in _ghost_step
        self._ghost_helper(loss)
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 334, in _ghost_helper
        coef_sample = self.get_coef_sample()
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 348, in get_coef_sample
        norm_sample = self.get_norm_sample()
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 343, in get_norm_sample
        norm_sample = torch.stack([param.norm_sample for name, param in self.named_params], dim=0).norm(2, dim=0)
    RuntimeError: stack expects each tensor to be equal size, but got [50] at entry 0 and [1] at entry 1

    (50 is my batch size)

    opened by LinkToPast1900 1
  • v0.3.0 fixes

    Non-structural fixes.

    • [ ] Convert to make_private style to avoid bad syntax highlighting during static analysis
    • [ ] Improve the cleanliness of examples
    • [ ] Refactor test file and use functorch to simplify ground truth gradients' logic
    • [ ] Don't compute per-sample gradients for weights which don't require gradients
    • [ ] Use the new smart resizer for tokenizer and model
    • [ ] Refactor decoding to use new left padding based construction
    opened by lxuechen 0
Releases (v0.2.3)