FLSim is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API

Overview

Federated Learning Simulator (FLSim)

Federated Learning Simulator (FLSim) is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API. FLSim is domain-agnostic and accommodates many use cases such as computer vision and natural language. Currently, FLSim supports cross-device FL, where millions of client devices (e.g. phones) train a model collaboratively.

FLSim is scalable and fast. It supports differential privacy (DP), secure aggregation (secAgg), and a variety of compression techniques.

In FL, a model is trained collaboratively by multiple clients that each have their own local data, and a central server moderates training, e.g. by aggregating model updates from multiple clients.

In FLSim, developers only need to define a dataset, model, and metrics reporter. All other aspects of FL training are handled internally by the FLSim core library.


Library Structure

FLSim core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator aggregates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which compresses the messages between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.
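
For intuition, here is a minimal sketch of one synchronous round under this design, written as plain PyTorch pseudocode. The selector, the clients' train_locally method, and the in-memory "channel" are illustrative placeholders, not FLSim's actual classes:

import copy
import torch

def run_round(server_model, clients, selector, users_per_round):
    # Selector: pick this round's cohort of clients
    cohort = selector(clients, users_per_round)
    deltas = []
    for client in cohort:
        # Channel: send the current server model to the client
        local_model = copy.deepcopy(server_model)
        # Client: train locally with its own optimizer (e.g. SGD or FedProx)
        client.train_locally(local_model)
        deltas.append([lp.data - sp.data
                       for lp, sp in zip(local_model.parameters(),
                                         server_model.parameters())])
    # Aggregator: average the client updates once the round is complete
    avg_delta = [torch.stack(ds).mean(dim=0) for ds in zip(*deltas)]
    # Server optimizer: apply the aggregated update (a plain FedAvg step)
    with torch.no_grad():
        for p, d in zip(server_model.parameters(), avg_delta):
            p.add_(d)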

Installation

The latest release of FLSim can be installed via pip:

pip install flsim

You can also install directly from the source for the latest features (along with the occasional bug):

git clone https://github.com/facebookresearch/FLSim.git
cd FLSim
pip install -e .

Getting started

To implement a central training loop in the FL setting using FLSim, a developer simply performs the following steps:

  1. Build their own data pipeline to assign individual rows of training data to client devices (to simulate how data is distributed across client devices).
  2. Create a corresponding nn.Module model and wrap it in an FL model.
  3. Define a custom metrics reporter that computes and collects metrics of interest (e.g., accuracy) throughout training.
  4. Set the desired hyperparameters in a config.

Usage Example
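
Below is a minimal sketch of these steps. The config helpers (import flsim.configs and fl_config_from_json) appear in an issue later on this page; the model is an ordinary PyTorch module, while the data pipeline and metrics reporter are left as placeholders, since their exact FLSim interfaces are covered in the tutorials:

import torch.nn as nn

import flsim.configs  # registers FLSim's config schemas; required before building configs
from flsim.utils.config_utils import fl_config_from_json

# Step 2: a standard PyTorch module, which FLSim then wraps in an FL model.
class TwoLayerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)
        )

    def forward(self, x):
        return self.layers(x)

# Steps 1 and 3 (data pipeline, metrics reporter) are omitted; see the tutorials.
# Step 4: hyperparameters live in a config under the top-level "trainer" key.
json_config = {
    "trainer": {
        # e.g. epochs, users per round, client/server optimizer settings
    }
}
cfg = fl_config_from_json(json_config)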

Tutorials

For details, please refer to the tutorials we have prepared.

Examples

We have prepared runnable examples for two of the tutorials above:

Contributing

See CONTRIBUTING for how to contribute to this library.

License

This code is released under Apache 2.0, as found in the LICENSE file.

Comments
  • Bug Fix#36: fix imports in tests.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    Bug Fix#36: fix imports in tests.

    How Has This Been Tested (if it applies)

    pytest -ra is able to discover all tests now.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by ghaccount 8
  • Vr

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by JohnlNguyen 6
  • Move optimizer_test_utils to optimizers directory

    Summary: It is currently located in the top-level tests directory. However, the top-level tests directory does not really make sense, since each component is organized into its own dedicated directory; in that sense, optimizer_test_utils.py belongs in the optimizers directory. In this diff, we move the file there and fix the references.

    Differential Revision: D32241821

    CLA Signed fb-exported Merged 
    opened by jessemin 3
  • Does the backend handle Federated learning asynchronously?

    I found this repo from this blog: https://ai.facebook.com/blog/asynchronous-federated-learning/. However, I do not find any mention of it in this repo, and I cannot tell from the code examples whether this is the synchronous or the asynchronous version of federated learning. Can you please clarify this for me? Also, if this is the asynchronous version, how can I dive deeper into the library and look at the implementation of the async handling mechanism?

    Thank you

    opened by 111Kaushal 2
  • Remove test_pytorch_local_dataset_factory

    Summary: This test had been very flaky for over a year and has never been revived since then. Deleting it from the codebase.

    Differential Revision: D32415979

    CLA Signed fb-exported Merged 
    opened by jessemin 2
  • FedSGD with virtual batching

    🚀 Feature

    Motivation

    Create a memory-efficient client to run FedSGD. If a client has many examples, running FedSGD (taking the gradient of the model with respect to all of the client's data) can lead to OOM. In this PR, we fix this problem by processing the data in mini-batches while still calling optimizer.step once at the end of local training to simulate the effect of FedSGD.
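
    A minimal sketch of the idea (assuming loss_fn returns a per-batch mean; fedsgd_local_pass is an illustrative name, not FLSim's API):

    import torch

    def fedsgd_local_pass(model, loss_fn, dataloader, optimizer):
        # Accumulate gradients over mini-batches; take one step at the end.
        optimizer.zero_grad()
        n = len(dataloader.dataset)
        for x, y in dataloader:
            # Reweight the per-batch mean loss so the summed gradients
            # equal the gradient of the mean loss over all local data.
            loss = loss_fn(model(x), y) * (x.size(0) / n)
            loss.backward()
        optimizer.step()  # a single step == FedSGD on the full local dataset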

    opened by JohnlNguyen 0
  • Add Fednova as a benchmark

    Summary:

    What?

    Adding FedNova as a benchmark

    Why?

    FedNova is a well-known paper that fixes the objective inconsistency problem.

    Differential Revision: D34668291

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • Having to `import flsim.configs`  before creating config from json is unintuitive

    🚀 Feature

    This code works

    import flsim.configs  # <-- this import is required
    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    This code doesn't work

    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    Motivation

    Having to import flsim.configs is unintuitive and not clear from the user's perspective.

    Pitch

    Alternatives

    Additional context

    opened by JohnlNguyen 0
  • Fix sent140 example

    Summary:

    What?

    Fix the tutorial's word embedding to resolve the poor accuracy problem.

    Why?

    https://github.com/facebookresearch/FLSim/issues/34

    Differential Revision: D34149392

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • low test accuracy in Sentiment classification with LEAF's Sent140 tutorial?

    ❓ Questions and Help

    Until we move the questions to another medium, feel free to use this as your question:

    Question

    I tried this tutorial: https://github.com/facebookresearch/FLSim/blob/main/tutorials/sent140_tutorial.ipynb and the accuracy is less than random guessing (50%)!

    Any suggestions or approaches to improve accuracy for this tutorial?

    From the tutorial:

    Running (epoch = 1, round = 1, global round = 1) for Test
    (epoch = 1, round = 1, global round = 1), Loss/Test: 0.8683878255035598
    (epoch = 1, round = 1, global round = 1), Accuracy/Test: 49.61439588688946
    {'Accuracy': 49.61439588688946}

    opened by ghaccount 0
Releases (v0.1.0)
  • v0.0.1 (Dec 9, 2021)

    We are excited to announce the release of FLSim 0.0.1.

    Introduction

    How does one train a machine learning model without access to user data? Federated Learning (FL) is the technology that answers this question. In a nutshell, FL is a way for many users to collaboratively learn a machine learning model without sharing data. There are two scenarios for FL: cross-silo and cross-device. Cross-silo provides technologies for collaborative learning between a few large organizations with massive silo datasets. Cross-device provides collaborative learning between many small user devices with small local datasets. Cross-device FL, where millions or even billions of users cooperate on learning a model, is a much more complex problem and has attracted less attention from the research community. We designed FLSim to address the cross-device FL use case.

    Federated Learning at Scale

    Large-scale cross-device federated learning is an FL paradigm with several challenges that differentiate it from cross-silo FL: millions of clients coordinating with a central server, and training instability due to the large cohort problem. With these challenges in mind, we built FLSim to be scalable yet easy to use, and FLSim can scale to thousands of clients per round using only one GPU. We hope FLSim will equip researchers to tackle problems in federated learning at scale.


    Library Structure

    FLSim core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator aggregates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which compresses the messages between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.

    Included Datasets

    Currently, FLSim supports all datasets from LEAF, including FEMNIST, Shakespeare, Sent140, CelebA, Synthetic, and Reddit. Additionally, we support MNIST and CIFAR-10.

    Included Algorithms

    FLSim supports standard FedAvg, as well as other federated learning methods such as FedAdam, FedProx, FedAvgM, FedBuff, FedLARS, and FedLAMB.
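
    As an illustration of how these server-side methods differ, the sketch below shows a FedAdam-style step in the spirit of the round sketch in the Library Structure section: the averaged client update is treated as a pseudo-gradient for a server-side Adam optimizer. This is placeholder code, not FLSim's API:

    import torch

    def server_adam_step(server_model, avg_delta, adam):
        # avg_delta: the aggregator's averaged client update, per parameter
        with torch.no_grad():
            for p, d in zip(server_model.parameters(), avg_delta):
                p.grad = -d  # the negated update serves as the pseudo-gradient
        adam.step()          # adam = torch.optim.Adam(server_model.parameters(), ...)
        adam.zero_grad()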

    What’s next?

    We hope FLSim will foster large-scale cross-device FL research. We plan to add support for personalization in early 2022, and throughout 2022 we plan to gather feedback, improve usability, and continue to grow our collection of algorithms, datasets, and models.

Owner
Meta Research