An easy-to-use federated learning platform

Overview


FederatedScope is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry. Based on an event-driven architecture, FederatedScope integrates rich collections of functionalities to satisfy the burgeoning demands of federated learning, and aims to build an easy-to-use platform that promotes learning safely and effectively.

A detailed tutorial is provided on our website.

News

  • [06-17-2022] We release pFL-Bench, a comprehensive benchmark for personalized Federated Learning (pFL), containing 10+ datasets and 20+ baselines. [code, pdf]
  • [06-17-2022] We release FedHPO-B, a benchmark suite for studying federated hyperparameter optimization. [code, pdf]
  • [06-17-2022] We release B-FHTL, a benchmark suite for studying federated hetero-task learning. [code, pdf]
  • [06-13-2022] Our project was under attack; the issue has been resolved. More details.
  • [05-25-2022] Our paper FederatedScope-GNN has been accepted by KDD'2022!
  • [05-06-2022] We release FederatedScope v0.1.0!

Quick Start

We provide an end-to-end example for users to start running a standard FL course with FederatedScope.

Step 1. Installation

First of all, users need to clone the source code and install the required packages (we recommend Python >= 3.9).

git clone https://github.com/alibaba/FederatedScope.git
cd FederatedScope

You can install the dependencies from the requirements file:

# For minimal version
conda install --file enviroment/requirements-torch1.10.txt -c pytorch -c conda-forge -c nvidia

# For application version
conda install --file enviroment/requirements-torch1.10-application.txt -c pytorch -c conda-forge -c nvidia -c pyg

Or build a Docker image and run within a Docker environment (CUDA 11 and torch 1.10):

docker build -f enviroment/docker_files/federatedscope-torch1.10.Dockerfile -t alibaba/federatedscope:base-env-torch1.10 .
docker run --gpus device=all --rm -it --name "fedscope" -w $(pwd) alibaba/federatedscope:base-env-torch1.10 /bin/bash

If you need to run downstream tasks such as graph FL, change the requirements/Dockerfile name to its application counterpart when executing the above commands:

# enviroment/requirements-torch1.10.txt -> 
enviroment/requirements-torch1.10-application.txt

# enviroment/docker_files/federatedscope-torch1.10.Dockerfile ->
enviroment/docker_files/federatedscope-torch1.10-application.Dockerfile

Note: You can choose CUDA 10 and torch 1.8 by changing torch1.10 to torch1.8. The Docker images are based on nvidia-docker; please pre-install the NVIDIA drivers and nvidia-docker2 on the host machine. See more details here.

Finally, after all the dependencies are installed, run:

python setup.py install

# Or (for dev mode)
pip install -e .

Step 2. Prepare datasets

To run an FL task, users should prepare a dataset. The DataZoo provided in FederatedScope can help to automatically download and preprocess widely-used public datasets for various FL applications, including CV, NLP, graph learning, recommendation, etc. Users can directly specify cfg.data.type = DATASET_NAME in the configuration. For example,

cfg.data.type = 'femnist'

To use customized datasets, you need to prepare the dataset in a certain format and register it. Please refer to Customized Datasets for more details.
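As a rough illustration of this register-then-reference flow, a customized dataset might be hooked in as sketched below. This is only an assumption-laden sketch: the helper load_my_data and the name 'mydata' are hypothetical, and the exact registration API should be checked against the Customized Datasets page.

from federatedscope.register import register_data

def call_my_data(config):
    # Only respond when our (hypothetical) dataset name is requested
    if config.data.type == 'mydata':
        data, modified_config = load_my_data(config)  # load_my_data is a placeholder
        return data, modified_config

register_data('mydata', call_my_data)

Afterwards, the dataset can be referenced in the configuration just like a built-in one, e.g. cfg.data.type = 'mydata'.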

Step 3. Prepare models

Then, users should specify the model architecture that will be trained in the FL course. FederatedScope provides a ModelZoo that contains the implementation of widely adopted model architectures for various FL applications. Users can set up cfg.model.type = MODEL_NAME to apply a specific model architecture in FL tasks. For example,

cfg.model.type = 'convnet2'

FederatedScope allows users to use customized models via registering. Please refer to Customized Models for more details about how to customize a model architecture.

Step 4. Start running an FL task

Note that FederatedScope provides a unified interface for both standalone mode and distributed mode, and allows users to switch between them via configuration.

Standalone mode

The standalone mode in FederatedScope simulates multiple participants (servers and clients) on a single device, while participants' data remain isolated from each other and their models may be shared via message passing.

Here we demonstrate how to run a standard FL task with FederatedScope, setting cfg.data.type = 'FEMNIST' and cfg.model.type = 'ConvNet2' to run vanilla FedAvg for an image classification task. Users can customize training configurations, such as cfg.federate.total_round_num, cfg.data.batch_size, and cfg.optimizer.lr, in the configuration (a .yaml file), as sketched below.
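For reference, the relevant part of such a .yaml configuration might look like the following sketch (illustrative only: the keys are the ones named above, and values such as the learning rate are placeholders rather than the settings of the official federatedscope/example_configs/femnist.yaml):

federate:
  total_round_num: 50
data:
  type: femnist
  batch_size: 128
model:
  type: convnet2
optimizer:
  lr: 0.01

With a configuration file in hand, a standard FL task can then be run as: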

# Run with default configurations
python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml
# Or with custom configurations
python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml federate.total_round_num 50 data.batch_size 128

Then you can observe some monitored metrics during the training process as:

INFO: Server #0 has been set up ...
INFO: Model meta-info: <class 'federatedscope.cv.model.cnn.ConvNet2'>.
... ...
INFO: Client has been set up ...
INFO: Model meta-info: <class 'federatedscope.cv.model.cnn.ConvNet2'>.
... ...
INFO: {'Role': 'Client #5', 'Round': 0, 'Results_raw': {'train_loss': 207.6341676712036, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.152683353424072}}
INFO: {'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_loss': 209.0940284729004, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1818805694580075}}
INFO: {'Role': 'Client #8', 'Round': 0, 'Results_raw': {'train_loss': 202.24929332733154, 'train_acc': 0.04, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.0449858665466305}}
INFO: {'Role': 'Client #6', 'Round': 0, 'Results_raw': {'train_loss': 209.43883895874023, 'train_acc': 0.06, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1887767791748045}}
INFO: {'Role': 'Client #9', 'Round': 0, 'Results_raw': {'train_loss': 208.83140087127686, 'train_acc': 0.0, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1766280174255375}}
INFO: ----------- Starting a new training round (Round #1) -------------
... ...
INFO: Server #0: Training is finished! Starting evaluation.
INFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 163.029045
... ...
INFO: Server #0: Final evaluation is finished! Starting merging results.
... ...

Distributed mode

The distributed mode in FederatedScope denotes running multiple processes to build up an FL course, where each process acts as a participant (server or client) that instantiates its model and loads its data. The communication between participants is provided by the communication module of FederatedScope.

To run with distributed mode, you only need to:

  • Prepare an isolated data file and set cfg.distribute.data_file = PATH/TO/DATA for each participant;
  • Change cfg.federate.mode = 'distributed', and specify the role of each participant by cfg.distribute.role = 'server'/'client';
  • Set up a valid address via cfg.distribute.server_host/server_port and cfg.distribute.client_host/client_port. (Note that a server only needs to set up server_host/server_port for listening to messages, while a client needs to set up client_host/client_port for listening and server_host/server_port for sending join-in applications when building up an FL course.)
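For instance, the distribute-related part of a server-side configuration might contain entries along the following lines (an illustrative sketch only; the concrete settings live in federatedscope/example_configs/distributed_server.yaml and its client counterparts):

federate:
  mode: distributed
distribute:
  role: server              # or 'client' for a client participant
  data_file: PATH/TO/DATA
  server_host: 127.0.0.1
  server_port: 50051
  # a client additionally sets client_host / client_port for listening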

We prepare a synthetic example for running with distributed mode:

# For server
python main.py --cfg federatedscope/example_configs/distributed_server.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx

# For clients
python main.py --cfg federatedscope/example_configs/distributed_client_1.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx
python main.py --cfg federatedscope/example_configs/distributed_client_2.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx
python main.py --cfg federatedscope/example_configs/distributed_client_3.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx

An executable example with generated toy data can be run with:

# Generate the toy data
python scripts/gen_data.py

# Firstly start the server that is waiting for clients to join in
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_server.yaml distribute.data_file toy_data/server_data distribute.server_host 127.0.0.1 distribute.server_port 50051

# Start the client #1 (with another process)
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_client_1.yaml distribute.data_file toy_data/client_1_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50052
# Start the client #2 (with another process)
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_client_2.yaml distribute.data_file toy_data/client_2_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50053
# Start the client #3 (with another process)
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_client_3.yaml distribute.data_file toy_data/client_3_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50054

Then you can observe the results as follows (the IP addresses are anonymized as 'x.x.x.x'):

INFO: Server #0: Listen to x.x.x.x:xxxx...
INFO: Server #0 has been set up ...
Model meta-info: <class 'federatedscope.core.lr.LogisticRegression'>.
... ...
INFO: Client: Listen to x.x.x.x:xxxx...
INFO: Client (address x.x.x.x:xxxx) has been set up ...
Client (address x.x.x.x:xxxx) is assigned with #1.
INFO: Model meta-info: <class 'federatedscope.core.lr.LogisticRegression'>.
... ...
{'Role': 'Client #2', 'Round': 0, 'Results_raw': {'train_avg_loss': 5.215108394622803, 'train_loss': 333.7669372558594, 'train_total': 64}}
{'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_total': 64, 'train_loss': 290.9668884277344, 'train_avg_loss': 4.54635763168335}}
----------- Starting a new training round (Round #1) -------------
... ...
INFO: Server #0: Training is finished! Starting evaluation.
INFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 30.387419
... ...
INFO: Server #0: Final evaluation is finished! Starting merging results.
... ...

Advanced

As a comprehensive FL platform, FederatedScope provides fundamental implementations to support the requirements of various FL applications and frontier studies, towards both convenient usage and flexible extension, including:

  • Personalized Federated Learning: Client-specific model architectures and training configurations are applied to handle the non-IID issues caused by the diverse data distributions and heterogeneous system resources.
  • Federated Hyperparameter Optimization: When hyperparameter optimization (HPO) comes to Federated Learning, each attempt is extremely costly due to multiple rounds of communication across participants. It is worth noting that HPO under FL is a unique problem, and techniques such as low-fidelity HPO deserve further development.
  • Privacy Attacker: Privacy attack algorithms provide an important and convenient way to verify the privacy protection strength of the designed FL systems and algorithms, and this area keeps growing along with Federated Learning.
  • Graph Federated Learning: Working on the ubiquitous graph data, Graph Federated Learning aims to exploit isolated sub-graph data to learn a global model, and has attracted increasing popularity.
  • Recommendation: As a number of laws and regulations go into effect all over the world, more and more people are aware of the importance of privacy protection, which urges recommender systems to learn from user data in a privacy-preserving manner.
  • Differential Privacy: Different from encryption algorithms that require a large amount of computation resources, differential privacy is an economical yet flexible technique to protect privacy, which has achieved great success in databases and is increasingly adopted in federated learning.
  • ...

More support is coming soon! We have prepared a tutorial to provide more details about how to utilize FederatedScope to enjoy your journey of Federated Learning!

Materials of related topics are constantly being updated, please refer to FL-Recommendation, Federated-HPO, Personalized FL, Federated Graph Learning, FL-NLP, FL-privacy-attacker and so on.

Documentation

The classes and methods of FederatedScope have been well documented so that users can generate the API references by:

pip install -r requirements-doc.txt
make html

We put the API references on our website.

License

FederatedScope is released under Apache License 2.0.

Publications

If you find FederatedScope useful for your research or development, please cite the following paper:

@article{federatedscope,
  title = {FederatedScope: A Flexible Federated Learning Platform for Heterogeneity},
  author = {Xie, Yuexiang and Wang, Zhen and Chen, Daoyuan and Gao, Dawei and Yao, Liuyi and Kuang, Weirui and Li, Yaliang and Ding, Bolin and Zhou, Jingren},
  journal = {arXiv preprint arXiv:2204.05011},
  year = {2022},
}

More publications can be found in the Publications.

Contributing

We greatly appreciate any contribution to FederatedScope! You can refer to Contributing to FederatedScope for more details.

You are welcome to join our Slack channel or DingDing group (please scan the following QR code) for discussion.

[QR code of the DingDing group]

Issues
  • Support optimizers with different parameters


    • This PR is to solve issue #91
    • Solution
      • Specify the parameters of the local optimizer by adding new parameters under the configs cfg.optimizer and cfg.fedopt.optimizer:
      • get_optimizer is then called as follows
        optimizer = get_optimizer(model=model, **cfg.optimizer)
    
    • Example:
      • Taking cfg.optimizer as an example, the original config file is as follows
        # ------------------------------------------------------------------------ #
        # Optimizer related options
        # ------------------------------------------------------------------------ #
        cfg.optimizer = CN(new_allowed=True)
    
        cfg.optimizer.type = 'SGD'
        cfg.optimizer.lr = 0.1
    
    • By setting new_allowed=True in cfg.optimizer, we allow users to add new parameters according to the type of their optimizers. For example, if I want to use an optimizer registered as myoptimizer, together with its new parameters mylr and mynorm, I just need to write the yaml file as follows, and the new parameters will be added automatically.
    optimizer:
        type: myoptimizer
        mylr: 0.1
        mynorm: 1
    
    bug 
    opened by DavdGao 7
  • Redundancy in the log files


    A FedAvg trial on 5% of FEMNIST will produce a 500 KB log each round: about 80% are eval logs like 2022-04-13 16:33:24,901 (client:264) INFO: Client #1: (Evaluation (test set) at Round #26) test_loss is 79.352451, while 10% are server results and 10% are training information.

    If the number of rounds is 500, 1000, or much larger, the log files will take up too much space with a lot of redundancy. @yxdyc

    enhancement 
    opened by rayrayraykk 6
  • report cuda error when trying to launch up the demo case


    Hi, when I am trying to launch the demo case, a CUDA-related error is reported as below:

    I am using conda to manage the environment; in other envs I have PyTorch working on CUDA without any problem. I think this could be an installation issue -- I did not install anything by myself, just followed your guidance. My CUDA version: NVIDIA-SMI 510.47.03, Driver Version: 510.47.03, CUDA Version: 11.6,
    and my torch version: 1.10.1

    (fedscope) [email protected]:~/prj/FederatedScope$ python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml
    
    ...
    2022-05-13 22:06:09,249 (server:520) INFO: ----------- Starting training (Round #0) -------------
    Traceback (most recent call last):
     File "/home/liangma/prj/FederatedScope/federatedscope/main.py", line 41, in <module>
       _ = runner.run()
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/fed_runner.py", line 136, in run
       self._handle_msg(msg)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/fed_runner.py", line 254, in _handle_msg
       self.client[each_receiver].msg_handlers[msg.msg_type](msg)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/worker/client.py", line 202, in callback_funcs_for_model_para
       sample_size, model_para_all, results = self.trainer.train()
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/trainers/trainer.py", line 374, in train
       self._run_routine("train", hooks_set, target_data_split_name)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/trainers/trainer.py", line 208, in _run_routine
       hook(self.ctx)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/trainers/trainer.py", line 474, in _hook_on_fit_start_init
       ctx.model.to(ctx.device)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 899, in to
       return self._apply(convert)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 570, in _apply
       module._apply(fn)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 593, in _apply
       param_applied = fn(param)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 897, in convert
       return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
       raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    
    opened by lmaxeniro 5
  • Yo bro, compared with FATE, does FederatedScope provide a federated algorithm library like FATE's FederatedML?


    A few questions: 1. Does FederatedScope provide a federated algorithm library like FATE's FederatedML? FATE's FederatedML offers ready-to-use federated algorithms for horizontal/vertical LR, neural networks, decision trees, and so on. 2. Does FederatedScope's architecture have any advantages over FATE? 3. Are there any cases where FederatedScope has already been deployed in production or used commercially?

    opened by MiKKiYang 4
  • Cannot run gfl with link prediction dataset RecSys.


    For a link-level recommendation system dataset, e.g. RecSys(name='ciao'), the processed data.x is None, although it is necessary for model building, so we cannot run it successfully. Please check it. Located at federatedscope/gfl/model/model_builder.py.

    Besides, there is a 404 URL bug when downloading the dataset for RecSys if self.FL is True, which is caused by the modified self.name. Located at federatedscope/gfl/dataset/recsys.py. You can keep the same self.name for downloading, and modify the raw_dir() and processed_dir() methods to distinguish the FL dataset from the client dataset.

    opened by Starry-Hu 3
  • Refactor autotune module


    1. use widely-adopted ConfigSpace module to characterize the search space and sample concrete configurations.
    2. simplify the multi-threading for conducting FL in parallel.
    3. remove the redundant code that aimed to maintain best-seen performance.
    4. fix SHA's filtering issue (with ceil op)
    5. enable FedEx to work with grid configuration space
    6. refactor how FedEx receives its configuration space
    7. enable SHA to wrap FedEx
    8. fix GPU inconsistent issue that may cause failure when local model is shared
    enhancement 
    opened by joneswong 2
  • A keyword indicates the task type


    Is it possible to add a keyword, such as cfg.federate.task_type, to indicate the task type of each client? It is useful in calculating the loss function because y_true should be long and float respectively for classification and regression tasks.
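    For instance, with such a keyword the label preparation in the trainer could branch as in the sketch below (illustrative only; cfg.federate.task_type is the keyword proposed in this issue, not an existing option):

    import torch

    task_type = 'classification'  # would be read from the proposed cfg.federate.task_type
    y_true = torch.tensor([0.0, 1.0, 1.0])
    # classification losses (e.g. CrossEntropyLoss) expect long labels,
    # while regression losses (e.g. MSELoss) expect float targets
    y_true = y_true.long() if task_type == 'classification' else y_true.float()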

    documentation 
    opened by wanghh7 2
  • Support different configs for different client


    https://github.com/alibaba/FederatedScope/blob/b4914e68d40ce102fde676e92478e6181ace520c/federatedscope/core/fed_runner.py#L224-L225

    Here the model uses the global configuration instead of the client configuration.

    bug 
    opened by wanghh7 2
  • A problem when using Adam optimizer


    https://github.com/alibaba/FederatedScope/blob/28d325bcb65d904db1b70a22c90441bdbeed654c/federatedscope/core/auxiliaries/optimizer_builder.py#L14

    https://github.com/alibaba/FederatedScope/blob/28d325bcb65d904db1b70a22c90441bdbeed654c/federatedscope/core/trainers/context.py#L108

    Adam got an unexpected keyword argument 'momentum'.
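    A possible workaround sketch (not FederatedScope code; build_optimizer and its signature are hypothetical) is to forward only the keyword arguments that the chosen torch optimizer actually accepts, so that e.g. momentum is dropped for Adam instead of raising an error:

    import inspect
    import torch

    def build_optimizer(model, type='SGD', lr=0.1, **kwargs):
        opt_cls = getattr(torch.optim, type)
        # keep only the kwargs accepted by the optimizer's constructor
        allowed = set(inspect.signature(opt_cls).parameters)
        kwargs = {k: v for k, v in kwargs.items() if k in allowed}
        return opt_cls(model.parameters(), lr=lr, **kwargs)

    # e.g. build_optimizer(model, type='Adam', lr=0.01, momentum=0.9) now ignores momentum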

    opened by wanghh7 2
  • Some questions about cross-device FL.


    Hello guys! I have read the tutorial about FederatedScope. It seems that the whole project is based on Python and the cross-device part is just a simulation. I wonder whether there is any cross-language design to deal with the communication between the client and the server, for example, with Android (Java) on the mobile phone and Linux (Java/Python) on the server, because some devices lack a Python environment. What's more, is there any trial on real devices, especially for the cross-device part? I would appreciate it if you could resolve my doubts.

    Thank you for your efforts on FederatedScope!

    opened by SimaZD 2
  • Rebuild `Context` and Add lifecycle manager


    • The attributes of ctx are classified into the following two classes

      • ctx.xxx: general attributes
      • ctx.mode.xxx: actually refer to ctx["mode"][ctx.cur_mode]["xxx"]
    • Add lifecycle manager

      • Build two classes CtxStatsVar and CtxReferVar to wrap the variables with specific lifecycle and clear function.
        • CtxStatsVar: statistic variables (float), e.g. loss_batch_total
          • lifecycle: chosen from batch, epoch, routine and None
        • CtxReferVar: reference variable, e.g. y_prob, data_batch
          • lifecycle: chosen from batch, epoch, routine and None
          • efunc: the customized function to run when the lifecycle ends
            • e.g. for ctx.model, maybe we want to move it to cpu when the lifecycle ends.
            • When efunc is None, the variable is deleted at the end of the lifecycle
      • Add an attribute lifecycles in Context to record the lifecycles of the attributes
      • Add a decorator @lifecycle(lifecycle=xxx) to manage the variables.
    enhancement 
    opened by DavdGao 2
  • AttributeError in distributed mode


    Describe the bug: When the model contains a BN layer, the param bn.num_batches_tracked would be converted to int by gRPC, but trainer.update can't handle this situation well.


    A dummy solution:

    def update(self, model_parameters):
        '''
            Called by the FL client to update the model parameters
        Arguments:
            model_parameters (dict): PyTorch Module object's state_dict.
        '''
        for key in model_parameters:
            if isinstance(model_parameters[key], list):
                model_parameters[key] = torch.FloatTensor(
                    model_parameters[key])
            elif isinstance(model_parameters[key], int):
                model_parameters[key] = torch.LongTensor([model_parameters[key]])
            elif isinstance(model_parameters[key], float):
                model_parameters[key] = torch.FloatTensor([model_parameters[key]])
        self.ctx.model.load_state_dict(self._param_filter(model_parameters),
                                       strict=False)
    

    or can we solve it before sending the model_param?

    bug 
    opened by rayrayraykk 0
  • [Doc] Fix the guidance of installation and setup


    1. Fix the guidance of installation.
    2. Fix setup.py.
    3. Add format checks in dev mode.
    4. Add conda recipe.

    For consistency, I change the version to 0.1.9. See https://anaconda.org/federatedscope/federatedscope and https://pypi.org/project/federatedscope/ .

    documentation enhancement 
    opened by rayrayraykk 0
  • Cannot set different data related parameters for different clients.


    As the title says, since the dataset is loaded before both fedrunner.run() and the setup of clients, the config_per_client.yaml doesn't take effect for the dataset, e.g. setting different batch sizes.

    bug 
    opened by DavdGao 1
  • Modification of the finetune mechanism


    This PR is mainly for the modification of the finetune mechanism (#148), but we also make small changes to other functions as follows.

    Finetune

    • Move some parameters from cfg.federate into cfg.train as they are more relevant to training, including
      • local_update_step
      • batch_or_epoch
    • Create cfg.finetune and cfg.train in the config to support different parameters for finetuning and training (e.g. optimizer.lr)
    • Implement finetune function in the basic trainer
    • Modify most existing shells and yaml files to fit the new setting (except the files under the directory benchmark)

    Enums and Decorators

    • Create enums.py to avoid using raw strings and the resulting inconsistency issues
    • Create decorators.py to keep the code clean

    Optimizer

    • Initialize ctx.optimizer at the beginning of the routine function rather than within the context to solve #136

    To be discussed

    @joneswong please check if the following modifications are appropriate

    • In this PR, use_diff is implemented by a decorator use_diff.
    • Some hpo configs are modified to fit the new configuration.
    opened by DavdGao 0