Flower - A Friendly Federated Learning Framework

Overview


Flower (flwr) is a framework for building federated learning systems. The design of Flower is based on a few guiding principles:

  • Customizable: Federated learning systems vary wildly from one use case to another. Flower allows for a wide range of different configurations depending on the needs of each individual use case.

  • Extendable: Flower originated from a research project at the University of Oxford, so it was built with AI research in mind. Many components can be extended and overridden to build new state-of-the-art systems.

  • Framework-agnostic: Different machine learning frameworks have different strengths. Flower can be used with any machine learning framework, for example, PyTorch, TensorFlow, PyTorch Lightning, MXNet, scikit-learn, TFLite, or even raw NumPy for users who enjoy computing gradients by hand (see the minimal client sketch after this list).

  • Understandable: Flower is written with maintainability in mind. The community is encouraged to both read and contribute to the codebase.
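
Because every client implements the same small interface, supporting a new framework mostly comes down to three methods. Below is a minimal, self-contained sketch (ours, not from the Flower docs; it assumes Flower 1.x) of a raw-NumPy client, where the toy "training step" and "loss" are stand-ins for real framework code:

    import flwr as fl
    import numpy as np

    class MinimalClient(fl.client.NumPyClient):
        """Toy client that 'trains' a single weight vector with raw NumPy."""

        def __init__(self):
            self.weights = [np.zeros(10)]

        def get_parameters(self, config):
            return self.weights

        def fit(self, parameters, config):
            self.weights = [w - 0.1 for w in parameters]  # stand-in training step
            return self.weights, 1, {}

        def evaluate(self, parameters, config):
            loss = float(np.linalg.norm(parameters[0]))  # stand-in loss
            return loss, 1, {}

    if __name__ == "__main__":
        fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=MinimalClient())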

Meet the Flower community on flower.dev!

Documentation

Flower Docs: flower.dev/docs

Flower Usage Examples

A number of examples show different usage scenarios of Flower (in combination with popular machine learning frameworks such as PyTorch or TensorFlow). To run an example, first install the necessary extras listed in its documentation.

Quickstart examples and further examples are listed in the Usage Examples Documentation.

Flower Baselines / Datasets

Experimental - curious minds can take a peek at baselines.

Citation

If you publish work that uses Flower, please cite Flower as follows:

@article{beutel2020flower,
  title={Flower: A Friendly Federated Learning Research Framework},
  author={Beutel, Daniel J and Topal, Taner and Mathur, Akhil and Qiu, Xinchi and Parcollet, Titouan and Lane, Nicholas D},
  journal={arXiv preprint arXiv:2007.14390},
  year={2020}
}

Please also consider adding your publication to the list of Flower-based publications in the docs; to do so, simply open a Pull Request.

Contributing to Flower

We welcome contributions. Please see CONTRIBUTING.md to get started!

Comments
  • Adding advanced_pytorch example

    Adding advanced_pytorch example

    Reference Issues/PRs

    https://github.com/adap/flower/pull/803

    What does this implement/fix? Explain your changes.

    Implements an advanced_pytorch example. It closely mirrors the advanced_tensorflow example.

    Any other comments?

    I added a --toy flag since my machine can't run the full 10-client simulation. Please run the full simulation and let me know how it works; I'm happy to make any required changes. I had a notebook for prototyping but removed it; I can add it back in a later PR if needed.

    documentation enhancement 
    opened by cozek 20
  • How can I use one GPU to simulate over 100 concurrent clients and total 1000 clients in the pool?

    How can I use one GPU to simulate over 100 concurrent clients and total 1000 clients in the pool?

    What is your question?

    The flower paper says that the Virtual Client Engine can

    enables large-scale single-machine or multi-machine experiments by executing workloads in a resource-aware fashion.

    It creates a ClientProxy for each client, but defers instantiation of the actual client object (including local model and data) until the resources to execute the client-side task (training, evaluation) become available.

    Then I guess that Flower can automatically schedule the simulation of different clients based on the available resources. That means we could scale up the number of clients substantially, perhaps almost without limit.

    I tried 10 clients on one GPU, using the example https://github.com/adap/flower/blob/main/examples/advanced_pytorch/run.sh , but it fails with a CUDA out-of-memory error (16 GB of GPU memory). Could you please provide example code for this, and for scaling beyond 10 clients?
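
    Not part of the original question, but a hedged sketch of the mechanism being asked about (assuming Flower 1.x): capping each virtual client's GPU share via client_resources lets Ray schedule only as many concurrent clients as fit on the GPU, while the pool itself can be much larger. MyFlowerClient is a hypothetical NumPyClient subclass:

      import flwr as fl

      def client_fn(cid: str):
          # Build the client for partition `cid` lazily; only the clients
          # currently scheduled by Ray hold a model (and data) in GPU memory.
          return MyFlowerClient(cid)  # hypothetical NumPyClient, defined elsewhere

      fl.simulation.start_simulation(
          client_fn=client_fn,
          num_clients=1000,  # total client pool
          client_resources={"num_cpus": 1, "num_gpus": 0.1},  # ~10 concurrent clients per GPU
          config=fl.server.ServerConfig(num_rounds=3),
      )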

    question 
    opened by wizard1203 15
  • Trying to run quickstart_tensorflow

    Trying to run quickstart_tensorflow

    Hello, I am trying to run your quickstart project, but I got an error. When I tried to run server.py on port 8080 it failed, so I changed the port to 5040 in both files (and checked that it is a free port). The server runs fine, but when I then run client.py it cannot connect to the server. Both client and server run on the same PC in an Anaconda environment. Any clue what I did wrong?

    (Screenshots attached: "flower quickstart2", "flower quickstart3".)

    opened by Martin-Stevlik 13
  • Implement FedMedian

    Implement FedMedian

    Reference Issues/PRs

    Fixes #1405.

    What does this implement/fix? Explain your changes.

    Implemented the aggregation function FedMedian.

    Any other comments?

    Let me know if everything is OK so that I can also implement other aggregation functions. Should I also update the CODEOWNERS file?

    opened by edogab33 12
  • simulation, add max_calls arg to ray.remote to avoid rayidle in gpus

    simulation, add max_calls arg to ray.remote to avoid rayidle in gpus

    Reference Issues/PRs

    Fixes #1152 and #1376

    What does this implement/fix? Explain your changes.

    Issue: @ray.remote in ray_client_proxy.py is called repeatedly when running a simulation. By default, after each client finishes its training, the Ray worker remains on the GPU as Ray::IDLE. These idle workers accumulate and cause CUDA memory to run out.

    Change: By adding the argument @ray.remote(max_calls=1), each Ray worker is removed after its client finishes.
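
    As a hedged illustration (not the actual patch), max_calls=1 is a standard ray.remote option that tears down the worker process after each task, releasing its CUDA context instead of leaving a Ray::IDLE process behind:

      import ray

      @ray.remote(max_calls=1)  # worker process exits after one task, freeing GPU memory
      def run_client_task(cid: str) -> str:
          # Stand-in for the client-side work the Virtual Client Engine dispatches
          return f"client {cid} done"

      ray.init()
      print(ray.get([run_client_task.remote(str(i)) for i in range(4)]))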

    Any other comments?

    None

    opened by mofanv 12
  • Bumped mypy and tensorflow-cpu to newer versions

    Bumped mypy and tensorflow-cpu to newer versions

    The changes in the pyproject.toml Poetry file were required to overcome issues when running ./dev/test.sh in a clean new Poetry environment.

    A fuller description of these changes is given in https://github.com/adap/flower/issues/961.

    opened by sisco0 11
  • PyTorch simulation example with GPUs

    PyTorch simulation example with GPUs

    What is your question?

    Hello, I'm able to run the PyTorch simulation example (https://github.com/adap/flower/tree/main/examples/simulation_pytorch) included in the repo using the CPU; however, I couldn't find an example of how to change the setup to use GPUs instead. Could you please provide some guidance on how to run this script using 4 GPUs? Is this possible, or is the simulation designed to use CPUs only?

    question 
    opened by Mirian-Hipolito 10
  • Unify documentation separator

    Unify documentation separator

    Reference Issues/PRs

    This PR fixes #1083.

    What does this implement/fix? Explain your changes.

    In order to unify the documentation, the docs that contained _ (underscore) as a separator have been renamed to use - (hyphen). Two main changes were implemented for this. First, 8 filenames were changed:

    1. /doc/source/quickstart_mxnet.rst
    2. /doc/source/quickstart_pytorch_lightning.rst
    3. /doc/source/example_walkthrough_pytorch_mnist.rst
    4. /doc/source/quickstart_huggingface.rst
    5. /doc/source/quickstart_pytorch.rst
    6. /doc/source/quickstart_tensorflow.rst
    7. /doc/source/release_process.rst
    8. /doc/source/quickstart_scikitlearn.rst

    The references to these files have also been updated, mainly in README.md and index.rst.

    Secondly, redirects for each renamed doc were configured in conf.py using sphinx_reredirects, which was also added to the extensions list. This redirects the old names (e.g., quickstart_pytorch) to the new pages (e.g., quickstart-pytorch).

    Any other comments?

    opened by RISHIKESHAVAN 10
  • Let simulation start server with custom ClientManager

    Let simulation start server with custom ClientManager

    Reference Issues/PRs

    Currently, a simulation can't be started with a custom ClientManager.

    What does this implement/fix? Explain your changes.

    Adds extra parameters to the default server initialization and to start_simulation so that users can supply a custom ClientManager instead of always using the SimpleClientManager.
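
    A hedged sketch of the intended usage, assuming the new parameter lands as client_manager (names and signatures here are illustrative, not the final API):

      import flwr as fl
      from flwr.server.client_manager import SimpleClientManager

      class MyClientManager(SimpleClientManager):
          """Hypothetical manager that customizes how clients are sampled."""

          def sample(self, num_clients, min_num_clients=None, criterion=None):
              # e.g., bias sampling toward a preferred subset of clients
              return super().sample(num_clients, min_num_clients, criterion)

      fl.simulation.start_simulation(
          client_fn=client_fn,               # defined elsewhere
          num_clients=10,
          client_manager=MyClientManager(),  # the parameter this PR proposes
          config=fl.server.ServerConfig(num_rounds=3),
      )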

    Any other comments?

    opened by negedng 9
  • Added __version__ to Flower

    Added __version__ to Flower

    Added __version__ using importlib.metadata to retrieve installed package information. This pull request addresses https://github.com/adap/flower/issues/883.

    A screenshot of the testing process was attached to the PR.
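
    A hedged sketch of the approach described above (not the exact PR diff):

      try:
          from importlib.metadata import PackageNotFoundError, version  # Python 3.8+
      except ImportError:
          from importlib_metadata import PackageNotFoundError, version  # backport package

      try:
          __version__ = version("flwr")  # version of the installed distribution
      except PackageNotFoundError:
          __version__ = "unknown"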

    opened by sisco0 9
  • Not able to run example in flwr_example/quickstart_pytorch

    Not able to run example in flwr_example/quickstart_pytorch

    I cannot run

    $ ./src/py/flwr_example/quickstart_pytorch/run-server.sh
    /usr/bin/python3: Error while finding module specification for 'flwr_example.quickstart_pytorch.server' (ModuleNotFoundError: No module named 'flwr_example.quickstart_pytorch')
    

    But I can run the other example

    $ ./src/py/flwr_example/pytorch/run-server.sh
    
    bug 
    opened by yinfredyue 9
  • Add Flower Baseline: FedAvg+Shakespeare

    Add Flower Baseline: FedAvg+Shakespeare

    Paper

    McMahan et al., 2016, Communication-Efficient Learning of Deep Networks from Decentralized Data, Shakespeare

    Link

    https://arxiv.org/abs/1602.05629

    Maybe give motivations about why the paper should be implemented as a baseline.

    This paper was the first to propose a federated approach for deep learning on decentralized data.

    Is there something else you want to add?

    No response

    Implementation

    To implement this baseline, it is recommended to do the following items in that order:

    For first time contributors

    Prepare - understand the scope

    • [ ] Read the paper linked above
    • [ ] Create the directory structure in Flower Baselines (just the __init__.py files and a README.md)
    • [ ] Before starting to write code, write down all of the specs of this experiment in a README (dataset, partitioning, model, number of clients, all hyperparameters, …)
    • [ ] Open a draft PR

    Implement - make it work

    • [ ] Implement some form of dataset loading and partitioning in a separate dataset.py (doesn’t have to match the paper exactly)
    • [ ] Implement the model in PyTorch
    • [ ] Write a test that shows that the model has the number of parameters mentioned in the paper
    • [ ] Implement the federated learning setup outlined in the paper, maybe starting with fewer clients
    • [ ] Plot accuracy and loss
    • [ ] Run it and check if the model starts to converge

    Align - make it converge

    • [ ] Implement the exact data partitioning outlined in the paper
    • [ ] Use the exact hyperparameters outlined in the paper
    • [ ] Make it converge to roughly the same accuracy that the paper states
    • [ ] Commit the final hyperparameters and plots
    • [ ] Mark the PR as ready
    good first issue new baseline 
    opened by charlesbvll 0
  • Add Flower Baseline: FedAvg+CIFAR-10

    Add Flower Baseline: FedAvg+CIFAR-10

    Paper

    McMahan et al., 2016, Communication-Efficient Learning of Deep Networks from Decentralized Data, CIFAR-10

    Link

    https://arxiv.org/abs/1602.05629

    Maybe give motivations about why the paper should be implemented as a baseline.

    This paper was the first to propose a federated approach for deep learning on decentralized data.

    Is there something else you want to add?

    No response

    Implementation

    To implement this baseline, it is recommended to do the following items in that order:

    For first time contributors

    Prepare - understand the scope

    • [ ] Read the paper linked above
    • [ ] Create the directory structure in Flower Baselines (just the __init__.py files and a README.md)
    • [ ] Before starting to write code, write down all of the specs of this experiment in a README (dataset, partitioning, model, number of clients, all hyperparameters, …)
    • [ ] Open a draft PR

    Implement - make it work

    • [ ] Implement some form of dataset loading and partitioning in a separate dataset.py (doesn’t have to match the paper exactly)
    • [ ] Implement the model in PyTorch
    • [ ] Write a test that shows that the model has the number of parameters mentioned in the paper
    • [ ] Implement the federated learning setup outlined in the paper, maybe starting with fewer clients
    • [ ] Plot accuracy and loss
    • [ ] Run it and check if the model starts to converge

    Align - make it converge

    • [ ] Implement the exact data partitioning outlined in the paper
    • [ ] Use the exact hyperparameters outlined in the paper
    • [ ] Make it converge to roughly the same accuracy that the paper states
    • [ ] Commit the final hyperparameters and plots
    • [ ] Mark the PR as ready
    good first issue new baseline 
    opened by charlesbvll 0
  • Add Flower Baseline: FedAvg+FEMNIST

    Add Flower Baseline: FedAvg+FEMNIST

    Paper

    Caldas et al., 2018, LEAF: A Benchmark for Federated Settings, FedAvg+FEMNIST

    Link

    https://arxiv.org/abs/1812.01097

    Maybe give motivations about why the paper should be implemented as a baseline.

    This benchmark was developed to test federated learning strategies; it is therefore essential to have it as a baseline.

    Is there something else you want to add?

    No response

    Implementation

    To implement this baseline, it is recommended to do the following items in that order:

    For first time contributors

    Prepare - understand the scope

    • [ ] Read the paper linked above
    • [ ] Create the directory structure in Flower Baselines (just the __init__.py files and a README.md)
    • [ ] Before starting to write code, write down all of the specs of this experiment in a README (dataset, partitioning, model, number of clients, all hyperparameters, …)
    • [ ] Open a draft PR

    Implement - make it work

    • [ ] Implement some form of dataset loading and partitioning in a separate dataset.py (doesn’t have to match the paper exactly)
    • [ ] Implement the model in PyTorch
    • [ ] Write a test that shows that the model has the number of parameters mentioned in the paper
    • [ ] Implement the federated learning setup outlined in the paper, maybe starting with fewer clients
    • [ ] Plot accuracy and loss
    • [ ] Run it and check if the model starts to converge

    Align - make it converge

    • [ ] Implement the exact data partitioning outlined in the paper
    • [ ] Use the exact hyperparameters outlined in the paper
    • [ ] Make it converge to roughly the same accuracy that the paper states
    • [ ] Commit the final hyperparameters and plots
    • [ ] Mark the PR as ready
    good first issue new baseline 
    opened by charlesbvll 0
  • Add Flower Baseline: FedProx+MNIST

    Add Flower Baseline: FedProx+MNIST

    Paper

    Li et al., 2018, Federated Optimization in Heterogeneous Networks, MNIST

    Link

    https://arxiv.org/abs/1812.06127

    Maybe give motivations about why the paper should be implemented as a baseline.

    This paper improves on the FedAvg strategy by making the convergence more robust.

    Is there something else you want to add?

    No response

    Implementation

    To implement this baseline, it is recommended to do the following items in that order:

    For first time contributors

    Prepare - understand the scope

    • [x] Read the paper linked above
    • [x] Create the directory structure in Flower Baselines (just the __init__.py files and a README.md)
    • [x] Before starting to write code, write down all of the specs of this experiment in a README (dataset, partitioning, model, number of clients, all hyperparameters, …)
    • [x] Open a draft PR

    Implement - make it work

    • [x] Implement some form of dataset loading and partitioning in a separate dataset.py (doesn’t have to match the paper exactly)
    • [x] Implement the model in PyTorch
    • [x] Write a test that shows that the model has the number of parameters mentioned in the paper
    • [x] Implement the federated learning setup outlined in the paper, maybe starting with fewer clients
    • [x] Plot accuracy and loss
    • [x] Run it and check if the model starts to converge

    Align - make it converge

    • [x] Implement the exact data partitioning outlined in the paper
    • [x] Use the exact hyperparameters outlined in the paper
    • [ ] Make it converge to roughly the same accuracy that the paper states
    • [ ] Commit the final hyperparameters and plots
    • [ ] Mark the PR as ready
    good first issue new baseline 
    opened by charlesbvll 0
  • Add Flower Baseline: FedAvg+MNIST

    Add Flower Baseline: FedAvg+MNIST

    Paper

    McMahan et al., 2016, Communication-Efficient Learning of Deep Networks from Decentralized Data, MNIST

    Link

    https://arxiv.org/abs/1602.05629

    Maybe give motivations about why the paper should be implemented as a baseline.

    This paper was the first to propose a federated approach for deep learning on decentralized data.

    Is there something else you want to add?

    No response

    Implementation

    To implement this baseline, it is recommended to do the following items in that order:

    For first time contributors

    Prepare - understand the scope

    • [x] Read the paper linked above
    • [x] Create the directory structure in Flower Baselines (just the __init__.py files and a README.md)
    • [x] Before starting to write code, write down all of the specs of this experiment in a README (dataset, partitioning, model, number of clients, all hyperparameters, …)
    • [x] Open a draft PR

    Implement - make it work

    • [x] Implement some form of dataset loading and partitioning in a separate dataset.py (doesn’t have to match the paper exactly)
    • [x] Implement the model in PyTorch
    • [x] Write a test that shows that the model has the number of parameters mentioned in the paper
    • [x] Implement the federated learning setup outlined in the paper, maybe starting with fewer clients
    • [x] Plot accuracy and loss
    • [x] Run it and check if the model starts to converge

    Align - make it converge

    • [x] Implement the exact data partitioning outlined in the paper
    • [x] Use the exact hyperparameters outlined in the paper
    • [x] Make it converge to roughly the same accuracy that the paper states
    • [x] Commit the final hyperparameters and plots
    • [x] Mark the PR as ready
    good first issue new baseline 
    opened by charlesbvll 0
Releases
  • v1.1.0(Oct 31, 2022)

    Thanks to our contributors

    We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

    Akis Linardos, Christopher S, Daniel J. Beutel, George, Jan Schlicht, Mohammad Fares, Pedro Porto Buarque de Gusmão, Philipp Wiesner, Rob Luke, Taner Topal, VasundharaAgarwal, danielnugraha, edogab33

    What's new?

    • Introduce Differential Privacy wrappers (preview) (#1357, #1460)

      The first (experimental) preview of pluggable Differential Privacy wrappers enables easy configuration and usage of differential privacy (DP). The pluggable DP wrappers enable framework-agnostic and strategy-agnostic usage of both client-side DP and server-side DP. Head over to the Flower docs, where a new explainer goes into more detail.

    • New iOS CoreML code example (#1289)

      Flower goes iOS! A massive new code example shows how Flower clients can be built for iOS. The code example contains both Flower iOS SDK components that can be used for many tasks, and one task example running on CoreML.

    • New FedMedian strategy (#1461)

      The new FedMedian strategy implements Federated Median (FedMedian) by Yin et al., 2018.

    • Log Client exceptions in Virtual Client Engine (#1493)

      All Client exceptions happening in the VCE are now logged by default and not just exposed to the configured Strategy (via the failures argument).

    • Improve Virtual Client Engine internals (#1401, #1453)

      Some internals of the Virtual Client Engine have been revamped. The VCE now uses Ray 2.0 under the hood, and the value type of the client_resources dictionary has been changed to float to allow fractions of resources to be allocated.

    • Support optional Client/NumPyClient methods in Virtual Client Engine

      The Virtual Client Engine now has full support for optional Client (and NumPyClient) methods.

    • Provide type information to packages using flwr (#1377)

      The package flwr is now bundled with a py.typed file indicating that the package is typed. This enables typing support for projects or packages that use flwr by enabling them to improve their code using static type checkers like mypy.

    • Updated code examples (#1344, #1347)

      The code examples covering scikit-learn and PyTorch Lightning have been updated to work with the latest version of Flower.

    • Updated documentation (#1355, #1558, #1379, #1380, #1381, #1332, #1391, #1403, #1364, #1409, #1419, #1444, #1448, #1417, #1449, #1465, #1467)

      There have been so many documentation updates that it doesn't even make sense to list them individually.

    • Restructured documentation (#1387)

      The documentation has been restructured to make it easier to navigate. This is just the first step in a larger effort to make the Flower documentation the best documentation of any project ever. Stay tuned!

    • Open in Colab button (#1389)

      The four parts of the Flower Federated Learning Tutorial now come with a new Open in Colab button. No need to install anything on your local machine: you can use and learn about Flower directly in your browser, only a single click away.

    • Improved tutorial (#1468, #1470, #1472, #1473, #1474, #1475)

      The Flower Federated Learning Tutorial has two brand-new parts covering custom strategies (still WIP) and the distinction between Client and NumPyClient. The existing parts one and two have also been improved (many small changes and fixes).

    Incompatible changes

    None

  • v1.0.0(Jul 28, 2022)

    Highlights

    • Stable Virtual Client Engine (accessible via start_simulation)
    • All Client/NumPyClient methods are now optional
    • Configurable get_parameters
    • Tons of small API cleanups resulting in a more coherent developer experience

    Thanks to our contributors

    We would like to give our special thanks to all the contributors who made Flower 1.0 possible (in reverse GitHub Contributors order):

    @rtaiello, @g-pichler, @rob-luke, @andreea-zaharia, @kinshukdua, @nfnt, @tatiana-s, @TParcollet, @vballoli, @negedng, @RISHIKESHAVAN, @hei411, @SebastianSpeitel, @AmitChaulwar, @Rubiel1, @FANTOME-PAN, @Rono-BC, @lbhm, @sishtiaq, @remde, @Jueun-Park, @architjen, @PratikGarai, @mrinaald, @zliel, @MeiruiJiang, @sandracl72, @gubertoli, @Vingt100, @MakGulati, @cozek, @jafermarq, @sisco0, @akhilmathurs, @CanTuerk, @mariaboerner1987, @pedropgusmao, @tanertopal, @danieljanes.

    Incompatible changes

    • All arguments must be passed as keyword arguments (#1338)

      Pass all arguments as keyword arguments; positional arguments are no longer supported. Code that uses positional arguments (e.g., start_client("127.0.0.1:8080", FlowerClient())) must add the keyword for each positional argument (e.g., start_client(server_address="127.0.0.1:8080", client=FlowerClient())).

    • Introduce configuration object ServerConfig in start_server and start_simulation (#1317)

      Instead of a config dictionary {"num_rounds": 3, "round_timeout": 600.0}, start_server and start_simulation now expect a configuration object of type flwr.server.ServerConfig. ServerConfig takes the same arguments as the previous config dict, but it makes writing type-safe code easier and makes the default parameter values more transparent.
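
      For illustration, a minimal before/after sketch of this migration (the server address is a placeholder):

        import flwr as fl

        # Before (Flower < 1.0): a plain dict
        # fl.server.start_server(config={"num_rounds": 3, "round_timeout": 600.0})

        # After (Flower 1.0): a typed configuration object
        config = fl.server.ServerConfig(num_rounds=3, round_timeout=600.0)
        fl.server.start_server(server_address="0.0.0.0:8080", config=config)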

    • Rename built-in strategy parameters for clarity (#1334)

      The following built-in strategy parameters were renamed to improve readability and consistency with other APIs:

      • fraction_eval --> fraction_evaluate
      • min_eval_clients --> min_evaluate_clients
      • eval_fn --> evaluate_fn
    • Update default arguments of built-in strategies (#1278)

      All built-in strategies now use fraction_fit=1.0 and fraction_evaluate=1.0, which means they select all currently available clients for training and evaluation. Projects that relied on the previous default values can get the previous behaviour by initializing the strategy in the following way:

      strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)

    • Add server_round to Strategy.evaluate (#1334)

      The Strategy method evaluate now receives the current round of federated learning/evaluation as the first parameter.

    • Add server_round and config parameters to evaluate_fn (#1334)

      The evaluate_fn passed to built-in strategies like FedAvg now takes three parameters: (1) The current round of federated learning/evaluation (server_round), (2) the model parameters to evaluate (parameters), and (3) a config dictionary (config).
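
      A minimal sketch of the new signature (the body is a hypothetical centralized evaluation; the numbers are stand-ins):

        from typing import Dict, List, Optional, Tuple

        import numpy as np
        import flwr as fl
        from flwr.common import Scalar

        def evaluate_fn(
            server_round: int,             # (1) current round
            parameters: List[np.ndarray],  # (2) model parameters to evaluate
            config: Dict[str, Scalar],     # (3) config dictionary
        ) -> Optional[Tuple[float, Dict[str, Scalar]]]:
            loss, accuracy = 0.42, 0.91    # stand-ins for a real evaluation
            return loss, {"accuracy": accuracy}

        strategy = fl.server.strategy.FedAvg(evaluate_fn=evaluate_fn)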

    • Rename rnd to server_round (#1321)

      Several Flower methods and functions (evaluate_fn, configure_fit, aggregate_fit, configure_evaluate, aggregate_evaluate) receive the current round of federated learning/evaluation as their first parameter. To improve readability and avoid confusion with random, this parameter has been renamed from rnd to server_round.

    • Move flwr.dataset to flwr_baselines (#1273)

      The experimental package flwr.dataset was migrated to Flower Baselines.

    • Remove experimental strategies (#1280)

      Remove unmaintained experimental strategies (FastAndSlow, FedFSv0, FedFSv1).

    • Rename Weights to NDArrays (#1258, #1259)

      flwr.common.Weights was renamed to flwr.common.NDArrays to better capture what this type is all about.

    • Remove antiquated force_final_distributed_eval from start_server (#1258, #1259)

      The start_server parameter force_final_distributed_eval has long been a historic artefact; in this release it is finally gone for good.

    • Make get_parameters configurable (#1242)

      The get_parameters method now accepts a configuration dictionary, just like get_properties, fit, and evaluate.

    • Replace num_rounds in start_simulation with new config parameter (#1281)

      The start_simulation function now accepts a configuration dictionary config instead of the num_rounds integer. This improves the consistency between start_simulation and start_server and makes transitioning between the two easier.

    New features

    • Support Python 3.10 (#1320)

      The previous Flower release introduced experimental support for Python 3.10; this release declares Python 3.10 support as stable.

    • Make all Client and NumPyClient methods optional (#1260, #1277)

      The Client/NumPyClient methods get_properties, get_parameters, fit, and evaluate are all optional. This enables writing clients that implement, for example, only fit, but no other method. No need to implement evaluate when using centralized evaluation!
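
      For example, a fit-only client is now valid (a hedged sketch; the "training step" is a stand-in):

        import flwr as fl

        class FitOnlyClient(fl.client.NumPyClient):
            """Implements only `fit`; evaluation happens centrally on the server."""

            def fit(self, parameters, config):
                new_parameters = parameters  # stand-in for a local training step
                return new_parameters, 1, {}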

    • Enable passing a Server instance to start_simulation (#1281)

      Similar to start_server, start_simulation now accepts a full Server instance. This enables users to heavily customize the execution of experiments and opens the door to running, for example, async FL using the Virtual Client Engine.

    • Update code examples (#1291, #1286, #1282)

      Many code examples received small or even large maintenance updates, among them are

      • scikit-learn
      • simulation_pytorch
      • quickstart_pytorch
      • quickstart_simulation
      • quickstart_tensorflow
      • advanced_tensorflow
    • Remove the obsolete simulation example (#1328)

      Removes the obsolete simulation example and renames quickstart_simulation to simulation_tensorflow so it fits with the naming of simulation_pytorch.

    • Update documentation (#1223, #1209, #1251, #1257, #1267, #1268, #1300, #1304, #1305, #1307)

      One substantial documentation update fixes multiple smaller rendering issues, makes titles more succinct to improve navigation, removes a deprecated library, updates documentation dependencies, includes the flwr.common module in the API reference, includes support for markdown-based documentation, migrates the changelog from .rst to .md, and fixes a number of smaller details!

    • Minor updates

      • Add round number to fit and evaluate log messages (#1266)
      • Add secure gRPC connection to the advanced_tensorflow code example (#847)
      • Update developer tooling (#1231, #1276, #1301, #1310)
      • Rename ProtoBuf messages to improve consistency (#1214, #1258, #1259)
  • v0.19.0(May 18, 2022)

    What's new:

    • Flower Baselines (preview): FedOpt, FedBN, FedAvgM (#919, #1127, #914)

      The first preview release of Flower Baselines has arrived! We're kickstarting Flower Baselines with implementations of FedOpt (FedYogi, FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how to use Flower Baselines. With this first preview release we're also inviting the community to contribute their own baselines.

    • C++ client SDK (preview) and code example (#1111)

      Preview support for Flower clients written in C++. The C++ preview includes a Flower client SDK and a quickstart code example that demonstrates a simple C++ client using the SDK.

    • Add experimental support for Python 3.10 and Python 3.11 (#1135)

      Python 3.10 is the latest stable release of Python and Python 3.11 is due to be released in October. This Flower release adds experimental support for both Python versions.

    • Aggregate custom metrics through user-provided functions (#1144)

      Custom metrics (e.g., accuracy) can now be aggregated without having to customize the strategy. Built-in strategies support two new arguments, fit_metrics_aggregation_fn and evaluate_metrics_aggregation_fn, that allow passing custom metric aggregation functions.
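
      A hedged sketch of a weighted-average aggregation function passed to a built-in strategy (the metric key is illustrative):

        from typing import Dict, List, Tuple, Union

        import flwr as fl

        Metrics = Dict[str, Union[bool, bytes, float, int, str]]  # flwr-style scalar values

        def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
            # Each entry is (num_examples, metrics_dict) reported by one client
            total = sum(num for num, _ in metrics)
            return {"accuracy": sum(num * m["accuracy"] for num, m in metrics) / total}

        strategy = fl.server.strategy.FedAvg(
            fit_metrics_aggregation_fn=weighted_average,
            evaluate_metrics_aggregation_fn=weighted_average,
        )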

    • User-configurable round timeout (#1162)

      A new configuration value allows the round timeout to be set for start_server and start_simulation. If the config dictionary contains a round_timeout key (with a float value in seconds), the server will wait at least round_timeout seconds before it closes the connection.
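
      An illustrative sketch of this release's dict-based configuration (in Flower 1.0 the same value moves into ServerConfig):

        import flwr as fl

        # round_timeout is a float in seconds; the server waits at least this
        # long for client results before closing the connection.
        fl.server.start_server(config={"num_rounds": 3, "round_timeout": 600.0})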

    • Enable both federated evaluation and centralized evaluation to be used at the same time in all built-in strategies (#1091)

      Built-in strategies can now perform both federated evaluation (i.e., client-side) and centralized evaluation (i.e., server-side) in the same round. Federated evaluation can be disabled by setting fraction_eval to 0.0.

    • Two new Jupyter Notebook tutorials (#1141)

      Two Jupyter Notebook tutorials (compatible with Google Colab) explain basic and intermediate Flower features:

      An Introduction to Federated Learning: Open in Colab

      Using Strategies in Federated Learning: Open in Colab

    • New FedAvgM strategy (Federated Averaging with Server Momentum) (#1076)

      The new FedAvgM strategy implements Federated Averaging with Server Momentum [Hsu et al., 2019].

    • New advanced PyTorch code example (#1007)

      A new code example (advanced_pytorch) demonstrates advanced Flower concepts with PyTorch.

    • New JAX code example (#906, #1143)

      A new code example (jax_from_centralized_to_federated) shows federated learning with JAX and Flower.

    • Minor updates

      • New option to keep Ray running if Ray was already initialized in start_simulation (#1177)
      • Add support for custom ClientManager as a start_simulation parameter (#1171)
      • New documentation for implementing strategies (#1097, #1175)
      • New mobile-friendly documentation theme (#1174)
      • Limit version range for (optional) ray dependency to include only compatible releases (>=1.9.2,<1.12.0) (#1205)

    Incompatible changes:

    • Remove deprecated support for Python 3.6 (#871)
    • Remove deprecated KerasClient (#857)
    • Remove deprecated no-op extra installs (#973)
    • Remove deprecated proto fields from FitRes and EvaluateRes (#869)
    • Remove deprecated QffedAvg strategy (replaced by QFedAvg) (#1107)
    • Remove deprecated DefaultStrategy strategy (#1142)
    • Remove deprecated support for eval_fn accuracy return value (#1142)
    • Remove deprecated support for passing initial parameters as NumPy ndarrays (#1142)
  • v0.18.0(Feb 28, 2022)

    What's new?

    • Improved Virtual Client Engine compatibility with Jupyter Notebook / Google Colab (#866, #872, #833, #1036)

      Simulations (using the Virtual Client Engine through start_simulation) now work more smoothly on Jupyter Notebooks (incl. Google Colab) after installing Flower with the simulation extra (pip install flwr[simulation]).

    • New Jupyter Notebook code example (#833)

      A new code example (quickstart_simulation) demonstrates Flower simulations using the Virtual Client Engine through Jupyter Notebook (incl. Google Colab).

    • Client properties (feature preview) (#795)

      Clients can implement a new method get_properties to enable server-side strategies to query client properties.
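
      A hedged sketch of the preview API (the property keys are illustrative, and the exact preview signature may differ):

        import flwr as fl

        class PropertiesClient(fl.client.NumPyClient):
            def get_properties(self, config):
                # Values a server-side strategy might query before sampling clients
                return {"battery_level": 0.8, "num_samples": 1200}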

    • Experimental Android support with TFLite (#865)

      Android support has finally arrived in main! Flower is both client-agnostic and framework-agnostic by design. One can integrate arbitrary client platforms, and with this release, using Flower on Android has become a lot easier.

      The example uses TFLite on the client side, along with a new FedAvgAndroid strategy. The Android client and FedAvgAndroid are still experimental, but they are a first step towards a fully-fledged Android SDK and a unified FedAvg implementation that integrates the new functionality from FedAvgAndroid.

    • Make gRPC keepalive time user-configurable and decrease default keepalive time (#1069)

      The default gRPC keepalive time has been reduced to increase the compatibility of Flower with more cloud environments (for example, Microsoft Azure). Users can configure the keepalive time to customize the gRPC stack based on specific requirements.

    • New differential privacy example using Opacus and PyTorch (#805)

      A new code example (opacus) demonstrates differentially-private federated learning with Opacus, PyTorch, and Flower.

    • New Hugging Face Transformers code example (#863)

      A new code example (quickstart_huggingface) demonstrates usage of Hugging Face Transformers with Flower.

    • New MLCube code example (#779, #1034, #1065, #1090)

      A new code example (quickstart_mlcube) demonstrates usage of MLCube with Flower.

    • SSL-enabled server and client (#842, #844, #845, #847, #993, #994)

      SSL enables secure encrypted connections between clients and servers. This release open-sources the Flower secure gRPC implementation to make encrypted communication channels accessible to all Flower users.

    • Updated FedAdam and FedYogi strategies (#885, #895)

      FedAdam and FedYogi now match the latest version of the Adaptive Federated Optimization paper.

    • Initialize start_simulation with a list of client IDs (#860)

      start_simulation can now be called with a list of client IDs (clients_ids, type: List[str]). Those IDs will be passed to the client_fn whenever a client needs to be initialized, which can make it easier to load data partitions that are not accessible through int identifiers.
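
      A hedged sketch of the described usage (client_fn is defined elsewhere; num_rounds is the pre-1.0 parameter):

        import flwr as fl

        # Explicit string IDs let client_fn map each client to a data partition
        # that is not addressable by an integer index.
        ids = [f"site-{i}" for i in range(10)]
        fl.simulation.start_simulation(client_fn=client_fn, clients_ids=ids, num_rounds=3)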

    • Minor updates

      • Update num_examples calculation in PyTorch code examples (#909)
      • Expose Flower version through flwr.__version__ (#952)
      • start_server in app.py now returns a History object containing metrics from training (#974)
      • Make max_workers (used by ThreadPoolExecutor) configurable (#978)
      • Increase sleep time after server start to three seconds in all code examples (#1086)
      • Added a new FAQ section to the documentation (#948)
      • And many more under-the-hood changes, library updates, documentation changes, and tooling improvements!

    Incompatible changes:

    • Removed flwr_example and flwr_experimental from release build (#869)

      The packages flwr_example and flwr_experimental have been deprecated since Flower 0.12.0 and they are no longer included in Flower release builds. The associated extras (baseline, examples-pytorch, examples-tensorflow, http-logger, ops) are now no-op and will be removed in an upcoming release.

  • v0.17.0(Sep 24, 2021)

    What's new?

    • Experimental virtual client engine (#781, #790, #791)

      One of Flower's goals is to enable research at scale. This release enables a first (experimental) peek at a major new feature, codenamed the virtual client engine. Virtual clients enable simulations that scale to a (very) large number of clients on a single machine or compute cluster. The easiest way to test the new functionality is to look at the two new code examples called quickstart_simulation and simulation_pytorch.

      The feature is still experimental, so there's no stability guarantee for the API. It's also not quite ready for prime time and comes with a few known caveats. However, those who are curious are encouraged to try it out and share their thoughts.

    • New built-in strategies (#828, #822)

      • FedYogi - Federated learning strategy using Yogi on server-side. Implementation based on https://arxiv.org/abs/2003.00295
      • FedAdam - Federated learning strategy using Adam on server-side. Implementation based on https://arxiv.org/abs/2003.00295
    • New PyTorch Lightning code example (#617)

    • New Variational Auto-Encoder code example (#752)

    • New scikit-learn code example (#748)

    • New experimental TensorBoard strategy (#789)

    • Minor updates

      • Improved advanced TensorFlow code example (#769)
      • Warning when min_available_clients is misconfigured (#830)
      • Improved gRPC server docs (#841)
      • Improved error message in NumPyClient (#851)
      • Improved PyTorch quickstart code example (#852)

    Incompatible changes:

    • Disabled final distributed evaluation (#800)

      Prior behaviour was to perform a final round of distributed evaluation on all connected clients, which is often not required (e.g., when using server-side evaluation). The prior behaviour can be enabled by passing force_final_distributed_eval=True to start_server.

    • Renamed q-FedAvg strategy (#802)

      The strategy named QffedAvg was renamed to QFedAvg to better reflect the notation given in the original paper (q-FFL is the optimization objective, q-FedAvg is the proposed solver). Note that the original (now deprecated) QffedAvg class is still available for compatibility reasons (it will be removed in a future release).

    • Deprecated and renamed code example simulation_pytorch to simulation_pytorch_legacy (#791)

      This example has been replaced by a new example. The new example is based on the experimental virtual client engine, which will become the new default way of doing most types of large-scale simulations in Flower. The existing example was kept for reference purposes, but it might be removed in the future.

  • v0.16.0(May 11, 2021)

    What's new?

    • New built-in strategies (#549)

      • (abstract) FedOpt
      • FedAdagrad
    • Custom metrics for server and strategies (#717)

      The Flower server is now fully task-agnostic: all remaining instances of task-specific metrics (such as accuracy) have been replaced by custom metrics dictionaries. Flower 0.15 introduced the capability to pass a dictionary containing custom metrics from client to server. As of this release, custom metrics replace task-specific metrics on the server.

      Custom metric dictionaries are now used in two user-facing APIs: they are returned from the Strategy methods aggregate_fit/aggregate_evaluate, and they enable evaluation functions passed to built-in strategies (via eval_fn) to return more than two evaluation metrics. Strategies can even return aggregated metrics dictionaries for the server to keep track of.

      Strategy implementations should migrate their aggregate_fit and aggregate_evaluate methods to the new return type (e.g., by simply returning an empty {}), and server-side evaluation functions should migrate from return loss, accuracy to return loss, {"accuracy": accuracy}.

      Flower 0.15-style return types are deprecated (but still supported); compatibility will be removed in a future release.
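
      A sketch of the evaluation-function migration described above (the values are stand-ins):

        def evaluate(weights):
            loss, accuracy = 0.42, 0.91          # stand-ins for real evaluation results
            # Before (Flower 0.15 style): return loss, accuracy
            return loss, {"accuracy": accuracy}  # After (Flower 0.16 style)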

    • Migration warnings for deprecated functionality (#690)

      Earlier versions of Flower often migrated to new APIs while maintaining compatibility with legacy APIs. This release introduces detailed warning messages when usage of deprecated APIs is detected. The new warning messages often provide details on how to migrate to more recent APIs, thus easing the transition from one release to another.

    • Improved docs and docstrings (#691, #692, #713)

    • MXNet example and documentation

    • FedBN implementation in example PyTorch: From Centralized To Federated (#696, #702, #705)

    Incompatible changes:

    • Serialization-agnostic server (#721)

      The Flower server is now fully serialization-agnostic. The prior usage of the class Weights (which represents parameters as deserialized NumPy ndarrays) was replaced by the class Parameters (e.g., in Strategy). Parameters objects are fully serialization-agnostic and represent parameters as byte arrays; the tensor_type attribute indicates how these byte arrays should be interpreted (e.g., for serialization/deserialization).

      Built-in strategies implement this approach by handling serialization and deserialization to/from Weights internally. Custom/third-party Strategy implementations should update to the slightly changed Strategy method definitions. Strategy authors can consult PR #721 to see how strategies can easily migrate to the new format.

    • Deprecated flwr.server.Server.evaluate; use flwr.server.Server.evaluate_round instead (#717)

  • v0.15.0(Mar 12, 2021)

    What's new?

    • Server-side parameter initialization (#658)

      Model parameters can now be initialized on the server-side. Server-side parameter initialization works via a new Strategy method called initialize_parameters.

      Built-in strategies support a new constructor argument called initial_parameters to set the initial parameters. Built-in strategies will provide these initial parameters to the server on startup and then delete them to free the memory afterward.

        # Create model
        model = tf.keras.applications.EfficientNetB0(
            input_shape=(32, 32, 3), weights=None, classes=10
        )
        model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
      
        # Create strategy and initialize parameters on the server-side
        strategy = fl.server.strategy.FedAvg(
            # ... (other constructor arguments)
            initial_parameters=model.get_weights(),
        )
      
        # Start Flower server with the strategy
        fl.server.start_server("[::]:8080", config={"num_rounds": 3}, strategy=strategy)
      

      If no initial parameters are provided to the strategy, the server will continue to use the current behavior (namely, it will ask one of the connected clients for its parameters and use these as the initial global parameters).

    Deprecations

    • Deprecate flwr.server.strategy.DefaultStrategy (migrate to flwr.server.strategy.FedAvg, which is equivalent)
  • v0.14.0(Feb 18, 2021)

    What's new?

    • Generalized Client.fit and Client.evaluate return values (#610, #572, #633)

      Clients can now return an additional dictionary mapping str keys to values of the following types: bool, bytes, float, int, str. This means one can return almost arbitrary values from fit/evaluate and make use of them on the server side!

      This improvement also allowed for more consistent return types between fit and evaluate: evaluate should now return a tuple (float, int, dict) representing the loss, number of examples, and a dictionary holding arbitrary problem-specific values like accuracy.

      In case you wondered: this feature is compatible with existing projects; the additional dictionary return value is optional. New code should, however, migrate to the new return types to be compatible with upcoming Flower releases (fit: List[np.ndarray], int, Dict[str, Scalar]; evaluate: float, int, Dict[str, Scalar]). See the example below for details.

      Code example: note the additional dictionary return values in both FlwrClient.fit and FlwrClient.evaluate:

      class FlwrClient(fl.client.NumPyClient):
          def fit(self, parameters, config):
              net.set_parameters(parameters)
              train_loss = train(net, trainloader)
              return net.get_weights(), len(trainloader), {"train_loss": train_loss}
      
          def evaluate(self, parameters, config):
              net.set_parameters(parameters)
              loss, accuracy, custom_metric = test(net, testloader)
              return loss, len(testloader), {"accuracy": accuracy, "custom_metric": custom_metric}
      
    • Generalized config argument in Client.fit and Client.evaluate (#595)

      The config argument used to be of type Dict[str, str], which means that dictionary values were expected to be strings. The new release generalizes this to enable values of the following types: bool, bytes, float, int, str.

      This means one can now pass almost arbitrary values to fit/evaluate using the config dictionary. Yay, no more str(epochs) on the server-side and int(config["epochs"]) on the client side!

      Code example: note that the config dictionary now contains non-str values in both Client.fit and Client.evaluate:

      class FlwrClient(fl.client.NumPyClient):
          def fit(self, parameters, config):
              net.set_parameters(parameters)
              epochs: int = config["epochs"]
              train_loss = train(net, trainloader, epochs)
              return net.get_weights(), len(trainloader), {"train_loss": train_loss}
      
          def evaluate(self, parameters, config):
              net.set_parameters(parameters)
              batch_size: int = config["batch_size"]
              loss, accuracy = test(net, testloader, batch_size)
              return loss, len(testloader), {"accuracy": accuracy}
      
  • v0.13.0(Jan 8, 2021)

    What's new?

    • New example: PyTorch From Centralized To Federated (#549)
    • Improved documentation
      • New documentation theme (#551)
      • New API reference (#554)
      • Updated examples documentation (#549)
      • Removed obsolete documentation (#548)

    Bugfix:

    • Server.fit no longer disconnects clients when finished; disconnecting the clients is now handled in flwr.server.start_server (#553, #540).
  • v0.12.0(Dec 7, 2020)

  • v0.11.0(Nov 30, 2020)

    Incompatible changes:

    • Renamed strategy methods (#486) to unify the naming of Flower's public APIs. Other public methods/functions (e.g., every method in Client, but also Strategy.evaluate) do not use the on_ prefix, which is why we're removing it from the four methods in Strategy. To migrate, rename the following Strategy methods accordingly:
      • on_configure_evaluate => configure_evaluate
      • on_aggregate_evaluate => aggregate_evaluate
      • on_configure_fit => configure_fit
      • on_aggregate_fit => aggregate_fit

    Important changes:

    • Deprecated DefaultStrategy (#479). To migrate, use FedAvg instead.
    • Simplified examples and baselines (#484).
    • Removed presently unused on_conclude_round from strategy interface (#483).
    • Set minimal Python version to 3.6.1 instead of 3.6.9 (#471).
    • Improved Strategy docstrings (#470).
  • v0.10.0(Nov 11, 2020)
