
Federated Learning DAG Experiments

This repository contains software artifacts to reproduce the experiments presented in the Middleware '21 paper "Implicit Model Specialization through DAG-based Decentralized Federated Learning".

General Usage

The code still uses TensorFlow 1, so Python <= 3.7 is required.

Depending on your setup, you can obtain such an older Python version with a version manager such as pyenv or with a Docker container:

cd federated-learning-dag
docker run -d --name federated-learning-dag \
  -v $PWD:/workspace \
  --workdir /workspace \
  --init --shm-size 8g \
  mcr.microsoft.com/vscode/devcontainers/python:3.7-bullseye \
    tail -f /dev/null
docker exec -it federated-learning-dag bash
# Run pipenv commands in this shell

# Clean up
docker rm -f federated-learning-dag 
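
Alternatively, a minimal pyenv sketch (the exact 3.7 patch release below is only an example; any Python 3.7.x should work):

pyenv install 3.7.17
pyenv local 3.7.17   # pin the interpreter for this directory
python --version     # should now report Python 3.7.x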

Then, use pipenv to set up your environment. VS Code users can use the provided devcontainer template as a base environment. Run pipenv install to download the dependencies, then run the code within a pipenv shell.
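
A typical pipenv workflow looks like this (standard pipenv commands, no project-specific options assumed):

pipenv install   # create a virtualenv and install the dependencies from the Pipfile
pipenv shell     # enter the environment
# run the commands from the sections below inside this shell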

There are two execution variants: a default, single-threaded one, and an extended version that uses the 'ray' library for parallelism.

Basic usage: python -m tangle.lab --help (or python -m tangle.ray --help).

By default, all experiments_figure_[*].py use ray for parallelism. This requires a large amount of main memory and, when running within Docker, a larger shared-memory segment. VS Code devcontainer users have to add "--shm-size", "8gb" (depending on the available memory) to the runArgs in .devcontainer/devcontainer.json, as sketched below.
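
A sketch of the relevant fragment of .devcontainer/devcontainer.json (all other fields omitted; "8gb" is only a starting point, adjust to your machine):

{
  "runArgs": ["--shm-size", "8gb"]
}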

To view a DAG (sometimes called a tangle) in a web browser, run python -m http.server in the repository root and open http://localhost:8000/viewer/. Enter the name of your experiment run and adjust the round slider to display the DAG at different rounds.
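
Concretely (python -m http.server serves on port 8000 by default; pass a different port number if 8000 is taken):

python -m http.server
# then open http://localhost:8000/viewer/ in a browser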

Obtaining the datasets

The contents of the ./data directory can be obtained from https://data.osmhpi.de/ipfs/QmQMe1Bd8X7tqQHWqcuS17AQZUqcfRQmNRgrenJD2o8xsS/.
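
One possible way to mirror the files into ./data, assuming the IPFS gateway exposes a plain directory listing (adjust to your setup):

wget --recursive --no-parent --no-host-directories --cut-dirs=2 \
  --directory-prefix=data \
  https://data.osmhpi.de/ipfs/QmQMe1Bd8X7tqQHWqcuS17AQZUqcfRQmNRgrenJD2o8xsS/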

Reproduction of the evaluation in the paper

The experiments in the paper can be reproduced by running the Python scripts in the root folder of this repository. They are organized by the figures in which the respective evaluation is presented and are named experiments_figure_[*].py.

The results of the federated averaging runs presented as the baseline in Figure 9 can be reproduced by running run_fed_avg_[fmnist,poets,cifar].py. The results presented in Table 2 are generated by the scripts for DAG-IS of Figure 9 as well.
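
For example, to reproduce the federated averaging baseline on FMNIST (script name taken from the pattern above; run inside the pipenv environment):

pipenv run python run_fed_avg_fmnist.py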
