Federated learning code used for the papers "Evaluation of Federated Learning Aggregation Algorithms" and "A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison"

Federated Distance (FedDist)

This is the code accompanying the PerCom 2021 paper "A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison" and the code of the federated learning experiments conducted by Sannara Ek during his master's thesis.

Overview


These experiments compare three existing federated learning algorithms along with a new one, FedDist. The FedDist algorithm incorporates a pairwise distance scheme for identifying outlier-like neurons/filters. These outlier-like neurons/filters may in fact be features learned from sparse data, so they are added directly to the server model for the next round of training. A minimal sketch of this idea is given below.
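As a rough illustration of the pairwise distance scheme, the sketch below flags, for a single dense layer, client neurons that diverge from their server counterparts and appends them to the server layer. All names and the threshold are ours for illustration; see "FedDist.ipynb" for the actual implementation.

    # Illustrative sketch only -- not the repository's implementation.
    import numpy as np

    def find_outlier_neurons(server_w, client_w, threshold):
        # server_w, client_w: (input_dim, n_neurons) weight matrices of one layer
        # Euclidean distance between each pair of matching neuron columns
        dists = np.linalg.norm(server_w - client_w, axis=0)
        return np.where(dists > threshold)[0]

    def grow_server_layer(server_w, client_w, outlier_idx):
        # Append the divergent client neurons as new columns, so that features
        # learned from sparse data survive aggregation instead of being averaged away
        return np.concatenate([server_w, client_w[:, outlier_idx]], axis=1)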

Core Dependencies (tested and stable)


  • TensorFlow 2.2.2
  • PyTorch 1.1
  • scikit-learn 0.22.1
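To confirm your environment matches the tested versions, a quick check such as the following can help:

    # Sanity check of the core dependency versions
    import tensorflow, torch, sklearn
    print(tensorflow.__version__, torch.__version__, sklearn.__version__)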

All the working scripts are presented in Jupyter notebook format.

A number of third-party packages are necessary for all of the scripts to run. We recommend running the command "pip3 install -r requirements.txt" in your virtual environment and working directory to replicate the environment used in this experiment.

Note: Visual Studio is required to resolve dependency problems when working on a Windows machine.

Data Preparation


"DATA_UCI.ipynb" and "DATA_REALWORLD_SPLITSUB.ipynb" are respectively used to prepare the UCI and REALWORLD dataset for training. Simply run all cells in a Jupyter notebook. The formatted dataset will be placed in a new directory "datasetStand"

FL script implementations


The FedAvg and FedPer implementations are found in the file "FedAvg_FedPer.ipynb". You must specify which algorithm you wish to run in the third cell of the notebook by changing the "algorithm" variable to either "FEDAVG" or "FEDPER".

FedDist is found in the "FedDist.ipynb" file.

FedMA is found in the "FedMA.ipynb" file.

For all the federated algorithms, the third cell offers a variety of options and testing environments to choose from. We recommend leaving the configuration at its defaults, other than changing the "algorithm" variable and specifying the GPU/CPU to use, as sketched below. Simply run all cells to start training.
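As an illustration, the configuration cell might look like the following. The "algorithm" variable comes from the notebooks; pinning a device via CUDA_VISIBLE_DEVICES is a common TensorFlow convention and is shown here only as an assumption about how you might select the GPU/CPU:

    # Illustrative configuration values -- set these in the notebook's third cell
    algorithm = "FEDAVG"  # or "FEDPER" (in FedAvg_FedPer.ipynb)

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # pick a GPU; "-1" forces CPU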

If you prefer to run the code as a Python script, convert the files to .py format via the Jupyter notebook interface (File -> Download as -> Python (.py)).

Alternatively, running the command below from a console achieves the same result:

jupyter nbconvert --to script '[ScriptName].ipynb'

Simply specify the desired parameters in the third cell beforehand.

Results Interpretability


Each experiment generates a "savedModels" folder. This folder contains subfolders named after the chosen configuration and model architecture of the experiment. Each model architecture folder in turn contains a subfolder named after the dataset used for the experiment. E.g., a directory should appear like:

./savedModels/FED_5C_10LE_50CR_400D_100D_BALANCED/UCI

Now within this folder:

The final server model is saved in .h5 format. The recorded training statistics for each communication round, such as the accuracy and loss of the client models and the server model, are stored in the "trainingStats" folder. Results for the Global accuracy and the details of the server model can be found in the generated "Server-Measure.csv" file. Results for the Personalization accuracy can be found in the "indivualClients Measure.csv" file, and finally the Generalization accuracy can be found in the "AllClientsMeasure.csv" file.
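As a sketch, assuming a finished run under the example directory above (the exact .h5 filename depends on the run, so we glob for it), the outputs could be inspected like this:

    # Inspect the outputs of a finished run (paths follow the example above)
    import glob
    import pandas as pd
    import tensorflow as tf

    run_dir = "./savedModels/FED_5C_10LE_50CR_400D_100D_BALANCED/UCI"
    server_model = tf.keras.models.load_model(glob.glob(run_dir + "/*.h5")[0])
    server_model.summary()
    print(pd.read_csv(run_dir + "/Server-Measure.csv").head())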

Sample script sequence:


An example execution would be to first download and format the datasets (UCI and REALWORLD), then execute one of the FL algorithms (this requires several days on CPU), as in the sequence below.

1. DATA_UCI.ipynb
2. DATA_REALWORLD_SPLITSUB.ipynb
3. FedAvg_FedPer.ipynb / FedDist.ipynb / FedMA.ipynb
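If you want to run this sequence unattended, one option (an assumption on our part, using nbconvert's --execute flag rather than anything the repository provides) is to execute each notebook in place from Python:

    # Run the full pipeline headlessly; requires jupyter/nbconvert to be installed
    import subprocess

    for nb in ["DATA_UCI.ipynb", "DATA_REALWORLD_SPLITSUB.ipynb", "FedAvg_FedPer.ipynb"]:
        subprocess.run(
            ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", nb],
            check=True,
        )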

Citing this work:


@INPROCEEDINGS{Lala2103:Federated,
AUTHOR="Sannara Ek and François Portet and Philippe Lalanda and German Vega",
TITLE="A Federated Learning Aggregation Algorithm for Pervasive Computing:
Evaluation and Comparison",
BOOKTITLE="2021 IEEE International Conference on Pervasive Computing and
Communications (PerCom) (PerCom 2021)",
ADDRESS="Kassel, Germany",
DAYS=21,
MONTH=mar,
YEAR=2021,
KEYWORDS="Federated Learning; Edge Computing; Human activity recognition"
}

Contact:


Please contact the authors at [firstname].[lastname]@univ-grenoble-alpes.fr if you have issues with the code.

To contact Sannara Ek, please use [firstname].[lastname]@gmail.com.
