Federated learning code used for the papers "Evaluation of Federated Learning Aggregation Algorithms" and "A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison".

Federated Distance (FedDist)

This is the code accompanying the PerCom 2021 paper "A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison" and the federated learning experiments carried out by Sannara Ek during his master's thesis.

Overview


These experiments compare three existing federated learning algorithms with a new one, FedDist. The FedDist algorithm incorporates a pairwise distance scheme for identifying outlier-like neurons/filters. These outlier-like neurons/filters may in fact be features learned from sparse data, so they are added directly to the server model for the next round of training.
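
To make the idea concrete, here is a minimal, illustrative sketch of such a pairwise-distance test on the weight vectors of one dense layer. This is not the authors' implementation: the function name, the Euclidean metric on raw weight vectors, and the fixed threshold are simplifying assumptions; see the paper for the actual scheme.

import numpy as np

def find_outlier_neurons(server_weights, client_weights_list, threshold):
    # Weight matrices have shape (n_inputs, n_neurons); each column is the
    # incoming-weight vector of one neuron.
    outliers = []
    for client_weights in client_weights_list:      # one matrix per client
        for neuron in client_weights.T:             # iterate client neurons
            dists = np.linalg.norm(server_weights.T - neuron, axis=1)
            if dists.min() > threshold:             # far from every server neuron
                outliers.append(neuron)
    return outliers

# Outlier neurons would then be appended to the server layer for the next
# communication round, e.g.:
# server_weights = np.column_stack([server_weights] + outliers)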

Core Dependencies (tested and stable)


  • TensorFlow 2.2.2
  • PyTorch 1.1
  • scikit-learn 0.22.1

All the working scripts are provided in Jupyter notebook format.

A number of third-party packages are required for all of the scripts to run. We recommend running "pip3 install -r requirements.txt" in your virtual environment and working directory to replicate the environment used in these experiments.
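
For example, on a Unix-like system (the environment name fl-env is arbitrary):

python3 -m venv fl-env
source fl-env/bin/activate
pip3 install -r requirements.txt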

Note: Visual Studio is required to resolve dependency problems when working on a Windows machine.

Data Preparation


"DATA_UCI.ipynb" and "DATA_REALWORLD_SPLITSUB.ipynb" are respectively used to prepare the UCI and REALWORLD dataset for training. Simply run all cells in a Jupyter notebook. The formatted dataset will be placed in a new directory "datasetStand"

FL script implementations


The FedAvg and FedPer implementations are found in the file "FedAvg_FedPer.ipynb". You must specify which algorithm you wish to run in the third cell of the notebook by changing the "algorithm" variable to either "FEDAVG" or "FEDPER".

FedDist is found in the "FedDist.ipynb" file.

FedMA is found in the "FedMA.ipynb" file.

For all the federated algorithms, the third cell offers a variety of options and testing environments to choose from. We recommend leaving the configuration at its defaults, other than changing the "algorithm" variable and specifying the GPU/CPU to use, as sketched below. Simply run all cells to start training.
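
As a minimal sketch of what the third (configuration) cell might contain; only the "algorithm" variable is named in this README, and using CUDA_VISIBLE_DEVICES to pick the device is an assumption:

import os

algorithm = "FEDAVG"  # or "FEDPER" (FedAvg_FedPer.ipynb only)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # GPU index to use; "-1" forces CPU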

If you prefer to run the experiments as Python scripts, convert the notebooks to .py format via the Jupyter interface (File -> Download as -> Python (.py)).

Alternatively, running the command below from a console achieves the same result:

jupyter nbconvert --to script '[ScriptName].ipynb'

Simply specify the desired parameters in the third cell beforehand.
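
For example, to convert and run the FedDist notebook from a console:

jupyter nbconvert --to script 'FedDist.ipynb'
python3 FedDist.py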

Results Interpretability


Each experiment generates a "savedModels" folder. Within this folder are subfolders named after the chosen configuration and model architecture of the experiment. Each model architecture folder in turn contains a subfolder named after the dataset used for the experiment. E.g., a directory should appear like:

./savedModels/FED_5C_10LE_50CR_400D_100D_BALANCED/UCI

Now within this folder:

The final server model is saved in .h5 format. The training statistics recorded for each communication round, such as the accuracy and loss of the client models and the server model, are stored in the "trainingStats" folder. Results for the global accuracy and the details of the server model can be found in the generated "Server-Measure.csv" file. Results for the personalization accuracy can be found in the "indivualClients Measure.csv" file, and the generalization accuracy can be found in the "AllClientsMeasure.csv" file.
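
For a quick look at the results, the CSV files can be loaded with, e.g., pandas; the exact column names are not documented here, so inspect the header first:

import pandas as pd

results_dir = "./savedModels/FED_5C_10LE_50CR_400D_100D_BALANCED/UCI"
server = pd.read_csv(results_dir + "/Server-Measure.csv")  # global accuracy
print(server.columns)  # see which metrics were recorded
print(server.head())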

Sample script sequence:


An example execution would be to first download and format the datasets (UCI and REALWORLD) and then execute one of the FL algorithms (which requires several days on CPU):

1. DATA_UCI.ipynb
2. DATA_REALWORLD_SPLITSUB.ipynb
3. FedAvg_FedPer.ipynb / FedDist.ipynb / FedMA.ipynb
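
To run the whole sequence headlessly, the notebooks can also be executed in place with nbconvert, which writes an executed copy of each notebook (assuming the configuration cells have been set beforehand):

jupyter nbconvert --to notebook --execute DATA_UCI.ipynb
jupyter nbconvert --to notebook --execute DATA_REALWORLD_SPLITSUB.ipynb
jupyter nbconvert --to notebook --execute FedDist.ipynb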

Citing this work:


@INPROCEEDINGS{Lala2103:Federated,
  AUTHOR="Sannara Ek and François Portet and Philippe Lalanda and German Vega",
  TITLE="A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison",
  BOOKTITLE="2021 IEEE International Conference on Pervasive Computing and Communications (PerCom) (PerCom 2021)",
  ADDRESS="Kassel, Germany",
  DAYS=21,
  MONTH=mar,
  YEAR=2021,
  KEYWORDS="Federated Learning; Edge Computing; Human activity recognition"
}

Contact:


Please contact the authors at [firstname].[lastname]@univ-grenoble-alpes.fr if you have issues with the code.

To contact Sannara Ek, please use [firstname].[lastname]@gmail.com.
