Fedlearn: a Python toolkit supporting cutting-edge algorithm research and development | Fedlearn algorithm toolkit for researchers

Overview

FedLearn-algo

Installation

Development Environment Checklist

Python 3 (3.6 or 3.7) is required. To configure the development environment and check that it is set up correctly, a checklist script is provided: environment_checklist.sh. From the FedLearn-algo root directory, run:

sh environment_checklist.sh

Recommended Python Packages

Package       License       Version  GitHub
datasets      MIT           1.8.0    https://github.com/huggingface/datasets
gmpy2         LGPL 3.0      2.0.8    https://github.com/BrianGladman/gmpy2
grpc          Apache 2.0    1.38.0   https://github.com/grpc/grpc
numpy         BSD 3         1.19.2   https://github.com/numpy/numpy
omegaconf     BSD 3         2.1.0    https://github.com/omry/omegaconf
oneflow       Apache 2.0    0.4.0    https://github.com/Oneflow-Inc/oneflow
orjson        Apache 2.0    3.5.2    https://github.com/ijl/orjson
pandas        BSD 3         1.2.4    https://github.com/pandas-dev/pandas
phe           LGPL 3.0      1.4.0    https://github.com/data61/python-paillier
sklearn       BSD 3         0.24.2   https://github.com/scikit-learn/scikit-learn
tensorflow    Apache 2.0    2.4.1    https://github.com/tensorflow/tensorflow
torch         BSD           1.9      https://github.com/pytorch/pytorch
tornado       Apache 2.0    6.1      https://github.com/tornadoweb/tornado
transformers  Apache 2.0    4.7.0    https://github.com/huggingface/transformers
protobuf      3-Clause BSD  3.12.2   https://github.com/protocolbuffers/protobuf

Device Deployment

The device deployment follows a centralized distributed topology, as shown in the figure below. The server terminal controls the training loop, and each of the N client terminals runs its algorithm computation independently. Non-deep-learning algorithms rely on CPU-based computation at each client; for deep learning algorithms, a GPU (e.g., an NVIDIA card) should be configured to guarantee training speed.

[Figure: centralized device deployment topology with one server and N clients]

Run an Example

An algorithm flow example is provided to demonstrate customized algorithm development (one server terminal with three client terminals). The server communicates with each client. The server and the three clients can sit on different machines, or all be started from command-line terminals on a single machine.

First, set the IP address, port, and token for each client (the -I, -P, and -T flags below). In the client terminals, run the following commands, respectively:

python demos/custom_alg_demo/custom_client.py -I 127.0.0.1 -P 8891 -T client_1
python demos/custom_alg_demo/custom_client.py -I 127.0.0.1 -P 8892 -T client_2
python demos/custom_alg_demo/custom_client.py -I 127.0.0.1 -P 8893 -T client_3

Second, in the server terminal, run the following command to start the server and complete a simulated training pipeline:

python demos/custom_alg_demo/custom_server.py

Architecture Design

FedLearn-algo is an open-source research framework for promoting the study of novel federated learning algorithms. FedLearn-algo proposes a distributed machine learning architecture enabling both vertical and horizontal federated learning (FL) development. This architecture supports flexible module configurations for each particular algorithm design, and it can be extended to build state-of-the-art algorithms and systems. FedLearn-algo also provides comprehensive examples, including FL-based kernel methods, random forest, and neural networks. Finally, the horizontal FL extension in FedLearn-algo is compatible with popular deep learning frameworks, e.g., PyTorch and OneFlow.

[Figure: FedLearn-algo framework architecture with one central server and multiple clients]

The figure above shows the proposed FL framework. It has one server and multiple clients that jointly complete multi-party modeling in the federated learning procedure. The server sits at the center of the architecture topology and coordinates the training pipeline. Clients run modeling computation independently on their local terminals without sharing data, which protects data privacy and security.
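
As a rough illustration of this control flow, the sketch below has one server-side loop driving a few training rounds while mock clients compute locally. The names here (MockClient, run_training) are hypothetical and only illustrate the coordination pattern; they are not FedLearn-algo's actual classes, and a real deployment exchanges these messages over gRPC between machines.

# Minimal, self-contained sketch of the server-coordinated training loop.
# MockClient and run_training are hypothetical names, not FedLearn-algo's API.
import random

class MockClient:
    """Stands in for a client terminal that trains on its own private data."""
    def __init__(self, name):
        self.name = name

    def local_update(self, global_param):
        # Local computation happens here; raw data never leaves the client.
        return global_param + random.uniform(-0.1, 0.1)

def run_training(clients, num_rounds=3):
    global_param = 0.0
    for r in range(num_rounds):
        # The server broadcasts the current model, collects the client results,
        # aggregates them, and then starts the next round.
        updates = [client.local_update(global_param) for client in clients]
        global_param = sum(updates) / len(updates)
        print(f"round {r}: aggregated parameter = {global_param:.4f}")
    return global_param

if __name__ == "__main__":
    run_training([MockClient(f"client_{i}") for i in range(1, 4)])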

Demonstration Algorithms

According to differences in data partitioning, existing FL algorithms can be broadly categorized into horizontal FL algorithms and vertical FL algorithms. Horizontal FL refers to the setting in which the samples on the involved machines share the same feature space while the machines have different sample ID spaces. Vertical FL means all machines share the same sample ID space while each machine has a unique feature space.
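
The difference is easy to see on a toy feature matrix. The split below is only an illustration of the two partition schemes, not code from this repository:

# Toy illustration of horizontal vs. vertical data partitioning (numpy only).
import numpy as np

X = np.arange(24).reshape(6, 4)                      # 6 samples x 4 features
sample_ids = np.array([101, 102, 103, 104, 105, 106])

# Horizontal FL: parties share the feature space but hold different samples.
party_a_horizontal = (X[:3], sample_ids[:3])         # samples 101-103, all features
party_b_horizontal = (X[3:], sample_ids[3:])         # samples 104-106, all features

# Vertical FL: parties share the sample IDs but hold different features.
party_a_vertical = (X[:, :2], sample_ids)            # all samples, features 0-1
party_b_vertical = (X[:, 2:], sample_ids)            # all samples, features 2-3

print("horizontal shapes:", party_a_horizontal[0].shape, party_b_horizontal[0].shape)
print("vertical shapes:  ", party_a_vertical[0].shape, party_b_vertical[0].shape)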

Vertical Federated Learning

  • Federated Kernel Learning. The kernel method is a nonlinear machine learning algorithm for handling linearly non-separable data (see the sketch after this list).

  • Federated Random Forest. Random forest is an ensemble machine learning method for classification and regression that builds a multitude of decision trees during model training.
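
To recall why kernels help with linearly non-separable data, the snippet below fits a small kernel ridge model on XOR-style points with an RBF kernel. It is a generic, single-machine illustration of the kernel trick, not the federated kernel implementation shipped in this repository:

# Generic RBF-kernel illustration (single machine, not the federated version).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2), computed for all pairs.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

# XOR-style data: no linear boundary separates the two classes in raw features.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])

K = rbf_kernel(X, X, gamma=2.0)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)   # kernel ridge solution
pred = np.sign(rbf_kernel(X, X, gamma=2.0) @ alpha)
print("predictions:", pred, "labels:", y)               # the kernel model fits XOR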

Horizontal Federated Learning

  • Federated HFL. An extension framework in FedLearn-algo designed to provide flexible and easy-to-use algorithms for horizontal federated scenarios (a FedAvg-style aggregation sketch follows below).
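
Horizontal FL training commonly aggregates client updates with a sample-size-weighted average (FedAvg). The snippet below is a generic sketch of that aggregation step only; it is not FedLearn-algo's own aggregator, whose pull requests also mention an asynchronous variant (aSyncFedAvg):

# Generic FedAvg-style aggregation sketch (illustration only).
import numpy as np

def fedavg(client_params, client_sizes):
    """Average per-client parameter vectors, weighted by local sample counts."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    stacked = np.stack(client_params)            # (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three clients holding different amounts of local data.
updates = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
sizes = [100, 300, 600]
print("aggregated parameters:", fedavg(updates, sizes))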

Documentation

License

The distribution of FedLearn-algo in this repository is under the Apache 2.0 license.

Citation

Please cite FedLearn-algo in your publications if it contributes to your research or development:

@article{liu2021fedlearn,
  title={Fedlearn-Algo: A flexible open-source privacy-preserving machine learning platform},
  author={Liu, Bo and Tan, Chaowei and Wang, Jiazhou and Zeng, Tao and Shan, Huasong and Yao, Houpu and Huang, Heng and Dai, Peng and Bo, Liefeng and Chen, Yanqing},
  journal={arXiv preprint arXiv:2107.04129},
  url={https://arxiv.org/abs/2107.04129},
  year={2021}
}

Contact us

Please contact us at [email protected] if you have any questions.

Comments
  • Question about the Prediction results of the random_forest demo

    A humble question: the random_forest demo in the framework handles a binary classification problem. 1. Why is the Prediction result an array with 9 elements? My guess is that it predicts the probability of diabetes for the 9 uids in the inference dataset; if so, how can I check the prediction accuracy? 2. Why is the Prediction result different on every run?

    opened by ZhangQiusi 4
  • How the complete tree is formed in the random forest demo_local.py

    0: {0: {'processed': True}, 1: {'processed': True}, 2: {'processed': True}, 3: {'processed': True, 'is_leaf': True}, 4: {'processed': True}, 5: {'processed': True, 'is_leaf': True}, 6: {'processed': True}, 9: {'processed': True, 'is_leaf': True}, 10: {'processed': True}, 13: {'processed': True, 'is_leaf': True}, 14: {'processed': True}, 21: {'processed': True, 'is_leaf': True}, 22: {'processed': True, 'is_leaf': True}, 29: {'processed': True, 'is_leaf': True}, 30: {'processed': True, 'is_leaf': True}}}

    Given this information, how can we analyze the tree? Which node is the root and which nodes are leaves?

    opened by monuheeya 3
  • Incompatible package version in `environment_checklist.sh`

    Problem: The intel-numpy package and the pandas package installed by running environment_checklist.sh are incompatible with each other.

    Details: The installed intel-numpy version is 1.15.1, while the installed pandas package is incompatible with numpy < 1.15.4.

    Reproduce the error:

    1. Under the environment of Python 3.6.13
    2. Go to the root repository of fedlearn-algo
    3. Install the python packages by running ./environment_checklist.sh
    4. See this error when running command python demos/random_forest/client.py -I 0 -C demos/random_forest/config.py
    opened by flyingcat047 2
  • Failure to set up the local environment by running environment_checklist.sh

    Below is the error message that I got when I tried to set up the local environment by running environment_checklist.sh. Some of the dependencies failed to install.

    (py36) * xuebin.wang$ sh environment_checklist.sh 
    
    Run the checklist...
    
    environment_checklist.sh: line 6: yum: command not found
    
    1. check development env...
    
    environment_checklist.sh: line 11: yum: command not found
    
    environment_checklist.sh: line 12: yum: command not found
    
    Development env checking finished.
    
    2. check python 3.6 env...
    
    environment_checklist.sh: line 20: yum: command not found
    
    Python 3.6 checking finished.
    
    3. check paillier packages...
    
    environment_checklist.sh: line 28: yum: command not found
    
    Collecting gmpy2
    
      Using cached gmpy2-2.0.8.zip (280 kB)
    
    Collecting phe
    
      Using cached phe-1.4.0.tar.gz (35 kB)
    
    Building wheels for collected packages: gmpy2, phe
    
      Building wheel for gmpy2 (setup.py) ... error
    
      ERROR: Command errored out with exit status 1:
    
       command: /Users/xuebin.wang/opt/anaconda3/envs/py36/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/setup.py'"'"'; __file__='"'"'/private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-wheel-qxmh5v9h
    
           cwd: /private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/
    
      Complete output (14 lines):
    
      running bdist_wheel
    
      running build
    
      running build_ext
    
      building 'gmpy2' extension
    
      creating build
    
      creating build/temp.macosx-10.9-x86_64-3.6
    
      creating build/temp.macosx-10.9-x86_64-3.6/src
    
      gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/xuebin.wang/opt/anaconda3/envs/py36/include -arch x86_64 -I/Users/xuebin.wang/opt/anaconda3/envs/py36/include -arch x86_64 -DWITHMPFR -DWITHMPC -I/Users/xuebin.wang/opt/anaconda3/envs/py36/include/python3.6m -c src/gmpy2.c -o build/temp.macosx-10.9-x86_64-3.6/src/gmpy2.o
    
      In file included from src/gmpy2.c:426:
    
      src/gmpy.h:106:12: fatal error: 'gmp.h' file not found
    
      #  include "gmp.h"
    
                 ^~~~~~~
    
      1 error generated.
    
      error: command 'gcc' failed with exit status 1
    
      ----------------------------------------
    
      ERROR: Failed building wheel for gmpy2
    
      Running setup.py clean for gmpy2
    
      Building wheel for phe (setup.py) ... done
    
      Created wheel for phe: filename=phe-1.4.0-py2.py3-none-any.whl size=37362 sha256=1b08747fb6775a103f53ac225fefc0e13206acabbdeeff65bd15bec56a809975
    
      Stored in directory: /Users/xuebin.wang/Library/Caches/pip/wheels/61/2c/64/036a5dd340f2608a6d3c7cb8e88333a841d7ad3457ca9fd7f9
    
    Successfully built phe
    
    Failed to build gmpy2
    
    Installing collected packages: phe, gmpy2
    
        Running setup.py install for gmpy2 ... error
    
        ERROR: Command errored out with exit status 1:
    
         command: /Users/xuebin.wang/opt/anaconda3/envs/py36/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/setup.py'"'"'; __file__='"'"'/private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-record-o053dhrt/install-record.txt --single-version-externally-managed --compile --install-headers /Users/xuebin.wang/opt/anaconda3/envs/py36/include/python3.6m/gmpy2
    
             cwd: /private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/
    
        Complete output (14 lines):
    
        running install
    
        running build
    
        running build_ext
    
        building 'gmpy2' extension
    
        creating build
    
        creating build/temp.macosx-10.9-x86_64-3.6
    
        creating build/temp.macosx-10.9-x86_64-3.6/src
    
        gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/xuebin.wang/opt/anaconda3/envs/py36/include -arch x86_64 -I/Users/xuebin.wang/opt/anaconda3/envs/py36/include -arch x86_64 -DWITHMPFR -DWITHMPC -I/Users/xuebin.wang/opt/anaconda3/envs/py36/include/python3.6m -c src/gmpy2.c -o build/temp.macosx-10.9-x86_64-3.6/src/gmpy2.o
    
        In file included from src/gmpy2.c:426:
    
        src/gmpy.h:106:12: fatal error: 'gmp.h' file not found
    
        ```#  include "gmp.h"```
    
                   ^~~~~~~
    
        1 error generated.
    
        error: command 'gcc' failed with exit status 1
    
        ----------------------------------------
    
    ERROR: Command errored out with exit status 1: /Users/xuebin.wang/opt/anaconda3/envs/py36/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/setup.py'"'"'; __file__='"'"'/private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-install-tus8ysrk/gmpy2_804b98d559b94d669dd65ed828081209/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/v6/bhhlhsqs21x3rlqfctn9tbr80000gp/T/pip-record-o053dhrt/install-record.txt --single-version-externally-managed --compile --install-headers /Users/xuebin.wang/opt/anaconda3/envs/py36/include/python3.6m/gmpy2 Check the logs for full command output.
    
    Paillier packages checking finished.
    
    4. check numpy/scipy related...
    
    Collecting intel-numpy
    
      Downloading intel_numpy-1.15.1-cp36-cp36m-macosx_10_12_intel.macosx_10_12_x86_64.whl (6.0 MB)
    
         |████████████████████████████████| 6.0 MB 3.3 MB/s 
    
    Collecting intel-scipy
    
      Downloading intel_scipy-1.1.0-cp36-cp36m-macosx_10_12_intel.macosx_10_12_x86_64.whl (28.2 MB)
    
         |████████████████████████████████| 28.2 MB 18.4 MB/s 
    
    Collecting icc-rt
    
      Downloading icc_rt-2019.0-py2.py3-none-macosx_10_12_intel.macosx_10_12_x86_64.whl (9.5 MB)
    
         |████████████████████████████████| 9.5 MB 34.1 MB/s 
    
    Collecting mkl-fft
    
      Downloading mkl_fft-1.0.6-cp36-cp36m-macosx_10_12_intel.macosx_10_12_x86_64.whl (232 kB)
    
         |████████████████████████████████| 232 kB 29.0 MB/s 
    
    Collecting mkl
    
      Downloading mkl-2019.0-py2.py3-none-macosx_10_12_intel.macosx_10_12_x86_64.whl (193.8 MB)
    
         |████████████████████████████████| 193.8 MB 28.7 MB/s 
    
    Collecting tbb4py
    
      Downloading tbb4py-2019.0-cp36-cp36m-macosx_10_12_intel.macosx_10_12_x86_64.whl (47 kB)
    
         |████████████████████████████████| 47 kB 13.7 MB/s 
    
    Collecting mkl-random
    
      Downloading mkl_random-1.0.1.1-cp36-cp36m-macosx_10_12_intel.macosx_10_12_x86_64.whl (393 kB)
    
         |████████████████████████████████| 393 kB 43.2 MB/s 
    
    Collecting intel-openmp
    
      Downloading intel_openmp-2019.0-py2.py3-none-macosx_10_12_intel.macosx_10_12_x86_64.whl (1.1 MB)
    
         |████████████████████████████████| 1.1 MB 22.5 MB/s 
    
    Collecting tbb==2019.*
    
      Downloading tbb-2019.0-py2.py3-none-macosx_10_12_intel.macosx_10_12_x86_64.whl (565 kB)
    
         |████████████████████████████████| 565 kB 21.6 MB/s 
    
    Installing collected packages: tbb, intel-openmp, tbb4py, mkl-random, mkl-fft, mkl, icc-rt, intel-numpy, intel-scipy
    
    Successfully installed icc-rt-2019.0 intel-numpy-1.15.1 intel-openmp-2019.0 intel-scipy-1.1.0 mkl-2019.0 mkl-fft-1.0.6 mkl-random-1.0.1.1 tbb-2019.0 tbb4py-2019.0
    
    Collecting grpcio
    
      Downloading grpcio-1.39.0-cp36-cp36m-macosx_10_10_x86_64.whl (3.9 MB)
    
         |████████████████████████████████| 3.9 MB 4.0 MB/s 
    
    Collecting grpcio-tools
    
      Downloading grpcio_tools-1.39.0-cp36-cp36m-macosx_10_10_x86_64.whl (2.0 MB)
    
         |████████████████████████████████| 2.0 MB 19.1 MB/s 
    
    Collecting six>=1.5.2
    
      Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
    
    Requirement already satisfied: setuptools in /Users/xuebin.wang/opt/anaconda3/envs/py36/lib/python3.6/site-packages (from grpcio-tools) (52.0.0.post20210125)
    
    Collecting protobuf<4.0dev,>=3.5.0.post1
    
      Downloading protobuf-3.17.3-cp36-cp36m-macosx_10_9_x86_64.whl (1.0 MB)
    
         |████████████████████████████████| 1.0 MB 25.4 MB/s 
    
    Installing collected packages: six, protobuf, grpcio, grpcio-tools
    
    Successfully installed grpcio-1.39.0 grpcio-tools-1.39.0 protobuf-3.17.3 six-1.16.0
    
    opened by flyingcat047 2
  • Need fix for core.encrypt.RandomizedIterativeAffine module for combinatory operations

    I found a problem with the core.encrypt.RandomizedIterativeAffine module. When scalar products and additions are combined on the encrypted objects, the result can be wrong. For example, with the inputs below,

    p1= 4592.146866155027
    p2= 532.2228109095383
    k1= 872.0311515320057
    k2= -1033.819189454349
    

    the decrypted result of ([p1]*k1)*k2+[p2] is -532.2228107452393. I also tried other inputs; the decryption of ([p1]*k1)*k2+[p2] generally seems to give the value of -p2.
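
    For reference, the expected plaintext value of the same expression can be computed directly in plain Python, independent of the encryption module (a check added here for context, using the inputs above):

    # Plain-Python check of the unencrypted value of (p1*k1)*k2 + p2.
    p1 = 4592.146866155027
    p2 = 532.2228109095383
    k1 = 872.0311515320057
    k2 = -1033.819189454349
    expected = (p1 * k1) * k2 + p2
    print("expected plaintext result:", expected)
    print("reported decrypted result:", -532.2228107452393)  # approximately -p2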

    opened by flyingcat047 1
  • Deprecated RIAC

    For updates and changes

    Changes:

    1. Add readme in he folder
    2. Add announcement for RIAC: security flaws have been found in the RIAC scheme, so we have decided to disable it until a secure version is released.
    opened by guabao 0
  • Clean up code and fix secure inference

    Changes:

    1. Clean up develop code in secure inference.
    2. Reposition secure inference data.
    3. Fix running issue in secure inference demo.
    4. Add corresponding instructions in Readme.
    opened by cyqclark 0
  • Add secure inference and async supports

    Changes:

    1. Add a Secure Inference demo for the sphereface model
    2. Add both synchronous and asynchronous implementations of Secure Inference
    3. Add sphereface-related data
    4. Change the communication core to support asynchronous communication
    opened by cyqclark 0
  • Vertically federated linear regression algorithm is ready

    Changes:

    1. Added vertically federated linear regression algorithm based on QR in the demos/linear_regression folder
    2. The scripts for local and remote demos are also under the demos/linear_regression folder
    opened by flyingcat047 0
  • New client

    For issue fixes

    Fixes ISSUE #xxx

    For updates and changes

    Changes:

    1. Update the kernel regression code and FDNN code to fit the new client and coordinator.
    2. Move the FDNN code to the tensorflow sub-folder; a PyTorch version is scheduled to be pushed to the code base.
    3. Start a new feature engineering demo.
    opened by guabao 0
  • aSync HFL

    For issue fixes

    Fixes ISSUE #xxx

    For updates and changes

    Changes:

    1. support Fed-aSync framework
    2. support aSyncFedAvg aggregation Algo
    3. support fed-text-classification-model
    4. support 20newsgroups dataset preprocessing
    opened by monadyn 0
Releases (v0.1.0-alpha)