Segment axon and myelin from microscopy data using deep learning

Overview


Segment axon and myelin from microscopy data using deep learning. AxonDeepSeg is written in Python and uses the TensorFlow framework: a convolutional neural network classifies each pixel as either axon, myelin, or background.

For more information, see the documentation website.
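
For example, assuming the documented 8-bit mask convention (background = 0, myelin = 127, axon = 255) and a hypothetical filename, a combined segmentation can be split into per-class binary masks with a few lines of Python:

```python
# Split a combined ADS segmentation into per-class binary masks.
# Assumption: 8-bit convention background=0, myelin=127, axon=255.
from skimage import io

seg = io.imread("image_seg-axonmyelin.png")  # hypothetical filename

axon_mask = seg == 255
myelin_mask = seg == 127
background_mask = seg == 0
```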


Help

Whether you are a newcomer or an experienced user, we will do our best to help and reply to you as soon as possible. Of course, please be considerate and respectful of all people participating in our community interactions.

  • If you encounter difficulties during installation and/or while using AxonDeepSeg, or have general questions about the project, you can start a new discussion on the AxonDeepSeg GitHub Discussions forum. We also encourage you, once you've familiarized yourself with the software, to continue participating in the forum by helping answer future questions from fellow users!
  • If you encounter bugs during installation and/or use of AxonDeepSeg, you can open a new issue ticket on the AxonDeepSeg GitHub issues webpage.

FSLeyes plugin

A tutorial demonstrating the installation procedure and basic usage of our FSLeyes plugin is available on YouTube, and can be viewed by clicking this link.


Citation

If you use this work in your research, please cite it as follows:

Zaimi, A., Wabartha, M., Herman, V., Antonsanti, P.-L., Perone, C. S., & Cohen-Adad, J. (2018). AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Scientific Reports, 8(1), 3816. Link to paper: https://doi.org/10.1038/s41598-018-22181-4.

Copyright (c) 2018 NeuroPoly (Polytechnique Montreal)

Licence

The MIT License (MIT)

Copyright (c) 2018 NeuroPoly, École Polytechnique, Université de Montréal

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributors

Pierre-Louis Antonsanti, Stoyan Asenov, Mathieu Boudreau, Oumayma Bounou, Marie-Hélène Bourget, Julien Cohen-Adad, Victor Herman, Melanie Lubrano, Antoine Moevus, Christian Perone, Vasudev Sharma, Thibault Tabarin, Maxime Wabartha, Aldo Zaimi.

Comments
  • Refactored data augmentation, changed loss function, cleaned notebooks and other improvements


    This major PR improves the performance of the model and provides an improved version of the data augmentation.

    DONE

    • Implemented data augmentation (using the Albumentations library) similar to the previous version of ADS

    • Changed the loss function from cross-entropy to the Dice coefficient to improve model performance, as indicated in issue #19 (a minimal sketch of the Dice loss follows this list)

    • Changed interpolation from linear to nearest-neighbour

    • Cleaned the notebooks and removed irrelevant ones, as indicated in #148

    • Migrated models to OSF storage to prevent bloating of the repository
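
    As a rough illustration of the loss change above, here is a minimal standalone sketch of the soft Dice loss (a NumPy version for clarity, not the PR's actual TensorFlow code):

    ```python
    import numpy as np

    def dice_loss(pred, target, eps=1e-7):
        """Soft Dice loss between per-pixel predictions and binary labels."""
        intersection = np.sum(pred * target)
        union = np.sum(pred) + np.sum(target)
        return 1.0 - (2.0 * intersection + eps) / (union + eps)
    ```

    Minimizing this loss directly optimizes region overlap, which tends to cope better with the class imbalance between axon, myelin, and background pixels than per-pixel cross-entropy.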

    Fixes #148, Fixes #19, Fixes #241, Fixes #278, Fixes #240, Fixes #273

    opened by vasudev-sharma 75
  • Implement Ellipse Minor Axis as Diameter


    Following the discussions in #363 and #349, this PR implements axon diameter computation using the minor axis of a fitted ellipse.

    DONE:

    • [x] Implement the minor axis as an additional way to compute the diameter of an axon, the thickness of myelin, and the diameter of axon_myelin
    • [x] To let the user choose between the minor axis and the equivalent diameter when computing morphometrics, the user can manually set the boolean variable ellipse to True or False.
    • [x] Made the necessary changes in the 04-compute-morphometrics.ipynb notebook, allowing the user to set their choice for diameter computation.
    • [x] Added comprehensive tests for this new feature.
    • [x] Implemented similar behaviour in the FSLeyes plugin, where the user is prompted to choose either the equivalent diameter or the ellipse minor axis. A separate issue (#432) was opened for this and it will be dealt with in a separate PR.
    • [x] Add documentation for this feature in notebook 04-morphometrics_extraction.ipynb
    • [x] Add a flag to select the shape of the axons
    • [x] Documentation: Add literature for axon shape (circle and ellipse)
    • [x] Add CLI tests for the axon shape -a flag

    What are the main contributions of this PR?

    1. Implements the ellipse minor axis as an additional way to compute morphometrics
    2. For generating morphometrics via the CLI, adds a flag -a to select the axon shape (refer to the docs for usage)
    3. Updated the Read the Docs documentation

    NOTE: The default behaviour uses the equivalent diameter (circle) for measuring morphometrics. However, if the user wants to treat the axon as an "oblong ellipse", they can set the ellipse boolean variable to True. A sketch contrasting the two definitions is shown below.

    Fix #363, #349
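
    For reference, a minimal sketch (not the actual ADS implementation) contrasting the two diameter definitions, using scikit-image's regionprops:

    ```python
    import numpy as np
    from skimage import measure

    def axon_diameters(axon_mask, ellipse=False):
        """One diameter per axon object in a binary mask.

        ellipse=False -> equivalent diameter (circle of the same area)
        ellipse=True  -> minor axis of the best-fit ellipse
        """
        props = measure.regionprops(measure.label(axon_mask > 0))
        if ellipse:
            return np.array([p.minor_axis_length for p in props])
        return np.array([p.equivalent_diameter for p in props])
    ```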

    feature 
    opened by vasudev-sharma 46
  • Add FSLeyes plugin


    This PR implements the following changes:

    • Changed numpy and scikit-image versions
    • Implemented a GUI design for the plugin
    • Implemented an image loader for the plugin
    • Implemented buttons on the control panel of the plugin to apply a prediction model
    • Implemented a button to load existing masks
    • "Active" images/masks are determined by their visibility status (eye icon on the overlay list)
    • Added the following tools: watershed segmentation and axon auto-fill

    Fixes #159, Fixes #162, Fixes #191, Fixes #192, Fixes #193, Fixes #201, Fixes #209

    TODO

    • Write tests for the plugin (and add them to Travis) - will be addressed in https://github.com/neuropoly/axondeepseg/issues/224

    How to test / install

    The installation procedure can be found here: https://github.com/neuropoly/axondeepseg/blob/FSLeyes_integration/docs/source/documentation.rst

    Tools description

    Tooltips were added to the GUI: if you hover your cursor over a button on the plugin, a description should pop up.

    opened by Stoyan-I-A 45
  • Release version 4.0.0


    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [x] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Release version 4.0.0 of AxonDeepSeg, which integrates IVADOMED into the project and provides Mac M1 compatibility.

    Linked issues

    Resolves #523, #536

    enhancement feature installation dependencies refactoring 
    opened by mathieuboudreau 43
  • Change how ADS dependencies are installed


    This branch is a child of the branch from the fork in #441, so that PR needs to be merged first. I had to branch out of that PR because we updated the OSF filenames, and thus tests fail in the meantime.

    This PR seeks to resolve the tests failing in #441 (highlighted here), and also simplifies the installation of FSLeyes by merging the requirements.txt file and the FSLeyes installation commands into a single environment.yml file. (I thought @jcohenadad had opened an issue about this at a previous meeting, but maybe we just discussed it.) Now, all the tools will be installed at the conda venv creation stage, instead of afterwards. pip install -e . is still needed to install AxonDeepSeg itself.

    With this PR, FSLeyes will always be installed by default in the conda environment.

    With this PR, I don't think including ADS on PyPI is a viable option anymore; adding it to conda-forge may be possible, though.

    To do:

    • [x] Resolve the failing test
      • This is likely due to one of the packages pulling the latest version instead of the version pinned in the previous requirements file.
    • [x] Someone with a Linux machine needs to test FSLeyes locally to make sure the GUI actually works.
    • [x] Update documentation on how to install AxonDeepSeg.
    • [x] Once this PR passes the Travis tests, squash-merge #441 before merging this one so that the diff is cleaner.
    opened by mathieuboudreau 38
  • Move to Python 3.6 compatibility


    This branch isn't ready for merging yet, please stand by. I'm simply making this PR to see the merge conflicts. There are still 3 failing tests and 1 errored test on Windows.

    opened by mathieuboudreau 34
  • Add pre-commit hooks


    This PR uses pre-commit hooks to limit file size. We wish to set a limit on file sizes so that contributors don't commit massive files to the repo.

    pre-commit has been added to prevent files larger than 500 KB from being committed, to check YAML syntax, and to clear the outputs of Jupyter notebook files.

    Aside from the local pre-commit hooks, checks using pre-commit hooks were also added to Travis CI.

    The changes implemented are similar to what was done in the sister projects (see here and here).

    Three checks are done using pre-commit hooks, both locally and on Travis CI:

    • Files larger than 500 KB are prevented from being committed (an illustrative Python version of this check is sketched after this list).
    • YAML file syntax check.
    • Jupyter notebook output clear check: this hook clears cell outputs when you commit notebooks that still contain them. The commit that triggers it modifies the notebook to clear the executed cells' outputs; the next commit will then include the notebooks with no cell outputs.
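
    For illustration, the logic of the large-file check boils down to the following sketch (illustrative only; the PR uses standard pre-commit hooks rather than custom code):

    ```python
    # Refuse the commit if any staged file exceeds MAX_KB.
    import os
    import subprocess
    import sys

    MAX_KB = 500

    def main():
        staged = subprocess.run(
            ["git", "diff", "--cached", "--name-only"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        too_big = [f for f in staged
                   if os.path.exists(f) and os.path.getsize(f) > MAX_KB * 1024]
        for f in too_big:
            print(f"{f} exceeds {MAX_KB} KB")
        return 1 if too_big else 0  # non-zero exit aborts the commit

    if __name__ == "__main__":
        sys.exit(main())
    ```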

    Instructions to test this PR:

    1. In your virtual environment, first run conda env update --name ads_venv --file environment.yml

    2. Then run pip install -e .

    3. You can now try to test each of the hooks individually.

      3.1 (Hook for files > 500 KB): Either try to commit any ADS model, or run all the cells of 00-getting_started.ipynb (after running all the cells, the notebook grows to around 1.5 MB). Now try to commit the model, the notebook, or both. The expected behavior is that this pre-commit hook won't allow you to commit these files, because their size exceeds 500 KB.

      3.2 (Hook for YAML file syntax): Change the syntax of the .travis.yml file (i.e., introduce a syntax error), then try to commit it. The expected behavior is that this pre-commit hook won't allow you to commit a YAML file with incorrect syntax.

      3.3 (Hook for notebook outputs): Execute the cells of one of the notebooks, then try to commit the notebook with cell outputs. The commit will modify the notebook so that the cell outputs are cleared; you can then commit the notebook with cleared outputs.

    Linked Issues

    Fixes #423

    dependencies ci 
    opened by vasudev-sharma 32
  • Improve and force imread/imwrite conversion to 8bit int


    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [ ] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Changes how 8-bit depth conversion is done in ADS's imread, removes the function's optional bit-depth argument (it appears to have been unused for a long time, always taking the default value of 8), and adds a test verifying that the same image, saved at different int and float precisions and loaded with ads.imread, yields the same 8-bit image array.
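
    For illustration, a minimal sketch of this kind of forced conversion (an assumption about the approach, not ADS's exact code):

    ```python
    import numpy as np

    def to_uint8(img):
        """Rescale an int or float image to the full 0-255 uint8 range."""
        img = img.astype(np.float64)
        img -= img.min()
        peak = img.max()
        if peak > 0:
            img /= peak
        return (img * 255).round().astype(np.uint8)
    ```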

    Linked issues

    Resolves #175

    processing testing 
    opened by mathieuboudreau 31
  • No matching distribution found for tensorflow==1.3.0


    Using 2bee818b5be963b11f57733b110f1818daebf402 on rosenberg, I cannot properly install tensorflow==1.3.0:

    [...]
    Collecting tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0)
      Could not find a version that satisfies the requirement tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0) (from versions: )
    No matching distribution found for tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0)
    (venv_ads) [jcohen@rosenberg axondeepseg]$ pip install tensorflow==1.3.0
    Collecting tensorflow==1.3.0
      Could not find a version that satisfies the requirement tensorflow==1.3.0 (from versions: )
    No matching distribution found for tensorflow==1.3.0
    (venv_ads) [jcohen@rosenberg axondeepseg]$ pip -V
    pip 18.1 from /home/jcohen/miniconda3/envs/venv_ads/lib/python3.7/site-packages/pip (python 3.7)
    (venv_ads) [jcohen@rosenberg axondeepseg]$ python
    Python 3.7.1 (default, Oct 23 2018, 19:19:42) 
    [GCC 7.3.0] :: Anaconda, Inc. on linux
    
    installation 
    opened by jcohenadad 30
  • Fix Naming Convention


    Fixes #439

    OSF: Test files to upload on OSF: test_files.zip

    DONE:

    • [x] Fix the naming convention in the FSLeyes plugin
    • [x] Fix the naming convention in the notebooks
    • [x] Fix the naming convention in the apply_model.py script
    • [x] Upload test files on OSF

    TODO:

    • [ ] Add documentation on Wiki

    To test this PR :

    The names of the segmented images should follow a common convention, that is (a sketch applying it follows the list):

    1. image_name_seg-axonmyelin.png (axon + myelin segmented mask)
    2. image_name_seg-axon.png (axon mask)
    3. image_name_seg-myelin.png (myelin mask)
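
    A minimal sketch of the convention (a hypothetical helper, not ADS's actual code):

    ```python
    from pathlib import Path

    SUFFIXES = ("seg-axonmyelin", "seg-axon", "seg-myelin")

    def seg_filenames(image_path):
        """Derive the three segmented-mask filenames for an input image."""
        p = Path(image_path)
        return [p.with_name(f"{p.stem}_{suffix}{p.suffix}") for suffix in SUFFIXES]
    ```

    For example, seg_filenames("image_name.png") yields the three filenames listed above.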

    To check that the naming convention is followed, ADS should segment images adhering to it in each of the cases below.

    1. FSLeyes: Test the Apply ADS Segmentation Model and Save segmentation buttons and check that the outputs follow the naming convention.
    2. Notebooks: Run all the notebooks and check whether the naming convention is followed.
    3. CLI: This has been tested in the ADS unit tests, so you should expect all the test cases to pass.
    opened by vasudev-sharma 27
  • v4 ivadomed implementation


    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [x] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Implements IVADOMED automated segmentation inside the ADS framework.

    Linked issues

    Resolves #523

    enhancement feature fsleyes dependencies refactoring ivadomed-refactoring 
    opened by mathieuboudreau 26
  • RAM limitations with no-patch option


    Describe the problem

    In PR #696 and #700, we added the option for the user to segment images without patches.

    After comparing the segmentation results with our models, we conclude that:

    • Qualitatively: the "no-patch" segmentation generally produces better results, with fewer border irregularities, fewer false-positive pixel clusters, and fewer incomplete axons and/or holes in axons.
    • Quantitatively: the "no-patch" option gives segmentation metrics close to the patch option, but better detection metrics because of fewer small false-positive pixel clusters.

    However, some issues arose while testing:

    1. By design in ivadomed, when both PT and ONNX models are available, the PT model is selected automatically on GPU. However, PT models require more GPU memory than ONNX: some larger images could be segmented with ONNX but not with PT, yet there is no way for the user to select the ONNX model on GPU without removing the PT model from the folder.
    2. Some images are just too big to segment without patches, even on GPU with the ONNX model, and resulted in a "segmentation fault" (memory error) on bireli, rosenberg and romane. I was not able to identify how, or whether, we can intercept this error. I was also not able to reproduce it on CPU on my laptop, and had to kill the process (Ctrl+C or closing the terminal) to avoid a crash.
    3. We currently cannot choose which GPU to use in ADS, but we can in ivadomed.

    Details of the tests and issues can be found in these slides.

    Proposed solutions

    We talked in meeting of different solutions/approaches to deal with these issues respectively:

    1. The model (PT or ONNX) is selected in ivadomed here depending on the device (CPU/GPU) and availability. We could add a try-except block that tries the PT model and switches to the ONNX model if PT fails and ONNX is available. This would need to be done in ivadomed.
    2. Several solutions were suggested:
      • Estimate the RAM needed based on image size and use psutil to estimate the RAM available before launching the segmentation, then warn the user if RAM is not sufficient (a sketch of such a check follows this list).
      • Estimate the maximum patch size that could be used given the free RAM, add it to the warning, and implement a way to change the patch size (currently fixed for a given model).
      • Warn the user when using "no-patch" that it may not be suitable for larger images, and warn the user when not using "no-patch" that "no-patch" could potentially produce better results if RAM is sufficient --> For now, we decided to go with this last option; implementation is in progress in PR #704.
    3. The implementation to choose which GPU to use in ADS is in progress in #701.

    Additional details can be found in these slides, including ideas on "where" to fix these issues (ivadomed or ADS) and the elements to consider in each case.
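
    A minimal sketch of the psutil-based check suggested in solution 2 (the per-pixel cost and overhead factor below are made-up placeholders, not measured values):

    ```python
    import warnings
    import psutil

    def warn_if_low_ram(height, width, n_classes=3, bytes_per_value=4, overhead=3.0):
        """Warn if a rough RAM estimate for no-patch segmentation exceeds free RAM."""
        needed = height * width * n_classes * bytes_per_value * overhead
        available = psutil.virtual_memory().available
        if needed > available:
            warnings.warn(
                f"Estimated RAM ({needed / 1e9:.1f} GB) exceeds available RAM "
                f"({available / 1e9:.1f} GB); consider patch-based segmentation."
            )
    ```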

    enhancement discussion 
    opened by mariehbourget 0
  • Idea: stop supporting combined axon-myelin images and switch to only separate


    This came up a couple of group meetings ago. If I recall correctly, the reasoning was that this is how IVADOMED treats the images anyway, and it would make things simpler for the GUIs as well.

    It wasn't clear to me whether this was meant for both the inputs and outputs of ADS; to me it makes sense only for inputs, as generating a combined axon-myelin image is still quite useful for us to look at. But it might make sense to stop supporting this combined image as an input into ADS. One potential issue I can see: if a user does manual correction of the separate masks in another software, there might be overlapping pixels identified as both myelin and axon (something that is avoided by using the combined masks).
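
    A minimal sketch of such an overlap check (hypothetical, not existing ADS code):

    ```python
    import numpy as np

    def overlapping_pixels(axon_mask, myelin_mask):
        """Count pixels labeled as both axon and myelin in separate masks."""
        return int(np.logical_and(axon_mask > 0, myelin_mask > 0).sum())
    ```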

    opened by mathieuboudreau 3
  • Prepare support for 3-class segmentation


    In order to support 3-class segmentation (context: unmyelinated fibers), we will need to change some things on the ADS side. For ivadomed, nothing really changes: we will use 3 ground truths in the BIDS derivatives and change the training config accordingly. However, in ADS we will need to add some flexibility, notably:

    • For the segmentation process, axon_segmentation(...) will need to also save the third prediction. Maybe also add flexibility to merge_masks(...) if we want to support it: https://github.com/axondeepseg/axondeepseg/blob/821074c2c8b539bcec69686cce72304656124d51/AxonDeepSeg/apply_model.py#L46-L50 I'm not exactly sure how we would handle the 3rd class in the grayscale format of the combined prediction image, though (one illustrative possibility is sketched after this list).
    • Most of the work will probably be on the morphometrics process. Thanks to @Stoyan-I-A's refactoring, this should be easier to do, because I think we will only need to add columns for the 3rd class metrics (e.g., area, etc.). Fortunately, processing unmyelinated axons should be exactly the same as processing axons. I'm thinking of adding a parameter to get_axon_morphometrics(...) indicating whether we want 3-class morphometrics; if so, it will load the 3rd segmentation and run the usual axon metrics on it. I'm not sure how we could merge this data into the morphometrics file (because the "myelin" columns will not apply to unmyelinated axons). In that case, maybe we would need 2 separate morphometrics files, but I don't really like that idea.
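
    One purely illustrative possibility for the grayscale question above (the issue leaves the encoding open; the 85/170 levels are made up, extending the current 0/127/255 axon-myelin convention):

    ```python
    import numpy as np

    def merge_masks_3class(axon, myelin, unmyelinated):
        """Merge three binary class masks into a single grayscale image."""
        merged = np.zeros(axon.shape, dtype=np.uint8)
        merged[myelin > 0] = 85           # myelin
        merged[unmyelinated > 0] = 170    # unmyelinated axon
        merged[axon > 0] = 255            # myelinated axon takes precedence
        return merged
    ```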
    enhancement refactoring discussion morphometrics 
    opened by hermancollin 4
  • Colorization instance map question


    Hello,

    So I have been playing around with the colorization feature in the morphometrics extraction pipeline. My question concerns the colorization instance map and how it relates to the morphometrics extraction. The segmentation of my images is pretty good at the moment. However, the colorization instance map shows myelin identity boundary creep between touching axons.

    Does the colorization identity mismatch contribute to the calculation of the myelin thickness? And if so, what is the best way to address this issue?

    Thanks a lot,

    Michael

    (Attached images: LM_1, LM_1_axonmyelin_index, LM_1_instance-map)

    opened by GrimmSnark 5
  • Create a Napari plugin for ADS


    Checklist

    • [ ] I've given this PR a concise, self-descriptive, and meaningful title
    • [ ] I've linked relevant issues in the PR body
    • [ ] I've applied the relevant labels to this PR
    • [ ] I've added relevant tests for my contribution
    • [ ] I've updated the documentation and/or added correct docstrings
    • [ ] I've assigned a reviewer
    • [ ] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    This PR contains the code I used for testing a Napari plugin.

    Linked issues

    Resolves #681

    opened by Stoyan-I-A 1