GitHub repository for the ICLR Computational Geometry & Topology Challenge 2022

Overview

ICLR Computational Geometry & Topology Challenge 2022

Welcome to the ICLR 2022 Computational Geometry & Topology Challenge --- organized by the ICLR 2022 Workshop on Geometrical and Topological Representation Learning.

Lead organizers: Adele Myers, Saiteja Utpala, and Nina Miolane (UC Santa Barbara).


Description of the challenge

The purpose of this challenge is to foster reproducible research in geometric (deep) learning by crowdsourcing the open-source implementation of learning algorithms on manifolds. Participants are asked to contribute code for a published/unpublished algorithm, following Scikit-Learn/Geomstats' or PyTorch's APIs and computational primitives, benchmark it, and demonstrate its use in real-world scenarios.

Each submission takes the form of a Jupyter Notebook leveraging the coding infrastructure and building blocks from the package Geomstats. Participants submit their Jupyter Notebook via Pull Requests (PR) to this GitHub repository; see the Guidelines below.

In addition to the challenge's prizes, participants will have the opportunity to co-author a white paper summarizing the findings of the competition.

This is the second edition of this challenge! Feel free to look at last year's guidelines, submissions, winners and paper for additional information.

Note: We invite participants to review this README regularly, as details are added to the guidelines when questions are submitted to the organizers.

Deadline

The final Pull Request must be submitted before:

  • April 4th, 2022 at 16:59 PST (Pacific Standard Time).

The participants can freely commit to their Pull Request and modify their submission until this time.

Winners announcement and prizes

The top 3 winners will be announced at the ICLR 2022 virtual workshop Geometrical and Topological Representation Learning and advertised online. The winners will also be contacted directly via email.

The prizes are:

  • $2000 for the 1st place,
  • $1000 for the 2nd place,
  • $500 for the 3rd place.

Subscription

Anyone can participate and participation is free. To be considered amongst the participants, it is enough to:

  • send a Pull Request,
  • follow the challenge guidelines.

An acceptable PR automatically subscribes a participant to the challenge.

Guidelines

We encourage participants to submit their Pull Request early on. This leaves time to debug the tests and to address potential issues with the code.

Teams are accepted and there is no restriction on the number of team members.

The principal developers of Geomstats (i.e. the co-authors of the Geomstats published papers) are not allowed to participate.

A submission should respect the following Jupyter Notebook’s structure:

  1. Introduction and Motivation
  • Explain and motivate the choice of learning algorithm
  2. Related Work and Implementations
  • Contrast the chosen learning algorithm with other algorithms
  • Describe existing implementations, if any
  3. Implementation of the Learning Algorithm --- with guidelines:
  • Follow Scikit-Learn/Geomstats APIs, see the RiemannianKMeans example, or PyTorch base classes such as torch.nn.Module (see the sketch after this list).
  • IMPORTANT: Use Geomstats computational primitives (e.g. exponential, geodesics, parallel transport, etc.). Note that the functions in geomstats.backend are not considered computational primitives, as they are only wrappers around autograd, numpy, torch and tensorflow functions.
  4. Test on Synthetic Datasets and Benchmark
  5. Application to Real-World Datasets
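
To make the expected API style concrete, here is a minimal, deliberately naive sketch (in the spirit of the submission-example-* notebooks) of a Scikit-Learn-style estimator built on Geomstats computational primitives. The class FrechetMeanEstimator and its hyperparameters are illustrative only; a proper Fréchet mean estimator already exists in geomstats.learning, so do not submit this as-is.

import geomstats.backend as gs
from geomstats.geometry.hypersphere import Hypersphere
from sklearn.base import BaseEstimator

class FrechetMeanEstimator(BaseEstimator):
    """Naive Fréchet mean by tangent-space averaging, for API illustration only."""

    def __init__(self, space, max_iter=32, step=0.5):
        self.space = space
        self.max_iter = max_iter
        self.step = step

    def fit(self, X, y=None):
        metric = self.space.metric
        mean = X[0]
        for _ in range(self.max_iter):
            # Computational primitives: log maps the data to the tangent space
            # at the current estimate, exp moves back along the geodesic.
            tangent_vecs = metric.log(X, base_point=mean)
            update = self.step * gs.mean(tangent_vecs, axis=0)
            mean = metric.exp(update, base_point=mean)
        self.estimate_ = mean
        return self

sphere = Hypersphere(dim=2)
X = sphere.random_uniform(n_samples=10)
mean = FrechetMeanEstimator(space=sphere).fit(X).estimate_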

Examples of possible submissions

  • Comparing embedding on trees in hyperbolic plane and variants, e.g. from Sarkar 2011.
  • Hypothesis testing on manifolds, e.g. from Osborne et al 2013.
  • (Extended/Unscented) Kalman Filters on Lie groups and variants, e.g. from Bourmaud et al 2013.
  • Gaussian Processes on Riemannian Manifolds and variants, e.g. from Calandra et al 2014.
  • Barycenter Subspace Analysis on Manifolds and variants, e.g. from Pennec 2016.
  • Curve fitting on manifolds and variants, e.g. from Gousenbourger et al 2018.
  • Smoothing splines on manifolds, e.g. from Kim et al 2020.
  • Recurrent models on manifolds and variants, e.g. from Chakraborty et al 2018.
  • Geodesic CNNs on manifolds and variants, e.g. from Masci et al 2018.
  • Variational autoencoders on Riemannian manifolds and variants, e.g. from Miolane et al 2019.
  • Probabilistic Principal Geodesic Analysis and variants, e.g. from Zhang et al 2019.
  • Gauge-equivariant neural networks and variants, e.g. from Cohen et al 2019.
  • and many more, as long as you implement them using Geomstats computational primitives (e.g. exponential, geodesics, parallel transport, etc).

Before starting your implementation, make sure that the algorithm that you want to contribute is not already in the learning module of Geomstats.
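
One lightweight way to check this, assuming geomstats is installed locally, is to list the modules shipped in its learning package:

import pkgutil

import geomstats.learning

# Print the learning algorithms already implemented in geomstats.
for module in pkgutil.iter_modules(geomstats.learning.__path__):
    print(module.name)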

The notebook provided in the submission-example-* folders is also an example of a submission that can help the participants design their proposal and understand how to use/inherit from Scikit-Learn, Geomstats, and PyTorch. Note that this example is "naive" on purpose and is only meant to give illustrative templates rather than to provide a meaningful data analysis. More examples on how to use the packages can be found on the GitHub repository of Geomstats.

The code should be compatible with Python 3.8 and make an effort to respect the Python style guide PEP8. The portion of the code that uses geomstats only needs to run with the numpy or pytorch backends. However, the reviewers/voters will appreciate code that runs on all backends (numpy, autograd, tensorflow and pytorch) by relying on geomstats' gs module, when applicable.
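
For reference, a minimal sketch of backend-agnostic code: the backend is selected through the GEOMSTATS_BACKEND environment variable before geomstats is imported, and array operations go through gs.

import os

# Select the backend before importing geomstats (numpy is the default).
os.environ["GEOMSTATS_BACKEND"] = "pytorch"  # or "numpy", "autograd", "tensorflow"

import geomstats.backend as gs

# Routing array code through gs keeps the same notebook runnable on every backend.
x = gs.array([1.0, 2.0, 3.0])
norm = gs.linalg.norm(x)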

The Jupyter notebooks are automatically tested when a Pull Request is submitted. The tests have to pass. Their running time should not exceed 3 hours, although exceptions can be made by contacting the challenge organizers.

If a dataset is used, the dataset has to be public and referenced. There is no constraint on the data type to be used.

A participant can raise GitHub issues and/or request help or guidance at any time through Geomstats slack. The help/guidance will be provided modulo availability of the maintainers.

Submission procedure

  1. Fork this repository to your GitHub.

  2. Create a new folder with your team leader's GitHub username in the root folder of the forked repository, in the main branch.

  3. Place your submission inside the folder created at step 2, with:

  • a single Jupyter notebook (the file name shall end with .ipynb),
  • datasets (if needed),
  • auxiliary Python files (if needed).

Datasets larger than 10MB shall be directly imported from external URLs or from data sharing platforms such as OpenML.
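
For example, a small sketch of fetching a public dataset at runtime with scikit-learn's OpenML loader; the dataset name "mnist_784" is only a placeholder.

from sklearn.datasets import fetch_openml

# Download the dataset when the notebook runs, instead of committing a large file.
data = fetch_openml("mnist_784", version=1, as_frame=False)
X, y = data.data, data.target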

If your project requires external pip-installable libraries that are not listed in Geomstats' requirements.txt, you can install them at the beginning of your Jupyter notebook, e.g. with:

import sys
!{sys.executable} -m pip install numpy scipy torch

Evaluation and ranking

The Condorcet method will be used to rank the submissions and decide on the winners. The evaluation criteria will be:

  1. How "interesting"/"important"/"useful" is the learning algorithm? Note that this is a subjective evaluation criterion, where the reviewers will evaluate what the implementation of this algorithm brings to the community (regardless of the quality of the code).
  2. How readable/clean is the implementation? How well does the submission respect Scikit-Learn/Geomstats/PyTorch's APIs? If applicable: does it run across backends?
  3. Is the submission well-written? Do the docstrings help understand the methods?
  4. How informative are the tests on synthetic datasets, the benchmarks, and the real-world application?

Note that these criteria do not reward new learning algorithms, nor learning algorithms that outperform the state-of-the-art --- but rather clean code and exhaustive tests that will foster reproducible research in our field.

Selected Geomstats maintainers and collaborators, as well as each team whose submission respects the guidelines, will vote once via a Google Form to express their preference for the 3 best submissions according to each criterion. Note that each team gets only one vote, even if the team has several participants.

The 3 preferences must all be different: e.g. one cannot select the same Jupyter notebook for both first and second place; such irregular votes will be discarded. A link to the Google Form will be provided to record the votes, and an email address will be required to identify the voter. The votes will remain secret; only the final ranking will be published.

Questions?

Feel free to contact us through GitHub issues on this repository, on the Geomstats repository, or through the Geomstats Slack. Alternatively, you can contact Nina Miolane at [email protected].

Comments
  • Question about what algorithms would count

    Hi,

    I was wondering whether a couple of algorithms that are about learning metrics and embeddings would be within the scope.

    Specifically, if the following two algorithms (either individually or collectively) would be within scope

    1. TreeRep from paper. This is an algorithm that takes in a metric and outputs a tree.
    2. Tree embeddings in Hyperbolic space from this paper. This is an algorithm that takes a weighted tree and embeds it into the hyperbolic manifold.

    Thanks, Rishi

    opened by rsonthal 8
  • NeuroSEED for Small Open Reading Frame Proteins Submission

    All the files for the code are in a branch called "master". There is one folder, and inside it are all the code and folders necessary to run the project.

    opened by xiongjeffrey 3
  • Challenge submission: Sasaki Metric and Applications in Geodesic Analysis

    Dear Challenge Team,

    We are happy to contribute our project Sasaki Metric and Applications in Geodesic Analysis to the ICLR Challenge 2022.

    Best regards, Felix Ambellan, Martin Hanik, Esfandiar Nava-Yazdani, and Christoph von Tycowicz

    opened by vontycowicz 1
  • NeuroSEED for Small Open Reading Frame Proteins

    Unfortunately, I don't have access to the Geomstats Slack and I am unsure how to accurately submit a pull request. The link to our research folder is below. https://github.com/xiongjeffrey/NeuroSEED

    opened by xiongjeffrey 0
  • autodiff fails on svd in pre_shape.py

    Dear geomstats team,

    we are trying to perform geodesic regression in Kendall shape space but encountered the issue that the current implementation is not compatible with autodiff functionality. In particular, the align method in geomstats/geometry/pre_shape.py employs a singular value decomposition, for which autodiff fails if a full set of left/right singular vectors is requested. However, providing the flag 'full_matrices=False' avoids this pitfall and should yield the same alignment (see the illustrative sketch after this list).

    We added the flag and, indeed, are now able to run regression. We will submit the modified pre_shape.py along with our project so that it does not rely on short-notice updates of geomstats.

    Best regards, Christoph

    opened by vontycowicz 1
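
Regarding the last comment above, here is a minimal sketch, assuming a PyTorch setting, of the reduced-SVD pattern it describes; this illustrates the general pitfall and is not the geomstats code itself.

import torch

a = torch.randn(5, 3, requires_grad=True)

# As reported in the comment above, requesting the full set of singular vectors
# can break automatic differentiation; the reduced SVD (full_matrices=False)
# avoids this and yields the same alignment.
u, s, vh = torch.linalg.svd(a, full_matrices=False)
loss = s.sum()
loss.backward()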