Overview

Mjolnir

Mjolnir: Thor's hammer, a divine instrument making its holder worthy of wielding lightning.

Template repository for managing machine learning research projects built with PyTorch-Lightning, using Anaconda for Python Dependencies and Sane Quality Defaults (Black, Flake8, isort).

Template created by Sidd Karamcheti.


Contributing

Key section if this is a shared research project (e.g., with other collaborators). Usually you should have a detailed set of instructions in CONTRIBUTING.md. Notably, before committing to the repository, make sure to set up your development environment and install the pre-commit hooks (pre-commit install)!

Here are sample contribution guidelines (high-level):

  • Install and activate the Conda Environment using the QUICKSTART instructions below.

  • On installing new dependencies (via pip or conda), please make sure to update the environment-<arch>.yaml files via the following command (note that you need to separately create the environment-cpu.yaml file by exporting from your local development environment!); see the sketch just after this list:

    make serialize-env --arch=<gpu | cpu>
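
For reference, serialization targets of this kind typically wrap conda env export; the exact recipe lives in the Makefile, but a minimal sketch (the flags and output path below are assumptions -- check the actual Makefile) looks like:

    # Hypothetical sketch of what `make serialize-env` might run for the GPU environment.
    # Exports the active conda environment to the corresponding environments/*.yaml file.
    conda env export --no-builds > environments/environment-gpu.yaml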


Quickstart

Note: Replace instances of mjolnir and other instructions with instructions specific to your repository!

Clones mjolnir to the working directory, then walks through dependency setup, mostly leveraging the environment-<arch>.yaml files.

Shared Environment (for Clusters w/ Centralized Conda)

Note: Whether you need this subsection depends on your setup. Given the way the Stanford NLP Cluster has been set up (and the way I've set up the ILIAD Cluster), centralized conda environments make it really easy to maintain dependencies across multiple users, but YMMV.

@Sidd (or central repository maintainer) has already set up the conda environments in Stanford-NLP/ILIAD. The only necessary steps for you to take are cloning the repo, activating the appropriate environment, and running pre-commit install to start developing.
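
Concretely, the per-user steps mirror the Quickstart below, minus environment creation (the mjolnir environment name is the repository default; a shared setup may instead expose the environment via a full prefix path, which is an assumption here):

git clone https://github.com/pantheon-616/mjolnir.git
cd mjolnir
conda activate mjolnir  # or `conda activate /path/to/shared/envs/mjolnir` on a shared prefix (hypothetical path)
pre-commit install  # Important!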

Local Development - Linux w/ GPU & CUDA 11.0

Note: Assumes that conda (Miniconda or Anaconda are both fine) is installed and on your path.

Ensure that you're using the appropriate environment-<arch>.yaml file; if PyTorch doesn't build properly for your setup, the CUDA Toolkit version is usually a good place to start checking. We have environment-<arch>.yaml files for CUDA 11.0 (additional CUDA Toolkit versions can be added -- file an issue if necessary).

git clone https://github.com/pantheon-616/mjolnir.git
cd mjolnir
conda env create -f environments/environment-gpu.yaml  # Choose the CUDA Toolkit version based on your hardware -- defaults to 11.0!
conda activate mjolnir
pre-commit install  # Important!
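
After the environment is created, a quick sanity check (not part of the official setup) confirms that the GPU build of PyTorch was actually installed:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # Should print True on a CUDA machine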

Local Development - CPU (Mac OS & Linux)

Note: Assumes that conda (Miniconda or Anaconda are both fine) is installed and on your path. Use the -cpu environment file.

git clone https://github.com/pantheon-616/mjolnir.git
cd mjolnir
conda env create -f environments/environment-cpu.yaml
conda activate mjolnir
pre-commit install  # Important!

Usage

This repository comes with sane defaults for black, isort, and flake8 for formatting and linting. It additionally defines a bare-bones Makefile (to be extended for your specific build/run needs) for formatting/checking and for dumping updated versions of the dependencies (after installing new modules).
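
As a sketch, a typical formatting/linting loop looks like the following; the make target names here are assumptions, so check the Makefile for the real ones (pre-commit run is standard and works as written):

# Hypothetical target names -- see the Makefile for the actual recipes.
make format                 # run black + isort over the codebase
make check                  # run flake8 (and any other configured linters)
pre-commit run --all-files  # manually run all configured pre-commit hooks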

Other repository-specific usage notes should go here (e.g., training models, running a saved model, running a visualization, etc.).
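
For instance, a training run typically points train.py at one of the Quinine configs in conf/; the flag name and config filename below are illustrative assumptions, not something this template fixes:

# Hypothetical invocation -- adjust the flag/config path to match how train.py parses its Quinine config.
python train.py --config conf/example.yaml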

Repository Structure

High-level overview of the repository file-tree (expand on this as you build out your project). This is meant to be brief; more detailed implementation/architectural notes should go in ARCHITECTURE.md.

  • conf - Quinine Configurations (.yaml) for various runs (used in lieu of argparse or typed-argument-parser)
  • environments - Serialized Conda Environments for both CPU and GPU (CUDA 11.0). Other architectures/CUDA toolkit environments can be added here as necessary.
  • src/ - Source Code - preprocessing utilities, Lightning Module definitions, and shared utilities.
    • preprocessing/ - Preprocessing Code (fill in details for specific project).
    • models/ - Lightning Modules (fill in details for specific project).
  • tests/ - Tests - Please test your code... just, please (more details to come).
  • train.py - Top-Level (main) entry point to the repository, for training and evaluating models. Can define additional top-level scripts as necessary.
  • Makefile - Top-level Makefile (by default, supports conda serialization, and linting). Expand to your needs.
  • .flake8 - Flake8 Configuration File (Sane Defaults).
  • .pre-commit-config.yaml - Pre-Commit Configuration File (Sane Defaults).
  • pyproject.toml - Black and isort Configuration File (Sane Defaults).
  • ARCHITECTURE.md - Write up of repository architecture/design choices, how to extend and re-work for different applications.
  • CONTRIBUTING.md - Detailed instructions for contributing to the repository, in furtherance of the default instructions above.
  • README.md - You are here!
  • LICENSE - By default, research code is made available under the MIT License. Change as you see fit, but think deeply about why!

Start-Up (from Scratch)

Use these commands if you're starting a repository from scratch (this shouldn't be necessary for your collaborators, since you'll be setting things up, but I like to keep this in the README in case things break in the future). Generally, if you're just trying to run/use this code, look at the Quickstart section above.

GPU & Cluster Environments (CUDA 11.0)

conda create --name mjolnir python=3.8
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch   # CUDA 11.0 on most of the cluster!
conda install ipython
conda install pytorch-lightning -c conda-forge

pip install black flake8 isort matplotlib pre-commit quinine wandb

# Install other dependencies via pip below -- conda dependencies should be added above (always conda before pip!)
...

CPU Environments (Usually for Local Development -- Geared for Mac OS & Linux)

Similar to the above, but installs the CPU-only versions of Torch and similar dependencies.

conda create --name mjolnir python=3.8
conda install pytorch torchvision torchaudio -c pytorch
conda install ipython
conda install pytorch-lightning -c conda-forge

pip install black flake8 isort matplotlib pre-commit quinine wandb

# Install other dependencies via pip below -- conda dependencies should be added above (always conda before pip!)
...

Containerized Setup

Support for running mjolnir inside of a Docker or Singularity container is TBD. If this support is urgently required, please file an issue.
