Classifies galaxy morphology with a Bayesian CNN

Overview

Zoobot

Zoobot classifies galaxy morphology with deep learning. This code will let you:

  • Reproduce and improve the Galaxy Zoo DECaLS automated classifications
  • Finetune the classifier for new tasks

For example, you can train a new classifier like so:

import tensorflow as tf

# define_model, losses and training_config come from the zoobot package;
# schema, initial_size, resize_size, train_config and the datasets are set up earlier in the full examples
model = define_model.get_model(
    output_dim=len(schema.label_cols),  # schema defines the questions and answers
    input_size=initial_size,
    crop_size=int(initial_size * 0.75),
    resize_size=resize_size
)

model.compile(
    loss=losses.get_multiquestion_loss(schema.question_index_groups),
    optimizer=tf.keras.optimizers.Adam()
)

training_config.train_estimator(
    model,
    train_config,  # parameters for how to train e.g. epochs, patience
    train_dataset,
    test_dataset
)

Install using git and pip (a virtual env or conda is highly recommended):

git clone git@github.com:mwalmsley/zoobot.git
pip install -r zoobot/requirements.txt
pip install -e zoobot

The main branch is for stable-ish releases. The dev branch includes the shiniest features but may change at any time.

To get started, see the documentation.

I also include some working examples in the repository for you to copy and adapt.

Latest cool features on dev branch (June 2021):

  • Multi-GPU distributed training
  • Support for Weights and Biases (wandb)
  • Worked examples for custom representations

Contributions are welcome and will be credited in any future work.

If you use this repo for your research, please cite the paper.

Comments
  • Benchmarks

    It's important that Zoobot has proper benchmarks so that we can be confident new releases work properly for users. This PR adds those benchmarks.

    In the course of setting up the benchmarks, I have made some major changes/improvements:

    • pytorch-galaxy-datasets refactored to work for tensorflow, with imports adapted
    • both the tensorflow and pytorch zoobot versions now use albumentations for augmentations (see the sketch after this comment); the old TF augmentation code is removed
    • tensorflow version bumped to 2.10 (current latest) while I'm at it
    • pytorch version now logs per-question loss; the loss func aggregation has a new option to support this
    • TensorFlow version also has per-question logging, but enabling it awaits resolution of an issue with the Keras team
    • Created minimal_example.py for TensorFlow (thanks, @katgre)
    • Support CPU-only PyTorch training
    • Refactor TF TrainingConfig to a Trainer object, Lightning-style, for consistency
    enhancement 
    opened by mwalmsley 3
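
    Since both frameworks now share albumentations for augmentations, here is a minimal sketch of the kind of pipeline involved; the specific transforms, sizes and probabilities are illustrative assumptions, not Zoobot's exact defaults:

    import albumentations as A
    import numpy as np

    # hypothetical augmentation pipeline in the albumentations style shared by both versions
    transforms = A.Compose([
        A.Rotate(limit=180, p=1.0),  # galaxies have no preferred orientation
        A.RandomResizedCrop(height=224, width=224, scale=(0.7, 0.8)),
        A.HorizontalFlip(p=0.5),
    ])

    image = np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8)  # stand-in for a galaxy cutout
    augmented = transforms(image=image)['image']  # the same call works from TF or PyTorch data loaders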
  • on_train_batch_end is slow in TF

    Unclear what's causing this slowness. Presumably a callback I added - but none look like they should be heavy? Perhaps something wandb is doing?

    • Remove all callbacks and rerun
    • Remove wandb and rerun
    For each, check if the slow warning continues (or if training speed changes at all). A timing-callback sketch follows after this comment.
    enhancement 
    opened by mwalmsley 3
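
    To help narrow down what is slow, a minimal diagnostic sketch (the BatchTimer class and 0.5 s threshold are hypothetical, not part of Zoobot):

    import time
    import tensorflow as tf

    class BatchTimer(tf.keras.callbacks.Callback):
        # times each training batch end-to-end, including callback overhead
        def on_train_batch_begin(self, batch, logs=None):
            self._start = time.time()

        def on_train_batch_end(self, batch, logs=None):
            elapsed = time.time() - self._start
            if elapsed > 0.5:  # arbitrary threshold for flagging slow batches
                print(f'Batch {batch} took {elapsed:.2f}s')

    # model.fit(..., callbacks=[BatchTimer()])  # add alongside (or instead of) the suspect callbacks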
  • add gh action to publish package to pypi

    Related to https://github.com/mwalmsley/zoobot/issues/18#issuecomment-1278635788

    This PR adds an automated CI release mechanism for publishing zoobot to PyPI. It uses the gh-action-pypi-publish GitHub Action (https://github.com/pypa/gh-action-pypi-publish) to release to PyPI.

    opened by camallen 3
  • Publish latest version to PyPi?

    A question rather than a request: are there any plans to publish the refactored work?

    PyPI shows v0.0.1 was published on 15th March 2021 (https://pypi.org/project/zoobot/#history), but the latest code is ~v0.0.3 (tags) and the refactor seems to be working well.

    Ideally I could pull these packages into my own env / container and then train with the latest code, rather than pulling in from GitHub etc.

    opened by camallen 3
  • setup branch protection rules on 'main'

    https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule

    It may be too restrictive for your use case / dev flows, but we use this for contributor PRs etc. Basically, we ensure that a PR meets certain criteria in terms of our CI runs: a PR can only be merged once one of the CI runs (the v3.7 or v3.9 tests) passes.

    Feel free to close if you don't think this is useful.

    enhancement 
    opened by camallen 2
  • Deprecate TFRecords

    TFRecords are cumbersome and take up a lot of disk space. It's much simpler to learn directly from images on disk, at the cost of some I/O performance.

    This PR removes support for TFRecords in favour of images-on-disk (a minimal tf.data sketch follows after this comment). This will ultimately enable new TensorFlow weights trained on all of DESI (impractical with TFRecords).

    Breaking change for anyone using TFRecords (i.e. everyone using TensorFlow to train from scratch). Finetuning should not be affected.

    TODO - will require new greyscale/colour pretrained models, just for safety.

    opened by mwalmsley 2
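
    For context, the images-on-disk approach amounts to a standard tf.data pipeline reading files directly; a minimal sketch (the directory, sizes and batch size are placeholders, not Zoobot's actual loader):

    import tensorflow as tf

    def load_image(path, target_size=300):
        # read a JPEG from disk and scale pixels to [0, 1] - no TFRecord serialisation step
        image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        return tf.image.resize(image, (target_size, target_size)) / 255.

    paths = tf.data.Dataset.list_files('galaxy_cutouts/*.jpg')  # placeholder directory
    images = paths.map(load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(64).prefetch(tf.data.AUTOTUNE)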
  • feat(CI): Add proposed python CI GH Action

    This PR proposes adding a simple GH Action script that sets up a Python environment, installs the requirements and runs pytest.

    Other things to consider might be using conda for virtual environments and creating CI scripts for Docker as well.

    opened by SauravMaheshkar 2
  • Improve data files for docker

    This PR changes the docker / compose setup, specifically it

    • consolidates the docker files to cuda and tensorflow base images (no need for a python base image)
    • adds a .dockerignore entry for all data files when building the container to keep the size down
    • and provides an easy way to inject them at run time via local directory mounts in the compose file
    • finally, this removes my machine-specific local directory setup for injecting unrelated data files
    opened by camallen 2
  • add wandb logging, freeze batchnorm by default

    Doing some polishing on finetuning

    • Add wandb logging to the full_tree example. @camallen, use this for the dashboard. You will need to add import wandb and wandb.init(authkey, etc.) just before training when running on Azure.
    • Freeze batch norm layers by default when finetuning, via a new recursive function (see the sketch after this comment)
    • Pass additional params via config (thanks Cam)
    • Minor cleanup
    opened by mwalmsley 1
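
    A minimal sketch of the kind of recursive batch-norm freeze described above (illustrative only, not the exact function added in this PR):

    from torch import nn

    def freeze_batchnorm(module: nn.Module):
        # recursively put BatchNorm layers in eval mode and stop their parameters updating
        for child in module.children():
            if isinstance(child, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                child.eval()
                for param in child.parameters():
                    param.requires_grad = False
            else:
                freeze_batchnorm(child)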
  • Add PyTorch Finetuning Capability, Examples

    The key change is the new pytorch.training.finetune() method, which works on either classification (e.g. 0, 1) data or count (e.g. 12 said yes, 4 said no) data. A generic sketch of the frozen-encoder pattern follows after this comment.

    Includes three working examples:

    • Binary classification, with tiny rings subset
    • Counts for single question, with full internal rings data
    • Counts for all questions, with GZ Cosmic Dawn schema

    Also updates various imports for the galaxy-datasets refactor, fixes the prediction method to work on unlabelled data, and makes minor QoL improvements.

    Finally, changes the PyTorch dense layer initialisation to a custom high-uncertainty initialisation - see efficientnet_custom.py

    cc @camallen

    opened by mwalmsley 1
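
    As a rough illustration of the frozen-encoder finetuning pattern (a generic sketch under assumed names, not the signature of pytorch.training.finetune(); the 1280-dim encoder output matches EfficientNet-B0 but is an assumption here):

    from torch import nn

    def build_finetune_model(encoder: nn.Module, encoder_dim: int = 1280, n_classes: int = 2) -> nn.Module:
        # freeze the pretrained feature extractor and train only a new classification head
        for param in encoder.parameters():
            param.requires_grad = False
        return nn.Sequential(encoder, nn.Linear(encoder_dim, n_classes))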
  • Add v0.02 changes

    Adds support (minimal working examples, a guide) for calculating new representations with a trained model.

    Also adds significant new features:

    • Distributed training with several GPUs
    • Metric logging with Weights&Biases (add your own login credentials)
    • Train on color (3-band) images, not just greyscale

    Also adds a critical bugfix: when loading images for direct predictions (i.e. not via TFRecords), pixel values are now correctly normalised to the 0-1 interval expected (though not documented) by the tf.keras.experimental.preprocessing layers. A minimal sketch follows after this comment.

    Also adds misc. minor fixes and documentation tweaks.

    This code was used for the morphology tools paper (to be submitted shortly).

    opened by mwalmsley 1
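
    The normalisation fix above boils down to scaling raw pixel values into [0, 1] before the model sees them; a minimal sketch (the filename is a placeholder):

    import numpy as np
    from PIL import Image

    image = np.array(Image.open('some_galaxy.png'))  # uint8 pixels in [0, 255]
    image = image.astype(np.float32) / 255.          # the preprocessing layers expect [0, 1]
    # predictions = model.predict(np.expand_dims(image, axis=0))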
  • Avoid --extra-index-url via dependency_links

    It should be possible to search for non-standard package repositories using just setup.py, without having the user also set --extra-index-url.

    https://setuptools.pypa.io/en/latest/deprecated/dependency_links.html

    But I couldn't get this to work on a quick try.

    enhancement help wanted 
    opened by mwalmsley 1
  • Can't import finetune while going through finetune_binary_classification.py

    I tried to go through finetune_binary_classification.py, but got the error:

    ImportError: cannot import name 'finetune' from 'zoobot.pytorch.training' (/usr/local/lib/python3.8/dist-packages/zoobot/pytorch/training/__init__.py)

    I tried it with both the kasia and dev branches, went through "git clone" and "pip install" (I remembered there were some issues during the Hackathon regarding that), and also tried importing other features from the folder (e.g. losses), which worked fine.

    bug 
    opened by katgre 2
  • Create a simple decision tree in minimal_example.py

    Instead of using one of the complicated decision trees from DECaLS DR5, let's create a simple decision tree with one dependency, written directly in minimal_example.py. A hypothetical minimal schema is sketched after this comment.

    opened by katgre 0
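
    A hypothetical minimal schema of that shape, with one root question and one dependent question (illustrative only; the exact objects used in minimal_example.py may differ):

    # question -> possible answers (answer columns are question + suffix)
    question_answer_pairs = {
        'smooth-or-featured': ['_smooth', '_featured'],
        'has-spiral-arms': ['_yes', '_no'],  # only asked if the galaxy is featured
    }
    # question -> the answer it depends on (None for the root question)
    dependencies = {
        'smooth-or-featured': None,
        'has-spiral-arms': 'smooth-or-featured_featured',
    }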
Releases(v0.0.3)
  • v0.0.3(Apr 25, 2022)

    Improved documentation and refactored train API (pytorch).

    Awaiting results from several segmentation experiments ahead of public release (inc pytorch version).

  • v0.0.2(Oct 4, 2021)

  • beta(Sep 29, 2021)

    Initial release.

    This had enough documentation and code to replicate the DECaLS model and make predictions. There are a few minor missing arguments and similar typos that you might have stumbled into, because I made some last minute changes without updating the docs, but everything worked with a little stack tracing.

Owner
Mike Walmsley
Annealed Flow Transport Monte Carlo

Annealed Flow Transport Monte Carlo Open source implementation accompanying ICML 2021 paper by Michael Arbel*, Alexander G. D. G. Matthews* and Arnaud

DeepMind 30 Nov 21, 2022
Official repository for "On Generating Transferable Targeted Perturbations" (ICCV 2021)

On Generating Transferable Targeted Perturbations (ICCV'21) Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Fatih Porikli Paper:

Muzammal Naseer 46 Nov 17, 2022
1st Solution For NeurIPS 2021 Competition on ML4CO Dual Task

KIDA: Knowledge Inheritance in Data Aggregation This project releases our 1st place solution on NeurIPS2021 ML4CO Dual Task. Slide and model weights a

MEGVII Research 24 Sep 08, 2022
A diff tool for language models

LMdiff Qualitative comparison of large language models. Demo & Paper: http://lmdiff.net LMdiff is a MIT-IBM Watson AI Lab collaboration between: Hendr

Hendrik Strobelt 27 Dec 29, 2022
Fetches free HTTP proxies and generates a proxifier configuration file

freeproxy fetches free HTTP proxies and generates a proxifier configuration file. WeChat official account: 台下言书. Tool documentation: https://mp.weixin.qq.com/s?__biz=MzIyNDkwNjQ5Ng==&mid=2247484425&idx=1&sn=56ccbe130822aa35038095317

说书人 32 Mar 25, 2022
Clustering with variational Bayes and population Monte Carlo

pypmc pypmc is a python package focusing on adaptive importance sampling. It can be used for integration and sampling from a user-defined target densi

45 Feb 06, 2022
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".

Project This repo has been populated by an initial template to help get you started. Please make sure to update the content to build a great experienc

Microsoft 674 Dec 26, 2022
Byte-based multilingual transformer TTS for low-resource/few-shot language adaptation.

One model to speak them all 🌎 Audio Language Text ▷ Chinese 人人生而自由,在尊严和权利上一律平等。 ▷ English All human beings are born free and equal in dignity and rig

Mutian He 60 Nov 14, 2022
DIVeR: Deterministic Integration for Volume Rendering

DIVeR: Deterministic Integration for Volume Rendering This repo contains the training and evaluation code for DIVeR. Setup python 3.8 pytorch 1.9.0 py

64 Dec 27, 2022
Efficient-GlobalPointer - Pytorch Efficient GlobalPointer

Introduction: thanks to 苏神 (Su Jianlin) for the model; original article: https://spaces.ac.cn/archives/8877. How to run: the corresponding model EfficientGlobalPoi

powerycy 40 Dec 14, 2022
(JMLR' 19) A Python Toolbox for Scalable Outlier Detection (Anomaly Detection)

Python Outlier Detection (PyOD) Deployment & Documentation & Stats & License PyOD is a comprehensive and scalable Python toolkit for detecting outlyin

Yue Zhao 6.6k Jan 05, 2023
Pytorch implementation of Hinton's Dynamic Routing Between Capsules

pytorch-capsule A Pytorch implementation of Hinton's "Dynamic Routing Between Capsules". https://arxiv.org/pdf/1710.09829.pdf Thanks to @naturomics fo

Tim Omernick 625 Oct 27, 2022
This repository implements WGAN_GP.

Image_WGAN_GP This repository implements WGAN_GP. Image_WGAN_GP This repository uses wgan to generate mnist and fashionmnist pictures. Firstly, you ca

Lieon 6 Dec 10, 2021
"Domain Adaptive Semantic Segmentation without Source Data" (ACM MM 2021)

LDBE Pytorch implementation for two papers (the paper will be released soon): "Domain Adaptive Semantic Segmentation without Source Data", ACM MM2021.

benfour 16 Sep 28, 2022
Mae segmentation - Reproduction of semantic segmentation using masked autoencoder (mae)

ADE20k Semantic segmentation with MAE Getting started Install the mmsegmentation

97 Dec 17, 2022
Code for Emergent Translation in Multi-Agent Communication

Emergent Translation in Multi-Agent Communication PyTorch implementation of the models described in the paper Emergent Translation in Multi-Agent Comm

Facebook Research 75 Jul 15, 2022
Reimplementation of the paper "Attention, Learn to Solve Routing Problems!" in jax/flax.

JAX + Attention Learn To Solve Routing Problems Reimplementation of the paper Attention, Learn to Solve Routing Problems! using Jax and Flax. Fully su

Gabriela Surita 7 Dec 01, 2022
[ICCV 2021] A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation

[ICCV 2021] A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation

CodingMan 45 Dec 12, 2022
The official implementation of the IEEE S&P`22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking".

Watermark-Robustness-Toolbox - Official PyTorch Implementation This repository contains the official PyTorch implementation of the following paper to

49 Dec 19, 2022
Source codes of CenterTrack++ in 2021 ICME Workshop on Big Surveillance Data Processing and Analysis

MOT Tracked object bounding box association (CenterTrack++) New association method based on CenterTrack. Two new branches (Tracked Size and IOU) are a

36 Oct 04, 2022