Surrogate- and Invariance-Boosted Contrastive Learning (SIB-CL)

This repository contains all source code used to generate the results in the article "Surrogate- and invariance-boosted contrastive learning for data-scarce applications in science". (url: to-be-updated)

  • The folder generate_datasets contains all numerical programs used to generate the datasets for both Photonic Crystals (PhCs) and the Time-independent Schrödinger Equation (TISE)
  • main.py is the main code used to train the neural networks (explained in detail below)

Dependencies

Please install the required Python packages: pip install -r requirements.txt

A Python 3 environment can be created beforehand: conda create -n sibcl python=3.8; conda activate sibcl

Access to MATLAB is required to calculate the density of states (DOS) of the PhCs.

Dataset Generation

Photonic Crystals (PhCs)

The relevant code is stored in generate_datasets/PhC/. Periodic unit cells are defined using a level set of a Fourier sum; different unit cells can be generated using the get_random() method of the FourierPhC class defined in fourier_phc.py.
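As a hedged usage sketch (the constructor arguments and return value here are assumptions, not taken from fourier_phc.py, whose actual interface should be consulted):

    # Hypothetical usage of the FourierPhC class described above.
    from fourier_phc import FourierPhC

    phc = FourierPhC()           # Fourier-sum level-set generator (constructor args assumed)
    unitcell = phc.get_random()  # sample one random periodic unit cell
    print(type(unitcell))        # e.g. a 2D array describing the permittivity pattern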

To generate the labeled PhC datasets, we first compute their band structures using MPB. This can be executed via:

For the target dataset of random Fourier unit cells, python phc_gendata.py --h5filename="mf1-s1" --pol="tm" --nsam=5000 --maxF=1 --seed=1;

and for the source dataset of simple cylinders, python phc_gencylin.py --h5filename="cylin" --pol="tm" --nsam=10000;

each program will create a dataset with the eigenfrequencies, group velocities, etc., stored in a .h5 file (which can be accessed using the h5py package). We then calculate the DOS using the GGR method implemented in the MATLAB code at https://github.com/boyuanliuoptics/DOS-calculation/blob/master/DOS_GGR.m. To do so, we first parse the data to create the .txt files required as inputs to that program, compute the DOS using MATLAB, and then add the DOS labels back to the original .h5 files. These steps are executed automatically by running the shell script get_DOS.sh after modifying the h5 filename identifier defined at its top. Note that for this to run smoothly, Python and MATLAB first need to be added to the PATH.
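As a minimal sketch of how the generated .h5 files can be inspected with h5py (the file name follows the --h5filename example above, with a ".h5" suffix assumed; the commented key name is a placeholder, not necessarily the name used by the scripts):

    import h5py

    # Open the dataset written by phc_gendata.py and list its contents.
    with h5py.File("mf1-s1.h5", "r") as f:
        f.visit(print)                   # print every group/dataset stored in the file
        # Once the actual key names are known, arrays can be read directly, e.g.:
        # eigfreqs = f["eigfreqs"][:]    # hypothetical key for the eigenfrequencies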

Time-independent Schrödinger Equation (TISE)

The relevant code is stored in generate_datasets/TISE/. Example usage:

To generate the target dataset, e.g. in 3D, python tise_gendata.py --h5filename="tise3d" --ndim 3 --nsam 5000

To generate the low-resolution surrogate dataset, python tise_gendata.py --h5filename='tise3d_lr' --ndim 3 --nsam 10000 --lowres --orires=32 (--orires defines the resolution of the input to the neural network)

To generate the QHO (quantum harmonic oscillator) dataset, python tise_genqho.py --h5filename='tise2d_qho' --ndim 2 --nsam 10000
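For intuition about what the ground-state-energy labels in these datasets correspond to, the following is a generic finite-difference sketch (with hbar = m = 1 on a unit box with hard walls), not the actual discretization, units, or potentials used by tise_gendata.py:

    import numpy as np
    from scipy.sparse import diags, identity, kron
    from scipy.sparse.linalg import eigsh

    n = 32                                  # grid points per dimension (cf. --orires)
    h = 1.0 / (n + 1)                       # grid spacing on a unit box (Dirichlet walls)
    lap1d = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    lap2d = kron(lap1d, identity(n)) + kron(identity(n), lap1d)

    V = np.random.rand(n, n)                # stand-in for one random potential sample
    H = -0.5 * lap2d + diags(V.ravel())     # Hamiltonian: kinetic + potential terms
    E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
    print("ground-state energy:", E0)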

SIB-CL and baseline training

Training of the neural networks for all problems introduced in the article (i.e. PhC DOS prediction, PhC band-structure prediction, and TISE ground-state-energy prediction using either low-resolution or QHO data as the surrogate) can be executed using main.py with the appropriate flags (see below). The same code allows training via the SIB-CL framework or any of the baselines, again selected with the appropriate flag. It also covers prediction problems not presented in the article, such as predicting higher energy states of the TISE, TISE wavefunctions, and a single band of the band structure.

Important flags:

--path_to_h5: indicates the directory where the .h5 datasets are located. The .h5 filenames defined in the dataset classes in datasets_PhC_SE.py should also be modified to match the names used during dataset generation.

--predict: defines the prediction task. Options: 'DOS', 'bandstructures', 'eigval', 'oneband', 'eigvec'

--train: specifies whether to train via SIB-CL or one of the baselines. Options: 'sibcl', 'tl', 'sl', 'ssl' ('ssl' performs regular contrastive learning without the surrogate dataset). For the invariance-boosted baselines, e.g. TL-I or SL-I, specify 'tl' or 'sl' here and add the relevant invariance flags (see below).

--iden: required; specifies the identifier used when saving models, training logs and results.

Invariance flags:

  • --translate_pbc: include rolling (periodic) translations
  • --pg_uniform: uniformly sample the point-group symmetry transformations
  • --scale: scale the unit cell (used for PhC)
  • --rotate: apply 4-fold rotations
  • --flip: apply horizontal and vertical mirrors

If --pg_uniform is used, there is no need to also include --rotate and --flip.
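These flags correspond to standard array operations on the input unit cells; as a rough numpy illustration of the transformations (not the repository's augmentation code, and with arbitrary sampling choices):

    import numpy as np

    def augment(cell, rng):
        # rolling translation with periodic boundary conditions (--translate_pbc)
        shifts = rng.integers(0, cell.shape[0], size=2)
        cell = np.roll(cell, tuple(shifts), axis=(0, 1))
        # 4-fold rotations (--rotate) and mirrors (--flip); --pg_uniform would instead
        # sample uniformly over all point-group elements
        cell = np.rot90(cell, k=int(rng.integers(0, 4)))
        if rng.random() < 0.5:
            cell = np.flip(cell, axis=int(rng.integers(0, 2)))
        return cell

    rng = np.random.default_rng(0)
    augmented = augment(rng.random((32, 32)), rng)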

Other optional flags can be displayed via python main.py --help. Examples of shell scripts can be found in the sh_scripts folder.
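For example, a hypothetical SIB-CL run on the PhC DOS task with translation and point-group invariances could look like python main.py --predict='DOS' --train='sibcl' --iden='sibcl_dos' --translate_pbc --pg_uniform, with --path_to_h5 added as needed; the identifier here is just a placeholder.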

Training outputs:

By default, running main.py will create 3 subdirectories:

  • ./pretrained_models/: state dictionaries of pretrained models at various epochs indicated in the eplist variable will be saved to this directory. These models are used for further fine-tuning.
  • ./dicts/: stores the evaluation losses on the test set as dictionaries saved as .json files. The results can then be plotted using plot_results.py.
  • ./tlogs/: training curves for pre-training and fine-tuning are stored in dictionaries saved as .json files. The training curves can be plotted using get_training_logs.py. Alternatively, the --log_to_tensorboard flag can be set and the training curves can be viewed using TensorBoard; in this case, the dictionaries will not be generated.
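As a minimal sketch of inspecting these .json outputs (the file name below is a placeholder that depends on the identifier passed to --iden, and the dictionary keys depend on main.py):

    import json
    import matplotlib.pyplot as plt

    # Load a saved training-log dictionary (hypothetical identifier in the file name).
    with open("./tlogs/sibcl_dos.json") as f:
        logs = json.load(f)

    # Plot any list-valued entries as training curves, e.g. pre-training/fine-tuning losses.
    for name, curve in logs.items():
        if isinstance(curve, list):
            plt.plot(curve, label=name)
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()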