GalaXC: Graph Neural Networks with Labelwise Attention for Extreme Classification

Overview

This repository provides the implementation of GalaXC: Graph Neural Networks with Labelwise Attention for Extreme Classification (The Web Conference, 2021). If you use this code, please cite:

@InProceedings{Saini21,
    author    = {Saini, D. and Jain, A. K. and Dave, K. and Jiao, J. and Singh, A. and Zhang, R. and Varma, M.},
    title     = {GalaXC: Graph Neural Networks with Labelwise Attention for Extreme Classification},
    booktitle = {Proceedings of The Web Conference},
    month     = {April},
    year      = {2021}
}

Setup GalaXC

git clone https://github.com/Extreme-classification/GalaXC.git
conda env create -f GalaXC/environment.yml
conda activate galaxc
pip install hnswlib
git clone https://github.com/kunaldahiya/pyxclib.git
cd pyxclib
python setup.py install
cd ../GalaXC
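
If the environment resolves correctly, the main dependencies should be importable. A minimal sanity check (a hypothetical snippet, not part of the repository; it assumes PyTorch is provided by environment.yml):

# Hypothetical sanity check: verify that the key dependencies installed by the
# steps above are importable before launching training.
import torch                       # assumed to come from environment.yml
import hnswlib                     # installed via pip above
from xclib.data import data_utils  # installed from the pyxclib repository

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())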

Dataset Structure

Your dataset should have the following structure:

DatasetName (e.g. LF-AmazonTitles-131K)
│   trn_X.txt   (raw text of training documents, one per line)
│   tst_X.txt   (raw text of test documents, one per line)
│   Y.txt       (raw text of labels, one per line)
│   trn_X_Y.txt (training ground-truth labels in sparse (spmat) format)
│   tst_X_Y.txt (test ground-truth labels in sparse (spmat) format)
│   filter_labels_test.txt (test document-label pairs to filter out during evaluation because the label text is identical to the test document text)
│
└───XXCondensedData (embeddings for trn/tst documents and labels; for the benchmark datasets, XX=DX [Astec])
    │   trn_point_embs.npy (2D numpy matrix of training document embeddings)
    │   tst_point_embs.npy (2D numpy matrix of test document embeddings)
    │   label_embs.npy     (2D numpy matrix of label embeddings)

We provide the DX embeddings (from Module 1 of Astec) for the public benchmark datasets for ease of use. Got better (higher-recall) embeddings from somewhere? Just plug in the new ones and GalaXC will perform better, with no code changes needed! These files for LF-AmazonTitles-131K, LF-WikiSeeAlsoTitles-320K and LF-AmazonTitles-1.3M can be found here. Except for the files in DXCondensedData, all other files are copies of the datasets from The Extreme Classification Repository.
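
For reference, the files above can be loaded with numpy and pyxclib. This is only an illustrative sketch (the path is a placeholder and the loading code is not part of GalaXC itself):

# Illustrative loading of the dataset layout described above (assumed paths).
import numpy as np
from xclib.data import data_utils

data_dir = "/your/path/to/data/LF-AmazonTitles-131K"

# Ground-truth label matrices stored in sparse (spmat) text format
trn_X_Y = data_utils.read_sparse_file(f"{data_dir}/trn_X_Y.txt")
tst_X_Y = data_utils.read_sparse_file(f"{data_dir}/tst_X_Y.txt")

# Pre-computed DX (Astec Module 1) embeddings; swap in your own .npy files
# with the same shapes to try different representations.
trn_embs = np.load(f"{data_dir}/DXCondensedData/trn_point_embs.npy")
tst_embs = np.load(f"{data_dir}/DXCondensedData/tst_point_embs.npy")
lbl_embs = np.load(f"{data_dir}/DXCondensedData/label_embs.npy")

print(trn_X_Y.shape, trn_embs.shape, lbl_embs.shape)

Replacing the three .npy files with better embeddings of the same shapes is all that is needed to run GalaXC on new representations, as noted above.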

Sample Runs

To reproduce the numbers reported in the paper on the public benchmark datasets, use the following sample runs.

LF-AmazonTitles-131K

python -u -W ignore train_main.py --dataset /your/path/to/data/LF-AmazonTitles-131K --save-model 0  --devices cuda:0  --num-epochs 30  --num-HN-epochs 0  --batch-size 256  --lr 0.001  --attention-lr 0.001 --adjust-lr 5,10,15,20,25,28  --dlr-factor 0.5  --mpt 0  --restrict-edges-num -1  --restrict-edges-head-threshold 20  --num-random-samples 30000  --random-shuffle-nbrs 0  --fanouts 4,3,2  --num-HN-shortlist 500   --embedding-type DX  --run-type NR  --num-validation 25000  --validation-freq -1  --num-shortlist 500 --predict-ova 0  --A 0.6  --B 2.6

LF-WikiSeeAlsoTitles-320K

python -u -W ignore train_main.py --dataset /your/path/to/data/LF-WikiSeeAlsoTitles-320K --save-model 0  --devices cuda:0  --num-epochs 30  --num-HN-epochs 0  --batch-size 256  --lr 0.001  --attention-lr 0.05 --adjust-lr 5,10,15,20,25,28  --dlr-factor 0.5  --mpt 0  --restrict-edges-num -1  --restrict-edges-head-threshold 20  --num-random-samples 32000  --random-shuffle-nbrs 0  --fanouts 4,3,2  --num-HN-shortlist 500  --repo 1  --embedding-type DX --run-type NR  --num-validation 25000  --validation-freq -1  --num-shortlist 500  --predict-ova 0  --A 0.55  --B 1.5

LF-AmazonTitles-1.3M

python -u -W ignore train_main.py --dataset /your/path/to/data/LF-AmazonTitles-1.3M --save-model 0  --devices cuda:0  --num-epochs 24  --num-HN-epochs 15  --batch-size 512  --lr 0.001  --attention-lr 0.05 --adjust-lr 4,8,12,16,18,20,22  --dlr-factor 0.5  --mpt 0  --restrict-edges-num 5  --restrict-edges-head-threshold 20  --num-random-samples 100000  --random-shuffle-nbrs 1  --fanouts 3,3,3  --num-HN-shortlist 500   --embedding-type DX  --run-type NR  --num-validation 25000  --validation-freq -1  --num-shortlist 500 --predict-ova 0  --A 0.6  --B 2.6
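
The --A and --B flags most likely set the dataset-specific parameters of the standard propensity model of Jain et al. (2016) used for propensity-scored metrics; the values above match the usual settings for the Amazon and Wikipedia dataset families. A small sketch of how such inverse propensities are typically computed from the training labels (for illustration only; this interpretation is an assumption and GalaXC's own evaluation code may differ):

# Standard XC inverse-propensity model (Jain et al., 2016), shown to illustrate
# what the --A/--B flags are assumed to parameterize.
import numpy as np

def inverse_propensity(trn_X_Y, A=0.6, B=2.6):
    """trn_X_Y: scipy sparse matrix of shape (num_train_points, num_labels)."""
    N = trn_X_Y.shape[0]                   # number of training documents
    freqs = np.ravel(trn_X_Y.sum(axis=0))  # per-label frequency N_l
    C = (np.log(N) - 1.0) * (B + 1.0) ** A
    return 1.0 + C * (freqs + B) ** (-A)   # inverse propensity 1/p_l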
