Semantic Scholar's Author Disambiguation Algorithm & Evaluation Suite

S2AND

This repository provides access to the S2AND dataset and S2AND reference model described in the paper S2AND: A Benchmark and Evaluation System for Author Name Disambiguation by Shivashankar Subramanian, Daniel King, Doug Downey, Sergey Feldman (https://arxiv.org/abs/2103.07534).

The reference model will be live on semanticscholar.org later this year, but the trained model is available now as part of the data download (see below).

Installation

To install this package, run the following:

git clone https://github.com/allenai/S2AND.git
cd S2AND
conda create -y --name s2and python==3.7
conda activate s2and
pip install -r requirements.in
pip install -e .

To obtain the training data, run this command after the package is installed (from inside the S2AND directory):
[Expected download size is: 50.4 GiB]

aws s3 sync --no-sign-request s3://ai2-s2-research-public/s2and-release data/

If you run into cryptic errors about GCC on macOS while installing the requirements, try this instead:

CFLAGS='-stdlib=libc++' pip install -r requirements.in

Configuration

Modify the config file at data/path_config.json. This file should look like this:

{
    "main_data_dir": "absolute path to wherever you downloaded the data to",
    "internal_data_dir": "ignore this one unless you work at AI2"
}

As the dummy file says, main_data_dir should be set to the absolute path of the directory where you downloaded the data, and internal_data_dir can be ignored; it is only used by scripts that rely on unreleased data internal to Semantic Scholar.
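
As a quick sanity check before loading any data, you can read the config back and confirm the path exists. This snippet only assumes the file lives at data/path_config.json as described above:

import json
import os

# read the config back and confirm main_data_dir points at the downloaded data
with open("data/path_config.json") as f:
    config = json.load(f)

assert os.path.isdir(config["main_data_dir"]), "main_data_dir does not exist yet"
print("data directory:", config["main_data_dir"])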

How to use S2AND for loading data and training a model

Once you have downloaded the datasets, you can go ahead and load up one of them:

from os.path import join
from s2and.data import ANDData

dataset_name = "pubmed"
parent_dir = "data/pubmed/
dataset = ANDData(
    signatures=join(parent_dir, f"{dataset_name}_signatures.json"),
    papers=join(parent_dir, f"{dataset_name}_papers.json"),
    mode="train",
    specter_embeddings=join(parent_dir, f"{dataset_name}_specter.pickle"),
    clusters=join(parent_dir, f"{dataset_name}_clusters.json"),
    block_type="s2",
    train_pairs_size=100000,
    val_pairs_size=10000,
    test_pairs_size=10000,
    name=dataset_name,
    n_jobs=8,
)

This may take a few minutes - there is a lot of text pre-processing to do.

The first step in the S2AND pipeline is to specify a featurizer and then train a binary classifier that tries to guess whether two signatures are referring to the same person.

We'll do hyperparameter selection with the validation set and then get the test area under ROC curve.

Here's how to do all that:

from s2and.model import PairwiseModeler
from s2and.featurizer import FeaturizationInfo, featurize
from s2and.eval import pairwise_eval

featurization_info = FeaturizationInfo()
# the cache will make it faster to train multiple times - it stores the features on disk for you
train, val, test = featurize(dataset, featurization_info, n_jobs=8, use_cache=True)
X_train, y_train = train
X_val, y_val = val
X_test, y_test = test

# calibration fits isotonic regression after the binary classifier is fit
# monotone constraints help the LightGBM classifier behave sensibly
pairwise_model = PairwiseModeler(
    n_iter=25, calibrate=True, monotone_constraints=featurization_info.lightgbm_monotone_constraints
)
# this does hyperparameter selection, which is why we need to pass in the validation set.
pairwise_model.fit(X_train, y_train, X_val, y_val)

# this will also dump a lot of useful plots (ROC, PR, SHAP) to the figs_path
pairwise_metrics = pairwise_eval(X_test, y_test, pairwise_model.classifier, figs_path='figs/', title='example')
print(pairwise_metrics)

The second stage in the S2AND pipeline is to tune hyperparameters for the clusterer on the validation data and then evaluate the full clustering pipeline on the test blocks.

We use agglomerative clustering as implemented in fastcluster with average linkage. There is only one hyperparameter to tune.

from s2and.model import Clusterer, FastCluster
from s2and.eval import cluster_eval
from hyperopt import hp

clusterer = Clusterer(
    featurization_info,
    pairwise_model,
    cluster_model=FastCluster(linkage="average"),
    search_space={"eps": hp.uniform("eps", 0, 1)},
    n_iter=25,
    n_jobs=8,
)
clusterer.fit(dataset)

# the metrics_per_signature are there so we can break out the facets if needed
metrics, metrics_per_signature = cluster_eval(dataset, clusterer)
print(metrics)

For a fuller example, please see the transfer script: scripts/transfer_experiment.py.

How to use S2AND for predicting with a saved model

Assuming you have a clusterer already fit, you can dump the model to disk like so:

import pickle

with open("saved_model.pkl", "wb") as _pkl_file:
    pickle.dump(clusterer, _pkl_file)

You can then reload it, load a new dataset, and run prediction:

import pickle

with open("saved_model.pkl", "rb") as _pkl_file:
    clusterer = pickle.load(_pkl_file)

anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    name="your_name_here",
    mode="inference",
    block_type="s2",
)
pred_clusters, pred_distance_matrices = clusterer.predict(anddata.get_blocks(), anddata)

Our released models are in the S3 folder referenced above, and are called production_model.pickle and full_union_seed_*.pickle. They can be loaded the same way, except that the pickled object is a dictionary with a clusterer key.
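
For example, loading the production model might look like this (a minimal sketch: the path assumes you synced the S3 folder into data/ as above, and the only difference from the snippet earlier is the extra dictionary lookup):

import pickle

# the released pickles wrap the fitted Clusterer in a dictionary,
# so pull it out of the "clusterer" key after loading
with open("data/production_model.pickle", "rb") as _pkl_file:
    saved = pickle.load(_pkl_file)

clusterer = saved["clusterer"]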

Incremental prediction

There is also a predict_incremental function on the Clusterer that allows prediction for just a small set of new signatures. When instantiating ANDData, you can pass in cluster_seeds, which will be used instead of model predictions for those signatures. If you call predict_incremental, the full distance matrix is not created; each new signature is simply assigned to the existing cluster it has the lowest average distance to, as long as that distance is below the model's eps. Any new signature that is not within eps of an existing cluster is instead reclustered separately with the other unassigned signatures.
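
Here is a rough sketch of what that flow might look like. The exact cluster_seeds format and the predict_incremental argument names are described in the ANDData and Clusterer docstrings, so treat the names below as assumptions to check against the code:

from s2and.data import ANDData

# hypothetical sketch of incremental prediction
anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    cluster_seeds=cluster_seeds,  # previously assigned signatures, used as-is
    name="your_name_here",
    mode="inference",
    block_type="s2",
)

new_signature_ids = ["sig_1", "sig_2"]  # the small set of new signatures to assign
pred_clusters = clusterer.predict_incremental(new_signature_ids, anddata)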

Reproducibility

The experiments in the paper were run with the Python (3.7.9) package versions in paper_experiments_env.txt. You can install these packages exactly by running pip install pip==21.0.0 and then pip install -r paper_experiments_env.txt --use-feature=fast-deps --use-deprecated=legacy-resolver. Rerunning on the branch s2and_paper should produce the same numbers as in the paper (we will update here if this stops being true).

Licensing

The code in this repo is released under the Apache 2.0 license (license included in the repo). The dataset is released under ODC-BY (included in the S3 bucket with the data). We would also like to acknowledge that some of the affiliations data comes directly from the Microsoft Academic Graph (https://aka.ms/msracad).

Citation

@misc{subramanian2021s2and,
    title={S2AND: A Benchmark and Evaluation System for Author Name Disambiguation},
    author={Shivashankar Subramanian and Daniel King and Doug Downey and Sergey Feldman},
    year={2021},
    eprint={2103.07534},
    archivePrefix={arXiv},
    primaryClass={cs.DL}
}

Comments
  • Find some wrong labels in dataset?

    For example, in the Pubmed dataset, in the "clusters.json" file, there is a cluster “PM_352”: ['18834', '18835', '18836', '18837', '18838', '18839', '18840', '18841']. But I checked "signatures.json", and '18834' is in given_block "z zhang" while '18836' is in given_block "d zhang", so how could they be in the same cluster? Is there anything I misunderstand?

    opened by hapoyige 15
  • Add extra name incompatibility check

    This PR attempts to prevent new name incompatibilities from being added to a cluster. So if a claimed cluster contains S Govender and Sharlene Govender, s2and might break that claimed cluster up into two, then attach Suendharan Govender to the S Govender piece, and then when we remerge, we have a cluster with S Govender, Sharlene Govender, and Suendharan Govender. I suspect this is the issue behind https://github.com/allenai/scholar/issues/27801#issuecomment-847397953, but did not verify that.

    opened by dakinggg 5
  • Question: Can predictions run in multi-core

    I see that the current implementation of prediction using the production model will run on a single core, which is very slow when working with larger datasets. I was wondering if there is an already-explored way of doing this using multiple cores, if not a GPU?

    opened by jinamshah 4
  • global_dataset trick not working?

    @dakinggg I've got a branch going to make S2AND work for paper deduplication. I haven't really messed with your global_dataset trick (I think), but now it stops working if n_jobs > 1. It works fine when run in serial.

    Test fails with FAILED tests/test_featurizer.py::TestData::test_featurizer - NameError: name 'global_dataset' is not defined

    Did you run into this when making it work originally? Any ideas?

    opened by sergeyf 2
  • Question: How does one go about converting their own dataset to the one used for training

    Hello, I understand that this is not technically an issue, but I just want to understand how to convert a dataset of my own (one that has information like the research paper name and the authors' details such as name, affiliation, email id, etc.) to a dataset that can be consumed for training from scratch.

    opened by jinamshah 2
  • No cluster.json in the medline dataset.

    I found that the medline dataset does not contain the "medline_cluster.json" file, which prevents me from reproducing the results. Please add the cluster.json file to S2AND.

    opened by skojaku 1
  • Link to evaluation dataset

    Thank you for this excellent open AND-algorithm and data!

    I followed the link from the paper to this repository, but I was not able to find the S2AND dataset. Could you add some help to the readme, please?

    opened by tomthe 1
  • Update readme.md

    I added intro language and it includes reference to the saved models. Are these uploaded already? If so, can you add a commit somewhere in the readme about it and maybe a short example about how to load them?

    opened by sergeyf 1
  • Thanks for the reply! Actually, I am using the dataset to do a clustering task so the divided block and cluster labels matter. Here comes another confusion for me, which is, I thought

    Thanks for the reply! Actually, I am using the dataset to do a clustering task so the divided block and cluster labels matter. Here comes another confusion for me, which is, I thought "block" came from original source data and "given_block" is the modified version of S2AND since the number of statistics matches the #Block in Table II in the S2AND paper. Any suggestions?

    Originally posted by @hapoyige in https://github.com/allenai/S2AND/issues/25#issuecomment-1046418074

    opened by hapoyige 0
  • Be more explicit about use_cache to avoid

    Zhipeng and the SPECTER+ team missed the cache specification and were debugging for a long time. These changes should hopefully make the cache easier to understand and notice.

    opened by sergeyf 0
  • Incremental bug

    Fixes an issue with the incremental clustering code where we were not splitting claimed profiles properly to align with the expected s2and output. The result was that incompatible clusters resulting from claims remained incompatible, and new mentions could not be assigned to them.

    opened by dakinggg 0
  • Future improvements

    • [ ] Unify the set of languages between cld2 and fasttext (see unify_lang branch for a start)
    • [ ] Audit the list of name pairs (noticed (maria, mary), (kathleen, katherine))
    • [ ] Generally improve language detection on titles (would require a whole model)
    • [ ] if a person has two very disjoint "personas", they will end up as two clusters. Probably not resolvable, but putting here anyway
    • [ ] somehow do better with low information papers (e.g. no abstract, venue, affiliation, references)
    opened by dakinggg 0