AI-powered literature discovery and review engine for medical/scientific papers

Overview


paperai is an AI-powered literature discovery and review engine for medical/scientific papers. It helps automate tedious literature reviews, allowing researchers to focus on their core work. Queries filter papers matching specified criteria, and reports powered by extractive question-answering identify answers to key questions within sets of medical/scientific papers.

paperai was used to analyze the COVID-19 Open Research Dataset (CORD-19), winning multiple awards in the CORD-19 Kaggle challenge.

paperai and NeuML have been recognized in a number of articles and media posts.

Installation

The easiest way to install is via pip and PyPI:

pip install paperai

You can also install paperai directly from GitHub. Using a Python Virtual Environment is recommended.

pip install git+https://github.com/neuml/paperai

Python 3.7+ is supported (raised from 3.6 in v2.0.0; see the release notes below).

See this link to help resolve environment-specific install issues.

Docker

A Dockerfile with commands to install paperai, all dependencies and scripts is available in this repository.

Clone this git repository and run the following to build and run the Docker image.

docker build -t paperai -f docker/Dockerfile .
docker run --name paperai --rm -it paperai

This will bring up a paperai command shell. Standard Docker commands can be used to copy files into the container, or commands can be run directly in the shell to retrieve input content. All scripts in the following examples are available in this environment.

paperetl's Dockerfile can be combined with this Dockerfile to produce a single image that can both index and query content. The files from the paperetl project's scripts directory need to be placed in paperai's scripts directory. The paperetl Dockerfile also needs to be copied over (it's referenced as paperetl.Dockerfile here).

docker build -t base -f docker/Dockerfile .
docker build -t paperai --build-arg BASE_IMAGE=base -f docker/paperetl.Dockerfile .
docker run --name paperai --rm -it paperai

Examples

The following notebooks and applications demonstrate the capabilities provided by paperai.

Notebooks

• CORD-19 Analysis with Sentence Embeddings - Builds paperai-based submissions for the CORD-19 Challenge
• CORD-19 Report Builder - Template for building new reports

Applications

• Search - Search a paperai index. Set query parameters, execute searches and display results.

Building a model

paperai indexes databases previously built with paperetl. paperai currently supports querying SQLite databases.

The following sections show how to build an index for a SQLite articles database.

This example assumes the database and model path is cord19/models. Substitute as appropriate.

  1. Download CORD-19 fastText vectors

    scripts/getvectors.sh cord19/vectors

    A full vector model build can optionally be run with the following command.

    python -m paperai.vectors cord19/models

    CORD-19 fastText vectors are also available on Kaggle.

  2. Build embeddings index

    python -m paperai.index cord19/models cord19/vectors/cord19-300d.magnitude

The paperai.index process takes two optional arguments: the model path and the vector file path. If no parameters are passed in, the default model location is ~/.cord19.
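
If the vector file is missing, paperai.index fails with the "Vector model file not found" error reported in the issues below. The following is a minimal sketch, assuming the example paths above, that verifies the vectors exist before invoking the documented command:

import subprocess
from pathlib import Path

# Example paths from this README; substitute as appropriate
vectors = Path("cord19/vectors/cord19-300d.magnitude")

if not vectors.exists():
    raise FileNotFoundError(f"Vector file missing, run scripts/getvectors.sh first: {vectors}")

# Same as: python -m paperai.index cord19/models cord19/vectors/cord19-300d.magnitude
subprocess.run(["python", "-m", "paperai.index", "cord19/models", str(vectors)], check=True)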

Building a report file

Reports support generating output in multiple formats. An example report call:

python -m paperai.report tasks/risks.yml 50 md cord19/models

In this call, the arguments are the task definition file, the number of search results per query (50), the output format and the model path. The following report formats are supported:

  • Markdown (Default) - Renders a Markdown report. Columns and answers are extracted from articles with the results stored in a Markdown file.
  • CSV - Renders a CSV report. Columns and answers are extracted from articles with the results stored in a CSV file.
  • Annotation - Columns and answers are extracted from articles with the results annotated over the original PDF files. Requires passing in a path with the original PDF files.

In the example above, a file named tasks/risks.md will be created. Example report configuration files can be found here.
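
As a quick sketch, the report call above can be repeated per output format from Python. This simply re-invokes the documented command; the annotation format is omitted because it additionally requires a path with the original PDF files.

import subprocess

# Render the example task as Markdown and CSV by re-invoking the
# documented CLI; paths are the example paths from this README
for fmt in ("md", "csv"):
    subprocess.run(
        ["python", "-m", "paperai.report", "tasks/risks.yml", "50", fmt, "cord19/models"],
        check=True,
    )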

Running queries

The fastest way to run queries is to start a paperai shell:

paperai cord19/models

A prompt will come up. Queries can be typed directly into the console.

Tech Overview

The tech stack is built on Python and creates a sentence embeddings index with FastText + BM25. Background on this method can be found in this Medium article.

The model is a combination of a sentence embeddings index and a SQLite database with the articles. Each article is parsed into sentences and stored in SQLite along with the article metadata. FastText vectors are built over the full corpus. The sentence embeddings index only uses tagged articles, which helps produce the most relevant results.
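
Since the article store is a plain SQLite database, it can be inspected directly. Below is a minimal sketch using Python's built-in sqlite3 module; the articles.sqlite filename and the sections table are assumptions based on the paperetl workflow and the issue reports below.

import sqlite3

# Assumed paperetl-built database under the example model path
connection = sqlite3.connect("cord19/models/articles.sqlite")
cursor = connection.cursor()

# Count the parsed sentences stored for the corpus
cursor.execute("SELECT COUNT(*) FROM sections")
print("sections:", cursor.fetchone()[0])

connection.close()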

Multiple entry points exist to interact with the model.

• paperai.report - Builds a markdown report for a series of queries. For each query, the best articles are shown, along with the top matches from those articles and a highlights section showing the most relevant sections from the embeddings search.
  • paperai.query - Runs a single query from the terminal
  • paperai.shell - Allows running multiple queries from the terminal
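
Since the embeddings index logic lives in the txtai project (see the v1.2.0 release notes below), the index can in principle also be loaded and searched programmatically. This is a hypothetical sketch using txtai's Embeddings API, not a documented paperai interface; the direct-load path and the example query are assumptions.

from txtai.embeddings import Embeddings

# Hypothetical direct load of the index under the example model path;
# loading a paperai index this way is an assumption, not a documented API
embeddings = Embeddings()
embeddings.load("cord19/models")

# Print ids and similarity scores of the closest indexed sentences
for uid, score in embeddings.search("risk factors", 5):
    print(uid, score)
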
Comments
• Vector model file not found (cord19-300d.magnitude)

• Issue moved here from the wrong project.

    Hi,

I get the following error when running python -m paperai.index:

raise IOError(ENOENT, "Vector model file not found", path)
FileNotFoundError: [Errno 2] Vector model file not found: 'C:\Users\x\.cord19\vectors\cord19-300d.magnitude'

    PS. I am quite new to all this; so, apologies if the mistake is on my end.

    When trying to download cord19-300d.magnitude from https://www.kaggle.com/davidmezzetti/cord19-fasttext-vectors#cord19-300d.magnitude, I get the error: "Too many requests"

    opened by fomar1994 30
• Installation issues

When I execute "pip install paperai", the system reports "UnicodeDecodeError: 'gbk' codec can't decode byte 0x82 in position 12007: illegal multibyte sequence". I wonder if Windows cannot decompress tar.gz-type packages.

    opened by albertY-C 16
• I'm not sure I followed the correct procedure for running paperai with pre-trained vectors

    After successfully installing paperai in Linux (Ubuntu 20.04.1 LTS), I tried to run it by using the pre-trained vectors option to build the model, as follows:

(1) I downloaded the vectors from https://www.kaggle.com/davidmezzetti/cord19-fasttext-vectors#cord19-300d.magnitude
(2) My Downloads folder ended up with a Zip file containing the vectors.
(3) I created a directory ~/.cord19/vectors/ and moved the downloaded Zip file into it.
(4) I extracted the Zip file, producing a folder that contained the file cord19-300d.magnitude.
(5) I moved cord19-300d.magnitude out of that folder and into the ~/.cord19/vectors/ directory.

(6) I executed the following command to build the embeddings index with the above pre-trained vectors:

    python -m paperai.index

Upon running the above command, I got an error message (screenshot omitted).

    Am I getting this error because the above steps are not the correct ones? If so, what would be the correct steps? Otherwise, what other things should I try to eliminate the issue?

    opened by DavidRivasPhD 10
• Windows install issue

    It was reported that paperai can't be installed in a Windows environment due to the following error:

    ValueError: path 'src/python/' cannot end with '/'

    bug 
    opened by davidmezzetti 5
• Added pdf output build option

Modified export.py to add a PDF output option. This is done via a new method in export, streampdf.

This edit was done for educational purposes as part of York University's software design course.

    Thank you for your time

    opened by will0710 3
• Processing custom sqlite file

I want to create an index and vector file over a custom SQLite articles database. I created an articles.sqlite database of medical papers using paperetl, but I did not find any instructions on how to process it. Can you please provide instructions?

    opened by choudharya3 3
• risk-factors.yml issues

When I run the command "python -m paperai.report tasks/risk-factors.yml 50 md cord19/models", I can't find the file risk-factors.yml, and I don't understand the argument "50".

    opened by Zhip-S 2
• Integration: DeepSource

    I ran DeepSource analysis on my fork of this repository and found some code quality issues. Have a look at the issues caught in this repository by DeepSource here.

DeepSource is a code review automation tool that detects code quality issues and helps you automatically fix some of them. You can use DeepSource to track test coverage, detect problems in Dockerfiles, etc., in addition to detecting issues in code.

    The PR #24 fixed some of the issues caught by DeepSource.

All of DeepSource's features are mentioned here. I'd suggest you integrate DeepSource since it is free for open source projects forever.

    Integrating DeepSource to continuously analyze your repository:

    • Install DeepSource on your repository here.
    • Create .deepsource.toml configuration specific to this repo or use the configuration mentioned below which I used to run the analysis on the fork of this repo.
    • Activate analysis here.
    version = 1
    
    test_patterns = ["/test/python/*.py"]
    
    [[analyzers]]
    name = "python"
    enabled = true
    
      [analyzers.meta]
      runtime_version = "3.x.x"
    
    opened by withshubh 2
• RuntimeError: CUDA error: out of memory (NVidia V100, 32 GB DDRAM)

What are the minimum memory requirements for paperai? When running on an NVIDIA V100 (32 GB), I got: RuntimeError: CUDA error: out of memory. GPU memory seems to be completely free.

Is there a way to run it on the GPU, or can I run it exclusively on TPUs?

    from txtai.embeddings import Embeddings
    import torch
    
    torch.cuda.empty_cache()
    
    # MEMORY
    id = 1
    t = torch.cuda.get_device_properties(id).total_memory
    c = torch.cuda.memory_cached(id)
    a = torch.cuda.memory_allocated(id)
    f = c-a  # free inside cache
    
    print("TOTAL", t / 1024/1024/1024," GB")
    print("ALLOCATED", a)
    
    # Create embeddings model, backed by sentence-transformers & transformers
    embeddings = Embeddings({"method": "transformers", "path": "sentence-transformers/bert-base-nli-mean-tokens"})
    
    import numpy as np
    
    sections = ["US tops 5 million confirmed virus cases",
                "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
                "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
                "The National Park Service warns against sacrificing slower friends in a bear attack",
                "Maine man wins $1M from $25 lottery ticket",
                "Make huge profits without work, earn up to $100,000 a day"]
    
    
    query = "health"
    uid = np.argmax(embeddings.similarity(query, sections))
    print("%-20s %s" % (query, sections[uid]))
    

TOTAL 31.74853515625 GB
ALLOCATED 0
Traceback (most recent call last):
  File "pokus2.py", line 32, in <module>
    uid = np.argmax(embeddings.similarity(query, sections))
  File "/home/user/.local/lib/python3.8/site-packages/txtai/embeddings.py", line 228, in similarity
    query = self.transform((None, query, None)).reshape(1, -1)
  File "/home/user/.local/lib/python3.8/site-packages/txtai/embeddings.py", line 179, in transform
    embedding = self.model.transform(document)
  File "/home/user/.local/lib/python3.8/site-packages/txtai/vectors.py", line 264, in transform
    return self.model.encode([" ".join(document[1])], show_progress_bar=False)[0]

    opened by burgetrm 2
• Wrong annotation places

The annotator needs a fix to correctly place annotations when the query text appears on different pages, in different columns, or at other positions in the PDF. In the screenshots, the annotator only considers text on the current page rather than all positions, so it annotates text that should not be annotated because it searches only its current scope. Annotations covering the wrong text also produce confusing annotation indicators.

Columns problem: (query and annotation screenshots omitted)

Pages problem: (query and annotation screenshots omitted)
    opened by muazhari 1
• sqlite3.OperationalError: no such table: sections

When I run python -m paperai.vectors cord19/models inside Docker, the output error is "sqlite3.OperationalError: no such table: sections".

    opened by wspspring 1
• paperai for beginners

First and foremost, thank you for offering such a great library. I was wondering if you could provide a simple guide to using the library on a new research project, from loading PDF files to querying topics. I went through the examples but could not grasp the overall idea. A small effort on your part would really help beginners like me use this library in research work.

    opened by satishchaudhary382 1
Releases (v2.0.0)
• v2.0.0 (Mar 12, 2022)

    This release adds the following enhancements and bug fixes:

    • Allow setting report options within task yml files (#42)
    • Allow running reports against full databases (#43)
    • Batch extractor queries (#44)
    • Remove study design columns (#46)
    • Add option to specify extraction column context (#47)
    • Add report reference column (#48)
    • Add report column format parameter (#49)
    • Add pre-commit checks (#50)
    • Add check to report sections query to ensure text has tokens (#51)
    • Remove default home directory cord19 path defaults (#52)
    • Require Python 3.7+ (#54)
    • Update txtai to 4.3.1 (#56)
• v1.10.0 (Sep 10, 2021)

• v1.9.0 (Aug 18, 2021)

• v1.8.0 (Apr 23, 2021)

    This release adds the following enhancements and bug fixes:

    • Add ability to read index yml (#18)
    • Switch from mdv to mdv3 to support Python 3.9 (#21)
    • Add enhanced API for paperai (#30)
• Add configurable query threshold (#31)
    • Support query negation (#32)
    • Add search application (#33)
• v1.7.0 (Feb 24, 2021)

• v1.6.0 (Jan 13, 2021)

• v1.5.0 (Dec 11, 2020)

• v1.4.0 (Nov 6, 2020)

    This release adds the following enhancements and bug fixes:

    • Allow specifying vector output file (#10, #11, #13)
    • Build test suite (#12)
    • Add additional column parameters (#14)
    • Allow indexing partial datasources (#15)
    • Add GitHub actions build script (#16)
• v1.3.0 (Aug 18, 2020)

• v1.2.1 (Aug 12, 2020)

• v1.2.0 (Aug 11, 2020)

This release addresses the following:

• Allow customizing the QA model used for QA extraction (#5)
    • Migrated embeddings index logic to txtai project (#7)
• v1.1.0 (Aug 5, 2020)

This release addresses the following:

    • Add wildcard report queries (#1) - Add ability to run report against entire database. This is only practical for smaller datasets.
    • Fix Windows install issues (#2)
    • Embeddings index memory improvements (#3) - Various improvements to limit memory usage when building an embeddings index
    • Support must clauses for custom query columns (#4) - Add same logic already present in general queries to require a term to be present when deriving report query columns
• v1.0.0 (Jul 21, 2020)

Owner

NeuML - Applying machine learning to solve everyday problems