Data and Code for ACL 2021 Paper "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning"

Overview

Introduction

Code and data for ACL 2021 Paper "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning".

We construct a new large-scale benchmark, Geometry3K, which consists of 3,002 geometry problems with dense annotations in formal language. We define 91 predicates and their corresponding literal templates to describe each problem; all predicates are defined here. Four data examples from the Geometry3K dataset are shown below:

example
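
For intuition, each problem's annotation is a set of literals instantiated from these predicates. The snippet below is a hypothetical illustration only: the literal strings are made up for this README and are not copied from the dataset; the actual predicates and arguments come from the annotation files.

# Hypothetical Geometry3K-style literals (for illustration only, not from the dataset).
literals = [
    "PointLiesOnLine(D, Line(B, C))",          # a relation read off the diagram
    "Perpendicular(Line(A, D), Line(B, C))",   # a geometric constraint
    "Equals(LengthOf(Line(A, B)), 10)",        # a numeric condition from the problem text
    "Find(LengthOf(Line(A, D)))",              # the question goal
]
print(len(literals), "literals")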

We further propose a novel geometry solving approach with formal language and symbolic reasoning, called the Interpretable Geometry Problem Solver (Inter-GPS). Inter-GPS is the first geometry problem solver that achieves automatic program parsing and interpretable symbolic reasoning. It parses the problem text and diagram into formal language automatically via rule-based text parsing and neural object detection, respectively. Moreover, Inter-GPS incorporates theorem knowledge as conditional rules and performs symbolic reasoning step by step.

model

Prepare the Dataset

First, unzip data files into data/geometry3k:

. data/unzip_data.sh

You can alternatively download the Geometry3K dataset from the Google Drive link and unzip it.
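
After unzipping, you can run a quick sanity check. The sketch below only assumes that each split directory (train, val, test) under data/geometry3k contains one subfolder per problem; adjust the paths if your local layout differs.

# Minimal sanity check of the unzipped dataset (directory layout is an assumption).
import os

root = "data/geometry3k"
for split in ["train", "val", "test"]:
    split_dir = os.path.join(root, split)
    if os.path.isdir(split_dir):
        num_problems = sum(
            os.path.isdir(os.path.join(split_dir, name)) for name in os.listdir(split_dir)
        )
        print(f"{split}: {num_problems} problems")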

Requirements

Python 3.6+
torch 1.7.1
transformers 4.8.2
python3-pip

Install all required python dependencies:

pip3 install -r requirement.txt

Run Inter-GPS Directly

The Final Search Strategy

Run the symbolic solver Inter-GPS with the final search strategy, without preprocessing the data, using the following commands:

cd symbolic_solver
python test.py --label final --strategy final

It applies the final search strategy (predict + low-first) with generated logic forms from the diagram parser and text parser. The solving result file and logging file will be saved in pred_results and logs, respectively.

It takes about 5 minutes to solve the 601 test problems on an Intel 10900K CPU with 20 threads. If you don't have a high-performance CPU, assign a smaller number of threads and a larger search time limit for each problem. For example:

python test.py --label final --strategy final --time_limit 200 --num_threads 4

Run the symbolic solver with annotated logic forms from the dataset:

python test.py --label final-gt --strategy final --use_annotated

Note that the results could differ slightly between individual runs and across computing platforms. The differences mainly result from the randomness of the search process, dependency versions, CPU features, and running parameters. It is highly recommended to run the solver multiple times and report the average result over those runs.
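
For example, a minimal sketch for launching repeated runs from symbolic_solver is shown below; it uses only the flags documented above, and the --label values are arbitrary names chosen for this example.

# Sketch: launch several independent runs, then average the reported accuracies.
import subprocess

for i in range(3):
    subprocess.run(
        ["python", "test.py", "--label", f"final_run{i}", "--strategy", "final"],
        check=True,
    )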

Other Search Strategies

You can also run the solver with the other search strategies listed in Table 7 of the paper using the following commands, respectively:

  • Predict-based search strategy (predict + random) with annotated logic forms:
python test.py --label predict --strategy predict --use_annotated
  • Random search strategy with annotated logic forms:
python test.py --label random --strategy random --use_annotated
  • Low-first search strategy with annotated logic forms:
python test.py --label low-first --strategy low-first --use_annotated

The result files and log files reported in Table 7 are released in symbolic_solver/pred_results and symbolic_solver/logs, respectively.

Calculate Accuracies

You can obtain accuracies for different question types by running python sub_acc.py --result_file {result_json_file}. For example:

python sub_acc.py --result_file pred_results/final/logic_1612098244-predict_low-first_1.json
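
If you have several result files, you can evaluate them in one pass. The sketch below relies only on the --result_file flag shown above; the glob pattern is an assumption about where your result files are stored.

# Sketch: run sub_acc.py over every result file in a directory.
import glob
import subprocess

for path in sorted(glob.glob("pred_results/final/*.json")):
    print("Evaluating", path)
    subprocess.run(["python", "sub_acc.py", "--result_file", path], check=True)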

Run Inter-GPS from Scratch

Text Parser

Parse the problem text into literals (logic forms).

cd text_parser
python text_parser.py
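
To spot-check the output, you can load the generated file. The sketch below assumes that text_logic_forms.json maps each problem id to its parsed logic forms; the exact schema may differ, so adapt the keys as needed.

# Sketch: print the parsed logic forms of one problem (output schema is an assumption).
import json

with open("text_logic_forms.json") as f:
    text_forms = json.load(f)

first_key = next(iter(text_forms))
print(first_key, text_forms[first_key])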

Diagram Parser

The diagram parser converts a problem diagram into literals (logic forms). Only the core commands are shown below; for full details, please refer to this README file.

Unzip our detection results of text regions and symbols:

cd detection_results
unzip -d box_results box_results.zip
unzip -d ocr_results ocr_results.zip

Generate diagram logic forms by running the following command:

cd parser
python diagram_parser.py \
--data_path ../../data/geometry3k \
--ocr_path ../detection_results/ocr_results \
--box_path ../detection_results/box_results \
--output_path ../diagram_logic_forms.json

Theorem Predictor

  1. Generate template-based and random-ordered theorem sequences:
cd theorem_predict/tools
python generate_random_seqs.py

It generates two files:

  • results/train/pred_seqs_train_l30_n100_template.json: 100 template-based sequences with a maximum length of 30 for each training problem
  • results/test/pred_seqs_test_l100_random.json: one random-order sequence with a maximum length of 100 for each test problem
  2. (Optional) Generate pseudo-optimal theorem sequences for each training problem:
python check_theorem_seq.py

It will take about 20 hours to attempt 100 tries over all training problems! If you want to save time, just skip this step and use our generated data in theorem_predict/results/train/splits instead.

  3. (Optional) Merge the 100 tries of pseudo-optimal theorem sequences into one file:
python merge_all_correct_json.py
  4. (Optional) Train the theorem predictor from scratch:
python train_transformer.py

If you want to save time, you can skip the step above and download the checkpoint model directly:

cd theorem_predict/models
wget https://acl2021-intergps.s3.us-west-1.amazonaws.com/tp_model_best.pt
  5. Evaluate the theorem predictor to generate predicted theorem sequences:
cd theorem_predict
python eval_transformer.py
  6. Generate theorem sequences for the predict-based strategy (predict + random):
cd theorem_predict/tools
python add_random_seq_to_pred_seq.py
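
The sequence files produced by the steps above are consumed by the symbolic solver as an ordered list of theorems to try. The snippet below is only a hypothetical illustration of that idea; the indices are invented, and the actual file format is defined by the scripts above.

# Hypothetical illustration: a predicted theorem sequence is an ordered list of
# theorem indices that the solver attempts to apply one by one.
pred_seq = [3, 8, 8, 1, 14]  # indices invented for illustration
for theorem_id in pred_seq:
    print("try theorem", theorem_id)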

Symbolic Solver

Finally, run the symbolic solver with the Final search strategy (predict + low-first) over generated logic forms:

cd symbolic_solver
python test.py --label final_new \
--strategy final \
--text_logic_form_path ../text_parser/text_logic_forms.json \
--diagram_logic_form_path ../diagram_parser/diagram_logic_forms.json \
--predict_path ../theorem_predict/results/test/pred_seqs_test_bart_best.json

Data Annotation Tools

We release the data annotation tools, which may help you extend our dataset in future work.

Data Collection

The data collection tool is used to collect geometry problems and the corresponding logical forms.

cd annotation_tool/data_collection
python app.py

labelImg

Symbol Labeling

LabelImg is a graphical image annotation tool for labeling object bounding boxes in images. We use it to label text regions and symbols in the diagrams. If you are using Linux, you can install the tool with the following commands:

cd annotation_tool/labelImg
sudo apt-get install pyqt5-dev-tools
sudo pip3 install -r requirements/requirements-linux-python3.txt
make qt5py3

Run the labeling tool:

python labelImg.py

After running the tool, click the Open Dir button to open the data directory containing problem images, for example, InterGPS/data/geometry3k/symbols, and choose Use default label to use pre-defined labels in data/predefined_classes.txt. Note that pre-defined labels in data/predefined_classes.txt are consistent with labels in diagram_parser/detection/classes.txt.

labelImg

Follow the LabelImg instructions to install the package on other systems or to learn more about its usage.

Citation

If the paper, the dataset, or the code helps you, please cite the paper in the following format:

@inproceedings{lu2021inter,
  title = {Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning},
  author = {Lu, Pan and Gong, Ran and Jiang, Shibiao and Qiu, Liang and Huang, Siyuan and Liang, Xiaodan and Zhu, Song-Chun},
  booktitle = {The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)},
  year = {2021}
}

Q&A

If you encounter any problems, feel free to contact the first authors directly or open an issue in the GitHub repo.

Comments
  • Sharing Training Details

    Hi Pan,

    Your paper "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning" is awesome.

    However, when reproducing results of this work, I have one problem. I trained the symbol detection model with the data you provided, but the model could not perform as well as the box_result you released. Could you please share more training details?

    Thank you very much and looking forward to your reply.

    opened by YusenZhang826 3
  • DataSet Generation

    Hi Pan, loved your work on InterGPS. We are planning to extend the dataset using the annotation tools shared. We wanted to know how the point positions in logic_form.json (the ground truth) were added for each question: was this done manually or using some subroutine? Thanks, Akshat

    opened by Akshat188 1
  • Providing a pretrained object detection model for text and symbols

    Hello, Thanks for your work! Could you please provide a pretrained object detection model, e.g. the one mentioned in the documentation here: models/exp0/csv_retinanet_19.pt?

    Thank you in advance :)

    opened by supitalp 1
  • About datasets

    Hello, how can I expand the Geometry3K dataset? Where do the geometry problems of Geometry3K come from? Could you please provide more specific web links or other information? Thank you very much!

    opened by mingliangzhang2018 1
  • About the file of “diagram_logic_forms_pred.json”

    Excuse me, could you tell me whether the content of the file “diagram_logic_forms_pred.json” is the predicted result of your diagram parser? Thanks very much!

    opened by mingliangzhang2018 1
  • Poor performance of theorem predictor

    Hello, Pan. Thank you for your open source.

    I downloaded the checkpoint model from https://acl2021-intergps.s3.us-west-1.amazonaws.com/tp_model_best.pt, but the evaluation results are empty. How can I get it back to normal? Thanks.

    opened by ICanFlyGFC 9