EMNLP 2020 - Summarizing Text on Any Aspects

Overview

Summarizing Text on Any Aspects

This repo contains preliminary code of the following paper:

Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach
Bowen Tan, Lianhui Qin, Eric P. Xing, Zhiting Hu
EMNLP 2020
[ArXiv] [Slides]

Getting Started

  • Given a document and a target aspect (e.g., a topic of interest), aspect-based abstractive summarization attempts to generate a summary with respect to the aspect.
  • In this work, we study summarizing on arbitrary aspects relevant to the document.
  • Due to the lack of supervised data for arbitrary aspects, we develop a new weak supervision construction method that integrates rich external knowledge sources such as ConceptNet and Wikipedia.

Requirements

Our Python version is 3.8. Required packages can be installed by

pip install -r requirements.txt

Our code can run on a single GTX 1080Ti GPU.

Datasets & Knowledge Sources

Weakly Supervised Dataset

Our constructed weakly supervised dataset can be downloaded by

bash data_utils/download_weaksup.sh

Downloaded data will be saved into data/weaksup/.

We also provide the code we used to construct it; see the data construction scripts in this repo for more details.

MA-News Dataset

MA-News is an aspect-based summarization dataset constructed by (Frermann et al.). Its aspects are restricted to only 6 coarse-grained topics. We use the MA-News dataset for our automatic evaluation. Scripts to make MA-News are here.

A JSON version processed by us can be downloaded by

bash data_utils/download_manews.sh

Downloaded data will be saved into data/manews/.
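
A quick way to peek at the processed data is sketched below. The exact file name (test.json) and record layout under data/manews/ are assumptions here; adjust it to whatever the download script actually produces.

import json

# Sketch: inspect one record of the processed MA-News data.
# The path "data/manews/test.json" is an assumption -- point this at
# whichever file the download script actually places there.
with open("data/manews/test.json") as f:
    data = json.load(f)

example = data[0] if isinstance(data, list) else data
print(list(example.keys()) if isinstance(example, dict) else example)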

Knowledge Graph - ConceptNet

ConceptNet is a huge multilingual commonsense knowledge graph. We extract an English subset that can be downloaded by

bash data_utils/download_concept_net.sh
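
Our code consumes the downloaded subset; purely as a way to see what kind of relations the graph contains, ConceptNet also exposes a public REST API. The sketch below is an illustration only, not part of the pipeline.

import requests

# Sketch: list a few English ConceptNet edges for one concept via the
# public REST API (illustration only -- the pipeline uses the downloaded
# subset, not the online API).
obj = requests.get("http://api.conceptnet.io/c/en/newspaper").json()
for edge in obj["edges"][:5]:
    print(edge["rel"]["label"], ":", edge["start"]["label"], "->", edge["end"]["label"])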

Knowledge Base - Wikipedia

Wikipedia is an encyclopedic knowledge base. We use its Python API to access it online, so make sure you have a stable internet connection when running our code.
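
One common choice for online access is the wikipedia package on PyPI; whether our code uses this exact package is an assumption, but the sketch below shows the kind of lookup involved.

import wikipedia  # pip install wikipedia

# Sketch: fetch the lead section of an article (assumes the `wikipedia`
# PyPI package; network access is required).
page = wikipedia.page("Association football")
print(page.summary[:300])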

Weakly Supervised Model

Train

Run this command to finetune a weakly supervised model from the pretrained BART model (Lewis et al.).

python finetune.py --dataset_name weaksup --train_docs 100000 --n_epochs 1

Training logs and checkpoints will be saved into logs/weaksup/docs100000/.

The training takes ~48h on a single GTX 1080Ti GPU. You may want to directly download the training log and the trained model here.
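
finetune.py starts from the pretrained BART checkpoint. The sketch below shows what a single training step on one weakly supervised example could look like, assuming the Hugging Face transformers implementation of BART; the actual script may wire this up differently, and the "aspect: ... <sep>" source format is purely illustrative.

from transformers import BartForConditionalGeneration, BartTokenizer

# Sketch of one finetuning step on a (document, aspect, summary) example,
# assuming the Hugging Face BART implementation. The way the aspect is
# packed into the source string here is illustrative only.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

source = "aspect: sports <sep> The document text goes here ..."
target = "A short summary focused on the sports aspect."

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()  # followed by an optimizer step in a real training loop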

Generation

Run this command to generate on MA-News test set with the weakly supervised model.

python generate.py --log_path logs/weaksup/docs100000/

Source texts, target texts, and generated texts will be saved as test.source, test.gold, and test.hypo, respectively, in the log dir logs/weaksup/docs100000/.
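
To spot-check the outputs, the first example of each file can be compared side by side (file names as listed above):

# Sketch: print the first example from each output file for a quick check.
log_dir = "logs/weaksup/docs100000/"
for name in ("test.source", "test.gold", "test.hypo"):
    with open(log_dir + name) as f:
        print(name, "->", f.readline().strip()[:120])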

Evaluation

To run evaluation, make sure you have Java and files2rouge installed on your machine.

First, download Stanford CoreNLP by

python data_utils/download_stanford_core_nlp.py

and run

bash evaluate.sh logs/weaksup/docs100000/

to get ROUGE scores. Results will be saved in logs/weaksup/docs100000/rouge_scores.txt.
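
evaluate.sh presumably wraps files2rouge; a direct call on the generated files might look like the sketch below. The argument order (hypotheses before references) is an assumption, so treat evaluate.sh as the authoritative invocation.

import subprocess

# Sketch: score the generated summaries against the references with
# files2rouge. Argument order (hypotheses first, then references) is an
# assumption; evaluate.sh is the authoritative invocation.
log_dir = "logs/weaksup/docs100000/"
subprocess.run(["files2rouge", log_dir + "test.hypo", log_dir + "test.gold"], check=True)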

Finetune with MA-News Training Data

Baseline

Run these commands to finetune a BART model with 1K MA-News training examples.

python finetune.py --dataset_name manews --train_docs 1000 --wiki_sup False
python generate.py --log_path logs/manews/docs1000/ --wiki_sup False
bash evaluate.sh logs/manews/docs1000/

Results will be saved in logs/manews/docs1000/.

+ Weak Supervision

Run these commands to finetune with 1K MA-News training examples, starting from our weakly supervised model.

python finetune.py --dataset_name manews --train_docs 1000 --pretrained_ckpt logs/weaksup/docs100000/best_model.ckpt
python generate.py --log_path logs/manews_plus/docs1000/
bash evaluate.sh logs/manews_plus/docs1000/

Results will be saved in logs/manews_plus/docs1000/.

Results

Results on the MA-News dataset are shown below (same setting as Table 2 in the paper).

All detailed logs, including training logs, generated texts, and ROUGE scores, are available here.

(Note: The result numbers may differ slightly from those in the paper due to minor differences in implementation details and random seeds, but the improvements over comparison methods are consistent.)

Model                         ROUGE-1   ROUGE-2   ROUGE-L
Weak-Sup Only                   28.41     10.18     25.34
MA-News-Sup 1K                  24.34      8.62     22.40
MA-News-Sup 1K + Weak-Sup       34.10     14.64     31.45
MA-News-Sup 3K                  26.38     10.09     24.37
MA-News-Sup 3K + Weak-Sup       37.40     16.87     34.51
MA-News-Sup 10K                 38.71     18.02     35.78
MA-News-Sup 10K + Weak-Sup      39.92     18.87     36.98

Demo

We provide a demo on a real news article from Feb. 2021 (see demo_input.json).

To run the demo, download our trained model here, and run the command below

python demo.py --ckpt_path logs/weaksup/docs100000/best_model.ckpt