Overview

CausalNLP

CausalNLP is a practical toolkit for causal inference with text as treatment, outcome, or "controlled-for" variable.

Install

  1. pip install -U pip
  2. pip install causalnlp

Usage

Example: What is the causal impact of a positive review on a product click?

import pandas as pd
df = pd.read_csv('sample_data/music_seed50.tsv', sep='\t', error_bad_lines=False)  # note: on pandas >= 2.0, use on_bad_lines='skip' instead of error_bad_lines=False

The file music_seed50.tsv is a semi-simulated dataset of music product reviews. Columns of relevance include:

  • Y_sim: the outcome, where 1 means the product was clicked and 0 means it was not.
  • text: the raw text of the review
  • rating: the rating associated with the review (1 through 5)
  • T_true: the true binary sentiment, where 1 means a rating of 5 and 0 means a rating less than 3; T_true affects the outcome Y_sim.
  • T_ac: an approximation of the true review sentiment (T_true), created with Autocoder from the raw review text
  • C_true: confounding categorical variable (1 = audio CD, 0 = other)

We'll pretend the true sentiment (i.e., review rating and T_true) is hidden and only use T_ac as the treatment variable.
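
As a quick sanity check (not part of the original example), we can inspect these columns and measure how often the Autocoder approximation T_ac agrees with the hidden T_true:

# Optional sanity check: peek at the columns described above and measure
# how often the Autocoder approximation (T_ac) matches the hidden true sentiment (T_true).
print(df[['Y_sim', 'rating', 'T_true', 'T_ac', 'C_true']].head())
print('agreement:', (df['T_ac'] == df['T_true']).mean())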

Using the text_col parameter, we include the raw review text as another "controlled-for" variable.

from causalnlp.causalinference import CausalInferenceModel
from lightgbm import LGBMClassifier
cm = CausalInferenceModel(df, 
                         metalearner_type='t-learner', learner=LGBMClassifier(num_leaves=500),
                         treatment_col='T_ac', outcome_col='Y_sim', text_col='text',
                         include_cols=['C_true'])
cm.fit()
outcome column (categorical): Y_sim
treatment column: T_ac
numerical/categorical covariates: ['C_true']
text covariate: text
preprocess time:  1.1179866790771484  sec
start fitting causal inference model
time to fit causal inference model:  10.361494302749634  sec

Estimating Treatment Effects

CausalNLP supports estimation of heterogeneous treatment effects (i.e., how causal impacts vary across observations, which could be documents, emails, posts, individuals, or organizations).

We will first calculate the overall average treatment effect (or ATE), which shows that a positive review increases the probability of a click by 13 percentage points in this dataset.

Average Treatment Effect (or ATE):

print( cm.estimate_ate() )
{'ate': 0.1309311542209525}

Conditional Average Treatment Effect (or CATE), estimated here for the subset of reviews that mention the word "toddler":

print( cm.estimate_ate(df['text'].str.contains('toddler')) )
{'ate': 0.15559234254638685}

Individualized Treatment Effects (or ITE):

test_df = pd.DataFrame({'T_ac' : [1], 'C_true' : [1], 
                        'text' : ['I never bought this album, but I love his music and will soon!']})
effect = cm.predict(test_df)
print(effect)
[[0.80538201]]

Model Interpretability:

print( cm.interpret(plot=False)[1][:10] )
v_music    0.079042
v_cd       0.066838
v_album    0.055168
v_like     0.040784
v_love     0.040635
C_true     0.039949
v_just     0.035671
v_song     0.035362
v_great    0.029918
v_heard    0.028373
dtype: float64

Features with the v_ prefix are word features. C_true is the categorical variable indicating whether or not the product is a CD.

Text is Optional in CausalNLP

Despite the "NLP" in CausalNLP, the library can be used for causal inference on data without text (e.g., only numerical and categorical variables). See the examples for more info.

Documentation

API documentation and additional usage examples are available at: https://amaiya.github.io/causalnlp/

How to Cite

Please cite the following paper when using CausalNLP in your work:

@article{maiya2021causalnlp,
    title={CausalNLP: A Practical Toolkit for Causal Inference with Text},
    author={Arun S. Maiya},
    year={2021},
    eprint={2106.08043},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    journal={arXiv preprint arXiv:2106.08043},
}
Comments
  • Does your model support other languages than English? (question, opened by behroozazarkhalili)

    Hi Amaiya, thanks for your great package. Would you kindly let me know whether your package supports languages other than English when using CausalBert?

    I'm also interested in knowing whether I can use other Transformer models from the Hugging Face hub.

  • Error while fitting the model (opened by hfarhidzadeh)

    Hi,

    I ran into this bug while fitting the model. I checked the data and everything looks fine, but I can't determine the root cause of the error.

    File /opt/conda/lib/python3.8/site-packages/causalnlp/meta/slearner.py:80, in BaseSLearner.fit(self, X, treatment, y, p)
         78 mask = (treatment == group) | (treatment == self.control_name)
         79 treatment_filt = treatment[mask]
    ---> 80 X_filt = X[mask]
         81 y_filt = y[mask]
         83 w = (treatment_filt == group).astype(int)
    
    IndexError: boolean index did not match indexed array along dimension 0
    
Releases (v0.7.0)
  • v0.7.0 (Aug 2, 2022)

  • v0.6.0 (Oct 20, 2021)

    0.6.0 (2021-10-20)

    New:

    • Added model_name parameter to CausalBertModel to support other DistilBert models (e.g., multilingual)
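
    A minimal sketch of how this parameter might be used (the import path, the checkpoint name, and the omission of other constructor arguments are assumptions; consult the API documentation for the exact signature):

    # Assumed import path -- verify against the installed version of causalnlp.
    from causalnlp.core.causalbert import CausalBertModel

    # 'distilbert-base-multilingual-cased' is one example of a non-English DistilBERT checkpoint.
    cb = CausalBertModel(model_name='distilbert-base-multilingual-cased')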

    Changed:

    • N/A

    Fixed:

    • N/A
  • v0.5.0 (Sep 3, 2021)

    0.5.0 (2021-09-03)

    New:

    • Added support for CausalBert

    Changed:

    • Added p parameter to CausalInferenceModel.fit and CausalInferenceModel.predict for user-supplied propensity scores in X-Learner and R-Learner.
    • Removed CV from propensity score computations in X-Learner and R-Learner and increased the default max_iter to 10000.

    Fixed:

    • Resolved problem with CausalInferenceModel.tune_and_use_default_learner when outcome is continuous
    • Changed to max_iter=10000 for default LogisticRegression base learner
  • v0.4.0 (Sep 3, 2021)

    0.4.0 (2021-07-20)

    New:

    • N/A

    Changed:

    • Use LinearRegression and LogisticRegression as the default base learners for the S-Learner.
    • Changed the parameter name metalearner_type to method in CausalInferenceModel.

    Fixed:

    • Resolved mis-references in _balance method (renamed from _minimize_bias).
    • Fixed convergence issues and factored out propensity score computations to CausalInferenceModel.compute_propensity_scores.
  • v0.3.1 (Jul 19, 2021)

  • v0.3.0 (Jul 15, 2021)

    0.3.0 (2021-07-15)

    New:

    • Added CausalInferenceModel.evaluate_robustness method to assess robustness of causal estimates using sensitivity analysis
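
    A minimal sketch of how this might be invoked on the model fitted in the Usage section above (calling it with default arguments is an assumption; see the API documentation for the available options):

    # cm is the CausalInferenceModel fitted earlier in this README.
    # evaluate_robustness runs sensitivity analyses on the estimated treatment effects.
    print(cm.evaluate_robustness())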

    Changed:

    • Reduced external dependencies by providing local metalearner implementations

    Fixed:

    • N/A
  • v0.2.0 (Jun 21, 2021)

  • v0.1.3 (Jun 17, 2021)

  • v0.1.2 (Jun 17, 2021)

    0.1.2 (2021-06-17)

    New:

    • N/A

    Changed:

    • Better interpretability and explainability of treatment effects

    Fixed:

    • Fixes to some bugs in preprocessing
  • v0.1.1 (Jun 17, 2021)

  • v0.1.0 (Jun 16, 2021)

Owner
Arun S. Maiya (computer scientist)