
BV-NICE: Balancing Variational Neural Inference of Causal Effects

This is the official code repository for the NeurIPS 2020 paper Reconsidering Generative Objectives For Counterfactual Reasoning.

BV-NICE is a novel generative Bayesian estimation framework that integrates representation learning, adversarial matching, and causal estimation. It is designed for individualized treatment effect (ITE) estimation (a.k.a. conditional average treatment effect (CATE) or heterogeneous treatment effect (HTE) estimation) in observational causal inference. Existing solutions often fail to address issues unique to causal inference, such as covariate balancing and counterfactual validation. By appealing to the Robinson decomposition, BV-NICE exploits a reformulated variational bound that explicitly targets causal effect estimation rather than specific predictive goals. Our procedure acknowledges the uncertainty in the representation and solves a Fenchel mini-max game to resolve representation imbalance for better counterfactual generalization, justified by new theory. The latent-variable formulation confers robustness to unobserved latent confounders, extending the scope of applicability.
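To fix notation: the ITE at covariates x is the contrast between the two potential outcomes, τ(x) = E[Y(1) − Y(0) | X = x]. The toy sketch below illustrates the estimation target with a plain two-regression baseline (a T-learner) on synthetic randomized data; it is only a conceptual illustration, not the BV-NICE model or any code from this repository.

```python
import numpy as np

# Toy illustration of the ITE/CATE target via a simple T-learner
# (two separate outcome regressions), NOT the BV-NICE model itself.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 1))                  # covariate
t = rng.binomial(1, 0.5, size=n)             # randomized treatment
tau_true = 1.0 + 2.0 * x[:, 0]               # heterogeneous effect tau(x)
y = x[:, 0] + t * tau_true + 0.1 * rng.normal(size=n)

X = np.hstack([np.ones((n, 1)), x])          # design matrix with intercept
# Fit linear outcome models separately on treated and control units.
w1, *_ = np.linalg.lstsq(X[t == 1], y[t == 1], rcond=None)
w0, *_ = np.linalg.lstsq(X[t == 0], y[t == 0], rcond=None)

# Estimated ITE per unit: difference of the two fitted outcome surfaces.
tau_hat = X @ w1 - X @ w0
print(np.abs(tau_hat - tau_true).mean())     # small estimation error
```

In observational (non-randomized) data this naive contrast is biased by confounding, which is exactly the regime BV-NICE targets with covariate balancing.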

You can clone this repository by running:

git clone https://github.com/DannieLu/BV-NICE.git

Citation

If you reference or use our method, code or results in your work, please consider citing the BV-NICE paper:

@article{lu2020reconsidering,
  title={Reconsidering Generative Objectives For Counterfactual Reasoning},
  author={Lu, Danni and Tao, Chenyang and Chen, Junya and Li, Fan and Guo, Feng and Carin, Lawrence},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Contents

This repository contains the following.

- Jupyter notebooks

Jupyter notebook examples of our BV-NICE model and various baselines (CFR, BART, R-learner, EB-learner, Causal Forest, GANITE, etc.).

- Experiment codes

Python code used for our experiments. For example, to run BV-NICE on IHDP dataset 0, use the BVNICE.py file in the folder Experiments/IHDP/:

python BVNICE.py 0
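Assuming the integer argument selects the IHDP replication index (as in the single-run example above), one might sweep the first ten replications with a small loop; this loop is a hypothetical convenience, not a script shipped with the repository. It echoes each command so you can inspect the sweep before running it; drop the `echo` to actually launch the jobs.

```shell
# Hypothetical sweep over IHDP replications 0-9, run from Experiments/IHDP/.
# Remove `echo` to execute the commands instead of printing them.
for i in $(seq 0 9); do
    echo python BVNICE.py "$i"
done
```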

- Results and visualization

Python code used to visualize our results.

Prerequisites

The algorithm is built with:

  • Python (version 3.7 or higher)
  • TensorFlow (version 1.14.0)
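A quick sanity check of the environment can save a failed run later; this snippet is a generic illustration (not part of the repo) and only warns, rather than fails, if TensorFlow is absent.

```python
import sys
import importlib.util

# Verify the interpreter meets the Python 3.7+ requirement above.
assert sys.version_info >= (3, 7), "BV-NICE expects Python 3.7 or higher"

# TensorFlow 1.14.0 is expected; report what is (or is not) installed.
if importlib.util.find_spec("tensorflow") is None:
    print("TensorFlow not installed; expected version 1.14.0")
else:
    import tensorflow as tf
    print("TensorFlow", tf.__version__)
```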

Installing third-party packages

Some of our baseline models (e.g., BART, Causal Forest, GANITE, CFR) are based on third-party implementations. We try to provide native Python implementations of all competing models rather than calling R libraries, as is done in the perfect_match package. Note that some of these implementations come from unstable development versions of the packages, and we did encounter compatibility issues when running the experiments. Please use the following commands to install the versions we used in our experiments. If they do not install or run successfully, try a different machine (with a different OS or Python environment). Please cite the original references if you use these implementations.

We used the BART Python implementation from bartpy:

pip3 install git+https://github.com/JakeColtman/bartpy.git@ReadOneTrees --upgrade

We used the Causal Forest models (propensity forest and double-sample forest) from a development fork of scikit-learn 0.18:

pip3 install git+https://github.com/kjung/scikit-learn.git --upgrade

The GANITE implementation was extracted from the perfect_match package. Although installing perfect_match is not strictly required to run our experiments, we strongly recommend doing so:

pip3 install git+https://github.com/d909b/perfect_match.git --upgrade

We used the CFR implementation from https://github.com/clinicalml/cfrnet; it is also included in our repo.

Datasets

The datasets used in our experiments can be accessed from the following links.