- For the MNIST experiment, install the conda environment from `jax_env.yml` and run `CUDA_VISIBLE_DEVICES=0 python mnist_data_attr.py`.
- For the LLM experiments, install the conda environment from `torch_env.yml` and run the programs described in `script.sh`.
- To analyze the results, use `analyzer.ipynb`.
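The setup steps above can be sketched as shell commands. This is a minimal sketch: `conda env create -f` uses the environment name declared inside each YAML file, so substitute that name in the `conda activate` lines, and run `script.sh` only after inspecting which programs it launches.

```shell
# Create the two environments from the provided YAML files.
conda env create -f jax_env.yml    # environment for the MNIST experiment
conda env create -f torch_env.yml  # environment for the LLM experiments

# MNIST experiment: activate the env named in jax_env.yml, then run on GPU 0.
# conda activate <name from jax_env.yml>
CUDA_VISIBLE_DEVICES=0 python mnist_data_attr.py

# LLM experiments: activate the env named in torch_env.yml, then run the
# programs described in script.sh.
# conda activate <name from torch_env.yml>
bash script.sh
```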
Parts of the code are adapted from the following repositories:
- Transformers Learn In-context by Gradient Descent
- Label Words are Anchors
- Use Your INSTINCT: Instruction Optimization Using Neural Bandits Coupled with Transformers
If you find our work interesting, please star our repository. If you wish to cite our paper, you may use the following citation:
    @misc{zhou2024detail,
          title={DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning},
          author={Zijian Zhou and Xiaoqiang Lin and Xinyi Xu and Alok Prakash and Daniela Rus and Bryan Kian Hsiang Low},
          year={2024},
          eprint={2405.14899},
          archivePrefix={arXiv},
          primaryClass={cs.CL}
    }