ZARA: Improving Few-Shot Self-Rationalization for Small Language Models

This is the official repository for our paper "ZARA: Improving Few-Shot Self-Rationalization for Small Language Models" (Findings of EMNLP 2023).

TL;DR: This work improves few-shot self-rationalization for small LMs via self-training with a novel zero-shot rationale-answer augmentation strategy.

The code and a fully detailed README will be released soon.
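
Until the official implementation is released, the snippet below is a minimal conceptual sketch of the general self-training-with-augmentation idea summarized in the TL;DR. It is not the paper's actual code: the function signatures, the `generate`/`score`/`fine_tune` callables, and the `threshold` value are all hypothetical placeholders.

```python
from typing import Callable, List, Tuple

def self_train(
    generate: Callable[[str], Tuple[str, str]],    # input -> (rationale, answer)
    score: Callable[[str, str], float],            # plausibility of a pair
    fine_tune: Callable[[List[Tuple[str, str, str]]], None],
    seed: List[Tuple[str, str, str]],              # (input, rationale, answer)
    unlabeled: List[str],
    threshold: float = 0.9,                        # illustrative cutoff
) -> None:
    """Generic self-training loop with rationale-answer augmentation."""
    train_set = list(seed)
    for x in unlabeled:
        # Few-shot prompt the small LM for a rationale and an answer.
        rationale, answer = generate(x)
        # Keep only pseudo-labeled pairs the filter deems plausible.
        if score(rationale, answer) >= threshold:
            train_set.append((x, rationale, answer))
    # Fine-tune on the seed data plus the accepted augmentations.
    fine_tune(train_set)
```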