facebookresearch/stopes

attribution of LLMs

Wafaa014 opened this issue · 1 comment

Can you add support for the ALTI attribution method for LLMs such as LLaMA?

Hi Wafaa!
Currently, Stopes focuses on translation models, and ALTI+ is implemented only for seq2seq transformers such as NLLB. We are not planning to adapt ALTI+ to other architectures at the moment.

If I learn that colleagues working with LLMs have implemented ALTI+ or a similar attribution method for their models, I will update this thread accordingly.

Otherwise, you are welcome to contribute such an adaptation yourself. LLaMA is a decoder-only transformer, so the decoder part of the ALTI+ code can in principle be adapted to it, although the adaptation will depend on the chosen framework and the exact implementation details of the LLM code.
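
As a very rough starting point, here is a minimal sketch (not based on the stopes ALTI+ code) that loads a decoder-only causal LM through Hugging Face transformers and aggregates raw attention maps with attention rollout as a crude stand-in for ALTI-style token contributions. ALTI+ itself also weights attention by the norm of the transformed value vectors, which would require hooking into the model's attention layers; the model name and the rollout mixing factor below are illustrative choices, not part of any existing implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model: any decoder-only causal LM works the same way.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

text = "Translation quality depends on"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
attentions = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, seq, seq)
avg_attn = attentions.mean(dim=1)                         # average over heads

# Attention rollout: propagate contributions through layers, mixing in the
# residual connection as an identity term and renormalizing each row.
seq_len = avg_attn.size(-1)
identity = torch.eye(seq_len)
rollout = identity
for layer_attn in avg_attn:
    layer = 0.5 * layer_attn + 0.5 * identity
    layer = layer / layer.sum(dim=-1, keepdim=True)
    rollout = layer @ rollout

# rollout[i, j] approximates how much input token j contributes to position i.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, rollout[-1].tolist()):
    print(f"{tok:>12s}  {score:.3f}")
```

To get closer to ALTI+, the head-averaged attention above would have to be replaced by the layer-wise contribution matrices that ALTI derives from the norms of the value-transformed token vectors, which is exactly the part that is tied to the specific transformer implementation.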