PootieT/explain-then-translate

Question regarding origin of explanations used in few-shot example prompts.

Closed this issue · 1 comment

Hello,

I want to start by saying that I really admire the thoroughness of your paper. I am interested in trying your technique on my own niche translation problem; however, I'm unsure how I should create the explanations used in the few-shot/in-context learning setting.

I'm mainly looking at Appendices C.2-C.4. Did you have experts write the annotations for the explanation portion of the in-context learning examples? Or did you use GPT-3.5 annotations for those portions? Or did you use the natural language instructions from MultiPL-E?

Thank you! :)

My apologies! I finally found my answer in Appendix F.

"Once we select the few-shot programs, we simply
use the model to generate zero-shot explanations,
and modify for correctness and structural preferences (e.g. code lines followed by explanations
with exp-lbl)."