Can you add our recent work to your survey?
grayground opened this issue · 1 comment
Hi,
I have read your insightful paper and found it to be a valuable contribution to the field.
I would like to kindly suggest adding our recent work to your survey.
📄 Paper: Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
This paper shows that the judgement consistency of LLMs drops dramatically when they are confronted with disruptions such as questioning, negation, or misleading cues, even when their previous judgements were correct. It also explores several prompting methods to mitigate this issue and demonstrates their effectiveness.
Thank you for your consideration! :)
Thank you for taking the time to read our paper and for your kind words. We appreciate your interest in our work and your suggestion.
The paper you mentioned provides valuable insights into the judgment consistency of LLMs and the impact of disruptions on their performance. The prompting methods proposed to mitigate these issues are also intriguing.
We have added this paper to our paper list and will include it in the next revision of our survey.