LLM-Tuning-Safety/LLMs-Finetuning-Safety

temp not zero during inference

ShengYun-Peng opened this issue · 2 comments

Thanks for your great work! The paper says the temperature and top_p were set to 0 during inference, but the code here sets the temperature to 1.0. Perhaps top_p = 0 already amounts to greedy decoding?

temperature: float=1.0, # [optional] The value used to modulate the next token probabilities.

Hi, thanks for pointing this out. I believe you are right: with top_p = 0, decoding is already greedy.
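
For illustration, here is a minimal nucleus-sampling sketch (not the repo's actual inference code; the function name and NumPy implementation are illustrative assumptions) showing why top_p = 0 collapses to greedy decoding regardless of the temperature value:

```python
import numpy as np

def sample_with_top_p(logits, temperature=1.0, top_p=0.0, rng=None):
    """Illustrative nucleus-sampling sketch (not the repo's code).

    Keeps the smallest set of tokens whose cumulative probability
    reaches top_p (always at least one token), then samples from it.
    With top_p = 0 only the single most likely token survives, so the
    result is greedy decoding no matter what the temperature is.
    """
    rng = rng or np.random.default_rng()
    # Temperature-scaled softmax (shifted by the max logit for stability).
    probs = np.exp((logits - logits.max()) / temperature)
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]                  # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = max(1, np.searchsorted(cumulative, top_p) + 1)  # keep >= 1 token
    kept = order[:cutoff]

    kept_probs = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=kept_probs))

# With top_p = 0 every call returns the argmax token, even at temperature 1.0.
logits = np.array([2.0, 1.0, 0.5])
assert all(sample_with_top_p(logits, temperature=1.0, top_p=0.0) == 0 for _ in range(10))
```

So even with temperature = 1.0 in the code, top_p = 0 means the nucleus contains only the top token and the output is deterministic.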

Thanks!