ModelTC/lightllm

Inconsistent Output between LightLLM and Transformers Inference Library

Opened this issue · 2 comments

When specifying `max_new_tokens`, LightLLM's output length consistently hits this maximum value. However, Transformers sometimes stops earlier based on the model itself, producing outputs shorter than the specified `max_new_tokens`. I believe Transformers' approach is correct: it is implausible for the output to always match the `max_new_tokens` limit exactly, and forcing it to do so only leads to repetitive output.

@Lvjinhong You can specify the stop token ID by passing the `--eos_id xxx` argument when starting the server, or by using the `stop_sequences` parameter in the request parameters.
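
A minimal sketch of the request-side option, assuming the server exposes its `/generate` HTTP endpoint and accepts `stop_sequences` inside the `parameters` field (port, endpoint, and flag spelling may differ by version; check `--help` for your build):

```python
import requests

# Hypothetical server launch with an explicit EOS token ID (see the comment above), e.g.:
#   python -m lightllm.server.api_server --model_dir /path/to/model --eos_id 2

# Sketch of a generate request that also passes stop sequences, so generation
# can end before max_new_tokens is reached.
payload = {
    "inputs": "What is AI?",
    "parameters": {
        "max_new_tokens": 128,       # upper bound on length, not a target length
        "stop_sequences": ["</s>"],  # generation stops once this text appears
    },
}

resp = requests.post("http://localhost:8000/generate", json=payload, timeout=60)
print(resp.json())
```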

@Lvjinhong You should also check whether your input has the correct prompt template applied. LightLLM does not splice a prompt (chat) template onto the input, while Transformers usually does this inside its chat functions, as sketched below.
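
A minimal sketch of reproducing the prompt that Transformers' chat path would build, using the tokenizer's chat template before sending the text to the LightLLM server (the model name here is a placeholder; use the same model the server is running):

```python
from transformers import AutoTokenizer

# Placeholder model name: load the tokenizer of the model served by lightllm.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

messages = [{"role": "user", "content": "What is AI?"}]

# apply_chat_template wraps the messages in the model's expected prompt format
# (role tags, special tokens, etc.), which is what Transformers' chat functions
# do internally and what you need to do yourself before calling lightllm.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

print(prompt)  # send this string as "inputs" to the lightllm server
```

With the prompt formatted this way, the model is far more likely to emit its EOS token on its own, so the output no longer runs all the way to `max_new_tokens`.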