tangqiaoyu/ToolAlpaca

Unreproducible: the "Could not parse LLM output" error makes the results impossible to reproduce.

Opened this issue · 2 comments

When using ChatGPT as the LLM, it generates many errors like this:

"Could not parse LLM output".

I found that ChatGPT did not call the tool in the format the prompt expects.

In addition, when I call ChatGPT through LangChain, generation does not stop at the stop word:

"ASSISTANT Observation: "

Any solutions? Thank you.
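One workaround I can think of (a sketch, not from the ToolAlpaca code): if the model keeps generating past the stop sequence, truncate the raw completion at the first occurrence of the stop word yourself before handing it to the output parser. The helper name below is hypothetical; "ASSISTANT Observation: " is the stop word mentioned above.

```python
# Sketch of a manual stop-word cutoff, for when the API/model ignores
# the configured stop sequence. Not part of ToolAlpaca itself.

STOP_WORD = "ASSISTANT Observation: "

def truncate_at_stop(text: str, stop: str = STOP_WORD) -> str:
    """Return text up to (but not including) the first stop-word occurrence."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]

raw = (
    'ASSISTANT Thought: I should call the tool.\n'
    'ASSISTANT Action: getWeather\n'
    'ASSISTANT Action Input: {"city": "Paris"}\n'
    'ASSISTANT Observation: {"temp": 21}'
)
# Everything from the stop word onward is dropped, so the agent's
# parser only sees the Thought/Action/Action Input section.
print(truncate_at_stop(raw))
```

This does not fix the parsing failures caused by the model ignoring the tool-call format, but it at least restores the stop-word behavior the prompt relies on.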

May I ask when you ran the ChatGPT experiments?
Could it be that the ChatGPT model has been updated, so the original prompt no longer works?

Did you use an appropriate prompt to get the model to output 'ASSISTANT Observation'?