Failed to parse response
manishag1988 opened this issue · 1 comment
⚠️ INSTRUCTIONS:
- Enter ONE "x" inside the brackets [x] to choose an answer
Have you already searched for your ISSUE among the resolved ones?
- Yes, new issue
- Yes, but the solution did not work for me
- No
What version of Python do you have?
- Latest, Python > 3.11
- Python >= 3.8
- If you have Python < 3.8, please install the latest version of Python
What version of operating system do you have?
- Windows
- Linux/Ubuntu
- Mac/OSX
What type of installation did you perform?
- pip3 install -r requirements.txt
- python3 -m pip install -r requirements.txt
- Anaconda
- Container on VS
Desktop (please complete the following information):
- Browser [e.g. chrome] : Opera
- Version [e.g. 112] : 99.0.4788.47
Describe the bug
Hi,
No matter how many times I try, I always end up getting the error:
ChatError: Failed to parse response: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

Traceback:
File "d:\Free AutoGPT\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
File "D:\Free AutoGPT\Camel.py", line 238, in <module>
    user_ai_msg = user_agent.step(assistant_msg)
File "D:\Free AutoGPT\Camel.py", line 67, in step
    output_message = self.model(str(input_message.content))
File "d:\Free AutoGPT\.venv\Lib\site-packages\langchain\llms\base.py", line 310, in __call__
    self.generate([prompt], stop=stop, callbacks=callbacks)
File "d:\Free AutoGPT\.venv\Lib\site-packages\langchain\llms\base.py", line 192, in generate
    raise e
File "d:\Free AutoGPT\.venv\Lib\site-packages\langchain\llms\base.py", line 186, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
File "d:\Free AutoGPT\.venv\Lib\site-packages\langchain\llms\base.py", line 451, in _generate
    else self._call(prompt, stop=stop)
File "D:\Free AutoGPT\FreeLLM\HuggingChatAPI.py", line 42, in _call
    data = self.chatbot.chat(prompt, temperature=0.5, stream=False)
File "d:\Free AutoGPT\.venv\Lib\site-packages\hugchat\hugchat.py", line 267, in chat
    raise ChatError(f"Failed to parse response: {res}")
Additional context
Unfortunately, this problem occurs because the LLM does not return output in the expected format, so the backend responds with an HTML page instead of parseable JSON. This is normal at the moment; we will fix it in a future update.
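Until a proper fix lands, one possible client-side workaround is to retry the chat call when this specific parse error occurs. The sketch below is a hypothetical helper (`chat_with_retry` is not part of the project); it takes any callable such as `chatbot.chat` and retries only when the error message matches the "Failed to parse response" pattern seen in the traceback.

```python
# Hedged sketch: retry a chat call that intermittently fails with
# "Failed to parse response" (e.g. when the server returns an HTML page
# instead of JSON). Assumption: the underlying error's message contains
# that phrase, as hugchat's ChatError does in the traceback above.
import time


def chat_with_retry(chat_fn, prompt, retries=3, delay=2.0):
    """Call chat_fn(prompt), retrying on parse failures.

    chat_fn -- any callable taking the prompt, e.g. chatbot.chat
    retries -- maximum number of attempts before giving up
    delay   -- base sleep between attempts (linear backoff)
    """
    last_err = None
    for attempt in range(retries):
        try:
            return chat_fn(prompt)
        except Exception as err:
            if "Failed to parse response" not in str(err):
                raise  # unrelated error: do not swallow it
            last_err = err
            time.sleep(delay * (attempt + 1))  # back off before retrying
    raise last_err  # all attempts failed with the parse error
```

In `HuggingChatAPI.py`'s `_call`, the existing `self.chatbot.chat(prompt, ...)` call could then be wrapped, e.g. `chat_with_retry(lambda p: self.chatbot.chat(p, temperature=0.5, stream=False), prompt)`. This does not fix the root cause, but it smooths over transient failures.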