vndee/local-assistant-examples

After input of my query and pressing ENTER, I get this error: ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/chat/


I am running the Streamlit app locally on my Windows PC, which works fine.
I can upload a PDF, which also works fine.
But when I then enter my query and press ENTER, I get the following error, which I cannot get rid of:

ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/chat/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001AEDAEFCE10>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Traceback:
File "C:\ProgramData\Anaconda3\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 531, in _run_script
self._session_state.on_script_will_rerun(rerun_data.widget_states)
File "C:\ProgramData\Anaconda3\Lib\site-packages\streamlit\runtime\state\safe_session_state.py", line 63, in on_script_will_rerun
self._state.on_script_will_rerun(latest_widget_states)
File "C:\ProgramData\Anaconda3\Lib\site-packages\streamlit\runtime\state\session_state.py", line 504, in on_script_will_rerun
self._call_callbacks()
File "C:\ProgramData\Anaconda3\Lib\site-packages\streamlit\runtime\state\session_state.py", line 517, in _call_callbacks
self._new_widget_state.call_callback(wid)
File "C:\ProgramData\Anaconda3\Lib\site-packages\streamlit\runtime\state\session_state.py", line 261, in call_callback
callback(*args, **kwargs)
File "C:\Users\HP\RAG\RAG FULLY LOCAL Langchain + Ollama + Streamlit\local-rag-example\app.py", line 21, in process_input
agent_text = st.session_state["assistant"].ask(user_text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\RAG\RAG FULLY LOCAL Langchain + Ollama + Streamlit\local-rag-example\rag.py", line 54, in ask
return self.chain.invoke(query)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_core\runnables\base.py", line 2053, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\chat_models.py", line 165, in invoke
self.generate_prompt(
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\chat_models.py", line 407, in generate
raise e
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\chat_models.py", line 397, in generate
self._generate_with_cache(
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_core\language_models\chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_community\chat_models\ollama.py", line 250, in _generate
final_chunk = self._chat_stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_community\chat_models\ollama.py", line 183, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_community\chat_models\ollama.py", line 156, in _create_chat_stream
yield from self._create_stream(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\AppData\Roaming\Python\Python311\site-packages\langchain_community\llms\ollama.py", line 215, in _create_stream
response = requests.post(
^^^^^^^^^^^^^^
File "C:\ProgramData\Anaconda3\Lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\Anaconda3\Lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\Anaconda3\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\Anaconda3\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\Anaconda3\Lib\site-packages\requests\adapters.py", line 519, in send
raise ConnectionError(e, request=request)

[screenshot of the error in the Streamlit app]

And then immediately thereafter:

[second screenshot of the error]

What can I do about it?

I found out what was wrong: Ollama has to be running first! You have to pull the LLM model with Ollama separately from the code you are running, then serve the model with Ollama. Once it is running, the error disappears and the Streamlit app works without errors.
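For anyone who hits the same thing: WinError 10061 just means nothing is listening on localhost:11434, i.e. the Ollama server is not up. Below is a minimal sketch of a guard you could add at the top of app.py so the app fails with a readable message instead of a traceback. It assumes the default Ollama port; the ollama_is_running helper is my own, not part of the repo.

import requests
import streamlit as st

OLLAMA_URL = "http://localhost:11434"  # default Ollama listen address

def ollama_is_running(url: str = OLLAMA_URL) -> bool:
    """Return True if an Ollama server answers on the given URL."""
    try:
        # A running Ollama server answers a bare GET / with 200 ("Ollama is running").
        return requests.get(url, timeout=2).status_code == 200
    except requests.exceptions.ConnectionError:
        return False

if not ollama_is_running():
    st.error(
        f"Cannot reach Ollama at {OLLAMA_URL}. Start the server first "
        "(`ollama serve`, or the Ollama desktop app) and pull your model, "
        "e.g. `ollama pull mistral`."
    )
    st.stop()

With that guard the app tells you to start Ollama instead of crashing halfway through the chain.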

Now I want to run this Streamlit app on Hugging Face Spaces, but I don't know how to run Ollama there to get this app working!

Can somebody help me with that? Do I have to use Docker, or is there another way?
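In case it helps others who land here: Hugging Face Spaces does support a Docker SDK (set sdk: docker and app_port: 7860 in the Space's README metadata), so one route is a Dockerfile that installs Ollama next to the app and serves Streamlit on port 7860, the port Spaces expects. This is a rough, untested sketch; the model name (mistral) and the requirements.txt layout are assumptions about this repo, and a free CPU Space will be very slow for an LLM:

FROM python:3.11-slim

# curl is needed for the official Ollama install script
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://ollama.com/install.sh | sh

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

EXPOSE 7860
# Start the Ollama server in the background, pull the model, then launch Streamlit
CMD ollama serve & sleep 5 && ollama pull mistral && \
    streamlit run app.py --server.port 7860 --server.address 0.0.0.0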

The problem is solved as indicated.