run-llama/sec-insights

Could not connect to the endpoint URL: "http://localhost:4566/llama-app-backend-local"

BytesByJay opened this issue · 4 comments

I'm encountering this error both in a GitHub Codespace and on my local machine.


File "/root/.cache/pypoetry/virtualenvs/llama-app-backend-9TtSrW0h-py3.11/lib/python3.11/site-packages/aiobotocore/endpoint.py", line 285, in _send
    return await self.http_session.send(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/llama-app-backend-9TtSrW0h-py3.11/lib/python3.11/site-packages/aiobotocore/httpsession.py", line 253, in send
    raise EndpointConnectionError(endpoint_url=request.url, error=e)
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:4566/llama-app-backend-local"

@DhananjayanOnline can you provide more context on when you're seeing this error? Is this when running a particular make command or when taking a certain action on the frontend?

@sourabhdesai I have the same error as @DhananjayanOnline.

I get this error when setting up my environment for the first time and when testing the backend with make chat. I added new documents the same way as in the YouTube video and get this response in the chat window:

(Chat🦙) message What are the three scaling factors?
=== Message 0 ===
{'id': 'cc08e26b-b0dd-4adf-a0c7-635d84c2f443', 'created_at': '2023-10-20T20:02:14.371304', 'updated_at': '2023-10-20T20:02:14.371304', 'conversation_id': 'f32b33e5-a035-4ef3-8878-11162c9e5430', 'content': '', 'role': 'assistant', 'status': 'ERROR', 'sub_processes': []}

And in my localstack log I get the same error shown above.

Some more details:

  • I'm running this from behind a proxy, but I seem to have that part worked out. I have external connectivity and have set $no_proxy for these local addresses:
    export no_proxy="localhost,127.0.0.1,llama-app-fastapi,db,localstack"
  • http://localhost:8001/api/document/ looks good:
    [
      {
        "id": "bb50458d-94e2-4cf5-a1cc-0fb438ef0da1",
        "created_at": "2023-10-20T02:14:46.346857",
        "updated_at": "2023-10-20T02:14:46.346857",
        "url": "https://arxiv.org/pdf/2310.05915.pdf",
        "metadata_map": {}
      },
      {
        "id": "34f131d5-d146-4441-bece-53b03c215b65",
        "created_at": "2023-10-20T02:26:10.007830",
        "updated_at": "2023-10-20T02:26:10.007830",
        "url": "https://arxiv.org/pdf/1706.03762.pdf",
        "metadata_map": {}
      }
    ]
  • Similarly, http://localhost:4566/health looks good with all the services available.
  • I'm also able to curl http://localhost:4566/health from my local machine, with all the services showing up.
  • However, if I try to curl from inside llama-app-fastapi, I get an empty reply:
    $ docker-compose exec llama-app-fastapi curl http://localstack:4566/health
    curl: (52) Empty reply from server
  • i.e., the issue seems to be connectivity between the containers, not between the host and the containers
  • Finally, I tried editing S3_ENDPOINT_URL in app/core/config.py to point at "http://localstack:4566" instead of localhost (the minimal connectivity check I've been running from inside the container is sketched after the log below). This resulted in a slightly different error in the llama-app-fastapi logs when sending the make chat message:

llama-app-fastapi_1 | File "/lib/python3.11/site-packages/aiobotocore/httpsession.py", line 240, in send
llama-app-fastapi_1 | raise ConnectionClosedError(
llama-app-fastapi_1 | botocore.exceptions.ConnectionClosedError: Connection was closed before we received a valid response from endpoint URL: "http://localstack:4566/llama-app-backend-local".
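
For reference, this is the quick sanity check I've been running from inside the llama-app-fastapi container. It's a rough sketch using boto3 (which may need to be installed in the container first); the endpoint, dummy credentials, and bucket name are taken from my local setup, so adjust to your own values:

# Quick S3 connectivity check against LocalStack from inside the container.
# Endpoint, credentials, and bucket name are assumptions from my local setup.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localstack:4566",  # container hostname, not localhost
    aws_access_key_id="test",               # LocalStack accepts dummy creds
    aws_secret_access_key="test",
    region_name="us-east-1",
)

print(s3.list_buckets()["Buckets"])               # is the endpoint reachable?
s3.head_bucket(Bucket="llama-app-backend-local")  # does the bucket from the error URL exist?

With the endpoint left at the default localhost value, these calls fail with the EndpointConnectionError above, since localhost inside the container doesn't resolve to the localstack service; pointed at http://localstack:4566, in my case they hit the same ConnectionClosedError shown above.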

Any help here would be much appreciated.


@sourabhdesai I was encountering this error when sending a message in the chat section, but now it seems to be working fine. I'm not sure what happened.

Additionally, I'm curious whether it's possible to use MinIO instead of S3 buckets. If so, could you please advise me on what changes are required to make that switch?
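
For what it's worth, my rough mental model is that it would mostly be a matter of pointing the S3 client at a MinIO endpoint with MinIO credentials, something like the sketch below. The hostname, port, and credentials here are just MinIO defaults I'm assuming, and I don't know whether the app depends on anything LocalStack-specific:

# Rough sketch of the MinIO switch I'm imagining, assuming the backend only
# needs a generic S3-compatible endpoint. Hostname, port, and credentials are
# MinIO defaults, not values from the actual app config.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio:9000",    # MinIO's default API port
    aws_access_key_id="minioadmin",      # MinIO's default access key
    aws_secret_access_key="minioadmin",  # MinIO's default secret key
    region_name="us-east-1",
)
s3.create_bucket(Bucket="llama-app-backend-local")  # bucket name from the error URL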

Edit: this problem is fixed on my end now. Starting from scratch on the backend and following the README/YouTube instructions cleared things up.