WongSaang/chatgpt-ui

Making concurrent requests

Closed this issue · 13 comments

Hi,

First thanks for this project.

It seems that we can't really make concurrent requests. If we open two windows and submit a prompt in each, the responses aren't generated simultaneously; the second one has to wait for the first one to finish.

I've tested in the playground, and it's not a limitation of OpenAI's free trial.

In my opinion, this negates the multi-user functionality, since two people can't use it at the same time.

I second this. May I suggest the project use more than one API key to reduce contention between concurrent requests? For example, if the first API key is being used to fulfil a request, the second key can be used to satisfy the concurrent request.

Or the concurrent request can wait until the first one has been fulfilled.

For your consideration, please.

I agree that concurrent requests are a must, but I don't think it's a limitation of the API. The point of the API is to be usable en masse, so it's likely something else that's blocking the concurrent request. Adding additional API keys might not solve the issue, and even if it did, it would be more of a work-around than a solution.

Yes, it's not an API limitation; after all, the API is also meant for building products on top of it.

I also considered a limitation of the free trial, but it isn't that either, as I said in my original message.

I'll try to look at the Django server code later today.

@WongSaang does the latest update fix the concurrent request problem?

@WongSaang

I was not clear in my question, and I apologise.

In the release v2.3.6, the changes are

  1. Support conversation routing, isolating each conversation and supporting simultaneous chat in multiple conversations
  2. Localized prompt support for conversation title generation
  3. Fix some variables with ambiguous naming

What does "Support conversation routing, isolating each conversation and supporting simultaneous chat in multiple conversations" mean?

This means that you can chat in multiple conversations at the same time.

@WongSaang

I tried logging in to two different accounts and sent different questions to OpenAI.

The questions were not answered simultaneously. I had the impression that v2.3.6 had fixed that.

Are you working in two tabs simultaneously?

One tab in incognito (account A)
And
One tab in normal (account B)

Hello, after investigation, I found that the issue was caused by the backend service. The backend uses gunicorn for hosting, and by default, it only has one worker. This can cause blocking when multiple requests are made at the same time.

Solution:
An environment variable SERVER_WORKERS has been added to control the number of workers in the backend. The default is 3 workers.
If 3 workers are not enough, you can set the environment variable on the wsgi-server service. We recommend setting the number of workers to (2 x $num_cores) + 1, where $num_cores is the number of cores allocated to your container. For example:

backend-wsgi-server:
    image: wongsaang/chatgpt-ui-wsgi-server:latest
    environment:
      - SERVER_WORKERS=5
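The (2 x $num_cores) + 1 rule above can be sketched as a small helper (the function name is illustrative, not part of the project):

```python
import os

def recommended_workers(num_cores=None):
    """Gunicorn's commonly cited rule of thumb: (2 x cores) + 1."""
    if num_cores is None:
        # Fall back to the host's core count when none is given.
        num_cores = os.cpu_count() or 1
    return 2 * num_cores + 1

print(recommended_workers(2))  # a 2-core container -> 5 workers
print(recommended_workers(1))  # a 1-core container -> 3 workers
```

So for the 2-core container in the example above, SERVER_WORKERS=5 matches the recommendation.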

Works great. Thank you!

πŸ‘Œ