cheminfo-py/xtbservice

timeout does not seem to work

Opened this issue · 10 comments

@lpatiny reported that there seem to be issues with the timeout

[Screenshot: Screen Shot 2021-11-09 at 09 38 49]

hmm ... i can provoke timeouts

This morning I again have a lot of processes running.


I will send you the full log by PM.

Maybe there could be a route that shows the currently running processes and when they started? Also, are we sure that when there is a timeout, the process that runs the calculations is really killed?

For information, here is the list of processes currently running:

```
root     16673  0.0  0.0   2376   504 ?        Ss   Nov09   0:00 /bin/sh -c gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:$PORT -k uvicorn.workers.UvicornWorker
root     16743  0.0  0.0  31448 21728 ?        S    Nov09   0:20 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     16854  0.0  0.1 4832068 121324 ?      Sl   Nov09   1:41 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     16855  0.0  0.1 5366344 127272 ?      Sl   Nov09   1:42 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     16864  0.0  0.1 6498408 147200 ?      Sl   Nov09   1:46 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     24088 2330  0.1 3936608 105468 ?      Rl   Nov10 27220:13 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     25707  0.0  0.0 112684   996 pts/4    S+   06:11   0:00 grep --color=auto gunicorn
root     25824  0.0  0.1 6176272 137140 ?      Sl   Nov10   0:51 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     27536  0.0  0.0   2376    32 ?        Ss   Sep17   0:00 /bin/sh -c gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:$PORT -k uvicorn.workers.UvicornWorker
root     28464  0.0  0.0  33088  2344 ?        S    Sep17  10:52 /usr/local/bin/python /usr/local/bin/gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:14101 -k uvicorn.workers.UvicornWorker
root     30893  0.1  0.0 2567200 50356 ?       Sl   Sep17  88:04 /usr/local/bin/python /usr/local/bin/gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:14101 -k uvicorn.workers.UvicornWorker
root     30895  0.1  0.0 2788388 54800 ?       Sl   Sep17  87:45 /usr/local/bin/python /usr/local/bin/gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:14101 -k uvicorn.workers.UvicornWorker
```

> Maybe there could be a route that shows the currently running processes and when they started?

typically, this is nothing that I'd put in the service code
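That said, such a debug view does not need anything from the service itself. A minimal, Linux-only sketch that reads `/proc` to list matching processes and their start times (the function name and the filter are illustrative, not part of xtbservice; a portable version would use `psutil`):

```python
import os
import time

def running_processes(name_filter="gunicorn"):
    """Return (pid, start_time, cmdline) for processes whose command line
    contains name_filter. Linux-only: reads /proc directly."""
    # System boot time in seconds since the epoch, from /proc/stat.
    with open("/proc/stat") as fh:
        boot = next(int(line.split()[1]) for line in fh if line.startswith("btime"))
    ticks = os.sysconf("SC_CLK_TCK")  # clock ticks per second

    out = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as fh:
                cmd = fh.read().replace(b"\0", b" ").decode(errors="replace").strip()
            with open(f"/proc/{pid}/stat") as fh:
                # Field 22 (starttime, in ticks since boot) comes after the
                # parenthesized process name, at index 19 of the remainder.
                start_ticks = int(fh.read().rsplit(")", 1)[1].split()[19])
        except OSError:
            continue  # process vanished while we were reading
        if name_filter in cmd:
            out.append((int(pid), time.ctime(boot + start_ticks // ticks), cmd))
    return out
```

Exposing this as a route in a separate tiny admin app (rather than in the service) would keep the service code clean while still answering "what is running and since when".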

> when there is a timeout, is the process that runs the calculations really killed?

this is the only thing I can imagine right now: that there is still some thread remaining after the timeout
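That would be consistent with the symptoms: a thread cannot be killed from the outside in Python, so a timeout that only stops waiting leaves the calculation running. A common pattern (a sketch under that assumption, not necessarily what xtbservice does) is to run the calculation in a child process, which can always be terminated:

```python
import multiprocessing as mp
import time

# Use the fork start method so arbitrary callables work without pickling
# (Linux/macOS only; a production version would also handle "spawn").
_ctx = mp.get_context("fork")

def _worker(q, func, args):
    # Runs inside the child process and ships the result back.
    q.put(func(*args))

def run_with_timeout(func, args=(), timeout=5.0):
    """Run func(*args) in a child process and hard-kill it on timeout."""
    q = _ctx.Queue()
    proc = _ctx.Process(target=_worker, args=(q, func, args))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # SIGTERM the whole calculation, not just a thread
        proc.join()
        raise TimeoutError(f"calculation exceeded {timeout} s and was killed")
    return q.get(timeout=1)
```

With this pattern a timed-out request leaves no calculation behind, because the child process is gone, not merely abandoned.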

and you can also add a CPU limit in the docker-compose file, to avoid having to restart manually: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources. Maybe it is easier if you just use v2
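A sketch of what such a compose-v2 fragment could look like (the service name, port, and limit values here are illustrative; `cpus` needs compose file format 2.2 or later):

```yaml
version: "2.4"

services:
  xtbservice:
    build: .
    ports:
      - "8091:8091"
    cpus: 4        # hard ceiling on CPU usage for the whole container
    mem_limit: 2g  # container is OOM-killed above this
```

With limits like these, a runaway calculation can at most saturate its own container instead of the whole host.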

I'll push an example with those settings.

[Screenshot: Screen Shot 2021-11-11 at 08 04 45]

For me, locally, I couldn't find a way to run out of resources.

[Screenshot: Screen Shot 2021-11-11 at 08 07 48]

and I really started a lot of concurrent requests
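For anyone wanting to reproduce this kind of load test, a small sketch for firing many concurrent requests (the `hammer` helper and the commented endpoint are illustrative, not part of the repository):

```python
from concurrent.futures import ThreadPoolExecutor

def hammer(call, n=50, workers=20):
    """Fire n calls through `workers` concurrent threads and collect results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda _: call(), range(n)))

# Hypothetical usage against a local deployment (endpoint path is made up):
# import urllib.request
# statuses = hammer(
#     lambda: urllib.request.urlopen("http://localhost:8091/docs").status, n=50
# )
```

Watching `ps`/`top` while this runs should show whether timed-out requests leave workers busy after the clients have given up.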