stevearc/pypicloud

Recommended System Specs


Are there any recommended system specs? We are currently running two pypicloud instances on a single VM with 1 CPU and 1.7 GB RAM (a GCE g1-small VM). The pypicloud server is accessed relatively infrequently and is certainly not under constant load.

The setup uses uwsgi in emperor mode.

Whilst that VM is small, I was expecting at least the memory to suffice, given that all data is kept in Redis and cloud storage.

For reference, we also have pypi.stream_files = true. Again, given that this is called "stream files", I was expecting its memory footprint to be relatively small.

I haven't put together any specific guidelines for specs. One thing I'd recommend tweaking in your emperor setup is the processes and max-requests options. More processes will obviously require more memory, and if your server isn't under heavy load you don't need many. max-requests specifies how many requests a worker can handle before uwsgi reloads it; max-worker-lifetime does something similar, but by time rather than by request count. Shrinking either of those values can help mitigate issues with processes gradually growing in memory use. Python processes do not always release as much memory as you would expect, and it's also entirely possible there's a memory leak somewhere.
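As a rough illustration, a minimal vassal config along those lines might look like the sketch below. The values are placeholders to show where the knobs live, not recommendations from pypicloud; tune them to your own traffic.

[uwsgi]
; keep the worker count small for a lightly loaded server
processes = 2
; recycle a worker after it has served this many requests
max-requests = 1000
; or recycle it after this many seconds instead
max-worker-lifetime = 3600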

The stream_files behavior was added in #202, and the naming is...misleading. The way that it's currently implemented still requires pypicloud to read the entire package into memory before serving it to the client.

with request.db.storage.open(package) as data:
    request.response.body = data.read()
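For contrast, a genuinely streaming response in Pyramid might look roughly like the sketch below. This is not how pypicloud currently behaves; it is a hypothetical helper that assumes the object returned by storage.open() can be read in chunks and closed once the response iterator is exhausted.

def stream_package(request, package, chunk_size=64 * 1024):
    # Hypothetical sketch, not part of pypicloud: serve the package in
    # fixed-size chunks instead of loading it all into memory at once.
    data = request.db.storage.open(package)

    def file_iter():
        try:
            while True:
                chunk = data.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            # Close the storage handle once the WSGI server finishes
            # iterating (or closes the iterator early).
            data.close()

    request.response.app_iter = file_iter()
    return request.response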