Enhance scaling with respect to input data size
Opened this issue · 0 comments
dparalen commented
At the moment, the performance of `dva` with respect to input data size is limited by its design. `dva` uses gevent pools within a single process to handle IO events. This approach is limited both by the number of file descriptors a single process may use and by its inability to utilize multiple processors/cores of the system to scale. Given Python's threading limitations (the GIL), using `multiprocessing` seems the way to go. Possibly, one could distribute the data load by mapping it onto a worker-process pool.
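A minimal sketch of the idea, using the stdlib `multiprocessing.Pool` — the chunking helper and the per-chunk worker function here are illustrative placeholders, not `dva`'s actual API:

```python
from multiprocessing import Pool


def process_chunk(chunk):
    # Placeholder for the real per-chunk work dva would do
    # (e.g. parsing/validating a slice of the input data).
    return sum(chunk)


def chunked(data, size):
    # Split the input into lists of at most `size` items.
    return [data[i:i + size] for i in range(0, len(data), size)]


if __name__ == "__main__":
    data = list(range(1000))
    # Each chunk is handled in a separate worker process, so the
    # work spreads across cores and no single process needs to
    # hold all the file descriptors / IO state at once.
    with Pool(processes=4) as pool:
        results = pool.map(process_chunk, chunked(data, 100))
    print(sum(results))  # same total as processing serially
```

Each worker could still run its own gevent pool internally, so the per-process file-descriptor limit would then bound only a fraction of the total load.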