pytorch/serve

Are the pre- and post-processing steps of batch processing performed in parallel?

pengxin233 opened this issue · 1 comment

📚 The doc issue

During batch processing, TorchServe accumulates incoming requests until the configured batch size (or the max batch delay) is reached. Is there a parameter that makes TorchServe preprocess the accumulated requests in parallel and then run inference on them together as one batch? Or do I need to implement the parallel logic myself in the handler?
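For context, if TorchServe hands the whole batch to the handler's `preprocess` as a list (which matches the `BaseHandler` contract) and does not parallelize it for you, one option is to fan the items out to a thread pool inside a custom handler. Below is a minimal sketch of that idea; `ParallelPreprocessHandler` and `_preprocess_one` are hypothetical names, and the per-item transform is a placeholder for whatever decoding the model actually needs.

```python
# Sketch: parallel per-item preprocessing inside a custom TorchServe handler.
# Assumes preprocess() receives the full batch as a list of request dicts.
from concurrent.futures import ThreadPoolExecutor

import torch
from ts.torch_handler.base_handler import BaseHandler


class ParallelPreprocessHandler(BaseHandler):
    """Custom handler that preprocesses batch items concurrently."""

    def _preprocess_one(self, row):
        # Each element of the batch is one request; TorchServe places the
        # payload under "data" or "body".
        payload = row.get("data") or row.get("body")
        # Placeholder per-item transform -- replace with the real work
        # (image decoding, tokenization, ...) that makes threads worthwhile.
        return torch.as_tensor(payload, dtype=torch.float32)

    def preprocess(self, data):
        # Fan per-item preprocessing out across threads, then stack the
        # results into one batch tensor for a single inference call.
        with ThreadPoolExecutor(max_workers=8) as pool:
            tensors = list(pool.map(self._preprocess_one, data))
        return torch.stack(tensors).to(self.device)
```

Note that Python threads only help here when the per-item work releases the GIL (I/O, PIL/OpenCV decoding, tensor ops); for pure-Python transforms a process pool may be the better fit.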

Suggest a potential alternative/fix

No response