[BUG] Unexpected memory usage for kando
Describe the bug
When handling large database dumps (hundreds of megabytes to gigabytes), the memory usage of the kando location push command grows linearly with the amount of data pushed, until the process is killed by the OOM killer once it hits its memory limit or exhausts the available memory on the host.
To Reproduce
Use the MongoDB example blueprint (v1) against a MongoDB database with at least 1 GB of collection data.
Monitor the execution of
mongodump | kando location push -
- The kando process starts with approximately 60 MB of RSS memory.
- When the dump reaches 100 MiB, the RSS memory usage grows to 280 MB.
- When the dump reaches 300 MiB, the RSS memory usage grows to 870 MB.
- When the dump reaches 500 MiB, the RSS memory usage grows to 1400 MB.
and so on. Pipe throughput is measured with the pv tool; a sketch of the measurement setup follows below.
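For reference, a minimal sketch of how the measurement can be reproduced without a live MongoDB instance, assuming the --profile and --path flags as used by kando location push in the Kanister blueprints; the exported KANISTER_PROFILE variable and the dump.test object name are placeholders:

# Generate a synthetic 1 GiB stream; pv reports how many bytes have passed through the pipe.
dd if=/dev/urandom bs=1M count=1024 status=none | pv | kando location push --profile "$KANISTER_PROFILE" --path dump.test -

In a second shell, sample kando's resident set size once per second:

while sleep 1; do ps -o rss= -C kando; done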
By comparison, when MinIO's mc client is used instead of kando, RSS climbs to roughly 600 MB and stays there, regardless of the dump size.
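The exact mc invocation was not given; a comparable streaming upload (the alias and bucket names here are placeholders) would be along the lines of:

mongodump --archive | pv | mc pipe myminio/backups/dump.archive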
Expected behavior
Memory usage should remain roughly constant for the duration of the upload, or there should be an option to cap kando's maximum memory usage.
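Until the upload streams with a bounded buffer, one possible stopgap, assuming GNU split and an exported KANISTER_PROFILE, is to chunk the stream so that each kando invocation only ever sees a fixed amount of data. The dump.part. prefix is made up, and a restore would need to concatenate the parts in order:

# split runs the filter once per 128 MiB chunk, with the chunk on stdin and the generated object name (dump.part.aa, dump.part.ab, ...) in $FILE.
mongodump --archive | split -b 128M --filter='kando location push --profile "$KANISTER_PROFILE" --path "$FILE" -' - dump.part.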
Thanks for opening this issue 👍. The team will review it shortly.
If this is a bug report, make sure to include clear instructions on how to reproduce the problem, with minimal reproducible examples where possible. If this is a security report, please review our security policy as outlined in SECURITY.md.
If you haven't already, please take a moment to review our project's Code of Conduct document.
We also have this issue. Any suggestion or fix would be appreciated.