[Question]: How to optimise memory usage for a websocket client
coder3101 opened this issue · 5 comments
Hi,
I am writing a load-testing tool, and my use case demands opening many websocket connections to a server. Each connection is long-lived and has a keep-alive send and receive mechanism: every client sends a "keep-alive" text message at an interval of 4 seconds, and the server also sends keep-alive text plus some events as they occur.
Running that load tool on a k8s pod with a 4G memory limit results in an OOM after opening 5000 connections. I suspect the memory usage shoots up because of the websocket connections: when I ran without opening any connections, memory usage stayed well under 500M.
I was wondering if there is a way to reduce the client's memory usage, since I would like to open as many connections as possible with limited resources without triggering an OOM.
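If the per-connection buffers are part of the cost, tungstenite lets the caller shrink them when connecting. A minimal sketch, assuming the `tokio` and `tokio-tungstenite` crates; the exact `WebSocketConfig` fields and the `connect_async_with_config` signature vary between versions, so check the docs for the version in use:

```rust
// Sketch only: assumes the `tokio` and `tokio-tungstenite` crates.
// Field and function names follow tungstenite's `WebSocketConfig` /
// `connect_async_with_config` and may differ between crate versions.
use tokio_tungstenite::{connect_async_with_config, tungstenite::protocol::WebSocketConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The defaults are sized for throughput; for thousands of mostly idle
    // keep-alive connections, smaller per-socket limits save memory.
    let mut config = WebSocketConfig::default();
    config.max_message_size = Some(64 * 1024); // cap buffered message size
    config.max_frame_size = Some(16 * 1024);   // cap the single-frame buffer

    // Hypothetical URL for illustration; the last argument disables Nagle.
    let (_ws_stream, _response) =
        connect_async_with_config("wss://example.invalid/ws", Some(config), false).await?;
    Ok(())
}
```

For a load tester sending only small keep-alive frames, tight limits like these are usually safe, and the saving is multiplied by the number of open connections.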
I am using the tokio runtime with this library. I also tried various other libraries and did not find any significant difference.
Thanks
Use a profiler and check where the memory is actually used up, and then we can look at optimizing those things :)
Do you have a test case for this that you can share, btw?
I profiled using Instruments and found that native-tls on Linux was causing too much memory allocation. I replaced it with rustls, and now 10 times less memory is being allocated.
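For anyone else hitting this: switching TLS backends is typically a Cargo feature change. A sketch, assuming tokio-tungstenite; the feature names below are illustrative and vary across crate versions, so verify them against the version you use:

```toml
# Cargo.toml (sketch): use a rustls-based TLS backend instead of the
# `native-tls` feature. Feature names differ between tokio-tungstenite
# versions -- check the crate docs for the exact spelling.
[dependencies]
tokio = { version = "1", features = ["full"] }
tokio-tungstenite = { version = "0.21", features = ["rustls-tls-webpki-roots"] }
```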
During profiling I also found that `connect_async` was the highest contributor to memory usage.
Which part?
I see. Do you see any possibility of optimization in there? :)