Close connections faster to save memory
jozn opened this issue · 3 comments
This project seems not to close connections fast enough, which could be important for this kind of server.
On my server, with a modest number of users (about a dozen), trojan's open file count is high:
root@srv1667:~# lsof -s | grep trojan | wc
which returns 20913.
The CPU usage is low but the memory usage is relatively high. Listing the open files, most IP addresses (each a connected mobile device) have hundreds of open TCP connections (on different ports, of course). Most of them seem to be short-lived TCP connections.
For a project like this it would be better to time out connections faster to save resources; I believe most servers of this kind use a short timeout.
Memory usage in my case for 10-30 users is ~250 MB, with CPU load at ~3% of a single 2.66 GHz core.
I only checked TCP connections to user devices, not connections from trojan to other servers.
I'm not sure closing connections could be done safely without breaking some applications.
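For reference, here is a rough way to reproduce the per-IP counts described above (a sketch, assuming trojan listens on port 443 and ss is available; adjust the port to your config):

# Count established TCP connections per client IP on the trojan port
ss -tn state established '( sport = :443 )' \
  | awk 'NR>1 { sub(/:[0-9]+$/, "", $4); print $4 }' \
  | sort | uniq -c | sort -rn | head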
I am aware of this issue, but fixing it is a tough problem. The reason connections are not closed fast enough is that the proxy server waits for the client side to terminate the connection first; otherwise, it keeps the connection open unless the server side wants to close it.
This depends on the client-side implementation. For example, if you are using the built-in Firefox SOCKS proxy, the proxy connection only gets terminated when you close the tab. Other clients can take very different approaches to managing existing connections.
One possible workaround could be setting a timeout for each open connection, i.e. if a connection has been idle for 5 minutes, the server proactively shuts it down. However, this behavior may not always be desired, depending on the use case.
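trojan itself does not expose such an idle timeout here, but a rough OS-level approximation is to make the kernel probe and drop dead peers sooner via TCP keepalive. A sketch for Linux, with illustrative values rather than recommendations (note these settings only affect sockets that enable SO_KEEPALIVE, so whether they help depends on how trojan opens its sockets):

# Probe idle connections after 5 minutes instead of the 2-hour default
sysctl -w net.ipv4.tcp_keepalive_time=300
# Then send up to 8 probes, 30 seconds apart, before dropping the peer
sysctl -w net.ipv4.tcp_keepalive_intvl=30
sysctl -w net.ipv4.tcp_keepalive_probes=8
# Add the same keys to /etc/sysctl.conf to persist across reboots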
Just for reference, for others who want to use this project: I wrote a simple bash script that restarts trojan every hour to mitigate this problem. Otherwise I don't think this project can be used as it is, because for many reasons the TCP close signal will never reach the server (bad client implementations, a device going out of network coverage, ...), so the open connections hang forever and memory usage goes up.
For Ubuntu:
#!/bin/bash
echo "$(date) - Restarting trojan"
# Set open file limits
ulimit -Hn 250000
ulimit -Sn 250000
# Kill any running trojan instance
killall -9 trojan
# Try to start trojan. If it did not start on the first attempt
# (this sometimes happens), retry a few more times.
cd /root/trojan/ || exit 1
for run in {1..5}; do
    pgrep -x trojan > /dev/null && break  # already running, no need to retry
    nohup ./trojan -c config.json &
    sleep 2
done
# Add this to your cron jobs (remove the leading "# ").
# On Ubuntu: crontab -e
# Restart trojan every hour:
# 0 * * * * bash /root/trojan/runner.sh >> /root/trojan/cron_log.txt
This works like a charm. I have not seen any problems in the applications I use, and trojan's memory usage is kept in check. The cron job is also great: even if the server restarts unexpectedly (gets unplugged, ...), it will bring trojan back online after 30 minutes on average. No worries about unexpected server reboots.
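To sanity-check the setup after each run, something like this (assuming pgrep and lsof are installed) shows whether exactly one trojan instance is up and how many files it holds open:

# List running trojan processes with their command lines (expect exactly one)
pgrep -ax trojan
# Count open files of the first trojan process found
lsof -p "$(pgrep -x trojan | head -n 1)" | wc -l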