shellster/keepass-sftp-sync

SFTP connections are not closed properly

pmust opened this issue · 5 comments

pmust commented

Hi. I have been using this plugin for a couple of days to sync my KeePass database, which lives on my VPS. Today I noticed hundreds of "sshd: petri@notty" processes running (checked via ps aux), and I am not sure what is causing them. As a workaround I had to add lines to my sshd_config that kill unresponsive connections via keep-alive checks, but I do not think that is the right way to go.

Here's example output (ps -aux | grep notty) right after I save the database in KeePass:

petri     707273  0.0  0.2  13900  5900 ?        S    22:19   0:00 sshd: petri@notty
petri     707300  0.0  0.3  13904  6004 ?        S    22:19   0:00 sshd: petri@notty
petri     707329  0.0  0.3  13904  5988 ?        S    22:19   0:00 sshd: petri@notty
petri     707356  0.0  0.3  13904  5968 ?        S    22:19   0:00 sshd: petri@notty
petri     707381  0.0  0.3  13908  5960 ?        S    22:19   0:00 sshd: petri@notty
petri     707406  0.0  0.2  13904  5812 ?        S    22:19   0:00 sshd: petri@notty
petri     707431  0.0  0.3  13900  6004 ?        S    22:19   0:00 sshd: petri@notty
petri     707456  0.0  0.2  13908  5848 ?        S    22:19   0:00 sshd: petri@notty
petri     707481  0.0  0.2  13904  5860 ?        S    22:19   0:00 sshd: petri@notty
petri     707506  0.0  0.2  13904  5264 ?        S    22:19   0:00 sshd: petri@notty
petri     707531  0.0  0.2  13908  5912 ?        S    22:19   0:00 sshd: petri@notty
petri     707538  0.0  0.0   6432   732 pts/0    S+   22:20   0:00 grep --color=auto notty

Multiple processes are spawned and never exit. The same command a bit later:

petri     707273  0.0  0.2  13900  5900 ?        S    22:19   0:00 sshd: petri@notty
petri     707300  0.0  0.3  13904  6004 ?        S    22:19   0:00 sshd: petri@notty
petri     707329  0.0  0.3  13904  5988 ?        S    22:19   0:00 sshd: petri@notty
petri     707356  0.0  0.3  13904  5968 ?        S    22:19   0:00 sshd: petri@notty
petri     707381  0.0  0.3  13908  5960 ?        S    22:19   0:00 sshd: petri@notty
petri     707406  0.0  0.2  13904  5812 ?        S    22:19   0:00 sshd: petri@notty
petri     707431  0.0  0.3  13900  6004 ?        S    22:19   0:00 sshd: petri@notty
petri     707456  0.0  0.2  13908  5848 ?        S    22:19   0:00 sshd: petri@notty
petri     707481  0.0  0.2  13904  5860 ?        S    22:19   0:00 sshd: petri@notty
petri     707506  0.0  0.2  13904  5264 ?        S    22:19   0:00 sshd: petri@notty
petri     707531  0.0  0.2  13908  5912 ?        S    22:19   0:00 sshd: petri@notty
petri     707568  0.0  0.0   6432   660 pts/0    S+   22:24   0:00 grep --color=auto notty

Everything is still there. After adding the lines ClientAliveInterval 30 and ClientAliveCountMax 4, the lingering processes were cleaned up automatically after a few minutes, but something odd is clearly happening here. I did also try setting a Connection Timeout in the SFTP settings in KeePass (Open URL -> Advanced).
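For reference, the server-side keep-alive workaround described above corresponds to lines like these in /etc/ssh/sshd_config (values as stated; sshd must be restarted or reloaded for them to take effect). This only masks the symptom by reaping dead sessions; it does not fix the plugin leaving them open:

```
# Probe each client every 30 seconds over the encrypted channel
ClientAliveInterval 30
# Drop the session after 4 unanswered probes (~2 minutes)
ClientAliveCountMax 4
```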

Thanks for the bug report. I will take a look and see if I can replicate this and post back when I get a chance.

@DemoniWaari Can you please check whether the attached build fixes this issue and report back? You can also build it yourself if you prefer (https://github.com/shellster/keepass-sftp-sync/tree/Fix_Session_Hangs).

SftpSync.zip

For more context, I now explicitly close the connection after pulling down the file.
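The fix boils down to deterministically releasing the session after each transfer, rather than relying on garbage collection or server timeouts. Here is a minimal, hypothetical sketch in Python of that try/finally discipline (the plugin itself is not Python, and `SftpConnection`, `pull_database`, and the host/path names are illustrative stand-ins, not the plugin's actual API):

```python
class SftpConnection:
    """Hypothetical stand-in for a real SFTP session object."""
    open_sessions = 0  # counts sessions left open, i.e. lingering sshd children

    def __init__(self, host):
        self.host = host
        self.closed = False
        SftpConnection.open_sessions += 1  # server forks an "sshd: user@notty"

    def get(self, remote_path):
        # Pretend to download the database file.
        return b"kdbx-bytes"

    def close(self):
        if not self.closed:
            self.closed = True
            SftpConnection.open_sessions -= 1  # sshd child exits


def pull_database(host, remote_path):
    """Download a file, always releasing the session, even if get() raises."""
    conn = SftpConnection(host)
    try:
        return conn.get(remote_path)
    finally:
        # Without this explicit close, every save/sync leaves a session behind,
        # which is exactly the pile-up of notty processes reported above.
        conn.close()


data = pull_database("vps.example.com", "/home/petri/db.kdbx")
print(SftpConnection.open_sessions)  # 0: no lingering sessions
```

The key design point is that close() runs on every path out of the transfer, success or failure, so no code path can leak a session.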

pmust commented

Yeah, this works! No more lingering connections. Though is there a reason why the plugin opens a connection, terminates it, opens another, terminates it, and so on, 11 times in total?

I'm not sure where you are getting "11 times" from. The plugin makes one connection to pull down the file. Whenever you type your password, it will SSH in to grab the file; whenever you alter the local, cached copy of your password wallet, it will pull down the latest version to synchronize the changes. Since these connections were not being closed properly, they would continue to hang around for a while. I will push this fix shortly, as the changes seem to address the issue.