castwide/readapt

Support for stdin/stdout communication

PyvesB opened this issue · 22 comments

Hello @castwide!

Thanks for starting this project. I've had a quick play around with the debugger and it seems to work great. 👍

If I'm not mistaken, Readapt currently only supports communication over a network socket. Are you planning on adding support for stdin and stdout, similarly to the solargraph stdio and solargraph socket variants? That would be very handy!

Thanks!

Pyves

Thanks!

I'd love to add a stdin/stdout adapter, but it wouldn't be easy. In the current architecture, the output from the target program is very likely to mangle DAP messages. We might need some sort of meta-protocol to separate server I/O from target I/O. Another option is to separate the server and the target into separate processes, but that would require IPC overhead that might make the debugger unusably slow. I suspect that rdebug-ide (the tool that vscode-ruby uses) requires a socket for similar reasons.

I'm open to suggestions, especially since solving this problem might help fix other issues (e.g., properly redirecting the target output through the DAP); but at the moment I don't have a solution.
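For background on why stray target output is so destructive: DAP, like the Language Server Protocol, frames each message with a Content-Length header, so any bytes the target prints between frames corrupt the stream. A minimal sketch of the framing (a hypothetical helper, not Readapt's code):

```ruby
require 'json'

# Frame a DAP message with a Content-Length header, as the base protocol
# requires. If the debug target shares the same stdout, anything it prints
# between frames breaks the client's parser -- which is why a stdio
# transport needs the target's output captured separately.
def frame_dap_message(payload)
  body = JSON.generate(payload)
  "Content-Length: #{body.bytesize}\r\n\r\n#{body}"
end
```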

I've pushed two updates to the repo:

  1. The server runs the debug target in a subprocess so it can properly send its output to the client through the debug adapter protocol.
  2. There's a readapt stdio command for running the server over stdin/stdout.

You can check it out in the master branch if you want to test it before I release a new gem.
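To illustrate how the first change can work: DAP defines an `output` event that carries target output inside the protocol rather than on the raw stream. This sketch builds such an event per the spec's shape; it is an illustration, not Readapt's actual implementation:

```ruby
# Wrap a chunk of the target's stdout in a DAP "output" event so it
# reaches the client through the protocol instead of being written
# directly to the transport. The field names follow the DAP spec;
# the helper itself is hypothetical.
def output_event(text, seq)
  {
    'type' => 'event',
    'event' => 'output',
    'seq' => seq,
    'body' => { 'category' => 'stdout', 'output' => text }
  }
end
```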

Hi @castwide , thanks for looking into this! I've built the gem from master and done a bit of testing, and things seem to work! 👍

I did notice one small detail. When launching Readapt in "socket" mode, a message similar to the following is printed out:

    Readapt Debugger 0.8.1 is listening HOST=127.0.0.1 PORT=59484 PID=9684

Similarly with Solargraph in "stdio" mode:

    Solargraph is listening on stdio PID=12828

However, there aren't any messages when launching Readapt in "stdio" mode. It's definitely not the end of the world and may even be intentional, but I thought I would mention it just in case.

I'll let you know if I spot anything else that looks odd.

Thanks for the heads up. I'll probably omit the launch message unless someone turns out to need it. Either that or put it in a configurable log level for people trying to debug the debugger itself.

The main reason it's there in Solargraph is that the socket server can select arbitrary ports to avoid conflicts with other instances of itself, and the client needs to know which port got assigned.

> Thanks for the heads up. I'll probably omit the launch message unless someone turns out to need it.

Fine by me!

I've done some more testing, and I've noticed a couple of odd things:

  • the Disconnect Request does terminate Readapt, but it seems to hang after that, i.e. the request never returns from my client's point of view. It does return in "socket" mode, though.
  • if I close the stream that pipes into Readapt's stdin, it keeps on running. If the IDE is killed or the debug session is terminated abruptly, this will leak Readapt instances. Note that Solargraph does shut down when you close the input pipe.
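For reference, the Solargraph-style behavior described in the second point can be sketched as a read loop that treats EOF on stdin as a shutdown signal (hypothetical code, assuming a line-oriented dispatcher):

```ruby
# Read lines from the input stream until EOF. When the client closes the
# pipe (e.g. the IDE is killed), gets returns nil, which is the cue for
# the adapter to shut itself down instead of leaking an orphaned process.
def read_loop(input)
  loop do
    line = input.gets
    return :shutdown if line.nil? # EOF: the input pipe was closed
    # ... dispatch the line to the protocol handler ...
  end
end
```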

Thanks. There are definitely a few issues with terminating processes. I'll try to get them sorted before the next release.

I refactored the disconnect process and updated the server to terminate target processes on close. If you still have problems with leaking instances, please let me know.

Barring unforeseen bugs, I expect the next release to be version 1.0.

The Disconnect Request does now seem to terminate everything as expected. However, when the stream is closed, only one of the two Ruby processes that make up the Readapt instance is terminated; the other remains.

Note that I'm testing on Windows. If I recall correctly, terminating a process on this operating system does not automatically terminate the sub-processes it created. Maybe that's the cause of the issue?
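That is indeed a common pitfall: killing a parent process does not necessarily kill the children it spawned, so the adapter has to track the target's pid and terminate it explicitly on shutdown. A sketch of such a cleanup step (hypothetical, not Readapt's code):

```ruby
# Explicitly terminate and reap a tracked child process. Relying on the
# parent's death to take the child down is not portable (notably on
# Windows), so the adapter kills the target itself when the session ends.
def terminate_target(pid)
  Process.kill('KILL', pid)
  Process.wait(pid) # reap so the child doesn't linger as a zombie
  true
rescue Errno::ESRCH, Errno::ECHILD
  false # process was already gone
end
```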

Yeah, I've seen plenty of cases where child processes go zombie if you don't do any housecleaning. The adapter should terminate the target process as soon as the client connection is closed. It's been working for me the past couple of days, but I've been testing primarily with the socket transport. I'll switch to stdio and see what happens.

I was able to reproduce this bug with the server process as the zombie. If it tried to send a message to a dead target process (i.e., the process had stopped or its stdin was closed), an exception would prevent the server from shutting down. The latest version of master should fix it.
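The failure mode described here can be sketched simply: writing to a process whose stdin has closed raises Errno::EPIPE (or IOError on an already-closed stream), and an unrescued exception at that point can keep the server alive. A hypothetical guard, not the actual fix in master:

```ruby
# Attempt to write to the target's stdin, tolerating a dead peer. Without
# the rescue, a broken pipe during shutdown would raise and could prevent
# the server process from exiting cleanly.
def safe_send(io, data)
  io.write(data)
  true
rescue Errno::EPIPE, IOError
  false # target already stopped or its stdin was closed
end
```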

Unfortunately, the latest HEAD seems worse than HEAD~1 or HEAD~2: when the stream is closed abruptly, both Ruby processes that make up the Readapt instance remain, whereas only one did in the two other commits I tested.

Ouch. Thanks, I'll keep looking.

The latest update to master explicitly stops the Backport machines that manage processes.

If you're still encountering an issue with zombie processes, can you give me an idea of how you test for it? I might not even be looking in the right place.

Unfortunately, I won't have access to the Windows machine I've been testing on for the next ten days or so. I'll try things out as soon as possible after that!

Cool, thanks!

Hello @castwide ! Great news to start the new year: I'm no longer able to reproduce any issues with the latest master! 🍾

Awesome! I want to run some more tests, but overall I think we'll be able to release version 1 before the end of the month. Thanks again for your help.

@castwide anything else I can test to help deliver the next release?

I think it's about ready. I just pushed a minor change that sorts variables alphabetically. If you don't see any showstopping bugs, I'll go ahead and publish v1.0.0 this weekend.

No showstopping issues as far as I can tell! The variable sorting works as expected (though instance and class variables are sorted within their own respective groups, since there are two loops; that's fine, though!).
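For anyone wondering why the groups sort independently: collecting instance and class variables in two separate passes and sorting each pass on its own naturally yields that grouped ordering. An illustrative sketch (not Readapt's code):

```ruby
# Collect variable names in two passes, sorting each kind separately.
# The result is alphabetical within each group, not across the whole
# list -- the behavior observed in the debugger's variables view.
def grouped_variable_names(object)
  instance_vars = object.instance_variables.sort
  class_vars = object.class.class_variables.sort
  instance_vars + class_vars
end
```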

Published gem v1.0.0.

Great stuff, thanks! Nothing left to do on this issue, I'll go ahead and close it! 🍾