
Reload running programs when a build finishes


Feature description

I'm not sure if this is the right place to try to fix this issue with Nix, but I am curious to hear others' thoughts on the matter.

When you run programs in a lorri shell (think hoogle, ghcid, hie or ghcide) and make a change that requires a rebuild of your shell, you have to kill the program, wait until the build finishes (usually by manually checking the daemon) and then restart the program. I do love that I can still use the older cached version while the current one builds, but needing to reload manually gets old quickly when several tools have to be restarted before they can use the libraries/documentation that were just pulled down. It would be a nicer user experience to have specific shells (maybe a lorri run command) that send a SIGHUP, or something to that effect, to the running program and then re-run the command once the latest build finishes.

Target users

In particular this would help users running multiple IDE-like tools. I often add a missing dependency and then go back to what I was working on. It would be really nice to just continue with the problem at hand and not have to think about reloading my tools or checking whether a new build has finished. Don't get me wrong, lorri already makes development much easier; I'm just hoping to make development with Nix that much nicer.

You would have to use the lorri internal stream_events command (huh, where do the underscores come from?) and combine it with a service manager that runs your project’s services (I had some success with https://git.skarnet.org/software/s6/).

stream_events gives you one line of JSON per event, and you would convert that into the restart command of your service manager.
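A minimal sketch of that glue, assuming the stream_events JSON contains a recognisable "Completed" marker on a successful build (check the actual output for your lorri version) and that the service's s6 run script already loads the project environment, e.g. via direnv exec:

```bash
#!/usr/bin/env bash
# Sketch: restart an s6-supervised service whenever lorri finishes a build.
# Assumptions: ./service/my-tool is a service directory watched by s6-svscan,
# its run script loads the project environment (e.g. with `direnv exec`),
# and the stream_events JSON contains "Completed" on a successful build.
set -euo pipefail

lorri internal stream_events | while read -r event; do
  if printf '%s' "$event" | grep -q '"Completed"'; then
    # Send SIGTERM to the supervised process; s6-supervise brings it back up,
    # and the restarted process picks up the freshly built environment.
    s6-svc -t ./service/my-tool
  fi
done
```

jq would be a nicer way to pick the event apart than grep, but the shape of the loop is the same.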

There was some work towards a lorri services command at some point. The idea there was exactly this: long-running background processes that would be rebuilt and restarted on the fly.

#259 is the draft PR. I have no plans to continue working on this; doing it properly is a lot of work.

@curiousleo, I took a look at your PR and can see how complex it would be to have a proper services command. Would a cut down version under a run command be acceptable? Instead of managing a group of services with their own configuration, it would run a single command in a lorri shell --cached and restart the whole process (re-run lorri shell --cached and the command) when a new build completes. This would not address restarting failed services or prevent a service from being restarted repeatedly, but conceptually it's much simpler and (I assume) easier to implement. I think I can take a stab at this over the weekend.

Otherwise, I can always try out @Profpatsch's idea.
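To make that concrete, here is a rough sketch of the cut-down behaviour as an external wrapper (not a lorri subcommand): it leans on direnv exec to run the command in the current lorri environment, and again assumes a "Completed" marker in the stream_events JSON.

```bash
#!/usr/bin/env bash
# Sketch of the proposed cut-down "run" behaviour, done outside lorri.
# Usage: ./lorri-run.sh ghcid
# Assumptions: the project uses the usual lorri + direnv setup, so
# `direnv exec . CMD` runs CMD in the current (possibly cached) environment,
# and the stream_events JSON contains "Completed" on a successful build.
set -euo pipefail

[ "$#" -ge 1 ] || { echo "usage: $0 CMD [ARGS...]" >&2; exit 1; }
cmd=("$@")

start() {
  direnv exec . "${cmd[@]}" &
  pid=$!
}

start
while read -r event; do
  if printf '%s' "$event" | grep -q '"Completed"'; then
    # A new build is ready: stop the running command and start it again so
    # it picks up the freshly built environment.
    kill "$pid" 2>/dev/null || true
    wait "$pid" 2>/dev/null || true
    start
  fi
done < <(lorri internal stream_events)
```

It doesn't debounce repeated builds or handle a command that exits on its own, which is exactly the kind of thing a real implementation would have to deal with.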

@curiousleo, I took a look at your PR and can see how complex it would be to have a proper services command. Would a cut down version under a run command be acceptable?

There are simplifications that can be made, no doubt. I have the feeling that this is the kind of feature that will always have a tendency to grow in the direction of a full service manager.

Otherwise, I can always try out @Profpatsch's idea.

I think this is worth trying. It's more Unix-y, and we could use feedback on event streams and, more generally, on how easy or hard it is to integrate lorri with other tools.

I’m using the s6 approach in a production project and it’s working quite nicely (though I don’t have any automatic restart set up; I just do it manually when necessary).

One approach to this might be to support a project-local config file like .lorrirc and allow the user to specify hook commands in there: simply shell commands executed by lorri in response to certain events. That way, one person could use a hook command to restart locally running services, and another could ping their editor to reload the direnv (thinking slightly about Emacs here).

I feel like this might help keep lorri small and focused, as opposed to worrying about things like lorri services or streaming events, but of course it's not without some complexity. For example: what happens when a hook command blocks? Should hooks be executed in the new environment, or be responsible for querying direnv themselves? Structured information could potentially be passed to the hook commands as additional command-line arguments.
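Purely as an illustration, a user-supplied hook under that model might look like the script below; none of the lorri-side plumbing exists, and the event kind arriving as "$1" is exactly the kind of structured argument speculated about above.

```bash
#!/usr/bin/env bash
# Hypothetical hook script that lorri would run on each event.
# Passing the event kind as "$1" is an assumption about how lorri might
# hand structured information to hooks; it does not exist today.
set -euo pipefail

event_kind="${1:-}"

case "$event_kind" in
  completed)
    # Restart a locally running, s6-supervised service.
    s6-svc -t ./service/my-tool
    # Nudge a running Emacs to re-read the direnv for this project
    # (assumes the emacs-direnv package on the editor side).
    emacsclient --eval '(direnv-update-environment)' || true
    ;;
  failure)
    notify-send "lorri" "build failed" || true
    ;;
esac
```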