"... &" background statement cannot work with functions?
sdothum opened this issue · 26 comments
Does the "... &" statement only work with simple commands and not functions? For example, if I have defined
function q; echo $argv; sleep 10s; echo $argv; end
and execute
q 1 &; q 2 &; q 3 &
the output is
1
1
2
2
3
3
Job 3, “q 3 &” has ended
Job 2, “q 2 &” has ended
Job 1, “q 1 &” has ended
with the observed performance (delay) indicating that each successive "q" command is not executed until the previous one has completed. In POSIX sh I quite commonly use "(sleep [some delay] && [command]) &" in startup scripts, e.g. for window managers, to allow for the initialization requirements of certain apps.
If there is a correct "fish" way of spawning multiple background processes, my second question is: is there a way to spawn a block of commands in background similar to POSIX "(command1 && command2 && command3...) &". I tried "begin; ...statements...; end &" but fish does not like that!
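(One rough equivalent of that POSIX pattern, which later comments in this thread converge on, is to hand the whole block to a child fish; a sketch, with command1/command2/command3 standing in for the actual commands:)
# Roughly "(command1 && command2 && command3) &" from POSIX sh: the whole
# block runs in a child fish, so the current shell is not blocked.
# Note the child fish starts fresh, so it only sees functions defined in config files.
fish -c 'command1; and command2; and command3' &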
On Linux, executing "vi &" doesn't send vi to the background, and at exit I get the following message:
Job 1, “vi &” has ended
Is anyone else able to reproduce it ?
fish probably doesn't spawn a subshell for functions executed as background jobs. If it did, those jobs probably could not do things like update global variables.
$ vi &
Job 1, “vi &” has stopped
Seems to work correctly here using fish packaged in debian:
Version: 1.23.1+20120106.git8b407a3-1
Could be a regression?
I had written an alias-like function named "vi" to invoke the vi editor, so I was not able to invoke vi in the background. This should be fixed; we need to add support for executing functions in the background.
This bit me in a rather confusing way just now.
Several weeks ago I defined an alias: alias redis="redis-server /usr/local/etc/redis.conf"
Today, I was trying to run redis and elasticsearch together in the same console, both as background jobs, so that their output would be in the same shell, and typed redis & elasticsearch &. Instead of what I was expecting, I got just the redis server, and on exit it spawned elasticsearch as a background job.
I understand what you're saying about subshells for functions being overkill, but I think that, as with the !! syntax, if there is a way to achieve this functionality without changing things, it'll meet the expectations of more people migrating from bash/zsh, because you can background just about everything in bash.
Thanks again for all your work on this awesome shell!
This is a known problem, see also issue #563.
Unlike most other shells, fish is a multithreaded program, so it cannot simply be forked into a new subshell without coordinating the multiple threads. That could be done, but no one has taken the time to do so. The other (preferable) solution is to have background functions execute in different threads; this should be possible but would also require some work to implement.
Ah. Well that would also explain #1040 then, I think.
Hm, what if calls to builtins, like set, were synchronized? @ridiculousfish Then subshells could update global/universal variables but allow everything else to run concurrently, or am I missing something else?
@maxnordlund The biggest obstacle is that fish uses the C stack for builtins, functions, and command substitutions. Therefore executing two of these concurrently means multiple threads, multiple processes, or something like coroutines.
I have a branch that tries to support multithreaded execution, but it's very complex so I'm not optimistic that it will work.
I was leaning towards multiple threads, and come to think of it, maybe it would be easier if script execution always ran in the background. Every time it needed a builtin it would synchronize/send a message to the main thread to do the work.
Sort of like most GUI frameworks keep one thread responsible for rendering, and you always have to ask it if you need an update.
Fish shell already uses libevent, right? Couldn't we keep it single threaded, but switch builtin and function execution to an asynchronous or coroutine model? That solves the global state atomicity issue- only one piece of code is ever running at a time, so changes to variables, working directory, etc all happen atomically. Once all pipelines are converted to true unix pipes, rather than being fully buffered between functions (see #936), the pipe's file descriptors can be selected over in the libevent event loop, allowing for single-threaded concurrency in the same vein as node.js or python's asyncio.
We should probably assume that users who are backgrounding fish functions are aware of and accept the possibility of global state mutation, rather than attempting to simulate multiple working directories (for instance, with openat). This is already the behavior of background jobs that write to stdout. One of the main fish features is that it never forks except when launching a program, so cd and set and related commands always work as expected. Therefore, when a user writes:
function contrived
    sleep 5
    set -g MY_VAR hello
    cd $argv[1]
end
contrived /another/directory
The user clearly wants the current shell to switch to /another/directory and set MY_VAR globally to hello after 5 seconds. We should keep this behavior for background jobs: if the user instead does contrived /another/directory &, the working directory and variable should still be changed after 5 seconds, even though it happened in the background. This is the same as background jobs sharing stdout/stderr with the foreground. Fish doesn't currently have a mechanism for a "local" working directory, in the same way it has "local" variables; that should probably be a separate (though related) discussion. cd -l, anyone?
No, fish does not use libevent.
Making the internals thread safe is part of it. There's also the issue of replicating process state. For example, we need to have multiple variable stacks, since two separately executing functions need independent local variables. We need separate values of $status. We need to have separate notions of "last backgrounded process" since functions may use that. We need separate working directories; I don't think it's acceptable to prohibit cd for two simultaneously executing functions.
I did some work on this in the multithreaded_execution branch, but I think as it stands it adds too much complexity to merge, and if we want to go down this path, it will involve a lot of rewriting.
Are there any plans to add multithreading support to functions in the near future?
Since an alias is actually a function, a large piped 'grep' can take forever... (as 'grep' is usually an alias)
Are there any plans to add multithreading support to functions in the near future?
No. With 530 open issues as I type this, patches are welcomed.
PowerShell just released proper backgrounding, hopefully fish gets it soon! :)
Until then, this hack is still working well for me (though backgrounded tasks don't have access to local vars from the current shell):
function background --description "Run a fish command in the background using a bash subshell"
    bash -c "fish -c '""$argv""' &"
end
background "sleep 5; echo hi"
@Lucretiel notice the & on the end is part of the bash command. Bash provides backgrounded subshells with consistent behaviour, fish does not.
Edit: maybe I misunderstood, did you mean call fish -c 'command' & from within fish? It produces a line: 'fish -c 'sleep 10; echo hi' &' has ended which I'd rather suppress.
Bash provides backgrounded subshells with consistent behaviour, fish does not.
I have no idea where you got that idea. What exactly is wrong with simply using fish to do this? This is commonly known as the double-fork trick and it works fine with fish.
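A minimal sketch of that pattern with fish as the outer wrapper, which I take to be the suggestion here:
# The outer fish backgrounds the inner job and exits immediately, so the
# interactive shell never tracks the job and prints no job messages.
fish -c "fish -c 'sleep 10; echo hi' &"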
@krader1961, sorry, I mistakenly thought that because fish had issues backgrounding functions, I wouldn't be able to trust it to background other commands consistently either.
fish -c '...' & seems to work fine, although I'm going to keep using the bash double-fork because I don't want the 'fish -c 'sleep 10; echo hi' &' has ended line when fish bg jobs finish.
I think it is acceptable that functions cannot be put in the background, but it should be better communicated to the user. Now the function just runs in the foreground anyway, even though the user requested the background: it blocks the shell, and in the end the user sees Job 1, 'hello &' has ended even though there never was a job running in the background.
I think in the case of a request hello &, where hello is a function, there should be a syntax error or other message that tells the user that this command is a function and won't run in the background.
What do you think about this? Fish stands for a friendly shell: it should tell the user what it does, in my humble opinion.
You can work around the "has ended" message by putting the job start in a block: begin; sleep 5 &; end
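For instance, a sketch combining that tip with the fish -c approach from above:
# The enclosing block suppresses the "Job N ... has ended" message
# when the backgrounded job finishes.
begin; fish -c 'sleep 10; echo hi' &; end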
@sabinem I think it's also very important to bring the user's attention to abbreviations. This is a feature not available in other shells; most new users (and this is what happened to me) just copy their alias.sh to alias.fish, source it, and assume everything is going to be okay. Then they run into issues such as #238 and assume their workflow is going to be totally disrupted in fish.
I think an even better way to deal with the alias situation is to add --silent functionality to abbreviations and replace aliases with those instead of functions. This should be easier to implement than running functions in the background and would better mimic the alias functionality in other shells.
A message such as this would be nice when running an alias command with &.
I think it's important to bring the user's attention to abbreviations when using an alias, but maybe not so much if the user uses a regular function, so there should be a way for fish to distinguish between alias functions and normal ones, maybe by adding a specific description for functions created by alias?
For normal functions we can just tell the user that functions can't be put in the background.
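As a rough sketch of the abbreviation idea, using the redis alias from earlier in this thread (note that today's abbr has no --silent option, and abbreviations only expand interactively):
# Typing "redis &" now expands to the full command line before it runs,
# so the external redis-server process itself is backgrounded
# rather than a wrapper function.
abbr -a redis 'redis-server /usr/local/etc/redis.conf'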
If it saves anyone 10 minutes working out the correct way to quote, I believe this works (or will happily be corrected) as a workaround:
function bgFunc
    fish -c (string join ' ' (string escape $argv)) &
end
A slight improvement on @stellarpower's wonderful bgFunc wrapper above:
function bgFunc
    fish -c (string join -- ' ' (string escape -- $argv)) &
end
(I happen to have arguments starting with --, which would get passed to string escape and string join as their own arguments instead of as string input if not using -- to explicitly mark the end of the arguments.)
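Usage is then just passing the command as ordinary words, e.g.:
# Each argument is escaped and re-joined into one command string, which a
# child fish runs in the background (so it won't see functions or variables
# defined only in the current session).
bgFunc sleep 3
bgFunc echo hello world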
A message such as this would be nice when running an alias command with &. I think it's important to bring the user's attention to abbreviations when using an alias, but maybe not so much if the user uses a regular function, so there should be a way for fish to distinguish between alias functions and normal ones, maybe by adding a specific description for functions created by alias? For normal functions we can just tell the user that functions can't be put in the background.
This wrongly assumes all functions are aliases.