Stuck after killing X
git-bruh opened this issue · 10 comments
Having a weird issue running `libudev-zero` with `mdev`:

- Run `startx`
- Switch to tty2, run `killall X`:
  - if `libudev-zero` is used, X is killed and nothing more happens; if I try to switch to tty1 (Alt + left arrow), input gets stuck completely and I have to hard reboot
  - if `eudev` is used, X is killed and I'm thrown to tty1, input keeps working, and `startx` can be run again
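The same steps as a condensed shell sketch (tty switching done with the usual Ctrl+Alt+Fn / Alt+arrow keys):

```sh
# on tty1
startx
# switch to tty2 and kill the X server
killall X
# switch back to tty1:
#   libudev-zero: input is stuck completely, hard reboot needed
#   eudev:        dropped back to tty1, input works, startx runs again
```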
`~/.xinitrc`:

```
feh --bg-center ~/dwm_nord.png &
slstatus &
exec dwm
```
Xorg log after killing (`eudev`):

```
[ 90.138] (II) UnloadModule: "libinput"
[ 90.146] (II) NVIDIA(GPU-0): Deleting GPU-0
[ 90.147] (II) Server terminated successfully (0). Closing log file.
```
These lines seem to be missing from the log when `libudev-zero` is used; maybe something needs to be done to `mdev.conf`? The only thing I've done is add `SUBSYSTEM=input;.* root:input 660 *env > /tmp/.libudev-zero/uevent.$$` to the existing `mdev.conf`. I'm on the proprietary NVIDIA drivers, if that helps.
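For reference, the rule in `mdev.conf` form (only this line was added; any surrounding rules are the distro defaults):

```
# dump the hotplug environment (the uevent) where libudev-zero can pick it up
SUBSYSTEM=input;.* root:input 660 *env > /tmp/.libudev-zero/uevent.$$
```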
Thanks!
I can't reproduce (Intel i915 driver). Try using suid on the Xorg binary instead of the rootless patch.
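Setting the suid bit would look something like this (a sketch; the Xorg binary path is an assumption and varies by distro):

```sh
# run as root; adjust the path to wherever the real Xorg binary lives
chown root:root /usr/bin/Xorg
chmod u+s /usr/bin/Xorg
```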
Tried setting suid on `Xorg`, booted with `nouveau` as well. Same issue :/
Did you experience high CPU usage while hanging in the tty?
No, CPU usage seems to be normal
So it turns out what actually happens is that when I do `killall X`, it doesn't get killed; I have to kill it with SIGKILL. If I switch to tty1, my input isn't stuck, it's just that the display isn't updated for some reason (it appears as if I'm still stuck in tty2, since nothing on the screen gets updated anymore), so if I blindly type `startx` after switching to tty1, it works again.
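In command form, a sketch of what's described above (the process name `X` is what `killall` matches in this setup):

```sh
killall X      # sends SIGTERM; with libudev-zero the server keeps running
killall -9 X   # SIGKILL is what actually terminates it
```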
That's odd... What xorg-server version do you have? Try to downgrade to 1.20.9 and see if it works.
I'm on `1.20.10`, but downgrading to `1.20.9` does not help either 🤔
You're on glibc, right? Can you try to reproduce this bug on musl?
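As an aside, one quick way to check which libc a system is running (a general check, not from the thread):

```sh
ldd /bin/ls   # glibc systems list libc.so.6; musl systems list ld-musl-*.so.1
```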
So glibc {nouveau,nvidia} is broken, but musl + nouveau works fine... maybe it could be an issue with some other package? I doubt the libc would be related here; will investigate some more.
I was able to reproduce this bug. Will fix soon
Works fine now, thanks for the fix! 🎉