Cloudef/wlc

RFC: Buffer API autodetection


Currently wlc always chooses GBM as the default buffer API unless the WLC_USE_EGLDEVICE environment variable is set to a truthy value, in which case the EGLStreams buffer API is chosen.
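For illustration only, a minimal sketch of that selection logic (not the actual wlc code; treating any non-empty value other than "0" as truthy is an assumption):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch, not wlc's implementation: GBM is the default
 * buffer API, and EGLStreams is picked when WLC_USE_EGLDEVICE is truthy. */
static bool wants_egldevice(void)
{
   const char *env = getenv("WLC_USE_EGLDEVICE");
   return env && *env && strcmp(env, "0") != 0;
}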

This means that when running wlc under the proprietary NVIDIA drivers you have to set the WLC_USE_EGLDEVICE environment variable before launching it. While not a big issue, this is a little inconvenient, and it's always nice when a program runs without having to do any configuration whatsoever.

Would you be open to accepting a PR which:

  • removes the WLC_USE_EGLDEVICE environment variable config option
  • adds a WLC_BUFFER_API environment variable config option with the possible values GBM and EGL, which forces the buffer API to the given type
  • implements some form of auto-detection for the buffer API, so that wlc would run on the proprietary NVIDIA drivers without any configuration (I'm thinking of something simple like checking whether the nvidia kernel module is loaded, similar to the way sway does it; a rough sketch follows, but other suggestions are welcome)
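For illustration, a rough sketch of that module check, mirroring the /proc/modules scan that sway's detect_proprietary does (this is an assumption about the eventual implementation, not existing wlc code):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: report whether the proprietary nvidia kernel
 * module shows up in /proc/modules, similar to sway's detect_proprietary. */
static bool nvidia_module_loaded(void)
{
   FILE *f = fopen("/proc/modules", "r");
   if (!f)
      return false;

   char line[256];
   bool found = false;
   while (fgets(line, sizeof(line), f)) {
      /* Each line begins with the module name followed by a space. */
      if (!strncmp(line, "nvidia ", 7)) {
         found = true;
         break;
      }
   }

   fclose(f);
   return found;
}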

Sounds ok. Not sure about the auto-detection. It's not hard to export WLC_BUFFER_API=EGL in env. But if you think it is worth it, go for it.

👍 for auto-detection, or we're gonna get a truckload of annoying support requests

@Cloudef Great. In that case I'll try to prepare a PR tomorrow if I have time.

@SirCmpwn If you're okay with it, I'll mostly just copy-paste and adapt your detect_proprietary function. But isn't there a function somewhere in the C standard library that does what read_line does, so I don't have to copy that too? (I'm asking because I don't know C, and that function seems generic enough that it might be in the standard library.)

No, there's not.

If you are going to do auto-detection you should do it the right way.

Checking the module does not help at all on systems where multiple cards are present, and it may break systems that currently work just fine, where the card actually used by wlc supports GBM buffers.

I do not know what's the best way to get the loaded module of a device in code (probably via udev), but you can find it out quite easily with some shell commands, so it should not be too difficult.

Assuming the device used is /dev/dri/card0 (which is the default, if I am not mistaken, unless WLC_DRM_DEVICE is set):

$ udevadm info --query=all -n /dev/dri/card0 | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:02.0/drm/card0
$ readlink /sys/devices/pci0000:00/0000:00:02.0/drm/card0/device/driver
../../../bus/pci/drivers/i915
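In code, the same lookup could be done without shelling out by resolving that sysfs symlink directly. A rough sketch, assuming the /sys/class/drm/<card>/device/driver layout shown above:

#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch: find the kernel driver bound to a DRM card by
 * reading its sysfs "driver" symlink, e.g.
 * /sys/class/drm/card0/device/driver -> .../drivers/i915 */
static int drm_card_driver(const char *card, char *out, size_t out_size)
{
   char path[PATH_MAX], target[PATH_MAX];
   snprintf(path, sizeof(path), "/sys/class/drm/%s/device/driver", card);

   const ssize_t len = readlink(path, target, sizeof(target) - 1);
   if (len < 0)
      return -1;

   target[len] = '\0';
   /* The basename of the link target is the driver name, e.g. "i915". */
   snprintf(out, out_size, "%s", basename(target));
   return 0;
}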

The crude detect_proprietary function was fine for a warning because it did not affect functionality, but please don't break this for existing users. Not everybody has just a single graphics card.

EDIT: I know I can always just set WLC_BUFFER_API, but @SirCmpwn is already concerned about support requests, so...

Should dig into udev and see how it does that, rather than depend on udev specifics.

Right, that was just a quick example.

I took a stab at it in #248 using the drmGetVersion function, which seems to return the driver used by the DRM device.
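For reference, a minimal sketch of what such a check could look like with libdrm (link with -ldrm; matching against the "nvidia-drm" driver name is my assumption, not necessarily what #248 does):

#include <fcntl.h>
#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <xf86drm.h>

/* Hypothetical sketch: query the kernel driver name behind a DRM node
 * via drmGetVersion and choose EGLStreams for the proprietary driver. */
static bool device_wants_eglstreams(const char *path)
{
   const int fd = open(path, O_RDWR | O_CLOEXEC);
   if (fd < 0)
      return false;

   bool nvidia = false;
   drmVersionPtr version = drmGetVersion(fd);
   if (version) {
      /* version->name is the kernel driver name, e.g. "i915", "amdgpu",
       * or "nvidia-drm" for the proprietary NVIDIA driver. */
      nvidia = !strcmp(version->name, "nvidia-drm");
      drmFreeVersion(version);
   }

   close(fd);
   return nvidia;
}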

@Drakulix @SirCmpwn Should that be robust enough? Do you know of any scenarios for which that doesn't work reliably?

If there are any, they should only be relevant for devices using nvidia-drm, so at least my original point no longer applies, and I cannot think of any other scenarios.

Thanks for figuring out how to do this! Highly appreciated.

Fixed in a7a3db8