fsquillace/junest

Trying to import Hardware Acceleration / OpenGL from host to guest using environment variables to bind

ivan-hc opened this issue · 11 comments

Hi @fsquillace, as we have already discussed elsewhere, I'm creating JuNest-based AppImages in a project I've named "ArchImage".

As you have said at #342 :

I guess that's tricky and not necessarily it depends on JuNest itself. Drivers are only needed for the host linux kernel which is outside JuNest. If drivers are not installed by the host, JuNest cannot do much for it.

And as @ayaka14732 has suggested at #209, it's necessary to build the same Nvidia drivers on the guest.

It almost seems like there is nothing that can be done, but solutions arise out of nowhere. For now I have managed to mount some components of the host system on the guest using various environment variables in my tests.

I plan to share them here to start a discussion about it.

I would also like to mention other contributors to this project who certainly know more than I do, and invite them to participate in this research: @cosmojg @cfriesicke @escape0707 @schance995 @neiser @hodapp512 @soraxas. Here is what I'm working on.

These are just some functions I'm working on to make my Bottles-appimage work... for now without great progress, other than the detection of some libraries on my host system. All of them are listed at https://github.com/ivan-hc/Bottles-appimage/blob/main/AppRun

NOTE: my host system is Debian, so paths may vary depending on your distribution.

Detect whether the host runs an AMD / Intel / Nvidia driver by checking the "Vendor"

# FIND THE VENDOR
VENDOR=$(glxinfo -B | grep "OpenGL vendor")
if [[ $VENDOR == *"Intel"* ]]; then
	export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/intel_icd.i686.json:/usr/share/vulkan/icd.d/intel_icd.x86_64.json"
	VENDORLIB="intel"
	export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif [[ $VENDOR == *"NVIDIA"* ]]; then
	# join the newline-separated "find" results into a colon-separated list
	NVIDIAJSON=$(find /usr/share -name "*nvidia*json" | tr '\n' ':')
	export VK_ICD_FILENAMES=$NVIDIAJSON
	VENDORLIB="nvidia"
	export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif [[ $VENDOR == *"Radeon"* ]]; then
	export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/radeon_icd.i686.json:/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"
	VENDORLIB="radeon"
	export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
fi

Find libraries on the host

DRIPATH=$(find /usr/lib -name dri)
VDPAUPATH=$(find /usr/lib -maxdepth 2 -name vdpau)
export LIBVA_DRIVERS_PATH=$DRIPATH
export GLPATH=/lib:/lib64:/lib/x86_64-linux-gnu:/usr/lib
export VULKAN_DEVICE_INDEX=1
export __GLX_VENDOR_LIBRARY_NAME=mesa

# Print every host library path that may be relevant for hardware acceleration
function _host_accelleration(){
	find /usr/lib -name "*LLVM*"
	find /usr/lib -name "*mesa*.so*"
	find /usr/lib -name "*d3d*.so*"
	find /usr/lib -name "libEGL*" | grep -v "libEGL_mesa"
	find /usr/lib -name "*stdc*.so*"
	find /usr/lib -name "*swrast*"
	find /usr/lib -name "*vulkan*"
}

In the following step I list all the libraries found above in a file in ~/.cache (this step may be slow on some systems).
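
A minimal sketch of how that caching step could look (the file name hostdri2junest matches the one read back in the next snippet; whether to regenerate it on every run or only when missing is a matter of taste):

# cache the list of host acceleration libraries so the "find" scan is not repeated at every launch
CACHEFILE="$HOME/.cache/hostdri2junest"
if [ ! -f "$CACHEFILE" ]; then
	_host_accelleration | sort -u > "$CACHEFILE"
fi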

What to bind?

ACCELL_DRIVERS=$(_host_accelleration | tr '\n' ':')
BINDLIBS=$(sort -u "$HOME/.cache/hostdri2junest" | tr '\n' ':')

rm -f "$HOME"/.cache/libbinds "$HOME"/.cache/libbindbinds
echo "$ACCELL_DRIVERS" | tr ":" "\n" >> "$HOME"/.cache/libbinds
echo "$BINDLIBS" | tr ":" "\n" >> "$HOME"/.cache/libbinds
# for each library, pair the host path with the same path stripped of the Debian multiarch directory;
# libbindbinds holds that list reformatted as bwrap --bind options
for arg in $(cat "$HOME"/.cache/libbinds); do
	echo "$arg $(echo "$arg" | sed 's#/x86_64-linux-gnu##g' | cut -d/ -f1,2,3)" >> "$HOME"/.cache/libbindbinds
done
sed -i -e 's#^#--bind / / --bind #' "$HOME"/.cache/libbindbinds

BINDS=$(tr "\n" " " < "$HOME"/.cache/libbinds)

EXTRA: trying to mount libLLVM host/guest (I've disabled this for now)

HOST_LIBLLVM=$(find /usr/lib -name "*libLLVM*" | grep -v ".so.")
JUNEST_LIBLLVM=$(find $JUNEST_HOME/usr/lib -name "*libLLVM*" | grep -v ".so.")

All I've done then is recreate, inside the AppImage (in my use case), the structure of the directories to bind; for those who use JuNest normally, the directories should be mounted automatically, like this.

Where $HERE is the directory containing the script

HERE="$(dirname "$(readlink -f "$0")")"

and $EXEC is the name of the program I take from the "Exec=" entry in its .desktop file

EXEC=$(grep -e '^Exec=.*' "${HERE}"/*.desktop | head -n 1 | cut -d "=" -f 2- | sed -e 's|%.||g')

here is how a command with namespaces looks (in my experiments):

function _exec(){
	if [[ $VENDOR == *"NVIDIA"* ]]; then
		$HERE/.local/share/junest/bin/junest -n -b "$BINDS\
			--bind /usr/lib/ConsoleKit $JUNEST_HOME/usr/lib/ConsoleKit\
			--bind $DRIPATH $JUNEST_HOME/usr/lib/dri\
			--bind /usr/libexec $JUNEST_HOME/usr/libexec\
			--bind /usr/lib/firmware $JUNEST_HOME/usr/lib/firmware\
			--bind /usr/lib/modules $JUNEST_HOME/usr/lib/modules\
			--bind /usr/lib/nvidia $JUNEST_HOME/usr/lib/nvidia\
			--bind /usr/lib/systemd $JUNEST_HOME/usr/lib/systemd\
			--bind /usr/lib/udev $JUNEST_HOME/usr/lib/udev\
			--bind $VDPAUPATH $JUNEST_HOME/usr/lib/vdpau\
			--bind /usr/lib/xorg $JUNEST_HOME/usr/lib/xorg\
			--bind /usr/share/bug $JUNEST_HOME/usr/share/bug\
			--bind /usr/share/dbus-1 $JUNEST_HOME/usr/share/dbus-1\
			--bind /usr/share/doc $JUNEST_HOME/usr/share/doc\
			--bind /usr/share/egl $JUNEST_HOME/usr/share/egl\
			--bind /usr/share/glvnd $JUNEST_HOME/usr/share/glvnd\
			--bind /usr/share/lightdm $JUNEST_HOME/usr/share/lightdm\
			--bind /usr/share/lintian $JUNEST_HOME/usr/share/lintian\
			--bind /usr/share/man $JUNEST_HOME/usr/share/man\
			--bind /usr/share/nvidia $JUNEST_HOME/usr/share/nvidia\
			--bind /usr/share/vulkan $JUNEST_HOME/usr/share/vulkan\
			--bind /usr/src $JUNEST_HOME/usr/src\
			" -- $EXEC "$@"
	else
		$HERE/.local/share/junest/bin/junest -n -b "\
			--bind $DRIPATH $JUNEST_HOME/usr/lib/dri\
			--bind /usr/libexec $JUNEST_HOME/usr/libexec\
			--bind /usr/lib/modules $JUNEST_HOME/usr/lib/modules\
			--bind /usr/lib/xorg $JUNEST_HOME/usr/lib/xorg\
			--bind /usr/share/dbus-1 $JUNEST_HOME/usr/share/dbus-1\
			--bind /usr/share/glvnd $JUNEST_HOME/usr/share/glvnd\
			--bind /usr/share/vulkan $JUNEST_HOME/usr/share/vulkan\
			--bind /usr/src $JUNEST_HOME/usr/src\
			" -- $EXEC "$@"
	fi
}
_exec

For now the result is that many of the error messages I had previously are gone.

I have NOT reached my goal, but I'm close to a solution.

Are there any pieces missing, or did I perhaps add too many in my attempt?

I can't say myself; surely some of you can do better.

This research would not have been possible without:

A special thanks to @mirkobrombin, who pointed me in the right direction... I'm trying to finish this journey. I hope not alone.

Update:

  • I have created an empty file "nvidia_dri.so" in the /usr/lib of JuNest (a sketch of this workaround follows the error message below)
  • I've used the option --bind /usr/lib/nvidia/nvidia_drv.so $JUNEST_HOME/usr/lib/dri/nvidia_dri.so
  • Where before MESA was unable to detect /usr/lib/dri/nvidia_dri.so, now it sees my driver and I get this error message:
MESA-LOADER: failed to open nvidia: /usr/lib/dri/nvidia_dri.so: file too short (search paths /usr/lib/dri, suffix _dri)
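
A minimal sketch of that workaround, assuming the empty placeholder is created at the bind destination used above:

# create an empty placeholder inside JuNest so there is a target file to bind over
mkdir -p "$JUNEST_HOME/usr/lib/dri"
touch "$JUNEST_HOME/usr/lib/dri/nvidia_dri.so"
# then, at launch time, bind the host driver over it:
#   --bind /usr/lib/nvidia/nvidia_drv.so $JUNEST_HOME/usr/lib/dri/nvidia_dri.so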

NOTE: I started the program "Bottles" both with and without all the options listed in my first comment.

I also created an empty file and a directory in /dev and mounted them like this:

...
--bind /dev/dri $JUNEST_HOME/dev/dri\
--bind $DEV_NVIDIA $JUNEST_HOME/dev/nvidia\
...

where

DEV_NVIDIA=$(find /dev -name "nvidia*[0-9]*" 2> /dev/null | head -1)

and

HERE="$(dirname "$(readlink -f $0)")"
JUNEST_HOME=$HERE/.junest

All details of my script are available at https://github.com/ivan-hc/Bottles-appimage/blob/main/AppRun

Of course no 64-bit game was able to run... but now that JuNest can see the Nvidia loader, I have hope.

Is there a way to export the content of a directory to the guest? For example...

export VDPAU_LIBRARY_PATH=/usr/lib/vdpau

or something?
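
If a bind is acceptable instead of an environment variable, a minimal sketch of exposing a whole host directory to the guest could look like this (the vdpau directory and the vdpauinfo test command are just examples):

# expose the host's VDPAU driver directory inside JuNest with a single bind
VDPAUPATH=$(find /usr/lib -maxdepth 2 -name vdpau | head -1)
"$HERE"/.local/share/junest/bin/junest -n -b "--bind $VDPAUPATH $JUNEST_HOME/usr/lib/vdpau" -- vdpauinfo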

UPDATE

# FIND THE VENDOR
VENDOR=$(glxinfo -B | grep "OpenGL vendor")
if echo "$VENDOR" | grep -q "Intel"; then
	export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/intel_icd.i686.json:/usr/share/vulkan/icd.d/intel_icd.x86_64.json"
	VENDORLIB="intel"
	export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif echo "$VENDOR" | grep -q "NVIDIA"; then
	export VK_ICD_FILENAMES=$(find /usr/share -name "*nvidia*json" | tr "\n" ":" | rev | cut -c 2- | rev)
	VENDORLIB="nvidia"
	export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif echo "$VENDOR" | grep -q "Radeon"; then
	export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/radeon_icd.i686.json:/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"
	VENDORLIB="radeon"
	export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
fi

and

EXEC=$(grep -e '^Exec=.*' "${HERE}"/*.desktop | head -n 1 | cut -d "=" -f 2- | sed -e 's|%.||g')

if echo "$VENDOR" | grep -q "NVIDIA"; then
	echo "NVIDIA"
	$HERE/.local/share/junest/bin/junest -n -b "$ETC_RESOLV\
		--bind $(find /usr/lib -name "libEGL.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "libEGL.so*" -type f)\
		--bind $(find /usr/lib -name "libGLESv2*" -type f) $(find $JUNEST_HOME/usr/lib -name "libGLESv2*" -type f)\
		--bind $(find /usr/lib -name "*libEGL_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libEGL_mesa*.so*" -type f)\
		--bind $(find /usr/lib -name "*libGLX_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libGLX_mesa*.so*" -type f)\
		--bind $(find /usr/lib -name "*zink*_dri.so*" -type f) $(find $JUNEST_HOME/usr/lib/dri -name "*zink*_dri.so*" -type f)\
		--bind $(find /usr/lib -maxdepth 2 -name vdpau) $(find $JUNEST_HOME/usr/lib -maxdepth 2 -name vdpau)\
		--bind $(find /usr/lib -name "*nvidia*drv.so*" -type f) /usr/lib/dri/nvidia_dri.so\
		--bind $(find /usr/lib -name "*libvdpau_nvidia.so*" -type f) /usr/lib/libvdpau_nvidia.so\
		" -- $EXEC "$@"
else
	$HERE/.local/share/junest/bin/junest -n -b "$ETC_RESOLV\
		--bind $(find /usr/lib -name "libEGL.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "libEGL.so*" -type f)\
		--bind $(find /usr/lib -name "libGLESv2*" -type f) $(find $JUNEST_HOME/usr/lib -name "libGLESv2*" -type f)\
		--bind $(find /usr/lib -name "*libEGL_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libEGL_mesa*.so*" -type f)\
		--bind $(find /usr/lib -name "*libGLX_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libGLX_mesa*.so*" -type f)\
		--bind $(find /usr/lib -maxdepth 2 -name vdpau) $(find $JUNEST_HOME/usr/lib -maxdepth 2 -name vdpau)\
		" -- $EXEC "$@"
fi

The error message has changed. Where before I had a missing zink_dri.so (now I've mounted it, see above), I now get an error about a missing intel_dri.so, but mounting it does not change the fact that hardware acceleration is off.

@fsquillace I think I'm close to a solution.

We should perform some tests by exporting two environment variables:

  • LD_LIBRARY_PATH must contain the library paths of both JuNest and the host
  • LD_PRELOAD should point to the needed libraries inside JuNest (libc.so.6, ld-linux-x86-64.so.2, libpcre2-8.so.0... also libselinux.so.1 must be installed in JuNest); a sketch of what I mean follows this list
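
A minimal sketch of the kind of exports I mean (the host multiarch path is Debian-specific, and the preload list is only an example):

# make both JuNest's and the host's libraries visible to the dynamic loader
export LD_LIBRARY_PATH="$JUNEST_HOME/usr/lib:/usr/lib/x86_64-linux-gnu:/usr/lib:$LD_LIBRARY_PATH"
# force the loader to resolve the core libraries from the JuNest copies first
export LD_PRELOAD="$JUNEST_HOME/usr/lib/libc.so.6:$JUNEST_HOME/usr/lib/libpcre2-8.so.0:$JUNEST_HOME/usr/lib/libselinux.so.1"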

I've done a brief test, but I got stuck because I don't know which environment variable points to the LD configuration file in /etc... it would be helpful if you didn't leave me to test all this stuff alone.

EDIT: it's enough to add these environment variables in the appropriate place in JuNest itself; we don't need an AppRun script like the one I use in my tests. This should be a built-in feature of JuNest, not an external one.

@fiftydinar join the issue

It seems I'm the only one here who is interested in this topic. I am sorry.

Since my interest in implementing hardware acceleration in JuNest is primarily driven by my need to get AppImage packages working and exportable to various distributions, I think I will continue to address the problem on my "ArchImage" repository.

Those interested in contributing, please join and discuss it at ivan-hc/ArchImage#20

@fsquillace I still can't stop thanking you for the amazing job you've done so far. I hope you will soon return to your project, which has nothing to envy compared to other projects.

@ivan-hc thanks for the kind words. Unfortunately I have little time to invest in this specific feature. As of now, I am mostly maintaining JuNest for bug fixes and little improvements, as I do not have time to invest in larger features like this one. Having said that, I'd like to keep JuNest as simple as possible, hence I am not yet sure that hardware acceleration support is strictly needed as part of the junest project, as it can be something that anyone can build on top of it. I am trying to preserve simplicity over maintenance of large features, following the Arch way. This is also because having a clearer responsibility makes things easier to maintain. The more code we add to such a little project, the harder its maintenance will be.

@ivan-hc thanks for the kind words

@fsquillace I'm just telling the truth :)

I'd like to keep JuNest as simple as possible

I totally agree with this. Maybe my research can be useful to add something to the readme file, as a troubleshooter.

Someone who wanted to use hardware acceleration without having to reinstall all the drivers in JuNest could simply install a few packages and export an environment variable, for example along the lines of the sketch below.
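
Purely as a hypothetical example of what such a note could suggest (the package set and the Debian multiarch path are placeholders, not a tested recipe):

# inside JuNest: install the userspace graphics bits once
pacman -S --needed mesa vulkan-icd-loader libselinux
# before launching the application: let the loader also see the host driver libraries
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:/usr/lib:$LD_LIBRARY_PATH"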

As soon as I find out something, I will contact you to have this detail added to your README. Your project is too close to my heart. It's thanks to you that I was able to transform the impossible into AppImages.

Maybe my research could be useful to add something to the readme file, as a troubleshooting section.

Yeah, maybe the junest wiki could be a good place.

btw, maybe something useful for you is that someone else wrote a blog about JuNest. Step 3 shows how to configure GPU drivers:
https://medium.com/@ayaka_45434/installing-packages-on-linux-without-sudo-privilege-using-junest-5fe7523c9d86

Yeah, maybe the junest wiki could be a good place.

btw, maybe something useful for you is that someone else wrote a blog about JuNest. Step 3 shows how to configure GPU drivers: https://medium.com/@ayaka_45434/installing-packages-on-linux-without-sudo-privilege-using-junest-5fe7523c9d86

I was aware of this solution (but not of the blog); I read about it in one of the issues. I was actually looking for a more portable solution, like the approach Distrobox takes, but without having to depend on Podman/Docker.

I've actually found that by installing libselinux in JuNest and exporting the local libraries to LD_LIBRARY_PATH, JuNest recognizes the existence of the libraries outside of the container, so I get fewer error messages related to (for example) GPU identification.

What is missing, however, is allowing the application to exploit this hardware acceleration.

NOTE: libselinux is essential in this process; if we export LD_LIBRARY_PATH without it, JuNest will not detect its internal libraries (bubblewrap errors, which for JuNest then appears not to exist) and the app will not launch.

I don't know if I should run " --bind " on a specific directory in the host or just "export" some environment variable. I just know that we are close to solving this problem.

Please see ivan-hc/ArchImage#20

Hi all, I recently solved the issue in my repo by merging JuNest with some files built by another Arch Linux portable container named Conty.

When you start any conty.sh instance, if you are an Nvidia user, it will locally download the official Nvidia driver and compile it into $HOME/.local/share/Conty.

Well, JuNest is more flexible, since to get hardware acceleration it is enough to export these variables:

export DATADIR="${XDG_DATA_HOME:-$HOME/.local/share}"
export PATH="${PATH}:${DATADIR}/Conty/overlayfs_shared/up/usr/bin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${DATADIR}/Conty/overlayfs_shared/up/usr/lib"
export XDG_DATA_DIRS="${XDG_DATA_DIRS}:${DATADIR}/Conty/overlayfs_shared/up/usr/share"

This only works if that Conty directory exists (a minimal guard could look like the sketch below).
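
A minimal sketch of such a guard, using the same Conty location as above:

# only extend the environment if the Conty overlay directory is actually there
CONTY_UP="${XDG_DATA_HOME:-$HOME/.local/share}/Conty/overlayfs_shared/up"
if [ -d "$CONTY_UP/usr/lib" ]; then
	export PATH="${PATH}:${CONTY_UP}/usr/bin"
	export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${CONTY_UP}/usr/lib"
	export XDG_DATA_DIRS="${XDG_DATA_DIRS}:${CONTY_UP}/usr/share"
fi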

In my AppRuns (the internal scripts needed to run the AppImages) I added this

[ -z "$NVIDIA_ON" ] && NVIDIA_ON=1
if [ "$NVIDIA_ON" = 1 ]; then
  DATADIR="${XDG_DATA_HOME:-$HOME/.local/share}"
  CONTY_DIR="${DATADIR}/Conty/overlayfs_shared"
  CACHEDIR="${XDG_CACHE_HOME:-$HOME/.cache}"
  [ -f /sys/module/nvidia/version ] && nvidia_driver_version="$(cat /sys/module/nvidia/version)"
  [ -f "${CONTY_DIR}"/nvidia/current-nvidia-version ] && nvidia_driver_conty="$(cat "${CONTY_DIR}"/nvidia/current-nvidia-version)"
  if [ "${nvidia_driver_version}" != "${nvidia_driver_conty}" ]; then
     if command -v curl >/dev/null 2>&1; then
        if ! curl --output /dev/null --silent --head --fail https://github.com 1>/dev/null; then
          notify-send "You are offline, cannot use Nvidia drivers"
        else
          notify-send "Configuring Nvidia drivers for this AppImage..."
          mkdir -p "${CACHEDIR}" && cd "${CACHEDIR}" || exit 1
          curl -Ls "https://raw.githubusercontent.com/ivan-hc/ArchImage/main/nvidia-junest.sh" > nvidia-junest.sh
          chmod a+x ./nvidia-junest.sh && ./nvidia-junest.sh
        fi
     else
        notify-send "Missing \"curl\" command, cannot use Nvidia drivers"
        echo "You need \"curl\" to download this script"
     fi
  fi
  [ -d "${CONTY_DIR}"/up/usr/bin ] && export PATH="${PATH}":"${CONTY_DIR}"/up/usr/bin:"${PATH}"
  [ -d "${CONTY_DIR}"/up/usr/lib ] && export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}":"${CONTY_DIR}"/up/usr/lib:"${LD_LIBRARY_PATH}"
  [ -d "${CONTY_DIR}"/up/usr/share ] && export XDG_DATA_DIRS="${XDG_DATA_DIRS}":"${CONTY_DIR}"/up/usr/share:"${XDG_DATA_DIRS}"
fi

The snippet above calls a script, https://github.com/ivan-hc/ArchImage/blob/main/nvidia-junest.sh, that downloads a minimal conty.sh file, runs it, and lets it compile the Nvidia driver. This is the content of the script:

#!/bin/sh

conty_minimal="https://github.com/ivan-hc/Conty/releases/download/continuous-SAMPLE/conty.sh"

if command -v curl >/dev/null 2>&1; then
	echo "Downloading Conty"
	curl -#Lo conty.sh "${conty_minimal}"
else
	echo "You need \"curl\" to download this script"; exit 1
fi
# running a trivial command ("bwrap --version") makes conty.sh set itself up, which includes preparing the Nvidia driver;
# "grep -v bubblewrap" only hides the bwrap version line from the output
[ -f ./conty.sh ] && chmod a+x ./conty.sh && ./conty.sh bwrap --version | grep -v bubblewrap
rm -f ./conty.sh

and this is how hardware acceleration works in an ArchImage based on JuNest with this kind of support:

[Video: simplescreenrecorder-2024-12-29_01.15.05.mkv.mp4]

full hardware acceleration, ladies and gentlemen!

This is the structure of a Conty directory

[Screenshot: Istantanea_2024-12-29_03-37-20]

but what we need is just in those /bin, /lib and /share directories

[Screenshots: Istantanea_2024-12-29_03-40-49, Istantanea_2024-12-29_03-40-33]

It's enough to export them via the correct variables, the exact same lines I showed at the start of this comment.


As for compiling the drivers locally: I already tried to do this manually, without Conty, but while I was able to download and extract them, I was unable to compile them. Conty uses an Arch Linux bootstrap, with a built-in bubblewrap and the related mountpoints, to run and compile the drivers for its filesystem structure. Conty and JuNest are both Arch Linux, so the filesystem structure is the same. This is why it works.

To see the function that handles Nvidia drivers in Conty, see https://github.com/Kron4ek/Conty/blob/master/conty-start.sh#L392

It is not difficult to understand, but the reason it does not work as a separate process is that it needs an Arch Linux bootstrap. A minimal one. The same conty-start.sh script already has the mountpoints and a bwrap function that calls the function handling the Nvidia drivers.

There is no faster way for now... or at least, we would need precompiled drivers hosted somewhere if we wanted to increase the download speed.
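
For reference, the manual download-and-extract attempt I mentioned looked roughly like this (a sketch, assuming NVIDIA's usual download URL layout; compiling the kernel side is the part that still needs an Arch bootstrap like Conty's):

# fetch the official installer matching the kernel module loaded on the host
ver="$(cat /sys/module/nvidia/version)"
curl -#LO "https://us.download.nvidia.com/XFree86/Linux-x86_64/${ver}/NVIDIA-Linux-x86_64-${ver}.run"
# unpack the userspace libraries without installing or compiling anything
sh "NVIDIA-Linux-x86_64-${ver}.run" --extract-only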

Btw, if waiting one minute is not too much, everything is ready for anyone who wants to use hardware acceleration in JuNest.


I know this is a workaround, but I hope this workaround is a start for something better.

Best regards.

@fsquillace now this is the code I'm using to import hardware acceleration

[ -z "$NVIDIA_ON" ] && NVIDIA_ON=1
if [ "$NVIDIA_ON" = 1 ]; then
   DATADIR="${XDG_DATA_HOME:-$HOME/.local/share}"
   CONTY_DIR="${DATADIR}/Conty/overlayfs_shared"
   [ -f /sys/module/nvidia/version ] && nvidia_driver_version="$(cat /sys/module/nvidia/version)"
   if [ -n "$nvidia_driver_version" ]; then
      [ ! -d "${CONTY_DIR}"/up/usr/share/glvnd ] && ln -s /usr/share/glvnd "${CONTY_DIR}"/up/usr/share/ 2>/dev/null
      [ ! -d "${CONTY_DIR}"/up/usr/share/nvidia ] && ln -s /usr/share/nvidia "${CONTY_DIR}"/up/usr/share/ 2>/dev/null
      ln -s /usr/share/nvidia "${CONTY_DIR}"/up/usr/share/ 2>/dev/null
      mkdir -p "${CONTY_DIR}"/up/usr/lib
      if [ ! -f "${CONTY_DIR}"/nvidia/current-nvidia-version ]; then
         mkdir -p "${CONTY_DIR}"/nvidia
         echo "${nvidia_driver_version}" > "${CONTY_DIR}"/nvidia/current-nvidia-version
      fi
      [ -f "${CONTY_DIR}"/nvidia/current-nvidia-version ] && nvidia_driver_conty=$(cat "${CONTY_DIR}"/nvidia/current-nvidia-version)
      if [ "${nvidia_driver_version}" != "${nvidia_driver_conty}" ]; then
      	rm -f "${CONTY_DIR}"/up/usr/lib/*
      	echo "${nvidia_driver_version}" > "${CONTY_DIR}"/nvidia/current-nvidia-version
      fi
      nvidialibs="libcuda libEGL_nvidia libGLX_nvidia libnvidia libOpenCL libvdpau_nvidia"
      for n in $nvidialibs; do
         nvidia_libs="$nvidia_libs $(find /usr/lib -name "$n*" -print0 | xargs -0)"
      done
      for n in $nvidia_libs; do
         libname=$(echo "$n" | sed 's:.*/::')
         [ ! -f "${CONTY_DIR}"/up/usr/lib/"$libname" ] && cp "$n" "${CONTY_DIR}"/up/usr/lib/
      done
      [ -d "${CONTY_DIR}"/up/usr/lib ] && export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}":"${CONTY_DIR}"/up/usr/lib:"${LD_LIBRARY_PATH}"
      [ -d "${CONTY_DIR}"/up/usr/share ] && export XDG_DATA_DIRS="${XDG_DATA_DIRS}":"${CONTY_DIR}"/up/usr/share:"${XDG_DATA_DIRS}"
   fi
fi

What it does

  • it creates a Conty directory (you may use any other location)
  • detects if the system has Nvidia drivers
  • copies the existing Nvidia drivers to that directory
  • symlinks the Nvidia-related /usr/share directories
  • exports the Nvidia libraries and data directories
  • updates the Nvidia drivers every time you run an application
  • the update/configuration may take less than 1 second

I'm running a Bottles AppImage on my PC using JuNest like this; 64-bit games run well... 32-bit ones not yet (they may normally rely on Nouveau).
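
For anyone testing: since the whole block above is guarded by NVIDIA_ON, the integration can be skipped for a single run (the AppImage file name here is just a placeholder):

# disable the Nvidia integration for one launch only
NVIDIA_ON=0 ./Bottles-x86_64.AppImage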

I suggest studying my code carefully.