niess/python-appimage

Package dependencies of C based libraries

Closed this issue · 8 comments

Python has quite a few C based libraries available which include .so files. It seems that currently, the libraries linked to those files are not being included in AppImages generated by python-appimage. So in a lot of cases the user needs to have them installed for the application to run.

I think it would be possible to implement packaging all required libraries using ldd on .so files and then copying the libraries found in the result. The linuxdeploy software does something like this as well. Would that work or would it require more work to get that working?
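A minimal sketch of the proposed approach (this is not python-appimage code, and the exclusion list shown is an abbreviated, hypothetical stand-in for the much longer AppImage excludelist): parse `ldd` output for a shared object and keep only the libraries worth bundling, similar to what linuxdeploy does.

```python
import re

# Hypothetical, abbreviated exclusion list; the real AppImage excludelist
# is much longer.
EXCLUDED = {"libc.so.6", "libm.so.6", "ld-linux-x86-64.so.2", "linux-vdso.so.1"}

def bundleable_libs(ldd_output: str) -> dict:
    """Map library soname -> resolved path, for dependencies worth bundling."""
    libs = {}
    for line in ldd_output.splitlines():
        match = re.match(r"\s*(\S+)\s*=>\s*(\S+)", line)
        if not match:
            continue  # skips the vdso and dynamic-loader lines (no "=>")
        soname, path = match.groups()
        # Keep only resolved, non-excluded libraries ("=> not found" yields "not").
        if soname not in EXCLUDED and path.startswith("/"):
            libs[soname] = path
    return libs

sample = """\
\tlinux-vdso.so.1 (0x00007ffe89fb3000)
\tlibcairo.so.2 => /lib/x86_64-linux-gnu/libcairo.so.2 (0x00007fd23cbd9000)
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd23c9a5000)
"""
print(bundleable_libs(sample))
```

The copying step would then just iterate over the returned paths into `$APPDIR/usr/lib`.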

niess commented

Hello @sharkwouter,

well, it depends on what .so files you are talking about. Do you have some example(s) in mind?

Python runtime dependencies, as well as those of vanilla packages, are bundled in the AppImage, under $APPDIR/usr/lib. You can check this by extracting the Python AppImages (using --appimage-extract). Some core libraries are however excluded, following the AppImage excludelist policy.

Concerning additional Python packages, installed from PyPI with pip, only binary wheels are supported (or pure Python packages). The current policy of python-appimage is that proper wheels should already contain their binary deps with the correct manylinux binary compatibility.

More complex cases, like source distributions of C-extension packages, are briefly discussed in the documentation. Python-appimage can still help in those cases, e.g. by providing a relocatable Python runtime, but it is not within its scope to bundle such packages / distributions. Note that building a binary wheel or an AppImage of a C-extension is very similar in practice. Thus, in such cases, it would make more sense to me to start by building a binary wheel, and to put it on PyPI. Then, packaging with python-appimage should be straightforward, using the binary wheel.

But, there are intermediary cases where this is arguable, like C-library bindings for Python. One might want to bundle both the Python wrapper and the C-library in an AppImage, while the C-library is not shipped with the Python wrapper (it is expected to be provided by the system).

Thanks for the reply! An example where this could be a problem is if I ship a Python application which uses PyGObject (python-gi) for GTK. In this example the gi cairo module is packaged, and so is the cairo library, but that's about it. None of the other GTK-related libraries, nor the libraries GTK relies on, are included.

I'll show it using ldd on the gi cairo library I extracted from the AppImage I made for testing purposes:

wouter:~/Sources/python-appimage-test/squashfs-root/opt/python3.10/lib/python3.10/site-packages/gi$ ldd _gi_cairo.cpython-310-x86_64-linux-gnu.so 
	linux-vdso.so.1 (0x00007ffe89fb3000)
	libgobject-2.0.so.0 => /lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007fd23cd01000)
	libcairo.so.2 => /lib/x86_64-linux-gnu/libcairo.so.2 (0x00007fd23cbd9000)
	libcairo-gobject.so.2 => /lib/x86_64-linux-gnu/libcairo-gobject.so.2 (0x00007fd23cbcd000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd23c9a5000)
	libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007fd23c86b000)
	libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007fd23c85c000)
	libpixman-1.so.0 => /lib/x86_64-linux-gnu/libpixman-1.so.0 (0x00007fd23c7b1000)
	libfontconfig.so.1 => /lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007fd23c767000)
	libfreetype.so.6 => /lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007fd23c69f000)
	libpng16.so.16 => /lib/x86_64-linux-gnu/libpng16.so.16 (0x00007fd23c664000)
	libxcb-shm.so.0 => /lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007fd23c65f000)
	libxcb.so.1 => /lib/x86_64-linux-gnu/libxcb.so.1 (0x00007fd23c633000)
	libxcb-render.so.0 => /lib/x86_64-linux-gnu/libxcb-render.so.0 (0x00007fd23c624000)
	libXrender.so.1 => /lib/x86_64-linux-gnu/libXrender.so.1 (0x00007fd23c617000)
	libX11.so.6 => /lib/x86_64-linux-gnu/libX11.so.6 (0x00007fd23c4d7000)
	libXext.so.6 => /lib/x86_64-linux-gnu/libXext.so.6 (0x00007fd23c4c2000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd23c4a6000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fd23c3bd000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fd23cd7e000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fd23c347000)
	libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007fd23c316000)
	libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007fd23c30d000)
	libbrotlidec.so.1 => /lib/x86_64-linux-gnu/libbrotlidec.so.1 (0x00007fd23c2ff000)
	libXau.so.6 => /lib/x86_64-linux-gnu/libXau.so.6 (0x00007fd23c2f7000)
	libXdmcp.so.6 => /lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007fd23c2ef000)
	libbrotlicommon.so.1 => /lib/x86_64-linux-gnu/libbrotlicommon.so.1 (0x00007fd23c2cc000)
	libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007fd23c2b4000)
	libmd.so.0 => /lib/x86_64-linux-gnu/libmd.so.0 (0x00007fd23c2a7000)

None of these libraries are included in the AppImage. I think modifying the existing code to include these libraries is possible, but it would require some tinkering, especially since there are some libraries you would not want to include. Linuxdeploy has a blacklist of libraries it will not package.

niess commented

OK. I see. The same happens with PyQt5. The binary wheel does not contain all deps. Some are expected to be installed using the system package manager. But, when building an AppImage, this is arguable indeed.

Currently, if one wants to bundle all deps in such cases, then one has to edit the AppImage generated by python-appimage. I don't feel like automating this for an arbitrary use case, since I am afraid that handling all corner cases might be complex. Also, my understanding is that this is already the scope of linuxdeploy. But, for some popular Python packages (PyQt5, etc.) it could make sense to have python-appimage patch deps automatically, following their pip installation.

I don't have time to implement this right now. But, I am open to a PR if you feel like doing so. I do not know where linuxdeploy fetches deps from. But, ideally I would fetch the missing libs from the corresponding Manylinux Docker image, for consistent binary compatibility.

That's good to hear. I'll take a look at it. It will probably take quite a while for me to get to it, but I would be interested in implementing this later in the year.

niess commented

Great :)

Maybe linuxdeploy could be used under the hood for that (assuming that blacklisted deps can be overridden, if this is problematic).

Another possibility would be to patch "manually". Then, one could fetch missing deps from the Manylinux Docker image and patch their RPATH using patchelf. This is essentially what python-appimage already does. Thus, some of the corresponding tools are already there (e.g. under python_appimage.utils).
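The "manual" patching idea above can be sketched as follows (the helper name and the directory layout are illustrative, not python-appimage's actual API): after copying a missing library into the AppDir, one computes the `$ORIGIN`-relative RPATH entry that `patchelf --set-rpath` would be given on the consuming extension module.

```python
import os

def origin_rpath(binary_dir: str, lib_dir: str) -> str:
    """RPATH entry pointing from an ELF binary's directory to the bundled libs."""
    rel = os.path.relpath(lib_dir, binary_dir)
    return "$ORIGIN" if rel == "." else f"$ORIGIN/{rel}"

# E.g. an extension module under site-packages/gi, with bundled libs in usr/lib:
rpath = origin_rpath(
    "squashfs-root/opt/python3.10/lib/python3.10/site-packages/gi",
    "squashfs-root/usr/lib",
)
print(rpath)  # the value one would pass to `patchelf --set-rpath`
```

Using `$ORIGIN` rather than an absolute path is what keeps the result relocatable, since the AppImage can be mounted anywhere.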

I just wanted to chime in that this feature would be useful for me too.

My use case for this has been to quickly test builds against newer versions of Python on old OSes that I'm stuck working with. I had lots of issues with OpenSSL, and those all go away with the AppImage. However, now I have some of my own packages that bind to C++ and can't use the AppImage. Even externally, I have found that when using Python 3.10+, pip triggers compilation instead of pulling wheels for some packages that are important to me, like psutil, and those fail to build with this AppImage. I believe it is because this feature is not available.

I'm poking around the code looking for a way to add a flag to include the needed libraries in hopes to submit a PR, but I'm not totally sure what I'm doing yet. Any additional details about which source files would need to be changed to copy in the libraries would be much appreciated.

Actually, after reading the discussion above more carefully, it seems like I just need to understand the steps described here more concretely: https://python-appimage.readthedocs.io/en/latest/apps/#advanced-packaging. I am totally fine with some "manual" post steps, but from the doc, it is not totally clear to me what I need to do. Will report back with my steps if I can get a working example and maybe could enhance the doc.

Actually, I started over with my builds, and my builds are compiling just fine without any modifications with the latest releases, so I think at least for my use case, this is no longer an issue. I can't say I understand what changed, and I was unfortunately not that rigorous in my testing. I think it's possible I wasn't extracting to AppDir before, and maybe that is all I needed to do.

I guess it's still worth the data point that I do have a use case where I am in a position where I would like to use the appimage in a manner that goes against the pre-built wheel methodology outlined above.

niess commented

Hello @microchipster,

if psutil does not provide binary wheels compatible with the Python runtime packaged in the AppImage, then pip falls back to compiling the package from source. As a result, even if the compilation succeeds, the resulting AppImage will likely not be portable. E.g. there might be missing binary deps (shared libraries), available on the host used for the compilation, but not packaged in the AppImage. Furthermore, the binary compatibility is then limited by the host (through extra binary deps), not by the Python runtime packaged in the AppImage. Binary "compatibility" usually comes down to max(version(GLIBC)), where the max runs over all ELF binaries in the AppImage (Python runtime, Python C-extensions, and all dependent shared libraries).
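The max(version(GLIBC)) rule can be illustrated with a small sketch (the binary names and symbol lists below are made up for the example): given the versioned GLIBC symbols each ELF binary requires, as reported e.g. by `objdump -T`, the AppImage's overall requirement is the highest version referenced anywhere.

```python
def max_glibc(required: dict) -> tuple:
    """Return the highest GLIBC_x.y version among all binaries."""
    versions = set()
    for symbols in required.values():
        for sym in symbols:
            # Versioned symbols look like "GLIBC_2.17".
            versions.add(tuple(int(p) for p in sym.split("_")[1].split(".")))
    return max(versions)

appimage = {
    "python3.10": ["GLIBC_2.17", "GLIBC_2.14"],
    "_gi_cairo.cpython-310-x86_64-linux-gnu.so": ["GLIBC_2.28"],
    "libcairo.so.2": ["GLIBC_2.14"],
}
print(max_glibc(appimage))  # -> (2, 28): the host needs glibc >= 2.28
```

This is why a single extension compiled on a recent host can silently break portability: its GLIBC requirement dominates the max, regardless of how old the bundled Python runtime's own requirement is.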

One way to check binary availability, from PyPI, is to click on the download link (top left, below the version history) and to look for manylinux wheels. According to this, I see that there would have been a manylinux2010_x86_64 compatible wheel for psutil since October 18. Maybe you used a manylinux1 AppImage for your first (failed) attempt? Or maybe something else changed in the meantime?