ffibuilder.include() seemingly not working
James-E-A opened this issue · 3 comments
The documentation says (emphasis added):
> For out-of-line modules, the `ffibuilder.include(other_ffibuilder)` line should occur in the build script, and the `other_ffibuilder` argument should be another FFI instance that comes from another build script. When the two build scripts are turned into generated files, say `_ffi.so` and `_other_ffi.so`, then importing `_ffi.so` will internally cause `_other_ffi.so` to be imported. At that point, the real declarations from `_other_ffi.so` are combined with the real declarations from `_ffi.so`.
>
> The usage of `ffi.include()` is the cdef-level equivalent of a `#include` in C, where a part of the program might include types and functions defined in another part for its own usage. You can see on the `ffi` object (and associated `lib` objects on the including side) the types and constants declared on the included side. In API mode, you can also see the functions and global variables directly. In ABI mode, these must be accessed via the original `other_lib` object returned by the `dlopen()` method on `other_ffi`.
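The pattern the documentation describes can be sketched as two out-of-line build scripts (module names, file names, and function signatures below are made up for illustration; `compile()` is never called here, so no compiler is needed to run the sketch):

```python
# build_other.py -- build script for the "included" module (hypothetical)
from cffi import FFI

ffi_other = FFI()
ffi_other.cdef("int shared_add(int, int);")
ffi_other.set_source("_other_ffi", """
    int shared_add(int a, int b) { return a + b; }
""")

# build_main.py -- build script for the "including" module (hypothetical)
ffi_main = FFI()
ffi_main.include(ffi_other)            # cdef-level include, per the docs
ffi_main.cdef("int use_shared(int);")
ffi_main.set_source("_ffi", """
    int use_shared(int x) { return x * 2; }
""")

# Per the documentation, running ffi_other.compile() and ffi_main.compile()
# would produce _other_ffi.so and _ffi.so, and importing _ffi would
# internally import _other_ffi.
```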
However, this doesn't seem to work. (Attachment: minimum_repro_example.zip)
In reply to me on Stack Overflow, a pretty highly rated user alleges (emphasis added):
> the [`ffi.include(ffi_other)`] syntax is not enough to cause the C-level [`_ffi.cpython-XXX.so`] to be compiled specially. If a symbol from [`_ffi_other.so`] is needed at the C level for [`_ffi.so`] to load, then it won't work. The CFFI documentation is kinda misleading. It means rather that, say, you can take things like struct type definitions that appear in [`_ffi_other`] and use them from [`_ffi`].
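What the quoted answer says `include()` *does* give you, cdef-level type sharing, can be checked without compiling anything (the struct name below is made up for illustration):

```python
from cffi import FFI

# The "included" side declares a type.
ffi_other = FFI()
ffi_other.cdef("typedef struct { int x; int y; } point_t;")

# The "including" side can then see that type at the cdef level.
ffi_main = FFI()
ffi_main.include(ffi_other)

p = ffi_main.new("point_t *")   # the typedef came from the included ffi
p.x = 3
```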
I'm trying to dynamically link the dependent library directly, if at all possible, rather than stringing it up with a bunch of janky `extern "Python+C"` wrappers hotwired in bulk with `def_extern` at import time.
Is dynamic startup-time linking of shared library dependencies something CFFI supports?
Just write a single `ffi` that exposes all the functions that you need from both libraries, and write a single `ffi.set_source(..., sources=[..all_c_files_from_both_libraries..])`.
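A sketch of that suggestion, with made-up header and source file names, would be a single build script covering both libraries:

```python
from cffi import FFI

ffibuilder = FFI()

# Declare everything needed from both libraries in one cdef
# (hypothetical function names).
ffibuilder.cdef("""
    int func_from_lib1(int);
    int func_from_lib2(int);
""")

# Compile all the C files from both libraries into one extension module.
ffibuilder.set_source(
    "_combined_ffi",
    '#include "lib1.h"\n#include "lib2.h"',
    sources=["lib1.c", "lib2.c"],
)
```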
In this case, there are many different libraries, some of which are under different licenses (some are not "OSI-approved"), so I don't want to load them all if it's not needed.
You're right that the "Python+C" solution was awful, so, for now, I'm just including each of the shared C library files into the dependent module, "statically linked":
```python
ffibuilder1.set_source(..., sources=['program1.c', 'sharedX.c'])
ffibuilder2.set_source(..., sources=['program2.c', 'sharedX.c', 'sharedY.c'])
ffibuilder3.set_source(..., sources=['program3.c', 'sharedY.c', 'sharedZ.c'])
```
In other words, the question starts with "how would you do it in C, if Python was not involved?" In that case you'd likely have 34 DLLs, generated using specific MSVC parameters or Visual Studio configurations to set up the export/import symbols correctly. Now with CFFI, you can either do the exact same thing and then export each of the 34 DLLs to Python, or use the shortcut of doing it only for the [shared dependency] but not for the 33 others. You can have `set_source()` compile custom C code, but this produces a `.pyd` and not a `.dll`, with no real control over exported symbols, so it doesn't work for [the shared dependency].
So you're saying that dynamically including shared dependencies at import-time is something CFFI does not currently provide, due to the difficulties of getting the compiler to emit that kind of library?
Correct. This part of CFFI is done by just using setuptools. I haven't tested it, but maybe look at https://setuptools.pypa.io/en/latest/userguide/ext_modules.html#extension-api-reference. It's possible that you can get the result you want on at least some platforms by using these setuptools options. They can be given as keyword arguments to CFFI's `set_source()` method, which just passes them on to setuptools. Maybe `libraries` and `export_symbols`?
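A hedged sketch of what passing those options through `set_source()` might look like (module, file, and symbol names are illustrative, and whether `export_symbols` produces the desired DLL behavior is platform-dependent and untested, as noted above):

```python
from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("int shared_entry(int);")

# Extra keyword arguments to set_source() are forwarded to the
# setuptools/distutils Extension constructor.
ffibuilder.set_source(
    "_shared_dep",
    '#include "shared_dep.h"',
    sources=["shared_dep.c"],
    libraries=["m"],                  # extra libraries to link against
    export_symbols=["shared_entry"],  # ask the linker to export this symbol
)
```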