Avoid full path enumeration on import of setuptools or pkg_resources?
Opened this issue · 99 comments
Originally reported by: konstantint (Bitbucket: konstantint, GitHub: konstantint)
At the moment on my machine, it takes about 1.28 seconds to do a bare `import pkg_resources`, 1.47 seconds to do a bare `import setuptools`, and between 1.25 and 1.36 seconds to do a bare `from pkg_resources import load_entry_point`.
This obviously affects all of the Python scripts that are installed as console entry points, because each and every one of them starts with a line like that. In code which does not rely on entry points, this may be a problem whenever I want to use `resource_filename` to consistently access static data.
I believe this problem is fairly common, yet I did not find any existing issue or discussion, hence I'm creating one, hoping I'm not unnecessarily repeating what has been said elsewhere.
I am using Anaconda Python, which comes with a fairly large set of packages, alongside several of my own packages, which I commonly add to my path via `setup.py develop`; however, I do not believe this setup is anything out of the ordinary. There are 37 items on my `sys.path` at the moment. Profiling `import pkg_resources` shows that this leads to 76 calls to `WorkingSet.add_entry` (taking about a second), of which most of the time is spent in 466 calls to `Distribution.from_location`.
Obviously, the reason for the problem lies in the two `_call_aside` invocations at the end of `pkg_resources`, which lead to a full scan of the Python path at the moment the package is imported; the only way to alleviate it would be to somehow avoid or delay the need for this scan as much as possible.
I see two straightforward remedies:
a) Make the scanning lazy. After all, if all one needs is to find a particular package, the scan could stop as soon as the corresponding package is located. At the very least this would allow me to "fix" my ipython loading problem by moving it up in the path. This might break some import rules which do not respect the precedence of the path, that I'm not aware of.
b) Cache a precomputed index and update it lazily. Yes, this might require some ad-hoc rules for resolving inconsistencies, and this may lead to ugly conflicts with external tools that attempt to install multiple versions of a package, but this would basically avoid the current startup delay in 99.99% of cases and solve so many of my problems that I'd be willing to pay the price.
Although both options might seem somewhat controversial, the problem itself seems to be serious enough to deserve at least some fix eventually (for example, I've recently discovered I'm reluctant to start `ipython` for short calculations because of its startup delay, which I've now tracked back to this same issue).
I'm contemplating making a separate utility, e.g. `fast_pkg_resources`, which would implement strategy (b) by simply caching calls to `pkg_resources` in an external file, yet I thought of raising the issue here to figure out whether someone has already addressed it, whether there are plans to do something about it in the setuptools core codebase, or perhaps I'm missing something obvious.
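For concreteness, a minimal sketch of what such a caching wrapper could look like (the cache location and the stale-path invalidation scheme are assumptions for illustration, not an existing API):

```python
import json
import os

_CACHE = os.path.expanduser('~/.cache/fast_pkg_resources.json')  # hypothetical location

def resource_filename(package, resource):
    """Like pkg_resources.resource_filename, but memoized across runs."""
    key = package + ':' + resource
    try:
        with open(_CACHE) as f:
            cache = json.load(f)
    except (OSError, ValueError):
        cache = {}
    path = cache.get(key)
    if path is None or not os.path.exists(path):
        # Cache miss or stale entry: pay for the slow pkg_resources
        # import and full path scan just this once.
        import pkg_resources
        path = pkg_resources.resource_filename(package, resource)
        cache[key] = path
        with open(_CACHE, 'w') as f:
            json.dump(cache, f)
    return path
```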
@konstantint Here's another instance, with some more comparisons: https://www.reddit.com/r/Python/comments/4auozx/setuptools_deployment_being_very_slow_compared_to/
Is the performance better with the same packages installed using pip? What about those packages installed with `pip install --egg`?
As long as console entry points require the validation of all packages in the chain, I expect startup to be somewhat slow.
I worry that remedy (a) might only have modest benefits while imposing new, possibly conflicting instructions to the user on how to implement the remedy.
Remedy (b) promises a nicer use-case, but as you point out, caching is fraught with challenges.
It sounds like you have a decent grasp of the motivations behind the current implementation, so you're at a good place to draft an implementation.
> Is the performance better with the same packages installed using pip? What about those packages installed with `pip install --egg`?
Even when installing via `pip` or with `--egg`, it's still over 300ms for my use case. As an aside, the reason we want to decrease this startup time is so that we can use the tool in interactive tab-completion.
Might be of interest: https://pypi.python.org/pypi/entrypoints (https://github.com/takluyver/entrypoints), but agreed that the load time is impacting a few other projects, like everything that relies on prompt_toolkit.
And everything that relies on pygments. I have some profiling available at https://github.com/xonsh/import-profiling, where I have a nasty `sys.modules['pkg_resources'] = None` hack to prevent its import.
Importing pygments:
So just by importing `pkg_resources`, the slowdown is ~100x. In wall clock time, I have consistently measured the `pkg_resources` overhead to be at least 150-200 ms. This makes `pkg_resources` unusable in command line utilities that require fast startup times.
In xonsh, we have resorted to the above hacks to prevent our dependencies (pygments, prompt_toolkit) from accidentally importing it.
I'm seeing a consistent ~150ms wall clock time as well. I'm writing a command-line utility with autocompletion, so it's a serious challenge. It's not clear how to fix this without giving up all of setuptools' advantages.
Yesterday, I released the `lazyasd` package (`pip install lazyasd`), which has the ability to perform imports on a background thread. This was written specifically to mitigate the long `pkg_resources` import times.
Background thread docs and example here https://github.com/xonsh/lazyasd#background-imports
Feel free to use or copy the lazyasd module into your projects.
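For illustration, here is a minimal sketch of the background-import idea itself (deliberately generic; this is not lazyasd's actual API, for which see its README above):

```python
import importlib
import sys
import threading

def import_in_background(name):
    """Start importing `name` on a worker thread; join() the returned
    thread before the module is first used."""
    thread = threading.Thread(
        target=importlib.import_module, args=(name,), daemon=True)
    thread.start()
    return thread

pending = import_in_background('pkg_resources')
# ... do latency-sensitive startup work here ...
pending.join()
pkg_resources = sys.modules['pkg_resources']
```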
I wrote a tiny module called `fastentrypoints` that monkey-patches the mechanism behind `entry_points` to generate scripts that don't import `pkg_resources`.
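The wrapper it generates is roughly of this shape (a sketch, with `mypkg.cli:main` as a placeholder entry point; the real template lives in the fastentrypoints source):

```python
#!/path/to/python
# Imports the entry point's module directly instead of resolving it
# through pkg_resources.load_entry_point at every startup.
import sys

from mypkg.cli import main

if __name__ == '__main__':
    sys.exit(main())
```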
@ninjaaron Thanks for fastentrypoints. I managed to fix the distribution issue by adding it to `MANIFEST.in`:

```
include fastentrypoints.py
```
@Fak3 Good idea! I took that crazy bit about downloading and exec-ing the code out of the docs for fastentrypoints and mentioned using `MANIFEST.in` instead. The `fastep` command also now appends this line to `MANIFEST.in`. I have to admit, I loved the approach I originally came up with (because it's so evil), but using `MANIFEST.in` is waaaaay saner.
I certainly don't want to pile on, but I did want to chime in and say that this is an enormous problem for big Python projects right now. It's severely impacted the performance of SaltStack's CLI toolset, which takes ~2.0s to load, of which 1.9s is spent purely in `pkg_resources`. Unfortunately, we can't just rip out any imports of `pkg_resources` because so many of the libs we use end up importing it anyhow. (This is generally the `requests` package, but could be others.)
We're exploring ways to mitigate this right now but anything we can do to help out here we'd gladly contribute to. It's a big issue for us.
(We're looking at `fastentrypoints` by @ninjaaron today.) I'll report back with any results. :]
@cachedout I don't think fastentrypoints can solve the problem if you are importing pkg_resources anyway. It only takes it out of the automatically-generated scripts. However, if your code, or another library, is importing it anyway... :(
I myself have actually moved away from using `requests` for trivial scripts just to avoid the "tax" of importing it. I'm sure this isn't a solution for you, but you might try (or several of us might try) working with the developers of requests to move away from `pkg_resources`.
Also, I know the developers of xonsh (@scopatz and co., and I think they are not the only ones) have created mechanisms for lazily importing modules only when they are actually required. This kind of lazy import strategy might be appropriate for your project.
@ninjaaron Thanks so much for the feedback! Yes, after looking at `fastentrypoints`, it's not the right solution for us, unfortunately.
Yes, we're in the process of deprecating `requests` directly, but we have plugins that use it, so it would be very challenging to remove it entirely. I'll head over to the `requests` project and see if I can get an issue filed.
We do have a lazy plugin system that we really like but unfortunately it doesn't quite get us out of this problem because of the way it's written. There might be some room for improvement though, certainly. I'll be investigating.
One very ugly workaround that we did find (though likely won't use) is to simply fool the importer into skipping over `pkg_resources`. Somewhat surprisingly, this works at the top of a CLI script:
```python
# This needs to be the first thing you do. Obviously, if `pkg_resources`
# is already imported, you are too late!
import sys
sys.modules['pkg_resources'] = None

# <do work>

del sys.modules['pkg_resources']
```
I'm not necessarily advocating this in all cases but I'll leave it here as a possible workaround for others.
That said, I would still really like to hear from the `setuptools` folks on this if possible. Having a simple module import stat the disk almost 18,000 times, as it does in my test case, all but makes many Python projects unusable. Would they accept a PR to move away from this behavior by default, or at the least gate it behind an environment variable?
I don't currently have it paged into my mind why this behavior is the way it is. I can't recall if anyone has analyzed the behavior to see if there is something that can be done. Would someone search to see if there have been solutions proposed outside this ticket and link those here? It sounds like fast entry points suggests a partial solution but not a general one. If one were to analyze the code, does that lead to other ideas? I'm happy to review a PR, especially one that's not too complicated.
What about moving the implicit behavior to another module, like `pkg_resources.preload`? Then projects and scripts that want the implicit behavior could try/except the import of that module, and those that don't can simply import `pkg_resources`. It would be a backward-incompatible release, but if that clears the way for more efficient implementations, I'm for it.
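On the consumer side, that split might look like this (`pkg_resources.preload` is the hypothetical module proposed above, not something that exists today):

```python
try:
    # Hypothetical: eagerly builds the master working set, as importing
    # pkg_resources itself does today.
    import pkg_resources.preload  # noqa: F401
except ImportError:
    pass

import pkg_resources  # would no longer scan sys.path on import
```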
This is an issue for me as well. This might be naive, but is there a reason why the scripts written by `ScriptWriter` can't import the entry point directly (i.e. only use `pkg_resources.load_entry_point` at install time, not runtime)?
@untitaker Not a clear reason. Scripts generated with `wheel` do just that. `fastentrypoints` monkey-patches `ScriptWriter` for the same behavior, and it seems to work. Apparently someone thought this was needed when they wrote it, but clearly it doesn't affect the general use-case!
Is it possible that some other hook is also executed when load_entry_point is used? That would explain the indirection.
I guess it's possible, but I think the fact that wheels don't behave this way is a pretty good indication that it's unnecessary.
I have a suspicion it's a case of getting so involved in one's own API that it seems like the obvious way to do something, even when there is a much simpler solution. We've all been there...
I'm currently working on this and it's more complicated. Installing from eggs doesn't work with your patch.
Ah. Testcases using this example project fail:
The entry point uses a wrong delimiter between module and function (but something like that crashes at runtime anyway).
@jaraco @ninjaaron See #901
so fastentrypoints only partially solves this issue, and i hope this issue will not stay idle for another year here, because there are many consumers of `pkg_resources`. for example, it is suggested as the canonical way for a package to fetch its own version in setuptools_scm. others will use pkg_resources to load their own `data_files` as well.

and while it's one thing to have to eat this performance cost once for something, hitting it on a module import is really unacceptable. we should, at the very least, lazy-load those `_call_aside` functions and call them on the fly, with caching of course, as needed, in the functions that actually need them.

then we can make sense of this mess: optimize hot loops and everything. right now, it's hard to even make heads or tails of all of this because everything is mangled up in the package load. and what's with the `globals()` mangling that's going on in there? is that expected practice for tools that are basically part of the standard library? (i know, they're not, but considering that entry_points is basically the standard way of distributing Python programs, we should consider this standard.)
here's a quick profiler run done on Python 3 loading pkg_resources, including most function calls until we start getting into actual package loading (e.g. feedparser, a rather large package, is included):
```
$ python3 -m cProfile -s cumtime test-import
         396313 function calls (392538 primitive calls) in 0.335 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     76/1    0.004    0.000    0.336    0.336 {built-in method builtins.exec}
        1    0.000    0.000    0.336    0.336 test-import:1(<module>)
     88/1    0.000    0.000    0.336    0.336 <frozen importlib._bootstrap>:966(_find_and_load)
     88/1    0.000    0.000    0.336    0.336 <frozen importlib._bootstrap>:939(_find_and_load_unlocked)
     88/1    0.000    0.000    0.336    0.336 <frozen importlib._bootstrap>:659(_load_unlocked)
     64/1    0.000    0.000    0.336    0.336 <frozen importlib._bootstrap_external>:667(exec_module)
    108/1    0.000    0.000    0.335    0.335 <frozen importlib._bootstrap>:214(_call_with_frames_removed)
        1    0.000    0.000    0.335    0.335 __init__.py:16(<module>)
        2    0.000    0.000    0.217    0.108 __init__.py:3002(_call_aside)
        1    0.000    0.000    0.217    0.217 __init__.py:3019(_initialize_master_working_set)
       19    0.001    0.000    0.206    0.011 __init__.py:683(add_entry)
      458    0.004    0.000    0.199    0.000 __init__.py:1992(find_on_path)
       15    0.000    0.000    0.108    0.007 __init__.py:1966(_by_version_descending)
       15    0.009    0.001    0.108    0.007 {built-in method builtins.sorted}
        1    0.000    0.000    0.104    0.104 __init__.py:641(_build_master)
        1    0.000    0.000    0.104    0.104 __init__.py:628(__init__)
    29/15    0.000    0.000    0.085    0.006 {built-in method builtins.__import__}
      445    0.003    0.000    0.076    0.000 __init__.py:2418(from_location)
     1601    0.003    0.000    0.074    0.000 __init__.py:1981(_by_version)
     1601    0.003    0.000    0.066    0.000 __init__.py:1987(<listcomp>)
     4199    0.006    0.000    0.063    0.000 version.py:24(parse)
      401    0.001    0.000    0.050    0.000 __init__.py:2760(_reload_version)
      532    0.001    0.000    0.050    0.000 re.py:278(_compile)
       87    0.000    0.000    0.049    0.001 re.py:222(compile)
       83    0.001    0.000    0.048    0.001 sre_compile.py:531(compile)
      401    0.001    0.000    0.048    0.000 __init__.py:2390(_version_from_file)
     5040    0.017    0.000    0.043    0.000 version.py:198(__init__)
        1    0.000    0.000    0.042    0.042 requirements.py:4(<module>)
  300/296    0.005    0.000    0.037    0.000 {built-in method builtins.__build_class__}
     3749    0.003    0.000    0.036    0.000 version.py:74(__init__)
     1818    0.003    0.000    0.034    0.000 __init__.py:2563(_get_metadata)
     3749    0.012    0.000    0.033    0.000 version.py:131(_legacy_cmpkey)
      401    0.001    0.000    0.032    0.000 {built-in method builtins.next}
       83    0.000    0.000    0.031    0.000 sre_parse.py:819(parse)
   337/83    0.001    0.000    0.030    0.000 sre_parse.py:429(_parse_sub)
      841    0.002    0.000    0.030    0.000 __init__.py:1376(safe_version)
      9/8    0.000    0.000    0.030    0.004 <frozen importlib._bootstrap>:630(_load_backward_compatible)
   487/88    0.011    0.000    0.030    0.000 sre_parse.py:491(_parse)
        6    0.000    0.000    0.030    0.005 __init__.py:35(load_module)
        1    0.000    0.000    0.024    0.024 pyparsing.py:61(<module>)
   153/98    0.000    0.000    0.024    0.000 <frozen importlib._bootstrap>:996(_handle_fromlist)
        1    0.000    0.000    0.022    0.022 parser.py:5(<module>)
        1    0.000    0.000    0.022    0.022 feedparser.py:20(<module>)
```
`test-import` is simply `import pkg_resources` in a text file. here, it takes about 200ms more to start Python 3 with pkg_resources than without, a result consistent with others'. the 335ms total above is probably due to profiler overhead. most of the time (217ms) is taken by `_initialize_master_working_set()`:
(`pkg_resources/__init__.py`, line 3193 in `5da3a84`)
Half of that time (104ms) is taken by `WorkingSet._build_master()`:

(`pkg_resources/__init__.py`, line 3205 in `5da3a84`)
If I were to venture a guess as to the other half, it's the `map` call a little later:

(`pkg_resources/__init__.py`, line 3228 in `5da3a84`)
`find_on_path` also seems like a pretty hot loop, called 500 times and doing pretty inefficient stuff like:

```python
if os.path.isdir(path_item) and os.access(path_item, os.R_OK):
```
this is probably generating hundreds of spurious `stat()` syscalls. and that's just scratching the surface... there might be other nice optimization opportunities here... but really, there's a lot of work done here to list all packages. and i doubt we'll get below 10-20ms (a 10-fold improvement would be a nice first performance target) with all of those...
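for instance, here's a sketch of the sort of micro-optimization meant above: collapsing the two checks into a single `stat()` call (an illustration only, not a drop-in patch; unlike `os.access`, it inspects permission bits rather than ACLs):

```python
import os
import stat

def is_readable_dir(path_item):
    # one stat() syscall instead of two (isdir + access); note this only
    # checks the owner-read mode bit, not ACLs the way os.access() does
    try:
        st = os.stat(path_item)
    except OSError:
        return False
    return stat.S_ISDIR(st.st_mode) and bool(st.st_mode & stat.S_IRUSR)
```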
isn't there a simpler way to do most of what we need here? for example, I just want the version number of my package: I don't want all the version numbers of all packages, or even the version number of an arbitrary package foo. couldn't there be a cache of the package metadata built on install, so that it is available in a fast way? heck, the entrypoint script created by setup.py looks like this here:
```python
#!/usr/bin/python3
# EASY-INSTALL-ENTRY-SCRIPT: 'undertime==1.0.2.dev0+ng7058042.d20180223','console_scripts','undertime'
__requires__ = 'undertime==1.0.2.dev0+ng7058042.d20180223'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('undertime==1.0.2.dev0+ng7058042.d20180223', 'console_scripts', 'undertime')()
    )
```
my version number is right there on top! why do I need to go back to pkg_resources to load it again if it was used to find the package in the first place!
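to make the point concrete: the version is already recoverable from the `__requires__` string the script writer hardcoded, with no scan at all (a sketch of the idea, not a proposal for the exact mechanism):

```python
import re

__requires__ = 'undertime==1.0.2.dev0+ng7058042.d20180223'

# parse "name==version" out of the pin the script writer emitted
match = re.fullmatch(r'(?P<name>[^=]+)==(?P<version>.+)', __requires__)
print(match.group('version'))  # -> 1.0.2.dev0+ng7058042.d20180223
```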
if we deploy an egg, for example, couldn't we just hardcode the path to the egg there and make a special case for our own package in `get_distribution()`? that would solve 99% of the use cases and, with proper lazy-loading, would fix this shit for the remaining 1%, which would pay a higher cost to look at other packages.

i know it's a major change to packaging, but it's not like we have a precious little pearl of design that we don't want to touch here. one more stitch on that frankenstein might actually make it look prettier. ;)
A big 👍 to what @anarcat writes above.
The way this is built at present is causing major performance issues for Python projects that can't simply abandon `pkg_resources`. A 200ms+ hit on virtually all Python executables is a big, big, big deal.
Another thing: anecdotal evidence here shows that there might be more room for performance improvements than I thought possible. Indeed, not that much time is actually spent in syscalls at all; a lot of the time spent in `pkg_resources` is raw userland CPU cycles. Here's an example of a program I have which uses `setuptools_scm` to guess its own version number, but only if it's not available in a `_version.py` file. With the `_version.py` file we get our baseline, about 83ms:
```
$ multitime -n 100 -s 0 -q ./undertime.py
===> multitime results
1: -q ./undertime.py
        Mean        Std.Dev.    Min         Median      Max
real    0.083       0.005       0.081       0.081       0.119
user    0.075       0.007       0.064       0.076       0.116
sys     0.006       0.005       0.000       0.008       0.016
```
Note: 6ms spent on system calls on average. Now, with `pkg_resources` being hit:
```
$ multitime -n 100 -s 0 -q ./undertime.py
===> multitime results
1: -q ./undertime.py
        Mean        Std.Dev.    Min         Median      Max
real    0.377       0.014       0.369       0.373       0.451
user    0.345       0.015       0.316       0.344       0.416
sys     0.026       0.009       0.004       0.024       0.048
```
feeel it, feeeeeeel the pain!!!!! Whereas the program was running almost instantly (below the traditional 100ms threshold), it's now pushing past the load time of my homepage through a DSL link, at a lovely 380ms. Also note that the vast majority of that is spent in userland: 345ms, with only around a four-fold increase in system time (26ms).
In other words, I don't know what we do in pkg_resources, but we sure as heck spend a lot of time building and inspecting data structures that we just throw out the window a second later when the program terminates: most of the time is not spent looking at the filesystem, but actually calculating stuff.
It might very well be that `find_on_path` and `add_entry` are actually where all the action is: they are called often enough, and are slow enough, to matter. How about we optimize the heck out of those functions now? :)
I think you're missing the point; it's not just about your version number: handling namespace packages, checking requirements, activating distributions... You can't do that without knowing what distributions are available. And `load_entry_point` will use that info.
@benoit-pierre the point about `pkg_resources` is exactly that: it's doing too much stuff. it shouldn't load a model of the universe and try to guess the weather in Honolulu when all i want to know is "what's my address?" Normally, I just know, but if i don't, I just step out the door and look. Most operations are just like that: local inventory, not multi-level introspection. I understand why those might be conflated at the data structure level, but from an API consumer perspective, this is really hurting the developer experience.
"..from an API consumer perspective, this is really hurting the developer experience."
Not just from the developer experience.
Again, I can't emphasize this enough: programs that are written in Python which end up hitting this code path (which is a huge number of all Python executables) are experiencing a massive decrease in performance when it comes to execution time.
Yes this continues to be the major startup time slowdown for all of my projects.
I like how popular fastentrypoints is getting, but clearly the real solution is just to make entry_points fast -- not to mention all the other uses of pkg_resources. I've been using gross `data_files`/`package_data` hackarounds just to avoid it in my own projects. I hesitate to use Requests in small scripts because I know it's using `pkg_resources`. Requests alone should be reason enough to fix this ridiculous behavior.
@scopatz Yes! I'm a huge fan of xonsh! The main reason I don't use it much is because I'm constantly opening new terminals and the load time is slow. Fix pkg_resources already! The people want xonsh!
@ninjaaron it would be great if you could share the hack you use for `data_files`; it's the other main reason i use pkg_resources outside of entry_points and setuptools_scm.

i guess that if we have workarounds for entrypoints, data_files and get_version, i'd be happy with pkg_resources being slow. :p
@anarcat - Check out http://importlib-resources.readthedocs.io/en/latest/ which will also be part of Python 3.7
@anarcat looks like I misspoke. I use `package_data`. You have to do some stuff in setup.py, and you also have to include the required files in your MANIFEST.in (to ensure they're included if someone installs with pip). Then, I locate the data file relative to the module when I need to use it. It's absurd.
Here's an example where I ship the data in the package folder: https://github.com/ninjaaron/lazydots for that, you just use include_package_data=True in setup.py
In another case (https://github.com/OriHoch/python-hebrew-numbers), the data comes from a repo we vendor, so you have to do some manual enumeration.
More info: https://docs.python.org/3.6/distutils/setupscript.html#installing-package-data
At this point I'm pretty well convinced `pkg_resources` can't be fixed; it needs to be replaced. And it should be replaced not by another monolithic package but by smaller packages that each do a piece of what `pkg_resources` tries to do. That's why we wrote `importlib.resources` for 3.7 (and the `importlib_resources` backport), and why things like fastentrypoints are a good thing.
With Python 3, we shouldn't need `pkg_resources` for namespace packages; we should just get on with it, ditch Python 2, and adopt native namespace packages. The next thing I want to look at is a better way of extracting version numbers from libraries and applications. I have some thoughts here, but haven't yet put pen to paper.
@warsaw `importlib.resources` does indeed solve the `resource_filename` problem perfectly (which covers about half of my own uses of `pkg_resources`), but what should we do with entry points (which make up the other half)? Do you know of a dedicated module for implementing these, or are there any plans for one?
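For the `resource_filename` half, the swap looks roughly like this (a sketch; `mypkg` and `schema.json` are placeholder names):

```python
# Old way, which triggers the full sys.path scan at import:
#   from pkg_resources import resource_filename
#   path = resource_filename('mypkg', 'schema.json')

# New way (Python 3.7+, or the importlib_resources backport);
# path() is a context manager because the package may live in a zip:
from importlib import resources

with resources.path('mypkg', 'schema.json') as path:
    data = path.read_text()
```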
@konstantint what's wrong with fastentrypoints? couldn't that make it into the stdlib eventually?
There is also http://entrypoints.readthedocs.io - which has some entrypoint helpers.
@anarcat ha! I doubt it. fastentrypoints monkey patches setuptools, which also isn't in the standard library!
fastentrypoints is really just a hack-around for situations where you're not using wheels, i.e. for development testing (something installed with `pip install -e`), or if someone else is creating a package and you don't have control over whether they use wheel or setup.py (many Linux distros have scripts for automatically generating their package format from Python packages that, unfortunately, don't use wheel).

For uploading to PyPI, you should already be building with wheel, which solves the same problem (I even stole the script fastentrypoints generates directly from wheel... or maybe one of the related packages, possibly distlib...).
I definitely think of fastentrypoints as a sort of stop-gap until someone fixes this broken packaging system for real.
We're experimenting with fep, but yeah, I'm not sure that's going to be the long-term solution. Another thing to consider: IMHO we really, really want to get rid of `setup.py` and the whole imperative build system, which includes `setuptools`. I think PEP 517/518, `pyproject.toml` declarative build specifications, `flit`, etc. are the long-term way to go, so that's what we should be thinking about. I'm not as involved in the distutils-sig (or, maybe I try to ignore it as much as possible, and there are awesome people doing great work over there ;), but I think that's also the general consensus for where the ecosystem should be moving.
while it'd be great to get rid of `setup.py`, i'd like to see this issue fixed without having to rebuild all of Python's distribution system, which could take a... rather long time. :)
I recently tried out the brand new importlib-metadata and importlib.resources! Here's an example PR which replaces `pkg_resources.get_distribution(...).version` and `pkg_resources.resource_filename(...)` with those modules: pre-commit/pre-commit#846
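The version half of that swap looks roughly like this (a sketch; `pre-commit` stands in for whichever distribution name applies):

```python
# Before: importing pkg_resources builds the whole working set
#   import pkg_resources
#   version = pkg_resources.get_distribution('pre-commit').version

# After: reads only this one distribution's metadata
import importlib_metadata  # becomes importlib.metadata in Python 3.8+

version = importlib_metadata.version('pre-commit')
```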
For comparison, the "usable startup" time before vs. after:
before

```
$ time pre-commit --version
pre-commit 1.11.2

real    0m0.363s
user    0m0.303s
sys     0m0.043s
```

after

```
$ time pre-commit --version
pre-commit 1.11.2

real    0m0.254s
user    0m0.229s
sys     0m0.020s
```
Thanks @jaraco @warsaw for these awesome improvements 🎉 🎉 🎉!
I do not see anywhere in this issue an explanation for why `pkg_resources` cannot be fixed. Has anyone tried?
> At this point I'm pretty well convinced pkg_resources can't be fixed, it needs to be replaced. And it should be replaced not by another monolithic package but by smaller packages that each do a piece of what pkg_resources tries to do.
@warsaw I tend to agree with switching to a unix philosophy type design, but can this particular issue about the eager path enumeration be fixed?
The import-time cost could be fixed; however, anything that depends on any API which uses the `working_set` will still incur the huge cost at ~some point. Construction of `working_set` basically involves querying the filesystem for every `sys.path` entry (and then reading a bunch of metadata off disk). For example, `iter_entry_points` requires that every distribution is available so that it can be checked for a specific class of entry point (meaning things like `flake8` / `pytest`, which use entry points as a plugin system, will still have something doing all of that work).
I think the problem is that `pkg_resources` has some very subtle semantics, it's used throughout the ecosystem, and somewhere somebody probably depends on the current behavior. Can it be fixed without breaking things? I'm doubtful, but all I know is that I have no interest in working on that. :)
@ninjaaron It's unclear who would be relying on this and why, and it's notoriously hard to warn about any default behavior (most people who do the default thing will probably not care about one specific behavior or another, plus you generally want the default behavior to be the sane one - very hard to get people to opt in to that).
We can start by making this behavior occur lazily only when needed, but a full understanding of what operations use the working set and why people use them would be needed to properly optimize it.
It is sad to see this bug is going to stay with us for a long time.
@navytux I think it's just that it's hard to fix. If you would like to take a crack at this it would be awesome to make this happen lazily.
@pganssle it's been a while since I looked, but iirc the issue is that the building of the WorkingSet underpins all other functionality, so if pkg_resources is in the critical path for your application (for example, finding the entry point), you are going to take the performance penalty whether it's lazily computed or not.
> As long as console entry points require the validation of all packages in the chain, I expect startup to be somewhat slow.
@jaraco is this right, do we require this for console entry points too? Note if packages are installed as a wheel the user sidesteps this as pip does not generate a script that tries to do this validation (bumped into this via pypa/virtualenv#1300)
> Do we require [validation of all packages] for console entry points too?
Yes - for console scripts generated by easy_install or for entry points processed by pkg_resources.
However, as you point out, console scripts generated by pip install don't have this characteristic.
Furthermore, entry points processed by entrypoints or importlib_metadata also do not have this characteristic.
Given the difficulty and risk of addressing this issue within pkg_resources, I've been focusing my efforts on making `importlib_metadata`, which is planned to become `importlib.metadata` in the stdlib, a suitable replacement for 99% of the use-cases for which applications currently rely on pkg_resources. After that transition, I'd expect setuptools might be able to rely on importlib_metadata, or pkg_resources could become a private, internal implementation detail of setuptools.

That doesn't necessarily preclude someone from attempting to make pkg_resources faster and more robust for these use-cases. It just means I'm focusing my efforts elsewhere (on this topic).
Sorry, it's hard to follow the above, but where has this issue ended up? I got here through a bug report on one of my libraries (simplistix/configurator#6), but I'm certainly interested in the wider issue.
Am I right in guessing that this is basically because, at startup, `pkg_resources` is going to go scan for entry points in all installed packages? If so, this results in two problems:
- Console scripts use entry points and so will be massively slow to start up. Could this also be because pkg_resources appears to do some pseudo-dependency-resolution stuff? I'd love it if all of this could go away and be replaced with a simple import statement.
- Anything that uses entry points is going to be massively slow when lots of packages are installed or the filesystem serving the Python code is slow. HPC environments often have both of these ;-)
IIUC, I'd see the best solution to have entrypoints collected in a central location as part of the package installation process, rather than being collected at runtime, but would that have to be a PEP nowadays?
> IIUC, I'd see the best solution to have entrypoints collected in a central location as part of the package installation process, rather than being collected at runtime, but would that have to be a PEP nowadays?
This would make this a pip feature request, and at the very least a PEP would need to be formulated for this that deals with how this central location is maintained as far as CRUD operations and parallel interactions go. In theory, this could be done, though. Tagging @pfmoore for thoughts. And then it's not clear how one could actually handle altering the `sys.path` at startup/runtime, which can extend/change this central database. Maybe every `sys.path` entry could have a `distributions.sqlite` file that basically contains everything under `*.dist-info`, falling back to directory discovery as today if it's absent.
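A sketch of that lookup-with-fallback shape (everything here, from the `distributions.sqlite` name to the table layout, is a hypothetical design, not an existing format):

```python
import os
import sqlite3
import sys

def iter_distribution_names():
    for entry in sys.path:
        db_path = os.path.join(entry, 'distributions.sqlite')  # hypothetical cache
        if os.path.isfile(db_path):
            # fast path: one file read instead of a directory scan
            with sqlite3.connect(db_path) as conn:
                for (name,) in conn.execute('SELECT name FROM distributions'):
                    yield name
        elif os.path.isdir(entry):
            # fallback: directory discovery, as today
            for item in os.listdir(entry):
                if item.endswith('.dist-info'):
                    yield item[:-len('.dist-info')].rsplit('-', 1)[0]
```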
@gaborbernat - thanks for the quick response. To try and explain more about the environments where I think this causes problems:
- huge numbers of Python packages installed (think your average data science stack, or something like the Anaconda distribution)
- only a few of those packages actually used by any one script.
I would suggest that most people would be happy for entry points to not take into account modification of `sys.path` in terms of finding entry points, in return for making entry points scale. I'd be sad, but of course would have to accept, if empirical evidence suggests otherwise.
I'm not sure a database per `sys.path` entry would make things better; I was thinking more along the lines of one data store per Python installation/virtualenv/etc.
> I would suggest that most people would be happy for entry points to not take into account modification of `sys.path` in terms of finding entry points, in return for making entry points scale. I'd be sad, but of course would have to accept, if empirical evidence suggests otherwise.
While most might be OK with that, I don't think we can design something that would drop a currently supported feature. IMHO the only way I can see this working at the moment is tying it to sys.path. This could decrease the disk accesses roughly from 1000 to 10, which IMHO could be enough.
Thanks for tagging me here @gaborbernat. I'll answer this with my "interoperability BDFL" hat rather than my "pip maintainer" hat, as I think that's probably more appropriate (see later for why).
Requiring installers to log entry points in a central location is definitely something that would need to be standardised via a PEP (essentially, it's an extension of PEP 376 - Database of Installed Python Distributions). As tools like setuptools couldn't rely in that database unless they were sure all installers would maintain it, a standard is the correct approach here. (And with my pip maintainer hat on, pip would only implement something like this if it were backed by a standard).
One point I would like to clarify here, though - the original post talked about console scripts, which are defined using entry points. However, the script wrappers installed by pip (which are implemented by distlib) do not use entry point discovery to work, so they do not need or use `pkg_resources` (they use entry point discovery when installing, but not at runtime). It's only the wrapper scripts implemented by setuptools itself (used by pip for develop mode, and for "legacy" installs that don't go via a wheel build) that use the entry point discovery mechanism at runtime.
Personally, I'd strongly advise against using that mechanism - my understanding is that it was mainly to support the mechanisms setuptools has for dynamically activating versions of a package at runtime (something that I think has been deprecated for some time). But either way, the choice to use it or not is entirely in setuptools' hands. It wouldn't make any difference for other uses of entry points, but it would address the issue for setuptools-created console script wrappers.
Lots of work has been done to address this issue, in particular by satisfying the use-cases of `pkg_resources` in other places. In particular, the `importlib.resources` and `importlib.metadata` stdlib packages (and their `_`-separated backports) attempt to satisfy the most common use-cases around metadata (including fast entry point parsing) and package resource retrieval. At the same time, this project is deprecating egg-based installs and easy_install.
As a result, use of the `pkg_resources` package for these use-cases is discouraged and the module is largely deprecated.

To that end, I don't believe there's much left to do with this issue as described. Instead, downstream consumers should attempt to use the importlib features instead. If there are use-cases that are not satisfied by those packages, please feel free to raise those as separate issues.
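For reference, the entry point half of that migration looks roughly like this (a sketch using the dict-style `entry_points()` API of Python 3.8/3.9; `my-tool` and the group name are placeholders):

```python
from importlib.metadata import entry_points  # or the importlib_metadata backport

# Parses entry-point metadata without building a pkg_resources working set
for ep in entry_points().get('console_scripts', []):
    if ep.name == 'my-tool':
        main = ep.load()  # imports only the module backing this entry point
```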
@jaraco I am a simple developer who uses setuptools but have avoided using `entry_points` because of this bug. I have been watching this bug (for a few years!) waiting for the day it gets fixed so I can start using them again. According to the current automatic script generation documentation, `entry_points` are still the intended way to do this, but I just tried it again using pip and my programs still cop a few unacceptable extra seconds of startup delay (e.g. when installed on a Raspberry Pi). What are you saying we should be doing now? Can you please provide a link to an example setup.py? I am using pip 20.0.2 and setuptools 46.1.3.
@bulletmark I'd recommend that you use pip's script generation, installing from wheels (or if you install from source, ensure you have the wheel project installed, so that pip builds a wheel and installs it, rather than going through the setuptools "legacy" script generation process).
The pip scripts still use exactly the same entry point mechanism, but only at install time - there's no runtime cost to the entry points.
@gaborbernat sorry, but I don't see how that link you quote relates to my question. Should I still be using setuptools? If so, should I not be following the current documentation to create an automatic script? I can't do what that current documentation says because the (pip-)created script runs far too slowly due to the present bug. I think this bug should be re-opened, at least until that documentation is corrected and/or some other avenue is provided for us to create automatic scripts from setuptools.
> I can't do what that current documentation says because the (pip-)created script runs far too slowly due to the present bug.
@bulletmark, sorry to hear you're still having issues. I'm surprised by this report. If pip is creating the script, it should not be importing `pkg_resources` implicitly. However, if any of the libraries used by that command are importing `pkg_resources`, you'll still have the slowness. The recommendation is for those packages to use `importlib.metadata`. The instructions in setuptools are still accurate for how a project can/should declare console-script entry points. Do you have an example of a command whose library, when installed by pip, is still slow to execute?
@jaraco wrote:

> Do you have an example of a command whose library when installed by pip is still slow to execute?
Here you go: https://github.com/bulletmark/dummysetuptestapp.
@bulletmark If I install your test program using the most recent pip and setuptools versions, the generated wrapper does not use `pkg_resources` and is fast to execute.

The problem may be in the use of `sudo` (per the instructions in your README), which may give you different pip and setuptools versions than when you run without `sudo`.

You may want to compare the output of `sudo python3 -c "import sys; print(sys.executable)"` with `python3 -c "import sys; print(sys.executable)"` to check this. Note that best practice is to avoid `sudo pip install` anyway.
FWIW the issue for me has always been that `pip install -e` is slow, not `pip install`, which has been working fine. (always = since ~2018)
Okay, that's odd. On my main system, the generated `dummysetuptestapp` script indeed doesn't have the `pkg_resources` import.

On my Orange Pi Zero board, however, the generated script somehow still imports `main` through `load_entry_point`, even though I manually updated `pip` and `setuptools` beforehand.
```
$ python3 -m venv env
$ ./env/bin/pip3 install -U pip setuptools
[...]
Installing collected packages: pip, setuptools
  Found existing installation: pip 9.0.1
    Uninstalling pip-9.0.1:
      Successfully uninstalled pip-9.0.1
  Found existing installation: setuptools 39.0.1
    Uninstalling setuptools-39.0.1:
      Successfully uninstalled setuptools-39.0.1
Successfully installed pip-20.0.2 setuptools-46.1.3
$ ./env/bin/pip3 freeze --all
pip==20.0.2
pkg-resources==0.0.0
setuptools==46.1.3
$ ./env/bin/pip3 install .
$ cat ./env/bin/dummysetuptestapp
#!/tmp/dummysetuptestapp/env/bin/python3
# EASY-INSTALL-ENTRY-SCRIPT: 'dummysetuptestapp==1.0','console_scripts','dummysetuptestapp'
__requires__ = 'dummysetuptestapp==1.0'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('dummysetuptestapp==1.0', 'console_scripts', 'dummysetuptestapp')()
    )
```
@WGH- I've noted this earlier, but the behaviour is different depending on whether you have wheel installed. In your example, you don't, so you get the old, slow version of the scripts. Please install wheel and try again, and you'll see the newer scripts.
Okay, it turned out I had to install wheel as well. It was far from obvious that wheel somehow influences pip/setuptools (?) entry point script generation. FFS.
@WGH- Agreed, it's not obvious. It's because pip can't build a wheel and do its own (faster) script generation when wheel isn't present, so it falls back on the older setuptools direct-installation code, which is what generates the old-style wrappers.

Maybe pip should warn in this case. I'll raise a pip issue suggesting that.
@bulletmark Sigh, I noted this earlier, but it got lost in the discussion, my apologies for not being clearer. As noted, I'm raising a pip issue to consider warning in this case, to make it easier to see what's going on.
Given this situation, I still won't be using `entry_points`, because I don't want users to suffer slow startup just because they don't have that package installed. Can this be improved, other than merely outputting a warning?
> Can this be improved, other than merely outputting a warning?
You could switch your project to use `pyproject.toml` for defining its build dependencies. Then you can explicitly require wheel, and pip will install it for you when your users build from source. That may or may not be a bigger change than you want to make, but it should be relatively painless. In principle, all you need is a `pyproject.toml` file containing
```toml
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
```
It does change some details of how pip builds your project, though, so please test before releasing.
> Sigh, I noted this earlier, but it got lost in the discussion, my apologies for not being clearer.
@pfmoore you were perfectly clear, the fault is mine for not reading at least some recent messages of the discussion. (the real fault is on the non-obvious pip/setuptools/wheel interaction, of course)
@pfmoore - when would the non-wheel case be desirable? How about vendoring in `wheel`, so that pip never builds the degraded scripts that use `pkg_resources`?
@pfmoore can `wheel` be added to the build process using the `setup_requires` parameter of `setuptools.setup`?
@pradyunsg - from a user perspective, having a really badly performing thing done because some library I don't know about isn't installed by a tool that should require it does end up feeling like a bug with the tool...
> @pfmoore can wheel be added to the build process using the setup_requires parameter of setuptools.setup?
@kapsh not safely; `setup_requires` is deprecated because it uses `easy_install` to install the packages, not pip. `pyproject.toml` is the correct, supported way. What's the use case that works with `setup_requires` but not with `pyproject.toml` (because we'd like to fix it!)?
> from a user perspective, having a really badly performing thing done because some library I don't know about isn't installed by a tool that should require it does end up feeling like a bug with the tool...
@cjw296 Agreed, up to a point. While I understand that the history isn't the point here, we're in a transition period. The `setuptools` wrappers were historically the approved solution, and pip did `setup.py install`. We moved away from `setup.py install` to PEP 517 (`pyproject.toml` and building wheels), but we're still part-way through that process (`pyproject.toml` adoption is still in progress, and the new wrappers depend on wheel). The transition to PEP 517 is not itself about better wrappers, but they come as a consequence.
The fix pip is progressing towards is making all builds go via PEP 517. We only support `setup.py install` any more to avoid breaking projects that haven't done anything about the transition yet and would break under the new process. Conversely, setuptools isn't interested in updating its wrappers, as they are being phased out by the pip change (and the general move away from installing via setuptools directly).

So yes, it's a bug, but it's being fixed. The fix is just rather long-winded, for compatibility reasons, and we're doing our best to apply mitigations while the process is ongoing.
@pfmoore sorry, I am not very proficient with setuptools and can't tell you use cases where the PEP 517 way would fail (personally I like to abuse `pip install -e .`, but that's another story). I've only used `setup_requires` to make `setuptools_scm` available to the packaging process. Yet I have to note that the documentation at https://setuptools.readthedocs.io/en/latest/setuptools.html#new-and-changed-setup-keywords never mentions deprecation of the `setup_requires` keyword; maybe you would like to fix that detail. A big red warning while building sounds useful.

Thanks for your brief on the current situation; this is interesting to know about.
> Yet I have to note that the documentation at https://setuptools.readthedocs.io/en/latest/setuptools.html#new-and-changed-setup-keywords never mentions deprecation of the setup_requires keyword; maybe you would like to fix that detail. A big red warning while building sounds useful.
Good point. I'm not a setuptools developer, so I'll leave it to them to pick up on that.
One context where the degraded setuptools scripts are generated is for any/every Python package on Gentoo that has an entry point. This is probably ultimately an issue that the Gentoo Python team (e.g. @mgorny) would have to tackle, but it affects all system-installed Python packages.
> Yet I have to note that the documentation at https://setuptools.readthedocs.io/en/latest/setuptools.html#new-and-changed-setup-keywords never mentions deprecation of the `setup_requires` keyword; maybe you would like to fix that detail. A big red warning while building sounds useful.
`setup_requires` is sort of semi-deprecated. It's not the preferred way to add things to the build dependencies, but it is compatible with PEP 517/518 and feeds into `get_requires_for_build_wheel`. We can probably open a separate issue to discuss this.
@pfmoore - okay, so I think I'm doing everything you said, but still getting entrypoint scripts built using pkg_resources:
```
$ pip freeze --all | egrep -i 'wheel|pip|setuptools'
pip==20.1
setuptools==46.1.3
wheel==0.34.2
$ pip install -e .
Obtaining file:///home/chris/energenie
...
Successfully installed energenie
$ cat `which check`
#!/home/chris/virtualenvs/energenie/bin/python3
# EASY-INSTALL-ENTRY-SCRIPT: 'energenie','console_scripts','check'
__requires__ = 'energenie'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('energenie', 'console_scripts', 'check')()
    )
```
What am I doing wrong?
@cjw296 Ah, you're using editable installs (`-e`). They go via `setup.py develop`, because they are a setuptools-specific feature, not handled via wheels or any published standard. So you get setuptools wrappers in that case, and there's no avoiding it. Sorry, I forgot to mention that case.
It feels like the chances of getting a non-sucky entrypoint script are really pretty small, no?
(Honestly, reading through the above feels like some magic incantation, rather than a standard way to install software in one of the most popular programming languages in the world...)
More constructively, what's the current state of play on publishing a standard for editable installs? (-e seems pretty ubiquitous, I seem to remember flit having something too, not actually sure what conda does or if they care...)
There's a lot of debate on editable installs, but the latest round of discussions is here.
To cut through some of the packaging community specifics there, there's one proposal that hasn't been completely written up yet (a rough spec is here) which is waiting on someone with time to build a proof-of-concept implementation for some build backend (probably setuptools) and for pip. There's still a lot of debate over whether this is the best approach, but TBH, we need someone to write code at this point, not to discuss ideas (we've got plenty of people willing to do that 🙂)
Edit: @gaborbernat posted a link to some additional points that I'd not spotted since I last checked the topic, so we're a bit further forward than I suggested above.
I use Arch Linux, which installs all Python packages using `setup.py install` (see the package guidelines). So all Python executables installed through the system package manager (`tox`, `virtualenv`, `meson`, `youtube-dl`, `docker-compose`, `borg`, and many more) get the 250ms startup slowdown due to the `pkg_resources` import, which is unfortunate.
I reported this issue to the Arch Linux devs, and they explained that they prefer the `pkg_resources` method because it provides a nice informative error message if one of the dependencies is broken or missing, for example:
```
Traceback (most recent call last):
  File "/usr/bin/pyrsa-keygen", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3259, in <module>
    def _initialize_master_working_set():
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3242, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3271, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 584, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 901, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 787, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pyasn1>=0.1.3' distribution was not found and is required by rsa
```
compared to some import error, or worse, a silently broken program if the import is conditional.
I wonder if the Python devs have any recommendations to distros on this, or if other distros do something different.
(Apologies if I missed previous discussion on this.)
The recommendation to distros is definitely to not use `setup.py install`.
We are 100% planning on removing `setup.py install`, and for several years we haven't been fixing bugs that can be fixed by using `pip`. They don't have to use `pip`, but they should be using something equivalent. The sooner they migrate to something else, the better.
Could you please indicate what "something equivalent" useful for distributions would be? It's easy to remove features you don't need for your workflow. It's much harder to provide a good alternative, plus a plan to update thousands of packages to work with it. Flit/poetry have already caused enough mess by not caring at all about what distributions need.
The recommendation is to install using pip. If a distribution doesn't like the script wrappers pip generates, they can certainly write their own (or write a tool to generate something that works as they want). As things stand, I think you'd have to overwrite the pip-created wrappers (or put your own earlier on PATH so they get priority), but it would be a reasonable request for `pip install` to have a flag that omits generating script wrappers.
What advantage does pip have over `setup.py install`? Besides creating an even bigger circular dependency graph that makes switching to a new Python version an experience that wastes hundreds of hours of our time.
I think maybe we should take this to a new issue, since we're getting a bit far off the original topic of discussion.
@mgorny If you do not like the supported workflow, or the fact that `setup.py install` is deprecated and unsupported, would you mind opening a new issue?
Considering that this issue is closed and seems to be a lightning rod for off-topic discussions, I recommend we lock it.
> What advantage does pip have over `setup.py install`? Besides creating an even bigger circular dependency graph that makes switching to a new Python version an experience that wastes hundreds of hours of our time.
Pretty much every word of PEP 518 and PEP 517.
@gaborbernat you completely missed the point, PEP 517 and 518 are completely irrelevant for what is being discussed.
I've gone ahead and locked this topic for the sake of the inboxes of the people who followed this issue looking for updates on `pkg_resources` and who don't care about Linux distributions. If other maintainers feel I've overstepped my bounds here, they are welcome to unlock it.
I would like to say to the Linux distributors (and particularly the Arch Linux packagers, a distro I've been using and heartily recommending for years): thank you for the work you've been doing. We definitely would like to continue working with you to find a reasonable way to take your important use case into account. You are always welcome to open an issue on `setuptools`, a thread on the packaging discourse, or even to e-mail me personally. Next time PyCon happens, we'll be having a packaging summit, and we'd be happy to have you involved.
> I would like to say to the Linux distributors (and particularly the Arch Linux packagers, a distro I've been using and heartily recommending for years): thank you for the work you've been doing. We definitely would like to continue working with you to find a reasonable way to take your important use case into account.
+1
> You are always welcome to open an issue on `setuptools`, a thread on the packaging discourse, or even to e-mail me personally.
I'll extend the same offer from pip's side as well!