psf/requests

Consider making Timeout option required or have a default

mlissner opened this issue · 62 comments

I have a feeling I'm about to get a swift education on this topic, but I've been thinking about the pros/cons of changing requests so that somehow there is a timeout value configured for every request.

I think there are two ways to do this:

  1. Provide a default value. I know browsers have a default, so that may be a simple place to begin.
  2. Make every user configure this in every request -- bigger API breakage. Probably not the way to go.

The reason I'm thinking about this is because I've used requests for a few years now and until now I didn't realize the importance of providing a timeout. It took one of my programs hanging forever for me to realize that the default here isn't really very good for my purposes. (I'm in the process of updating all my code...)

I also see that a lot of people want Session objects to have a timeout parameter, and this might be a way to do that as well.

If a large default were provided to all requests and all sessions, what negative impact would that have? The only thing I can think of is that some programs will get timeout exceptions where they previously hung, which seems like an improvement to me.

Caveat, added May 13, 2016:

Please don't use this issue to discuss adding a timeout attribute to requests. There are a number of discussions about this elsewhere (search closed issues), and we don't want that conversation to muddy this issue too. Thanks.

This is an entirely reasonable suggestion.

Honestly, I'm not averse to doing it: there are definitely worse things to do than this. I'd be open to providing a default timeout. However, I don't think we can do it until 3.0.0.

Of course @kennethreitz as keeper of the spirit of requests has got the final say on this, and should definitely weigh in.

Changing the default would definitely need to wait until 3.0.0.

I'm not against having a well-reasoned (and therefore, also well-documented) default timeout parameter.

I'm still pretty strongly against sessions having a timeout attribute though and I think that shouldn't muddy the waters of this discussion because it will sidetrack things and prevent an otherwise productive discussion.

@sigmavirus24 I don't know the arguments pro/con sessions having a timeout, but I'm happy to defer to others' judgment and experience on that, keeping these waters clean, like you say.

I'm just happy that I haven't gotten a swift education about timeouts.

I'm definitely in favor of this in light of dealing with a package that doesn't provide a way to set a timeout, but does accept a fully configured session. My current fix is a specialized HTTPAdapter that sets a default timeout if none is provided:

from requests.adapters import HTTPAdapter

class TimeoutHTTPAdapter(HTTPAdapter):
    def __init__(self, timeout, *args, **kwargs):
        self._timeout = timeout
        super().__init__(*args, **kwargs)

    def send(self, request, *args, timeout=None, **kwargs):
        # Fall back to the adapter-wide default when no timeout is given
        # (note: an explicit timeout=None also gets the default here).
        if timeout is None:
            timeout = self._timeout
        return super().send(request, *args, timeout=timeout, **kwargs)

I'd much prefer something like:

session = Session(timeout=...)

# or even

session.default_timeout = ...

However, this current fix isn't too cumbersome either.
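For readers landing here, the adapter workaround above is typically wired into a session like this (a sketch; the class is repeated so the snippet is self-contained, and the 5-second value is purely illustrative):

```python
import requests
from requests.adapters import HTTPAdapter

class TimeoutHTTPAdapter(HTTPAdapter):
    """HTTPAdapter that applies a default timeout when none is given."""

    def __init__(self, timeout, *args, **kwargs):
        self._timeout = timeout
        super().__init__(*args, **kwargs)

    def send(self, request, *args, timeout=None, **kwargs):
        # Session.send always passes timeout as a keyword (None when the
        # caller omitted it), so falling back on None works in practice.
        if timeout is None:
            timeout = self._timeout
        return super().send(request, *args, timeout=timeout, **kwargs)

session = requests.Session()
adapter = TimeoutHTTPAdapter(timeout=5)
# Mount for both schemes so every request through the session is covered.
session.mount("https://", adapter)
session.mount("http://", adapter)
```

Any code that accepts a pre-configured session (the scenario described above) then inherits the default timeout without knowing about it.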

๐Ÿ‘ for session = Session(timeout=...). Would you merge a patch?

@kuraga No. Per @sigmavirus24, and in many many previous discussions:

I think I could actually still implement it using hooks - create a method which uses the prepared request workflow, and in the hook, just call it again. What would be your suggested solution?

@Lukasa ok, but which discussion did you cite? Was it private? Which previous discussions?

Argh, sorry, copy-paste fail:

I'm still pretty strongly against sessions having a timeout attribute though and I think that shouldn't muddy the waters of this discussion because it will sidetrack things and prevent an otherwise productive discussion.

@kuraga please search closed issues. There are several discussions of this. I really don't want this diversion to distract from the topic of a default timeout though. So can we please stop discussing this now as I asked nicely before. You and @justanr are distracting from the important and attainable portion of this issue.

@sigmavirus24 I added a caveat to my initial comment, above. Hopefully that should get this discussion focused.

I'm (as a user) slightly against this proposal.
There are the following caveats:

  1. There should be several different timeouts: resolve, connect, request, read. What should the defaults be for each of them?
  2. If there is a default read timeout, then (for streaming interfaces) there should also be keep-alive packets. Is this implemented?
  3. There must be a way to override the default and disable timeouts completely.

What have I missed?
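For what it's worth, points 1 and 3 are already expressible with today's API: requests accepts a (connect, read) tuple that maps onto urllib3's Timeout object, and passing None disables a phase entirely (there is no separate resolve timeout, though). A small sketch:

```python
from urllib3.util import Timeout

# requests' timeout=(3.05, 27) maps to a urllib3 Timeout with distinct
# connect and read phases; None means "no limit" for a phase.
t = Timeout(connect=3.05, read=27)
print(t.connect_timeout, t.read_timeout)  # 3.05 27

# Disabling timeouts completely is also possible today.
unlimited = Timeout(connect=None, read=None)
```

So the only open question in this proposal is really the default, not the mechanism.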

At the very least, how about we include the timeout arg in the first introductory examples in the documentation? It's bad when intro examples are subtly wrong for the sake of brevity.

It's extremely unclear to me what "wrong" means here. They aren't wrong: they work, as designed. The introductory examples are all clearly typed into the Python interactive interpreter, so they have a timeout: the user sitting there can hit ^C anytime they like.

I'd welcome enhancements to the "timeouts" section of the documentation to be more emphatic, that certainly does seem reasonable. But I don't think we need to go through and add the timeout arg.

It's extremely unclear to me what "wrong" means here. They aren't wrong: they work, as designed.

I think the point that's generally acknowledged by this bug is that the design was wrong. There's a general consensus here that adding a default timeout or requiring it as an argument is a good idea. The docs could address that, and that seems like a simple step to take until this issue is resolved in the next major version.

What happens in practice is that people grab the examples from the docs, don't read the timeout section carefully (or don't understand its implications -- I didn't for years until I got bit), and then wind up with programs that can hang forever. I completely agree with @chris-martin that until this issue is fixed, all examples in the docs should provide the timeout argument. Otherwise, we're providing examples that can (and probably will) break your programs.

There is a substantial difference between "is a good idea in production code" and "should be used in every example". For example, all production code should use Sessions for everything, but the documentation doesn't do that because it doesn't help teach the specific lessons that the documentation is intended to teach.

While I appreciate the concern that users will blindly copy code out of the documentation without further consideration, anyone doing that in their product is necessarily going to have sub-par results. This is true in all programming tools.

That's fair enough, @Lukasa. What about making the Timeout section more explicit then? Right now it says:

You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter.

And then goes on with a long, somewhat complicated warning. Could the first part say:

You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter. Nearly all production code should use this parameter in nearly all requests. Failure to do so can cause your program to hang indefinitely:

Something like that?

Yup, I'd be happy to merge a PR with that change in it. 😄

I think that we've addressed this through documentation. I don't think any of the core team is going to change the API to require a timeout and I think we've done our diligence here.

I doubt this will change anybody's mind, but I vehemently disagree with closing this bug without a fix. We improved the documentation via the PR that I filed, but in practice I regularly run into programs that hang because they do not have a timeout. It's gotten bad enough that it's one of the first things I think of when a program hangs.

Requests is a wonderful library but you can't document your way out of a problem like this. Look at the list of issues referencing this one. Documentation is just not enough, or else issues would stop referencing this one.

I respect the requests maintainers greatly, but I hope you'll reconsider closing this. If it causes API changes, that's what version bumps are for. But I'm not even sure API changes would be needed.

Please reconsider.

We could consider making a default in 3.0, but I don't know what it would be... 120s?

Idk, I like our current design. You really shouldn't be hitting the internet without proper timeouts in place ever in production. Any sane engineer knows that -- it's not Requests' job to do your job for you.

Here are a few example idle timeout defaults from other libraries, in case that helps:

Those seem reasonable to me. I just think this should be a connect-only timeout, not a download timeout.

It would also be overridable via environment variable.

Hooray for reopening this bug. I like the solution of providing a default with an env override. Seems simple enough to me.

You really shouldn't be hitting the internet without proper timeouts in place ever in production. Any sane engineer knows that -- it's not Requests' job to do your job for you.

I do disagree with this though. The thing I love about Requests (and Python in general) is that these kinds of details are usually taken care of for me. For me, I think that's what's so jarring about this issue. A super simple GET request has a gotcha that could bite you way later.

Anyway, thanks for reopening. I really appreciate it and I think the community will greatly benefit from a default timeout.

@mlissner I don't disagree with your statement -- as long as we apply this only to connect timeouts, not download timeouts. Download timeouts should be unlimited by default (people downloading large files).

Environment overrides are already causing us to pull our hair out. I'd like to not add any more, for what that's worth.

And as a user of software, I'd like to not have yet more mysterious inputs to programs, for what that's worth. Nobody's going to bother documenting that their daemon uses requests and therefore has behavior that depends on this environment variable.

Don't disagree. Will think about it.

Download timeouts should be unlimited by default (people downloading large files).

That's not how Requests timeouts work. They apply per socket I/O syscall, not over the total execution time. That is, they govern how much each call to read() blocks. Large file downloads shouldn't be an issue.
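A quick stdlib illustration of that semantic (no network or requests involved): the timeout bounds each read call, not the whole transfer, so a slow-but-steady stream never trips it.

```python
import socket
import threading
import time

# A sender that trickles one byte every 0.2s; the receiver uses a
# 1-second timeout per recv() call. The whole transfer takes ~1s,
# but no single read waits long enough to time out.
server, client = socket.socketpair()
client.settimeout(1.0)

def slow_sender():
    for _ in range(5):
        time.sleep(0.2)
        server.send(b"x")
    server.close()

threading.Thread(target=slow_sender).start()

received = b""
while True:
    chunk = client.recv(1)  # may block up to 1s per call
    if not chunk:           # b'' after the sender closes
        break
    received += chunk
print(received)  # b'xxxxx'
```

The same logic is why a large-file download over a healthy connection doesn't hit a read timeout: each individual read completes quickly even if the total transfer takes hours.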

vog commented

Has there been any progress on this?

@mlissner To add to your statement, I think that the current situation kind-of contradicts the stated project goal "HTTP Requests for Humans".

Humans need sensible defaults. Not just in graphical user interfaces, but also in command-line user interfaces and software libraries. And "no timeout" is almost never a sensible default. I was very surprised to read about that trap in the manual. This note should be more prominent in the manual, perhaps being the first note in the Quickstart (rather than the last note), because this is surprising and not what you expect from a library that performs network calls.

(Even better than a more prominent place in the manual, of course, would be to fix this once and for all.)

@sigmavirus24 @chris-martin I fully agree, and I don't need timeout environment variables, either. All I need is either a sane default timeout, or making the timeout argument mandatory. No more, no less.

@vog there hasn't been any progress on this. We're planning it for 3.0.

Every operating system has a default TCP timeout. On Windows it's 240 seconds, and the RFC for TCP recommends a 100-second minimum.

The requests library, without sane defaults, is fundamentally unsafe to use. Any nested dependent library a project relies on can - at any time and even after thorough testing - hang forever. This can be exploited, for example, to produce efficient denial of service attacks.

In a recent project I worked on, one requests call using a prepared send, in a dependency nested three levels deep, was missed. It took us months to figure out what was going on.

Sure, from now on I will patch the send function in every project I work on. But no amount of linting or code review would have caught this.

If requests isn't going to add a default to earlier versions, at least the library should emit a warning to the console, and the docs should be updated.

https://github.com/psf/requests/pull/5434/files

I pushed for documentation updates too and I think the docs are pretty good now, but I think the overall goodness of requests creates a blind spot for developers. It's very easy to get started with requests, get things working, assume the defaults are good, and then never read more of the documentation.

This issue is now four years old. I'd very much support adding a warning to current versions of requests, since requests 3.0 (and the opportunity to change defaults) may never come.

I added a lint for my code for this but would much prefer there is a default timeout in the library

Making the timeout option have a default value, and setting that value to a very high number would not cause any breaking changes. I feel like any hesitation here about setting a default on a non-major release version is for fear of breaking changes. I propose a default value of 1000 seconds for the next minor release and lowering that default to 300 seconds for the next major release.

What if there was a way to set a default using env variables. Then people could opt in to the default, and it would be easier to also fix programs in production without having to dig through all code looking for libraries that might be using requests without the default. I was thinking something similar to how the http proxy has env variables.

As the person that filed this nearly five years ago, as a long-time user of requests that doesn't have any other issues with it, as a reviewer of pull requests who constantly begs developers to remember the timeout parameter, I can't say strongly enough how important -- and safe! -- I think this feature is. I think the risks to backwards incompatibility are almost negligible, and that this should be done as soon as possible.

The worst case of a default timeout is that a connection that badly hung in the past will now fail -- that's good! I can't think of a time that'd be bad.

Is there anything preventing us from finally fixing this wart in an otherwise nearly perfect library?

I recently found that there is a semgrep rule to check that a timeout is set for these calls. If you run this in CI you at least don't need to remind anyone anymore:

semgrep --config=r/python.requests.best-practice.use-timeout.use-timeout
# or semgrep --config=p/python ./

This is just a workaround though, I absolutely agree that default timeouts are essential and this should be fixed in the library here.

Ok, well, I created a pull request for the proposed change, and it passed CI. It's all in @nateprewitt's hands now to merge it and release it. Fingers crossed.

Over in #5779, there is a bit of discussion about how to fix this, but I think the blocker here is that we don't have direction on what kind of fix would be merged. We have a number of proposals:

  1. Cut a new major version with the proper fix (reasonable default timeouts)

    I strongly favor this option, but there's a bigger picture around releases that I lack visibility into, so maybe it's not practical. But I think this is the correct long term fix if this were a cleanroom implementation.

  2. Cut a new minor version with the OS-based Connect timeout (as in #5779)

    This feels pretty safe, but I think it would drain momentum from option number 1, so I don't favor it. Still it'd be an improvement as an intermediate step if we can be sure to get to option 1 eventually.

  3. Cut a point release with a deprecation warning whenever a timeout isn't provided, and start prepping people for one of the other solutions (which one?)

    This is harmless, but delays the actual fix, and if we're doing a major release anyway, I don't think we need to do this. As I've said before, for most people, adding default timeouts wouldn't create much upgrade hassle, if any.

  4. Cut a minor release with really long timeouts and slowly dial them into reasonable values over the span of several releases

    This would be a way to stem the bleeding when folks forget to include a timeout and their program freezes, but I think releasing a major version (option 1) would accomplish this sooner and more cleanly.

  5. Do nothing and continue having a way to shoot yourself in the foot when using requests.

Is there a maintainer that can issue an edict about how to proceed? I think if we have a clear path to fixing this, the code part should be easy, but it's never been clear what the maintainers would merge (as we've seen in #5779, for example).

If there's anybody that's opposed to option 1, I'd love to hear the reasoning. It seems like we lack momentum for that option, but I don't think anybody has ever voiced an opposing view to just doing reasonable default timeout values. In #5779, there's a note that folks are uncomfortable commenting here, but it'd be helpful to hear their arguments if others are able to paraphrase them. (For example, do they oppose option 1 generally, or just without a major release?)

My recollection of the consensus (from those who direct the project) has always been that Requests 3.0 will fix this with a default value. Fixing it before then is not on the table, due to our backwards-compatibility policy and because people have been made to feel that commenting contrary to the popular opinion here will get them ridiculed and decried.

A deprecation warning would be fine, though I doubt it will be effective, given there were years of warnings for sqlalchemy/sqlalchemy#6105 and it still caught many people completely off-guard.

I don't have the reasoning behind those people not wanting a default timeout set in a minor version other than they were concerned about the impact it would have on their existing infrastructure. My understanding is that many people use requests for its simplicity and gloss over more advanced usage until they really, really need it, and configuring retries is not user-friendly - even if they know that what they're doing is retryable.

A major release would probably be the answer to their concerns (they can cap to <3.0 until they're ready to deal with whatever fallout might happen).

Thanks @sigmavirus24, that's helpful. 3.0 looks like it may never come though -- it's already been in the works for at least five years and the milestone has a lot of open issues that look hairy. (Sorry I didn't notice this before when laying out options above.)

Is there any way we can re-think this as a bug fix instead of a new feature? There are no API changes here, and I think for the vast majority of programs using requests, a default timeout will fix issues instead of creating new ones. If it's in a minor release, folks can very easily upgrade their code to add really long timeouts if that's really what they need (I think this would be very rare).

Am I wrong that we still haven't heard an argument about how a default timeout would cause actual real-world problems? I think the closest we come is:

I don't have the reasoning behind those people not wanting a default timeout set in a minor version other than they were concerned about the impact it would have on their existing infrastructure.

Yes, this could impact people's stuff, but I think in most cases this would change a silently hung program to one that crashes, which is something that's really good. A bug fix.

This could be framed in release notes as:

Fix: Default timeouts of X seconds have been provided to all requests. Programs that previously hung indefinitely or connected after extremely long periods of time will now crash with requests.exceptions.Timeout instead. To upgrade these programs, simply provide a longer timeout via the timeout parameter to the get, post, head, and options methods.

I'm really sorry to be pushing on this so much. It's not my usual thing. I just don't want to wait (literally?) forever for this fix to land, and I think it's so safe we should find a way to do it. Sorry, this feels like the kind of thing I see in release notes all the time.

Requests 3 hasn't happened for a myriad of reasons, and winding, circling conversations like this one don't help accelerate its progress. If I had to hazard a guess, I'd say the pushing and tone of many comments in this thread (not yours) are why people with legacy code they're wary to change (aside from patching vulnerable dependencies) are not chiming in. Push-back on the urgency here has been met with forceful comments and complete disregard for situations other than your own, regardless of the fact that there's a desire to change things in a way that works well for 99% of users.

Spending time closing duplicate issues, redirecting questions, etc. takes most of the ~hour I have to devote to open source in a given week. Ignore the fact that this project in particular has caused me to burn out repeatedly and that the only other maintainer has more restrictions on their contributions and time than I do (and yet someone pinged them and only them directly here). 3.0 was supposed to be a funded development effort, but the funds were never used for that effort. This is well documented. Rather than guilting the two people actually trying to achieve that goal within those constraints, there are other productive things to do that could help. You could help triage issues or answer folks' questions on Stack Overflow so they stop reposting here. You could help triage/review PRs or help with bug-fix release notes.

I'd love to help make this happen, but I'm in the same place wrt time, just different repos, I think.

Sounds like if 3.0 is a far-off hope, maybe the best thing we could do is start logging a warning. That'd be easy enough to do (I think?), and would help folks start upgrading things. It might also give us some extra feedback about reasons people can't upgrade? Harmless enough to slip into the next version, but at least moves this forward?

Also, thank you for the detailed reply. I'm sure this is a really hard project to maintain.

Requests has no logging of its own any longer. We can definitely start logging things though (like this, for example) and add a deprecation warning, which might help raise hackles more in preparation for 3.0 (where people might see the warning in their CI environments).

That sounds great, and like it might break the dam for some features like this one that need more consensus to move forward. It'll let us make a decision and then warn people of coming things before those things come.

I guess a simple warning could say:

DeprecationWarning: No timeout provided for request. Starting in version 3.0, the requests library will provide a default timeout for such requests. For more information, see: https://xxx.com

Does that look about right?

Yep. You can point people at this issue.
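For reference, a minimal sketch of how such a warning could be emitted and what users would see (the message wording follows the comment above; the helper name is made up, and note that DeprecationWarning is hidden by default unless a filter enables it):

```python
import warnings

def _warn_missing_timeout():
    # Hypothetical helper requests could call when no timeout is passed.
    warnings.warn(
        "No timeout provided for request. Starting in version 3.0, the "
        "requests library will provide a default timeout for such "
        "requests. For more information, see psf/requests#3070.",
        DeprecationWarning,
        stacklevel=2,
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # DeprecationWarning is filtered by default
    _warn_missing_timeout()

print(caught[0].category.__name__)  # DeprecationWarning
```

The default-filter behavior is worth keeping in mind for the later point in this thread that many users silence or never see library warnings.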

Would a 3.0 release with a default timeout change, and then changing what's currently designated for 3.x to be 4.x instead be a path forward?

I've seen this issue crop up over the years, with various code bases. It would be very nice to have a default timeout to prevent programs that wait forever due to a missing timeout.

What if there was a way to set a default using env variables. Then people could opt in to the default, and it would be easier to also fix programs in production without having to dig through all code looking for libraries that might be using requests without the default. I was thinking something similar to how the http proxy has env variables.

This one got quite a few negative reactions, but I think it's a good idea. The requests library already relies on the environment: it honors a .netrc and it honors proxy settings, and it has a trust_env parameter that can be set to False to disregard environment settings.

Such a solution may empower quite a few end users. Part of the problem with this issue is that there is no strong consensus on what the default timeout should be, and that problem persists even if it's pushed to the users of the library. For example, I have my caldav library, and I could consider setting some default timeout there, but then I have the same issue: what would a sane timeout be? That library is in turn used by other libraries and utilities. Sometimes the end user is the one who learns whether a timeout setting is too low or too high. The timeout could be passed through the full chain and specified in a configuration file for each tool out there, but that adds a lot of complexity. An environment variable is a simple solution that works for (nearly) all projects.

A default timeout given through environment variables may solve quite a few problems. It is not a breaking change: users relying on the default timeout being infinite won't be affected, so it can be rolled out whenever, without DeprecationWarnings. However, it does not solve this issue. I strongly believe that the default should be set to something other than "infinite" in the long term. I've used the requests library a lot, and I always assumed that such a high-level library would come with some kind of default timeout; the first time I encountered this issue, my script had been hanging for days. I think even a very high default timeout, like 24 hours, is better than infinite.
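A tiny sketch of how such an opt-in environment override could behave (the variable name REQUESTS_TIMEOUT and its semantics are hypothetical, not an existing requests feature):

```python
import os

def default_timeout(env_var="REQUESTS_TIMEOUT", fallback=None):
    """Read an opt-in default timeout from the environment.

    Unset or malformed values yield the fallback; fallback=None
    preserves today's "wait forever" behavior.
    """
    raw = os.environ.get(env_var)
    if raw is None:
        return fallback
    try:
        return float(raw)
    except ValueError:
        return fallback

# An operator could export REQUESTS_TIMEOUT=30 to opt in fleet-wide.
os.environ["REQUESTS_TIMEOUT"] = "30"
print(default_timeout())  # 30.0
```

Because the variable defaults to "unset", nothing changes for existing deployments, which is the non-breaking property this comment is arguing for.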

In an effort to clean up the issue tracker to only have issues that are still relevant to the project we've done a quick pass and decided this issue may no longer be relevant for a variety of potential reasons, including:

  • Applies to a much older version, unclear whether the issue still applies.
  • Change requires a backwards incompatible release and it's unclear if the benefits are worth the migration effort from the community.
  • There isn't a clear demand from the community on the change landing in Requests.

If you think the issue should remain open, please comment so below or open a new issue and link back to the original issue. Again, thank you for opening the issue and for the discussion, it's much appreciated.

I still feel strongly that requests needs a default timeout. I don't think that has changed or been fixed, but I recognize that nobody wants to pick it up and run with it.

It could cause some minor backwards incompatibility, but fixing this would take requests from freezing your program when used incorrectly to crashing it when used incorrectly, which I consider a fix, not breaking compatibility.

I can't get too revved up about this because I'm not the one fixing it, but I do find the absence of a default timeout to be a source of bugs about 20% of the time I see requests in the wild.

I find made up percentages pulled from no real data to be a source of wasted time 90% of the time

Well, I think you know what I mean: I see people forgetting to use the timeout parameter pretty regularly, and when they do it's a bug. No, I don't have metrics on this, but would you agree it's an issue in the wild?

I have seen the lack of a default timeout in requests cause multiple production outages, including due to third-party libraries from major Cloud Providers not using it correctly.

This caused more incidents than there are use cases for not having a timeout (and in this case it can be disabled explicitly).

As noted above (almost 7 years ago), this completely disregards the practical needs of users (especially for a library that claims to abstract complexity):

I think that the current situation kind-of contradicts the stated project goal "HTTP Requests for Humans".

Humans need sensible defaults. [..] And "no timeout" is almost never a sensible default.

maybe another solution would be to warn if there's no timeout, so people can be harassed into providing one, but the default is still the backward compatible "randomly hang forever if one is not provided"

Earlier in the thread we discussed adding a warning (it's welcome), and after seven years, I don't think anybody in this issue has identified why they would want no timeout at all. The closest we've come is a concern about:

the impact it would have on [...] existing infrastructure.

That's it, I think. I don't think this needs to be a controversial fix.

My personal suggestion would be to implement a fix and add a release note (example below). To me, this is broken functionality, and a default timeout fixes it for users. Maybe I'm missing something, but I don't think I am.

If we want to be more conservative, we can take additional steps which add more work, but give people more time and opportunity to adapt:

  • Add a deprecation warning for some period of time first.
  • Phase in the timeout starting at 3600s, then 1800s, then 900s, then 450s...until we get to 60s.

But I think all of that is unnecessary and that we can just take a middle path:

  1. In the next version: Add a warning if no timeout is provided.

    The one mentioned above has a blessing of a maintainer:

    DeprecationWarning: No timeout provided for request. Starting in version XYZ, the requests library will provide a default timeout for such requests. For more information, see: https://github.com/psf/requests/issues/3070

  2. Release the next version with that warning.

  3. Add a long default timeout of 60s to match Ruby and Scala.

  4. Add a release note that says:

    Fix: Default timeouts of 60 seconds have been provided to all requests. Programs that previously hung indefinitely or connected after extremely long periods of time will now crash with requests.exceptions.Timeout instead. To upgrade these programs, provide a longer timeout via the timeout parameter to the get, post, head, and options methods.

  5. Release another version.

I really think that would help the vast majority of users and that whatever breakage it caused would be minimal. (Happy to hear contrarian views though!)

Bandit already warns on lack of a timeout. Anyone with any real care about security of their code should be using it as a static analysis tool.

As for a warning Requests emits, people already silence all warnings from the library so they likely wouldn't see a warning about not specifying a timeout

That's good news, right? That means that the most serious users aren't a concern, and we only have to worry about folks that are less serious, where any breaking impact -- if any -- would be less of a big deal.

I agree most people don't see warnings, so maybe there's not much point in doing them (I know I don't). If that's the case, I still think the upside of just adding a default timeout in the next release is an OK way to go.

I appreciate you engaging in this discussion again. If a default timeout is added in the next release, who do you think would be worse off? After all these years and so much engagement, I still can't think of anybody โ€” is this unnecessary conservatism?

There was a PR to add warnings (unfortunately it was rejected for various reasons): #5434 (comment)
An env var to suppress the warning seems like a reasonable solution to address the log pollution concern.

The rationale for rejecting the PR actually underscores the need for the warning and a default timeout, as it acknowledges how widespread the issue is (if those projects were setting proper timeouts, there wouldn't be any warnings):

It produces a considerable amount of noise in people's systems and we've generally stopped using warnings due to how often Requests is bundled in other projects

This is precisely what needs to be addressed. Tracking this type of issue down in third-party libraries is difficult: it won't emit any logs, reproducibility is limited, and it often ends up requiring attaching gdb for a backtrace (which is arguably not what a "simple" HTTP library should expect from the end user).

I don't want to pile on, and I know I'm chiming in a lot, but my passion for this issue stems from when I wrote celery tasks that used requests without a timeout. Over time, my celery workers would all slowly get mysteriously tied up, and I'd have to restart Celery. It took me forever to figure out why because I thought it was a celery bug. Nope! I'd have 1000× preferred if my code crashed or timed out.

I very much believe that a default timeout is needed. "Infinite" is a very bad default, and I cannot believe any system out there depends on the default being "infinite". That said, I believe the reason for not having a default timeout is that it's difficult to agree on what the timeout should be. Different systems require different timeouts, and the lower the timeout, the bigger the chance of breaking existing systems that don't specify one. However, I believe even 24 hours would be better than "infinite"; even systems based on RFC 2549 should survive a 24-hour timeout. The idea of starting with a very high default value and gradually lowering it may be worth looking into.

The problem that it's difficult to agree on a default doesn't go away by tossing it at the programmer using the Requests library (even if we throw warnings at them). Quite often there are chains like Requests being used by library X, library X used by library Y, library Y used by software Z, and only the user of software Z knows what the appropriate timeout should be. I made a suggestion further up to let Requests honor an environment variable for the default timeout (just as environment variables for proxies, etc. are honored). Then, if the user of software Z is on an RFC 2549 network, they can simply set the environment variable to force insanely high timeout values.

(RFC 2549, packet transmission by pigeons, was an April Fools' joke, implemented once as a proof of concept.)