Oathkeeper bombards Ory Network with requests after upgrade to 40.x
Withel opened this issue · 15 comments
Preflight checklist
- I could not find a solution in the existing issues, docs, nor discussions.
- I agree to follow this project's Code of Conduct.
- I have read and am following this repository's Contribution Guidelines.
- I have joined the Ory Community Slack.
- I am signed up to the Ory Security Patch Newsletter.
Ory Network Project
https://vibrant-dubinsky-6d1qtx5k0i.projects.oryapis.com
Describe the bug
After upgrading from 39.4 to 40.7, @tricky42 notified us about a huge number of requests made by Oathkeeper to Ory Network's /introspect endpoint, as can be seen on the first screenshot (the upgrade was performed on 23.04 at ~15:58). From our perspective we don't see any issues in the logs or metrics. On the second screenshot you can see Oathkeeper traffic (the spike starts at the moment of the upgrade). We didn't change any configuration between the versions (except the name of the log level key), and we suspect that the cache might not be working properly.
Reproducing the bug
- Upgrade oathkeeper to 40.7 from 39.x
Relevant log output
No response
Relevant configuration
```yaml
authenticators:
  oauth2_introspection:
    enabled: true
    config:
      cache:
        enabled: true
        max_cost: 100
        ttl: 1h
      introspection_request_headers:
        Authorization: [REDACTED]
      introspection_url: https://[REDACTED]/admin/oauth2/introspect
      retry:
        give_up_after: 3s
        max_delay: 500ms
      token_from:
        header: Authorization
```
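For context on the values discussed below: ristretto treats `max_cost` as a total cost budget for the whole cache, so `100` is extremely small. A hedged example of a more generous cache section (the value is illustrative only, not an official recommendation; the thread below notes the library default is 100000000):

```yaml
cache:
  enabled: true
  # max_cost is a total cost budget for the cache, not an entry count.
  # 100 fits only a couple of entries under cost accounting that also
  # charges for keys. Illustrative value -- tune for your workload.
  max_cost: 10000000
  ttl: 1h
```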
Version
40.7
On which operating system are you observing this issue?
None
In which environment are you deploying?
Kubernetes with Helm
Additional Context
No response
Thank you for the report! Can you pin-point which version introduced this regression? It would make the search for the regression much easier!
I'm not sure if I understand you correctly. As I wrote in the description, we upgraded from 39.4 to 40.7. Or do you mean something else?
It's worth mentioning that this is our second attempt at the upgrade to 0.40.x. The first time we tried 0.40.6 and saw the same effect.
Since there are a couple of versions between 39.4 and 40.6, I wanted to know if you can pinpoint which version exactly introduced the issue, making it easier to find the root cause.
Unfortunately not, we've only tried these two versions :(
@aeneasr we've managed to pinpoint the exact version that introduces this issue: it happens between v0.39.4 and v0.40.0. Hope this helps, please let me know if you need anything else.
I think your max_cost is just way too low. I see that we also use a misleading configuration in ristretto. The default max_cost is set to 100000000, which I think is way too high. Try playing around with that value to see if it has an impact.
Basically, before the fix we were using ristretto's internal cost, which is, I think, the key length plus the cost function. Since your max_cost is only 100, the cache probably ran out of space after one or two keys, so it's constantly evicting your values.
The fixes ignore the internal cost, so you actually get 1 token = 1 cost instead of 1 token = 1 cost + cost of key.
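The arithmetic above can be sketched as a toy model (this is not Oathkeeper's actual code; the key length stands in for ristretto's internal cost, as assumed from the comment above):

```go
package main

import "fmt"

// entriesThatFit is a toy calculation: how many cache entries fit into
// a max_cost budget when each entry charges perEntryCost.
func entriesThatFit(maxCost, perEntryCost int) int {
	return maxCost / perEntryCost
}

func main() {
	const maxCost = 100 // the reporter's configured max_cost

	// Old behavior (assumed): 1 cost per token plus an internal cost,
	// modeled here as the length of the cache key.
	key := "a-typical-introspection-cache-key-derived-from-a-token"
	oldPerEntry := 1 + len(key)

	// Fixed behavior: internal cost is ignored, so 1 token = 1 cost.
	newPerEntry := 1

	fmt.Printf("old model: %d entries fit\n", entriesThatFit(maxCost, oldPerEntry))
	fmt.Printf("new model: %d entries fit\n", entriesThatFit(maxCost, newPerEntry))
}
```

Under the old accounting, a max_cost of 100 is exhausted after one or two tokens, which matches the constant-eviction behavior described above.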
Soooo, with the help of @Demonsthere, we checked the following values of max_cost:
~15:05 CET: 100M (default) -> 300-400 req/m with initial spike
~15:35 CET: 50M -> 300-400 req/m
~15:55 CET: 5M -> for a second dropped to ~270 req/m, then increased to ~300-400 req/m
I hope this helps. For the time being we will go back to 39.4 and wait for further updates. Please let us know if you need anything else.
So what you're saying is that it doesn't have an effect?
So, I must admit that I made a mistake, which I realised just after posting the last comment. Unfortunately, I didn't keep an eye on the pods after the deployment, and it turned out that they were not restarting after changing max_cost. From what I know, Oathkeeper does not have configuration hot reload, so I will have to redo those experiments. I'm not sure when I'll be able to, since right now I'm out of the office, but I'll do it as soon as possible. Apologies for that.
Hot reloading only works for things that can be changed during runtime. Caches unfortunately are large memory objects that are allocated at process start and can not be changed at runtime.
So, I have redone the tests (this time making sure that after each update the pods restarted correctly), and here are the results. The values are the same as before:
Before - 39.4 with max_cost set to 100
9:30 CET - 40.7 with max_cost set to 100
~9:55 CET - 40.7 with max_cost removed (defaulting to 100M)
~10:20 CET - 40.7 with max_cost set to 50M
~10:50 CET - 40.7 with max_cost set to 5M
Results (courtesy of @tricky42):
Also, we suspected that the Oathkeeper containers might be running out of memory, but we confirmed that this is not the case:
OK so increasing the cache size fixes the problem?