How to configure an EnvoyFilter to support ratelimit in Istio 1.5.0?
sd797994 opened this issue · 81 comments
Because mixer policy was deprecated in Istio 1.5, the official recommendation is to use Envoy rate limiting instead of mixer rate limiting. But there is no documentation to guide us on how to configure an EnvoyFilter to support rate limiting. The native Envoy ratelimit config looks like this:
But how do we configure an Istio EnvoyFilter to make it work?
@catman002 There is an Envoy ratelimit example that I hope can help you: https://github.com/jbarratt/envoy_ratelimit_example. It covers simple strategies only, so it may not be enough if your mixer policies are complicated...
I managed to run this example, but it requires injecting configuration when the sidecar (docker image: envoyproxy/envoy-alpine:latest) starts (copying config.yaml into the right path, e.g. '/data/ratelimit/config/'). This is very different from Istio's Envoy. I compared the Istio Envoy container with this Envoy container and couldn't find any way to inject configuration into Istio's Envoy. So.....
@gargnupur Is there work going on to provide an example set up using envoy rate limit filter?
@bianpengyuan @gargnupur After much trial and error, here is a working template for rate-limiting for the default Istio Ingress gateway
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      istio: ingressgateway
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          config:
            # domain can be anything! Match it to the ratelimiter service config
            domain: test
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_service
                timeout: 0.25s
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.default.svc.cluster.local
      patch:
        operation: ADD
        value:
          name: rate_limit_service
          type: STRICT_DNS
          connect_timeout: 0.25s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: ratelimit.default.svc.cluster.local
                port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*:80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions: # any actions in here
                # Multiple actions nest the descriptors
                # https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/rate_limit_filter#config-http-filters-rate-limit-composing-actions
                # - generic_key:
                #     descriptor_value: "test"
                - request_headers:
                    header_name: "Authorization"
                    descriptor_key: "auth"
                # - remote_address: {}
                # - destination_cluster: {}
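For reference, the CLUSTER patch above points at ratelimit.default.svc.cluster.local:8081, so it assumes a Service like the following exists. This is a hypothetical sketch (the name, namespace, and labels are assumptions, not from the original post); the port-naming tip comes from a correction later in this thread:

```yaml
# Hypothetical Service fronting an envoyproxy/ratelimit deployment.
# name/namespace must resolve to ratelimit.default.svc.cluster.local,
# matching the address in the CLUSTER patch above.
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: default
spec:
  selector:
    app: ratelimit        # assumed pod label
  ports:
    - name: grpc-8081     # naming the port grpc-* matters if the pod gets an Istio sidecar
      port: 8081
      targetPort: 8081
```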
Do you have any plans to reduce the complexity of the rate limit configuration? This seems like a core service mesh feature, and it is already implemented in other meshes.
@jsenon : would like to know the pain points you are facing as that would help us know what we need to improve and we can take care of it in the next release of Istio...
@devstein : Great it worked for you and thanks for the example!! Can you share any problems that you faced or improvements that you would like to see...
@bianpengyuan, @sd797994 , @catman002 , @jsenon : I will be working on steps for using envoy ratelimiting for Istio services and will share here soon.
Hi @gargnupur, thanks for your reply. Gloo and Ambassador have implemented a simple configuration approach. Why not add the rate limiting feature to the VirtualService, or have a single rate-limiter CRD that translates simple user configuration into Envoy proxy config:
rate-limit:
  ratelimit_server_ref:
    name: # Rate limiter URL
    namespace: # Rate limiter namespace
  request_timeout: # Timeout of limiter
  deny_on_fail: # Do we accept if no answer from limiter server
  rate_limit:
    maxAmount: # Number of requests
    ValidDuration: # Bucket duration
Hi @gargnupur thanks for tackling this! The two biggest challenges I faced were:
1. Understanding Envoy's concept of a cluster and how it relates to the ratelimit service I deployed. I had hoped I could reference the service directly using the usual `ratelimit.default.svc.cluster.local` K8s syntax. It's still unclear to me whether this is intended or a bug.
2. Debugging. To properly debug this filter, I had to look at online examples of what the raw Envoy configuration for the rate limiting filter should look like, then use `istioctl proxy-config` to check whether the EnvoyFilters I applied modified the config accordingly. I also ran into an issue where the rate limit actions I applied didn't have a match in the rate limit service's config, but I couldn't find any logs for this.
Let me know if I can help in any other way!
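That verification flow can be sketched with istioctl; these are illustrative commands (the pod name is a placeholder, and they assume a standard istio-system install):

```shell
# Confirm envoy.rate_limit was inserted before envoy.router on the gateway listener
istioctl proxy-config listener <ingressgateway-pod> -n istio-system -o json | grep -A3 rate_limit

# Confirm the rate_limit_service cluster was added
istioctl proxy-config cluster <ingressgateway-pod> -n istio-system | grep rate_limit

# Confirm the rate_limits actions landed on the virtual host / route
istioctl proxy-config route <ingressgateway-pod> -n istio-system -o json | grep -B2 -A6 rateLimits
```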
> After much trial and error, here is a working template for rate-limiting for the default Istio Ingress gateway [EnvoyFilter template quoted above]
If you don't mind me asking, how would you pass the Lyft config into these EnvoyFilters? Like:
- key: header_match
  value: quote-path-auth
  rate_limit:
    unit: minute
    requests_per_unit: 2
@devstein From the snippet you kindly provided, I can only see the filters that match a certain header. But where did you put the corresponding configuration regarding how many requests per unit time are allowed? Thanks!
@songford An example rate limit config for the snippet I provided would be:
domain: test
descriptors:
  # match the descriptor_key from the EnvoyFilter
  - key: auth
    # Do not include a value unless you know what auth value you want to rate limit (i.e. a specific API key)
    rate_limit: # describe the rate limit
      unit: minute
      requests_per_unit: 60
This config is loaded by the ratelimit service you referenced in the envoy.rate_limit filter.
If you wanted to filter by `remote_address: {}`, then you could use the following config:
domain: test
descriptors:
  # Naively rate-limit by IP
  - key: remote_address
    rate_limit:
      unit: minute
      requests_per_unit: 60
I hope this helps!
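A quick way to see either config in action: Envoy's rate limit filter answers with HTTP 429 once the budget in the ratelimit service config is exhausted. This is an illustrative smoke test (the gateway address and header value are placeholders):

```shell
# Send more requests than the 60/minute budget and watch for 429s
for i in $(seq 1 70); do
  curl -s -o /dev/null -w '%{http_code}\n' \
    -H 'Authorization: test-key' http://<gateway-ip>/
done
```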
@devstein Thanks a lot! It really helps!
My configuration (modification) based on the config from @devstein:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      istio: ingressgateway
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          config:
            # domain can be anything! Match it to the ratelimiter service config
            domain: {your_domain_name}
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster
                timeout: 10s
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.default.svc.cluster.local
      patch:
        operation: ADD
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: ratelimit.default.svc.cluster.local
                port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_ROUTE
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: {your_domain_name}
            route:
              name: http-echo-service
              action: ANY
      patch:
        operation: MERGE
        value:
          route:
            rate_limits:
              - actions: # any actions in here
                  # Multiple actions nest the descriptors
                  # - generic_key:
                  #     descriptor_value: "test"
                  - {request_headers: {header_name: "user-agent", descriptor_key: "auth_key"}}
                  # - remote_address: {}
                  # - destination_cluster: {}
Several points that I wish to bring up in the hope of helping folks who run into this post with similar requirements:
1. In my case, the `domain` has to match the domain you would like to enforce the rate limit on. For instance, if a rate limiter needs to apply to https://maps.google.com/v2, the `domain` configuration has to match this domain name.
2. For some reason, in the `filter-ratelimit-svc` configuration, I had to set `applyTo` to `HTTP_ROUTE` instead of `VIRTUAL_HOST`, otherwise the `request_headers` section would be injected two layers too shallow. Per the official documentation, this section should sit under `virtual_hosts.routes.route`, which is the case when applying to `HTTP_ROUTE`. If I use `VIRTUAL_HOST`, the section is inserted directly under `virtual_hosts`. I haven't verified myself whether it makes a difference.
3. YOU SHOULD NOT LET ISTIO INJECT AN ISTIO SIDECAR NEXT TO YOUR RATE LIMITER SERVICE!!!
Correction: You can let Istio inject a sidecar into the rate limit service pod, but you must remember to name the port it uses to receive gRPC calls (normally 8081) accordingly in its corresponding Service (grpc-8081).
4. It may be useful to set `failure_mode_deny` to true if you run into trouble. You'll know when the rate limit service stops cooperating, as all requests you send to Envoy will return a 500 error. If you read the access log you'll see `RLSE`, which stands for Rate Limit Service Error (I guess?), and you'll know the rate limiter service has been wired into the loop.
5. Use `istioctl dashboard envoy {{the-envoy-pod-your-ratelimiter-applies-to}}` to dump the configuration that's actually being written into Envoy and carefully review it.
Any plans to support this natively in istio?
@songford @bianpengyuan @gargnupur @devstein Can somebody look at the configuration below and help us? It does not create a routes entry (RDS) in the Envoy config_dump, but the cluster entry (CDS) is there.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      istio: ingressgateway
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          config:
            # domain can be anything! Match it to the ratelimiter service config
            domain: abcdefghi.xxx.com
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: "outbound|81||vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com"
                timeout: 10s
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*:80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions: # any actions in here
                # Multiple actions nest the descriptors
                # - generic_key:
                #     descriptor_value: "test"
                - {request_headers: {header_name: "method", descriptor_key: "GET"}}
                - {request_headers: {header_name: "path", descriptor_key: "/api/v2/tickets"}}
                - {request_headers: {header_name: "host", descriptor_key: "abcdefghi.xxx.com"}}
                - {request_headers: {header_name: "x-request-id", descriptor_key: "ac5b684b-4bc6-4474-a943-0de4f1faf8df"}}
                - {request_headers: {header_name: "domain", descriptor_key: "xxxxx"}}
                # - remote_address: {}
                # - destination_cluster: {}
Our Service Entry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: endpoint-new
  namespace: default
spec:
  hosts:
    - vpce-xxx-xx.vpce-svc-xx.us-east-1.vpce.amazonaws.com
  location: MESH_EXTERNAL
  ports:
    - name: grpc
      number: 81
      protocol: GRPC
  resolution: DNS
  endpoints:
    - address: vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com
@VinothChinnadurai: can you share your config_dump? The config looks OK..
Your ratelimit service is hosted on vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com?
For reference: I followed examples above and this has been working for me: https://github.com/istio/istio/compare/master...gargnupur:nup_try_ratelimit_envoy?expand=1#diff-87007efb70dda4500545ba652cb0b30e
What does your rate limit service config look like? Have you tried simplifying your rate limit actions as a sanity check? (i.e. only use `- remote_address: {}`)
Also, did you try explicitly creating a CLUSTER definition for the service? I found this to be simpler and less error-prone than referencing the default generated cluster name.
If you can, post your istioctl proxy-config route $POD_NAME
output here.
configPatches:
  # The Envoy config you want to modify
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.rate_limit
        config:
          # domain can be anything! Match it to the ratelimiter service config
          domain: test
          rate_limit_service:
            grpc_service:
              envoy_grpc:
                cluster_name: rate_limit_service
              timeout: 0.25s
  - applyTo: CLUSTER
    match:
      cluster:
        service: ratelimit.default.svc.cluster.local
    patch:
      operation: ADD
      value:
        name: rate_limit_service
        type: STRICT_DNS
        connect_timeout: 0.25s
        lb_policy: ROUND_ROBIN
        http2_protocol_options: {}
        hosts:
          - socket_address:
              address: endpoint-new.default.svc.cluster.local
              port_value: 81
@gargnupur @devstein First of all, thanks a lot for your responses.
@gargnupur
Your ratelimit service is hosted on vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com?
Yes it is our ratelimit service endpoint.
Config_dump
https://gist.github.com/VinothChinnadurai/66561838310c63b6a7657b0cde6fc194
@devstein
What does your rate limit service config look like? Have you tried simplifying your rate limit actions as a sanity check? (i.e only use - remote_address: {})
I don't quite get this point; please explain briefly. Our ratelimit is a gRPC service, and we tested reachability from a worker node.
We verified reachability with the call below (and it is reachable):
curl vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com:81/v1/accounts/sample
This is the host on which we are trying to apply ratelimit: abcdefghi.xxx.com
istioctl proxy-config route istio-ingressgateway-598796f4d9-h4vh6 -n istio-system
NOTE: This output only contains routes loaded via RDS.
NAME VIRTUAL HOSTS
http.80 1
1
istioctl proxy-config route istio-ingressgateway-598796f4d9-h4vh6 -n istio-system --name http.80 -o json
[
{
"name": "http.80",
"virtualHosts": [
{
"name": "*:80",
"domains": [
"*",
"*:80"
],
"routes": [
{
"match": {
"prefix": "/api/v2/activities",
"caseSensitive": true
},
"route": {
"cluster": "outbound|80||twilight-service.twilight-istio.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking.istio.io/v1alpha3/namespaces/twilight-istio/virtual-service/twilight-vs"
}
}
},
"decorator": {
"operation": "twilight-service.twilight-istio.svc.cluster.local:80/api/v2/activities*"
}
},
{
"match": {
"prefix": "/api/_/email_bots",
"caseSensitive": true
},
"route": {
"cluster": "outbound|80||emailbot-service.emailbot.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking.istio.io/v1alpha3/namespaces/emailbot/virtual-service/emailbot-vs"
}
}
},
"decorator": {
"operation": "emailbot-service.emailbot.svc.cluster.local:80/api/_/email_bots*"
}
}
],
"rateLimits": [
{
"actions": [
{
"requestHeaders": {
"headerName": "method",
"descriptorKey": "GET"
}
},
{
"requestHeaders": {
"headerName": "path",
"descriptorKey": "/api/v2/tickets"
}
},
{
"requestHeaders": {
"headerName": "host",
"descriptorKey": "abcdefghi.xxx.com"
}
},
{
"requestHeaders": {
"headerName": "x-request-id",
"descriptorKey": "ac5b684b-4bc6-4474-a943-0de4f1faf8df"
}
},
{
"requestHeaders": {
"headerName": "domain",
"descriptorKey": "xxx"
}
}
]
}
]
}
],
"validateClusters": false
}
]
Also, did you try explicitly to create a CLUSTER definition for the service? I found this to be simpler/less error-prone than referencing the default generated cluster name.
We tried that but hit the issue below:
The EnvoyFilter "filter-ratelimit" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"networking.istio.io/v1alpha3", "kind":"EnvoyFilter", "metadata":map[string]interface {}{"annotations":map[string]interface {}{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"networking.istio.io/v1alpha3","kind":"EnvoyFilter","metadata":{"annotations":{},"name":"filter-ratelimit","namespace":"istio-system"},"spec":{"configPatches":[{"applyTo":"HTTP_FILTER","match":{"context":"GATEWAY","listener":{"filterChain":{"filter":{"name":"envoy.http_connection_manager","subFilter":{"name":"envoy.router"}}}}},"patch":{"operation":"INSERT_BEFORE","value":{"config":{"domain":"abcdefghi.freshpo.com","failure_mode_deny":true,"rate_limit_service":{"grpc_service":{"envoy_grpc":{"cluster_name":"rate_limit_cluster"},"timeout":"10s"}}},"name":"envoy.rate_limit"}}},{"applyTo":"CLUSTER","match":{"context":"GATEWAY"},"patch":{"operation":"ADD","value":{"connect_timeout":"10s","hosts":[{"socket_address":{"address":"vpce-0b247209ae0145d88-4fa54j71.vpce-svc-0874c1e9512bd57dc.us-east-1.vpce.amazonaws.com","port_value":81}}],"http2_protocol_options":{},"lb_policy":"ROUND_ROBIN","name":"rate_limit_cluster","type":"STRICT_DNS"}}}],"workloadSelector":{"labels":{"istio":"ingressgateway"}}}}\n"}, "creationTimestamp":"2020-04-09T07:09:16Z", "generation":1, "name":"filter-ratelimit", "namespace":"istio-system", "uid":"065b9b71-7a31-11ea-bcfc-0e6d31531fe3"}, "spec":map[string]interface {}{"configPatches":[]interface {}{map[string]interface {}{"applyTo":"HTTP_FILTER", "match":map[string]interface {}{"context":"GATEWAY", "listener":map[string]interface {}{"filterChain":map[string]interface {}{"filter":map[string]interface {}{"name":"envoy.http_connection_manager", "subFilter":map[string]interface {}{"name":"envoy.router"}}}}}, "patch":map[string]interface {}{"operation":"INSERT_BEFORE", "value":map[string]interface {}{"config":map[string]interface 
{}{"domain":"abcdefghi.freshpo.com", "failure_mode_deny":true, "rate_limit_service":map[string]interface {}{"grpc_service":map[string]interface {}{"envoy_grpc":map[string]interface {}{"cluster_name":"rate_limit_cluster"}, "timeout":"10s"}}}, "name":"envoy.rate_limit"}}}, map[string]interface {}{"applyTo":"CLUSTER", "match":map[string]interface {}{"context":"GATEWAY"}, "patch":map[string]interface {}{"operation":"ADD", "value":map[string]interface {}{"connect_timeout":"10s", "hosts":[]interface {}{map[string]interface {}{"socket_address":map[string]interface {}{"address":"vpce-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.amazonaws.com", "port_value":81}}}, "http2_protocol_options":map[string]interface {}{}, "lb_policy":"ROUND_ROBIN", "name":"rate_limit_cluster", "type":"STRICT_DNS"}}}}, "workloadSelector":map[string]interface {}{"labels":map[string]interface {}{"istio":"ingressgateway"}}}}: validation failure list:
"spec.configPatches.match" must validate one and only one schema (oneOf). Found none valid
spec.configPatches.match.listener in body is required
Kindly unblock us by suggesting what the issue is here.
What does your rate limit service config look like?
I was referring to the Envoy/Lyft ratelimit service, but I see you are using a custom gRPC service.
Have you tried simplifying your rate limit actions as a sanity check? (i.e only use - remote_address: {})
I'm referring to simplifying the rate limit actions. See below
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*:80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions:
                - remote_address: {}
We tried that but found the below issue
What version of Istio are you using?
@VinothChinnadurai Your route definition looks correct. Unfortunately, I'm not sure as to what your issue is. As a next step, I suggest enabling debug level logging on your ingress gateway pod to see what is going on.
kubectl -n istio-system exec svc/istio-ingressgateway -- curl -X POST "localhost:15000/logging?filter=debug" -s
kubectl -n istio-system logs svc/istio-ingressgateway -f
# make requests via another terminal
@devstein @gargnupur
Sorry, we are also using the envoyproxy ratelimit spec: https://github.com/envoyproxy/envoy/blob/master/api/envoy/service/ratelimit/v2/rls.proto
We are using Istio 1.5.0.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      istio: ingressgateway
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          config:
            # domain can be anything! Match it to the ratelimiter service config
            domain: abcdefghi.freshpo.com
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster
                timeout: 10s
    - applyTo: CLUSTER
      match:
        cluster:
          service: vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com
      patch:
        operation: ADD
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: vpce-xxx-xxx.vpce-svc-xxx.us-east-1.vpce.amazonaws.com
                port_value: 81
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*:80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions: # any actions in here
                # Multiple actions nest the descriptors
                # - generic_key:
                #     descriptor_value: "test"
                - {request_headers: {header_name: "method", descriptor_key: "GET"}}
                - {request_headers: {header_name: "path", descriptor_key: "/api/v2/tickets"}}
                - {request_headers: {header_name: "host", descriptor_key: "abcdefghi.xxx.com"}}
                - {request_headers: {header_name: "x-request-id", descriptor_key: "ac5b684b-4bc6-4474-a943-0de4f1faf8df"}}
                - {request_headers: {header_name: "domain", descriptor_key: "xxx"}}
                # - remote_address: {}
                # - destination_cluster: {}
The above applied without any issue.
We tried a sanity check using only remote_address: {} as you suggested, and the call reaches our ratelimit service :)
Please find the debug log for this:
https://gist.github.com/Adharsh-Muraleedharan/6d844d1d1cb8d5db0bd35e80749ba6b9
But if we try with the necessary headers (removing remote_address: {}, as in the manifests above), the call does not reach our ratelimit service and we can't find the request entry in the debug logs below:
https://gist.github.com/Adharsh-Muraleedharan/55680ab723763f86664aafbf4e0839cc
Does that mean the issue is with the headers?
Kindly suggest what the issue is here.
Do all the headers have values in the actual request? Please note that if the request does not have a value for any of those headers, Envoy skips calling the rate limit service. See this Envoy issue: envoyproxy/envoy#10124
Sure @ramaraochavali. Let's check with our ratelimit service team, try only with supported headers, and come back.
@ramaraochavali @devstein @gargnupur Thanks a lot, guys, for all your responses. It is working now when we pass all the headers in the request that match request_headers (under rate_limits.actions).
I have two questions here.
patch:
  operation: MERGE
  value:
    rate_limits:
      - actions: # any actions in here
          # Multiple actions nest the descriptors
          # - generic_key:
          #     descriptor_value: "test"
          - {request_headers: {header_name: ":authority", descriptor_key: "host"}}
          - {request_headers: {header_name: ":path", descriptor_key: "PATH"}}
1. So header_name should match what we send in the request to the ingress gateway Envoy, and descriptor_key is the key we set on the descriptor for all outbound requests, with the header's value as the descriptor_value?
Say, in the above case, if the request is sent to the Istio Gateway as
curl https://ig.com --header "host:abcd.xxx.com" --header "path: /api/v2/tickets"
it will become {"host":"abcd.xxx.com","PATH":"/api/v2/tickets"} in the request the Istio Gateway sends to our ratelimit service?
2. Can I skip the rate limit call based on some header value present in my incoming request to the Istio Gateway? (We need a mechanism to skip it for certain types of requests.)
Kindly suggest.
Thanks once again!!!
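If that mapping is right, the service-side config would need nested descriptors, since both request_headers actions sit in one rate_limits entry ("Multiple actions nest the descriptors"). A sketch (the domain and limit values here are assumptions):

```yaml
domain: abcdefghi.freshpo.com   # must match the filter's domain
descriptors:
  # outer key comes from the first action's descriptor_key
  - key: host
    descriptors:
      # nested because the second action is in the same actions list
      - key: PATH
        rate_limit:
          unit: minute
          requests_per_unit: 100
```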
@VinothChinnadurai : yes for the first question.
For the second one, take a look at https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/route/route_components.proto#envoy-api-msg-route-ratelimit-action-headervaluematch; you can use this for rate limiting based on the presence of a header value, I think...
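For the skip-certain-requests case, a sketch of how that header_value_match action could look inside the rate_limits block (the header name and descriptor value here are made up, and this is untested):

```yaml
rate_limits:
  - actions:
      # Only emit a descriptor (and thus call the ratelimit service)
      # when the hypothetical x-skip-ratelimit header is absent.
      - header_value_match:
          descriptor_value: "not_exempt"
          expect_match: false
          headers:
            - name: x-skip-ratelimit
              present_match: true
      - remote_address: {}
```

If any action in the list cannot produce a descriptor entry, Envoy generates no descriptor for that entry, so requests carrying the header would bypass the rate limit call.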
Thanks a lot @gargnupur . I will try the same and come back
Hi,
Using Istio 1.5.2 here.
I am using rate limiting to limit n requests from the same IP, and it is working!!
Do you happen to know what I can do if I don't want to apply this rate limit to a specific User-Agent?
This is what I have now (snippet of the EnvoyFilter in Istio):
value:
  rate_limits:
    - actions: # any actions in here
        - request_headers:
            header_name: "x-custom-user-ip"
            descriptor_key: "remote_address"
============================================
Global rate limit config.yaml:
domain: test
descriptors:
  - key: remote_address
    rate_limit:
      unit: minute
      requests_per_unit: 5
Thanks in advance!
Thanks a lot @gargnupur @ramaraochavali @devstein .Your responses helped a lot and solved the problem :)
@santinoncs I can see it's closely related to our requirement. Please check this link and see whether it helps you:
#22068 (comment)
Here I want to apply this rate limit to a specific service inside the vhost, not all the routes inside the vhost yyy.com:80.
Is it possible?
- applyTo: HTTP_ROUTE
  match:
    context: GATEWAY
    routeConfiguration:
      vhost:
        name: yyy.com:80
        route:
          action: ANY
  patch:
    operation: MERGE
    value:
      route:
        rate_limits:
          - actions:
              - request_headers:
                  descriptor_key: remote_address
                  header_name: x-custom-user-ip
For instance..
- applyTo: HTTP_ROUTE
  match:
    context: GATEWAY
    routeConfiguration:
      vhost:
        name: yyy.com:80
        route:
          action: ANY
          name: HERE WE CAN SET A ROUTE NAME OR A CLUSTER NAME
  patch:
    operation: MERGE
    value:
      route:
        rate_limits:
          - actions:
              - request_headers:
                  descriptor_key: remote_address
                  header_name: x-custom-user-ip
Works on an individual service by attaching to SIDECAR_INBOUND, too.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: test
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      app: MYAPPNAME
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          config:
            # domain can be anything! Match it to the ratelimiter service config
            domain: zoi-auth
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_service
                timeout: 0.25s
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.ratelimit.svc.cluster.local
      patch:
        operation: ADD
        value:
          name: rate_limit_service
          type: STRICT_DNS
          connect_timeout: 0.25s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: ratelimit.ratelimit.svc.cluster.local
                port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: test
spec:
  workloadSelector:
    labels:
      app: MYAPPNAME
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: SIDECAR_INBOUND
        routeConfiguration:
          vhost:
            name: "inbound|http|80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions:
                - request_headers:
                    header_name: "Authorization"
                    descriptor_key: "auth"
Hi,
The above configuration is not working for me. I am using Istio 1.5.2. Can you please clarify: is "ratelimit.default.svc.cluster.local" an actual service running within the cluster?
- Is there any default Istio configuration that has to be set for EnvoyFilters to work?
- How do I verify that EnvoyFilters are getting applied? I don't see any logs in the ingress gateway, nor in the service where the rate limit has to be applied.
Please find the configuration attached.
Any help is much appreciated!
@VinothChinnadurai could you help with how you sorted out the rate limit? We are stuck with Istio, with no response and no useful docs for configuring rate limits using the EnvoyFilter. The mixer-based rate limit is deprecated, so we are stuck with the EnvoyFilter. We are not accessing any external service; we are trying to rate-limit calls hosted within the cluster.
@devstein could you please help with this request?
@ragunathan23 : some answers below:
Hi,
The above configuration is not working for me. I am using ISTIO 1.5.2. Please can you clarify "ratelimit.default.svc.cluster.local" is the actual service that running within the cluster?
This is the ratelimiting service that actually does the ratelimiting. You can take a look at envoy/lyft's open source ratelimiting service for this(https://github.com/envoyproxy/ratelimit)
- Is there any default Istio configuration has to be set for envoy filters to be working
I don't think so.
- How do I verify envoy filters getting applied? I don't see any logs in the ingress gateway as well the service where the rate limit has to be applied.
You can verify from the proxy config. You can get that via istioctl proxy-config, or by getting a config dump via curl, something like this:
kubectl -n <namespace> exec -i -t <pod_name> -c istio-proxy -- curl http://localhost:15000/config_dump > dump.json
Please find the configuration attached.
Any help much appreciated?
I am working on a test for this and you can see that here: #23513
@gargnupur thanks a lot for your response. I am having sleepless nights with getting this up and running. Is it possible for you to look at the configuration details and suggest anything wrong?
Does the service "ratelimit.default.svc.cluster.local" need to be up and running in my cluster?
@ragunathan23 : it looks like you are mixing the config for setting up the rate limit service with setting up ratelimit in the envoy config.
This looks like it should go in the rate limit service config:
descriptors:
  # Naively rate-limit by IP
  - key: remote_address
    rate_limit:
      unit: minute
      requests_per_unit: 20
@gargnupur thanks for your response. Sorry for troubling you: is there a document or steps available for configuring this rate limit service in a Kubernetes cluster? Istio just says to use an envoy filter, but there are no details about the rate limit service, which as I understand is a prerequisite for the envoy filter to work in Istio. We are left in the dark as the mixer-based rate limit has already been deprecated.
@gargnupur Thanks for your response. I was able to solve the issue, and rate limiting is working in the Kubernetes cluster with Istio.
@ragunathan23 Can you please share the configuration that made this work, including the rate limiting and redis services?
@MaheshGPai I put what I found in this repo. Check the rate limiting part! Mainly inspired by the work of @devstein & @ragunathan23
Thanks a lot @aboullaite! Worked like a charm. 👍
Has anyone seen a discrepancy in the rate limiting? I had configured a limit of 1000/s and generated a load of 2150/s for about 5 minutes.
Status code distribution:
[200] 409510 responses
[429] 241032 responses
The allowed traffic is actually around 1355/s. That's about 35% more than the allowed limit.
Another thing I noticed is that the resource usage of the ratelimiting pod is around 5 vCPU, which is very high for such a small load:
POD NAME CPU(cores) MEMORY(bytes)
ratelimit-f559844-dbmx5 ratelimit 4716m 24Mi
redis-6484dcfc8c-jtnnh redis 16m 3Mi
ratelimit's logging level was set to info and no logs were generated during the load, so I am not sure why such high resource usage is required.
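The reported throughput can be sanity-checked with a few lines of arithmetic (assuming the load ran for exactly 5 minutes; the small gap to the quoted ~1355/s is presumably down to the exact measurement window):

```python
# Sanity check of the figures quoted above (assumption: exactly 300 s of load).
allowed = 409_510          # [200] responses
denied = 241_032           # [429] responses
window_s = 5 * 60
configured_limit = 1000    # requests per second

offered_rate = (allowed + denied) / window_s   # total load generated
admitted_rate = allowed / window_s             # traffic that got through
overshoot = admitted_rate / configured_limit - 1

print(f"offered:  {offered_rate:.0f} req/s")
print(f"admitted: {admitted_rate:.0f} req/s ({overshoot:.1%} over the configured limit)")
```

So roughly 1365 requests per second were admitted against a 1000/s limit, consistent with the ~35% overshoot described above.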
What is wrong with my config? I don't see any traffic on the rate limiter, and it does not work.
filter.yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          config:
            domain: test
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_service
                timeout: 0.25s
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.rate-limit.svc.cluster.local
      patch:
        operation: ADD
        value:
          name: rate_limit_service
          type: STRICT_DNS
          connect_timeout: 0.25s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: ratelimit.rate-limit.svc.cluster.local
                port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          # the rate limit service descriptors config relies on the order of the request headers (descriptor_key)
          rate_limits:
            - actions:
                - remote_address: {}
service.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rate-limit
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: rate-limit
  labels:
    app: redis
spec:
  ports:
    - name: redis
      port: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: rate-limit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - image: redis:alpine
          imagePullPolicy: Always
          name: redis
          ports:
            - name: redis
              containerPort: 6379
      restartPolicy: Always
      serviceAccountName: ""
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
  namespace: rate-limit
data:
  # check this example: https://github.com/envoyproxy/ratelimit#example-4
  config.yaml: |
    domain: test
    descriptors:
      - key: remote_address
        rate_limit:
          unit: minute
          requests_per_unit: 2
---
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: rate-limit
  labels:
    app: ratelimit
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
      protocol: TCP
    - name: "8081"
      port: 8081
      targetPort: 8081
      protocol: TCP
    - name: "6070"
      port: 6070
      targetPort: 6070
      protocol: TCP
  selector:
    app: ratelimit
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratelimit
  namespace: rate-limit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratelimit
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ratelimit
    spec:
      containers:
        - image: envoyproxy/ratelimit:v1.4.0
          imagePullPolicy: Always
          name: ratelimit
          command: ["/bin/ratelimit"]
          env:
            - name: LOG_LEVEL
              value: debug
            - name: REDIS_SOCKET_TYPE
              value: tcp
            - name: REDIS_URL
              value: redis:6379
            - name: USE_STATSD
              value: "false"
            - name: RUNTIME_ROOT
              value: /data
            - name: RUNTIME_SUBDIRECTORY
              value: ratelimit
          ports:
            - containerPort: 8080
            - containerPort: 8081
            - containerPort: 6070
          volumeMounts:
            - name: config-volume
              mountPath: /data/ratelimit/config/config.yaml
              subPath: config.yaml
      volumes:
        - name: config-volume
          configMap:
            name: ratelimit-config
---
I have Istio 1.6.3 with mutual TLS enabled.
Thanks
Did anyone manage to get this running on Istio 1.6.5? I have my configuration taken from @aboullaite @devstein @gargnupur, which worked in Istio 1.5.0, but it doesn't work in Istio 1.6.5. My configuration is exactly the same as described here:
https://github.com/aboullaite/service-mesh/tree/master/4-policy/rate-limiting
Managed to make it work on a clean minikube environment. So think it might be some stale config.
Works on an individual service by attaching to SIDECAR_INBOUND, too.
Hi @blankley, I can't get it to work for sidecars with SIDECAR_INBOUND with this configuration. Could you please let me know what your service configuration looks like?
Hi @ragunathan23, can you please help in this regard?
I'm also stuck running the envoy rate limit in Istio 1.7 on k8s.
I'm following this procedure: https://github.com/aboullaite/service-mesh#1-rate-limiting but I don't see rate limiting being applied, and I don't get any data in redis.
3. YOU SHOULD NOT LET ISTIO INJECT AN ISTIO SIDECAR NEXT TO YOUR RATE LIMITER SERVICE!!!
Correction: You can let istio inject a sidecar to the rate limit service pod. But you should remember to name the port it uses to receive gRPC calls (normally 8081) accordingly in its corresponding service. (grpc-8081)
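To illustrate that correction (names assumed to match the manifests in this thread): Istio's automatic protocol selection keys off the port-name prefix, so the Service port carrying the rate limiter's gRPC traffic would be declared roughly like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: rate-limit
spec:
  selector:
    app: ratelimit
  ports:
    - name: grpc-8081   # the "grpc-" prefix lets the injected sidecar treat this port as gRPC
      port: 8081
      targetPort: 8081
      protocol: TCP
```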
@songford I've got a similar issue where rate limiting works as long as the ratelimit service pods don't have envoy sidecars.
However, even ensuring manual protocol selection by naming the port "grpc-8081" doesn't help with sidecar injection enabled, and no requests get routed to the ratelimit pod regardless of whether I use filters on the GATEWAY or SIDECAR_INBOUND listeners. Wondering if anyone has any pointers on what I should look at to debug further?
My istio version is 1.4
@JaveriaK First of all, my configuration works on version 1.6.x, so I haven't got any first-hand experience dealing with version 1.4.
From what I've seen in (https://istio.io/v1.4/docs/ops/configuration/traffic-management/protocol-selection/), the syntax of protocol selection hasn't changed since 1.4, so I'd expect it to work likewise in 1.4. Have you tried setting failure_mode_deny=true to verify that the rate limiter configuration has been picked up by pilot? Also, how did you ensure that no requests get routed to the ratelimit pod in your case? Just curious.
@songford yes, setting failure_mode_deny starts returning RLSE 500s for all requests, which should mean the config is getting picked up by pilot?
I see no activity at all in the ratelimiter sidecar proxy logs, so I assumed nothing was reaching it. I've also dumped all the proxy-configs for listeners and routes and can see they got updated according to the envoyfilter manifests, but apparently some kind of config is wrong.
The only warning I see in the pilot logs is "Duplicate cluster rate_limit_service found while pushing CDS", but I'm not sure if that's the issue.
@JaveriaK Ahhhh OK.
Yes, if you see RLSE in the envoy log, that means the ratelimiter config is valid and recognized by Pilot. With failure_mode_deny=false, the request will be accepted even if envoy fails to reach the rate limit service. If you toggled debug mode in your rate limit service, there will be an entry corresponding to every request you submitted to the workload. So I assume your assumption is correct: there is something wrong with the config.
Perhaps you can check whether there is another rate_limit_service cluster definition hiding somewhere in your envoy proxies? "Duplicate cluster rate_limit_service found while pushing CDS" is very fishy. Or if you don't mind, attach the config of interest here if it's not too long so we can jump in and help.
So applying the filters on the GATEWAY doesn't work for me in any combination (and I do see them get inserted to the proxy config).
Doing it on the application pod SIDECAR_INBOUND does, the filters I'm using are these (work with non-sidecar ratelimit pods and give RLSE otherwise):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-server
  namespace: debug-server
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
      patch:
        operation: INSERT_BEFORE
        value:
          config:
            domain: domain.com
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_service
                timeout: 0.25s
          name: envoy.rate_limit
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.rate-limit.svc.cluster.local
      patch:
        operation: ADD
        value:
          connect_timeout: 0.25s
          hosts:
            - socket_address:
                address: ratelimit.rate-limit.svc.cluster.local
                port_value: 8081
          http2_protocol_options: {}
          lb_policy: ROUND_ROBIN
          name: rate_limit_service
          type: STRICT_DNS
  workloadSelector:
    labels:
      app: server
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc-server
  namespace: debug-server
spec:
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: SIDECAR_INBOUND
        routeConfiguration:
          vhost:
            name: "inbound|http|8000"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions:
                - request_headers:
                    descriptor_key: path
                    header_name: ":path"
  workloadSelector:
    labels:
      app: server
@JaveriaK : Istio 1.4 had ratelimiting support using Mixer. Have you tried if your config works on 1.6/1.7?
@gargnupur I did not test these configs in an istio 1.5+ setup, but I’m inclined to think the GATEWAY filters that other people have had success in using will work there.
For my current setup I wanted to stick to using the envoy ratelimiting service approach instead of mixer for upgrade compatibility.
@JaveriaK : you should check your config dump for ingressgateway to make sure the envoy filter patches are showing up there accurately. It could be a version mismatch between protos in 1.4 and 1.5 which could be causing this...
@gargnupur yes the filter patches do show up on the ingress-gateway config dumps, but they fail to actually work. I'm inclined to believe the same, that it's due to a version mismatch. Anyhow applying filters on the sidecars works out better in my case as this is more scalable for multiple applications you are rate limiting in the same cluster.
Question: does the envoy ratelimiting service support regex or wildcard pattern matching for the value keys? Something like using a wildcard/regex for PATH headers:
domain: example4
descriptors:
  - key: path
    value: "/v0/*-xyz/validate-*"
    rate_limit:
      requests_per_unit: 300
      unit: second
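As far as I know, the open-source ratelimit service does exact string matching on descriptor values, not regex or wildcards (at least in the versions referenced in this thread). One workaround is to omit the value entirely, which makes the service keep a separate counter per unique path value rather than matching a pattern — a sketch:

```yaml
domain: example4
descriptors:
  # no "value" field: each distinct path descriptor value gets its own 300/s counter
  - key: path
    rate_limit:
      requests_per_unit: 300
      unit: second
```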
@talha22081992 facing the same issue in 1.7
@JaveriaK I am facing the exact same issue you described with 1.5.4. Disabling injection on the rate-limiting pod cleared the RLSE errors and it started to work.
I am applying the settings above in version 1.7.2 of Istio
EnvoyFilter
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          typed_config: # Istio 1.7
            '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            domain: sock-shop-ratelimit
            failure_mode_deny: false
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_service
                timeout: 0.25s
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.rate-limit.svc.cluster.local
      patch:
        operation: ADD
        value:
          name: rate_limit_service
          type: STRICT_DNS
          connect_timeout: 0.25s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: ratelimit.rate-limit.svc.cluster.local
                port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "c4.com.br:80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          # the rate limit service descriptors config relies on the order of the request headers (descriptor_key)
          rate_limits:
            - actions:
                - request_headers:
                    header_name: "x-plan"
                    descriptor_key: "plan"
                - request_headers:
                    header_name: "x-account"
                    descriptor_key: "account"
Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: application-gateway-c4
  namespace: c4
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "c4.com.br"
VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sendgrid-virtual-service
spec:
  hosts:
    - "c4.com.br"
  gateways:
    - application-gateway-c4
  http:
    - match:
        - uri:
            exact: /email
      route:
        - destination:
            host: sendgrid
            port:
              number: 8080
istioctl proxy-config route istio-ingressgateway-xxx -n istio-system --name http.80 -o json
[
  {
    "name": "http.80",
    "virtualHosts": [
      {
        "name": "c4.com.br:80",
        "domains": [
          "c4.com.br",
          "c4.com.br:*"
        ],
        "routes": [
          {
            "match": {
              "path": "/email",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|8080||sendgrid.c4.svc.cluster.local",
              "timeout": "0s",
              "retryPolicy": {
                "retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
                "numRetries": 2,
                "retryHostPredicate": [
                  {
                    "name": "envoy.retry_host_predicates.previous_hosts"
                  }
                ],
                "hostSelectionRetryMaxAttempts": "5",
                "retriableStatusCodes": [
                  503
                ]
              },
              "maxGrpcTimeout": "0s"
            },
            "metadata": {
              "filterMetadata": {
                "istio": {
                  "config": "/apis/networking.istio.io/v1alpha3/namespaces/c4/virtual-service/sendgrid-virtual-service"
                }
              }
            },
            "decorator": {
              "operation": "sendgrid.c4.svc.cluster.local:8080/email"
            }
          }
        ],
        "rateLimits": [
          {
            "actions": [
              {
                "requestHeaders": {
                  "headerName": "x-plan",
                  "descriptorKey": "plan"
                }
              },
              {
                "requestHeaders": {
                  "headerName": "x-account",
                  "descriptorKey": "account"
                }
              }
            ]
          }
        ],
        "includeRequestAttemptCount": true
      }
    ],
    "validateClusters": false
  }
]
Debug mode
kubectl -n istio-system exec svc/istio-ingressgateway -- curl -X POST "localhost:15000/logging?filter=debug" -s
kubectl -n istio-system logs svc/istio-ingressgateway -f
curl --location --request POST '${INGRESS}/email' \
--header 'Content-Type: application/json' \
--header 'Host: c4.com.br' \
--header 'x-plan: BASIC' \
--header 'x-account: user' \
--data-raw '{
"email": "xxx@com"
}'
[2020-10-08T14:36:03.382Z] "POST /email HTTP/1.1" 200 - "-" "-" 31 16 2 1 "10.9.3.252" "PostmanRuntime/7.26.5" "bae51603-c7d0-952f-95ad-50c303dfe5a0" "c4.com.br" "10.9.1.213:8080" outbound|8080||sendgrid.c4.svc.cluster.local 10.9.3.53:46188 10.9.3.53:8080 10.9.3.252:40610 - -
[2020-10-08T14:36:04.506Z] "POST /email HTTP/1.1" 200 - "-" "-" 31 16 2 1 "10.9.3.252" "PostmanRuntime/7.26.5" "821d71d5-6341-9835-863c-cfac59cf9821" "c4.com.br" "10.9.1.213:8080" outbound|8080||sendgrid.c4.svc.cluster.local 10.9.3.53:46188 10.9.3.53:8080 10.9.3.252:40610 - -
The requests never reach the ratelimit service.
I modified the filter to add failure_mode_deny: true:
...
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          typed_config:
            '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            domain: sock-shop-ratelimit
            failure_mode_deny: true # now returns 500
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_service
                timeout: 0.25s
...
kubectl -n istio-system logs svc/istio-ingressgateway -f
[2020-10-08T14:48:48.231Z] "POST /email HTTP/1.1" 500 RLSE "-" "-" 0 0 0 - "10.9.3.98" "PostmanRuntime/7.26.5" "22a11ab3-f3a2-98a8-b98d-f6d492a20faf" "c4.com.br" "-" - - 10.9.3.53:8080 10.9.3.98:23396 - -
[2020-10-08T14:48:49.716Z] "POST /email HTTP/1.1" 500 RLSE "-" "-" 0 0 0 - "10.9.1.46" "PostmanRuntime/7.26.5" "83824ddd-60e5-9fb0-89a3-89be21672f81" "c4.com.br" "-" - - 10.9.3.53:8080 10.9.1.46:59446 - -
[2020-10-08T14:48:51.429Z] "POST /email HTTP/1.1" 500 RLSE "-" "-" 0 0 0 - "10.9.1.218" "PostmanRuntime/7.26.5" "4cbf61ed-bea2-9242-b42c-0d2887a8b618" "c4.com.br" "-" - - 10.9.3.53:8080 10.9.1.218:48406 - -
[2020-10-08T14:48:52.360Z] "POST /email HTTP/1.1" 500 RLSE "-" "-" 0 0 0 - "10.9.1.46" "PostmanRuntime/7.26.5" "6e272392-1e44-9fb4-8f9e-e904ab122c28" "c4.com.br" "-" - - 10.9.3.53:8080 10.9.1.46:59450 - -
https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage
"RLSE: The request was rejected because there was an error in rate limit service."
I can't understand why the requests don't reach the service.
Do you have any idea? @gargnupur @songford @ramaraochavali
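To make the failure_mode_deny semantics discussed above concrete, here is a toy model (an illustration only, not envoy source):

```python
# Toy model of envoy's ratelimit filter failure modes (illustration only).
def handle_request(rls_reachable: bool, over_limit: bool, failure_mode_deny: bool) -> int:
    """Return the HTTP status envoy hands back for one request."""
    if not rls_reachable:
        # RLS error: fail closed (500, logged as RLSE) or fail open (forward upstream)
        return 500 if failure_mode_deny else 200
    return 429 if over_limit else 200

# With failure_mode_deny: true and an unreachable RLS, every request gets a 500 RLSE,
# which matches the gateway logs above; with false, traffic silently passes through.
assert handle_request(rls_reachable=False, over_limit=False, failure_mode_deny=True) == 500
assert handle_request(rls_reachable=False, over_limit=False, failure_mode_deny=False) == 200
assert handle_request(rls_reachable=True, over_limit=True, failure_mode_deny=False) == 429
```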
My rate-limit-service
apiVersion: v1
kind: Namespace
metadata:
  name: rate-limit
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: rate-limit
  labels:
    app: redis
spec:
  ports:
    - name: redis
      port: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: rate-limit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - image: redis:alpine
          imagePullPolicy: Always
          name: redis
          ports:
            - name: redis
              containerPort: 6379
      restartPolicy: Always
      serviceAccountName: ""
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
  namespace: rate-limit
data:
  # check this example: https://github.com/envoyproxy/ratelimit#example-4
  config.yaml: |
    domain: sock-shop-ratelimit
    descriptors:
      - key: plan
        value: BASIC
        descriptors:
          - key: account
            rate_limit:
              unit: minute
              requests_per_unit: 1
      - key: plan
        value: PLUS
        descriptors:
          - key: account
            rate_limit:
              unit: minute
              requests_per_unit: 2
---
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: rate-limit
  labels:
    app: ratelimit
spec:
  ports:
    - name: http-8080
      port: 8080
      targetPort: 8080
      protocol: TCP
    - name: grpc-8081
      port: 8081
      targetPort: 8081
      protocol: TCP
    - name: http-6070
      port: 6070
      targetPort: 6070
      protocol: TCP
  selector:
    app: ratelimit
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratelimit
  namespace: rate-limit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratelimit
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ratelimit
    spec:
      containers:
        - image: envoyproxy/ratelimit:v1.4.0
          imagePullPolicy: Always
          name: ratelimit
          command: ["/bin/ratelimit"]
          env:
            - name: LOG_LEVEL
              value: debug
            - name: REDIS_SOCKET_TYPE
              value: tcp
            - name: REDIS_URL
              value: redis:6379
            - name: USE_STATSD
              value: "false"
            - name: RUNTIME_ROOT
              value: /data
            - name: RUNTIME_SUBDIRECTORY
              value: ratelimit
          ports:
            - containerPort: 8080
            - containerPort: 8081
            - containerPort: 6070
          volumeMounts:
            - name: config-volume
              mountPath: /data/ratelimit/config/config.yaml
              subPath: config.yaml
      volumes:
        - name: config-volume
          configMap:
            name: ratelimit-config
---
@JaveriaK, @songford
I believe I narrowed down the cause of the "Duplicate cluster rate_limit_service found while pushing CDS" message.
The rate limit filter patches the clusters, adding a new cluster 'rate_limit_service':
- applyTo: CLUSTER
  match:
    cluster:
      service: ratelimit.rate-limit.svc.cluster.local
  patch:
    operation: ADD
    value:
      name: rate_limit_service
      type: STRICT_DNS
      connect_timeout: 0.25s
      lb_policy: ROUND_ROBIN
      http2_protocol_options: {}
      hosts:
        - socket_address:
            address: ratelimit.rate-limit.svc.cluster.local
            port_value: 8081
If we comment out this portion of the envoy filter, there are no more warnings in the log.
To clarify: when you deploy the rate-limit service in k8s, Istio automatically recognizes it and adds it to envoy's clusters; the raw config looks like this:
{
  "version_info": "2020-10-19T11:25:53Z/83",
  "cluster": {
    "@type": "type.googleapis.com/envoy.api.v2.Cluster",
    "name": "outbound|8080||ratelimit.rate-limit.svc.cluster.local",
    "type": "EDS",
    "eds_cluster_config": {
      "eds_config": {
        "ads": {}
      },
      "service_name": "outbound|8080||ratelimit.rate-limit.svc.cluster.local"
    },
    "connect_timeout": "10s",
    "circuit_breakers": {
      "thresholds": [
        {
          "max_connections": 4294967295,
          "max_pending_requests": 4294967295,
          "max_requests": 4294967295,
          "max_retries": 4294967295
        }
      ]
    },
    "http2_protocol_options": {
      "max_concurrent_streams": 1073741824
    },
    "protocol_selection": "USE_DOWNSTREAM_PROTOCOL",
    "filters": [
      {
        "name": "istio.metadata_exchange",
        "typed_config": {
          "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
          "type_url": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange",
          "value": {
            "protocol": "istio-peer-exchange"
          }
        }
      }
    ],
    "transport_socket_matches": [
      {
        "name": "tlsMode-istio",
        "match": {
          "tlsMode": "istio"
        },
        "transport_socket": {
          "name": "envoy.transport_sockets.tls",
          "typed_config": {
            "@type": "type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext",
            "common_tls_context": {
              "alpn_protocols": [
                "istio-peer-exchange",
                "istio",
                "h2"
              ],
              "tls_certificate_sds_secret_configs": [
                {
                  "name": "default",
                  "sds_config": {
                    "api_config_source": {
                      "api_type": "GRPC",
                      "grpc_services": [
                        {
                          "envoy_grpc": {
                            "cluster_name": "sds-grpc"
                          }
                        }
                      ]
                    }
                  }
                }
              ],
              "combined_validation_context": {
                "default_validation_context": {
                  "match_subject_alt_names": [
                    {
                      "exact": "spiffe://cluster.local/ns/rate-limit/sa/default"
                    }
                  ]
                },
                "validation_context_sds_secret_config": {
                  "name": "ROOTCA",
                  "sds_config": {
                    "api_config_source": {
                      "api_type": "GRPC",
                      "grpc_services": [
                        {
                          "envoy_grpc": {
                            "cluster_name": "sds-grpc"
                          }
                        }
                      ]
                    }
                  }
                }
              }
            },
            "sni": "outbound_.8080_._.ratelimit.rate-limit.svc.cluster.local"
          }
        }
      },
      {
        "name": "tlsMode-disabled",
        "match": {},
        "transport_socket": {
          "name": "envoy.transport_sockets.raw_buffer"
        }
      }
    ]
  },
  "last_updated": "2020-10-19T11:26:40.458Z"
}
The envoy filter, when applied, adds this portion of cluster config:
{
  "version_info": "2020-10-21T09:35:44Z/7",
  "cluster": {
    "@type": "type.googleapis.com/envoy.api.v2.Cluster",
    "name": "rate_limit_service",
    "type": "STRICT_DNS",
    "connect_timeout": "0.250s",
    "hosts": [
      {
        "socket_address": {
          "address": "ratelimit.rate-limit.svc.cluster.local",
          "port_value": 8081
        }
      }
    ],
    "http2_protocol_options": {}
  },
  "last_updated": "2020-10-21T09:35:44.779Z"
}
So the two clusters point at the same destination, and pilot somehow treats them as duplicates?
Can we safely ignore this warning?
Hi Guys,
Any idea if there is a way to use rate limiting for outbound HTTPS traffic? E.g. I would like to allow only 5 requests per second from pod XYZ to https://google.com.
Is it doable? I assume that since it's HTTPS traffic, all the HTTP headers will be encrypted and I can't use envoy descriptors to route it to the rate limit service.
@jdomag : you can use other envoy descriptors, like remote_address, that are not dependent on HTTP headers.
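For example, a route patch built on remote_address might look like this (a sketch only: the domain name and limits are made up, and the matching service-side config follows the `---`):

```yaml
# EnvoyFilter route patch fragment: the descriptor is taken from the client IP,
# which is available even when the HTTP headers are TLS-encrypted end to end
rate_limits:
  - actions:
      - remote_address: {}
---
# corresponding ratelimit service config: 5 requests per second per client IP
domain: egress-limits
descriptors:
  - key: remote_address
    rate_limit:
      unit: second
      requests_per_unit: 5
```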
Did you try in Istio 1.7.* ?
Did you try on Istio 1.7.*
4. istioctl dashboard envoy {{the-envoy-pod-your-ratelimiter-applys-on}}
@songford Did you try on Istio 1.7.4 ?
I have designed an API; I'd appreciate it if anyone can leave comments:
https://docs.google.com/document/d/1628LHcwuCvTRFhk8rsQKQUmgCxnY6loPFSxliADQDIc/edit#
Hi,
Does anyone know how to set up rate limiting using a cookie value?
I've tried to use the header_to_metadata filter and converted the header value to dynamic metadata, but rate limiting with the dynamic metadata was not working properly. The rate limit service does work properly with a header matcher.
I need this because I want to use the cookie-to-metadata capability, which is available from Envoy v1.16:
https://www.envoyproxy.io/docs/envoy/v1.16.0/api-v3/extensions/filters/http/header_to_metadata/v3/header_to_metadata.proto.html?highlight=header_to_meta#extensions-filters-http-header-to-metadata-v3-config-rule
This is my sample configuration. Is there anything missing?
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.rate_limit
          typed_config:
            '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            domain: ratelimit
            failure_mode_deny: false
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster
                timeout: 0.25s
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.rate-limit.svc.cluster.local
      patch:
        operation: ADD
        value:
          connect_timeout: 0.25s
          http2_protocol_options: {}
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: rate_limit_cluster
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: ratelimit.rate-limit.svc.cluster.local
                          port_value: 8081
          name: dev-rate_limit_cluster
          type: STRICT_DNS
  workloadSelector:
    labels:
      istio: ingressgateway
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*:80"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions:
                - dynamic_metadata:
                    descriptor_key: user
                    metadata_key:
                      key: envoy.lb
                      path:
                        - key: cookie
  workloadSelector:
    labels:
      istio: ingressgateway
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: header-to-meta-filter
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.header_metadata
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.header_to_metadata.v3.Config
            request_rules:
              - header: cookie
                on_header_present:
                  metadata_namespace: envoy.lb
                  key: cookie
                  type: STRING
                remove: false
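One thing to note about the rule above: `header: cookie` captures the entire Cookie header, so two users with identical cookies in a different order would produce different descriptors. If the intent is to key on a single named cookie, the v1.16 API this comment links to also appears to support a `cookie` field in the rule instead of `header`. A hedged sketch, assuming a cookie named `session_id` (the cookie name is illustrative):

```yaml
request_rules:
  - cookie: session_id          # extract only this cookie's value (Envoy >= 1.16)
    on_header_present:
      metadata_namespace: envoy.lb
      key: cookie               # must still match the dynamic_metadata action's path
      type: STRING
    remove: false
```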
ConfigMap of the ratelimit service:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
  namespace: rate-limit
data:
  config.yaml: |
    domain: ratelimit
    descriptors:
      - key: user
        rate_limit:
          unit: minute
          requests_per_unit: 5
@jdomag: Can you use other Envoy descriptors, like remote_address, that don't depend on HTTP headers?
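For reference, a remote_address action would replace the header-based action in the VIRTUAL_HOST patch, and the ratelimit service config then keys on the client IP. A minimal sketch (the domain and limits are illustrative):

```yaml
# In the EnvoyFilter's VIRTUAL_HOST patch:
rate_limits:
  - actions:
      - remote_address: {}

# Matching ratelimit service config
# (for this action the descriptor key is fixed to remote_address):
# domain: ratelimit
# descriptors:
#   - key: remote_address
#     rate_limit:
#       unit: minute
#       requests_per_unit: 10
```

Note this relies on the client address Envoy sees; behind another proxy it may require trusted X-Forwarded-For handling to reflect the real client IP.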
I've decided to use Egress TLS origination and Envoy Rate Limiting for particular service. I've described this in more details in below article if anybody is interested:
https://domagalski-j.medium.com/istio-rate-limits-for-egress-traffic-8697df490f68
If you don't mind me asking, how would you pass in the Lyft config into these EnvoyFilters? Like:

- key: header_match
  value: quote-path-auth
  rate_limit:
    unit: minute
    requests_per_unit: 2
@devstein From the snippet you kindly provided, I can only see the filters matching a certain header. But where did you put the corresponding configuration for how many requests per unit of time are allowed? Thanks!
Can you please provide the solution you used? We have a similar situation and cannot find how to use unit and requests_per_unit.
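For anyone stuck at this step: the unit/requests_per_unit settings live in the Lyft ratelimit service's own config file, not in the EnvoyFilters. The service loads *.yaml from $RUNTIME_ROOT/$RUNTIME_SUBDIRECTORY/config/, so one common approach is to mount a ConfigMap (such as the ratelimit-config shown elsewhere in this thread) into its Deployment. A sketch, with illustrative names, image tag, and Redis address:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratelimit
spec:
  selector:
    matchLabels:
      app: ratelimit
  template:
    metadata:
      labels:
        app: ratelimit
    spec:
      containers:
        - name: ratelimit
          image: envoyproxy/ratelimit:master   # illustrative tag
          env:
            - name: RUNTIME_ROOT
              value: /data
            - name: RUNTIME_SUBDIRECTORY
              value: ratelimit
            - name: RUNTIME_IGNOREDOTFILES     # ConfigMap mounts create dotfile symlinks
              value: "true"
            - name: REDIS_SOCKET_TYPE
              value: tcp
            - name: REDIS_URL
              value: redis:6379                # illustrative Redis endpoint
          volumeMounts:
            - name: config
              mountPath: /data/ratelimit/config  # -> $RUNTIME_ROOT/$RUNTIME_SUBDIRECTORY/config
      volumes:
        - name: config
          configMap:
            name: ratelimit-config
```

With RUNTIME_ROOT=/data and RUNTIME_SUBDIRECTORY=ratelimit, the mount path matches the '/data/ratelimit/config/' location mentioned earlier in this thread.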
@A-N-S : Please take a look at the working tests in the istio repo: https://github.com/istio/istio/blob/master/tests/integration/telemetry/policy/envoy_ratelimit_test.go . It sets up the rate limit service using Lyft's ratelimit too.
@gargnupur Thanks for the reference. Can you give some details about " {{ .RateLimitNamespace }} " and " {{ .EchoNamespace }} " used in https://github.com/istio/istio/blob/master/tests/integration/telemetry/policy/testdata/enable_envoy_ratelimit.yaml
RateLimitNamespace -> the namespace where Lyft's Redis-backed rate limit service is set up
EchoNamespace -> the namespace where the echo app is set up
We have tests in istio/istio for this, so closing the bug...
Hi, I followed the official documentation for rate limiting and could not get global rate limiting to work at the gateway level. I added all the details in #32381.
I am totally stuck because the logs do not have any useful info. Can someone help me here?
As far as I can see, all examples for Envoy rate limiting add a new cluster and use STRICT_DNS. However, from what I observe, this seems to circumvent Istio's gRPC load balancing: when autoscaling the rate limiting service, gRPC requests are distributed quite unevenly between the ratelimit pods. Envoy appears to use long-lived gRPC connections that stick to certain pods, which are then overloaded.
When using the rate limiting service with Istio, there is already a cluster created by Istio:
istioctl proxy-config all istio-ingressgateway-5f5f67cdd5-46r2v -o json
{
"version_info": "2021-11-16T13:08:53Z/1270",
"cluster": {
"@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
"name": "outbound|8081||ratelimit.api.svc.cluster.local",
"type": "EDS",
...
Is it possible to somehow use that Istio-created cluster with Envoy rate limiting, so that gRPC request balancing works?
@msonnleitner Did you find a solution? I have the same problem.
To distribute traffic evenly after autoscaling, I've tried to find a way to create a cluster with the EDS type, but I couldn't find a solution.
Even your workaround (using the Istio-created EDS-typed cluster) distributes traffic evenly to the ratelimit pods, but then the ratelimit service doesn't work (traffic is not blocked according to the ratelimit config).
Is there any update on distributing traffic evenly?
@KoJJang Istio's rate limiting documentation was updated some time ago; it now contains a config which should work:

rate_limit_service:
  grpc_service:
    envoy_grpc:
      cluster_name: outbound|8081||ratelimit.default.svc.cluster.local
      authority: ratelimit.default.svc.cluster.local
  transport_api_version: V3
See the given change here: https://github.com/istio/istio.io/pull/11654/files#diff-b20e3a9583a775ef679a0bc15a53c23aa9b6240757bd369d2ac81760072cd7d8R118
So since Istio's docs was updated to reference that cluster outbound|8081||ratelimit.default.svc.cluster.local, I guess it is safe to assume that this is supported and not just a "hack".
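Put together, the EnvoyFilter then only needs the HTTP_FILTER patch; the CLUSTER ADD patch is unnecessary, because Istio already creates the outbound cluster for the Kubernetes Service. A sketch assuming the ratelimit Service runs in the default namespace on port 8081 (adjust the cluster name and authority to your own Service):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
              subFilter:
                name: envoy.filters.http.router
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.ratelimit
          typed_config:
            '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            domain: ratelimit
            failure_mode_deny: false
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  # the cluster Istio generated for the ratelimit Service
                  cluster_name: outbound|8081||ratelimit.default.svc.cluster.local
                  authority: ratelimit.default.svc.cluster.local
              transport_api_version: V3
```

Because this is an EDS cluster managed by Istio, gRPC calls to the rate limit service go through the mesh's normal endpoint load balancing instead of a manually defined STRICT_DNS cluster.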
@SCLogo I'm using the EnvoyFilter below, and it ends up with the limit service's ClusterIP as the endpoint of the cluster (rate_limit_cluster). (I think the ratelimit cluster should have the pods' addresses as its endpoints.)
configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.ratelimit
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
          domain: any_domain
          failure_mode_deny: false
          rate_limit_service:
            grpc_service:
              envoy_grpc:
                cluster_name: rate_limit_cluster
              timeout: 5ms
            transport_api_version: V3
  - applyTo: CLUSTER
    match:
      cluster:
        service: {LIMIT_SERVICE}.{MY_NAMESPACE}.svc.cluster.local
    patch:
      operation: ADD
      value:
        connect_timeout: 10s
        http2_protocol_options: {}
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: rate_limit_cluster
          endpoints:
            - lb_endpoints:
                - endpoint:
                    address:
                      socket_address:
                        address: {LIMIT_SERVICE}.{MY_NAMESPACE}.svc.cluster.local
                        port_value: 8081
        name: rate_limit_cluster
        type: STRICT_DNS
@msonnleitner I'll try your suggestion. But is the outbound|8081||ratelimit.default.svc.cluster.local cluster created automatically?
I also have a ratelimit Kubernetes Service, but there is only one cluster, with the STRICT_DNS type, like below. So I've added a temporary VirtualService route to create the outbound|8081||{ratelimit_service}.{namespace}.svc.cluster.local cluster.
$ istioctl pc cluster {gateway_pod}.{namespace}
...
rate_limit_cluster - - - STRICT_DNS
...
As per the updated Istio config, it should not be necessary to add that cluster manually. Try just deleting that section. IIRC, if you define a Kubernetes Service for rate limiting, it should be picked up by Istio automatically.
@msonnleitner I found the reason there was no cluster for the ratelimit service: I had set PILOT_FILTER_GATEWAY_CLUSTER_CONFIG: true to save memory, and that was the cause.
So I reverted to PILOT_FILTER_GATEWAY_CLUSTER_CONFIG: false and checked the ratelimit cluster/endpoints. Then I modified the EnvoyFilter as you suggested (and as the Istio docs describe), like below:
rate_limit_service:
  grpc_service:
    envoy_grpc:
      cluster_name: outbound|8081||{ratelimit_service}.{namespace}.svc.cluster.local
      authority: {ratelimit_service}.{namespace}.svc.cluster.local
  transport_api_version: V3
After that, I checked that all requests were distributed evenly to the ratelimit pods, but rate limiting didn't work :-(
The rest of the configuration is the same as when using a STRICT_DNS cluster. However, with the STRICT_DNS cluster, the ratelimit behavior was correct even though the traffic was not evenly distributed.
I think my setup didn't work correctly because I was using Istio 1.13 (in Istio 1.13 there was a guide to set up a STRICT_DNS cluster in the EnvoyFilter).
I found a workaround while still on 1.13: run the ratelimit service as a headless Service (clusterIP: None). This registers the ratelimit pods as endpoints on the STRICT_DNS cluster and load-balances them well.
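A headless Service for this workaround might look like the following sketch (the name, namespace, and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: rate-limit
spec:
  clusterIP: None    # headless: DNS returns the pod IPs directly,
                     # so a STRICT_DNS cluster resolves every pod as an endpoint
  selector:
    app: ratelimit
  ports:
    - name: grpc
      port: 8081
      targetPort: 8081
```

With a normal (ClusterIP) Service, STRICT_DNS resolves only the single virtual IP, which is why long-lived gRPC connections pinned traffic to a few pods.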