Proxy Protocol Port not used to determine Access Scheme
felixbuenemann opened this issue · 7 comments
If the deis router is running with PROXY protocol enabled, the value of $proxy_protocol_port is not taken into account when determining the $access_scheme, which ends up in the X-Forwarded-Proto header sent to backends.
This matters if SSL is terminated on a load balancer in front of the deis router, which then forwards requests to the router's HTTP port but indicates in the PROXY protocol header that the destination port was 443.
The deis router could read this value from the nginx variable $proxy_protocol_port, which is available since nginx 1.11.0.
For those not familiar with the PROXY protocol: the header includes the client's IP address, the client's source port, and the client's destination port, which would be the port on the load balancer terminating SSL.
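For reference, a PROXY protocol v1 header for such a connection would look roughly like this (addresses invented for illustration):

```
PROXY TCP4 203.0.113.7 10.1.2.3 56324 443
```

Here 56324 is the client's source port and 443 is the destination port the client connected to on the load balancer; that last field is the one the router would need in order to recover the original scheme.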
I think the following would work (untested):
{{ if $routerConfig.UseProxyProtocol -}}
map $proxy_protocol_port $proxy_protocol_scheme {
  default $scheme;
  "80"    "http";
  "443"   "https";
}
{{- end }}

map $http_x_forwarded_proto $tmp_access_scheme {
  {{ if $routerConfig.UseProxyProtocol -}}
  default $proxy_protocol_scheme; # if X-Forwarded-Proto header is empty, $tmp_access_scheme will be the proxy protocol scheme
  {{- else -}}
  default $scheme; # if X-Forwarded-Proto header is empty, $tmp_access_scheme will be the actual protocol used
  {{- end }}
  # account for the possibility of a comma-delimited X-Forwarded-Proto header value
  "~^(.*, ?)?http$"  "http";
  "~^(.*, ?)?https$" "https";
  "~^(.*, ?)?ws$"    "ws";
  "~^(.*, ?)?wss$"   "wss";
}
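For context, nginx only populates the $proxy_protocol_* variables when PROXY protocol is enabled on the listen directive. A minimal sketch of such a server block (not the router's actual template; the trusted CIDR is a placeholder, and real-IP substitution assumes ngx_http_realip_module is compiled in):

```nginx
server {
    # accept the PROXY protocol header on incoming connections
    listen 80 proxy_protocol;

    # only trust PROXY protocol headers coming from the load balancer's
    # network (10.0.0.0/8 is a placeholder for your LB subnet)
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;
}
```

Without the proxy_protocol flag on listen, the $proxy_protocol_port variable would simply be empty, so the map above would fall through to its default.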
Before I go too deep into this issue... if you're terminating SSL at the load balancer, your load balancer already speaks HTTP/S. An option, therefore, would be to configure it to set the X-Forwarded-For HTTP header instead... if on AWS or GKE, this is actually automatic. Just disable PROXY proto on the router end and you're back in business with real client IPs.
No, if you're terminating SSL at the load balancer it still speaks TCP. If you're terminating HTTPS it speaks HTTP, but then, for example on ELB, WebSockets won't work.
Btw, I know that terminating SSL at the ELB has drawbacks, like losing support for HTTP/2 (because the ELB does not negotiate HTTP/2 over ALPN), but many people like to use it to be able to use Amazon Certificate Manager.
I stumbled upon this issue when googling a solution to the very same problem you're having.
After implementing it, I realized that $proxy_protocol_port is actually the client's source port, not the destination port. The destination port is not available as a variable.
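One quick way to confirm this (a sketch; log path and format name are arbitrary) is to log the PROXY protocol variables next to the local port. $proxy_protocol_addr and $proxy_protocol_port report client-side values, while $server_port is just the port nginx itself is listening on:

```nginx
log_format proxy_debug '$proxy_protocol_addr:$proxy_protocol_port -> $server_port';
access_log /var/log/nginx/proxy_debug.log proxy_debug;
```

For a connection terminated on the load balancer's port 443 and forwarded to the router's port 80, this logs the client's ephemeral source port on the left and 80 on the right; the original 443 appears nowhere.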
https://trac.nginx.org/nginx/ticket/1206 is a feature request to expose the destination port.
This issue was moved to teamhephy/router#12