teamhephy/router

Proxy Protocol Port not used to determine Access Scheme

Opened this issue · 7 comments

From @felixbuenemann on March 2, 2017 18:11

If the deis router is running with the PROXY protocol enabled, the value of $proxy_protocol_port is not taken into account when determining the $access_scheme, which ends up in the X-Forwarded-Proto header sent to backends.

This matters if SSL is terminated on a load balancer in front of the deis router, which would then forward requests to the router's http port, but indicate in the PROXY protocol header that the destination port was 443.

The deis router could read this value from the nginx variable $proxy_protocol_port, which is available since nginx 1.11.0.

Copied from original issue: deis/router#324

From @felixbuenemann on March 2, 2017 18:12

For those not familiar with the PROXY protocol: the header includes the client's IP address, the client's source port and the destination port, which would be the port on the load balancer terminating SSL.
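As an illustration (the addresses and ports below are made up, not taken from the thread), a PROXY protocol v1 header is a single text line prepended to the forwarded TCP stream. For an SSL-terminating load balancer forwarding to the router's http port, it would still report 443 as the destination port:

```text
PROXY TCP4 203.0.113.7 10.0.0.5 51234 443\r\n
```

The fields are, in order: protocol, source (client) address, destination address, source port, and destination port.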

From @felixbuenemann on March 2, 2017 18:26

I think the following would work (untested):

{{ if $routerConfig.UseProxyProtocol -}}
map $proxy_protocol_port $proxy_protocol_scheme {
  default $scheme;
  "80" "http";
  "443" "https";
}
{{- end }}

map $http_x_forwarded_proto $tmp_access_scheme {
  {{ if $routerConfig.UseProxyProtocol -}}
  default $proxy_protocol_scheme; # if X-Forwarded-Proto header is empty, $tmp_access_scheme will be the proxy protocol scheme
  {{- else -}}
  default $scheme;                # if X-Forwarded-Proto header is empty, $tmp_access_scheme will be the actual protocol used
  {{- end }}
  "~^(.*, ?)?http$" "http";       # account for the possibility of a comma-delimited X-Forwarded-Proto header value
  "~^(.*, ?)?https$" "https";     # account for the possibility of a comma-delimited X-Forwarded-Proto header value
  "~^(.*, ?)?ws$" "ws";           # account for the possibility of a comma-delimited X-Forwarded-Proto header value
  "~^(.*, ?)?wss$" "wss";         # account for the possibility of a comma-delimited X-Forwarded-Proto header value
}
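For context, the snippet above only computes $tmp_access_scheme; per the issue description, the resulting scheme is what ultimately reaches backends in the X-Forwarded-Proto header. In the router's full template (not shown in this thread) that would look roughly like this (a sketch, assuming a final $access_scheme variable derived from $tmp_access_scheme):

```nginx
# hedged sketch: $access_scheme is assumed to be derived from
# $tmp_access_scheme elsewhere in the router's template
proxy_set_header X-Forwarded-Proto $access_scheme;
```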

From @krancour on March 3, 2017 0:46

Before I go too deep into this issue... if you're terminating SSL at the load balancer, your load balancer already speaks HTTP/S. An option, therefore, would be to configure it to set the X-Forwarded-For HTTP header instead... if on AWS or GKE, this is actually automatic. Just disable PROXY proto on the router end and you're back in business with real client IPs.

From @felixbuenemann on March 3, 2017 10:47

No, if you're terminating SSL at the load balancer it still speaks TCP. If you are terminating HTTPS it speaks HTTP, but then, for example, WebSockets won't work on an ELB.

From @felixbuenemann on March 3, 2017 10:50

Btw. I know that terminating SSL at the ELB has drawbacks, like losing support for HTTP/2 (because the ELB does not negotiate HTTP/2 over ALPN), but many people like to use it to be able to use Amazon Certificate Manager.

From @boivie-at-sony on August 11, 2017 10:12

I stumbled upon this issue when googling a solution to the very same problem you're having.

After having implemented it, I realized that $proxy_protocol_port is actually the client's port, not the destination port. The destination port is not available as a variable.

https://trac.nginx.org/nginx/ticket/1206 is a feature request to expose the destination port.

It seems that newer versions of nginx-ingress (using newer versions of nginx/openresty) have started to use that destination port feature:

https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl#L1100

{{ if $all.Cfg.UseProxyProtocol }}
set $pass_server_port    $proxy_protocol_server_port;
{{ else }}
set $pass_server_port    $server_port;
{{ end }}

Related PR kubernetes/ingress-nginx#4956
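Putting the two findings together: since nginx 1.17.6 the destination port is exposed as $proxy_protocol_server_port, so the map proposed earlier in this thread could be adapted like this (an untested sketch, assuming the router runs on an nginx new enough to provide that variable):

```nginx
{{ if $routerConfig.UseProxyProtocol -}}
# $proxy_protocol_server_port is the destination port reported by the
# load balancer in the PROXY protocol header (requires nginx >= 1.17.6)
map $proxy_protocol_server_port $proxy_protocol_scheme {
  default $scheme;
  "80" "http";
  "443" "https";
}
{{- end }}
```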