Deis-router not working on OpenStack
I have an OpenStack installation with two m1.large VMs.
I followed http://kubernetes.io/docs/getting-started-guides/kubeadm/ then https://deis.com/docs/workflow/installing-workflow/.
Everything installs fine except the "deis-router" service, which does not seem to start correctly.
$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deis deis-router 100.66.30.134 <pending> 80/TCP,443/TCP,2222/TCP,9090/TCP 13s
Workaround:
Replace "LoadBalancer" with "NodePort" in the file ~/.helmc/workspace/charts/workflow-v2.5.0/manifests/deis-router-service.yaml.
Then recreate the service (an equivalent kubectl patch is sketched after these commands):
kubectl delete -f deis-router-service.yaml
kubectl apply -f deis-router-service.yaml
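As a possible alternative to editing the chart manifest, the same change can be applied to the live service; a minimal sketch using kubectl patch, assuming the service is named deis-router in the deis namespace as shown in the output above:

# Switch the router service from LoadBalancer to NodePort in place
kubectl --namespace=deis patch service deis-router -p '{"spec": {"type": "NodePort"}}'

# Verify the type changed and note the assigned node ports
kubectl --namespace=deis get service deis-router -o wide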
The controller is then accessible through NodePort:
deis register http://deis.<my floating IP>.xip.io:32159
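If the node port is not known up front (32159 here), it can be read back from the service; a sketch, assuming the port 80 entry is the one that fronts the controller:

# Print the node port assigned to the router's port 80
kubectl --namespace=deis get service deis-router -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'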
That's interesting, because the v2.5.0 chart (and every chart before it) has always specified a hostPort in the deployment manifest. Perhaps this is something OpenStack-specific, but with those hostPort entries the container should be directly accessible on port 80 without any modifications.
Can you confirm whether you can use deis register http://deis.<my floating IP>.xip.io?
FWIW, we have to use this workflow for Vagrant installs as well, since (of course) no load balancers are provisioned on that provider either.
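A quick way to check this on OpenStack is to hit a node directly on port 80 with the router's hostname; a hedged sketch, where <node IP> is a placeholder for one of your Kubernetes nodes:

# If the hostPort is honoured, the router should answer on port 80 of the node itself
curl -v -H "Host: deis.<my floating IP>.xip.io" http://<node IP>/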
Also note that this is documented in https://deis.com/docs/workflow/quickstart/deploy-an-app/:
If you do not have a load balancer IP, the router automatically forwards traffic from a Kubernetes node to the router. In this case, use the IP of a Kubernetes node and the node port that routes to port 80 on the controller.
Deis requires a wildcard DNS record to dynamically map app names to the router. Instead of setting up DNS records, this example will use nip.io. If your router IP is 1.1.1.1, its url will be 1.1.1.1.nip.io. The URL of the controller component will be deis.1.1.1.1.nip.io.
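Putting the quoted docs into a concrete shape, a sketch of registering via a node IP and node port (both placeholders here):

# Find a node IP, then point the CLI at deis.<node IP>.nip.io on the node port that maps to port 80
kubectl get nodes -o wide
deis register http://deis.<node IP>.nip.io:<node port>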
@bacongobbler I cannot use deis register http://deis.<my floating IP>.xip.io, only deis register http://deis.<my floating IP>.xip.io:32159.
The deis-router service does not start at all when its type is set to "LoadBalancer"; it starts when set to "NodePort".
I found this doc that might help: http://docs.openstack.org/developer/magnum/dev/kubernetes-load-balancer.html
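If it helps with debugging, the reason the external IP stays <pending> usually shows up in the service's events; a sketch:

# Look for load-balancer provisioning errors in the events section
kubectl --namespace=deis describe service deis-router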
Workaround:
Use HAProxy as an extra proxy (I have it on a separate host):
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 4096
    #debug
    #quiet
    user haproxy
    group haproxy

defaults
    log global
    mode http
    retries 3
    timeout client 400s
    timeout connect 10s
    timeout server 400s
    option dontlognull
    option httplog
    option redispatch
    balance roundrobin
    maxconn 20000
    # stats directives are proxy-level keywords, so they belong here rather than in global
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth HA_ADMIN:VERY_HARD_PASSWORD

# Set up application listeners here.
# MINION_*_IP are the Kubernetes node IPs; the high ports are the router's NodePorts.
listen ssh
    bind 0.0.0.0:2222
    mode tcp
    server deis-git-MINION_1_IP MINION_1_IP:31642 check port 31642
    server deis-git-MINION_2_IP MINION_2_IP:31642 check port 31642
    timeout client 1h

listen http
    bind 0.0.0.0:80
    mode http
    server deis-http-MINION_1_IP MINION_1_IP:32257 check port 32257
    server deis-http-MINION_2_IP MINION_2_IP:32257 check port 32257

listen https
    bind 0.0.0.0:443
    # TLS passthrough: use tcp mode so HAProxy does not try to parse encrypted traffic as HTTP
    mode tcp
    server deis-http-MINION_1_IP MINION_1_IP:31912 check port 31912
    server deis-http-MINION_2_IP MINION_2_IP:31912 check port 31912
The high ports are the NodePorts assigned to the deis-router service.
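The three NodePorts plugged into the backends above can be read from the router service; a sketch, assuming the deis-router service from earlier in the thread:

# Print port -> nodePort pairs for 2222 (ssh), 80 (http) and 443 (https)
kubectl --namespace=deis get service deis-router -o jsonpath='{range .spec.ports[*]}{.port} -> {.nodePort}{"\n"}{end}'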
@DavidSie Thanks, I'll try.
You run it on a separate host: is it another VM? Can it be co-located on the K8S master VM?
Another question:
server deis-git-MINION_1_IP MINION_1_IP:31642 check port 31642
server deis-git-MINION_2_IP MINION_2_IP:31642 check port 31642
Does this mean HAProxy will check each minion to see whether the port is open?
This seems like a manifestation of the CNI networking issue with hostPorts showing up again. See deis/registry#64 for more info.
It's an upstream issue with CNI not supporting hostPorts, which we rely on in two different locations (registry-proxy and the router). There's nothing we can do on our end other than re-architect the entire platform to work around this issue, so for now we have to wait for an upstream fix.
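For anyone who wants to verify the hostPort entries mentioned above, they can be inspected on the router deployment; a sketch, assuming it is deployed as a Deployment named deis-router in the deis namespace:

# Show the container port definitions, including any hostPort entries
kubectl --namespace=deis get deployment deis-router -o jsonpath='{.spec.template.spec.containers[0].ports}'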