Linkerd-TCP + Websockets + K8S 1.9.x - Not working
Capitrium opened this issue · 2 comments
Note: I tried posting this on Discourse but got an error about link limits in posts by new users, even though there are no links in this post...
We're having an extraordinary amount of trouble trying to set up linkerd-tcp on k8s with a simple websocket application. The closest we've been able to get to a working request is linkerd-tcp spitting out errors like `error parsing response: missing field addrs at line 1 column 14` and our test application throwing `ECONNREFUSED`. Without linkerd, our sample app works as expected.
We have some questions regarding namerd config:
- Namerd "namespace": does this need to be the same as the k8s namespace running linkerd-tcp, or the k8s namespace running the backend service pods, or does namerd have its own concept of namespaces?
- Namerd "label": should this match the name of any k8s services or namespaces, or is this purely an internal value used to configure namerd?
More generally, does anyone see any issues with our config?
Linkerd-tcp/namerd configs:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: linkerd
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
      - name: l5d-config
        configMap:
          name: l5d-config
      - name: l5d-tcp-config
        configMap:
          name: l5d-tcp-config
          items:
          - key: config.yaml
            path: config.yaml
      - name: l5d-tcp-namerd
        configMap:
          name: l5d-namerd-config
          items:
          - key: namerd.yaml
            path: namerd.yaml
      - name: tls-cert
        secret:
          secretName: certificates
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.3.6
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        - "-log.level=DEBUG" # for debugging
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: l5d-config
          mountPath: /io.buoyant/linkerd/config
          readOnly: true
        - name: tls-cert
          mountPath: /io.buoyant/linkerd/certs
          readOnly: true
      - name: linkerd-tcp
        image: linkerd/linkerd-tcp:0.1.1
        command: ["/usr/local/bin/linkerd-tcp"]
        args:
        - /io.buoyant/linkerd/config/config.yaml
        volumeMounts:
        - name: l5d-tcp-config
          mountPath: /io.buoyant/linkerd/config/config.yaml
          subPath: config.yaml
        ports:
        - name: tcp-admin
          containerPort: 9989
          hostPort: 9989
        - name: tcp-server
          containerPort: 7474
        env:
        - name: RUST_LOG # for debugging
          value: "trace"
        - name: RUST_BACKTRACE # for debugging
          value: "1"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      - name: kubectl
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
  name: dtabs.l5d.io
spec:
  scope: Namespaced
  group: l5d.io
  version: v1alpha1
  names:
    kind: DTab
    plural: dtabs
    singular: dtab
---
apiVersion: l5d.io/v1alpha1
dentries:
- dst: /#/io.l5d.k8s/default/http
  prefix: /svc
kind: DTab
metadata:
  namespace: linkerd
  name: l5d
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-namerd-config
  namespace: linkerd
data:
  namerd.yaml: |-
    admin:
      port: 9991
      ip: 0.0.0.0
    storage:
      kind: io.l5d.k8s
      host: localhost
      port: 8001
      namespace: linkerd
    interfaces:
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
    telemetry:
    - kind: io.l5d.prometheus
      prefix: tcp_
    namers:
    - kind: io.l5d.k8s
      host: localhost
      port: 8001
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: namerd
  name: namerd
  namespace: linkerd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: namerd
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: namerd
    spec:
      volumes:
      - name: l5d-namerd-config
        configMap:
          name: l5d-namerd-config
          items:
          - key: namerd.yaml
            path: namerd.yaml
      containers:
      - name: namerd
        image: buoyantio/namerd:1.3.5
        args:
        - /io.buoyant/namerd/1.3.5/config/namerd.yaml
        volumeMounts:
        - name: l5d-namerd-config
          mountPath: /io.buoyant/namerd/1.3.5/config/namerd.yaml
          subPath: namerd.yaml
        ports:
        - name: http
          containerPort: 4180
        - name: namerd-admin
          containerPort: 9991
      - name: kubectl
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
      restartPolicy: Always
      securityContext: {}
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: linkerd
  labels:
    k8s-app: l5d
    app: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
  - name: tcp-admin
    port: 9989
  - name: tcp-server
    port: 7474
---
apiVersion: v1
kind: Service
metadata:
  name: namerd
  namespace: linkerd
spec:
  selector:
    app: namerd
  type: LoadBalancer
  ports:
  - name: http
    port: 4180
  - name: admin
    port: 9991
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: linkerd
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990
    namers:
    - kind: io.l5d.k8s
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: linkerd
          port: incoming
          service: l5d
          hostNetwork: true
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
      client:
        tls:
          commonName: linkerd
          trustCerts:
          - /io.buoyant/linkerd/certs/cacertificate.pem
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true
      servers:
      - port: 4141
        ip: 0.0.0.0
        tls:
          certPath: /io.buoyant/linkerd/certs/certificate.pem
          keyPath: /io.buoyant/linkerd/certs/key.pk8
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-tcp-config
  namespace: linkerd
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9989
      metricsIntervalSecs: 10
    routers:
    - label: default
      interpreter:
        kind: io.l5d.namerd.http
        namespace: linkerd
        # baseUrl: http://localhost:4180
        baseUrl: http://namerd:4180
        periodSecs: 20
      servers:
      - ip: 0.0.0.0
        port: 7474
        dstName: /svc/server
Websocket client source:
#!/usr/bin/env node
var WebSocketClient = require('websocket').client;

var client = new WebSocketClient();

client.on('connectFailed', function(error) {
  console.log('Connect Error: ' + error.toString());
});

client.on('connect', function(connection) {
  console.log('WebSocket Client Connected');
  connection.on('error', function(error) {
    console.log("Connection Error: " + error.toString());
  });
  connection.on('close', function() {
    console.log('echo-protocol Connection Closed');
  });
  connection.on('message', function(message) {
    if (message.type === 'utf8') {
      console.log("Received: '" + message.utf8Data + "'");
    }
  });

  function sendNumber() {
    if (connection.connected) {
      var number = Math.round(Math.random() * 0xFFFFFF);
      connection.sendUTF(number.toString());
      setTimeout(sendNumber, 1000);
    }
  }
  sendNumber();
});

console.log('start');
// client.connect('ws://172.17.0.2:8081/', 'echo-protocol');
client.connect('ws://' + process.env.linkerd_proxy + '/', 'echo-protocol');
console.log('end');
Websocket server source:
#!/usr/bin/env node
var WebSocketServer = require('websocket').server;
var http = require('http');

var server = http.createServer(function(request, response) {
  console.log((new Date()) + ' Received request for ' + request.url);
  response.writeHead(404);
  response.end();
});
server.listen(80, function() {
  console.log((new Date()) + ' Server is listening on port 80');
});

var wsServer = new WebSocketServer({
  httpServer: server,
  // You should not use autoAcceptConnections for production
  // applications, as it defeats all standard cross-origin protection
  // facilities built into the protocol and the browser. You should
  // *always* verify the connection's origin and decide whether or not
  // to accept it.
  autoAcceptConnections: false
});

function originIsAllowed(origin) {
  // put logic here to detect whether the specified origin is allowed.
  return true;
}

wsServer.on('request', function(request) {
  if (!originIsAllowed(request.origin)) {
    // Make sure we only accept requests from an allowed origin
    request.reject();
    console.log((new Date()) + ' Connection from origin ' + request.origin + ' rejected.');
    return;
  }

  var connection = request.accept('echo-protocol', request.origin);
  console.log((new Date()) + ' Connection accepted.');
  connection.on('message', function(message) {
    if (message.type === 'utf8') {
      console.log('Received Message: ' + message.utf8Data);
      connection.sendUTF(message.utf8Data);
    } else if (message.type === 'binary') {
      console.log('Received Binary Message of ' + message.binaryData.length + ' bytes');
      connection.sendBytes(message.binaryData);
    }
  });
  connection.on('close', function(reasonCode, description) {
    console.log((new Date()) + ' Peer ' + connection.remoteAddress + ' disconnected.');
  });
});
Thanks for the very detailed report!
The error message you report:

error parsing response: missing field addrs at line 1 column 14

makes me think that this is a problem talking to namerd.
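For context: linkerd-tcp itself is written in Rust, so the following is only an illustrative sketch (and `parseResolution` is a made-up helper, not linkerd-tcp code). The error reads like the interpreter got a JSON object back from namerd that lacks the `addrs` field it expects, which is plausibly what a failed or empty resolution looks like when you ask the wrong namespace:

```javascript
// Hypothetical sketch: what "missing field addrs" suggests is happening.
// Assumption: a successful namerd resolution carries an "addrs" field,
// while a response for an unknown/unbound name does not.
function parseResolution(line) {
  var obj = JSON.parse(line);
  if (!('addrs' in obj)) {
    throw new Error('error parsing response: missing field addrs');
  }
  return obj.addrs;
}

// A bound name resolves to concrete addresses:
console.log(parseResolution('{"type":"bound","addrs":[{"ip":"10.0.0.1","port":80}]}'));

// A negative/empty binding has no addrs, matching the reported error:
try {
  parseResolution('{"type":"neg"}');
} catch (e) {
  console.log(e.message); // error parsing response: missing field addrs
}
```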
Namerd "namespace": does this need to be the same as the k8s namespace running linkerd-tcp, or the k8s namespace running the backend service pods, or does namerd have its own concept of namespaces?
The latter! I suspect this is the problem. (It's definitely confusing.) I think, per:
---
apiVersion: l5d.io/v1alpha1
dentries:
- dst: /#/io.l5d.k8s/default/http
  prefix: /svc
kind: DTab
metadata:
  namespace: linkerd
  name: l5d
you want to use `l5d` as the namespace.
Namespace here refers to a 'namerd namespace' -- which is basically which dtab is used.
Let us know if this resolves things for you.
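Concretely, if that's the issue, the fix would be pointing linkerd-tcp's interpreter at the `l5d` namerd namespace instead of `linkerd` (which is your k8s namespace, not the dtab name). A sketch of the corrected stanza from your `l5d-tcp-config` ConfigMap:

```yaml
routers:
- label: default
  interpreter:
    kind: io.l5d.namerd.http
    # "namespace" selects a namerd dtab, not a k8s namespace;
    # it should match the DTab resource's name ("l5d"), not "linkerd"
    namespace: l5d
    baseUrl: http://namerd:4180
    periodSecs: 20
  servers:
  - ip: 0.0.0.0
    port: 7474
    dstName: /svc/server
```

If I remember right, you can also double-check which dtab namespaces namerd actually knows about via its httpController API (a GET against `/api/1/dtabs` on port 4180 should list them).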