Can't test linkerd-tcp TLS server with curl or tls-scan
stevej opened this issue · 2 comments
Attempting to test linkerd-tcp with curl has not been working for me.
Here's my configuration (I chose 443 to work with tls-scan more easily). I generated the certs with eg-ca.
$ cat with_tls.yml
admin:
  addr: 0.0.0.0:9989
  metricsIntervalSecs: 10
proxies:
- label: default
  servers:
  - kind: io.l5d.tcp
    addr: 0.0.0.0:7474
  - kind: io.l5d.tls
    addr: 0.0.0.0:443
    identities:
      localhost:
        privateKey: ../eg-ca/test.fruitless.org.tls/private.pem
        certs:
        - ../eg-ca/test.fruitless.org.tls/cert.pem
        - ../eg-ca/test.fruitless.org.tls/ca-chain.cert.pem
  namerd:
    url: http://127.0.0.1:4180
    path: /svc/default
    intervalSecs: 5
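The identities block maps SNI hostnames to keypairs; the linkerd_tcp::app::sni lines in the trace below show the resolver matching "localhost" against exactly this map. As a rough sketch of that kind of lookup (the types and names here are my own illustration, not linkerd-tcp's actual code):

use std::collections::HashMap;

// Illustrative identity record: a private key plus the cert chain,
// leaf certificate first, CA chain after (matching the config above).
struct Identity {
    private_key_pem: Vec<u8>,
    cert_chain_pem: Vec<Vec<u8>>,
}

// Exact-name SNI lookup, consistent with the "finding cert resolver for
// Some(\"localhost\")" / "found match for localhost" log lines below.
fn resolve<'a>(
    identities: &'a HashMap<String, Identity>,
    sni: Option<&str>,
) -> Option<&'a Identity> {
    sni.and_then(|name| identities.get(name))
}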
$ curl --no-alpn -vvvv -k https://localhost:7575
* Rebuilt URL to: https://localhost:7575/
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 7575 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 692 certificates in /etc/ssl/certs
^C
Eventually it times out if I don't exit it.
Here's the trace log output, along with some commentary after digging through the source.
TRACE:linkerd_tcp::lb::socket : SecureServerHandshake(Some(SecureSocket(V4(127.0.0.1:48320)))).poll()
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320 256B
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: process_new_packets: 127.0.0.1:48320
DEBUG:rustls::server_hs : we got a clienthello ClientHelloPayload { client_version: TLSv1_2, random: Random([89, 30, 5, 57, 172, 204, 181, 20, 196, 176, 98, 54, 211, 162, 223, 205, 205, 88, 49, 232, 128, 150, 200, 195, 109, 66, 92, 211, 185, 119, 112, 28]), session_id: SessionID, cipher_suites: [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, Unknown(49325), TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_GCM_SHA256, Unknown(49324), TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CAMELLIA_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_CAMELLIA_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_CAMELLIA_256_GCM_SHA384, TLS_RSA_WITH_AES_256_CCM, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_CAMELLIA_256_CBC_SHA, TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_CAMELLIA_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CCM, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_CAMELLIA_128_CBC_SHA, TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_CAMELLIA_256_GCM_SHA384, TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_DHE_RSA_WITH_AES_256_CCM, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA, TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_CAMELLIA_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_CCM, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA, TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256, TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA], compression_methods: [Null], extensions: [ExtendedMasterSecretRequest, Unknown(UnknownExtension { typ: Unknown(22), payload: Payload([]) }), Unknown(UnknownExtension { typ: StatusRequest, payload: Payload([1, 0, 0, 0, 0]) }), ServerName([ServerName { typ: HostName, payload: HostName("localhost") }]), Unknown(UnknownExtension { typ: RenegotiationInfo, payload: Payload([0]) }), SessionTicketRequest, NamedGroups([secp256r1, secp384r1, secp521r1, Unknown(21), Unknown(19)]), ECPointFormats([Uncompressed]), SignatureAlgorithms([RSA_PKCS1_SHA256, ECDSA_NISTP256_SHA256, RSA_PKCS1_SHA384, ECDSA_NISTP384_SHA384, RSA_PKCS1_SHA512, ECDSA_NISTP521_SHA512, Unknown(769), Unknown(771), RSA_PKCS1_SHA1, ECDSA_SHA1_Legacy])] }
DEBUG:rustls::server_hs : sni Some("localhost")
DEBUG:rustls::server_hs : sig schemes [RSA_PKCS1_SHA256, ECDSA_NISTP256_SHA256, RSA_PKCS1_SHA384, ECDSA_NISTP384_SHA384, RSA_PKCS1_SHA512, ECDSA_NISTP521_SHA512, Unknown(769), Unknown(771), RSA_PKCS1_SHA1, ECDSA_SHA1_Legacy]
DEBUG:linkerd_tcp::app::sni : finding cert resolver for Some("localhost")
DEBUG:linkerd_tcp::app::sni : found match for localhost
INFO :rustls::server_hs : decided upon suite SupportedCipherSuite { suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, kx: ECDHE, bulk: AES_256_GCM, hash: SHA384, sign: RSA, enc_key_len: 32, fixed_iv_len: 4, explicit_nonce_len: 8 }
DEBUG:rustls::server_hs : namedgroups [secp256r1, secp384r1, secp521r1, Unknown(21), Unknown(19)]
DEBUG:rustls::server_hs : ecpoints [Uncompressed]
DEBUG:rustls::server_hs : sending server hello Message { typ: Handshake, version: TLSv1_2, payload: Handshake(HandshakeMessagePayload { typ: ServerHello, payload: ServerHello(ServerHelloPayload { server_version: TLSv1_2, random: Random([253, 13, 112, 52, 236, 107, 46, 111, 70, 34, 40, 101, 237, 30, 87, 226, 214, 156, 60, 204, 108, 45, 178, 254, 81, 144, 252, 41, 214, 42, 17, 124]), session_id: SessionID, cipher_suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, compression_method: Null, extensions: [ServerNameAck, RenegotiationInfo(PayloadU8([])), ExtendedMasterSecretAck] }) }) }
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320: 62B
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320: wrote 62
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320
DEBUG:tokio_core::reactor : consuming notification queue
DEBUG:tokio_core::reactor : scheduling direction for: 5
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320: Resource temporarily unavailable (os error 11)
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320: 4070B
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320: wrote 4070
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320
DEBUG:tokio_core::reactor : consuming notification queue
DEBUG:tokio_core::reactor : scheduling direction for: 5
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320: would block
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320: 370B
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320: wrote 370
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320
DEBUG:tokio_core::reactor : consuming notification queue
DEBUG:tokio_core::reactor : scheduling direction for: 5
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320: would block
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : write_session_to_tcp: write_tls: 127.0.0.1:48320: 9B
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320: wrote 9
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320
DEBUG:tokio_core::reactor : consuming notification queue
DEBUG:tokio_core::reactor : scheduling direction for: 5
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48320: would block
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48320
TRACE:linkerd_tcp::lb::socket : server handshake: 127.0.0.1:48320: not complete
DEBUG:linkerd_tcp::lb::shared : poll_complete
TRACE:linkerd_tcp::app : runner 1 not ready
TRACE:linkerd_tcp::app : runner not finished
TRACE:linkerd_tcp::app : runner 0 not ready
TRACE:linkerd_tcp::app : runner not finished
DEBUG:tokio_core::reactor : loop poll - Duration { secs: 0, nanos: 1066 }
DEBUG:tokio_core::reactor : loop time - Instant { tv_sec: 7953554, tv_nsec: 368442048 }
No more references to 127.0.0.1:48320 appear in the logs after this point.
On another test run, I notice there are still 51 bytes from curl sitting unread in linkerd-tcp's Recv-Q (the 127.0.0.1:7575 side of the connection):
stevej@proxy-test-4d:~/src/linkerd-tcp$ netstat -an |grep 7575
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:7575 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:48680 127.0.0.1:7575 ESTABLISHED
tcp 51 0 127.0.0.1:7575 127.0.0.1:48680 ESTABLISHED
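Those 51 unread bytes are consistent with a readiness/wakeup problem: with edge-triggered registration (mio's default), a nonblocking socket has to be read until it returns WouldBlock, or the leftover bytes never generate another readable event. A std-only sketch of the drain loop I'd expect (my illustration of the failure mode, not linkerd-tcp code):

use std::io::{ErrorKind, Read};
use std::net::TcpStream;

// Drain a nonblocking socket until WouldBlock. Stopping after one
// successful read leaves bytes in the kernel Recv-Q -- like the 51 bytes
// above -- and edge-triggered epoll won't fire for them again.
fn drain(sock: &mut TcpStream, buf: &mut Vec<u8>) -> std::io::Result<bool> {
    sock.set_nonblocking(true)?;
    let mut chunk = [0u8; 4096];
    loop {
        match sock.read(&mut chunk) {
            Ok(0) => return Ok(false), // peer closed
            Ok(n) => buf.extend_from_slice(&chunk[..n]),
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => return Ok(true),
            Err(e) => return Err(e),
        }
    }
}

Here's the trace for another connection stuck in the same state: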
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48704: wrote 9
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48704
DEBUG:tokio_core::reactor : consuming notification queue
DEBUG:tokio_core::reactor : scheduling direction for: 5
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: WouldBlock 127.0.0.1:48704: would block
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48704
TRACE:linkerd_tcp::lb::socket : server handshake: 127.0.0.1:48704: not complete
DEBUG:linkerd_tcp::lb::shared : poll_complete
TRACE:linkerd_tcp::app : runner 1 not ready
TRACE:linkerd_tcp::app : runner not finished
TRACE:linkerd_tcp::app : runner 0 not ready
TRACE:linkerd_tcp::app : runner not finished
DEBUG:tokio_core::reactor : loop poll - Duration { secs: 0, nanos: 2895 }
DEBUG:tokio_core::reactor : loop time - Instant { tv_sec: 7956349, tv_nsec: 711642613 }
DEBUG:tokio_core::reactor : loop process - 0 events, Duration { secs: 0, nanos: 14111 }
DEBUG:tokio_core::reactor : loop poll - Duration { secs: 0, nanos: 1705382 }
DEBUG:tokio_core::reactor : loop time - Instant { tv_sec: 7956349, tv_nsec: 713367966 }
TRACE:tokio_core::reactor : event Ready {Readable | Writable} Token(12)
DEBUG:tokio_core::reactor : notifying a task handle
DEBUG:tokio_core::reactor : loop process - 1 events, Duration { secs: 0, nanos: 53141 }
DEBUG:tokio_core::reactor : loop poll - Duration { secs: 0, nanos: 5075 }
DEBUG:tokio_core::reactor : loop time - Instant { tv_sec: 7956349, tv_nsec: 713433957 }
TRACE:tokio_core::reactor : event Ready {Readable} Token(1)
DEBUG:tokio_core::reactor : loop process - 1 events, Duration { secs: 0, nanos: 10965 }
TRACE:linkerd_tcp::app : polling 1 running
TRACE:linkerd_tcp::app : polling runner 0
TRACE:linkerd_tcp::app : polling 2 running
TRACE:linkerd_tcp::app : polling runner 0
DEBUG:tokio_core::reactor : consuming notification queue
DEBUG:tokio_core::reactor : scheduling direction for: 0
DEBUG:linkerd_tcp::lb::shared : poll_complete
TRACE:linkerd_tcp::app : runner 0 not ready
TRACE:linkerd_tcp::app : polling runner 1
TRACE:linkerd_tcp::lb::socket : SecureServerHandshake(Some(SecureSocket(V4(127.0.0.1:48704)))).poll()
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48704
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: read_tls: 127.0.0.1:48704 107B
TRACE:linkerd_tcp::lb::socket : read_tcp_to_session: process_new_packets: 127.0.0.1:48704
TRACE:linkerd_tcp::lb::socket : server handshake: write_session_to_tcp: 127.0.0.1:48704
TRACE:linkerd_tcp::lb::socket : server handshake: 127.0.0.1:48704: not complete
DEBUG:linkerd_tcp::lb::shared : poll_complete
TRACE:linkerd_tcp::app : runner 1 not ready
TRACE:linkerd_tcp::app : runner not finished
TRACE:linkerd_tcp::app : runner 0 not ready
TRACE:linkerd_tcp::app : runner not finished
It looks to me like we're entering this code block and never being rescheduled, because the other side isn't happy with our handshake and never finishes the read:
// If the remote hasn't read everything yet, resume later.
if ss.session.is_handshaking() {
    trace!("server handshake: {}: not complete", ss.addr);
    self.0 = Some(ss);
    return Ok(Async::NotReady);
}
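For what it's worth, under futures 0.1 (which linkerd-tcp uses via tokio-core), returning Async::NotReady is only safe when something has arranged a wakeup: an I/O call that returns WouldBlock re-registers interest, but a poll that saw only successful reads and writes (as in the 48704 trace above, where read_tls returned 107B and nothing hit WouldBlock before the NotReady) parks the connection forever. A minimal sketch of that contract; the types are illustrative, not linkerd-tcp's:

extern crate futures;

use futures::{task, Async, Future, Poll};

// Bug pattern: NotReady without arranging a wakeup -- the executor never
// polls this future again.
struct ParkedForever;

impl Future for ParkedForever {
    type Item = ();
    type Error = ();
    fn poll(&mut self) -> Poll<(), ()> {
        Ok(Async::NotReady)
    }
}

// Blunt workaround: explicitly request another poll. Real code would
// instead keep driving read_tls/write_tls until a syscall actually
// returns WouldBlock, so PollEvented re-registers read/write interest.
struct SelfNotifying;

impl Future for SelfNotifying {
    type Item = ();
    type Error = ();
    fn poll(&mut self) -> Poll<(), ()> {
        task::current().notify();
        Ok(Async::NotReady)
    }
}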
Here's a tcpdump of another curl attempt:
stevej@proxy-test-4d:~$ sudo tcpdump -A -s 1500 -i lo tcp port 7575
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 1500 bytes
21:22:44.446293 IP localhost.48736 > localhost.7575: Flags [S], seq 748745347, win 43690, options [mss 65495,sackOK,TS val 1989056284 ecr 0,nop,wscale 7], length 0
E..<e.@.@............`..,............0.........
v...........
21:22:44.446315 IP localhost.7575 > localhost.48736: Flags [S.], seq 1713794766, ack 748745348, win 43690, options [mss 65495,sackOK,TS val 1989056284 ecr 1989056284,nop,wscale 7], length 0
E..<..@.@.<............`f&n.,........0.........
v...v.......
21:22:44.446329 IP localhost.48736 > localhost.7575: Flags [.], ack 1, win 342, options [nop,nop,TS val 1989056284 ecr 1989056284], length 0
E..4e.@.@..!.........`..,...f&n....V.(.....
v...v...
21:22:44.511685 IP localhost.48736 > localhost.7575: Flags [P.], seq 1:257, ack 1, win 342, options [nop,nop,TS val 1989056300 ecr 1989056284], length 256
E..4e.@.@.. .........`..,...f&n....V.(.....
v..,v..............Y..%Q.|.. B.JG=+.XDV$!...~...._...r.,.......
.$.s.+..... .#.r...0.......(.w./.....'.v.....{...5.=.......z.../.<.A...
...}.....9.k.......|...3.g.E.......\......................... localhost......#...
...
..........................................
21:22:44.511710 IP localhost.7575 > localhost.48736: Flags [.], ack 257, win 350, options [nop,nop,TS val 1989056300 ecr 1989056300], length 0
E..4..@.@.B............`f&n.,......^.(.....
v..,v..,
tls-scan is also not happy.
$ ./tls-scan --tls1 --host=localhost --port=443 --pretty
host: localhost; ip: ; error: Network; errormsg: Error encountered while reading
<|---------Scan Summary---------|>
[27154] ciphers : (0)
[27154] dns-lookup : 1
[27154] network-error : 1
[27154] dns-errcount : 0
[27154] remote-close-error : 0
[27154] unknown-error : 0
[27154] timeout-error : 0
[27154] connect-error : 1
[27154] tls-handshake : 0
[27154] gross-tls-handshake : 0
[27154] elapsed-time : 0.10453 secs
<|------------------------------|>
I should mention that none of the backends returned from namerd are serving HTTPS.
I'm able to run a slow_cooker test with TLS against the most recent linkerd-tcp (soon to be on master):
plain server, plain client:
stevej@netty-test-8:~$ ./slow_cooker_linux_amd64 -totalRequests 100000 -interval 10s -qps 100 -concurrency 10 http://proxy-test-4d:7474
# sending 1000 GET req/s with concurrency=10 to http://proxy-test-4d:7474 ...
# good/b/f t goal% min [p50 p95 p99 p999] max bhash change
2017-06-16T21:07:51Z 9987/0/0 10000 99% 10s 0 [ 0 0 1 26 ] 26 0 +
2017-06-16T21:08:01Z 10000/0/0 10000 100% 10s 0 [ 0 0 0 4 ] 4 0
2017-06-16T21:08:11Z 10006/0/0 10000 100% 10s 0 [ 0 0 1 3 ] 3 0 +
2017-06-16T21:08:21Z 10004/0/0 10000 100% 10s 0 [ 0 0 1 6 ] 6 0 +
2017-06-16T21:08:31Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 6 ] 6 0 +
2017-06-16T21:08:41Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 2 ] 2 0 +
2017-06-16T21:08:51Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 7 ] 7 0 +
2017-06-16T21:09:01Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 2 ] 2 0
2017-06-16T21:09:11Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 4 ] 4 0
2017-06-16T21:09:21Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 4 ] 4 0
2017-06-16T21:09:31Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 16 ] 16 0
FROM TO #REQUESTS
0 2 109724
2 8 253
8 32 20
32 64 0
64 128 0
128 256 0
256 512 0
512 1024 0
1024 4096 0
4096 16384 0
TLS server, plain client:
stevej@netty-test-8:~$ ./slow_cooker_linux_amd64 -totalRequests 100000 -interval 10s -qps 100 -concurrency 10 https://proxy-test-4d:7473
# sending 1000 GET req/s with concurrency=10 to https://proxy-test-4d:7473 ...
# good/b/f t goal% min [p50 p95 p99 p999] max bhash change
2017-06-16T21:11:07Z 9940/0/0 10000 99% 10s 0 [ 0 0 1 64 ] 64 0 +
2017-06-16T21:11:17Z 9961/0/0 10000 99% 10s 0 [ 0 0 1 55 ] 55 0 +
2017-06-16T21:11:27Z 10000/0/0 10000 100% 10s 0 [ 0 0 1 10 ] 10 0 +
2017-06-16T21:11:37Z 9998/0/0 10000 99% 10s 0 [ 0 0 1 27 ] 27 0 +
2017-06-16T21:11:47Z 9976/0/0 10000 99% 10s 0 [ 0 0 1 29 ] 29 0 +
2017-06-16T21:11:57Z 9977/0/0 10000 99% 10s 0 [ 0 0 1 34 ] 34 0
2017-06-16T21:12:07Z 9975/0/0 10000 99% 10s 0 [ 0 0 1 36 ] 36 0
2017-06-16T21:12:17Z 9981/0/0 10000 99% 10s 0 [ 0 0 1 30 ] 30 0
2017-06-16T21:12:27Z 9982/0/0 10000 99% 10s 0 [ 0 0 1 28 ] 28 0
2017-06-16T21:12:37Z 9989/0/0 10000 99% 10s 0 [ 0 0 1 33 ] 33 0
2017-06-16T21:12:47Z 9983/0/0 10000 99% 10s 0 [ 0 0 1 26 ] 26 0
FROM TO #REQUESTS
0 2 109089
2 8 382
8 32 265
32 64 18
64 128 8
128 256 0
256 512 0
512 1024 0
1024 4096 0
4096 16384 0
Config:
admin:
  port: 9989
  metricsIntervalSecs: 10
routers:
- label: default
  interpreter:
    kind: io.l5d.namerd.http
    baseUrl: http://localhost:4180
    namespace: default
    periodSecs: 20
  servers:
  - port: 7474
    ip: 0.0.0.0
    dstName: /svc/default
    connectTimeoutMs: 500
  - port: 7473
    ip: 0.0.0.0
    dstName: /svc/default
    connectTimeoutMs: 500
    tls:
      defaultIdentity:
        privateKey: ../eg-ca/abra.cadab.ra.tls/private.pem
        certs:
        - ../eg-ca/abra.cadab.ra.tls/cert.pem
        - ../eg-ca/ca/intermediate/certs/ca-chain.cert.pem
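For completeness, a minimal sketch of loading a defaultIdentity like the above (std-only; load_default_identity is a hypothetical helper, not linkerd-tcp's actual loader). The certs list is served in order, so the leaf certificate comes first and ca-chain.cert.pem last:

use std::fs;
use std::io;

// Hypothetical loader: returns (private-key PEM, concatenated chain PEM).
fn load_default_identity(
    key_path: &str,
    cert_paths: &[&str],
) -> io::Result<(Vec<u8>, Vec<u8>)> {
    let key_pem = fs::read(key_path)?;
    let mut chain_pem = Vec::new();
    for path in cert_paths {
        // Order matters: leaf cert first, then the CA chain.
        chain_pem.extend(fs::read(path)?);
    }
    Ok((key_pem, chain_pem))
}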