crashed/stopped container results in a 308 loop (bug?)
dev8-de opened this issue · 5 comments
Hello,
I am using docker compose for my setup. I have dev instances of my containers, which I'm running with
docker compose up app-dev
If this app crashes, or I stop it via Ctrl-C, the container itself is not removed; it gets the status "stopped":
824710605cde app-dev:latest "/usr/local/bin/pyth…" 2 hours ago Exited (0) 10 minutes ago
If I make any request to this stopped container, the result is a 308 redirect to itself:
[easy@workstation app$] curl -v 'https://app.dev.mb/.api/call/index'
* Host app.dev.mb:443 was resolved.
* IPv6: (none)
* IPv4: 192.168.x.x
* Trying 192.168.x.x:443...
* Connected to app.dev.mb (192.168.x.x) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / X25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: CN=wildmb
* start date: Sep 13 10:27:47 2023 GMT
* expire date: Dec 16 10:27:47 2025 GMT
* subjectAltName: host "app.dev.mb" matched cert's "*.dev.mb"
* issuer: CN=Easy-RSA CA
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://app.dev.mb/.api/call/index
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: app.dev.mb]
* [HTTP/2] [1] [:path: /.api/call/index]
* [HTTP/2] [1] [user-agent: curl/8.5.0]
* [HTTP/2] [1] [accept: */*]
> GET /.api/call/index HTTP/2
> Host: app.dev.mb
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/2 308
< date: Wed, 20 Dec 2023 07:53:53 GMT
< location: https://app.dev.mb/.api/call/index
< server: Caddy
< server: Caddy
< content-length: 0
<
* Connection #0 to host app.dev.mb left intact
The reverse proxy entry for the stopped container is still in the Caddy configuration, but the dial address is just ":80" instead of "container-ip:80", so I think this is a proxy error and not a Caddy error. (But I'm not 100% sure about that.)
If this is not a bug, how can I avoid these 308 redirects for crashed/stopped but not yet removed containers?
Regards,
easy.
What did you use as the labels? What's in Caddy's logs?
my labels are the following:
labels:
  caddy: app.dev.mb
  caddy.0_import: common
  caddy.1_import: log app-dev
  caddy.reverse_proxy: '{{ upstreams 80 }}'
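For context, these labels sit directly on the compose service that gets proxied. A minimal sketch of how that service could look (the shared "caddy" network and the other details around the labels are placeholders, not my real file):

services:
  app-dev:
    image: app-dev:latest
    networks:
      - caddy   # must be the same Docker network the caddy-docker-proxy container is attached to
    labels:
      caddy: app.dev.mb
      caddy.0_import: common
      caddy.1_import: log app-dev
      caddy.reverse_proxy: '{{ upstreams 80 }}'

networks:
  caddy:
    external: true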
The snippets in my Caddyfile:
(common) {
  tls /etc/caddy/wildmb.crt /etc/caddy/wildmb.key
}

(log) {
  log {
    output file /var/log/caddy/{args[0]}.log {
      roll_size 1gb
      roll_local_time
      roll_keep 3
      roll_keep_for 120d
    }
    format json
  }
}
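If it helps: with the container running, caddy-docker-proxy should expand those labels together with the snippets into a site block roughly like this (the upstream address is just an example; it is whatever IP the container gets on the Docker network):

app.dev.mb {
  import common
  import log app-dev
  reverse_proxy 172.22.0.31:80
}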
The log entry for the request:
{"level":"info","ts":1703059069.9339566,"logger":"http.log.access.log6","msg":"handled request","request":{"remote_ip":"172.22.0.1","remote_port":"41494","client_ip":"172.22.0.1","proto":"HTTP/2.0","method":"GET","host":"app.dev.mb","uri":"/.api/call/index","headers":{"User-Agent":["curl/8.5.0"],"Accept":["*/*"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"app.dev.mb"}},"bytes_read":0,"user_id":"","duration":0.000545562,"size":0,"status":308,"resp_headers":{"Server":["Caddy","Caddy"],"Location":["https://app.dev.mb/.api/call/index"],"Date":["Wed, 20 Dec 2023 07:57:49 GMT"],"Content-Length":["0"]}}
Caddy version: 2.7.4
In the Caddy log itself, there is no error or notice entry.
So it is just a basic setup with a self-signed wildcard certificate instead of a Let's Encrypt one, but there is no SSL issue whatsoever.
Regards.
Weird. I'll need to defer to @lucaslorentz about this; why do stopped containers still generate Caddyfile config?
I don't know.
Here is the config entry with a running container:
{
  "handle": [
    {
      "handler": "subroute",
      "routes": [
        {
          "handle": [
            {
              "handler": "reverse_proxy",
              "upstreams": [
                {
                  "dial": "172.22.0.31:80"
                }
              ]
            }
          ]
        }
      ]
    }
  ],
  "match": [
    {
      "host": [
        "app.dev.mb"
      ]
    }
  ],
  "terminal": true
}
And here is the entry when the container is stopped (but not removed):
{
  "handle": [
    {
      "handler": "subroute",
      "routes": [
        {
          "handle": [
            {
              "handler": "reverse_proxy",
              "upstreams": [
                {
                  "dial": ":80"
                }
              ]
            }
          ]
        }
      ]
    }
  ],
  "match": [
    {
      "host": [
        "app.dev.mb"
      ]
    }
  ],
  "terminal": true
},
As you can see, it is basically the same; only the container's IP is missing. As soon as I remove the container with docker container rm ..., the config entry disappears as well, which makes sense.
I think the best solution would be a label entry like
caddy.no_upstreams: "respond 'Error...' 500"
or something like this, so that if no upstreams are available, the config entry becomes a direct response entry instead.
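In plain Caddyfile terms, what I would expect to be generated for a container without a reachable upstream is roughly this (the message text is just an example):

app.dev.mb {
  respond "Error: no upstream available" 500
}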
Regards...
(edit: changed the example label entry to match Caddy's respond config)
It shouldn't scan stopped containers unless --scan-stopped-containers is set.
But it looks like I accidentally made scan-stopped-containers default to true. I will fix that in another PR. A breaking change, unfortunately, but changing to scan stopped containers by default was an even bigger breaking change, so it's better to restore it to false.
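In the meantime, explicitly disabling it should restore the old behavior, e.g. by starting the proxy with caddy docker-proxy --scan-stopped-containers=false. A minimal compose sketch, assuming a build that already includes the flag and the stock image entrypoint (the image tag and the rest of the service definition are placeholders, not a prescription):

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    # the image's entrypoint is caddy, so this runs: caddy docker-proxy --scan-stopped-containers=false
    command: ["docker-proxy", "--scan-stopped-containers=false"]
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock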