Container image does not support IPv6-first Kubernetes environments
centromere opened this issue · 2 comments
Description
By default, the nginx configuration looks like this:
server {
    listen 80;
    listen 443 ssl http2;
    # ...
}
which results in sockets looking like this:
# ss -lntp | grep nginx
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=743682,fd=7),("nginx",pid=743681,fd=7),("nginx",pid=743566,fd=7))
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=743682,fd=6),("nginx",pid=743681,fd=6),("nginx",pid=743566,fd=6))
#
This is not good, because in IPv6-first Kubernetes clusters the kubelet will try to assess Pod readiness by connecting to the Pod's IPv6 address. In the current state, the kubelet receives a connection refused error and the Pod never becomes Ready.
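A quick way to confirm this from inside the container, assuming curl is available in the image, is to compare an IPv4 and an IPv6 loopback request (the latter mirrors what the kubelet's probe does over the Pod network):

curl -4 -sv http://127.0.0.1/      # answered by nginx
curl -g -6 -sv 'http://[::1]/'     # connection refused, same as the readiness probe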
Expected Results
I expect that the Pod achieves Ready status without having to manually modify the pmm.conf nginx configuration file.
Actual Results
nginx listens only on 0.0.0.0 (IPv4), which causes the readiness check against the IPv6 address to fail.
Version
PMM server 2.40.1
Steps to reproduce
Install the percona/pmm chart on an IPv6-first Kubernetes cluster and observe that the Pod is never marked as Ready.
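For reference, a minimal reproduction could look roughly like the following; the repository URL and release name are assumptions, not taken from this issue:

helm repo add percona https://percona.github.io/percona-helm-charts/
helm install pmm percona/pmm
kubectl get pods -w    # the PMM Pod stays NotReady on an IPv6-first cluster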
Relevant logs
No response
@centromere Thanks for raising this, it indeed looks like an issue!
Do you want to suggest a fix? Feel free to make a PR if you do - that may speed things up :)
I wonder if something like this would work in every environment without trouble:
server {
    listen 80;
    listen 443 ssl http2;
    listen [::]:80;
    listen [::]:443 ssl http2;
    # ...
}
What I fear is this: if PMM happens to run in an environment that doesn't support IPv6, nginx may fail to bind the IPv6 listeners. The likelihood of that is rather low, but not zero.
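One way to avoid that risk could be to render the IPv6 listen directives only when the kernel actually has IPv6 enabled, for example from the container entrypoint. This is only a sketch: the include file below is hypothetical and would have to be pulled into the server block with an include directive.

# Hypothetical entrypoint snippet; /etc/nginx/conf.d/listen-ipv6.inc is an
# illustrative name, referenced via an include inside server { }.
if [ -f /proc/net/if_inet6 ]; then
    # Kernel has IPv6 support: add the IPv6 listeners.
    printf 'listen [::]:80;\nlisten [::]:443 ssl http2;\n' > /etc/nginx/conf.d/listen-ipv6.inc
else
    # No IPv6 available: leave the include empty so nginx still starts cleanly.
    : > /etc/nginx/conf.d/listen-ipv6.inc
fi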