gliderlabs/docker-alpine

Not resolving using search domain

NoumanSaleem opened this issue · 95 comments

Hostname is not properly resolving within the container when setting --dns-search on Docker.

Docker settings:

DOCKER_OPTS="--bip=172.17.42.1/16 --dns=172.17.42.1 --dns=10.0.2.15 --dns-search=service.consul"

gliderlabs/alpine

vagrant@local:~$ docker run --rm -it gliderlabs/alpine sh
/ # ping consul
ping: bad address 'consul'
/ # ping consul.service.consul
PING consul.service.consul (10.211.55.22): 56 data bytes
64 bytes from 10.211.55.22: seq=0 ttl=64 time=0.045 ms

progrium/busybox

vagrant@local:~$ docker run --rm -it progrium/busybox
/ # ping consul
PING consul (10.211.55.22): 56 data bytes

gliderlabs/registrator#111

Looks like musl doesn't support domain and search in resolv.conf: http://wiki.musl-libc.org/wiki/Functional_differences_from_glibc#Name_Resolver_.2F_DNS. We're researching workarounds to this issue. Thanks for reporting!

@andyshinn the musl behavior of querying all nameservers in /etc/resolv.conf in parallel and using the first response is unexpected. Perhaps that could be added to the documentation?
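To illustrate the caveat: this is roughly what Docker writes into the container's /etc/resolv.conf from the DOCKER_OPTS above (the annotations are mine):

```
# /etc/resolv.conf as written by Docker from the --dns/--dns-search flags
nameserver 172.17.42.1    # consul-aware resolver
nameserver 10.0.2.15      # plain upstream resolver
search service.consul     # ignored by musl before 1.1.13

# musl sends each query to BOTH nameservers in parallel and takes the
# first answer, so a fast NXDOMAIN from 10.0.2.15 can win the race even
# when 172.17.42.1 could have resolved the name.
```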

Yep, agree it should be noted somewhere. I'll add some more information to the docs about the caveats soon.

@andyshinn The docs are a good start, but they raise further questions. It would be great if the section on dnsmasq (http://gliderlabs.viewdocs.io/docker-alpine/caveats) could be fleshed out to provide a full solution.

I tried to provide a general example of the use case where you want to send internal requests to a Consul server and the rest out to upstream name servers. What specifically do you think the example is missing?

A brief description of dnsmasq and how to use it - on the host, as an additional container, etc. In other words, specific instructions on use, rather than "maybe try this workaround".

I think it could use a full dnsmasq config for the custom+upstream DNS
scenario so people can figure out how to run it on the host, then a pointer
to a good generic dnsmasq Docker container that would be easy to apply that
configuration for people that want to run it in a container, and lastly,
once available, a pointer to our magic resolve container.
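A minimal sketch of such a dnsmasq configuration for the custom+upstream split, assuming a Consul agent answering DNS on 127.0.0.1:8600 (addresses and domain are illustrative):

```
# dnsmasq.conf: send *.service.consul queries to the local Consul agent...
server=/service.consul/127.0.0.1#8600
# ...and everything else to ordinary upstream resolvers
server=8.8.8.8
server=8.8.4.4
# don't take upstreams from /etc/resolv.conf
no-resolv
# listen on the Docker bridge address so containers can reach it via --dns
listen-address=172.17.42.1
```

Containers would then run with a single --dns=172.17.42.1, so musl only ever sees one nameserver and the parallel-query race goes away.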


Jeff Lindsay
http://progrium.com

I was so hopeful for docker-alpine - I need a mini package for DNS utils. Alas, DNS in alpine seems to be totally busted. A shame.

I set up a dnsmasq container. In resolv.dnsmasq.conf, I added

nameserver 8.8.8.8
search example.com

In the Alpine container I set the DNS to point at that dnsmasq container.

The problem is that when I do ping base in the Alpine container, the dnsmasq log shows that only base is sent to 8.8.8.8; base.example.com is never queried.

The 3 dnsmasq options that might append a domain to unqualified names are --expand-hosts, --domain, and --auth-zone. Can you try setting these to see if it changes the behavior? Unfortunately, the only other way to do this is with some host file manipulation on the dnsmasq container.
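For reference, those three flags look like this in dnsmasq.conf form (example.com is illustrative). Note that expand-hosts only qualifies names served from /etc/hosts, and auth-zone is for running dnsmasq as an authoritative server, so neither qualifies arbitrary forwarded queries:

```
# dnsmasq.conf equivalents of the three flags
expand-hosts          # append "domain" to plain names from /etc/hosts
domain=example.com    # the domain that expand-hosts appends
auth-zone=example.com # serve example.com authoritatively
```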

This cannot be solved using dnsmasq: dnsmasq assumes glibc has already expanded the hostname using the search entry in /etc/resolv.conf.

If you are in an environment (like Tutum) where links don't work because of this, my workaround is the following gist, used in conjunction with running dnsmasq:

https://gist.github.com/neilellis/8983d6977f45f126df28

dnsmasq --resolv-file=/etc/dnsmasq-resolv.conf --addn-hosts=/etc/hosts.links --no-daemon

The full image is at https://github.com/vizzbuzz/base-alpine

Alternatively we have a container called Resolvable that helps:
https://github.com/gliderlabs/resolvable

Just because no one has mentioned it yet.. DNS search/domain resolution is really important in a kubernetes cluster. Kubernetes services are used for just about every integration point and they all resolve through DNS. Kubernetes injects the right information into /etc/resolv.conf to do this.

The fact that I can't connect to myservice.mynamespace from within an alpine container prevents me from doing anything useful. For example, I have a service called etcd.system that resolves to etcd host for http puts and gets. A very common thing is to write to etcd via curl for application configuration, etc. Alpine is the smallest base image that easily lets you download and use curl, but because my kubernetes service, etcd.system, isn't resolvable I am dead in the water.

I now have to use a 100 meg image to curl, when the 6 meg one would have done fine.

Just food for thought.

Totally agree. Hopefully this puts pressure on musl developers to address this problem. If anybody knows the best way to do this, please share.

In the meantime, we're probably adding search domain support to Resolvable.

I guess this is still dead in the water. Ran into the exact same use-case as @hypergig, using Kubernetes, hitting DNS problems, finding out Alpine doesn't support DNS search.

The people at Kubernetes (specifically @thockin) seem to be aware of it at kubernetes/kubernetes#10163, but no dice so far.

I'd like to see this resolved too. We're using skydns for service discovery.

Has anyone raised this upstream with the musl devs?

I seem to recall not finding a way to file a bug with them.

I've not found a way yet but if anybody finds a way or files an issue be sure to share the link.

I've sent an email to the musl mailing list. It can be viewed at http://www.openwall.com/lists/musl/2015/09/04/3. I'm still gathering some information to answer all of Rich's questions (based on DNS use cases in Consul, Kubernetes and SkyDNS). If you have any specific use cases, please let me know or weigh in directly on the mailing list thread.

Awesome. Let me know if there is anything I can do to help.


Any update on this? Just ran into it this morning.

Sure, there is Alpine-kubernetes: A base image that comes with a special sauce for giving SEARCH domain powers to Docker Alpine. See here: https://github.com/janeczku/docker-alpine-kubernetes.

mhart commented

It looks like Rich's questions here are still unanswered: http://www.openwall.com/lists/musl/2015/09/04/5

Can anyone familiar with what Kubernetes et al need in this regard respond there? Or should we get Rich involved here? (I can't find him on GitHub though, so that might be out of the question)

I think @thockin just re-ignited the conversation. Sorry I've been so lax on the subject. I want to be respectful of everyone's goals and didn't really know how to reasonably answer the questions yet based on the community's goals.

mhart commented

@andyshinn cool, thanks for the update. Looks like the most recent response, as of now, is here: http://www.openwall.com/lists/musl/2015/10/22/15 (for those following along).

The thread (http://www.openwall.com/lists/musl/2015/10/22/15) looks stalled out, with no conclusions reached.

Alpine is phenomenal, with musl DNS as the one massive deal breaker. :-/

As of today this is the current progress for search in musl's resolver, according to Rich Felker:

I started work on implementing it when this discussion wrapped up and realized that it made sense to do a little bit of refactoring first and fix some (non-serious, but possibly annoying) bugs in resolv.conf parsing at the same time. I added it to the roadmap for this release cycle and hope to get to it soon.

Thank you for the update. I stopped following the thread as it appeared to be just a bunch of back and forth. But it is really good news to hear Rich is at least investigating how it could work! I'd love to see Alpine working on Kubernetes and Rancher :)

kop commented

It seems like this issue will be open for a while...
Maybe it makes sense to include @janeczku's work in the official Alpine image until the problem is resolved?
Because there are a lot of good Alpine-based containers, and it's really sad to duplicate them and change the base image just to add Kubernetes support.

+1 mesos + mesos-dns + marathon + alpine = no service discovery

Apart from not respecting the search lines in resolv.conf, options ndots:number causes DNS to not work at all, even when providing FQDNs.

This causes alpine to not be usable at all with Kubernetes (which inserts an options ndots:number line in resolv.conf).

I decided not to create a separate issue for this since it's very much related to the search problem but please let me know if you want to treat this separately.

+1 this would be great help... it's a big issue with kubernetes

just for info, as Rich Felker mentioned on the mailing list, this is currently on the musl roadmap for 1.1.13 http://wiki.musl-libc.org/wiki/Roadmap#musl_1.1.13 and it is aimed at late January

@stepanstipl I have tears in my eyes right now.

This is great, any word on when an alpine release will contain the fix?

tsing commented

musl-libc 1.1.13 is released. http://www.musl-libc.org/download.html

mhart commented

@tsing @jcrugzz still 1.1.12 on the latest alpine docker images though:

~  docker pull alpine && docker run alpine ldd --version
Using default tag: latest
latest: Pulling from library/alpine
c52e3ed763ff: Already exists
Digest: sha256:d7201bdfa765bebed9a79c385db4378f55593e50dc8d002b9713810320ad93b9
Status: Image is up to date for alpine:latest
musl libc
Version 1.1.12
Dynamic Program Loader
Usage: ldd [options] [--] pathname
ncopa commented

try alpine edge:

$ docker run --rm -it alpine:edge sh
/ # apk upgrade -U -a
fetch http://dl-4.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
(1/13) Upgrading musl (1.1.12-r1 -> 1.1.13-r1)
...
ncopa commented

@andyshinn might be an idea to regenerate the edge image so people can start testing it.

Yes! I'll push new images today.

👍

I'm testing the new alpine:edge with musl 1.1.13 in kubernetes... seems like search is supported, but kubernetes/coreos/aws is inserting two nameserver entries (skydns and aws' dns server), and resolution is still failing. I'm assuming that's because aws' dns is responding first with not found, and musl ignores the other server's response.

just coming up to speed with k8s, so not sure yet if there's an option to drop the other nameserver.

Correct. To make it work in that scenario there would likely have to be one DNS server that handles the forwarding when resolution fails. Per the SkyDNS README.md:

SkyDNS will also act as a forwarding DNS proxy, so that you can set your SkyDNS instance as the primary DNS service in /etc/resolv.conf and SkyDNS will forward and proxy requests for which it is not authoritative.

@ansel1 @andyshinn just fyi - This is already included in 1.2.0 alpha line - https://github.com/kubernetes/kubernetes/releases/tag/v1.2.0-alpha.5 - SkyDNS is the only NS for Pods with DNSPolicy=ClusterFirst (#15645, @ArtfulCoder)

Thanks for that! I thought I read Tim mentioning something like that somewhere, but couldn't find it.

So to recap, it sounds like this is a starting combination to get Alpine working with k8s service discovery:

  • Kubernetes 1.2.0 (at least alpha 5)
  • Images based on alpine:edge or gliderlabs/alpine:edge as of yesterday (Feb 16th, 2016)
  • Pod settings of DNSPolicy=ClusterFirst

DNS lookup in the default busybox image, which uses uclibc rather than musl, works fine with the kubernetes config. While it would likely be more work to migrate alpine to uclibc than to just fix musl, there is no question musl has made some interesting choices around DNS, some of which it does not plan to change.

ncopa commented

Alpine v2.x and earlier used uclibc. We are not going back.

I'm assuming that's because aws' dns is responding first with not found, and musl ignores the other server's response.

I am sad that even after the welcome introduction of search in musl's resolver we still require special treatment for Alpine images... 😢

I've got my node servers running just fine on kubernetes. Dropped my image size by 400MB. Awesome.

With the 3.4.0 release, I guess it can be closed now.

Hi Guys,

Is the issue resolved? I upgraded the Docker container but the issue still seems to be reproducible.
I have tested this with the following Dockerfile:
FROM alpine:3.4
RUN apk --update add curl tar
COPY test.sh /tmp/test.sh
RUN chmod +x /tmp/test.sh
ENTRYPOINT ["/tmp/test.sh"]

I am running the container in host mode.

ncopa commented

@rsingh2411 can you show the content of test.sh?

@rsingh2411 Note that in musl libc the resolv.conf ndots option acts as a threshold for disabling search for queries that have at least ndots dots in the name. I.e. with the default options ndots:1 a query for myservice.mynamespace will never be looked up by qualifying it with search domains.
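That rule can be sketched as a tiny shell predicate. This is a toy model of musl 1.1.13's documented behavior, not musl code; the names are illustrative:

```shell
# Toy model: under musl >= 1.1.13, the search list is applied only to
# names with fewer than ndots dots; names with >= ndots dots are tried
# literally, with no fallback to search (glibc would fall back).
musl_uses_search() {  # usage: musl_uses_search NAME NDOTS
  dots=$(printf '%s' "$1" | tr -cd '.' | wc -c)
  [ "$dots" -lt "$2" ]
}

musl_uses_search myservice 1 && echo "myservice: search applied"
musl_uses_search myservice.mynamespace 1 || echo "myservice.mynamespace: tried literally only"
```

With the Kubernetes default of ndots:5, even kubernetes.default (1 dot) still gets the search list applied, which is why service names resolve there but tightly nested external domains can misbehave.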

Thanks for replying. test.sh contains a while loop with a simple echo. I usually exec into the container and do an nslookup or ping to the domain name consul.service.local.

@rsingh2411 Exec into the container and paste the output of doing

cat /etc/resolv.conf
nslookup kubernetes.default
nslookup kubernetes

Apologies for the late reply.
Contents of /etc/resolv.conf below; it's a simple Alpine Docker container on an OpenStack CentOS machine.
No Kubernetes.

; generated by /usr/sbin/dhclient-script
search
nameserver 127.0.0.1
nameserver 188.0.0.4
nameserver 188.0.0.3

nslookup output for google.com and consul.service.local:
nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve

Name: google.com
Address 1: 172.217.16.78 par03s13-in-f14.1e100.net
Address 2: 2a00:1450:4007:806::200e par03s13-in-x0e.1e100.net

bash-4.3# nslookup consul.service.local
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'consul.service.local': Name does not resolve

ncopa commented

@rsingh2411

; generated by /usr/sbin/dhclient-script
search
nameserver 127.0.0.1
nameserver 188.0.0.4
nameserver 188.0.0.3

That search line has no search domain. Is that intentional? I believe that is what is causing the error. Try removing it.
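If a tool like dhclient-script keeps regenerating the file, a small helper can strip the bare search line. A sketch only; the function name is mine:

```shell
# Drop a "search" line that lists no domains from a resolv.conf-style
# file; musl's resolver (and busybox nslookup) can trip over it.
strip_empty_search() {  # usage: strip_empty_search /etc/resolv.conf
  sed -i '/^search[[:space:]]*$/d' "$1"
}
```

You could run e.g. strip_empty_search /etc/resolv.conf after dhclient runs, though fixing the dhclient-script that generates the file is the cleaner solution.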

qux42 commented

The problem remained for me:
I tried using alpine:3.4 and alpine:edge ...

/etc/resolv.conf:

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.254.0.10
nameserver 185.48.116.10
nameserver 185.48.118.6
options ndots:5

nslookup kubernetes.default

nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'kubernetes.default': Name does not resolve

nslookup kubernetes

nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'kubernetes': Name does not resolve

Are you sure your cluster nodes are running Kubernetes 1.2.0 or later? It should only insert one nameserver with the default dnsPolicy of ClusterFirst.

Try this:

echo "options single-request-reopen" >> /etc/resolv.conf

latest kubernetes & alpine 3.4:

/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local c.amorphous-horse.internal. google.internal.
nameserver 10.3.240.10
options ndots:5
options single-request-reopen
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name: kubernetes.default
Address 1: 10.3.240.1 kubernetes.default.svc.cluster.local
/ # nslookup kubernetes
nslookup: can't resolve '(null)': Name does not resolve

Name: kubernetes
Address 1: 10.3.240.1 kubernetes.default.svc.cluster.local

Hi,

I have this problem too. I'm using Alpine 3.4 and latest, and the rancher-metadata entry is still missing from /etc/resolv.conf.
How can I fix it?

Best regards

@ae6rt has found a solution to this. Take a look at his repo

https://github.com/ae6rt/docker-alpine-dig

With this, the Alpine image can resolve short names.

I'm not sure why this issue is closed as I've tested with the alpine:latest aka alpine:3.4, and it still has this problem with not honoring search ... in /etc/resolv.conf. I've also tried some of the images and fixes that claim to have resolved the problem. None of them did for me.

This is a major problem for containers deployed into a k8s cluster, where k8s service is a critical part of the end-to-end user experience. For example, the most downloaded image from docker hub is nginx, and it's based on alpine, and it's broken when deployed into k8s.

Very sad to see this broken in both 3.4 and the edge on k8s.

  • It works without search:
/ # cat /etc/resolv.conf
#search ivan.svc.cluster.local svc.cluster.local cluster.local in.foo.bar.
nameserver 10.32.0.10
options ndots:5
/ # ping -c 1 google.com
PING google.com (216.58.208.142): 56 data bytes
64 bytes from 216.58.208.142: seq=0 ttl=36 time=0.324 ms

--- google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.324/0.324/0.324 ms
  • Doesn't work with search:
/ # cat /etc/resolv.conf
search ivan.svc.cluster.local svc.cluster.local cluster.local in.foo.bar.
nameserver 10.32.0.10
options ndots:5
/ # ping -c 1 google.com
ping: bad address 'google.com'

@bobrik - how are you deploying k8s and to which type of infrastructure environment?

This issue still exists with current alpine:3.4, alpine:3.5 and alpine:edge.

/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name:      kubernetes.default
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
/ # nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'google.com': Name does not resolve
/ # cat /etc/resolv.conf 
search testspace.svc.cluster.local svc.cluster.local cluster.local testcluster.example.com
nameserver 10.254.0.10
options ndots:5
/ #

Also failing for me on a Rancher-provided Kubernetes cluster. However, both short and long queries work if I change ndots:5 to ndots:4. Additionally, though not really an issue in practice, based on the various nslookup results posted here it seems like the reverse lookup of the DNS server used to resolve the name is failing for everyone.

So, it seems like Alpine still has flaky DNS name resolution, but it's good enough in practice for some clusters based on the ndots settings used with resolv.conf. Could somebody with a working setup please post their resolv.conf config to confirm this?

UPDATE: The ndots option was only coincidentally relevant. As the post immediately above this one demonstrates, the remaining problem is that absolute name resolution fails unless there are at least ndots dots within the host name being resolved. However, ndots is only meant to be a performance enhancement in cases where a host name clearly couldn't be a relative name, allowing some network traffic to be avoided.

The net effect when using Alpine 3.4+ is that:

  • Kubernetes services can be looked up using relative names.
  • Kubernetes services can be looked up using absolute names (because these will always have at least ndots dots in them).
  • Name resolution for hosts not defined by Kubernetes will only work for deeply nested domains that happen to have at least ndots dots, which is rare.

Surprisingly, docker-alpine-kubernetes also has the same bug, though it has been deprecated owing to the release of Alpine 3.4. Busybox OTOH, which uses uclibc, does not have this problem.

Even based on its own documentation this would appear to be a bug in musl libc:

musl's resolver previously did not support the "domain" and "search" keywords in resolv.conf. This feature was added in version 1.1.13, but its behavior differs slightly from glibc's: queries with fewer dots than the ndots configuration variable are processed with search first then tried literally (just like glibc), but those with at least as many dots as ndots are only tried in the global namespace (never falling back to search, which glibc would do if the name is not found in the global DNS namespace). This difference comes from a consistency requirement not to return different results subject to transient failures or to global DNS namespace changes outside of one's control (addition of new TLDs).

While I can confirm the second part (queries greater than ndots never fall-back to using search), the first part (queries smaller than ndots fall-back to using an absolute query) isn't what I observe.

Using dig on an Ubuntu container and attempting to resolve the nonsensical query google.com.default.svc.cluster.local (simulates the type of initial query for a short domain that would be occurring) returns a QUESTION SECTION and an AUTHORITY SECTION, but no ANSWER SECTION. This should cause musl libc to attempt to resolve the absolute query (google.com) instead, yet it doesn't seem to based on the final result of the query.

Here's the (tiny) commit where support for search and domain was added to musl libc, and here's the name_from_dns function that that diff relies on. I think the dns_parse_callback function may be the thing that determines whether we consider a result to have been received, but the code indicates this only happens when we receive an A, AAAA or CNAME record, yet in our case there's no ANSWER SECTION whatsoever.

... 😕

I replied to dchambers' post about the issue to the musl mailing list, here:
http://www.openwall.com/lists/musl/2017/03/15/4

Yes, many thanks @richfelker, and your most recent mail is even more helpful. Will continue investigating as soon as I'm able to...

Although they may not be the only culprits, for me this would appear to be a Rancher issue, so I've created a rancher-dns issue to try and get this resolved.

edge, musl-1.1.16-r7, still search and domain are not working sigh :(

pine:~$ ssh om
ssh: Could not resolve hostname om: Name does not resolve
pine:~$ 
pine:~# tcpdump -ni lo
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes
07:18:24.612659 IP 127.0.0.1.45913 > 127.0.0.1.53: 9945+ A? om. (20)
07:18:24.616066 IP 127.0.0.1.45913 > 127.0.0.1.53: 11625+ AAAA? om. (20)
07:18:24.616457 IP 127.0.0.1.53 > 127.0.0.1.45913: 9945 0/1/0 (88)
07:18:24.616997 IP 127.0.0.1.53 > 127.0.0.1.45913: 11625 0/1/0 (88)
pine:~# cat /etc/resolv.conf 
search ruff.mobi
nameserver ::1
pine:~# netstat -ulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
udp        0      0 127.0.0.1:53            0.0.0.0:*                           350/unbound
udp        0      0 ::1:53                  :::*                                350/unbound
pine:~# 

I can't reproduce your test case. It works fine here. It looks like your resolv.conf isn't even being read since the tcpdumped queries are to 127.0.0.1 not ::1. Check permissions and perhaps strace the process doing the lookup.

damn indeed such a silly miss

pine:~# ls -la /etc/resolv.conf 
-rw-------    1 root     root            52 Apr  8 07:12 /etc/resolv.conf

I was using 127.0.0.1 initially, but after some edits to the file I left it with ::1.

same issue with alpine 3.5 and kubernetes 1.5.x :(
Sometimes it works, and sometimes it does not.

bash-4.3# nslookup zookeeper.kafka
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'zookeeper.kafka': Name does not resolve
bash-4.3# nslookup zookeeper.kafka
nslookup: can't resolve '(null)': Name does not resolve

Name:      zookeeper.kafka
Address 1: 10.3.0.142
bash-4.3#

I'm also facing this issue on rancher, specifically connecting to mongodb databases. My image is unable to resolve the name of a link coming from another stack. However, if the links are within the same stack it works just fine. I don't understand why exactly.

I am now using a 300mb image when my average image size with alpine was 90mb. I would love to come back to alpine when this gets fixed.

@danielo515 / @tuannvm: this is fixed from an Alpine perspective, but musl is not as tolerant of non-conforming DNS servers, as was the case with Rancher's DNS until they recently fixed it, for example.

@dchambers do you know the minimum Rancher version that works? I want to check mine. In which Alpine version is this considered fixed?

I'm not 100% sure because I didn't perform the Rancher upgrade myself, but I know our cluster is definitely now working. More information in this issue.

In which Alpine version is this considered fixed?

@danielo515, just saw the other part of your question and the answer is Alpine 3.4.

Still have an issue with alpine 3.4 / kube-dns:1.9.
Hope that kube-dns 2.x.x will fix this.

Alpine does not support multiple nameservers either; only one.

Any updates on this? Using Jenkins based alpine images with dynamic slaves through kubernetes 1.8.3. Some alpine images can resolve external addresses and others can't. Both have the same /etc/resolv.conf files.

I've stumbled onto this same issue, and I've solved it by removing a search directive, which was triggering CloudFlare's (buggy) DNS resolution.

I've found out about the solution with this blog post: https://blog.maio.me/alpine-kubernetes-dns/

Cloudflare has EWONTFIX'd this issue according to:

kubernetes/dns#119 (comment)

Exactly, that's why the only solution was removing the triggering domain from the search directive on my end.

inodb commented

This is closed, but I'm still having problems in the latest release running on kubernetes (kops on AWS):

kubectl run -i --tty alpine --image=alpine --restart=Never -- nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name:      kubernetes.default
Address 1: 100.64.0.1 kubernetes.default.svc.cluster.local

in busybox nslookup works fine:

kubectl run -i --tty busybox --image=busybox --restart=Never -- nslookup kubernetes.default
Server:    100.64.0.10
Address 1: 100.64.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 100.64.0.1 kubernetes.default.svc.cluster.local
inodb commented

@Cloven nice, thanks for clarifying! Seems like it would be nice to have a pretty big warning sign somewhere about using Alpine images on k8s.

I was experiencing a similar issue, and it was caused by a subtle configuration difference between Alpine and glibc-based Linuxes.
If the "ndots" option in /etc/resolv.conf is set to zero, then the search path is disabled.
See: https://wiki.musl-libc.org/functional-differences-from-glibc.html
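In resolv.conf terms, the trap looks like this (addresses and domain are illustrative):

```
# resolv.conf sketch
search example.com
nameserver 10.0.0.2
options ndots:0   # on musl this disables the search list entirely;
                  # short names are only ever tried literally
```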