/root/.kube/ setup in Dockerfile
ScottBrenner opened this issue · 1 comment
Have a simple Dockerfile based on this image:
FROM dtzar/helm-kubectl
RUN mkdir /root/.kube/
COPY .kube/ /root/.kube/            # ~/.kube copied into the build context
RUN kubectl config get-contexts     # shows the context is set up correctly
...
but I'm seeing this when running my image:
$ docker run --rm <my image> helm ls
Error: Get http://localhost:6445/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp 127.0.0.1:6445: connect: connection refused
Similarly, when running this image:
$ docker run --rm -v ~/.kube:/root/.kube dtzar/helm-kubectl helm ls
Error: Get http://localhost:6445/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp 127.0.0.1:6445: connect: connection refused
Am I missing something?
Does /root/.kube/ have to be a volume mount, or how could I set it up in a Dockerfile?
--
I'm using the built-in Kubernetes cluster from Docker Desktop on Windows 10.
$ helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I just tested this command on a Windows machine with PowerShell, very similar to the README, and it worked fine for me:
docker run -it -v C:\Users\davete\.kube:/root/.kube dtzar/helm-kubectl
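A likely reason it fails in the original report but works here (a guess, not confirmed in this thread): the error shows the kubeconfig pointing at http://localhost:6445, and inside a container localhost resolves to the container itself, not the Windows host where Docker Desktop exposes the API server. Docker Desktop makes the host reachable as host.docker.internal, so something along these lines may get the connection through (the cluster name docker-for-desktop-cluster is assumed from Docker Desktop's defaults of that era):

# Check which API server the mounted kubeconfig targets
docker run --rm -v ~/.kube:/root/.kube dtzar/helm-kubectl \
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Repoint the cluster entry at the host alias, then retry helm.
# Caution: this rewrites the kubeconfig mounted from the host, so back it up first.
docker run --rm -v ~/.kube:/root/.kube dtzar/helm-kubectl sh -c \
  'kubectl config set-cluster docker-for-desktop-cluster --server=http://host.docker.internal:6445 && helm ls'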
Mounting the kube config as a volume like this is useful when you want to do local development/testing without keeping the kubectl or helm binaries on your machine.
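To make that pattern feel native, one option (purely illustrative; these wrappers are not from the README) is to define shell functions so kubectl and helm invocations transparently run in the container:

# Hypothetical wrappers for a bash/zsh profile; PowerShell users can
# define equivalent functions using $args instead of "$@".
kubectl() {
  docker run -it --rm -v "$HOME/.kube:/root/.kube" dtzar/helm-kubectl kubectl "$@"
}
helm() {
  docker run -it --rm -v "$HOME/.kube:/root/.kube" dtzar/helm-kubectl helm "$@"
}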
I would not recommend creating a new Docker image with the kube config file baked in, since anyone who downloads that image then has access to your cluster (at least with whatever rights/permissions that kube config grants). In a production scenario you should use a secure mechanism to mount the kube config file into the container at runtime. On a Kubernetes cluster that means a securely mounted volume, such as Azure Key Vault + FlexVolume. Most build systems have some way to mount secret files into the container/system at runtime.
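As a concrete sketch of the in-cluster variant (minimal and illustrative; the names kubeconfig-secret and helm-runner are made up, and a hardened setup would swap the plain Secret for something like Key Vault + FlexVolume):

# Store the kube config in a Secret, then mount it read-only at /root/.kube.
kubectl create secret generic kubeconfig-secret --from-file=config=$HOME/.kube/config

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: helm-runner
spec:
  restartPolicy: Never
  containers:
  - name: helm-kubectl
    image: dtzar/helm-kubectl
    command: ["helm", "ls"]
    volumeMounts:
    - name: kubeconfig
      mountPath: /root/.kube      # the Secret key "config" appears as /root/.kube/config
      readOnly: true
  volumes:
  - name: kubeconfig
    secret:
      secretName: kubeconfig-secret
EOF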
If you have further questions, feel free to respond, but I'm going to go ahead and close out this issue.