Failed to load kubeconfig due to Invalid kube-config file. No configuration found.
Closed this issue · 7 comments
SUMMARY
Remote execution of k8s functions is not working
ISSUE TYPE
- Bug Report
COMPONENT NAME
community.kubernetes.k8s
ANSIBLE VERSION
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
CONFIGURATION
Ansible Controller Host --> Bastion Host --> AWS EKS Kubernetes Cluster
What I want to achieve is quite simple: I create an AWS EKS cluster and a bastion host with Terraform, and Terraform triggers a playbook which configures both the bastion host and the EKS cluster. BUT all k8s commands should be executed from the bastion host, not from the host running the playbook (due to access limitations).
I promise I did not change anything, and it worked until last week.
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
OS / ENVIRONMENT
Ansible Controller-Host:
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
AWS EKS-Cluster:
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-658790", GitCommit:"6587900c2b7bd83f0937204894202c93a1ecfb5f", GitTreeState:"clean", BuildDate:"2020-07-16T01:29:42Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
STEPS TO REPRODUCE
In the past I used the deprecated k8s module for Ansible to configure an AWS EKS cluster. Since last week, for some reason, both k8s modules (the deprecated one and the community version) try to execute from my Ansible controller host instead of the host where they should be executed.
affected snippet of the playbook:
```yaml
tasks:
  - name: create .kube folder in home folder
    file:
      path: /home/{{ ace_user }}/.kube
      state: directory

  - name: Copy kube config
    copy:
      src: "{{ item }}"
      dest: "/home/{{ ace_user }}/.kube/config"
      owner: "{{ ace_user }}"
      mode: u=rw,g=r,o=r
    with_fileglob:
      - "../kubeconfig*"

  - name: Create namespace
    community.kubernetes.k8s:
      kubeconfig: /home/ubuntu/.kube/config
      state: present
      definition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: TestNamespace
          labels:
            istio-injection: disabled
    become_user: "{{ ace_user }}"
```
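As a quick sanity check that the copied file on the bastion really is a readable kubeconfig, a minimal stdlib-only script can be run there (purely a hypothetical diagnostic, not part of the playbook; the path matches the task above):

```python
import os

def check_kubeconfig(path):
    """Return a list of structural problems with a kubeconfig file (crude text check)."""
    path = os.path.expanduser(path)
    if not os.path.isfile(path):
        return ["file not found: " + path]
    with open(path) as f:
        text = f.read()
    # The top-level sections every kubeconfig needs; absence suggests a wrong or empty file.
    required = ("clusters:", "contexts:", "users:", "current-context:")
    return ["missing section: " + key for key in required if key not in text]

problems = check_kubeconfig("/home/ubuntu/.kube/config")
print(problems or "kubeconfig looks structurally sane")
```

This only catches an empty or mis-copied file; it does not validate the YAML the way the client library does.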
EXPECTED RESULTS
The namespace gets created - or at least some more detailed info on the error.
ACTUAL RESULTS
Aside from the fact that previous versions were fine with the kubeconfig residing on my remote host, whereas it now needs to reside on my Ansible controller host, the playbook fails with:
"Failed to load kubeconfig due to Invalid kube-config file. No configuration found." - even with -vvvv
The full traceback is:
```
  File "/tmp/ansible_community.kubernetes.k8s_payload_5augh7xc/ansible_community.kubernetes.k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py", line 241, in get_api_client
    kubernetes.config.load_kube_config(auth.get('kubeconfig'), auth.get('context'), persist_config=auth.get('persist_config'))
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/config/kube_config.py", line 794, in load_kube_config
    persist_config=persist_config)
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/config/kube_config.py", line 752, in _get_kube_config_loader
    'Invalid kube-config file. '
```
```
fatal: [IP]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "definition": {
                "apiVersion": "v1",
                "kind": "Namespace",
                "metadata": {
                    "labels": {
                        "istio-injection": "disabled"
                    },
                    "name": "TestNamespace"
                }
            },
            "force": false,
            "host": null,
            "kind": null,
            "kubeconfig": "/home/ubuntu/.kube/config.json",
            "merge_type": null,
            "name": null,
            "namespace": null,
            "password": null,
            "persist_config": null,
            "proxy": null,
            "resource_definition": {
                "apiVersion": "v1",
                "kind": "Namespace",
                "metadata": {
                    "labels": {
                        "istio-injection": "disabled"
                    },
                    "name": "TestNamespace"
                }
            },
            "src": null,
            "state": "present",
            "template": null,
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "msg": "Failed to load kubeconfig due to Invalid kube-config file. No configuration found."
}
```
The kubeconfig file in use looks like this; if I use the same kube-config with kubectl, everything works as expected:
```yaml
apiVersion: v1
preferences: {}
kind: Config
clusters:
- cluster:
    server: https://****************************.***.zone.eks.amazonaws.com
    certificate-authority-data: ##SSLDATA##
  name: eks_ace-eks-XZUaNkz7
contexts:
- context:
    cluster: clustername
    user: clustername
  name: clustername
current-context: clustername
users:
- name: eclustername
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "clustername"
```
@reneforstner Just to clarify, because I don't see it in the snippet you supplied -- where are you delegating the k8s namespace task to run on your bastion host? Also, in the copy task you have a dest set to `/home/{{ ace_user }}/.kube/config`, but in the following task you are getting the kubeconfig from `/home/ubuntu/.kube/config`. Did you mean to do that? Is `ace_user` always equal to "ubuntu"?
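For reference, the way to pin a single task to the bastion host would be `delegate_to`; a rough sketch (the host name `bastion-host` is just a placeholder for whatever is in your inventory):

```yaml
- name: Create namespace
  community.kubernetes.k8s:
    kubeconfig: /home/ubuntu/.kube/config
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: TestNamespace
  delegate_to: bastion-host  # run this one task on the bastion, wherever the play runs
```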
@tima Sorry for this... `{{ ace_user }}` is "ubuntu"; I just hardcoded it for testing purposes. I launch my playbook with an ini file which contains all the vars, as well as the host:
Calling the playbook:
```
ansible-playbook -i bastion.ini playbook.yml -vvvv
```
Start of the playbook:
```yaml
- name: Configure host
  hosts: acebastion
  become: true
```
bastion.ini:
```ini
[acebastion]
ip-of-my-host ansible_user=ubuntu ansible_ssh_private_key_file=../../../key

[acebastion:vars]
ace_user=ubuntu
some other vars = some other values
ansible_python_interpreter=/usr/bin/python3
```
Hi again,
I just figured out that this issue is related to the newest Python kubernetes module (which was released on the 15th of October).
Because my setup just installs any kubernetes module newer than 10.0.0, the latest and greatest was installed.
I pinned it to 11.0.0 and everything works as expected.
I'll try to figure out the issue by using the Python module natively, without Ansible, and let you know.
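In the meantime, pinning the client below the broken release is the workaround that works for me; in requirements form (version bounds based on my testing above):

```
kubernetes>=11.0.0,<12.0.0
```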
@reneforstner Do you think this will resolve your issue - #276?
@Akasurde Indeed: when I do not specify any kubeconfig (which I usually do not), I get the identical error message with kubernetes module 12.0.0.
@reneforstner Thanks for the information.