kubernetes-client/python

host is set to localhost when loading from kube config file in v12

limonkufu opened this issue · 17 comments

What happened (please include outputs or screenshots):
We are using the following code to set up:

import logging

from kubernetes import client, config, watch

config.load_kube_config(kubeconfig)
host = config.kube_config.Configuration().host
logging.info("HOST INFO: {}".format(host))

The kubeconfig file has the server field set correctly. This works correctly with version 11.0.0, but when we change the version to 12.0.0 it returns:

HOST INFO: http://localhost

I tried to investigate: according to kube_config.py, the host should be set in the _load_cluster_info method, but it is not being applied.

What you expected to happen:
The host URL to be set correctly, matching the server field in the kubeconfig file.

How to reproduce it (as minimally and precisely as possible):

  • Install version 11.0.0 and run the above code with a valid kubeconfig file
  • See the host url set correctly
  • Update the version to 12.0.0 and run the code again
  • See the host url set incorrectly to http://localhost

Anything else we need to know?:

Environment:

  • Kubernetes version (kubectl version): v1.18.2
  • OS (e.g., MacOS 10.13.6): Ubuntu 18.04
  • Python version (python --version): 3.7
  • Python client version (pip list | grep kubernetes): 12.0.0

We just encountered the same thing. It seems this change now requires you to explicitly get the default configuration.

So your line would need to change to something like:
host = config.kube_config.Configuration.get_default_copy().host

Something like this should probably be mentioned in the release notes.
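
Put together, a minimal sketch of the snippet from the original report, adapted for v12 (assuming the same kubeconfig path variable as above):

import logging

from kubernetes import config

config.load_kube_config(kubeconfig)
# In v12 the loader populates a library-wide default configuration; a plain
# Configuration() is a fresh, unconfigured instance, so copy the default instead.
host = config.kube_config.Configuration.get_default_copy().host
logging.info("HOST INFO: {}".format(host))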

@felixhuettner thanks, this solves the issue, but using get_default_copy() instead of a direct call seems counter-intuitive after loading the configuration from a file. (Note that this change is also not documented in the readme/examples.)

I am also facing this issue.

At the very least, this needs a mention in the changelog so that users know how to fix this.

load_incluster_config() broke too in 12.0.0. It's not clear to me yet how to retain the same mechanism...

My process for setting up my configuration when using load_incluster_config() was:

from kubernetes import client, config, utils
from kubernetes.client import Configuration

config.load_incluster_config()
c = Configuration()
c.assert_hostname = False
Configuration.set_default(c)

which fails with 12.0.0. But if I don't set the new Configuration object as the default and only have:

config.load_incluster_config()

it seems to use the correct cluster configuration.
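
Following the get_default_copy() suggestion above, a variant that should presumably preserve the assert_hostname override under 12.0.0 (untested sketch):

from kubernetes import config
from kubernetes.client import Configuration

config.load_incluster_config()
# Copy the default configuration that load_incluster_config() populated,
# tweak it, and install the copy back as the default.
c = Configuration.get_default_copy()
c.assert_hostname = False
Configuration.set_default(c)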

I have the same problem. When I use client 8.0.0 to get the Kubernetes version info, it works normally.

from sdk.v8.kubernetes import client
from sdk.v8.kubernetes import config

config.load_kube_config()
configuration = client.Configuration()
configuration.verify_ssl = False

api_client = client.ApiClient(configuration=configuration)
version_api = client.VersionApi(api_client)
print(version_api.get_code())

# output
{
   'build_date': '2019-04-22T11:34:20Z',
   'compiler': 'gc',
   'git_commit': '8cb561c',
   'git_tree_state': '',
   'git_version': 'v1.12.6-aliyun.1',
   'go_version': 'go1.10.8',
   'major': '1',
   'minor': '12+',
   'platform': 'linux/amd64'
}

but when I use client 12.0.0, an exception occurs:

urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9c7cf6f070>: Failed to establish a new connection: [Errno 61] Connection refused'))

It looks like a configuration error. After referring to the answer above, I modified it to:

from sdk.v12.kubernetes import client
from sdk.v12.kubernetes import config

config.load_kube_config()
configuration = client.Configuration.get_default_copy()
configuration.verify_ssl = False

api_client = client.ApiClient(configuration=configuration)
version_api = client.VersionApi(api_client)
print(version_api.get_code())

That solved my problem. I think that part of the configuration logic has changed.
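
For illustration, a sketch of the difference under v12 (the printed values here are examples):

from kubernetes import client, config

config.load_kube_config()
# A fresh instance is unconfigured and falls back to the generated default ...
print(client.Configuration().host)                   # http://localhost
# ... while the default populated by the loader holds the real server URL.
print(client.Configuration.get_default_copy().host)  # e.g. https://my-cluster:6443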

@ntavares I confirm this. Have you found a workaround that uses either v12.0.0 or v12.0.1? I am currently using the following code snippet and it fails:

from kubernetes import config, client
from kubernetes.client import ApiClient
from kubernetes.dynamic import DynamicClient

config.load_incluster_config()
configuration = client.Configuration.get_default_copy()

k8s_client = ApiClient(configuration=configuration)
dyn_client = DynamicClient(k8s_client)
dyn_client.resources.get(kind="Secret").create(...)

(screenshot of the resulting error traceback)

Version: 12.0.1
Python Version: 3.7.6
K8s version: v1.18.9
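
One thing that may be worth trying, given the earlier observation that relying on the populated default works: drop get_default_copy() and let ApiClient fall back to the default on its own (untested sketch):

from kubernetes import config
from kubernetes.client import ApiClient
from kubernetes.dynamic import DynamicClient

config.load_incluster_config()
# With no explicit configuration argument, ApiClient uses the default
# configuration that load_incluster_config() populated.
k8s_client = ApiClient()
dyn_client = DynamicClient(k8s_client)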

I fixed this by downgrading the Python client from version 12 to 11.
Check the installed client version with:
pip3 list | grep kubernetes

This problem is related to client version 12; if the version shown is 12, downgrade to 11 with:

pip3 install kubernetes==11

@chrisegb We are doing the exact same thing right now, but this needs to be addressed in v12 as well so we can eventually upgrade the client version.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

I'm experiencing the same issue and came up with the following workaround:

from kubernetes.config.kube_config import (
    KubeConfigLoader, KubeConfigMerger, KUBE_CONFIG_DEFAULT_LOCATION)

config_loader = KubeConfigLoader(
    config_dict=KubeConfigMerger(KUBE_CONFIG_DEFAULT_LOCATION).config,
    config_base_path=None)

current_cluster_name = config_loader.current_context["context"]["cluster"]
current_cluster_url = [
    cluster.value["cluster"]["server"]
    for cluster in config_loader._config.value["clusters"]
    if cluster.value["name"] == current_cluster_name][0]

It's not pretty, and I am not sure how portable this solution is. However, it uses code similar to what kubernetes.config does internally, and it works for me. Hope this helps.

It seems to work when the kubeconfig file is loaded before instantiating the Kubernetes API client. But keep in mind that the kubeconfig file has to be loaded from the default .kube directory and not from anywhere else.

from kubernetes import client, config

config.load_kube_config()

kbn_client = client.CoreV1Api()

ctx_namespaces = kbn_client.list_namespace(watch=False, pretty=True)
namespace_list = [i.metadata.name for i in ctx_namespaces.items]
print(namespace_list)

configuration = client.Configuration()
print(configuration.host)

Output:

> ['default', 'kube-node-lease', 'kube-public', 'kube-system', 'cloud-ctx-namespace1', 'cloud-ctx-namespace2']
> 'https://kubernetes.cloud-provider.com'

Still, the issue I am facing is loading the kubeconfig file from any location other than the default .kube directory.
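
For what it's worth, load_kube_config() does accept an explicit path via its config_file parameter, so loading from a non-default location can be sketched like this (the path below is a placeholder):

from kubernetes import client, config

# Load from an explicit, non-default location (hypothetical path).
config.load_kube_config(config_file="/some/other/dir/kubeconfig")
print(client.Configuration.get_default_copy().host)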

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.