openshift/openshift-restclient-python

Authentication via username / password regresses to anonymous

AlanCoding opened this issue · 15 comments

I'm trying to get the Ansible OpenShift and other k8s inventory plugins to work. The process for creating the client is here, in case you want to check my work:

https://github.com/ansible/ansible/blob/0d9c9234642107d932dced2ee09f8869feffc676/lib/ansible/module_utils/k8s/common.py#L132-L170

Then this is the point where a request is actually performed:

https://github.com/ansible/ansible/blob/0d9c9234642107d932dced2ee09f8869feffc676/lib/ansible/plugins/inventory/k8s.py#L206-L213

This roughly mirrors the example in your README, but I will distill the plugin steps (specific to my attempted configuration) into a smaller script here:

import kubernetes
from openshift.dynamic import DynamicClient

configuration = kubernetes.client.Configuration()

configuration.username = '<my username>'
configuration.password = '<my password>'
configuration.host = 'https://<my server>:8443'
configuration.verify_ssl = False

kubernetes.client.Configuration.set_default(configuration)

client = DynamicClient(kubernetes.client.ApiClient(configuration))

v1_pod = client.resources.get(api_version='v1', kind='Pod')

namespace = '<my namespace>'

obj = v1_pod.get(namespace=namespace)

print(obj)

This fails with an ApiException (HTTP 403); the response body JSON is:

{
  "kind":"Status",
  "apiVersion":"v1",
  "metadata":{},
  "status":"Failure",
  "message":"projects.project.openshift.io is forbidden: User \"system:anonymous\" cannot list projects.project.openshift.io at the cluster scope: User \"system:anonymous\" cannot list all projects.project.openshift.io in the cluster",
  "reason":"Forbidden",
  "details":{"group":"project.openshift.io","kind":"projects"},
  "code":403
}

This is the same error I encounter with the inventory plugin. To put this in terms more similar to the example...

import kubernetes
from openshift.dynamic import DynamicClient

configuration = kubernetes.client.Configuration()

configuration.username = '<my username>'
configuration.password = '<my password>'
configuration.host = 'https://<my server>:8443'
configuration.verify_ssl = False

kubernetes.client.Configuration.set_default(configuration)

# k8s_client = config.new_client_from_config()
k8s_client = kubernetes.client.ApiClient(configuration)

# rest is straight from example

dyn_client = DynamicClient(k8s_client)

v1_projects = dyn_client.resources.get(api_version='project.openshift.io/v1', kind='Project')

project_list = v1_projects.get()

for project in project_list.items:
    print(project.metadata.name)

The entire point here is that we are trying to avoid loading a kube config file. That's intrinsic to the purpose of the inventory plugin.

So, could I get one example of using the client, in isolation from the config file, to authenticate with username/password and/or an API token?

I find that if I attempt the same thing but load from the config file, it works. That indicates my username/password are sufficient credentials (oc login demonstrates this, since it creates the config file that works). Again, the point is that I want to avoid using the config file.
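
For reference, the config-file variant that works is just the README pattern; here is a minimal sketch of it (namespace placeholder as before):

from kubernetes import config
from openshift.dynamic import DynamicClient

# Reads the credentials that `oc login` wrote to ~/.kube/config
k8s_client = config.new_client_from_config()
dyn_client = DynamicClient(k8s_client)

v1_pod = dyn_client.resources.get(api_version='v1', kind='Pod')
print(v1_pod.get(namespace='<my namespace>'))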

Yes, I think the user:password process is completely broken currently (and maybe always has been), but api-token authentication should work without issue. I'm honestly not sure what it would take to implement user:password authentication, as I'm fairly unfamiliar with the whole authentication process. It's definitely something we need to fix for the Ansible modules, either there or in the python kubernetes client (we actually just consume their library for all the configuration/authentication logic). For an example of the api-token method working:

import kubernetes
from openshift.dynamic import DynamicClient

import urllib3
urllib3.disable_warnings()


def get_client(**kwargs):
    configuration = kubernetes.client.Configuration()

    for k, v in kwargs.items():
        setattr(configuration, k, v)

    kubernetes.client.Configuration.set_default(configuration)

    # k8s_client = config.new_client_from_config()
    k8s_client = kubernetes.client.ApiClient(configuration)

    # rest is straight from example

    return DynamicClient(k8s_client)


token_auth = dict(
    api_key={'authorization': 'Bearer {}'.format('<api-token>')},
    host='<host>',
    verify_ssl=False
)

try:
    client = get_client(**token_auth)
    v1_projects = client.resources.get(api_version='project.openshift.io/v1', kind='Project')

    for project in v1_projects.get().items:
        print('\t{}'.format(project.metadata.name))
except Exception as e:
    print("\tFailed with exception: {}".format(type(e)))

Thanks a lot for that suggestion! Using the token provides a viable way to use the client without the additional kube config file. On my side, I am still troubleshooting permissions, but it seems that the inventory plugin passes the token through to the configuration in much the same way as your example. I'm hopeful this will work. Appreciate the help.

Yeah, I think all the k8s modules and plugins use the same authentication function now, so it should already be possible to pass through a valid API key using the same parameters the k8s module or lookup plugin accepts.

Yes, we have now verified that this works on our infrastructure, given the linked patch in Ansible. We get a token via the oc CLI (oc serviceaccounts get-token <username>), then query a namespace that user has read access to, supplying the token through the api_key key in the openshift.yml inventory file. I'm pretty sure this is the same effective method you gave here.
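
For the record, a minimal sketch of what that effective method looks like when wired up directly in Python rather than through the inventory file (the token value, host, and namespace are placeholders):

import kubernetes
from openshift.dynamic import DynamicClient

# Token obtained out of band with: oc serviceaccounts get-token <username>
SA_TOKEN = '<service-account token>'

configuration = kubernetes.client.Configuration()
configuration.host = 'https://<my server>:8443'
configuration.verify_ssl = False
configuration.api_key = {'authorization': 'Bearer {}'.format(SA_TOKEN)}

dyn_client = DynamicClient(kubernetes.client.ApiClient(configuration))

# Scope the query to a namespace the service account can read, rather
# than listing at the cluster scope (which is what failed above).
v1_pod = dyn_client.resources.get(api_version='v1', kind='Pod')
for pod in v1_pod.get(namespace='<my namespace>').items:
    print(pod.metadata.name)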

For future reference I'm attaching the log of running oc login --v=8 against openshift 3.11; we would likely need to implement something like this to properly enable username:password authentication from the Ansible modules. It may also require adding a new dependency to handle the OAuth pieces.

login.log

I did a little more poking around in the oc code and managed to mimic the series of requests made by oc login with this:

import requests
from urllib3.util import make_headers
from requests_oauthlib import OAuth2Session
from six.moves.urllib_parse import urlparse, parse_qs, urlencode

HOST = 'https://master.example.org:8443'
USERNAME = 'admin'
PASSWORD = 'admin'

# Get needed info to access authorization APIs
oauth_server_info = requests.get('{}/.well-known/oauth-authorization-server'.format(HOST), verify=False).json()

openshift_oauth = OAuth2Session(client_id='openshift-challenging-client')
authorization_url, state = openshift_oauth.authorization_url(oauth_server_info['authorization_endpoint'], state="1", code_challenge_method='S256')

basic_auth_header = make_headers(basic_auth='{}:{}'.format(USERNAME, PASSWORD)).get('authorization')

# Request authorization code using basic auth credentials
challenge_response = openshift_oauth.get(
    authorization_url,
    headers={'X-Csrf-Token': state, 'authorization': basic_auth_header},
    verify=False,
    allow_redirects=False
)
# The Location header now carries `code` and `state`; `code` is the piece we need for the token exchange
qwargs = {k: v[0] for k, v in parse_qs(urlparse(challenge_response.headers['Location']).query).items()}
qwargs['grant_type'] = 'authorization_code'

# Using authorization code given to us in the Location header of the previous request, request a token
auth = openshift_oauth.post(
    '{}/oauth/token'.format(HOST),
    headers={
        'Accept': 'application/json',
        'Content-Type': 'application/x-www-form-urlencoded',
        # This is just base64 encoded 'openshift-challenging-client:'
        'Authorization': 'Basic b3BlbnNoaWZ0LWNoYWxsZW5naW5nLWNsaWVudDo='
    },
    data=urlencode(qwargs),
    verify=False
).json()

# We now have the Bearer token and can interact with the API
print(requests.get('{}/api/v1/pods'.format(HOST), verify=False, headers={'authorization': '{} {}'.format(auth['token_type'], auth['access_token'])}).json())

So, here's what I've managed to piece together up to this point:

  1. host+user+pwd auth in the k8s modules is a perfect fit for authenticating against vanilla kubernetes clusters with HTTP basic auth configured, since all the API calls stay the same and you just add one header. This should also be quite easy to add to the lib (see the sketch after this list).
  2. This is very much not the case with standard openshift auth, where a separate login step issues an access token that is then used for the actual API calls. By default those tokens are valid for 24h, so if we want to use them like this, we'd likely also need to delete them afterwards.
  3. The docs for identity providers mention the ability to specify a challenge mechanism based on WWW-Authenticate, which could potentially be equivalent to the vanilla k8s basic auth. I'm looking into that right now.
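
To illustrate item 1, here is a minimal sketch of what 'just add one header' could look like against a vanilla cluster with HTTP basic auth enabled; the header is built the same way oc builds it, and the host, credentials, and namespace are placeholders:

import kubernetes
from urllib3.util import make_headers
from openshift.dynamic import DynamicClient

configuration = kubernetes.client.Configuration()
configuration.host = 'https://<my server>:6443'
configuration.verify_ssl = False

k8s_client = kubernetes.client.ApiClient(configuration)

# Attach the basic-auth header to every request the client makes;
# this is the single extra header item 1 refers to.
basic_auth_header = make_headers(basic_auth='<username>:<password>')['authorization']
k8s_client.set_default_header('authorization', basic_auth_header)

dyn_client = DynamicClient(k8s_client)
v1_pod = dyn_client.resources.get(api_version='v1', kind='Pod')
print(v1_pod.get(namespace='<my namespace>'))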

My current thinking is that what should happen is:

  1. k8s vanilla auth support should be properly added as an implicit auth mechanism for running any API calls. If the openshift challenge mechanism is what I think it is, then it should be handled the same way, without the user having to think about it.
  2. Standard openshift token-issuing auth functionality should be made an explicit method that the user calls, after which they are expected to pass the resulting token to any further API calls (a rough sketch follows this list). Downstream, this would mean the k8s Ansible modules need an additional k8s_auth module to handle explicit auth methods that aren't implicitly handled by the TLS/HTTP layers (such as client SSL keys and HTTP basic auth).
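
Item 2 might end up looking something like the following; the function name and signature are hypothetical, and the body is just the OAuth flow from my earlier comment wrapped up so the caller gets a token back to feed into the client:

import requests
from urllib3.util import make_headers
from requests_oauthlib import OAuth2Session
from six.moves.urllib_parse import urlparse, parse_qs, urlencode

import kubernetes
from openshift.dynamic import DynamicClient


def openshift_login(host, username, password, verify_ssl=False):
    # Discover the OAuth endpoints advertised by the cluster
    oauth_info = requests.get(
        '{}/.well-known/oauth-authorization-server'.format(host),
        verify=verify_ssl
    ).json()

    session = OAuth2Session(client_id='openshift-challenging-client')
    authorization_url, state = session.authorization_url(
        oauth_info['authorization_endpoint'], state='1', code_challenge_method='S256')

    # Trade the basic-auth credentials for an authorization code...
    challenge = session.get(
        authorization_url,
        headers={
            'X-Csrf-Token': state,
            'authorization': make_headers(
                basic_auth='{}:{}'.format(username, password))['authorization'],
        },
        verify=verify_ssl,
        allow_redirects=False,
    )
    qwargs = {k: v[0] for k, v in
              parse_qs(urlparse(challenge.headers['Location']).query).items()}
    qwargs['grant_type'] = 'authorization_code'

    # ...then trade the code for an access token
    auth = session.post(
        '{}/oauth/token'.format(host),
        headers={
            'Accept': 'application/json',
            'Content-Type': 'application/x-www-form-urlencoded',
            # base64 of 'openshift-challenging-client:'
            'Authorization': 'Basic b3BlbnNoaWZ0LWNoYWxsZW5naW5nLWNsaWVudDo=',
        },
        data=urlencode(qwargs),
        verify=verify_ssl,
    ).json()
    return auth['access_token']


# The caller then feeds the token into the dynamic client as usual.
token = openshift_login('https://<my server>:8443', '<my username>', '<my password>')

configuration = kubernetes.client.Configuration()
configuration.host = 'https://<my server>:8443'
configuration.verify_ssl = False
configuration.api_key = {'authorization': 'Bearer {}'.format(token)}

dyn_client = DynamicClient(kubernetes.client.ApiClient(configuration))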

Thoughts?

Right, so I've read up on the WWW-Authenticate thing, and sure, it's a way to authenticate with HTTP basic, but only for the purpose of talking to /oauth/authorize to get a token. For the actual API, OpenShift takes a token or a client cert, and that's it.

So I guess the explicit openshift login method is a must, because downstream will definitely want the option of logging in and storing the token for future use on its own (think the ansible k8s_auth module).

The question remains: will there be automatic openshift user+pwd token creation/teardown functionality that emulates what vanilla k8s does with HTTP basic auth?

FYI to whoever stumbles upon this: in ansible 2.8 a k8s_auth module has been merged that can log into openshift. It's a bit rough around the edges at present, but gets the job done.

Thanks, so that's

https://github.com/ansible/ansible/blob/34671a64b30a854f741cc87a4ced79d34270a5b2/lib/ansible/modules/clustering/k8s/k8s_auth.py

from

ansible/ansible#50807

I might borrow this for the Ansible openshift inventory plugin sometime. That could involve either duplicating your method, or pushing it into module utils.

@AlanCoding why not go all in? AFAIK Fabian is not opposed in principle to getting the actual openshift auth code directly into openshift-restclient, which would make it easily consumable for all of us. So if you get some cycles allocated to this, maybe try pushing a PR here?

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.