kubernetes-sigs/aws-iam-authenticator

got "*access denied*" when pick up authenticator v0.6.0

nnmin-aws opened this issue · 11 comments

I got an "access denied" error when picking up authenticator v0.6.0. During debugging, I found it is caused by #416. That PR changed how the authenticator looks up the ARN: https://github.com/kubernetes-sigs/aws-iam-authenticator/pull/416/files#diff-4dfc68b8a5ed36d024d6f6626d8aa66132e3c5702e882182396fbbcb1299cf01L68.

Before this PR, the code used the value passed in as the map key to look up the mapping. The value passed in is a canonical ARN, and the key is also a canonical ARN, so the lookup worked.

But this PR changed the lookup to compare the value passed in against the value part of the map. The value passed in is a canonical ARN, while the map value is not guaranteed to be a canonical ARN, so the comparison fails.
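To make the failure mode concrete, here is a minimal, purely illustrative Go sketch. The userMapping type and its Matches method below are hypothetical stand-ins (assuming an exact, case-insensitive string comparison), not the authenticator's real types:

package main

import (
    "fmt"
    "strings"
)

// userMapping is a hypothetical stand-in for the mapper's mapping type.
type userMapping struct {
    UserARN  string
    Username string
}

// Matches is assumed here to be a case-insensitive exact string comparison.
func (m userMapping) Matches(arn string) bool {
    return strings.EqualFold(m.UserARN, arn)
}

func main() {
    // The config source spells the identity as a non-canonical ARN.
    configured := "arn:aws:sts::111122223333:assumed-role/Admin/session"
    // The server hands the mapper the canonicalized caller ARN.
    canonical := "arn:aws:iam::111122223333:role/Admin"

    mapping := userMapping{UserARN: configured, Username: "admin"}

    // Old style: the map key and the lookup value are both in the same
    // canonical form, so a plain map lookup succeeds.
    byKey := map[string]userMapping{canonical: mapping}
    _, ok := byKey[canonical]
    fmt.Println("key lookup found:", ok) // true

    // New style: the canonical input is compared against the stored value,
    // which was never canonicalized, so the match fails.
    fmt.Println("value match found:", mapping.Matches(canonical)) // false
}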

Could you please take a look, and also provide an integration test case for #416 to validate this behavior?
@logandavies181

Hi @nnmin-aws, it's been 6 months since I've looked at this code, so you'll have to bear with me.

I don't believe I changed any logic with regard to canonicalized ARNs being passed to the mapper. AFAIK, for the File and EKSConfigMap mappers, the user is implicitly required to declare the ARNs in their canonicalized form, and the server then canonicalizes the ARN from incoming requests before passing it to the mappers:
https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/pkg/server/server.go#L363
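For context, canonicalization here means rewriting the caller's STS assumed-role ARN into the underlying IAM role ARN. Below is a simplified, hypothetical sketch of that step; the real implementation in the repo handles more cases (paths, partitions, error handling) and is not this code:

package main

import (
    "fmt"
    "strings"
)

// canonicalize rewrites an STS assumed-role ARN into the corresponding IAM
// role ARN and returns anything else unchanged. Simplified for illustration.
func canonicalize(arn string) string {
    const stsPrefix = "arn:aws:sts::"
    if !strings.HasPrefix(arn, stsPrefix) {
        return arn
    }
    rest := strings.TrimPrefix(arn, stsPrefix)
    parts := strings.SplitN(rest, ":", 2) // [account, resource]
    if len(parts) != 2 || !strings.HasPrefix(parts[1], "assumed-role/") {
        return arn
    }
    resource := strings.Split(parts[1], "/") // assumed-role/ROLE/SESSION
    if len(resource) < 3 {
        return arn
    }
    return "arn:aws:iam::" + parts[0] + ":role/" + resource[1]
}

func main() {
    in := "arn:aws:sts::111122223333:assumed-role/MyRole/my-session"
    fmt.Println(canonicalize(in)) // arn:aws:iam::111122223333:role/MyRole
}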

I'm not familiar with this new mapper or your scenario so I'm not sure how the ARNs have been written in the config or how they're parsed into RoleMappings.

Could you perhaps provide some more info on the new mapper and your scenario? How does your new mapper work; and what is the input you've given it (config and sts caller identity)?

Hi @logandavies181, thank you for the reply.

Yes, the server canonicalizes the ARN from the STS response and passes it to the mappers.

The change is in this function: https://github.com/kubernetes-sigs/aws-iam-authenticator/pull/416/files#diff-4dfc68b8a5ed36d024d6f6626d8aa66132e3c5702e882182396fbbcb1299cf01L68.

It previously used m.lowercaseUserMap[canonicalARN]. The keys of m.lowercaseUserMap are also in canonical format, so it worked.

But this PR changed it to use the value part of m.userMap (see below), and nothing ensures that userMapping contains a canonical ARN. So userMapping.Matches(canonicalARN) can fail and cause the access denied error.

for _, userMapping := range m.userMap {
    if userMapping.Matches(canonicalARN) {
        return &config.IdentityMapping{
            IdentityARN: canonicalARN,
            Username:    userMapping.Username,
            Groups:      userMapping.Groups,
        }, nil
    }
}

@nnmin-aws the input is still canonicalized; I just changed when the lowercasing happens because it seemed tidier at the time, instead of relying on the various mappers to do the lowercasing as they saved the keys.
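To illustrate the two placements of the lowercasing, here is a small hypothetical sketch; neither snippet is the PR's actual code:

package main

import (
    "fmt"
    "strings"
)

func main() {
    configured := "arn:aws:iam::111122223333:role/Admin"
    incoming := "arn:aws:iam::111122223333:role/admin"

    // Before: each mapper lowercased the key as it saved it, and the lookup
    // side lowercased the incoming ARN, so a plain map lookup was enough.
    byKey := map[string]string{strings.ToLower(configured): "admin-user"}
    fmt.Println(byKey[strings.ToLower(incoming)]) // "admin-user"

    // After: the stored value is kept as written and the comparison itself
    // is made case-insensitive at match time.
    fmt.Println(strings.EqualFold(configured, incoming)) // true
}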

I'm wondering if you're trying to map a non-canonicalized ARN in your config. Could you please share the config you're using?

And also a brief description of the new feature for my benefit? Just so I can understand how it works without diving into the code. I've got a day job after all 😄

If I understand the issue correctly, the problem is a possibly untested behavior that EKS relies on: putting a non-canonical ARN in a config source. Before the SSO change this worked, but after it this no longer works because of a change in which ARN is used as the key to look up the mapping. This is from my memory of a conversation we had a week or so ago; please correct me if I'm wrong, @nnmin-aws.

So in order to unblock releasing new versions that are compatible with EKS, we are currently planning on reverting #416 from the release-0.6 branch and then we can consider next steps and make the fix in the master branch.

@nckturner are you sure that's the case? I just tested on my EKS cluster and my sts assumed-role creds did not map successfully.

I'm pretty sure only canonicalized role ARNs in the EKS configmap currently work in EKS.

I actually haven't tested with the EKS configmap, but EKS puts this in a static file (config of backend mode type MountedFile) and uses it for the "cluster creator" identity.

Hmm ok, I tested and I concur the behaviour has changed for the MountedFile case. In the absence of docs specifying the behaviour, there are only examples of canonical ARNs, so I'd possibly consider it a bug on EKS's side if it's relying on that behaviour 😉

Anyway, given how much pain went into getting that PR merged in the first place, and how I'm pretty sure it'd be easier and cleaner to fix forward - how about I submit a PR cementing and documenting the behaviour, or allowing users to specify non-canonicalized ARNs in their mappings? I can probably get that in tonight UTC+13.

Probably just canonicalize the ARNs as they get added to the map

Probably just canonicalize the ARNs as they get added to the map

Yeah, this should work, but it still might be worth reverting from release-0.6 in the meantime while we work on the fix, plus tests and documentation for it. We will need to decide in the next couple of days, and we will take into consideration any fix you propose. And to be clear, this is just for the most recent release branch; I think we should only revert master if this results in more complexity than expected and the fix is more than a week or two out.

bug on EKS's side

Yes, usage of undocumented behavior could be considered a bug, but given the practical implications of migrating, I'd argue it makes sense to try to keep the existing behavior intact and make the change non-breaking.
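As a rough illustration of the suggestion above (canonicalize the ARNs as they get added to the map), here is a hypothetical sketch. The canonicalize function below just lowercases as a stand-in for the authenticator's real normalization, and the mapper and identityMapping types are illustrative, not the real ones:

package main

import (
    "fmt"
    "strings"
)

// canonicalize stands in for the real ARN normalization; lowercasing is
// enough here to show where in the flow the fix would live.
func canonicalize(arn string) string {
    return strings.ToLower(arn)
}

type identityMapping struct {
    IdentityARN string
    Username    string
    Groups      []string
}

type mapper struct {
    userMap map[string]identityMapping
}

// addUserMapping canonicalizes the configured ARN once, at insertion time, so
// later lookups with a canonicalized caller ARN always agree on the form.
func (m *mapper) addUserMapping(configuredARN, username string, groups []string) {
    key := canonicalize(configuredARN)
    m.userMap[key] = identityMapping{IdentityARN: key, Username: username, Groups: groups}
}

func (m *mapper) lookup(canonicalARN string) (identityMapping, bool) {
    im, ok := m.userMap[canonicalARN]
    return im, ok
}

func main() {
    m := &mapper{userMap: map[string]identityMapping{}}
    m.addUserMapping("arn:aws:iam::111122223333:role/Admin", "admin", []string{"system:masters"})

    // The server hands the mapper an already-canonicalized ARN.
    if im, ok := m.lookup(canonicalize("arn:aws:iam::111122223333:role/Admin")); ok {
        fmt.Println("mapped to", im.Username, im.Groups)
    }
}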

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale