RedHatInsights/insights-rbac

Inconsistent get_principal_access result

Closed this issue · 4 comments

In the QE environment, the user insights-qa gets two different result sets from get_principal_access depending on whether the call is made externally via 3scale or internally from a pod. The user is a member of the "Catalog Administrators" group and has the "Catalog Administrator" role.
The external call returns this ACL:
{:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"*"}}]}
This entry is missing when the call is made internally from the pod.

To Reproduce
Steps to reproduce the behavior:

  1. Call get_principal_access for insights-qa externally
  2. Call get_principal_access from a pod and pass in x-rh-identity with the following base64 string

"eyJpZGVudGl0eSI6eyJpbnRlcm5hbCI6eyJhdXRoX3RpbWUiOjAsImF1dGhfdHlwZSI6ImJhc2ljLWF1dGgiLCJvcmdfaWQiOiIxMTc4OTc3MiJ9LCJhY2NvdW50X251bWJlciI6IjYwODk3MTkiLCJ1c2VyIjp7ImZpcnN0X25hbWUiOiJKZWZmIiwibGFzdF9uYW1lIjoiTmVlZGxlIiwiaXNfaW50ZXJuYWwiOmZhbHNlLCJpc19hY3RpdmUiOnRydWUsImxvY2FsZSI6ImVuX1VTIiwiaXNfb3JnX2FkbWluIjp0cnVlLCJ1c2VybmFtZSI6Imluc2lnaHRzLXFhIiwiZW1haWwiOiJqbmVlZGxlK3FhQHJlZGhhdC5jb20ifSwidHlwZSI6IlVzZXIifSwiZW50aXRsZW1lbnRzIjp7Imluc2lnaHRzIjp7ImlzX2VudGl0bGVkIjp0cnVlfSwib3BlbnNoaWZ0Ijp7ImlzX2VudGl0bGVkIjp0cnVlfSwic21hcnRfbWFuYWdlbWVudCI6eyJpc19lbnRpdGxlZCI6ZmFsc2V9LCJoeWJyaWRfY2xvdWQiOnsiaXNfZW50aXRsZWQiOnRydWV9fX0="
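As a sanity check, the header can be decoded to confirm both calls carry the same identity. A minimal Ruby sketch (the `decode_identity` helper is hypothetical; note that `Base64.decode64` skips characters outside the base64 alphabet, so a header that picked up whitespace from line wrapping still decodes):

```ruby
require 'base64'
require 'json'

# Hypothetical helper: decode an x-rh-identity header into a Ruby hash.
# Base64.decode64 ignores non-alphabet characters, so embedded whitespace
# from line wrapping does not break decoding.
def decode_identity(header)
  JSON.parse(Base64.decode64(header))
end

# Round-trip demo with a minimal identity payload:
sample = Base64.strict_encode64(
  { identity: { user: { username: 'insights-qa' },
                internal: { org_id: '11789772' } } }.to_json
)
identity = decode_identity(sample)
puts identity.dig('identity', 'user', 'username') # => "insights-qa"
```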

  3. Compare the results and check for catalog:portfolios:write with *
  4. See error

Expected behavior
The results should be the same for both the internal and the external call.

This is the diff of the two result sets:

diff direct.txt from_pod.txt 
1,2d0
< {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"102"}}]}
< {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"104"}}]}
3a2,3
> {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"613"}}]}
> {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"615"}}]}
5,7d4
< {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"265"}}]}
< {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"313"}}]}
< {:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"*"}}]}
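Since most of the diff above looks like reordering, a set-based comparison would separate genuine missing entries from ordering noise. A rough sketch (the variable and helper names are placeholders, not from the report):

```ruby
require 'set'

# Normalize an ACL to a comparable key: the permission plus a sorted list
# of attribute filters, so element order never affects the comparison.
def acl_key(acl)
  filters = (acl[:resourceDefinitions] || []).map { |rd| rd[:attributeFilter] }
  [acl[:permission], filters.sort_by(&:to_s)]
end

# Report entries present in one list but not the other, ignoring order.
def acl_diff(a, b)
  left  = a.map { |acl| acl_key(acl) }.to_set
  right = b.map { |acl| acl_key(acl) }.to_set
  { only_in_first: (left - right).to_a, only_in_second: (right - left).to_a }
end
```

Run against the two parsed result sets, an entry such as the wildcard catalog:portfolios:write ACL that is genuinely absent from one side shows up in only_in_first (or only_in_second), while pure reordering produces an empty diff.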

I tried to reproduce this bug but was unable to. I compared the results from external 3scale with both a port-forwarded request and a curl from a pod.

curl -X GET 'http://rbac.rbac-qa.svc:8080/api/rbac/v1/access/?application=catalog&limit=200' -H 'Accept: */*' -H 'x-rh-identity: eyJpZGVudGl0eSI6eyJpbnRlcm5hbCI6eyJhdXRoX3RpbWUiOjAsImF1dGhfdHlwZSI6ImJhc2ljLWF1dGgiLCJvcmdfaWQiOiIxMTc4OTc3MiJ9LCJhY2NvdW50X251bWJlciI6IjYwODk3MTkiLCJ1c2VyIjp7ImZpcnN0X25hbWUiOiJKZWZmIiwibGFzdF9uYW1lIjoiTmVlZGxlIiwiaXNfaW50ZXJuYWwiOmZhbHNlLCJpc19hY3RpdmUiOnRydWUsImxvY2FsZSI6ImVuX1VTIiwiaXNfb3JnX2FkbWluIjp0cnVlLCJ1c2VybmFtZSI6Imluc2lnaHRzLXFhIiwiZW1haWwiOiJqbmVlZGxlK3FhQHJlZGhhdC5jb20ifSwidHlwZSI6IlVzZXIifSwiZW50aXRsZW1lbnRzIjp7Imluc2lnaHRzIjp7ImlzX2VudGl0bGVkIjp0cnVlfSwib3BlbnNoaWZ0Ijp7ImlzX2VudGl0bGVkIjp0cnVlfSwic21hcnRfbWFuYWdlbWVudCI6eyJpc19lbnRpdGxlZCI6ZmFsc2V9LCJoeWJyaWRfY2xvdWQiOnsiaXNfZW50aXRsZWQiOnRydWV9fX0='

All responses returned the same result.

However, when looking at the logs I can see that pagination in increments of 10 was taking place. Since this response is derived data (not just a straight DB look-up), I wonder whether you may be seeing a subtle ordering issue that needs to be resolved.

Could you check whether you still see the discrepancy when you request with a limit >= the total count, so that all access objects come back in a single request? If the discrepancy goes away, increasing the limit could serve as a short-term workaround. If it persists, please confirm the local cluster service URL you're hitting.
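Another way to rule pagination out is to walk every page and accumulate the results before comparing. A sketch of the paging loop, assuming the v1 response carries `data` and `meta.count` (the `fetch_page` callable and the stub are illustrative, not from the report):

```ruby
# Collect every access object by paging until meta.count is exhausted.
# `fetch_page` is any callable returning the parsed JSON page for a given
# offset, so the loop can be exercised without a live RBAC service.
def fetch_all_access(limit:, fetch_page:)
  results = []
  offset = 0
  loop do
    page = fetch_page.call(offset)
    results.concat(page['data'])
    offset += limit
    break if offset >= page.dig('meta', 'count').to_i
  end
  results
end

# Stubbed demo: 5 items fetched 2 at a time takes three pages.
stub = ->(offset) { { 'data' => [offset], 'meta' => { 'count' => 5 } } }
p fetch_all_access(limit: 2, fetch_page: stub) # => [0, 2, 4]
```

Against the live service, fetch_page would wrap a Net::HTTP GET of /api/rbac/v1/access/?application=catalog&limit=...&offset=... with the x-rh-identity header set.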

@chambridge I tried with different limits but couldn't re-create the problem. Previously, I saw that this ACL was missing:
{:permission=>"catalog:portfolios:write", :resourceDefinitions=>[{:attributeFilter=>{:key=>"id", :operation=>"equal", :value=>"*"}}]}

And it comes from the Catalog Administrators role. Are you able to see the above ACL show up?

@mkanoor Yes, I was always able to see the above ACL.