camaraproject/IdentityAndConsentManagement

Camara Identity Profile the way forward

AxelNennker opened this issue · 11 comments

Problem description
We now have two proposals for a CAMARA "profile", which take inherently different approaches to profiling existing standards.

While the proposal from TEF in PR #113
goes the MobileConnect way of profiling, DT's proposal in PR #121 goes the FAPI 2.0 way.

The major difference between the two is that the MobileConnect way restates the content of all the standards CAMARA wants to profile, while the FAPI 2.0 way mostly narrows down options in the existing standards.

TEF document: (these links might become outdated. If they do, please go to the respective PR and view the file there)
https://github.com/camaraproject/IdentityAndConsentManagement/blob/3a17b29fc1d18f8e8c2816b5fdb655885b8cb02c/documentation/CAMARA-OIDC-profile.md

DT document: (these links might become outdated. If they do, please go to the respective PR and view the file there)
https://github.com/camaraproject/IdentityAndConsentManagement/blob/f9fac42c3e1079e12ffa23492e579528f2e5669f/documentation/CAMARA-Security-Interoperability.md

Expected action
Decide which "profile" this working group wants to accept as the basis of our profile

Additional context

My, @AxelNennker , view on pros and cons:

Arguments pro and con #113

pro: continues the MobileConnect work
pro: one document that specifies everything in one place makes it easier for newbies to understand

con: specifying everything in one document makes it hard to tell whether changes and rephrasings of request-parameter definitions are intentional, or mistakes that could lead to misunderstandings or security risks.
con: by rephrasing definitions, the original standard's security considerations may no longer apply
con: changes to standards are not clearly marked as such.
con: mistakes happen and are harder to find in a long document.
con: the original standards are 10 years old and newer standards exist that improve security and interoperability. If we go the MobileConnect way we would need to repeat all those standards in this one document. Examples of new standards Camara might consider profiling: FAPI 2.0 Grant Management, OAuth 2.0 Pushed Authorization Requests

Arguments pro and con #121

con: As an implementer you have to read and understand the profiled standards. Then again, if you don't do that, you should not be an implementer.

pro: by choosing implementation options from existing standards the security considerations of the original standard are kept intact
pro: the profile is a concise document, highlighting the areas implementers need to focus on.
pro: When implementers choose an OIDF-certified solution, it is easy to configure that solution based on the profile, because the profile clearly states, for example, which PKCE code_challenge_method is mandatory or recommended.
pro: "The FAPI 2.0 way" is a recently OIDF approved way of profiling standards. Newer is not always better but here it is.

Please add your own pro and cons in the comments.

The proposed style in #121 makes it easier to identify the points CAMARA is defining, and it also addresses some of the comments submitted on #113 by @mengan
cc: @murthygorty

Having read both specs (and having a long experience of helping many organisations implement OAuth2 profiles as both clients and authorization servers) the spec in #121 looks vastly more straightforward to implement. It is clear where it is defining behaviours that differ from the base specification, and hence it would be simple to take an existing authorization server (whether custom code or off the shelf vendor product) and update it to implement the specification. It is also simple to see how you might take an existing test suite for the underlying protocols and update it to conform to this specification. It is also more aligned with long established best practices, for example it requires Authorization Servers to implement PKCE.

For the specification in #113, it would take me a week or so just to analyse where it differs from the underlying specifications, and my general experience of doing such analysis is that differences are discovered that were not actually intended and cause interoperability issues / rework of implementations.

The con for #121 highlighted above ('As an implementer you have to read and understand the profiled standards.') could be addressed in various ways if necessary. For example, it might be sensible to provide non-normative examples of the various protocol calls, or a short implementer's guide - but it should be made clear that any examples or implementer's guide do not define any new behaviours not already described in the specifications/profile.

I agree with the comments from @jogu, and I believe that any ambiguity (I wouldn't call it a "con") can be addressed in an implementer's guide and/or clarifications in the technical specification. I also believe that the approach in #121 gives the group more flexibility in supporting multiple use cases.

The referenced profile is easy to read and short, but the rest of the specifications are hard to read because they contain a lot of information that only applies to certain scenarios and use cases. Developers looking only at those references would not know what minimum functionality they have to implement to ensure interoperability:

  • Should they implement support for the display, claims, or similar parameters in the OAuth authorization code flow, or can they ignore them?
  • What is the handling behaviour of the openid scope or its absence? (not explicit in the standard)
  • Should they implement the UserInfo endpoint and support for the scopes and claims included in the OpenID Connect specification?

Moreover, there are cases where the specifications conflict. For instance, regarding sending the request object, the OpenID Connect specification and the referenced OAuth JWT-Secured Authorization Request (JAR) specification are contradictory. The first says "parameters MAY also be passed using the OAuth request syntax even when a Request Object is used". The second says: you can pass them using the OAuth request syntax, but "the authorization server MUST only use the parameters included in the Request Object."
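The contradiction can be sketched as two different parameter-resolution rules (a hypothetical illustration; the function names are mine, not taken from either specification):

```python
def effective_params_oidc(query_params: dict, request_object: dict) -> dict:
    """OpenID Connect Core style: plain OAuth query parameters MAY accompany
    the Request Object; values inside the Request Object take precedence."""
    merged = dict(query_params)
    merged.update(request_object)
    return merged


def effective_params_jar(query_params: dict, request_object: dict) -> dict:
    """RFC 9101 (JAR) style: the authorization server MUST only use the
    parameters included in the Request Object; OAuth query syntax is ignored."""
    return dict(request_object)
```

With `query_params = {"scope": "openid", "state": "abc"}` and a request object carrying only `{"scope": "openid profile"}`, the first rule keeps `state` while the second drops it - exactly the kind of divergence a profile has to pin down explicitly.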

If the standards you reference change, you as a developer may not be aware that something has changed. Are those changes assumed to be automatically incorporated into the profile? Perhaps you are not interested in them, or they introduce backward-compatibility problems.

You mention hypothetical mistakes in the self-contained document, but that is a single source of error that can be fixed. In the referenced model, there are multiple sources of error: developers misinterpreting the specifications, or even consulting the wrong version of a standard (because in practice they will not follow the link in the profile reference every day, but rather whatever Google returns when they search).

To me, the inclusion of PKCE or DPoP in a profile is not a valid argument for preferring one over the other. The self-contained profile only includes the functionality on which there is consensus right now. We are not saying these cannot be included if we all agree on them in the future.

@garciasolero

The referenced profile is easy to read and short, but the rest of the specifications are hard to read because they contain a lot of information that only applies to certain scenarios and use cases. Developers looking only at those references would not know what minimum functionality they have to implement to ensure interoperability:

Do we know who the audience of this specification is? In particular:

  1. Is the assumption that companies implementing this profile have existing authorization servers that they will use for this API, or that they will create brand new authorization servers just for this project?
  2. Are we expecting developers that want to access APIs to build upon the standard OAuth2/OpenID Connect libraries for their language or to implement everything from scratch?

We need to separate two aspects if we want to make a fair comparison.

  1. The format and level of detail. Should the profile be a "delta" or "self-contained"? Both should be valid as long as they are complete (leaving no room for interpretation). #121 currently doesn't address some REAL issues we have faced integrating real telcos in production during the last few months. That work has already been done in #113. I've added more details at #121 (comment).

  2. We need to separate that discussion from new features opportunistically added to the profiles. I think we can agree on this. #113 only includes what has already been agreed; #121 adds new features. This complicates the debate and leads to an unfair situation where it is tempting to vote for one option just because it bundles a feature that has never been discussed. For example, DPoP: it has not even been mentioned in CAMARA in a year! Suddenly T-Mobile US suggests including it in #113, DPoP is included in #121 (what a coincidence), and people from T-Mobile argue that #121 including that new feature is a pro. Please...

My suggestion to move forward is:

  1. Remove extra features that have not even been discussed yet (DPoP, PKCE) from #121, to compare apples to apples. Open a separate issue to discuss each new feature as a separate track.
  2. In parallel, give 1-2 weeks to gather all the issues we currently have in production due to different interpretations of the RFCs. Review whether they are solved in #113 and #121. Fix them otherwise.
  3. Choose one option. I don't have a strong opinion as long as both are complete and reflect our current agreements, plus the clarifications needed to avoid repeating the same issues that have left us (CSPs) less than 100% interoperable.

What do you think?

@jogu,

Do we know who the audience of this specification is? In particular:

1. Is the assumption that companies implementing this profile have existing authorization servers that they will use for this API, or that they will create brand new authorization servers just for this project?

2. Are we expecting developers that want to access APIs to build upon the standard OAuth2/OpenID Connect libraries for their language or to implement everything from scratch?

In my opinion, it is irrelevant whether you have to provide the service from an existing authserver or a new one. You should know what is the minimum that your implementation has to support.

In the same way for client integrations, whether they use libraries or not. You should know what parameters you could send and with what values, and the rest you can directly ignore because the behavior is not defined.

In my opinion, it is irrelevant whether you have to provide the service from an existing authserver or a new one. You should know what is the minimum that your implementation has to support.

If you have existing auth servers that already support the specs (or are looking to license a server for this project), a delta document is considerably quicker to work with. A full specification is substantially more troublesome to compare and it is incredibly easy to miss differences. I agree there is more detail that needs to be fleshed out in the current delta document to make it clear which optional features of specifications are and aren't used.

Similarly if you are using an existing library, it's considerably more useful to know which parts of a specification are used than how to do things at the protocol level.

The problem with self-contained specs gets worse as more is added; introducing PKCE and a method of sender-constraining access tokens (which, while not yet agreed by this working group, are really quite well-established best practices) would start to make a self-contained specification unmanageable. Again, most implementers would want to take off-the-shelf implementations and just know which parts of the specification are and aren't needed. There generally isn't a need for people to implement PKCE or DPoP from first principles these days, so simply stating "use PKCE with the S256 code_challenge_method as per RFC 7636" is sufficient for people to verify whether the OAuth client they are using has already implemented it.
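As a minimal sketch of what that one-line requirement actually pins down: the S256 method of RFC 7636 is just base64url(SHA-256(code_verifier)) with padding stripped. The helper names below are illustrative, not from any CAMARA document:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Client side: generate a code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes yield a 43-character base64url verifier (allowed range: 43-128).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge


def verify_s256(code_verifier: str, code_challenge: str) -> bool:
    """Authorization-server side: recompute the challenge and compare."""
    recomputed = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    return recomputed == code_challenge
```

The RFC 7636 Appendix B test vector (verifier `dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk` → challenge `E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM`) can be used to check any implementation against the spec.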

If there is a need for first principles descriptions, that is in my opinion better covered by non-normative examples or implementation guides.

Per the feedback and suggestion, issue #125 has been added with urgency to address this. Please note that this issue is only added for DPoP; it doesn't reference or cover PKCE.

Another issue that is open to interpretation by a reader of the referenced profile is which errors can be received, and under what circumstances, depending on the scenario. For instance, we already encountered this ambiguity when dealing with the question of what the response should be in the authorize call of the authorization code flow when the user does not give consent:

It's important to consider which security profile:

  1. Has been formally proven (formal security analysis)
  2. Has extensive vendor support (certified)
  3. Has certification suite available for implementers and vendors
  4. Has active community of implementers globally.

It's a separate issue for me whether to use layered specs in the CAMARA profile (the traditional FAPI way) or an all-in-one profile that repeats all the relevant snippets. Experience shows that all-in-one profiles tend to go out of date quickly as the underlying specifications evolve. They are harder to maintain and require deep knowledge of specs that keep changing.