Distinction between policies which can be enforced by technology and by law
A while ago we had a discussion around https://protect.oeg.fi.upm.es/odrl-access-control-profile/oac.html
I was emphasizing that while many ODRL policies can be enforced by technology (e.g. by denying access), others can only be enforced by legal action. I think we should provide distinct ways to express policies based on how they can be enforced.
An interesting example here would be the problems related to the acl:Control access mode. While a Resource Owner setting policies with this mode may get the impression that it will prevent users without acl:Control from sharing their access, I would consider such thinking naive.
In the Delegation use case I list various pretty straightforward ways in which someone without acl:Control access can still fully share their access by using an impersonation pattern. It may be clearer if policies restricting delegation are expressed as ones that can be enforced by law, not by technology. I think we may have ended up with this naive notion of security associated with acl:Control because we didn't provide a way to express policies which are not enforceable by technology.
TODO
- add use cases and requirements related to setting policies enforceable by law
- define how those policies can be communicated, for any level of access granted (e.g. just read) - Data Grants could be a good candidate
I'd like to note that by simply not granting some user acl:Control access, the Resource Owner hasn't communicated to them in any explicit way that sharing their access with others is prohibited. Communicating this policy among other legally enforceable policies has the advantage of making it explicit and having it communicated to the user along with whatever access has been granted to them.
When, for example, a Data Grant has such a policy attached and the grantee tries to delegate it, their Authorization Agent can warn them that doing so would break a policy set by the Resource Owner who granted them access.
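As a rough sketch of what that could look like, assuming the Data Grant shape from the Solid Application Interoperability draft; the ex: terms (ex:hasUsagePolicy, ex:enforcement, ex:LegalEnforcement) are hypothetical placeholders for vocabulary we would still need to define, and the WebIDs are made up:

@prefix interop: <http://www.w3.org/ns/solid/interop#> .
@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix oac: <https://w3id.org/oac#> .
@prefix ex: <https://example.org/ns#> .

# Data Grant giving read-only access, with a legally enforceable policy attached.
<#grant-1> a interop:DataGrant ;
    interop:grantee <https://bob.example/profile/card#me> ;
    interop:accessMode acl:Read ;
    ex:hasUsagePolicy <#no-sharing-policy> .   # hypothetical link to the attached policy

# Prohibition on sharing, which the grantee's Authorization Agent can surface as a warning.
<#no-sharing-policy> a odrl:Policy ;
    odrl:profile oac: ;
    ex:enforcement ex:LegalEnforcement ;       # hypothetical marker: enforceable by law, not by technology
    odrl:prohibition [
        odrl:assignee <https://bob.example/profile/card#me> ;
        odrl:action oac:Share
    ] .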
Just to point out that with OAC (w3id.org/oac), you can already communicate/create a policy that states that sharing with others is prohibited, e.g.,
:example-4-2 a odrl:Policy ;
    odrl:profile oac: ;
    odrl:prohibition [
        odrl:assignee <https://beatriz.databox.me/profile/card#me> ;
        odrl:target <https://beatriz.databox.me/docs/file1> ;
        odrl:action oac:Share ;
        odrl:constraint [
            odrl:leftOperand oac:Recipient ;
            odrl:operator odrl:gt ;
            odrl:rightOperand "3"^^xsd:integer
        ]
    ] .
Full example here.
However, the oac:Share term comes from DPV and there's no direct mapping to the current ACL terms.
So even to enforce this technologically there's still work we need to do.
Thank you @besteves4
So even to enforce this technologically there's still work we need to do.
I'm making a case here that many policies can NOT be enforced technologically. For example, once we have given someone access, we can't truly prevent them from sharing it further. It seems pretty straightforward to create a proxy which would enable sharing one's access using an impersonation approach.
I think that, in general, any policy that applies after a request for a protected resource has been made and access has been granted can only be enforced through legal means.
I see scenarios where, after detecting that a policy has been violated, future access could be revoked. In that case, the enforceable access policy would probably just be based on the identity of the agent whose violation has been detected. We could possibly support annotating such a new policy for that specific agent with information about the prior violation of another, more general policy as the reason for their access being revoked.
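A minimal sketch of what such an annotated revocation could look like; the ex: terms (ex:revocationReason, ex:PolicyViolation, ex:violatedPolicy, ex:detectedAt) are hypothetical, and <#no-sharing-policy> stands in for whatever general policy was violated:

@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex: <https://example.org/ns#> .

# Technologically enforceable rule: deny this specific agent any further use of the resource.
<#revocation-for-bob> a odrl:Policy ;
    odrl:prohibition [
        odrl:assignee <https://bob.example/profile/card#me> ;
        odrl:target <https://alice.example/docs/file1> ;
        odrl:action odrl:use
    ] ;
    # Hypothetical annotation recording why the access was revoked.
    ex:revocationReason [
        a ex:PolicyViolation ;
        ex:violatedPolicy <#no-sharing-policy> ;
        ex:detectedAt "2024-05-01T12:00:00Z"^^xsd:dateTime
    ] .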
Please correct the title of this issue, from "Distinction between polices" to "Distinction between policies".
Hi. IMHO, "technological control" is entirely based on what the environment provides or makes possible. For the current Solid spec, this means controlling whether someone can read/write/modify/erase data, and only this. There is no "technological control" over why someone may want to read/write that data, i.e. its purpose.
For example, if an app wants to read your contacts (data) to help with sending an email (purpose), then the only way to technologically enforce or at least control this is to assert the purpose alongside the access request. In this case, {contact} -> {send email} should pass the authorisation and {contact} -> {find common people} should not, because even if the data and access are the same (contact & email), the purpose is not.
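A rough sketch of how that check could be expressed using ODRL's built-in odrl:purpose left operand; the app identifier, the contacts resource, and the purpose IRIs (ex:SendEmail, ex:FindCommonPeople) are hypothetical placeholders:

@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix ex: <https://example.org/ns#> .

# Permission to read the contacts only for the purpose of sending an email.
<#contacts-for-email> a odrl:Policy ;
    odrl:permission [
        odrl:assignee <https://mail-app.example/id> ;
        odrl:target <https://alice.example/contacts> ;
        odrl:action odrl:read ;
        odrl:constraint [
            odrl:leftOperand odrl:purpose ;
            odrl:operator odrl:eq ;
            odrl:rightOperand ex:SendEmail
        ]
    ] .

# A request asserting ex:FindCommonPeople as its purpose would fail the
# constraint above even though the target and access mode are the same.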
The only method for completely controlling the purpose would be to set up a trusted execution environment where that purpose is implemented. Otherwise there is always a degree of trust needed that the app/service does only what it says it is doing. There is always legal enforceability, but IMHO technology should assist with it by providing accountability artefacts (e.g. policies accepted, logs recorded). This is why I argue for technological designs that are compatible with legal requirements, to make such tasks easier.
Here is an article where we argue why there should be user-side stored logs and (authenticated) receipts to provide accountability where it is not always possible to control all of the execution: https://doi.org/10.1109/ACCESS.2022.3157850. It's about consent, but you can consider it applicable to any decision, including providing apps with certain authorisations. This means storing the decision made (e.g. which permissions to whom for what purpose) as well as the actions (e.g. app read data for stated purpose), and through these enabling legal/technical/other forms of accountability.
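To make that concrete, here is a rough sketch of what such user-side accountability artefacts could look like as RDF; the ex: receipt/log vocabulary is entirely hypothetical and would need to be defined and standardised:

@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex: <https://example.org/ns#> .

# Receipt of the authorisation decision: which permissions, to whom, for what purpose.
<#decision-42> a ex:AuthorizationReceipt ;
    ex:grantedTo <https://mail-app.example/id> ;
    ex:accessMode acl:Read ;
    ex:target <https://alice.example/contacts> ;
    ex:purpose ex:SendEmail ;
    ex:decidedAt "2024-05-01T12:00:00Z"^^xsd:dateTime .

# Log entry for an action later performed under that decision.
<#log-entry-7> a ex:AccessLogEntry ;
    ex:underDecision <#decision-42> ;
    ex:accessMode acl:Read ;
    ex:statedPurpose ex:SendEmail ;
    ex:performedAt "2024-05-02T09:30:00Z"^^xsd:dateTime .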