Best Practices for Inline Contexts
OR13 opened this issue · 2 comments
I am starting to feel that, although they increase size, inline contexts are a best practice over remote / by-reference contexts....
A couple notes:
- prefixing is safer in inline contexts: you don't have to worry about collisions for popular prefixes like `sec`.
- only define the terms you are using... this is the biggest point. It's hard to tell what parts of a massive context, like schema.org, you are using... it's easy to fix this by only pulling the terms you need into, say, `citizenship` or `traceability`... IMO, that's still not as good as only defining the terms you are actually using in the VC.
- avoids dead terminology... folks are afraid to delete terms from popular contexts, so those contexts tend to fill up with lots of deprecated terms... that's not an issue with inline contexts.
- not even a hint of network requests: inlining makes it even clearer that no network requests are required to verify a VC.
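A minimal sketch of the "only define the terms you use" point, assuming invented property names and vocabulary URLs (nothing below is real schema.org or traceability vocabulary):

```python
# Hypothetical credential whose inline @context defines exactly the
# terms it uses -- no dead terminology, and "sec" is pinned locally
# so there is no prefix-collision risk.
credential = {
    "@context": {
        "sec": "https://w3id.org/security#",
        "traceabilityInfo": "https://example.org/vocab#traceabilityInfo",
    },
    "traceabilityInfo": "lot-1234",
}

def undefined_terms(doc):
    """Return top-level properties not covered by the inline context."""
    context = doc.get("@context", {})
    defined = set(context) if isinstance(context, dict) else set()
    keywords = {"@context", "@id", "@type"}
    return [k for k in doc if k not in defined and k not in keywords]

print(undefined_terms(credential))  # → []
```

With a remote context you can't run a local completeness check like this without first dereferencing the URL.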
Are there any additional benefits we should document?
Are there any serious reasons not to inline IF size is not a concern? CBOR-LD is the only thing I can think of, and it does not really exist yet...
Reasons not to inline contexts:
- Developers that are not well versed in JSON-LD often get it wrong (mistyping URLs, etc.)
- It removes a developer's ability to easily use JSON Schema to check well-formedness.
- It requires systems processing JSON-LD input to expand and recompact to their context before processing and writing code against the incoming data structure.
In short, inlining contexts removes one of the biggest benefits of JSON-LD: you can treat it as pure JSON for the early parts of the processing pipeline, until you're reasonably sure you're not dealing with an attack or malformed input... and THEN you can switch to JSON-LD processing.
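A rough sketch of that two-phase pipeline, with an arbitrary size limit and an invented function name, treating the input as pure JSON until the cheap screens pass:

```python
import json

def parse_and_screen(raw: bytes, max_bytes: int = 64_000) -> dict:
    """Phase 1: plain-JSON handling only -- no JSON-LD processing,
    no context resolution, no network access."""
    if len(raw) > max_bytes:                 # cheap size guard first
        raise ValueError("input too large")
    doc = json.loads(raw)                    # pure JSON parse
    if not isinstance(doc, dict) or "@context" not in doc:
        raise ValueError("not a JSON-LD shaped document")
    return doc  # only now hand off to actual JSON-LD tooling (phase 2)

doc = parse_and_screen(b'{"@context": "https://www.w3.org/ns/credentials/v2", "id": "urn:x:1"}')
```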
> 2.0 It removes a developer's ability to easily use JSON Schema to check well-formedness.

I don't understand how this is the case. I assume you mean using JSON Schema to check for a URI like `https://w3id.org/traceability/v1`?... but JSON Schema would potentially be wrong about the schema requirements of such a check if the context had shifted underneath... in short, JSON Schema can help with JSON shape, but not term validation... the shape of the JSON remains the same except for the value of the inlined context.
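For example, a shape-only check (hand-rolled in Python here rather than real JSON Schema, to keep it self-contained) passes whether `@context` is a reference or inlined, which is exactly why it cannot catch a remote context that shifted underneath:

```python
def context_is_well_formed(doc: dict) -> bool:
    """Shape-only check: accepts a by-reference, inline, or mixed
    @context. It validates JSON shape, not term semantics -- a remote
    context could change its definitions without failing this check."""
    ctx = doc.get("@context")
    if isinstance(ctx, str):                      # by reference
        return ctx.startswith("https://")
    if isinstance(ctx, list):                     # mixed
        return all(isinstance(c, (str, dict)) for c in ctx)
    return isinstance(ctx, dict)                  # inline

assert context_is_well_formed({"@context": "https://w3id.org/traceability/v1"})
assert context_is_well_formed({"@context": {"name": "https://schema.org/name"}})
```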
> 3.0 It requires systems processing JSON-LD input to expand and recompact to their context before processing and writing code against the incoming data structure.

If you plan on doing JSON-LD processing, yes... if you don't, you would be ignoring the `@context` entirely anyway.
I guess you are really saying that trust in JSON members comes from specific values of `@context`... I am not sure I agree with that... I don't trust any JSON... really, ever....
I don't trust its shape, whether the context is inline or not, so I use JSON Schema to check its shape.
I don't trust that it's valid JSON-LD, inline or not, so I use a semantic checker to test it.
If its shape and semantics are valid, I trust that it meets my requirements, but I still don't trust it, so if possible I check signatures / hashes... if those match, I still don't trust it, but I believe it's not been altered from whatever form I expected it in.
In the context of VCs, I don't see inlining contexts as in any way harmful if a verifier is always performing defense in depth:
- shape check
- semantic check
- signature check
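A toy version of that three-step pipeline, using a SHA-256 digest as a stand-in for a real signature suite (the check bodies and ordering below are illustrative, not any spec's requirements):

```python
import hashlib
import hmac
import json

def verify(doc: dict, expected_digest: str) -> bool:
    """Defense-in-depth sketch: shape, then semantics, then integrity.
    A real verifier would use JSON Schema, a JSON-LD processor, and a
    Data Integrity proof suite for these steps respectively."""
    # 1. shape check (stand-in for JSON Schema validation)
    if not isinstance(doc.get("@context"), (str, dict, list)):
        return False
    # 2. semantic check (stand-in for JSON-LD expansion / term checks)
    if "@context" not in doc:
        return False
    # 3. integrity check over canonical JSON bytes (stand-in for a signature)
    payload = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    return hmac.compare_digest(hashlib.sha256(payload).hexdigest(), expected_digest)
```

Each layer only narrows what can go wrong; none of them, individually, confers trust.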
It does increase size, but after a verifier has checked it, they can actually replace the inlined value with the reference, and the signature would still be valid, right?....
This is not to say that contexts / focused vocabularies are not useful, just wondering how they're used with or without inlining... it seems like treating specific canonicalized values of `@context` as "types" is ok when you have really, really high confidence in them, but no better than inlining if they are highly unstable / not trustworthy.