Does PyLD do scoped contexts within node objects?
See Example 28 in http://www.w3.org/TR/json-ld/ for the kind of JSON-LD that I am having trouble with.
I have some code producing JSON-LD containing a sub-dictionary with a context of its own. It appears that PyLD does not process that context, while it handles the main context of the data structure just fine.
This apparent issue is independent of whether the scoped context is inlined or not.
My service is not (yet) available externally, so I can't provide a link to the JSON-LD.
Yes, PyLD supports scoped contexts within node objects. It is a fully-compliant JSON-LD processor. Anything you can do on the JSON-LD playground you should be able to do with PyLD (it's a near direct port of the JavaScript-based processor), so if something isn't working, there may be a bug. Can you provide a simple example?
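For intuition, the core of what an embedded (scoped) @context does during expansion can be sketched in plain Python. This is a deliberately simplified toy, not PyLD's actual algorithm: it only handles term-to-IRI mapping and ignores @type coercion, @base, @vocab, and all other features of the real Expansion Algorithm.

```python
def expand_node(node, active_ctx):
    """Toy sketch: terms defined in a node object's own @context are
    merged into the active context and applied to that node's keys."""
    ctx = dict(active_ctx)
    for term, defn in node.get("@context", {}).items():
        # A term definition may be a plain IRI string or a dict with "@id".
        ctx[term] = defn["@id"] if isinstance(defn, dict) else defn
    out = {}
    for key, value in node.items():
        if key == "@context":
            continue  # the context itself is consumed, not emitted
        out[ctx.get(key, key)] = value
    return out

node = {
    "@context": {"n_rows": {"@id": "http://example.org/n_rows"}},
    "n_rows": 5,
}
print(expand_node(node, {}))  # the scoped term maps to its full IRI
```

A real processor such as PyLD does considerably more (type coercion, value objects, recursion), but the key point is the same: a @context inside a node object applies to that node's own keys.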
I have made an example by inlining and cutting away extraneous material, and then expanded it using:

    import json
    from pyld import jsonld

    expanded = jsonld.expand(data)
    print(json.dumps(expanded, indent=2))

The starting JSON-LD is example4.jsonld, and the expanded output is expanded4.jsonld.
Doing so, I have finally some idea what is going on. I am not sure if it is spec-compliant, though. My original suspicion - that subcontexts are not processed - was incorrect.
The problem is the following: the entries given and result are both containers for parameters; they are separate namespaces and need their own contexts. When expanding, only those parameters in given and result (csv_file and n_rows) having non-null values are kept. The null-valued parameter (n_columns) disappears from the expanded file. Is this correct behaviour? (If yes, then I am sorry to have bothered you.)
This interface won't allow me to attach the JSON-LD files, so I have to inline them:

example4.jsonld:
{
"@context":
{
"dc": "http://purl.org/dc/elements/1.1/",
"xsd": "http://www.w3.org/2001/XMLSchema#",
"poly": "http://polycentric.org/ontology/",
"description": {
"@id": "dc:description",
"@type": "xsd:string",
"dc:description": "Full human-readable description of the entity."
},
"given": {
"@id": "poly:given",
"dc:description": "The container for the given parameters for a specification or task."
},
"result": {
"@id": "poly:result",
"dc:description": "The container for the result parameters for a specification or task."
}
},
"@id": "http://localhost:8882/task/222299c15b4f4263a170bc40dd0f2967",
"given": {
"csv_file": "http://localhost:8882/data/ef63594f11d04664a9a45432a46ea957",
"@context": {
"csv_file": {
"@id": "http://localhost:8882/spec/test/given/csv_file",
"@type": "xsd:anyURI"
}
}
},
"result": {
"n_rows": 5,
"@context": {
"n_rows": {
"@id": "http://localhost:8882/spec/test/result/n_rows",
"@type": "xsd:int"
},
"n_columns": {
"@id": "http://localhost:8882/spec/test/result/n_columns",
"@type": "xsd:int"
}
},
"n_columns": null
},
"description": ""
}
expanded4.jsonld:
[
{
"http://purl.org/dc/elements/1.1/description": [
{
"@type": "http://www.w3.org/2001/XMLSchema#string",
"@value": ""
}
],
"http://polycentric.org/ontology/result": [
{
"http://localhost:8882/spec/test/result/n_rows": [
{
"@type": "http://www.w3.org/2001/XMLSchema#int",
"@value": 5
}
]
}
],
"@id": "http://localhost:8882/task/222299c15b4f4263a170bc40dd0f2967",
"http://polycentric.org/ontology/given": [
{
"http://localhost:8882/spec/test/given/csv_file": [
{
"@type": "http://www.w3.org/2001/XMLSchema#anyURI",
"@value": "http://localhost:8882/data/ef63594f11d04664a9a45432a46ea957"
}
]
}
]
}
]
The null-valued parameter (n_columns) disappears from the expanded file. Is this correct behaviour?
Yes. See JSON-LD 1.0 General Terminology:
A key-value pair in the body of a JSON-LD document whose value is null has the
same meaning as if the key-value pair was not defined.
Also, in Section 7.8 of the Expansion Algorithm, expanded values of null cause keys to be "ignored" by dropping them from the output.
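The effect of that rule can be sketched in a few lines of plain Python; this is only an illustration of the null-dropping behaviour, not PyLD's actual implementation:

```python
import json

def drop_nulls(node):
    """Mimic the JSON-LD rule that a key whose value is null has the
    same meaning as if the key-value pair were not defined at all."""
    if isinstance(node, dict):
        return {k: drop_nulls(v) for k, v in node.items() if v is not None}
    if isinstance(node, list):
        return [drop_nulls(v) for v in node]
    return node

result = {"n_rows": 5, "n_columns": None}
print(json.dumps(drop_nulls(result)))  # n_columns is gone from the output
```

This matches what you observed: n_columns disappears from the expanded document because its null value makes it equivalent to an absent key.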
It may also be of note that a value of null is not a valid xsd:int (I think), which is the type n_columns is mapped to in the @context (not that this is considered by a JSON-LD processor; I am just commenting on the data).
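That observation can be checked with a small helper. As a hedged sketch: xsd:int is defined as a 32-bit signed integer, so its value space is the range -2147483648 to 2147483647 and contains no null; the helper below is a hypothetical illustration, not part of any JSON-LD library.

```python
def is_valid_xsd_int(value):
    """Return True if `value` falls in the xsd:int value space
    (a 32-bit signed integer). JSON null (Python None) does not."""
    # bool is a subclass of int in Python, so exclude it explicitly.
    return (isinstance(value, int)
            and not isinstance(value, bool)
            and -2**31 <= value <= 2**31 - 1)

print(is_valid_xsd_int(5))     # a valid xsd:int
print(is_valid_xsd_int(None))  # null is not a valid xsd:int
```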
OK, issue resolved. My apologies for bothering you.
And yes, I have thought about that problem with n_columns. I need to specify the type for the parameter (for the interface, and for the task solver), and the task cannot have a non-null value here, since it has not yet been executed. I would very much like to avoid defining my own datatypes to handle this, but I may have to, I suppose. Annoying.
No worries -- and good luck! :)