KnowledgeGraph

Isagog Knowledge Graph service API specification

The Isagog platform for Knowledge Graphs consists of the following logical modules:

  • interaction: user interaction interfaces
  • knowledge: concept-level methods and structures
  • language: natural language processing methods and structures
  • data: instance-level manipulation, search and query methods

Each module is tagged and features a specific root path.
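
For illustration only, the sketch below pairs each module with a root path. The enum and the path values are hypothetical placeholders, not taken from the specification, which defines the actual tags and paths.

```java
/** Hypothetical module/root-path pairing; the real paths are defined in the OpenAPI spec. */
public enum IsagogModule {
    INTERACTION("/interaction"), // user interaction interfaces (path assumed)
    KNOWLEDGE("/knowledge"),     // concept-level methods and structures (path assumed)
    LANGUAGE("/language"),       // natural language processing (path assumed)
    DATA("/data");               // instance-level manipulation, search and query (path assumed)

    private final String rootPath;

    IsagogModule(String rootPath) {
        this.rootPath = rootPath;
    }

    public String rootPath() {
        return rootPath;
    }
}
```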

The supplied Maven POM file produces a Java client and may be modified to generate any resource supported by the OpenAPI tools.

The supplied Maven POM file produces a Java server stub and may likewise be modified to generate any resource supported by the OpenAPI tools.
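
As a hedged sketch of how the generated client might be configured: the openapi-generator default Java client exposes an `ApiClient` with a configurable base path, but the package, the class names and the base URL below depend on the POM configuration and on the deployment, and are assumptions here.

```java
import org.openapitools.client.ApiClient;
import org.openapitools.client.Configuration;

public class ClientSetup {
    public static void main(String[] args) {
        // Shared client instance created by the generated code
        // (default package org.openapitools.client; configurable in the POM).
        ApiClient client = Configuration.getDefaultApiClient();

        // Point the client at an Isagog KG deployment; the URL is a placeholder.
        client.setBasePath("https://example.org/isagog");

        // Tag-specific API classes (one per module, e.g. a LanguageApi) are
        // generated alongside ApiClient and take it as a constructor argument.
    }
}
```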

Here is a high-level sketch of the platform's architecture featuring two main use cases:

User interaction

The following steps show how a user utterance is processed (a code sketch follows the list).

  1. The utterance (raw text) is received by the interaction service and forwarded to the language service.
  2. The language service returns an annotation structure over the given sentence, or a rejection message.
  3. The interaction service sends the annotated sentence to the knowledge service, which tries to build conceptual frames upon it (this is where most of the magic happens!).
  4. The interaction service selects the best frame candidate and sends it to the user for review.
  5. The interaction service dispatches the curated frame to the data service according to its type, i.e. Query or Update. Users' frame curation may feed a continuous learning process.
  6. The data service result flows to the interaction service and then to the user.
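
The sketch below mirrors the numbered steps in plain Java (16+). All interfaces, types and method names are hypothetical placeholders standing in for the actual module APIs, not part of the specification.

```java
import java.util.List;

/** Hypothetical orchestration of the steps above; no actual Isagog API names are used. */
public class InteractionFlowSketch {

    // Placeholder facades for the platform modules
    interface LanguageService { Annotation annotate(String utterance); }
    interface KnowledgeService { List<Frame> buildFrames(Annotation annotation); }
    interface DataService { Result execute(Frame frame); }
    interface User { Frame review(Frame candidate); }

    // Placeholder data structures
    record Annotation(String sentence) {}
    record Frame(String type) {}   // e.g. "Query" or "Update"
    record Result(String payload) {}

    Result handleUtterance(String utterance,
                           LanguageService language,
                           KnowledgeService knowledge,
                           DataService data,
                           User user) {
        // 1-2. forward the raw text and obtain an annotation (or a rejection)
        Annotation annotation = language.annotate(utterance);

        // 3. ask the knowledge service for candidate conceptual frames
        List<Frame> candidates = knowledge.buildFrames(annotation);

        // 4. select the best candidate and let the user review (curate) it
        Frame curated = user.review(candidates.get(0));

        // 5-6. dispatch the curated frame (Query or Update) to the data service
        //      and return the result to the user
        return data.execute(curated);
    }
}
```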

Data ingestion

Here is a sketch of data ingestion, i.e. of texts such as documents or e-mails, or of previously prepared RDF files (a code sketch follows the steps).

  1. The ingestion service discriminates the file type: it sends texts to the language service for content analysis, and structured data (e.g. CSV) to the knowledge service for conceptualization.
  2. The ingestion service puts the textual content into the document store, and possibly the extracted triples into the data store.
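
A minimal sketch of the dispatch logic, under the assumption that the file type is discriminated by extension; all service interfaces and method names are hypothetical placeholders, not the actual Isagog API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Hypothetical ingestion dispatch mirroring the two steps above. */
public class IngestionSketch {

    // Placeholder facades for the modules and stores involved
    interface LanguageService { String analyze(String text); }          // content analysis
    interface KnowledgeService { String conceptualize(String data); }   // e.g. CSV -> triples
    interface DocumentStore { void storeText(String text); }
    interface DataStore { void storeTriples(String triples); }

    void ingest(Path file,
                LanguageService language,
                KnowledgeService knowledge,
                DocumentStore documents,
                DataStore data) throws IOException {
        String content = Files.readString(file);
        String name = file.getFileName().toString().toLowerCase();

        if (name.endsWith(".csv") || name.endsWith(".rdf")) {
            // 1. structured data goes to the knowledge service for conceptualization
            String triples = knowledge.conceptualize(content);
            // 2. ...and the extracted triples end up in the data store
            data.storeTriples(triples);
        } else {
            // 1. texts (documents, mails) go to the language service for content analysis
            language.analyze(content);
            // 2. the textual content is put into the document store
            documents.storeText(content);
        }
    }
}
```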