qEndpoint

A highly scalable RDF triple store with full-text and GeoSPARQL support


Report a Bug · Request a Feature · Ask a Question




About

The qEndpoint is a highly scalable triple store with full-text and GeoSPARQL support. It can be used as a standalone SPARQL endpoint or as a dependency. qEndpoint is used, for example, in Kohesio, where each interaction with the UI corresponds to an underlying SPARQL query on the qEndpoint. qEndpoint is also part of QAnswer, enabling question answering over RDF graphs.



Getting Started

Prerequisites

For the backend/benchmark

  • Java 17
  • Maven

For the frontend (not mandatory to run the backend)

  • Node.js and npm

Installation

Scoop

You can install qEndpoint using the Scoop package manager.

You need to add the the-qa-company bucket; then you will be able to install the qendpoint manifest using these commands:

# Add the-qa-company bucket
scoop bucket add the-qa-company https://github.com/the-qa-company/scoop-bucket.git
# Install qEndpoint CLI
scoop install qendpoint

Brew

You can install qEndpoint using the Brew package manager.

You can install it using this command:

brew install the-qa-company/tap/qendpoint

Command Line Interface

If you don't have access to Brew or Scoop, the qEndpoint command line interface is available on the releases page in the file qendpoint-cli.zip. After extracting it, you will find a bin directory that you can add to your PATH.

Code

Back-end
  • Clone the qEndpoint from this link: git clone https://github.com/the-qa-company/qEndpoint.git

  • Move to the back-end directory cd qendpoint-backend

  • Compile the project using this command: mvn clean install -DskipTests

  • Run the project using java -jar target/qendpoint-backend-1.2.3-exec.jar (replace the version with the latest version)

    You can use the project as a dependency (replace the version with the latest version):

<dependency>
    <groupId>com.the_qa_company</groupId>
    <artifactId>qendpoint</artifactId>
    <version>1.2.3</version>
</dependency>
Front-end
  • Clone the qEndpoint from this link: git clone https://github.com/the-qa-company/qEndpoint.git
  • Move to the front-end directory cd qendpoint-frontend
  • Install the packages using npm install
  • Run the project using npm start

Installers

The endpoint installers for Linux, MacOS and Windows can be found here. The installers do not contain the command line interface (CLI), only the endpoint.


Usage

Docker Image

You can use one of our preconfigured Docker images.

qacompany/qendpoint

DockerHub: qacompany/qendpoint

This Docker image contains the endpoint; you can upload your dataset and start using it.

You just have to run the image and it will set up the environment and the repository, using the snippet below:

docker run -p 1234:1234 --name qendpoint qacompany/qendpoint

You can also specify the amount of memory allocated by setting the Docker environment variable MEM_SIZE. By default this value is set to 6G. You should not set this value below 4G because you will certainly run out of memory with large datasets. For larger datasets a larger value is recommended; for example, Wikidata-all won't run with less than 10G.

docker run -p 1234:1234 --name qendpoint --env MEM_SIZE=6G qacompany/qendpoint

You can stop the container and restart it at any time, keeping the data inside (qendpoint is the name of the container), using the following commands:

docker stop qendpoint
docker start qendpoint

Note: this container may occupy a large portion of the disk due to the size of the data index, so make sure to delete the container if you don't need it anymore, using the command below:

docker rm qendpoint

qacompany/qendpoint-wikidata

DockerHub: qacompany/qendpoint-wikidata

This Docker image contains the endpoint with a script to download an index containing the Wikidata Truthy statements from our servers, so you simply have to wait for the index download and start using it.

You just have to run the image and it will prepare the environment by downloading the index and setting up the repository using the code below:

docker run -p 1234:1234 --name qendpoint-wikidata qacompany/qendpoint-wikidata

You can also specify the amount of memory allocated by setting the Docker environment variable MEM_SIZE. By default this value is set to 6G. For larger datasets a larger value is recommended; for example, Wikidata-all won't run with less than 10G.

docker run -p 1234:1234 --name qendpoint-wikidata --env MEM_SIZE=6G qacompany/qendpoint-wikidata

You can specify the dataset to download using the environment variable HDT_BASE. By default the value is wikidata_truthy; the currently available values are:

  • wikidata_truthy - Wikidata Truthy statements (needs at least 6G of memory)
  • wikidata_all - Wikidata-all statements (needs at least 10G of memory)

docker run -p 1234:1234 --name qendpoint-wikidata --env MEM_SIZE=10G --env HDT_BASE=wikidata_all qacompany/qendpoint-wikidata

You can stop the container and restart it at any time, keeping the data inside (qendpoint-wikidata is the name of the container), using the following commands:

docker stop qendpoint-wikidata
docker start qendpoint-wikidata

Note: this container may occupy a large portion of the disk due to the size of the data index, so make sure to delete the container if you don't need it anymore, using the command below:

docker rm qendpoint-wikidata

Useful tools

You can access http://localhost:1234, where you will find a GUI to write and execute SPARQL queries, as well as a RESTful API that you can use to run queries from any application over HTTP, like so:

curl -H 'Accept: application/sparql-results+json' localhost:1234/api/endpoint/sparql --data-urlencode 'query=select * where{ ?s ?p ?o } limit 10'

Note: the first query will take some time because the index has to be mapped to memory; later queries will be much faster!

Most of the result formats are available; for example you can use:

  • JSON: application/sparql-results+json
  • XML: application/sparql-results+xml
  • Binary RDF: application/x-binary-rdf-results-table
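
For example, here is a minimal Java sketch that requests one of these formats over the RESTful API using the standard java.net.http client (assuming an endpoint running on localhost:1234 as in the examples above):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SparqlClientExample {
    public static void main(String[] args) throws Exception {
        // form-encode the query, like curl --data-urlencode does
        String body = "query=" + URLEncoder.encode(
                "SELECT * WHERE { ?s ?p ?o } LIMIT 10", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:1234/api/endpoint/sparql"))
                // ask for JSON results; use another Accept header for XML or binary RDF
                .header("Accept", "application/sparql-results+json")
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}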

Standalone

You can run the endpoint with this command:

java -jar endpoint.jar &

You can find a template of the application.properties file in the backend source.

If you already have the HDT file of your graph, you can put it in the hdt-store directory before starting the endpoint (by default hdt-store/index_dev.hdt).

If you don't have an HDT file, you can upload a dataset to the endpoint by running this command while the endpoint is running:

curl "http://127.0.0.1:1234/api/endpoint/load" -F "file=@mydataset.nt"

where mydataset.nt is the RDF file to load; you can use all the formats supported by RDF4J.

As a dependency

You can create a SPARQL repository using this method; don't forget to initialize the repository:

// Create a SPARQL repository
SparqlRepository repository = CompiledSail.compiler().compileToSparqlRepository();
// Init the repository
repository.init();

You can execute SPARQL queries using the executeTupleQuery, executeBooleanQuery, executeGraphQuery or execute methods.

// execute a tuple query
try (ClosableResult<TupleQueryResult> execute = sparqlRepository.executeTupleQuery(
        // the sparql query
        "SELECT * WHERE { ?s ?p ?o }",
        // the timeout
        10
)) {
    // get the result, no need to close it, closing execute will close the result
    TupleQueryResult result = execute.getResult();

    // the tuples
    for (BindingSet set : result) {
        System.out.println("Subject:   " + set.getValue("s"));
        System.out.println("Predicate: " + set.getValue("p"));
        System.out.println("Object:    " + set.getValue("o"));
    }
}
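
The other methods work the same way. As a minimal sketch, here is an ASK query, assuming executeBooleanQuery mirrors the (query, timeout) signature shown above and returns the boolean directly (check the actual signature in the sources):

// execute an ASK query (assumed signature: (query, timeout) returning a plain boolean)
boolean exists = sparqlRepository.executeBooleanQuery(
        "ASK { ?s ?p ?o }",
        10
);
System.out.println("Contains triples: " + exists);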

Don't forget to shut down the repository after usage:

// Shutdown the repository (better to release resources)
repository.shutDown();

You can get the RDF4J repository with the getRepository() method.

// get the rdf4j repository (if required)
SailRepository rdf4jRepo = repository.getRepository();
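
From there you can use the plain RDF4J API; for example, a short sketch running the same query through a repository connection (standard RDF4J calls, nothing qEndpoint-specific):

// open a connection on the underlying RDF4J repository
try (RepositoryConnection connection = rdf4jRepo.getConnection()) {
    TupleQuery query = connection.prepareTupleQuery("SELECT * WHERE { ?s ?p ?o } LIMIT 10");
    // evaluate the query and iterate over the bindings
    try (TupleQueryResult result = query.evaluate()) {
        for (BindingSet set : result) {
            System.out.println(set.getValue("s"));
        }
    }
}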

Connecting with your Wikibase

  • run the qEndpoint locally

  • cd wikibase

  • move the file prefixes.sparql to your qEndpoint installation

  • (re-)start your endpoint to use the prefixes

  • run

    java -cp wikidata-query-tools-0.3.59-SNAPSHOT-jar-with-dependencies.jar org.wikidata.query.rdf.tool.Update \
            --sparqlUrl http://localhost:1234/api/endpoint/sparql \
            --wikibaseHost https://linkedopendata.eu/ \
            --wikibaseUrl https://linkedopendata.eu/ \
            --conceptUri https://linkedopendata.eu/ \
            --wikibaseScheme https \
            --entityNamespaces 120,122 \
            --start 2022-06-28T11:27:08Z

    You can adapt the parameters to your Wikibase; in this case we are connecting to the EU Knowledge Graph. You may also change the start time.


Roadmap

See the open issues for a list of proposed features (and known issues).


Support

Reach out to the maintainer at one of the following places:


Project assistance

If you want to say thank you and/or support the active development of qEndpoint:

  • Add a GitHub Star to the project ⭐
  • Tweet about the qEndpoint
  • Write interesting articles about the project on Dev.to, Medium or your personal blog.

Contributing

First of all, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are greatly appreciated.

Please read our contribution guidelines, and thank you for being involved!


Authors & contributors

The original setup of this repository is by The QA Company.

For a full list of all authors and contributors, see the contributors page.


Security

qEndpoint follows good practices of security, but 100% security cannot be assured. qEndpoint is provided "as is" without any warranty. Use at your own risk.

For more information and to report security issues, please refer to our security documentation.


Publications

  • Willerval Antoine, Dennis Diefenbach, and Pierre Maret. "Easily setting up a local Wikidata SPARQL endpoint using the qEndpoint." Workshop ISWC (2022). PDF
  • Willerval Antoine, Dennis Diefenbach, and Angela Bonifati. "qEndpoint: A Wikidata SPARQL endpoint on commodity hardware." Demo at The Web Conference (2023). PDF
  • Willerval Antoine, Dennis Diefenbach, and Angela Bonifati. "qEndpoint: A Novel Triple Store Architecture for Large RDF Graphs." Semantic Web Journal (2024). PDF
  • Willerval Antoine, Dennis Diefenbach, and Angela Bonifati. "Generate and Update Large HDT RDF Knowledge Graphs on Commodity Hardware." ESWC (2024). PDF

License

This project is licensed under the GNU General Public License v3 with a notice.

See LICENSE for more information.

