IDP-KG

Scripts and notebooks for generating and analysing the IDP-KG.


IDPcentral Scripts, Data, and Notebooks

This repository contains the scripts to generate the IDPcentral Knowledge Graph based on data harvested from DisProt, MobiDB, and ProteinEnsemble (PED).
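
The record pages on these sites embed their metadata as Bioschemas markup, i.e. JSON-LD in script elements. As a minimal sketch of the harvesting step, assuming the requests and beautifulsoup4 libraries and a hypothetical record URL (the actual harvesting scripts live in this repository):

import json

import requests
from bs4 import BeautifulSoup

def harvest_jsonld(url):
    """Return the JSON-LD blocks embedded in a record page."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    scripts = soup.find_all("script", type="application/ld+json")
    return [json.loads(s.string) for s in scripts]

# Hypothetical record page URL; substitute a real DisProt/MobiDB/PED record.
for record in harvest_jsonld("https://disprot.org/DP000003"):
    print(record.get("@type"), record.get("@id"))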

The starting point for this repository was the set of files developed during the ELIXIR-sponsored BioHackathon-Europe 2020. That work was reported in BioHackrXiv v3jct. This repository updates the scripts for the revised deployments and scales the work to the entire content of the three sites.

Authors:

Citing IDP-KG: If you use IDP-KG in your work, please cite the SWAT4HCLS paper:

@inproceedings{GrayEtal:bioschemas-idpkg:swat4hcls2022,
  author = {Gray, Alasdair J. G. and Papadopoulos, Petros and Asif, Imran and Micetic, Ivan and Hatos, Andr{\'{a}}s},
  title = {Creating and Exploiting the Intrinsically Disordered Protein Knowledge Graph {(IDP-KG)}},
  booktitle = {13th International Conference on Semantic Web Applications and Tools for Health Care and Life Sciences, {SWAT4HCLS} 2022, Virtual Event, Leiden, The Netherlands, January 10th to 14th, 2022},
  series = {{CEUR} Workshop Proceedings},
  volume = {3127},
  pages = {1--10},
  publisher = {CEUR-WS.org},
  year = {2022},
  url = {http://ceur-ws.org/Vol-3127/paper-1.pdf}
}

Notes

  • The term 'source' is used to identify the page from which the data was scraped
  • The term 'dataset' is used to identify the collection of data that a particular record page (e.g. disprot:DP000003) belongs to
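
As an illustrative sketch of this distinction, assuming rdflib and generic schema.org predicates (the exact predicates used in the KG are defined in the ETL notebook):

from rdflib import Graph, Namespace, URIRef

SCHEMA = Namespace("https://schema.org/")
record = URIRef("https://idpcentral.org/id/DP000003")  # hypothetical record IRI

g = Graph()
# 'dataset': the collection the record belongs to (here, DisProt).
g.add((record, SCHEMA.isPartOf, URIRef("https://disprot.org/#dataset")))
# 'source': the page that was scraped to obtain the record's data.
g.add((record, SCHEMA.mainEntityOfPage, URIRef("https://disprot.org/DP000003")))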

Notebooks

The repository contains two Jupyter notebooks in the notebooks directory:

  1. The ETLProcess notebook converts the harvested data into a semantic knowledge graph, represented in RDF using Bioschemas terms;

  2. The AnalysisQueries notebook runs a set of analysis queries over the resulting knowledge graph. A condensed sketch of both steps is given after this list.
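
As a condensed sketch of both steps, assuming rdflib (version 6 or later, which parses JSON-LD natively) and hypothetical file names; the real processing lives in the notebooks:

from rdflib import Graph

# 1. ETL: parse harvested Bioschemas JSON-LD into an RDF graph and
#    serialize the result as the knowledge graph.
kg = Graph()
kg.parse("harvested/DP000003.jsonld", format="json-ld")  # hypothetical path
kg.serialize(destination="idp-kg.ttl", format="turtle")

# 2. Analysis: run a SPARQL query over the resulting graph, e.g. counting
#    distinct proteins (the exact class IRIs are set in the ETL notebook).
query = """
PREFIX schema: <https://schema.org/>
SELECT (COUNT(DISTINCT ?protein) AS ?count)
WHERE { ?protein a schema:Protein . }
"""
for row in kg.query(query):
    print(row[0])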

Full instructions for running the notebooks are contained within the notebooks themselves. In both notebooks, run all cells and then use the interactive controls to generate the desired outputs.

To install the dependencies that the notebooks rely on, run the following from the command line (or a Jupyter terminal):

pip install -r requirements.txt
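
With the dependencies installed, the notebooks can then be opened with a standard Jupyter launch, e.g.:

jupyter notebook notebooks/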

Running the Analysis Notebook in the Cloud

The notebook for exploring the generated knowledge graph can be run in the cloud using the mybinder service¹; click on the Binder logo below to get going.

[Binder launch badge]

REST API

A Linked Data REST API is provided using the grlc service.
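
grlc builds the API from SPARQL queries stored in a repository, exposing each query as a REST endpoint. As an illustrative call, with placeholder user and query names in the path (see the grlc documentation for the actual routes):

import requests

# Hypothetical endpoint: grlc routes follow /api-git/<user>/<repo>/<query>.
url = "http://grlc.io/api-git/<user>/IDP-KG/<query-name>"
response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
response.raise_for_status()
print(response.json())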

Footnotes

  1. See this tutorial for an overview of what MyBinder is and what it offers.