Schematic

Package for biomedical data model and metadata ingress management


Introduction

SCHEMATIC is an acronym for Schema Engine for Manifest Ingress and Curation. This Python-based infrastructure provides a novel schema-based metadata ingress ecosystem that is meant to streamline the process of biomedical dataset annotation, metadata validation, and submission to a data repository for various data contributors.

Installation

Installation Requirements

  • Python version 3.9.0 ≤ x < 3.11.0

Note: You need to be a registered and certified user on synapse.org and have the right permissions to download the Google credentials files from Synapse.

Installation guide for data curator app

Create and activate a virtual environment within which you can install the package:

python3 -m venv .venv
source .venv/bin/activate

Note: Python 3 has built-in support for virtual environments via venv, so you no longer need to install virtualenv.

Install and update the package using pip:

python3 -m pip install schematicpy

If you run into the error Failed building wheel for numpy, it may be resolved by upgrading pip:

pip3 install --upgrade pip
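
To confirm the installation succeeded, a quick sanity check from the same environment is to print the package metadata and the CLI help (the exact output will vary with the installed version):

python3 -m pip show schematicpy
schematic --help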

Installation guide for developers/contributors

When contributing to this repository, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change.

Please note we have a code of conduct, please follow it in all your interactions with the project.

Development environment setup

  1. Clone the schematic package repository.
git clone https://github.com/Sage-Bionetworks/schematic.git
  2. Install poetry (version 1.3.0 or later) using either the official installer or pipx. If you have an older installation of Poetry, we recommend uninstalling it first.

  3. Start the virtual environment:

poetry shell
  4. Install the dependencies:
poetry install

This command will install the dependencies based on what we specify in poetry.lock. If this step is taking a long time, go back to step 2 and check your version of poetry. Alternatively, you can delete the lock file and regenerate it by running poetry install again (please note this method should be used as a last resort, because it would force other developers to change their development environment).
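
If you need to check which version of poetry you have, and upgrade it if it is older than 1.3.0, something like the following should work (the pipx command applies only if you installed poetry with pipx):

poetry --version
pipx upgrade poetry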

If you want to install the API, you will need to install its dependencies as well:

poetry install --extras "api"

If you want to install the API and uWSGI:

poetry install --extras "api uwsgi"
  5. Fill in credential files. Note: If you won't interact with Synapse, you can skip this step.

There are two main configuration files that need to be edited: config.yml and .synapseConfig.

Configure .synapseConfig File

Download a copy of the .synapseConfig file, open it in the editor of your choice, and edit the username and authtoken attributes under the authentication section.

Note: You can also visit the configparser documentation to see the format that .synapseConfig must have. For instance:

[authentication]
username = ABC
authtoken = abc

Configure config.yml File

There are some defaults in schematic that can be configured. These fields are in config_example.yml:


# This is an example config for Schematic.
# All listed values are those that are the default if a config is not used.
# Save this as config.yml, this will be gitignored.
# Remove any fields in the config you don't want to change
# Change the values of any fields you do want to change


# This describes where assets such as manifests are stored
asset_store:
  # This is when assets are stored in a synapse project
  synapse:
    # Synapse ID of the file view listing all project data assets.
    master_fileview_id: "syn23643253"
    # Path to the synapse config file, either absolute or relative to this file
    config: ".synapseConfig"
    # Base name that manifest files will be saved as
    manifest_basename: "synapse_storage_manifest"

# This describes information about manifests as it relates to generation and validation
manifest:
  # Location where manifests will be saved to
  manifest_folder: "manifests"
  # Title or title prefix given to generated manifest(s)
  title: "example"
  # Data types of manifests to be generated or data type (singular) to validate manifest against
  data_type:
    - "Biospecimen"
    - "Patient"

# Describes the location of your schema
model:
  # Location of your schema jsonld, it must be a path relative to this file or absolute
  location: "tests/data/example.model.jsonld"

# This section is for using google sheets with Schematic
google_sheets:
  # The Synapse id of the Google service account credentials.
  service_acct_creds_synapse_id: "syn25171627"
  # Path to the Google service account credentials file, either absolute or relative to this file
  service_acct_creds: "schematic_service_account_creds.json"
  # When doing google sheet validation (regex match) with the validation rules:
  #   true alerts the user and does not allow entry of bad values.
  #   false warns but allows the entry onto the sheet.
  strict_validation: true

If you want to change any of these defaults, copy config_example.yml to config.yml, change the fields you want to, and remove the ones you don't.

For example, if you wanted to change the folder where manifests are downloaded, your config should look like:


manifest:
  manifest_folder: "my_manifest_folder_path"

Note: config.yml is ignored by git.

Note: Paths can be specified relative to the config.yml file or as absolute paths.

  6. Log in to Synapse using the command line. On the CLI in your virtual environment, run the following command:
synapse login -u <synapse username> -p <synapse password> --rememberMe

Please make sure that you run this command before running schematic init below.
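
If you authenticate with a Synapse personal access token rather than a password, recent versions of the Synapse client accept the token through the same flag; this is only a sketch, so check synapse login --help for the options supported by your client version:

synapse login -u <synapse username> -p <personal access token> --rememberMe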

  7. Obtain Google credential files. To obtain schematic_service_account_creds.json, please run:
schematic init --config ~/path/to/config.yml

As of schematic version 22.12.1, using the token mode of authentication (in other words, using token.pickle and credentials.json) is no longer supported due to Google's decision to move away from the OAuth out-of-band (OOB) flow. Click here to learn more.

Note: Use the schematic_service_account_creds.json file for the service account mode of authentication (for Google services/APIs). Service accounts are special Google accounts that can be used by applications to access Google APIs programmatically via OAuth 2.0, with the advantage that they do not require human authorization.

Background: schematic uses Google’s API to generate Google Sheets templates that users fill in to provide (meta)data. Most Google Sheets functionality can be authenticated with a service account. However, more complex Google Sheets functionality requires token-based authentication. As browser support for token-based authentication diminishes, we hope to deprecate token-based authentication and keep only service account authentication in the future.

  8. Set up pre-commit hooks

This repository is configured to utilize pre-commit hooks as part of the development process. To enable these hooks, please run the following command and look for the following success message:

$ pre-commit install
pre-commit installed at .git/hooks/pre-commit
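
Once installed, the hooks run automatically on every commit. You can also run them manually across the whole repository, for example after updating the hook configuration:

pre-commit run --all-files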

Development process instructions

For new features, bugs, enhancements

  1. Pull the latest code from the develop branch in the upstream repo (see the shell sketch after this list)
  2. Checkout a new branch develop-<feature/fix-name> from the develop branch
  3. Do development on branch develop-<feature/fix-name>
    a. You may need to ensure that the schematic poetry toml and lock files are compatible with your local environment
  4. Add changed files for tracking and commit changes using best practices
  5. Have granular commits: not “too many” file changes, and not hundreds of code lines of changes
  6. Commits with work in progress are encouraged:
    a. Add WIP to the beginning of the commit message for “Work In Progress” commits
  7. Keep commit messages descriptive but less than a page long, see best practices
  8. Push code to develop-<feature/fix-name> in upstream repo
  9. Branch out off develop-<feature/fix-name> if needed to work on multiple features associated with the same code base
  10. After feature work is complete and before creating a PR to the develop branch in upstream:
    a. Ensure that code runs locally
    b. Test for logical correctness locally
    c. Wait for the git workflow to complete (e.g. tests are run) on GitHub
  11. Create a PR from develop-<feature/fix-name> into the develop branch of the upstream repo
  12. Request a code review on the PR
  13. Once the code is approved, merge it into the develop branch
  14. Delete the develop-<feature/fix-name> branch

Note: Make sure you have the latest version of the develop branch on your local machine.
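
A minimal shell sketch of steps 1, 2, 4, and 8 above, assuming the Sage-Bionetworks repository is configured as a remote named upstream and using a hypothetical branch name:

# get the latest develop branch from the upstream repo
git checkout develop
git pull upstream develop

# create a feature branch and do your development there
git checkout -b develop-my-feature

# stage and commit your changes with a descriptive message
git add <changed files>
git commit -m "WIP: short description of the change"

# push the branch to the upstream repo, then open a PR into develop
git push upstream develop-my-feature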

Installation Guide - Docker

  1. Install docker from https://www.docker.com/.
  2. Identify the docker image of interest from Schematic DockerHub
    e.g. docker pull sagebionetworks/schematic:latest from the CLI, or run docker compose up after cloning the schematic GitHub repo
    in this case, sagebionetworks/schematic:latest is the name of the chosen image
  3. Run Schematic Command with docker run <flags> <schematic command and args>.
    - For more information on flags for docker run and what they do, visit the Docker Documentation
    - These example commands assume that you have navigated to the directory you want to run schematic from. To specify your working directory, use $(pwd) on MacOS/Linux or %cd% on Windows.
    - If not using the latest image, then the full name should be specified, e.g. sagebionetworks/schematic:commit-e611e4a
    - If using local image created by docker compose up, then the docker image name should be changed: i.e. schematic_schematic
    - Using the --name flag sets the name of the container running locally on your machine

Example For REST API

Use the file path of config.yml to run API endpoints:

docker run --rm -p 3001:3001 \
  -v $(pwd):/schematic -w /schematic --name schematic \
  -e SCHEMATIC_CONFIG=/schematic/config.yml \
  -e GE_HOME=/usr/src/app/great_expectations/ \
  sagebionetworks/schematic \
  python /usr/src/app/run_api.py

Use the content of config.yml and schematic_service_account_creds.json as environment variables to run API endpoints:

  1. Save the content of config.yml to the environment variable SCHEMATIC_CONFIG_CONTENT by doing: export SCHEMATIC_CONFIG_CONTENT=$(cat /path/to/config.yml)

  2. Similarly, save the content of schematic_service_account_creds.json as SERVICE_ACCOUNT_CREDS by doing: export SERVICE_ACCOUNT_CREDS=$(cat /path/to/schematic_service_account_creds.json)

  3. Pass SCHEMATIC_CONFIG_CONTENT and SERVICE_ACCOUNT_CREDS as environment variables by using docker run:

docker run --rm -p 3001:3001 \
  -v $(pwd):/schematic -w /schematic --name schematic \
  -e GE_HOME=/usr/src/app/great_expectations/ \
  -e SCHEMATIC_CONFIG_CONTENT=$SCHEMATIC_CONFIG_CONTENT \
  -e SERVICE_ACCOUNT_CREDS=$SERVICE_ACCOUNT_CREDS \
  sagebionetworks/schematic \
  python /usr/src/app/run_api.py

Example For Schematic on mac/linux

To run the example below, first clone schematic into your home directory: git clone https://github.com/sage-bionetworks/schematic ~/schematic
Then update .synapseConfig with your credentials.

docker run \
  -v ~/schematic:/schematic \
  -w /schematic \
  -e SCHEMATIC_CONFIG=/schematic/config.yml \
  -e GE_HOME=/usr/src/app/great_expectations/ \
  sagebionetworks/schematic schematic model \
  -c /schematic/config.yml validate \
  -mp /schematic/tests/data/mock_manifests/Valid_Test_Manifest.csv \
  -dt MockComponent \
  -js /schematic/tests/data/example.model.jsonld

Example For Schematic on Windows

docker run -v %cd%:/schematic \
  -w /schematic \
  -e GE_HOME=/usr/src/app/great_expectations/ \
  sagebionetworks/schematic \
  schematic model \
  -c config.yml validate -mp tests/data/mock_manifests/inValid_Test_Manifest.csv -dt MockComponent -js /schematic/data/example.model.jsonld

Other Contribution Guidelines

Updating readthedocs documentation

  1. cd docs
  2. After making relevant changes, you can run the make html command to regenerate the build folder (see the example after this list).
  3. Please contact the dev team to publish your updates
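
A minimal sketch of steps 1 and 2, run from the repository root within your poetry environment (assuming the documentation dependencies are already installed):

cd docs
make html
# the generated pages land in the build folder, e.g. _build/html/index.html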

Other helpful resources:

  1. Getting started with Sphinx
  2. Installing Sphinx

Update toml file and lock file

If you install external libraries by using poetry add <name of library>, please make sure that you include the pyproject.toml and poetry.lock files in your commit.
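
For example, with a hypothetical library:

poetry add requests
git add pyproject.toml poetry.lock
git commit -m "Add requests dependency"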

Reporting bugs or feature requests

You can create bug and feature requests through Sage Bionetworks' FAIR Data service desk. Providing enough detail for the developers to verify and troubleshoot your issue is paramount:

  • Provide a clear and descriptive title as well as a concise summary of the issue to identify the problem.
  • Describe the exact steps which reproduce the problem in as much detail as possible.
  • Describe the behavior you observed after following the steps and point out what exactly is the problem with that behavior.
  • Explain which behavior you expected to see instead and why.
  • Provide screenshots of the expected or actual behavior where applicable.

Command Line Usage

Please visit the documentation here for more details.
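
As an illustration, the validation shown in the Docker examples above can also be run directly from your virtual environment against the test data shipped with the repo (assuming you are working from a local clone and have created config.yml as described above):

schematic model -c config.yml validate \
  -mp tests/data/mock_manifests/Valid_Test_Manifest.csv \
  -dt MockComponent \
  -js tests/data/example.model.jsonld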

Testing

All code added to the client must have tests. The Python client uses pytest to run tests. The test code is located in the tests subdirectory.

You can run the test suite in the following way:

pytest -vs tests/
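
Pytest's standard selection options also work here, for example to run a single test module (the module name below is hypothetical) or only tests whose names match a keyword:

pytest -vs tests/test_manifest.py
pytest -vs tests/ -k "manifest"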

Updating Synapse test resources

  1. Duplicate the entity being updated (or folder if applicable).
  2. Edit the duplicates (e.g. annotations, contents, name).
  3. Update the test suite in your branch to use these duplicates, including the expected values in the test assertions.
  4. Open a PR as per the usual process (see above).
  5. Once the PR is merged, leave the original copies on Synapse to maintain support for feature branches that were forked from develop before your update.
    • If the old copies are problematic and need to be removed immediately (e.g. contain sensitive data), proceed with the deletion and alert the other contributors that they need to merge the latest develop branch into their feature branches for their tests to work.

Code style

  • Please consult the Google Python style guide prior to contributing code to this project.
  • Be consistent and follow existing code conventions and spirit.

Contributors

Main contributors and developers: