Wellcome Reach is an open source service for discovering how research publications are cited in global policy documents, including those produced by policy organizations such as the WHO, MSF, and the UK government. Key parts of it include:
- Web scrapers for pulling PDF "policy documents" from policy organizations,
- A reference parser for extracting references from these documents,
- A task for sourcing publications from Europe PMC (EPMC),
- A task for matching policy document references to EPMC publications,
- An Airflow installation for automating the above tasks, and
- A web application for searching and retrieving data from the datasets produced above.
Wellcome Reach is written in Python and developed using docker-compose. It's deployed to Kubernetes.
Although parts of Wellcome Reach have been in use at Wellcome since mid-2018, the project has only been open source since March 2019. Given these early days, please be patient as various parts of it are made accessible to external users. All issues and pull requests are welcome. Contributing guidelines can be found in CONTRIBUTING.md.
To develop for this project, you will need:
- Python 3.6+, plus `pip` and `virtualenv`
- Docker and docker-compose
- AWS credentials with read/write S3 permissions to an S3 bucket of your choosing.
To bring up the development environment using docker:
- Set your AWS credentials in your environment (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`); one way to export them is shown after this list.
- Build and start the env with:

```
make docker-build
docker-compose up -d
```

- Verify it all came up with:

```
docker-compose ps
```
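As noted in the first step above, a minimal way to set the credentials (assuming a POSIX shell; substitute your own values for the placeholders) is:

```
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
```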
Once up, you'll be able to access:
- Airflow on http://localhost:8080/
- Elasticsearch on http://localhost:9200/
- the website on http://localhost:8081/
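If you'd rather check from a terminal than a browser, a quick smoke test (assuming `curl` is available) is:

```
curl http://localhost:9200/     # Elasticsearch answers with a JSON info banner
curl -I http://localhost:8080/  # the Airflow web UI should answer the request
```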
For local development outside of Airflow or other services, use the project's virtualenv:

```
make virtualenv
source build/virtualenv/bin/activate
```
To run all tests for the project using the official Python version and other dependencies, run:

```
make docker-test
```
You can also run tests locally using the project's virtualenv, with `make test`, or using the appropriate pytest command, as documented in the Makefile.
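As an illustrative sketch only (the exact flags and test paths are defined in the Makefile, and the path below is hypothetical), running a subset of tests from inside the virtualenv might look like:

```
source build/virtualenv/bin/activate
pytest -v reach/refparse/tests/  # hypothetical path; see the Makefile for the real targets
```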
Wellcome Reach uses Apache Airflow to automate running its data pipelines. Specifically, we've broken down the batch pipeline into a series of dependent steps, all part of a Directed Acyclic Graph (DAG).
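For readers unfamiliar with Airflow, the sketch below shows how a batch pipeline like this is typically expressed as a DAG. The DAG id, task names, and callables here are hypothetical stand-ins, not the ones Reach actually uses:

```
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

# Hypothetical stand-ins for the real pipeline steps.
def scrape_policy_docs(**kwargs):
    pass  # pull PDF policy documents from policy organizations

def parse_references(**kwargs):
    pass  # extract references from the scraped PDFs

def match_references(**kwargs):
    pass  # match references against EPMC publications

dag = DAG(
    dag_id="policy_pipeline_example",
    start_date=datetime(2018, 11, 2),
    schedule_interval="@weekly",
)

scrape = PythonOperator(task_id="scrape", python_callable=scrape_policy_docs, dag=dag)
parse = PythonOperator(task_id="parse", python_callable=parse_references, dag=dag)
match = PythonOperator(task_id="match", python_callable=match_references, dag=dag)

# Dependencies: each step runs only after the previous one succeeds.
scrape >> parse >> match
```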
It's quite common to want to run a single task in Airflow without having to click through the UI, not least because all logging messages then appear on the console. To do this, from the top of the project directory:
- Bring up the stack with `docker-compose` as shown above, and
- Run the following command, substituting for `DAG_NAME`, `TASK_NAME`, and `JSON_PARAMS` (a concrete example follows below):

```
./docker_exec.sh airflow test \
    ${DAG_NAME} ${TASK_NAME} \
    2018-11-02 -tp '${JSON_PARAMS}'
```
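For example, using the hypothetical names from the DAG sketch above (the real DAG and task ids are defined in the project's DAG files), an invocation could look like:

```
./docker_exec.sh airflow test \
    policy_pipeline_example parse \
    2018-11-02 -tp '{"source": "who"}'
```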
Although not required, you can add Sentry reporting from your local dev environment to a localdev project inside Wellcome's Sentry account by running:
```
eval $(./export_wellcome_env.py)
```
before running `docker-compose up -d` above.
For production, a typical deployment uses:
- a Kubernetes cluster that supports persistent volumes
- a PostgreSQL or MySQL database for Airflow to use
- a distributed storage service such as S3
- an Elasticsearch cluster for searching documents
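As one example of wiring up the database piece: Airflow reads its metadata DB connection string from `sql_alchemy_conn` in `airflow.cfg` (equivalently, the `AIRFLOW__CORE__SQL_ALCHEMY_CONN` environment variable). The host and credentials below are placeholders:

```
[core]
sql_alchemy_conn = postgresql+psycopg2://airflow:<password>@<postgres-host>:5432/airflow
```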
We have devised some evaluation data in order to evaluate 5 steps of the model. The results can be calculated by first installing poppler (via Homebrew on macOS):

```
brew install poppler
```

then downloading the evaluation data from here and storing it in `algo_evaluation/data_evaluate/`, which can be done from the command line by running:

```
aws s3 cp --recursive s3://datalabs-data/policy_tool_tests ./reach/refparse/algo_evaluation/data_evaluate/
```

and then running:

```
python evaluate_algo.py
```

(or set the verbose argument to False with `python evaluate_algo.py --verbose False` if you want less information about the evaluation to be printed).
You can read more about how we got the evaluation data and what the evaluation results mean here.
See the contributing guidelines in CONTRIBUTING.md.