For detailed API documentation, see the generated API Blueprint docs (the Technical Documentation section below describes how they are built and published).
API for the Orange medication management app. RESTful, implemented in Node.js and MongoDB. Implements:
- Setup user/patient
- Save medications/doctors/pharmacies/user habits
- Record dose events
- View adherence schedule
- Share information with other users (and with outside email addresses that aren't yet associated with users)
- Development Setup and Run
- Load Testing
- Code Analysis
- Deployment
- Environment Variables
- Contributing
- Technical Documentation
- License
- Node.js (v0.10+) and NPM
- Grunt.js
- MongoDB (v3.6; higher versions will not work. If you need instructions for downgrading, see here)
- Amida Auth Microservice (https://github.com/amida-tech/amida-auth-microservice)
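To sanity-check the prerequisites, you can query each tool's version from a terminal (exact output will vary with your install):

```
node --version     # should report v0.10 or later
npm --version
grunt --version    # provided by grunt-cli
mongod --version   # should report 3.6.x
```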
- Initialize MongoDB
- Set up the Amida Auth Microservice
  - See the Auth Microservice README for details on setup
  - If you are developing locally, you may need to install and configure Postgres
cp .env.example .env
cp .env.example .env.test
- Configure settings in `.env`. See Environment Variables. Vital settings (a sample `.env` sketch follows these steps):
  - `X_CLIENT_SECRET` (any hex string is suitable)
  - `JWT_SECRET` (must match the Auth Microservice)
  - `AUTH_MICROSERVICE_URL` (must point to wherever your `amida-auth-microservice` server is running)
  - Web address
  - Database address
- Install dependencies: `npm install`
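For reference, a minimal `.env` covering the vital settings might look like the sketch below. All values are placeholders, and the auth URL/port is an assumption; the variable names for the web and database addresses are listed in `.env.example` and omitted here:

```
X_CLIENT_SECRET=a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6    # any hex string
JWT_SECRET=change-me-to-match-auth-microservice     # must be identical to the Auth Microservice's JWT_SECRET
AUTH_MICROSERVICE_URL=http://localhost:4000/api/v1  # wherever your amida-auth-microservice is running
# ...plus the web address and database address variables from .env.example
```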
orange-api has an endpoint for searching for medications in the VA Formulary. This is enabled by populating a collection in MongoDB with the contents of the VA Formulary.
The latest spreadsheet that contains the VA Formulary can be downloaded from here, and is included in the repo for convenience.
Import the data from the spreadsheet into MongoDB with the npm script `import_va_formulary`, e.g. `npm run import_va_formulary`.
Note: This is optional. These steps live in their own section because the setup is involved and is only needed if you are developing or testing push notifications.
- Set up and start the Amida Notification Microservice.
- In your `.env` file, set these variables (see the sketch after this list):
  - `NOTIFICATION_MICROSERVICE_URL`
  - Variables that start with `PUSH_NOTIFICATIONS_`
  Note: Their values must be identical to the corresponding variables in your Amida Notification Microservice.
- Obtain an Apple Developer Key and its corresponding KeyId. You can download this file by logging into the team's Apple developer console on developer.apple.com. Navigate to Keys in the left pane and create or download a key. Add this file to the root of the project and rename it to `iosKey.p8`. Set `PUSH_NOTIFICATIONS_APN_KEY_ID` in your `.env` file to the corresponding KeyId.
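As a rough sketch of the push-notification portion of `.env` (all values are placeholders, the notification URL/port is an assumption, and the full list of `PUSH_NOTIFICATIONS_` variables lives in `.env.example`):

```
NOTIFICATION_MICROSERVICE_URL=http://localhost:4003/api   # wherever amida-notification-microservice is running
PUSH_NOTIFICATIONS_APN_KEY_ID=AB12CD34EF                  # KeyId of the iosKey.p8 key downloaded above
PUSH_NOTIFICATIONS_SERVICE_USER_USERNAME=orange-service-user
# ...plus the remaining PUSH_NOTIFICATIONS_ variables, matching your Notification Microservice
```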
To run the development server locally:
grunt dev
SSH tunnel into the remote machine where orange-api has been deployed and from which you will be installing Locust and running your load tests. The following command creates an SSH tunnel to the specified address and forwards your machine's local port 8089 to the remote machine's port 8089, so that you can run the load tests on the server and still view the Locust web interface from your local machine.
ssh -L 8089:localhost:8089 user@example.com
Once you have SSH'd into your remote machine, do the following on that machine to install the libraries needed to run the load test script:
Create a new virtual environment using virtualenv:
virtualenv env
(The environment here is named env. If you do not have virtualenv installed, install it with pip install virtualenv.)
Activate your new environment:
source env/bin/activate
Once inside your new environment, install Locust, Faker, and Arrow with the following commands:
pip install locustio
pip install faker
pip install arrow
On the remote machine, navigate to the directory that holds the orange-api repository and contains the file locustfile.py
Launch locust
locust -f locustfile.py -H "http://localhost:5000/v1"
Now, on your local machine:
Point your browser to http://127.0.0.1:8089/
From the Locust web interface you can change the settings and run the load-test
- `$ gulp appAnalysis` to analyze code in `./lib`
- `$ gulp testAnalysis` to analyze code in `./test`
- Files are written to `./artifacts`
Prerequisite: the Amida Notification Microservice must be up and running.
Docker deployment requires two docker containers:
- An instance of the official MongoDB 3.6 docker image (see: https://hub.docker.com/_/mongo/).
- An instance of this service's docker image (see: https://hub.docker.com/r/amidatech/orange-api).
Also, the containers communicate via a Docker network. Therefore:
- First, create the Docker network:
docker network create {DOCKER_NETWORK_NAME}
- Start the MongoDB container:
docker run -d --name amida-orange-api-db --network {DOCKER_NETWORK_NAME} mongo:3.6
- Create a `.env` file for use by this service's docker container. A good starting point is this repo's `.env.example` file. For additional details, see the next step.
- Configure push notifications according to the Enabling Push Notifications subsection under Development Setup and Run.
- Start the Orange API container:
docker run -d -p 5000:5000 \
--name amida-orange-api --network {DOCKER_NETWORK_NAME} \
-v {ABSOLUTE_PATH_TO_YOUR_ENV_FILE}:/app/.env:ro \
amidatech/orange-api
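To confirm both containers came up and can see each other, standard Docker commands are enough (nothing here is specific to this repo):

```
docker ps --filter "name=amida-orange-api"    # should list amida-orange-api and amida-orange-api-db
docker logs -f amida-orange-api               # tail the API's startup output
docker network inspect {DOCKER_NETWORK_NAME}  # both containers should appear as attached
```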
Environment variables are applied in this order, with later sources overriding earlier ones (see the example below):
- Default values, which are set automatically by joi within `config.js`, even if no such environment variable is specified at all.
- Variables specified by the `.env` file.
- Variables specified via the command line.
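For example (hypothetical values), a variable passed on the command line wins over the same variable in `.env`, which in turn wins over the joi default:

```
# .env contains X_CLIENT_SECRET=from-dotenv
X_CLIENT_SECRET=from-cli grunt dev   # the running API sees X_CLIENT_SECRET=from-cli
```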
Variables are listed below in this format:
A description of what the variable is or does.
- A description of what to set the variable to, whether that be an example, or what to set it to in development or production, or how to figure out how to set it, etc.
- Perhaps another example value, etc.
`X_CLIENT_SECRET`: All requests made to this API must have the HTTP header `x-client-secret` with a value that matches this environment variable.
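For example (the endpoint path is illustrative, and most endpoints also require user authentication, which is omitted here):

```
curl -H "x-client-secret: $X_CLIENT_SECRET" http://localhost:5000/v1/patients
```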
`ACCESS_CONTROL_ALLOW_ORIGIN`: An array of domains, including protocol and port. Self-explanatory if you understand CORS.
Note: If `req.origin` is not found in your `ACCESS_CONTROL_ALLOW_ORIGIN` array, orange-api will print `req.origin` to stdout. You can use that to figure out how to set this value.
- Don't forget that if your client is running on https and/or a port other than 80 or 443, you will have to specify this as well, as in `["https://localhost:12345"]`.
- To enable all domains (which is insecure and therefore should only be done in development), set to `["*"]` or `["http://something.com", "http://doesntmatter.com", "*"]`.
- When using Postman, Postman sets the origin to something like `chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop`. However, Postman probably ignores the `Access-Control-Allow-Origin` header of the OPTIONS response, so you might not need to set this.
- Phones don't set the origin header, so `req.origin` is undefined in their requests. However, this is ok because they ignore the CORS-related headers on any OPTIONS responses anyway.
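Assuming the variable is supplied as a JSON-style array, as the examples above suggest, an `.env` entry might look like the following (quoting requirements can vary with how you load the file, so treat this as a sketch):

```
ACCESS_CONTROL_ALLOW_ORIGIN=["http://localhost:3000","https://app.example.com"]
```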
If true, the user registration endpoint is public. If false, only users with `admin` or `programAdministrator` scope can create users.
URL of the webpage the user must go to to start the email verification process. This gets plugged into the new-user welcome email.
- Should be `https://domain-of-your-orange-web.whatever/verify-email`.
MongoDB connection URI.
`MONGO_SSL_ENABLED`: Enable SSL for the connection to MongoDB.
- In production, set to true.
- If this is truthy, then `MONGO_CA_CERT` must be set to a valid value (the contents, not the filename, of a CA bundle able to verify that the cert used by the MongoDB you are connecting to is valid).
- An easy way to set this in development is `MONGO_CA_CERT=$(cat /path/to/your-ca-bundle.pem) grunt dev`.
`MONGO_CA_CERT`: Only used when `MONGO_SSL_ENABLED=true`. Specifies an SSL cert to trust for the connection to MongoDB. If not set, only Mozilla's list of root certs is trusted.
`AUTH_MICROSERVICE_URL`: The URL of the Auth Service API.
`JWT_SECRET`: Must match the value of the JWT secret being used by your `amida-auth-microservice` instance.
- See that repo for details.
`NOTIFICATION_MICROSERVICE_URL`: The URL of the Notification Service API.
`PUSH_NOTIFICATIONS_SERVICE_USER_USERNAME`: The username of the service user on the Auth Service. It is named as such because originally it only performed push-notifications-related requests; however, this user now performs a variety of functions.
The password of the user specified by `PUSH_NOTIFICATIONS_SERVICE_USER_USERNAME`.
Contributors are welcome. See the issues at https://github.com/amida-tech/orange-api/issues
The API is structured as a standard Express app using Mongoose for data storage. The Controller-Model pattern is followed, with everything output over JSON, so separate views are not as necessary (although semantically each model instance has a getData method that acts as the view). App setup and initialisation are in `app.js` and database connection/etc. is in `run.js`. `config.js` contains configuration for API keys (Sendgrid and Twilio for notifications), logging and database hosts.
Tests are in `test/`, structured as directories for each resource group containing e2e tests, and sometimes `unit/` directories inside those containing unit tests. Grunt (`gruntfile.js`) is used to run tests (`npm test`) and can also be used to spin up a development server (`grunt server:dev`), although `node run.js` is much quicker to start up and will work for all endpoints apart from those that rely on schedule matching (`/patients/:id/schedule`, `/patients/:id.json` and `/patients/:id.pdf`). Please verify you have a `.env.test` file before running tests.
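A typical local test run, assuming MongoDB and the Auth Microservice are already up, might look like:

```
cp .env.example .env.test   # then edit the values as needed
npm test                    # runs the Grunt-driven test suite
```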
Controllers are in `lib/controllers` and models in `lib/models`. Most are standard CRUD controllers, with various CRUD helper functions used (mainly as middleware) to DRY things up. See `lib/controllers/helpers/crud.js` mainly (e.g., `formatObject` and `formatList` are used in nearly all endpoints).
Models are pretty standard Mongoose models. `counter.js` and `helpers/increment_plugin.js` are used to provide auto-incrementing numerical IDs. All models that correspond to patient resources (`Doctor`, `Dose`, `JournalEntry`, `Medication` and `Pharmacy`) are stored as subdocuments or subarrays within `Patient`, and because of this and some Mongoose intricacies, some of their logic is in `lib/models/patient/resources.js` rather than e.g. `lib/models/doctor.js`.
Schedule matching is slightly more complex. Each medication stores a schedule object, freshly parsed into a `Schedule` (`lib/models/schedule/`) object upon instance initialisation. This represents, in an abstract form, the schedule on which the medication should be taken. `schedule/generation.js` uses this to generate a concrete schedule for when the medication should be taken, given a start and end date. Various endpoints then need to match this up with the doses the user has actually recorded (either taken or not taken), represented as `Dose` objects in `patient.doses`. Depending on the level of information we have about each dose, this is a slightly nontrivial problem, solved with an algorithm documented in `lib/models/helpers/schedule_matcher.js`.
Patient images ('avatars') are stored in GridFS rather than as files or raw in Mongo, and the relevant code is in `lib/models/patient/avatar.js` (slightly more complicated than standard because it parses MIME types from the actual image data whilst storing images).
All errors that should be visible to the API user are passed up the stack and then handled by `error_handler.js` and `errors.js`. Each API error has an instance of the custom `APIError` class initialised in `errors.js`, which can then be used anywhere else in the app. `error_handler.js` handles both these and Mongoose errors (a couple of heuristics are used to look up `APIError` instances based on field name, etc.). These errors are then returned by setting the HTTP response code appropriately and returning `{ success: false, errors: [...] }` as the response body.
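For illustration, an unauthenticated request produces a body in that shape; the status code and error slug below are hypothetical and depend on the endpoint and the error:

```
curl -i http://localhost:5000/v1/patients
# HTTP/1.1 401 Unauthorized
# { "success": false, "errors": ["invalid_client_secret"] }
```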
The external RXNorm and NPI APIs are proxied for various queries (`lib/controllers/rxnorm.js` and `lib/controllers/npi.js`). The RXNorm spelling-suggestions endpoint is hit very heavily, and RXNorm rate-limits us to 20 queries per second, so that's cached (in Mongo, because the Mongo infrastructure was already set up and the advantages of redis/memcached/etc. are irrelevant here) in `lib/models/rxnorm.js`, although the actual queries for both APIs are just delegated to the `rxnorm-js` and `npi-js` NPM libraries (both Amida-written).
The `/patients/:id.pdf` endpoint generates and returns a report PDF. This is done dynamically on the fly, but it is fast enough that this shouldn't be an issue (and could of course easily be cached if it were). The relevant code is in `lib/controllers/patients/report.js` (although much of that should probably be abstracted out to a `lib/views` directory at some point) and uses the `pdfmake` library for the actual PDF generation. The `fonts/` and `images/` directories provide assets for that generation process. `grunt report` can be used to generate a sample PDF for test data and regenerate it whenever the relevant code changes, so it is useful for development here.
Notifications are sent out upon various actions (user registration, sharing request received/cancelled/closed/accepted), and notifications for new actions can easily be added (`user.notify`). The relevant code is in `lib/models/user/notifications.js`. Handlebars templates for the notifications sent are taken from the `views/` directory. Notifications can be sent by either SMS (Twilio) or email (Sendgrid), dependent on both the data available for a user and individual notification settings. API keys for Twilio and Sendgrid are configured in `config.js` and are left blank on the staging server so notifications are not sent out during testing.
The `static/` directory contains webpages that are statically accessible on the staging server (with the `.html` suffix removed, so `login.html` becomes `http://STAGING-SERVER-ADDRESS/login`). `login` uses the custom URI scheme in the mobile app to launch the app to the login page if it's installed, or take you to the relevant app store if on mobile and the app's not installed, or just displays a static page on desktop (this page is linked to by the email notification received when resetting a password).
All API endpoints are fully documented using API Blueprint in `docs/src`, and `docs/build.sh` (`grunt docs`) is used to build this into HTML documentation at `docs/output/` with the `aglio` library. Some slightly hackish deviations from the API Blueprint spec are made to get the desired output from Aglio, although these are very apparent and self-explanatory in `docs/src`. As well as on the staging server, docs are published on GitHub, and the newest docs can be generated from source and pushed to the `gh-pages` branch with `grunt docs:push`.
Deployment things are in `deploy/` and are documented in `deploy/README.md`. We currently recommend using the traditional/Ansible deployment option, which is documented in detail in `deploy/traditional/README.md`.
Licensed under Apache 2.0