SINGLE CELL PORTAL README

SETUP
This application is built and deployed using Docker, specifically native Docker for Mac OSX. Please refer to their online documentation for instructions on installing Docker and creating a default VM for managing Docker images.

BUILDING THE DOCKER IMAGE
Once all source files are checked out and Docker has been installed and your VM configured, open a terminal window and execute the following steps:
- Navigate to the project directory
- Build the Single Cell Portal image:
  docker build -t single_cell_docker -f Dockerfile .
This starts the automated process of building the Docker image for running the portal. The image is built on the Passenger-docker base image, which provides Ruby, Nginx, and Passenger by default, with additional packages added via the Broad Institute KDUX Rails base image (which pulls from the original base image). The extended image contains Oracle Instant Client, ImageMagick, and Sphinx Search.
If this is your first time building the image, it may take several minutes to download and install everything.
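To confirm the build completed, you can list the image Docker just created (an optional check, not part of the build itself):
docker images single_cell_docker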

BEFORE RUNNING THE CONTAINER
Since this project utilizes native Docker for Mac OSX, the running container cannot reach resources on the host machine (specifically, any database resources). Therefore, we will need to deploy a database container using Docker as well. This project uses MongoDB as the primary datastore.
First, create a directory somewhere on your computer in which to store the raw database content (it doesn't matter where as long as it has rw permissions, but preferably it would be inside your home directory).
To deploy the database container:
- Pull the image:
  docker pull mongo
- Navigate to the project directory
- Run the helper script to start the DB container:
  bin/boot_mongo -d (path to data store directory)
Note: Once the container has been created, you can stop & restart it using docker stop mongodb or docker restart mongodb. A rough equivalent of the underlying docker run command is sketched below.
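For reference, a minimal sketch of the kind of docker run command that bin/boot_mongo wraps, assuming the container name mongodb used throughout this README and the data directory created above (the actual script may pass additional options, such as the memory limits described under MEMORY CONSTRAINTS):
docker run -d --name mongodb -v /path/to/data/store:/data/db mongo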

RUNNING THE CONTAINER
Once the image has successfully built and the database container is running, use the following command to start the container:
bin/boot_docker -u (sendgrid username) -P (sendgrid password) -k (service account key path) -o (oauth client id) -S (oauth client secret)
This sets up several environment variables in your shell and then runs the following command:
docker run --rm -it --name $CONTAINER_NAME -p 80:80 -p 443:443 -p 587:587 \
  --link mongodb:mongodb -h localhost -v $PROJECT_DIR:/home/app/webapp:rw \
  -e PASSENGER_APP_ENV=$PASSENGER_APP_ENV -e MONGO_LOCALHOST=$MONGO_LOCALHOST \
  -e SENDGRID_USERNAME=$SENDGRID_USERNAME -e SENDGRID_PASSWORD=$SENDGRID_PASSWORD \
  -e SECRET_KEY_BASE=$SECRET_KEY_BASE -e SERVICE_ACCOUNT_KEY=$SERVICE_ACCOUNT_KEY \
  -e OAUTH_CLIENT_ID=$OAUTH_CLIENT_ID -e OAUTH_CLIENT_SECRET=$OAUTH_CLIENT_SECRET \
  single_cell_docker
The container will then start running, and will execute its local startup scripts that will configure the application automatically.
You can also run the bin/boot_docker script in help mode by passing -H to print the help text, which shows how to pass specific values for the above environment variables. Note: running the shortcut script with an environment of 'production' will cause the container to run detached by passing the -d flag rather than --rm -it.
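When the container is running detached (the production case), the standard Docker commands can be used to verify it is up and to follow the Nginx/Rails output, assuming the default container name single_cell:
docker ps --filter name=single_cell
docker logs -f single_cell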

BROAD INSTITUTE CONFIGURATION
Broad Institute project members can load all project secrets from Vault and boot the portal directly by using the bin/load_env_secrets.sh script.
bin/load_env_secrets.sh -p (path/to/service/account) -f (path/to/portal/config) -e (environment)
This script takes three parameters:
- SERVICE_ACCOUNT_PATH (passed with -p): Path to the GCP service account configuration JSON inside Vault.
- VAULT_SECRET_PATH (passed with -f): Path to the portal configuration JSON inside Vault.
- PASSENGER_APP_ENV (passed with -e; optional): Environment to boot the portal in. Defaults to 'development'.
The script requires two command line utilities: vault and jq. Please refer to their respective sites for installation instructions.
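As a purely hypothetical illustration of the kind of lookup the script performs (the secret path below is made up; the real paths are supplied to the script):
vault read -format=json secret/path/to/scp_config.json | jq .data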

DOCKER RUN COMMAND ENVIRONMENT VARIABLES
There are several variables that need to be passed to the Docker container in order to run properly:
- CONTAINER_NAME (passed with --name): Names your container whatever you want. This is useful when linking containers.
- PROJECT_DIR (passed with -v): Mounts your local working directory inside the Docker container. Makes doing local development via hot deployment possible.
- PASSENGER_APP_ENV (passed with -e): The Rails environment you wish to load. Can be either development, test, or production (default is development).
- MONGO_LOCALHOST (passed with -e): Name of the container running MongoDB. Even though our two containers are linked, this needs to be set to allow Rails to communicate with the database.
- SENDGRID_USERNAME (passed with -e): The username associated with a Sendgrid account (for sending emails).
- SENDGRID_PASSWORD (passed with -e): The password associated with a Sendgrid account (for sending emails).
- SECRET_KEY_BASE (passed with -e): Sets the Rails SECRET_KEY_BASE environment variable, used mostly by Devise in authentication for cookies.
- SERVICE_ACCOUNT_KEY (passed with -e): Sets the SERVICE_ACCOUNT_KEY environment variable, used for making authenticated API calls to FireCloud & GCP.
- OAUTH_CLIENT_ID (passed with -e): Sets the OAUTH_CLIENT_ID environment variable, used for Google OAuth2 integration.
- OAUTH_CLIENT_SECRET (passed with -e): Sets the OAUTH_CLIENT_SECRET environment variable, used for Google OAuth2 integration.
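To spot-check that these variables made it into a running container, you can inspect its environment (assuming the default container name single_cell):
docker exec single_cell env | grep -E 'PASSENGER_APP_ENV|MONGO_LOCALHOST|OAUTH'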

RUN COMMAND IN DETAIL
The run command explained in its entirety:
- --rm: Tells Docker to automatically clean up the container after exiting.
- -it: Leaves an interactive shell running in the foreground where the output of Nginx can be seen.
- --name CONTAINER_NAME: Names your container whatever you want. This is useful when linking other Docker containers to the portal container, or when connecting to a running container to check logs or environment variables. The default is single_cell.
- -p 80:80 -p 443:443 -p 587:587: Maps ports 80 (HTTP), 443 (HTTPS), and 587 (SMTP) on the host machine to the corresponding ports inside the Docker container.
- --link mongodb:mongodb: Connects our webapp container to the mongodb container, creating a virtual hostname inside the single_cell_docker container called mongodb.
- -v [PROJECT_DIR]/:/home/app/webapp: Mounts your local working directory inside the running Docker container in the correct location for the portal to run. This accomplishes two things:
  - Enables hot deployment for local development
  - Persists all project data past destruction of the Docker container (since we're running with --rm), but not system-level log or tmp files
- -e PASSENGER_APP_ENV=[RAILS_ENV]: The Rails environment. Will default to development, so if you're doing a production deployment, set this accordingly.
- -e MONGO_LOCALHOST=[MONGO_LOCALHOST]: Name of the container running MongoDB. Even though our two containers are linked, this needs to be set to allow Rails to communicate with the database.
- -e SENDGRID_USERNAME=[SENDGRID_USERNAME] -e SENDGRID_PASSWORD=[SENDGRID_PASSWORD]: The credentials for Sendgrid to send emails. Alternatively, you could decide not to use Sendgrid and configure the application to use a different SMTP server (this would be done inside your environment's config file).
- -e SECRET_KEY_BASE=[SECRET_KEY_BASE]: Setting the SECRET_KEY_BASE variable is necessary for creating secure cookies for authentication. This variable automatically resets every time we restart the container.
- -e SERVICE_ACCOUNT_KEY=[SERVICE_ACCOUNT_KEY]: Setting the SERVICE_ACCOUNT_KEY variable is necessary for making authenticated API calls to FireCloud and GCP. This should be a file path, relative to the app root, that points to the JSON service account key file you exported from GCP.
- -e OAUTH_CLIENT_ID=[OAUTH_CLIENT_ID] -e OAUTH_CLIENT_SECRET=[OAUTH_CLIENT_SECRET]: Setting the OAUTH_CLIENT_ID and OAUTH_CLIENT_SECRET variables is necessary for allowing Google user authentication. For instructions on creating OAuth 2.0 Client IDs, refer to the Google OAuth 2.0 documentation.
- single_cell_docker: The name of the image we created earlier. If you chose a different name, please use that here.

MEMORY CONSTRAINTS
When the portal and database are under load, memory usage can become a concern (more in production than in local development), as file parsing is resource-intensive. To address this, running bin/boot_docker -e production will enable memory limits (passed via docker run -m) to prevent either the portal or the database from running out of memory and being killed by the host OS. In production, the Google VM is configured by default with 4 cores and 26GB of RAM. Therefore, the memory defaults in both boot_docker and boot_mongo have been set to keep total memory usage under this limit.
Memory defaults are:
- Portal memory: 12GB (production only)
- MongoDB memory: 12GB (always enabled)
- MongoDB WiredTiger cache size: 4GB (always enabled)
These defaults can be overridden in both boot_docker and boot_mongo using the appropriate flags; to see them, run bin/boot_docker -H and bin/boot_mongo -H. Depending on the resources available in your environment, you may wish to override these values. The underlying Docker and MongoDB settings they control are illustrated below.
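For reference, these defaults map onto standard Docker and mongod options. A sketch of the MongoDB side, assuming the container name and data directory used earlier (the portal side is simply the docker run command shown above with -m 12g appended in production):
docker run -d -m 12g --name mongodb -v /path/to/data/store:/data/db mongo --wiredTigerCacheSizeGB 4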

ADMIN USER ACCOUNTS
The Single Cell Portal has the concept of a 'super-admin' user account, which will allow portal admins to view & edit any studies in the portal for QA purposes, as well as receive certain admin-related emails. This can only be enabled manually through the console.
To create an admin user account:
- Create a user account normally through the UI (using the sign-up or Google auth features)
- Start the portal locally (or ssh into the production VM)
- Connect to the running portal container:
  docker exec -it single_cell bash
- Launch the Rails console for the appropriate environment, e.g.:
  bin/rails console development
- Query for the desired user account:
  user = User.find_by(email: '<email address here>')
- Set the admin flag to true:
  user.update(admin: true)
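Equivalently, assuming the default container name single_cell, the same change can be made in one shot with rails runner (the email address below is a placeholder; set RAILS_ENV=production inside the container for a production instance):
docker exec -it single_cell bin/rails runner "User.find_by(email: 'admin@example.com').update(admin: true)"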

TESTS
UI REGRESSION SUITE

TEST SETUP
All user interface tests are handled through Selenium Webdriver and Chromedriver and are run against a regular instance of the portal, usually in development mode. The test suite is run from the test/ui_test_suite.rb script.
Due to the nature of Docker, the tests cannot be run from inside the container, as the Docker container cannot connect back to Chromedriver and the display from inside the VM. As a result, the UI test suite has no knowledge of the Rails environment or application stack. Therefore, you will need to have a minimal portal environment enabled outside of Docker. The minimum requirements are as follows:
- Gems: rubygems, test-unit, selenium-webdriver (see Gemfile.lock for version requirements)
- Google Chrome, along with 2 Google accounts, one of which needs to be a portal admin account (see above)
- Chromedriver
- FireCloud accounts for both Google accounts (see FIRECLOUD INTEGRATION below)
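One way to set up the gem prerequisites outside of Docker is shown below (match versions to Gemfile.lock); Chromedriver itself can be installed via Homebrew or downloaded from the Chromedriver site:
gem install test-unit selenium-webdriver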

RUNNING UI TESTS
To begin the test suite, launch an instance of the portal in development mode and run the following command in another shell in the portal root directory:
ruby test/ui_test_suite.rb -- -e=(email account #1) -s=(email account #2) -p='(email account #1 password)' -P='(email account #2 password)'
Passwords are required as all portal accounts are actual Google accounts, so we pass in passwords at runtime to allow the script to authenticate as the specified user. Passwords are only stored temporarily in-memory and are not persisted to disk at any point.
Paths to the chromedriver binary and your profile download directory can also be configured with the -c and -d flags, respectively.
In addition to the above configuration options, it is possible to run the UI test suite against a deployed instance of the portal by passing in the base portal URL via the -u flag. Note that all of the above user requirements must be met for whatever instance you test.
There are 3 main groups of tests: admin (study creation & editing), front-end (study data searching & visualization), and cleanup (removing studies created during testing). You can run groups discretely by passing -n /pattern/ to the test suite as follows:
To run all admin tests:
ruby test/ui_test_suite.rb -n /admin/ -- (rest of test parameters)
To run all front-end tests:
ruby test/ui_test_suite.rb -n /front-end/ -- (rest of test parameters)
This can also be used to run smaller groups by refining the regular expression to match only certain names of tests. For example, to run all front-end tests that deal with file downloads:
ruby test/ui_test_suite.rb -n /front-end.*download/ -- (rest of test parameters)
You can also run a single test by passing the entire test name as the name parameter:
ruby test/ui_test_suite.rb -n 'test: admin: create a study' -- (rest of test parameters)
More information on usage & test configuration can be found in the comments at the top of the test suite.

UNIT & INTEGRATION TESTS
There is a small unit & integration test framework for the Single Cell Portal that is run using the built-in Rails test harness, which uses Test::Unit and minitest-rails. These unit & integration tests only cover specific functionality that requires integration with portal models and methods, and therefore cannot be run from the UI test suite.
These tests are run via the usual rake test command after seeding the test database. This is handled via a shell script (bin/run_unit_tests.sh) that will seed the test database, run the tests, and then destroy all created records to clean up after running.
To run the unit & integration test suite, the boot_docker script can be used:
bin/boot_docker -e test -k (service account key path)
This will boot a new instance of the portal in test mode and run all associated tests, not including the UI test suite.
It is also possible to run individual unit test suites by booting the portal as normal (in development mode) and then running the following command after attaching to the running instance (with docker exec -it single_cell bash):
ruby -I test path/to/test/suite.rb
For instance, to run the FireCloudClient unit test suite:
ruby -I test test/models/fire_cloud_client_test.rb

GOOGLE DEPLOYMENT
PRODUCTION
The official production Single Cell Portal is deployed in Google Cloud Platform. The project name is broad-singlecellportal. Only Broad Institute Single Cell Portal team members have access to this instance. If you are a collaborator and require access, please email single_cell_portal@broadinstitute.org.
To access the production instance for maintenance purposes:
- Go to the broad-singlecellportal GCP page
- Select "Compute Engine" from the top-left nav dropdown
- At the bottom of the page is the entry for the production VM (called singlecell-production)
- Click the SSH button under the Connect heading (this will launch an SSH tunnel in a browser window)
Once you have connected, the portal is running in this directory: /home/docker-user/single_cell_portal. All source files are owned by docker-user, so if you need to pull from git, make sure you run sudo -u docker-user -Hs first to preserve the correct file permissions. Any docker commands need to be run as root, so exit out of that shell before running them.
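Putting that together, a typical update-from-git session on the VM looks roughly like this (a sketch; adjust to whatever you are deploying):
sudo -u docker-user -Hs
cd /home/docker-user/single_cell_portal
git pull
exit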
The production URL is: portals.broadinstitute.org/single_cell
If you are deploying your own production instance in a different project, the following VM/OS configurations are recommended:
- VM: n1-highmem-4 (4 vCPUs, 26 GB memory)
- OS: Ubuntu 15.10 (Wily) or later
- Disks: Two standard persistent disks, one for the portal/database, and a second for upload tmp space mounted at [/path/to/portal/root]/data
For more information on formatting and mounting additional persistent disks to a GCP VM, please read the GCP Documentation.
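As a rough sketch of preparing and mounting that second disk (the device name /dev/sdb and the ext4 filesystem are assumptions; follow the GCP documentation above for the authoritative steps):
sudo mkfs.ext4 -m 0 -F /dev/sdb
sudo mkdir -p /home/docker-user/single_cell_portal/data
sudo mount -o discard,defaults /dev/sdb /home/docker-user/single_cell_portal/data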

PRODUCTION DOCKER COMMANDS
- To bounce MongoDB:
  docker restart mongodb
- To bounce the portal:
  docker restart single_cell
  Note: MongoDB must be running to restart the portal, otherwise it cannot instantiate the link to the database.
- To connect to the running MongoDB container:
  docker exec -it mongodb mongo
- To connect to the running portal container:
  docker exec -it single_cell bash
If you have pulled changes from source that require re-building the container, follow this checklist from inside the project directory as root:
- Rebuild the docker image:
  docker build -t single_cell_docker -f Dockerfile .
- Stop the portal:
  docker stop single_cell
- Remove the container instance:
  docker rm single_cell
- Launch a new instance of the portal with the updated container:
  bin/boot_docker -u (sendgrid username) -P (sendgrid password) -e production -p (prod database password) -h (production hostname) -k (service account key path) -o (oauth client id) -S (oauth client secret)
You will need to rebuild the docker image if you do any of the following:
- Edit the Dockerfile
- Edit any scripts or configuration files listed in the Dockerfile:
  - Gemfile (including Gemfile.lock if versions have changed)
  - set_user_permissions.bash
  - rails_startup.bash
  - nginx.conf
  - webapp.conf
When you launch a new instance of the portal, you should get a response that looks like a giant hexadecimal string - this is the instance ID of the new container. Once the container is running, you can connect to it with the docker exec command and perform various Rails-specific actions, such as:
- Re-index the database:
  bin/rake RAILS_ENV=production db:mongoid:create_indexes
- Launch the Rails console (to inspect database records, for instance):
  bin/rails console production

STAGING
There is also a staging instance of the Single Cell Portal used for testing new functionality in a production-like setting. This is running inside the same project on the singlecell-staging VM. The staging instance URL is single-cell-staging.broadinstitute.org/single_cell.
The run command for staging is identical to that of production, with the exception of passing -e staging as the environment, and any differing values for hostnames/client secrets/passwords as needed.

FIRECLOUD INTEGRATION
The Single Cell Portal stores uploaded study data files in FireCloud workspaces, which in turn store data in GCP buckets. This is all managed through a GCP service account, which owns all portal workspaces and manages them on behalf of portal users. All portal-related workspaces are within the single-cell-portal namespace, which, it should be noted, is a separate project from the one the portal operates out of.
When a study is created through the portal, a call is made to the FireCloud API to provision a workspace and set the ACL to allow owner access to the user who created the study, and read/write access to any designated shares. Every FireCloud workspace comes with a GCP storage bucket, which is where all uploaded files are deposited. No ACLs are set on individual files as all permissions are inherited from the workspace itself. Files are first uploaded temporarily locally to the portal (so that they can be parsed if needed) and then sent to the workspace bucket in the background after uploading and parsing have completed.
If a user has not signed up for a FireCloud account, they will receive an invitation email from FireCloud asking them to complete their registration. While they will be able to interact with their study/data through the portal without completing their registration, they will not be able to load their FireCloud workspace or access the associated GCP bucket until they have done so.
Deleting a study will also delete the associated workspace, unless the user specifies that they want the workspace to be persisted. New studies can also be initialized from an existing workspace (specified by the user during creation) which will synchronize all files and permissions.
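If you need to verify what the portal service account can see in FireCloud, one illustrative check is to list its workspaces directly against the FireCloud API (this assumes the standard api.firecloud.org base URL and that the service account key has been activated locally with gcloud):
gcloud auth activate-service-account --key-file=/path/to/service_account_key.json
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" https://api.firecloud.org/api/workspaces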

OTHER FEATURES
ADMIN CONTROL PANEL, DOWNLOAD QUOTAS & ACCESS REVOCATION
All portal users are required to authenticate before downloading data as we implement daily per-user quotas. These are configurable through the admin control panel, which can be accessed only by portal admin accounts (available through the profile menu or at /single_cell/admin).
There are currently 2 configuration options:
- Daily download quota limit (defaults to 2 terabytes, but is configurable to any amount, including 0)
- Disabling all FireCloud access
Disabling all FireCloud access is achieved by revoking all access to studies directly in FireCloud and using the portal permission map (study ownership & shares) as a backup cache. This will prevent anyone from downloading data either through the portal or directly from the workspaces themselves. This will have the side effect of disallowing any edits to studies while in effect, so this feature should only be used as a last resort to curtail runaway downloads. While access is disabled, only the portal service account will have access to workspaces.
Re-enabling FireCloud access will restore all permissions back to their previous state.

MAINTENANCE MODE
The production Single Cell Portal has a defined maintenance window every Monday from 9:30-10:30AM EST. To minimize user disruption when doing updates during that window (or hotfixes at any other time), the portal has a 'maintenance mode' feature that will return a 503 and redirect all incoming traffic to a static maintenance HTML page.
To use this feature, run the bin/enable_maintenance.sh [on/off] script accordingly.