Mozilla Experimenter
Experimenter is a platform for managing experiments in Mozilla Firefox.

Important Links

Check out the 🌩 Nimbus Documentation Hub or go to the repository that houses those docs.

Link            | Prod                                     | Staging                                       | Local Dev (Default)
Legacy Home     | experimenter.services.mozilla.com        | stage.experimenter.nonprod.dataops.mozgcp.net | https://localhost
Nimbus Home     | /nimbus                                  | /nimbus                                       | /nimbus
Nimbus REST API | /api/v6/experiments/                     | /api/v6/experiments/                          | /api/v6/experiments/
GQL Playground  | /api/v5/nimbus-api-graphql               | /api/v5/nimbus-api-graphql                    | /api/v5/nimbus-api-graphql
Storybook       | Storybook Directory                      |                                               | https://localhost:3001
Remote Settings | settings-writer.prod.mozaws.net/v1/admin | settings-writer.stage.mozaws.net/v1/admin     | http://localhost:8888/v1/admin

Installation

General Setup

On all platforms:

  1. Clone the repo

    git clone <your fork>
    
  2. Copy the sample env file

    cp .env.sample .env
    
  3. Set DEBUG=True for local development

    vi .env
    
  4. Create a new secret key and put it in .env

    make secretkey
    
  5. Run tests

    make check
    
  6. Setup the database

    make refresh
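
Step 4's make secretkey produces a random value for SECRET_KEY in .env. As an illustrative sketch only (the Makefile's actual implementation may differ), an equivalent in plain Python could look like this; the 50-character length and alphabet mirror Django's get_random_secret_key helper:

```python
import secrets

# Alphabet used by Django's get_random_secret_key (assumed here).
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)"

def generate_secret_key(length: int = 50) -> str:
    # Build the key one cryptographically random character at a time.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

key = generate_secret_key()
print(key)
```

The resulting string can be pasted into .env as SECRET_KEY=<value>.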
    

Fully Dockerized Setup (continuation from General Setup 1-6)

  1. Run a dev instance

    make up
    
  2. Navigate to it and add an SSL exception to your browser

    https://localhost/
    

Semi Dockerized Setup (continuation from General Setup 1-6)

One might choose the semi-dockerized approach for:

  1. faster startup/teardown time (not having to rebuild/start/stop containers)
  2. better IDE integration

  1. Prerequisites (macOS instructions)

    brew install postgresql llvm openssl yarn
    
    echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.bash_profile
    export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/opt/openssl/lib/
    
  2. Install dependencies

    source .env
    
    cd app && poetry install
    
    yarn install
    
  3. env values

    .env (set at root):
    DEBUG=True
    DB_HOST=localhost
    HOSTNAME=localhost
    
  4. Start postgresql, redis, autograph, kinto

    make up_db
    
  5. Django app

    # in app
    
    poetry shell
    
    yarn workspace @experimenter/nimbus-ui build
    yarn workspace @experimenter/core build
    ./manage.py runserver 0.0.0.0:7001
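
The .env values from step 3 are plain KEY=VALUE lines. As an illustrative sketch only (the project actually loads them via source .env or Docker's env handling, not this code), parsing them looks like:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# The three values from step 3 above.
sample = "DEBUG=True\nDB_HOST=localhost\nHOSTNAME=localhost\n"
```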
    

Pro-tip: we have had at least one large code refactor. You can ignore specific large commits when blaming by setting the Git config's ignoreRevsFile to .git-blame-ignore-revs:

git config blame.ignoreRevsFile .git-blame-ignore-revs

Google Credentials for Jetstream

On certain pages an API endpoint is called to receive experiment analysis data from Jetstream to display visualization tables. To see experiment visualization data, you must provide GCP credentials.

  1. Generate a GCP private key file.
  • Ask in #experimenter for the GCP link to create a new key file.
  • Add Key > Create New Key > JSON > save this file.
  • Do not lose or share this file. It's unique to you and you'll only get it once.
  2. Rename the file to google-credentials.json and place it anywhere inside the /app directory.
  3. Update your .env so that GOOGLE_APPLICATION_CREDENTIALS points to this file. If your file is inside the /app directory it would look like this:
    GOOGLE_APPLICATION_CREDENTIALS=/app/google-credentials.json
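
If the credentials aren't being picked up, a quick sanity check on the key file can help. This is a hypothetical helper (not part of Experimenter); the field names follow the standard GCP service-account JSON layout:

```python
import json

# Fields present in every GCP service-account key file.
REQUIRED = {"type", "project_id", "private_key", "client_email"}

def looks_like_service_account(info: dict) -> bool:
    # A valid key file declares type "service_account" and carries
    # at least the REQUIRED fields.
    return info.get("type") == "service_account" and REQUIRED <= info.keys()

# Illustrative fake payload -- never commit or share a real key file.
fake = json.loads(
    '{"type": "service_account", "project_id": "example-project",'
    ' "private_key": "...", "client_email": "me@example.iam.gserviceaccount.com"}'
)
```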
    

Google Cloud Bucket for Media Storage

We support user uploads of media (e.g. screenshots) for some features.

In local development, the default is to store these files in /app/media using Django's FileSystemStorage class and the MEDIA_ROOT and MEDIA_URL settings.
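
As a rough sketch of how those two settings interact (the values mirror the local-dev defaults described above; the helpers are illustrative, not Django's actual API):

```python
from pathlib import Path

# Local-dev defaults described above (assumed).
MEDIA_ROOT = Path("/app/media")
MEDIA_URL = "/media/"

def stored_path(name: str) -> Path:
    # Where FileSystemStorage would write the uploaded file on disk.
    return MEDIA_ROOT / name

def public_url(name: str) -> str:
    # The URL the file is served from during development.
    return MEDIA_URL + name
```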

In production, a GCP bucket and credentials are required.

The bucket name is configured with the UPLOADS_GS_BUCKET_NAME setting. For example:

UPLOADS_GS_BUCKET_NAME=nimbus-experimenter-media-dev-uploads

For local testing of a production-like environment, the credentials should be configured using the GOOGLE_APPLICATION_CREDENTIALS environment variable as described in the previous section on Google Credentials for Jetstream.

In the real production deployment, credentials are configured via workload identity in Google Kubernetes Engine.

Usage

Experimenter uses Docker for all development, testing, and deployment.

Building

make build

Build the application container by executing the build script

make compose_build

Build the supporting services (nginx, postgresql) defined in the compose file

make ssl

Create dummy SSL certs to use the dev server over a locally secure connection. This helps test client behaviour with a secure connection. This task is run automatically when needed.

make kill

Stop and delete all docker containers. WARNING: this will remove your database and all data. Use this to reset your dev environment.

make migrate

Apply all django migrations to the database. This must be run after removing database volumes before starting a dev instance.

make load_dummy_experiments

Populates the database with dummy experiments of all types/statuses using the test factories

make refresh

Run kill, migrate, load_locales_countries, and load_dummy_experiments. Useful for resetting your dev environment when switching branches or after package updates.

Running a dev instance

make up

Start a dev server listening on port 80 using the Django runserver. It is useful to run make refresh first to ensure your database is up to date with the latest migrations and test data.

make up_db

Start postgresql, redis, autograph, kinto on their respective ports to allow running the Django runserver and yarn watchers locally (non containerized)

make up_django

Start Django runserver, Celery worker, postgresql, redis, autograph, kinto on their respective ports to allow running the yarn watchers locally (non containerized)

make up_detached

Start all containers in the background (not attached to shell). They can be stopped using make kill.

make update_kinto

Pull in the latest Kinto Docker image. Kinto is not automatically updated when new versions are available, so this command can be used occasionally to stay in sync.

Running tests and checks

make check

Run all test and lint suites; this is run in CI on all PRs and deploys.

make py_test

Run only the python test suite.

make bash

Start a bash shell inside the container. This lets you interact with the containerized filesystem and run Django management commands.

Helpful Python Tips

You can run the entire python test suite without coverage using the Django test runner:

./manage.py test

For faster performance you can run all tests in parallel:

./manage.py test --parallel

You can run only the tests in a certain module by specifying its Python import path:

./manage.py test experimenter.experiments.tests.api.v5.test_serializers

For more details on running Django tests, refer to the Django test documentation.

To debug a test, you can use ipdb by placing this snippet anywhere in your code, such as within a test method or inside some application logic:

import ipdb
ipdb.set_trace()

Then invoke the test using its full path:

./manage.py test experimenter.some_module.tests.some_test_file.SomeTestClass.test_some_thing

And you will enter an interactive IPython shell at the point where you placed the ipdb snippet, allowing you to introspect variables and call methods.

For coverage you can use pytest, which will run all the python tests and track their coverage, but it is slower than using the Django test runner:

pytest --cov --cov-report term-missing

You can also enter a Python shell to import and interact with code directly, for example:

./manage.py shell

And then you can import and execute arbitrary code:

from experimenter.experiments.models import NimbusExperiment
from experimenter.experiments.tests.factories import NimbusExperimentFactory
from experimenter.kinto.tasks import nimbus_push_experiment_to_kinto

experiment = NimbusExperimentFactory.create_with_status(NimbusExperiment.Status.DRAFT, name="Look at me, I'm Mr Experiment")
nimbus_push_experiment_to_kinto(experiment.id)

Helpful Yarn Tips

You can also interact with the yarn commands, such as checking TypeScript for Nimbus UI:

yarn workspace @experimenter/nimbus-ui lint:tsc

Or the test suite for Nimbus UI:

yarn workspace @experimenter/nimbus-ui test:cov

For a full reference of all the common commands that can be run inside the container, refer to this section of the Makefile

make integration_test_legacy

Run the integration test suite for experimenter inside a containerized instance of Firefox. You must also be already running a make up dev instance in another shell to run the integration tests.

make integration_test_nimbus

Run the integration test suite for nimbus inside a containerized instance of Firefox. You must also be already running a make up dev instance in another shell to run the integration tests.

make integration_vnc_up

First start a prod instance of Experimenter with:

make refresh && make up_prod_detached

Then start the VNC service:

make integration_vnc_up

Then open your VNC client (Safari supports this on macOS, or just use VNC Viewer) and open vnc://localhost:5900 with password secret. Right-click on the desktop and select Applications > Shell > Bash and enter:

cd app
sudo mkdir -m 0777 tests/integration/.tox/logs
tox -c tests/integration/

This should run the integration tests and watch them run in a Firefox instance you can watch and interact with.

Integration Test options

  • TOX_ARGS: Tox commandline variables.
  • PYTEST_ARGS: Pytest commandline variables.

An example using PYTEST_ARGS to run one test:

make integration_test_legacy PYTEST_ARGS="-k test_addon_rollout_experiment_e2e"

Accessing Remote Settings locally

In development you may wish to approve or reject changes to experiments as if they were on Remote Settings. You can do so here: http://localhost:8888/v1/admin/

There are three accounts you can log into Kinto with depending on what you want to do:

  • admin / admin - This account has permission to view and edit all of the collections.
  • experimenter / experimenter - This account is used by Experimenter to push its changes to Remote Settings and mark them for review.
  • review / review - This account should generally be used by developers testing the workflow; it can be used to approve/reject changes pushed from Experimenter.

The admin and review credentials are hard-coded here, and the experimenter credentials can be found or updated in your .env file under KINTO_USER and KINTO_PASS.

Any change in remote settings requires two accounts:

  • One to make changes and request a review
  • One to review and approve/reject those changes

Any of the accounts above can be used for either of those two roles, but your local Experimenter is configured to make its changes through the experimenter account, so that account can't also be used to approve/reject those changes; hence the existence of the review account.
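
These local Kinto accounts authenticate with HTTP Basic auth (an assumption based on Kinto's accounts plugin; scripts hitting the local API would send a header like the one built below). This sketch only constructs the header and does not contact the server:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP Basic auth: base64-encode "user:password" and prefix "Basic ".
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# e.g. the review account from the list above:
print(basic_auth_header("review", "review"))
```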

For more detailed information on the Remote Settings integration please see the Kinto module documentation.

Storybook in CircleCI

This project uses Storybook as a tool for building and demoing user interface components in React.

For most test runs in CircleCI, a static build of Storybook for the relevant commit is published to a website on the Google Cloud Platform using mozilla-fxa/storybook-gcp-publisher. Refer to that tool's github repository for more details.

You can find the Storybook build associated with a given commit on GitHub via the "storybooks: pull request" details link, accessible by clicking the green checkmark next to the commit title.


The Google Cloud Platform project dashboard for the website can be found here, if you've been given access.

For quick reference, here are a few CircleCI environment variables used by storybook-gcp-publisher that are relevant to FxA operations in CircleCI. Occasionally they may need maintenance or replacement - e.g. in case of a security incident involving another tool that exposes variables.

  • STORYBOOKS_GITHUB_TOKEN - personal access token on GitHub for use in posting status check updates

  • STORYBOOKS_GCP_BUCKET - name of the GCP bucket to which Storybook builds will be uploaded

  • STORYBOOKS_GCP_PROJECT_ID - the ID of the GCP project to which the bucket belongs

  • STORYBOOKS_GCP_CLIENT_EMAIL - client email address from GCP credentials with access to the bucket

  • STORYBOOKS_GCP_PRIVATE_KEY_BASE64 - the private key from GCP credentials, encoded with base64 to accommodate linebreaks

Frontend

Experimenter has two front-end UIs:

  • core is the legacy UI used for Experimenter intake, which will remain until nimbus-ui supersedes it
  • nimbus-ui is the Nimbus Console UI for Experimenter that is actively being developed

Learn more about the organization of these UIs here.

Also see the nimbus-ui README for relevant Nimbus documentation.

API

API documentation can be found here

Contributing

Please see our Contributing Guidelines

License

Experimenter uses the Mozilla Public License