Web app providing API endpoints and serving a SPA frontend.
NOTE: Active development on this project has moved to https://gitlab.com/upchieve/subway; no more pushes should go straight to the GitHub repo.
This repository is the backend API server and queue worker only. To work on the frontend, you also need to follow the readme for the frontend repo.
The recommended tool for runtime version management is nvm. To use nvm on Windows, first install the appropriate Linux shell distribution using WSL (Windows Subsystem for Linux). We currently run on Node v20.10.0; you can switch to this using:
$ nvm install v20.10.0 && nvm use v20.10.0
After switching Node versions using nvm, you will need to run
$ npm install
Next, install Docker and start it according to the instructions for your operating system.
On Linux systems you may need to install docker-compose manually; on Windows and macOS it ships with base Docker. A docker-compose YAML file specifies how to spin up Mongo, Redis, PostgreSQL, and PGAdmin containers to support the server, and also seeds the PostgreSQL database with test data.
- Run the following command to start the containers
$ docker-compose --profile dev up -d
- Confirm PostgreSQL is running and the database is properly seeded by making a query in a DB admin tool (a code-based check is also sketched below):
  - connect via
$ psql --host 127.0.0.1 --port 5432 --username admin --dbname upchieve
    and password Password123
  - OR use PGAdmin at http://localhost:80 with username admin@upchieve.org and password Password123
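If you'd rather verify the seed data from code than from a DB admin tool, a minimal sketch using the node-postgres (`pg`) client could look like the following. This script is not part of the repo; the `upchieve.schools` table is borrowed from the migration example further down purely as a sanity query.

```ts
// verify-seeds.ts — illustrative only, not part of this repository.
import { Client } from 'pg'

async function main() {
  // Connection details match the docker-compose PostgreSQL container above.
  const client = new Client({
    host: '127.0.0.1',
    port: 5432,
    user: 'admin',
    password: 'Password123',
    database: 'upchieve',
  })
  await client.connect()
  const result = await client.query(
    'SELECT COUNT(*) AS school_count FROM upchieve.schools'
  )
  console.log(`Seeded schools: ${result.rows[0].school_count}`)
  await client.end()
}

main().catch(err => {
  console.error('Could not query the seeded database:', err)
  process.exit(1)
})
```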
When you want to stop and remove the containers, run:
$ docker-compose --profile dev down
The steps below have been tested on macOS.
- Confirm the database container dependencies above are running with
docker ps
- Custom properties are currently required for the server to connect to its data sources when developing locally, so run the following commands (or add them to your shell profile):
export SUBWAY_REDIS_HOST=localhost
export SUBWAY_DB_HOST=localhost
- (optional) If you want to test Twilio voice calling functionality, set the host property to [your public IP address]:3000 (minus the brackets), and configure your router/firewall to allow connections to port 3000 from the Internet. Twilio will need to connect to your system to obtain TwiML instructions.
- (optional) Run
npm run dev:worker
to start the worker process. The dev worker will automatically attempt to connect to your local Redis instance and read jobs from there. Additionally, you can run
npm run add-cron-jobs
to add all repeatable jobs to the job queue.
Once you have the dependencies installed and running, you can run the following:
$ npm run dev:backend
to start the dev server and watch for changes to the server code. Once you also have the frontend running, you can visit http://localhost:8080 and you're good to go!
If you change anything in the .sql files in server/models, run
npm run pgtyped
to pick up the changes and regenerate the associated .ts files. This generates TypeScript versions of the queries that can be referenced in code, as well as entity types.
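For illustration, here is a rough sketch of how a pgTyped-generated query might be consumed from application code. The file, query, and type names below are hypothetical; the real ones depend on what is defined in the .sql files under server/models.

```ts
// Hypothetical usage of a query generated by `npm run pgtyped`.
import { Pool } from 'pg'
// Assumed to be generated from a .sql file such as server/models/School/school.sql.
import { getSchoolById, IGetSchoolByIdResult } from './school.queries'

const pool = new Pool({ host: process.env.SUBWAY_DB_HOST ?? 'localhost' })

export async function findSchool(
  id: string
): Promise<IGetSchoolByIdResult | undefined> {
  // Generated queries expose a typed run(params, connection) method.
  const rows = await getSchoolById.run({ id }, pool)
  return rows[0]
}
```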
All database administration files live in /database. The db_init directory houses files used to bring up a fresh database for local dev or staging. To bring up a fresh database, run:
1. schema.sql to create the UPchieve schema
2. auth.sql to create relevant app roles and grant them permission over the schema
3. test_seeds.sql to fill the db with static seeds and test data for local development
4. seed_migrations.sql to populate the public.seed_migrations table so we know how to apply future seed migrations
The following npm scripts are available for managing the local database:
1. db:reset: resets the local postgres container's upchieve database to an empty state
2. db:schema: applies db_init/schema.sql to the local db
3. db:auth: applies db_init/auth.sql to the local db
4. db:seeds: applies db_init/test_seeds.sql to the local db
5. db:reset-schema: runs 1-3 above
6. db:reset-seeds: runs 1-4 above; equivalent to restarting the docker container
7. db:build-seeds: runs 1-3, builds static seeds from seeds/static, and applies all seed migrations
8. db:dump: dumps the contents of the local db to schema.sql, test_seeds.sql, and seed_migrations.sql
9. db:schema-new: creates a new blank schema migration (same as dbmate new)
10. db:seeds-new: creates a new blank seed migration (same as dbmate new)
11. db:schema-up: applies any pending schema migrations without writing out the new schema
12. db:schema-down: rolls back the most recently applied schema migration without writing out the new schema
13. db:seeds-up: applies any pending seed migrations
When writing a schema migration, include both rollout and rollback instructions. For example:
-- migrate:up
ALTER TABLE upchieve.schools
ADD COLUMN legacy_city_name text;
-- migrate:down
ALTER TABLE upchieve.schools
DROP COLUMN legacy_city_name;
Test that the rollback script actually works by running
npm run db:schema-down
Note that a seeds-down script does not exist because writing reversible seed migrations is often more trouble than it's worth.
Notes:
- If the database/schema ends up in an irrecoverable state, you can drop everything with npm run db:reset-seeds to get the database to a fresh state (alternatively, destroy and rebuild the container)
- After verifying the migrations are good, dump the schema and data for the next developer with npm run db:dump
- Everything in db_init is programmatically generated and can be ignored in diff examinations
The database is populated with the following users for local development:
| email | password | properties |
|---|---|---|
| student1@upchieve.org | Password123 | approved school |
| student2@upchieve.org | Password123 | partner student, approved school |
| student3@upchieve.org | Password123 | partner student, no school |
| volunteer1@upchieve.org | Password123 | approved, onboarded, partner volunteer |
| volunteer2@upchieve.org | Password123 | approved, onboarded, gets special reporting |
| volunteer3@upchieve.org | Password123 | approved, onboarded, open sign up, different time zone |
| volunteer4@upchieve.org | Password123 | approved, not onboarded |
| volunteer5@upchieve.org | Password123 | not approved, not onboarded |
| volunteer6@upchieve.org | Password123 | admin |
The app is split into two components: the server/worker and the frontend Vue SPA. Server code is found in the server directory, and SPA code in a separate repository.
The server folder of the repository provides the bootstrap file main.ts and a package definitions file.
config.ts contains a map of configuration keys for running the server. All keys and sensitive information should be placed in this file.
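For illustration only, a configuration map that respects the SUBWAY_DB_HOST and SUBWAY_REDIS_HOST overrides from the local setup steps above might look roughly like this (the real config.ts has its own shape and many more keys):

```ts
// Illustrative sketch of a config map; not the actual contents of config.ts.
const config = {
  database: {
    host: process.env.SUBWAY_DB_HOST ?? 'localhost',
    port: 5432,
    name: 'upchieve',
  },
  redis: {
    host: process.env.SUBWAY_REDIS_HOST ?? 'localhost',
    port: 6379,
  },
}

export default config
```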
Model definitions that map to database models, along with related methods to act on those models, such as parsing, validation, and data transformations. Ideally, models are encapsulated in their own files and only expose methods that return interfaces of data, rather than actual Mongo documents. We are in the process of migrating to this practice across all models.
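A minimal sketch of that pattern, with made-up names, might look like this: the module exposes a plain interface and a parse/transform step so callers never handle raw documents or rows.

```ts
// Hypothetical model module; names are illustrative.
export interface Student {
  id: string
  email: string
  approvedSchool: boolean
}

// Parse/validate an untyped row or document into the exposed interface.
export function parseStudent(row: Record<string, unknown>): Student {
  if (typeof row.id !== 'string' || typeof row.email !== 'string') {
    throw new Error('Invalid student record')
  }
  return {
    id: row.id,
    email: row.email,
    approvedSchool: Boolean(row.approved_school),
  }
}
```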
Directory structure mimics the endpoint structure exposed by the server. Each file provides one or more endpoint routes, responsible for request acceptance/rejection and error handling.
Routes use services to perform the business logic of the server, providing separation of concerns: the services have no need to be aware of how the endpoints work. Instead, a controller provides ways for the routes to trigger something (for example, a user update).
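As a rough sketch (assuming an Express-style router; the route and service names here are made up), the split looks like this:

```ts
import { Router, Request, Response } from 'express'

// In the real codebase this would live in its own service module; it is
// inlined here only to keep the sketch self-contained.
const UserService = {
  async updateUser(id: string, update: { firstName?: string }) {
    // ...business logic: validation, persistence, side effects...
    return { id, ...update }
  },
}

export function routeUsers(router: Router): void {
  router.put('/users/:id', async (req: Request, res: Response) => {
    try {
      // The route only accepts/rejects the request and handles errors;
      // the business logic stays in the service.
      const updated = await UserService.updateUser(req.params.id, req.body)
      res.json({ user: updated })
    } catch (err) {
      res.status(500).json({ err: (err as Error).message })
    }
  })
}
```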
See all current endpoints in the Swagger UI documentation. If you have the local backend dev server running, you can go here for a local version.
Our application runs several asynchronous jobs. These jobs are put into a queue, managed by BullMQ, that lives in a Redis instance. These jobs might be triggered programmatically, on a schedule, or manually using BullMQ's UI called TaskForce.sh.
- Job definitions live in worker/jobs
- Jobs need to be registered in the jobProcessors list in worker/jobs/index.ts (a sketch of this layout follows below)
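A rough sketch of that layout, with a made-up job name, could look like the following (the actual registration shape in worker/jobs/index.ts may differ):

```ts
// worker/jobs/sendWelcomeEmail.ts (hypothetical)
import { Job } from 'bullmq'

export async function sendWelcomeEmail(job: Job<{ userId: string }>): Promise<void> {
  const { userId } = job.data
  // ...do the actual work for this job...
  console.log(`Sending welcome email to user ${userId}`)
}

// worker/jobs/index.ts (hypothetical registration)
export const jobProcessors = [
  { name: 'SendWelcomeEmail', processor: sendWelcomeEmail },
]
```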
There are three ways to enqueue jobs:
- Schedule them in scripts/add-cron-jobs.ts, which will insert them into the local Redis database. Do this for jobs that need to repeat regularly.
- Programmatically (for an example, search the code base for QueueService.add(...); a minimal sketch is also shown after this list)
- Manually using the TaskForce.sh UI (ask a teammate to add you!)
  - Go to TaskForce -> Dashboard -> Production queue -> "Add a new job"
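For the programmatic route, a minimal BullMQ sketch is below. The queue name and connection details are assumptions; in the codebase the call goes through QueueService.add(...), so search for that rather than copying this verbatim.

```ts
import { Queue } from 'bullmq'

const queue = new Queue('main', {
  connection: {
    host: process.env.SUBWAY_REDIS_HOST ?? 'localhost',
    port: 6379,
  },
})

async function enqueueExample() {
  // Repeatable jobs can also pass a `repeat` option, which is what
  // scripts/add-cron-jobs.ts does for cron-style scheduling.
  await queue.add('SendWelcomeEmail', { userId: '123' })
  await queue.close()
}

enqueueExample().catch(console.error)
```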
You can run the worker queue locally as well as enqueue specific jobs:
- To run the queue, do
npm run dev:worker
- To enqueue jobs locally, we use the script server/scripts/testing-jobs.ts
  - Simply edit the value of jobToQueue with the job you want
  - Then, in another terminal, run the script to enqueue the job:
npx ts-node server/scripts/testing-jobs.ts
- To connect your local Redis instance to Taskforce.sh, copy the connection token here, then run the following command:
npx taskforce --team "Engineering" -n "<your name> local" -t connection-token-goes-here
You can visualize the state of the application's sockets using the Socket.IO Admin UI. In non-development environments, you will need to authenticate to view the UI. See 1Pass for login instructions.
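For reference, wiring a Socket.IO server up to the Admin UI typically looks like the sketch below (using the @socket.io/admin-ui package); this is illustrative rather than a description of how it is configured in this repo, and production-style auth follows the 1Pass instructions rather than this example.

```ts
import { Server } from 'socket.io'
import { instrument } from '@socket.io/admin-ui'

// The Admin UI is served from https://admin.socket.io and connects to your
// server over CORS, so that origin must be allowed.
const io = new Server(3001, {
  cors: { origin: ['https://admin.socket.io'], credentials: true },
})

// No auth locally; non-development environments require credentials.
instrument(io, { auth: false })
```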