Pycroft is the current user management system of the AG DSN student network. It is based on Flask and expects a PostgreSQL database, which it accesses via the SQLAlchemy ORM.
A basic understanding of git is advisable.
The first step should be to clone this repository via `git clone <url>`, using the clone URL shown above this very readme.
An easy way of doing the setup is by using docker-compose. Follow the installation guides here and here. You will need at least Docker Engine 17.06.0+ and Docker Compose 1.16.0+.
Also, note that you might have to add your user to the `docker` group to run docker as a non-root user:

```bash
sudo usermod -aG docker $(whoami)
```
After adding yourself to a new group, you need to start a new session, e.g. by logging out and back in.
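You can quickly verify that your installation meets the version requirements mentioned above:

```bash
docker --version          # should report at least 17.06.0
docker-compose --version  # should report at least 1.16.0
```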
You should now be able to run `docker-compose config` and see the current configuration.
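For a quick overview of the defined services, `docker-compose config` also accepts a `--services` flag:

```bash
# print the service names of the default compose file, one per line
docker-compose config --services
```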
We provide three different container environments for the project:
- dev: Development environment. The container images contain helpful tools, the containers use persistent volumes, and the project directory of your local machine is mounted inside the container.
- test: Test environment. Almost identical to the development environment, but persistent volumes are replaced by tmpfs file systems for improved performance and ephemerality.
- prod: Production environment. Contains only what is required to run Pycroft without development tools.
For each environment a docker-compose file is provided. The following diagram shows all services/containers, images and volumes at a glance:
- A `base` service/container for creating the base image `agdsn/pycroft-base`, based on the Debian variant of Docker's official Python image (`python`). The service/container is not actually needed; it's only used to build the base image. The base image contains the basic system software required to run Pycroft. A `pycroft` user and group with the UID and GID specified as build arguments are created in the image. The UID and GID of this user should match those of your user on your development machine, because the development service bind mounts the project directory of your local machine inside the container. The home directory of the `pycroft` user is created at `/opt/pycroft`. A virtual environment (venv) is created at `/opt/pycroft/venv` and automatically activated by the image's entrypoint.
- A `dev-app` service/container based on `agdsn/pycroft-dev`, which is derived from `agdsn/pycroft-base`. The development image contains additional packages for development, e.g. `gcc` and `yarn`. The service uses two persistent volumes:
  - the home directory `/opt/pycroft` of the `pycroft` user, which contains, among other things, the virtual environment, the pip cache, and the `.bash_history`,
  - the Pycroft sources on your local machine at `/opt/pycroft/app`.
- A `test-app` service/container based on the `agdsn/pycroft-dev` image, which runs unit and integration tests. The database tests are run against an optimized in-memory database.
- A `prod-app` service/container based on `agdsn/pycroft-prod`, which in turn is based on `agdsn/pycroft-base` and contains only the basics required to run Pycroft, without development tools such as `gcc` or `yarn`. Pycroft and its dependencies are built using an instance of the `agdsn/pycroft-develop` image via the multi-stage builds feature of Docker.
- `dev-db` and `test-db` services/containers based on the official PostgreSQL image, which provide a development and a test database respectively. The test database uses `tmpfs` for the data directory to improve performance; the dev database uses a persistent volume for the data directory.
- `dev-ldap` and `test-ldap` services/containers based on the `dinkel/openldap` image, which provide a development and a test LDAP server respectively.
- `dev-mq` and `test-mq` services/containers based on the official RabbitMQ image, which provide a development and a test message queue respectively.
The separate services for dev and test are mainly for isolation (you don't
want tests to affect your development instance and vice versa) and also
for performance (unit tests should be quick).
There are no `prod-` services for `db`, `ldap`, and `mq`, because the production instances of these services are typically managed outside of Pycroft.
All services of the same type (dev and test) share the same network namespace, i.e. you can reach the database server on `127.0.0.1` from `dev-app`, although it's running in a different container.
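As a small illustration (a sketch, assuming the dev containers are already running as described below, and that the database listens on its default port 5432):

```bash
# from inside dev-app, the dev database is reachable on localhost
docker-compose exec dev-app python3 -c \
    "import socket; socket.create_connection(('127.0.0.1', 5432)); print('db reachable')"
```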
The services are put into different compose files for convenience:

- `docker-compose.base.yml`: Common definitions of services
- `docker-compose.dev.yml`: Development services
- `docker-compose.test.yml`: Test services
- `docker-compose.prod.yml`: Production services
The dev environment is the default environment. The default compose file `docker-compose.yml` is a symlink to `docker-compose.dev.yml`.
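You can verify this on your checkout:

```bash
ls -l docker-compose.yml  # should point to docker-compose.dev.yml
```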
To set the `UID` and `GID` build arguments of the `agdsn/pycroft-base` image with docker-compose, use a docker-compose `.env` file:

```
UID=<your-uid>
GID=<your-gid>
```
An `.env` template is included as `example.env` in the project root. Copy the example to `.env` and set the correct values for your user; docker-compose will automatically pick up the contents of this file. The example also includes other useful environment variables, such as `COMPOSE_PROJECT_NAME`.
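A minimal way to do this, assuming you are in the project root:

```bash
cp example.env .env
# find the values to fill in for UID and GID:
id -u  # your UID
id -g  # your GID
```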
You can also use environment variables from your shell to specify the UID/GID
build arguments when invoking `docker-compose`.
The docker-compose files pass the UID
and GID
environment variables as build
arguments to docker.
However, don't be fooled by your shell into feeling safe just because the following command outputs your UID:

```bash
echo $UID
```
Bash and zsh automatically define this variable, but do not export it:
```bash
python3 -c 'import os; print(os.getenv("UID"))'
```
You have to explicitly export the variable:
```bash
export UID
# Bash does not set GID, zsh does, so you can omit the assignment with zsh:
export GID=$(id -g)
```
You should put these lines somewhere in your shell's startup script (e.g. `.profile` in your `$HOME`), so that they are always defined, if you want to rely on these variables instead of an `.env` file.
You could also use the `--build-arg` option of `docker-compose build`, but this is not advised, as it can easily be forgotten.
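For completeness, the discouraged variant would look like this (a sketch):

```bash
docker-compose build --build-arg UID=$(id -u) --build-arg GID=$(id -g) base
```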
docker-compose uses the name of the directory the compose file resides in as the project name. By default, this name is used as a prefix for all objects (containers, volumes, networks) created by docker-compose.
To use a different project name, use the COMPOSE_PROJECT_NAME
environment
variable.
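For example, to bring up a second, independent instance of the stack under a different project name (the name `pycroft2` is arbitrary):

```bash
COMPOSE_PROJECT_NAME=pycroft2 docker-compose up -d
```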
The tag of the images created by docker-compose can be specified with the `TAG` environment variable, which defaults to `latest`, e.g.:

```bash
TAG=1.2.3 docker-compose -f docker-compose.prod.yml build
```

This will tag all generated images with the tag `1.2.3`.
A complete environment can be started by running:

```bash
docker-compose up -d
```

This will start the whole dev environment.
docker-compose will build the necessary images if they are not already present; it will not, however, automatically rebuild the images if the `Dockerfile`s or any files used by them are modified.
If you run this command for the first time, it might take a while, as a series of packages and images is downloaded, so grab a cup of tea and relax.
All services, except `base` (which is only used to build the `agdsn/pycroft-base` image), should now be marked as `Up` if you take a look at `docker-compose ps`. There you can also see which port forwardings have been set up (remember that the web port has been exposed!).
Because you started them in detached mode, you will not see what they print to stdout. You can inspect the output like this:
```bash
docker-compose logs                       # for all services
docker-compose logs dev-app               # for one service
docker-compose logs -f --tail=50 dev-app  # print the last 50 entries and follow the logs
```
The last command should tell you that the server spawned an instance at 0.0.0.0:5000 from inside the container.
But don't be too excited: Pycroft will fail after the login, since we still have to set up the database.
To start another environment, run docker-compose with the `-f` flag to specify a different compose file, e.g.:
```bash
docker-compose -f docker-compose.test.yml up -d
```
This would start the test environment.
You can (re-)build/pull a particular service/image (or all of them if no service is specified) by running:
```bash
docker-compose build --force-rm --pull [service]
```
In order to integrate the setup into PyCharm, make sure that you are using the Professional edition, because the Docker integration feature is only available there. Also make sure that you have updated to a recent version; there have been important bug fixes with regard to the Docker integration.
The dev and test environments should be added to PyCharm as project interpreters.
Go to “Settings” → “Project: Pycroft” → “Project Interpreter” → Gear icon → “Add remote” → “Docker Compose”.
Create a new server for your local machine (use the default settings for that),
if none exists yet.
Select the config file `docker-compose.dev.yml` in the project root, select the service `dev-app`, and type in the following path for the python interpreter: `/opt/pycroft/venv/bin/python`.
Repeat the same steps for the test environment defined in `docker-compose.test.yml`.
Save, and make sure the correct interpreter (dev, not test) is selected as default for the project (“Project settings” → “Project interpreter”). As a proof of concept, you can run a “Python Console” inside PyCharm.
A few run configurations are already included in the project's .idea
folder.
If you have created the project interpreters according to the above steps,
the appropriate interpreters should have been autoselected for each run configuration.
You can access databases with PyCharm if you are so inclined. First, you need to obtain the IP address of the database container. If you didn't change the project name, the following command will yield the IP address of the database development container:
```bash
docker inspect pycroft_dev-db_1 -f '{{ .NetworkSettings.Networks.pycroft_dev.IPAddress }}'
```
Make sure that the database container is started, show the database pane in PyCharm, and add a new data source. PyCharm may complain about missing database drivers; install any missing driver files directly through PyCharm or your distribution's package manager (whatever you prefer). The password is `password`.
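If you prefer the command line over PyCharm's database pane, you can also open a psql shell directly inside the running container:

```bash
docker-compose exec --user postgres dev-db psql -d pycroft
```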
For this section, double check that every container is up and running via `docker-compose ps`, and if necessary run `docker-compose up -d` again.
Pycroft needs a PostgreSQL database backend. The unit tests will generate the schema and data automatically, but usually you want to run your development instance against a recent copy of our current production database.
Importing the production database into Pycroft is a three-step process:
1. A regular dump is published in our internal GitLab. Clone this repository to your computer.
2. Copy the `pycroft.sql` file to the database container:

   ```bash
   docker cp ~/.../pycroft-data/pycroft.sql $(docker ps -aqf "name=pycroft_dev-db"):/pycroft.sql
   ```

3. Import the dump:

   ```bash
   docker-compose exec --user postgres dev-db psql -d pycroft -f /pycroft.sql
   ```
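To check that the import worked, you can list the tables of the freshly imported database:

```bash
docker-compose exec --user postgres dev-db psql -d pycroft -c '\dt'
```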
After all that, you should be able to log in to your Pycroft instance with the username `agdsn` at `localhost:5000`. All users have the password `password`.
Congratulations!
For the testing setup, there exists a separate docker-compose file:
```bash
# get the stack up and running
docker-compose -f docker-compose.test.yml up -d
# run all the tests
docker-compose -f docker-compose.test.yml run --rm test-app nosetests -v
# run only the frontend tests
docker-compose -f docker-compose.test.yml run --rm test-app nosetests -v tests.frontend
```
Pycroft uses Alembic to manage changes to its database schema. On startup Pycroft invokes Alembic to ensure that the database schema is up-to-date. Should Alembic detect database migrations that are not yet applied to the database, it will apply them automatically.
To get familiar with Alembic it is recommended to read the official tutorial.
Migrations are python modules stored under `pycroft/model/alembic/versions/`.
A new migration can be created by running:
```bash
docker-compose run --rm dev-app alembic revision -m "add test table"
```
Alembic also has the really convenient feature of autogenerating migrations by comparing the current state of the database against the table metadata of the application:
```bash
docker-compose run --rm dev-app alembic revision --autogenerate -m "add complex test table"
```
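Pycroft applies pending migrations automatically on startup, but you can also apply them manually, which can be handy during development:

```bash
docker-compose run --rm dev-app alembic upgrade head
```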
Due to laziness, every bash snippet below uses the alias:

```bash
alias drc=docker-compose
```
Re-install everything using yarn and re-run the webpack entrypoint:

```bash
drc run --rm dev-app shell yarn install
drc run --rm dev-app webpack
```
Reinstall the pip requirements:

```bash
drc run --rm dev-app pip install -r requirements.txt
```
Downgrade the database schema to a given revision:

```bash
drc run --rm dev-app alembic downgrade $hash
```
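To find the revision hash to pass as `$hash`, you can list the migration history:

```bash
drc run --rm dev-app alembic history
```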