A project management platform where your community collaborates and gets stuff done.
The original codebase was developed by Animorph Co-operative under community engagement led by the consortium partners: Co-operation Ireland, Belfast Interface Project, University of Essex and Donegal Youth Service. This project was supported by the European Union’s PEACE IV Programme, managed by the Special EU Programmes Body (SEUPB).
This is a Django + Wagtail + Postgres + Redis + Celery stack. We use Docker and Docker Compose to set up a development environment.
First, we need to set up some environment variables:
- Run `cp .env.example .env`
- Edit `.env` with your settings and passwords
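The exact variable names live in `.env.example`; as a purely hypothetical sketch (the names below are assumptions, not the project's actual keys), the file holds values along these lines:

```shell
# Hypothetical .env sketch — check .env.example for the real variable names
DEBUG=True
# assumed name for the database password
POSTGRES_PASSWORD=change-me
```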
Continue with Docker section below.
This will start the stack in dev mode, though without frontend assets. After this, continue in the section below to set up styles and JavaScript.
Linux & MINGW64 on Windows:
# build containers
USER_ID=$(id -u) GROUP_ID=$(id -g) docker compose up --build
macOS:
# Mac has obfuscated groups for Docker, so we use the user ID
# for the Dockerfile group instead of the group ID
USER_ID=$(id -u) GROUP_ID=$(id -u) docker compose up --build
Windows (running Linux containers):
USER_ID=1000 GROUP_ID=1000 docker compose up --build
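For these variables to take effect, the compose file must forward them as build arguments. A minimal sketch of that plumbing, assuming the service is called `app` (check the project's actual `docker-compose.yml`):

```yaml
# docker-compose.yml sketch (assumed shape)
services:
  app:
    build:
      context: .
      args:
        USER_ID: ${USER_ID:-1000}    # falls back to 1000 when unset
        GROUP_ID: ${GROUP_ID:-1000}
```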
We use Vite for bundling our TypeScript.
Make sure you have Node.js and npm installed, then install the dependencies:
npm install
To run in dev mode:
npm run dev
(or `make watch`, which runs the same thing)
This will start a server that’s only serving static files. To see the app locally go to http://127.0.0.1:9000
To build the files run:
npm run build
It will build the assets into `sfs/vite-build`. Inside the build directory it will put:
- `manifest.json`, used to map TS source paths to the built JS files
- all the built `.js` files with their full path (+ a file hash)

In production, `vite_asset <path>` will use `manifest.json` to look up the path to the built `.js` file.
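For orientation, a Vite manifest entry generally has this shape (the source path and hash below are invented for illustration):

```json
{
  "src/main.ts": {
    "file": "assets/main-4f8c1b2e.js",
    "src": "src/main.ts",
    "isEntry": true
  }
}
```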
This directory is registered as a Django static dir, so when `collectstatic` is run, it will include those resources.
The built files will be served as normal Django static assets.
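In a template, usage of the tag looks roughly like this, assuming the tag comes from the django-vite package and that `src/main.ts` is an entry point (both are assumptions, not confirmed by this README):

```html
{% load django_vite %}
{# resolves to the hashed file via manifest.json in production #}
{% vite_asset 'src/main.ts' %}
```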
Enter shell:
docker compose exec app sh
Run Django-related administrative commands:
docker compose exec app django-admin startapp healerapp
# OR
docker compose exec app python3 manage.py startapp healerapp
Create superuser:
docker compose exec app python3 manage.py createsuperuser
Collect static:
docker compose exec app python3 manage.py collectstatic
Migrations:
docker compose exec app python3 manage.py makemigrations && docker compose exec app python3 manage.py migrate
Running Tests:
# all tests
docker compose exec app pytest tests
# a specific one
docker compose exec app pytest tests/test_account.py
# add `-s` flag to display output
docker compose exec app pytest -s tests/test_account.py
Python black code formatting:
# inside the running container
docker compose exec -it app make format
# or as a one-off container
docker compose run --rm app make format
Python code linting:
docker compose exec -it app make lint
Fill a development database with demo content:
# load avatar data
docker compose exec app python3 manage.py loaddevdata autoupload/avatars.json
# load all data: avatars, areas, resources, organisations, users
docker compose exec app python3 manage.py loaddevdata autoupload/devdata.json
Have a look at `autoupload/devdata.json` for some user accounts you can log in as.
Deploy your own instance of Shared Futures in 7 simple steps!
We use Ansible to provision production machines. To set up:
1. Get a Debian 11 (bullseye) machine and its IP address.
2. Get a domain name and configure its DNS with the machine’s IP address.
3. Install Ansible:
cd ansible/
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
4. Run `cp envs/example.envrc envs/production.envrc` while inside the `ansible/` directory.
5. Edit `envs/production.envrc` with the IP, domain name, and other settings.
6. Verify that acquiring SSH access has worked:
source envs/production.envrc
ansible all -m ping
ansible-inventory --list
7. Run the Ansible playbooks:
ansible-playbook playbooks/base.yaml --verbose
ansible-playbook playbooks/production.yaml --verbose
You should now be able to access your Shared Futures instance at your domain name!
Think of "rivers" as work projects, "swimmers" as project members, and "springs" as areas where work can take place.
Each river is at a specific stage (envision, plan, act, reflect) at any given time. Rivers progress directionally from one stage to another.
"Resources" are knowledge resources as links. "Salmon" is our helper bot.
- `apps/`: includes all Django apps
- `search/`: Wagtail search views
- `sfs/`: Django project root
- `templates/`: includes all Django HTML templates used across all Django apps
- `tests/`: Django tests using pytest
When adding a Celery task, its container must be restarted.
We need to ensure that the UID and GID from the host system are mapped onto the container user. Containers carry over user and group IDs because they share one kernel with the host, so the IDs of the host user and group must match those of the container user. This behaviour is different on macOS, which automatically maps container file ownership to that of the host.
References:
- https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
- https://blog.dbi-services.com/how-uid-mapping-works-in-docker-containers/
- https://stackoverflow.com/a/61009413/5631104
The UID and GID mapping changes depending on the environment, and we don't want to edit the Dockerfile each time. It also appears challenging/dangerous to include shell expansions in environment variables (see Ref 1). Hence we pass the user variables via the CLI when building the containers:
USER_ID=$(id -u) GROUP_ID=$(id -g) docker compose up --build
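On the Dockerfile side, the build consumes these values roughly as follows. This is a sketch assuming an Alpine-style base image and arg/user names of my choosing; the project's actual Dockerfile may differ:

```dockerfile
# Dockerfile sketch (assumed arg and user names)
ARG USER_ID=1000
ARG GROUP_ID=1000
# Create a group and user matching the host IDs so files created
# inside the container are owned by the host user on the host.
RUN addgroup -g "${GROUP_ID}" app && \
    adduser -D -u "${USER_ID}" -G app app
USER app
```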
Tailwind is included in the Vite config, via PostCSS.
Note: class names passed dynamically from views are not picked up by Tailwind, which extracts class names statically at build time. Even if a class is known to `tailwind.config.js`, it will not be generated unless it appears literally in a scanned file at build time, so you cannot refer to it at runtime.
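If a dynamically constructed class genuinely has to be available at runtime, Tailwind's `safelist` option forces it to be generated even when no template uses it literally. A sketch (the `content` globs are assumptions about this project's layout):

```javascript
// tailwind.config.js sketch — content globs here are assumed
module.exports = {
  content: ["./templates/**/*.html", "./sfs/**/*.ts"],
  safelist: [
    "bg-red-500",                        // always generated
    { pattern: /^text-(sm|base|lg)$/ },  // regex patterns also work
  ],
};
```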