This repo dockerizes OpenG2P ERP.
“OpenG2P” is a set of open-source digital building blocks that help large-scale cash transfer programs digitize key cogs in their delivery chain: 1) beneficiary targeting and enrollment, 2) beneficiary list management, 3) payment digitization, and 4) recourse.
The CRM provides an interface with a rich set of tools for managing beneficiaries, payments, complaints, and more. It is built on the OCB port of Odoo.
git clone git@github.com:OpenG2P/openg2p-crm-docker.git
cd openg2p-crm-docker
To boot this environment, these files must be present before you run docker-compose:
- `./.docker/odoo.env` must define `ADMIN_PASSWORD`
- `./.docker/db-access.env` must define `PGPASSWORD`
- `./.docker/db-creation.env` must define `POSTGRES_PASSWORD` (must be equal to `PGPASSWORD` above)
- `./.docker/smtp.env` (OPTIONAL) must define `RELAY_PASSWORD` (the password to access the real SMTP relay)
- `./.docker/backup.env` (OPTIONAL) must define `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` (obtained from your S3 provider) and `PASSPHRASE` (to encrypt backup archives). Only needed if you are using backups
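As a sketch of what the required (non-optional) secret files might look like, assuming the layout above — every value below is a placeholder, not a real credential:

```shell
# ./.docker/odoo.env
ADMIN_PASSWORD=choose-a-strong-admin-password

# ./.docker/db-access.env
PGPASSWORD=choose-a-strong-db-password

# ./.docker/db-creation.env  (must match PGPASSWORD above)
POSTGRES_PASSWORD=choose-a-strong-db-password
```

Keep these files out of version control; they hold the credentials for your deployment.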
Use the provided `dot-docker-example` folder and set your passwords as explained above:
mv dot-docker-example .docker
Once the secrets are in place, start the reverse proxy; this boots Traefik:
docker-compose -p inverseproxy -f inverseproxy-none-ssl.yaml up -d
@TODO - Need to document how the SSL variant can be used
Then start the OpenG2P stack:
docker-compose -f prod.yaml up -d
If installing for the first time, you will need to initialize the database:
docker-compose run --rm odoo odoo --stop-after-init -i openg2p
Point your browser to http://domain and log in with the default credentials:
username: admin
password: admin
Change your password immediately after login!
Backups are only available in the production environment. They are provided by tecnativa/duplicity:postgres-s3. The structure of the backed-up folder is:
├── prod.sql
└── odoo/
├── addons/
└── filestore/
└── prod/
├── ...
└── ...
To make a backup immediately, execute the following command:
# Executes all jobs scheduled for daily run.
# With default configuration it's equal to making full backup
docker-compose exec backup /etc/periodic/daily/jobrunner
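To confirm the job actually ran, one option is to inspect the backup container's logs (a sketch; the exact log format depends on the duplicity image):

```shell
# Show the last 50 log lines of the backup service
docker-compose logs --tail 50 backup
```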
To restore backup:
# stop odoo if it's running
docker-compose stop odoo
# start backup and db
docker-compose up -d backup
# switch to some version
docker-compose exec backup restore --time TIME_IN_BACKUP_NAME --force
# ⚠️ DELETE the PRODUCTION database (left commented out on purpose)
# docker-compose exec backup dropdb
# create new empty database
docker-compose exec backup createdb
# restore database
docker-compose exec backup sh -c 'psql -f $SRC/$PGDATABASE.sql'
# start odoo
docker-compose up -d
To pull in updates to the CRM, run:
docker-compose -f prod.yaml build --pull # Updates your image
docker-compose -f prod.yaml run --rm odoo odoo --stop-after-init -u base # Updates addons
docker-compose -f prod.yaml up -d
If installing for development, you will need the following tools installed:
- [copier][] v3.0.6 or newer
- git 2.24 or newer
- invoke installed in Python 3.6+ (and the binary must be called `invoke` — beware if your distro installs it as `invoke3` or similar)
- pre-commit
- python 3.6+
Install non-python apps with your distro's recommended package manager. The recommended way to install Python CLI apps is pipx:
python3 -m pip install --user pipx
pipx install copier
pipx install invoke
pipx install pre-commit
pipx ensurepath
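Before proceeding, it can help to confirm the minimum versions listed above. The helper below is not part of this repo; it is a small sketch that assumes GNU `sort -V` is available for version ordering:

```shell
# Sketch only: check that a version string meets a minimum.
version_ge() {
  # Succeed if $1 >= $2 in version order (relies on `sort -V`)
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: compare an installed git version against the 2.24 minimum
version_ge "2.30.1" "2.24" && echo "git 2.30.1 is new enough"
version_ge "$(python3 --version | awk '{print $2}')" "3.6" \
  && echo "python3 is new enough"
```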
Clone or download the repository to your target machine:
git clone git@github.com:OpenG2P/openg2p-crm-docker.git
Get the OpenG2P CRM code with:
invoke git-aggregate
invoke img-build --pull
Initialize a database:
docker-compose run --rm odoo odoo --stop-after-init -i openg2p
The above will use devel.yaml by default; if installing for production, use prod.yaml instead.
Start OpenG2P with:
invoke start
List other tasks shipped with this project:
invoke --list
Clean out the project if you invoked git-aggregate or used setup-devel.yaml. Run this before git add:
git clean -ffd
To browse OpenG2P go to http://localhost:12069 and log in with the default credentials:
username: admin
password: admin
git pull
invoke git-aggregate
invoke img-build --pull
docker-compose run --rm odoo odoo --stop-after-init -u base # Updates your database
docker-compose up -d
We use MailHog to provide a fake SMTP server that intercepts all mail sent by OpenG2P and displays a simple interface that lets you see and debug all that mail comfortably, including headers sent, attachments, etc.
- For [development][], it's in http://localhost:8025
- For [testing][], it's in http://$DOMAIN_TEST/smtpfake/
- For [production][], it's not used.
All environments are configured by default to use the bundled SMTP relay, through these environment variables:

- `SMTP_SERVER`
- `SMTP_PORT`
- `SMTP_USER`
- `SMTP_PASSWORD`
- `SMTP_SSL`
- `EMAIL_FROM`

For them to be useful, you need to remove any `ir.mail_server` records in your database.
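A hypothetical `smtp.env`-style override using these variables might look like this — every host, user, and password below is a placeholder, not a real endpoint or credential:

```shell
# Placeholder SMTP relay settings (example values only)
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_USER=openg2p@example.com
SMTP_PASSWORD=change-me
SMTP_SSL=True
EMAIL_FROM=openg2p@example.com
```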
The Docker network is in `--internal` mode, which means it has no access to the Internet. This feature protects you in cases where a [production][] database is restored and OpenG2P tries to connect to SMTP/IMAP/POP3 servers to send or receive emails, and also when you are using connectors, mail trackers, or any API sync/calls. If you still need public access, set `internal: false` in the environment file, detach all containers from that network, remove the network, reattach all containers to it, and possibly restart them. You can also just do:
docker-compose down
invoke start
Usually a better option is whitelisting.
`wdb` is one of the greatest Python debuggers available, and even more so for Docker-based development, so here you have it preinstalled. To use it, write this in any Python script:
import wdb
wdb.set_trace()
It's available by default on the [development][] environment, where you can browse http://localhost:1984 to use it.
A good rule of thumb is to test in testing before uploading to production, so this environment tries to imitate the [production][] one in everything, while removing possible pollution points:
- It has a fake `smtp` service based on MailHog, just like development.
- It has no `backup` service.
- It is isolated.
To use it, you need to add secrets files just like for production, although secrets for smtp and backup containers are not needed because those don't exist here. Also, start the global inverse proxy before running the test environment.
Test it in your machine with:
docker-compose -f test.yaml up -d
Since the testing environment is network-isolated, this can cause some deadlocks or big timeouts in code chunks that are not ready for such a situation. OpenG2P happens to have some of them.
The [development][] environment includes the default recommended whitelist proxies, but for [testing][], it is recommended to have a separate docker compose project running alongside on the same server that provides a global `whitelist_default` network where all whitelist proxies exist. This is a better practice for a testing environment where many services might coexist, because it will let you save lots of processing power and IP addresses.
Recommended `globalwhitelist/docker-compose.yaml` file:
    version: "2.1"

    networks:
      public:
        driver_opts:
          encrypted: 1
      shared:
        internal: true
        driver_opts:
          encrypted: 1

    services:
      cdnjs_cloudflare_com:
        image: tecnativa/whitelist
        restart: unless-stopped
        networks:
          public:
          shared:
            aliases:
              - "cdnjs.cloudflare.com"
        environment:
          TARGET: "cdnjs.cloudflare.com"
          PRE_RESOLVE: 1

      fonts_googleapis_com:
        image: tecnativa/whitelist
        restart: unless-stopped
        networks:
          public:
          shared:
            aliases:
              - "fonts.googleapis.com"
        environment:
          TARGET: "fonts.googleapis.com"
          PRE_RESOLVE: 1

      fonts_gstatic_com:
        image: tecnativa/whitelist
        restart: unless-stopped
        networks:
          public:
          shared:
            aliases:
              - "fonts.gstatic.com"
        environment:
          TARGET: "fonts.gstatic.com"
          PRE_RESOLVE: 1

      www_google_com:
        image: tecnativa/whitelist
        restart: unless-stopped
        networks:
          public:
          shared:
            aliases:
              - "www.google.com"
        environment:
          TARGET: "www.google.com"
          PRE_RESOLVE: 1

      www_gravatar_com:
        image: tecnativa/whitelist
        restart: unless-stopped
        networks:
          public:
          shared:
            aliases:
              - "www.gravatar.com"
        environment:
          TARGET: "www.gravatar.com"
          PRE_RESOLVE: 1
At times we find ourselves having to blow away Docker and start from scratch. Only use the command below if you know what you are doing, as it blows away all the volumes and images on your Docker engine.
./scripts/docker_blow_away.sh
@TODO (limit the above to only openg2p crm)
In the examples below we will skip the `-f <environment>.yaml` part and assume you know which environment you want to use. Also, we recommend using the `run` subcommand to create a new container with the same settings and volumes. Sometimes you may prefer to use `exec` instead, to execute an arbitrary command in a running container.
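The difference can be sketched as follows (the `odoo` service name comes from this project's compose files; the inspected commands are just examples):

```shell
# `run` creates a throwaway container with the same settings and volumes;
# --rm removes it once the command exits:
docker-compose run --rm odoo odoo shell

# `exec` runs a command inside a container that is already running:
docker-compose exec odoo python3 --version
```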
docker-compose run --rm odoo psql
You will need to restart it whenever any Python code changes, so to do that:
docker-compose restart odoo
In development mode Odoo restarts by itself thanks to the `--dev=reload` option.
modules=addon1,addon2
# Install their dependencies first
docker-compose run --rm odoo addons init --dependencies $modules
# Test them at install
docker-compose run --rm odoo addons init --test $modules
# Test them again at update
docker-compose run --rm odoo addons update --test $modules
* Note: This replaces the old deprecated `unittest` script.
For all services in the environment:
docker-compose logs -f --tail 10
Only OpenG2P's:
docker-compose logs -f --tail 10 odoo
# Install addons
docker-compose run --rm odoo odoo -i addon1,addon2 --stop-after-init
# Update addons
docker-compose run --rm odoo odoo -u addon1,addon2 --stop-after-init
Just run:
docker-compose run --rm odoo click-odoo-update --watcher-max-seconds 30
This script is part of [click-odoo-contrib][]; check it for more details.
* Note: `--watcher-max-seconds` is available because we ship a patched version. Check that PR for docs.
* Note: This replaces the old deprecated `autoupdate` script.
docker-compose run --rm odoo pot addon1[,addon2]
Now copy the relevant parts to your `addon1.pot` file.
docker-compose run --rm odoo odoo shell
docker-compose run --rm -p 127.0.0.1:$SomeFreePort:8069 odoo
Then open http://localhost:$SomeFreePort.
This project is a Doodba scaffolding; check the upstream Doodba docs for more details.
Doodba scaffolding is maintained by:
Also, special thanks to our dear community contributors.