Scripts and deployment configurations for deploying SCV2 software.

All containers should be run via Docker Compose; it is the preferred way to handle deployments, and using the individual deployment scripts is not recommended. A handful of profiles can be enabled/disabled to run all or only a subset of services. The profiles are listed below and are explained in the update script.
## Notes for deployment
- Ensure the following programs are installed: Docker >= 19.03.0, Docker-compose >= 3.8, Git >= 2.x.x
- Clone this repository
- Create and edit the `.env` file from the `.env.example` file in the root directory of this repository
  - Change the tags according to the release branches you want to run, replacing the value after the `=` sign, e.g. for Modatek: `SOCIAL_WEB_APP_TAG=release-modatek-milton`
- Run the update script
  - On Mac/Linux systems (or within WSL on Windows), run `./update.sh`
  - On Windows (e.g. PowerShell or similar), run `./update.ps1`
- Read the information and answer the prompts accordingly
## Instructions for deployments (installations and updates)
To install on a fresh machine, first ensure that the requisite programs are installed:
- Docker >= 19.03.0
- Docker-compose >= 3.8
- Git >= 2.x.x
- (optional) ffmpeg
Once these requirements are satisfied, proceed with the following steps:
- Choose a directory in which to keep a copy of this repository. The standard is `~/scv2/git_clones/`

  Mac/Linux/WSL:
  ```shell
  mkdir -p ~/scv2/git_clones
  cd ~/scv2/git_clones
  ```

  Windows:
  ```powershell
  mkdir ~/scv2/git_clones
  cd ~/scv2/git_clones
  ```
- Clone this repo (https://github.com/pacefactory/deployment-scripts.git); the command is the same on Mac/Linux/WSL and Windows:

  ```shell
  git clone https://github.com/pacefactory/deployment-scripts.git
  ```
- Move into the directory (same command on Mac/Linux/WSL and Windows):

  ```shell
  cd deployment-scripts
  ```
- Create and edit the `.env` file from the `.env.example` file in the root directory of this repository
  - Change the tags according to the release branches you want to run, replacing the value after the `=` sign, e.g. for Modatek: `SOCIAL_WEB_APP_TAG=release-modatek-milton`
- Run the update script

  Mac/Linux/WSL:
  ```shell
  ./update.sh
  ```

  Windows:
  ```powershell
  ./update.ps1
  ```

- To take the services offline, run the down script

  Mac/Linux/WSL:
  ```shell
  ./down.sh
  ```

  Windows:
  ```powershell
  ./down.ps1
  ```
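Taken together, a fresh install on Mac/Linux/WSL can be sketched end-to-end as below. The tag value is the Modatek example from above, and the `sed` edit is just one way to set it (GNU sed shown); editing `.env` by hand works just as well.

```shell
# End-to-end fresh-install sketch (Mac/Linux/WSL)
mkdir -p ~/scv2/git_clones
cd ~/scv2/git_clones
git clone https://github.com/pacefactory/deployment-scripts.git
cd deployment-scripts

# Create .env from the example, then point the tags at your release branches
cp .env.example .env
sed -i 's/^SOCIAL_WEB_APP_TAG=.*/SOCIAL_WEB_APP_TAG=release-modatek-milton/' .env

# Run the update script and answer the prompts; use ./down.sh to stop later
./update.sh
```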
One could build images and then use the scripts in this repo to bring the services online. An example process could look like:

- Pull source code for a given repo (will require privileged GitHub access). See below for the repository listing for all services.

  ```shell
  git clone https://github.com/pacefactory/scv2_dbserver.git
  ```

- Build the image. This will be different depending on the repo, but there will be a `build_image.sh` script present in the repo somewhere. You should then be able to see the new image with `docker images`
- Use the `run_container.sh` script in the appropriate subdirectory to bring the container online. IMPORTANT: make sure to overwrite the image name when prompted in the script, if needed
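As a concrete sketch for one service (using `scv2_dbserver`; the build script's location within each repo is not fixed, so the `find` step is illustrative and the final path is an assumption):

```shell
# Build a single service image from source
git clone https://github.com/pacefactory/scv2_dbserver.git
cd scv2_dbserver

# The build script's location varies by repo; locate it first
find . -name 'build_image.sh'

# Run it from wherever it was found (root of the repo assumed here)
./build_image.sh

# Verify the new image is visible
docker images
```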
**NOTE: THIS INFO IS OUTDATED. THE SINGLE-CONTAINER RUN SCRIPTS ARE NOT UP-TO-DATE. PROCEED WITH CAUTION.**

IMPORTANT: The single-container run scripts (`update_from_dockerhub.sh` and `run_container.sh`) are not configured to use Docker volumes, and volumes and bind mounts are not interchangeable. This can result in data loss if both docker-compose and the single-container scripts are used.
To deploy by bringing each container online separately (manually), you can pull an image from DockerHub and spin up a container using the `update_from_dockerhub.sh` script, e.g.

```shell
./scv2_dbserver/scripts/update_from_dockerhub.sh
```

Alternatively, if the image is already located on the machine, you can run a new container directly with `run_container.sh`, e.g.

```shell
./scv2_dbserver/scripts/run_container.sh
```

Performing this for all services (changing `scv2_dbserver` appropriately for each) will bring the entire suite online.
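Bringing the suite up this way can be sketched as a loop; the list of service directories below is an assumption (a subset for illustration), so adjust it to the subdirectories actually present in this repo:

```shell
# Pull and run each service from DockerHub in turn
# (the list of service directories is an assumption; adjust as needed)
for svc in scv2_dbserver scv2_realtime scv2_webgui; do
  "./${svc}/scripts/update_from_dockerhub.sh"
done
```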
Profiles are used when running any `docker compose` command. To enable one or more profiles, run:

```shell
docker compose --profile <profile_name> <command>
```
By default on new Ubuntu installations, `docker ...` commands cannot be run without sudo permissions. To change this, perform the following in a terminal:

- Create the `docker` group: `sudo groupadd docker`
- Add your user to the group: `sudo usermod -aG docker $USER`
- Log out/in of your session to see the changes, OR run: `newgrp docker`
- Verify you can run `docker ...` commands without sudo: `docker run hello-world`

For more info, see Docker Linux Post-install
If Docker is used without setting up non-root user access, data may be stored in the root home directory, `/root`. This may result in migration scripts not working properly and/or the illusion of missing data. Be sure to check this directory.
If using the WSL2 backend for Docker on Windows, resources need to be managed through the WSL container system. To configure the allocation of physical resources (CPU, memory, etc.), create a `.wslconfig` file at `%UserProfile%\.wslconfig`.

Example config file:

```
[wsl2]
kernel=C:\\temp\\myCustomKernel
memory=8GB # Limits VM memory in WSL 2 to 8 GB
processors=4 # Makes the WSL 2 VM use four virtual processors
```

For more info, see Windows WSL Config
Docker Desktop creates two WSL2 containers by default, `docker-desktop` and `docker-desktop-data`. The virtual disk images are stored by default in `%USERPROFILE%\AppData\Local\Docker\wsl\distro\ext4.vhdx` and `%USERPROFILE%\AppData\Local\Docker\wsl\data\ext4.vhdx`, respectively. Any Docker images, container file systems, and volumes are stored within these virtual disks.

To move the storage location (e.g. to another drive), you can recreate the WSL2 containers elsewhere while preserving any existing Docker data. Be sure to replace `C:\path\to\Docker` with the appropriate destination storage path. The process is as follows:
- Stop Docker Desktop (including in the system tray)
- Check the installed WSL containers and verify they are stopped
  - In PowerShell, run `wsl --list -v`
  - You should see output similar to that of the WSL Container List (below), with both containers 'Stopped' before proceeding
- Make sure the new directories exist in your destination storage path
  - The base directory, `C:\path\to\Docker` (create it if it does not exist)
  - The WSL subdirectory, `C:\path\to\Docker\wsl` (create it if it does not exist)
  - The data subdirectory, `C:\path\to\Docker\wsl\data` (create it if it does not exist)
  - The distro subdirectory, `C:\path\to\Docker\wsl\distro` (create it if it does not exist)
- Export, remove, and recreate the `docker-desktop` container
  - Export the existing container in PowerShell: `wsl --export docker-desktop "C:\path\to\Docker\wsl\distro\docker-desktop.tar"`
  - Remove the existing container in PowerShell: `wsl --unregister docker-desktop`
  - Import the container in PowerShell: `wsl --import docker-desktop "C:\path\to\Docker\wsl\distro" "C:\path\to\Docker\wsl\distro\docker-desktop.tar" --version 2`
- Export, remove, and recreate the `docker-desktop-data` container
  - Export the existing container in PowerShell: `wsl --export docker-desktop-data "C:\path\to\Docker\wsl\data\docker-desktop-data.tar"`
  - Remove the existing container in PowerShell: `wsl --unregister docker-desktop-data`
  - Import the container in PowerShell: `wsl --import docker-desktop-data "C:\path\to\Docker\wsl\data" "C:\path\to\Docker\wsl\data\docker-desktop-data.tar" --version 2`
- Start Docker Desktop again, and ensure there are no issues during Docker Engine startup
- If everything appears to be working, you may delete the exported container archives
  - Delete `docker-desktop` data in PowerShell: `rm C:\path\to\Docker\wsl\distro\docker-desktop.tar`
  - Delete `docker-desktop-data` data in PowerShell: `rm C:\path\to\Docker\wsl\data\docker-desktop-data.tar`

For more info, see Change WSL Docker Location and Docker WSL Volume Locations
WSL Container List:

```
  NAME                   STATE      VERSION
* docker-desktop         Stopped    2
  docker-desktop-data    Stopped    2
```
All services not listed below will always be brought online, regardless of the profiles used.

| Profile name | Services |
|---|---|
| social | social_video_server; social_web_app |
| ml | service_classifier |
| proc | service_processing |
| rdb | relational_dbserver |
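For example, to bring up the always-on services plus the `social` and `ml` profile services (profile names from the table above), you can either pass `--profile` repeatedly or use Compose's `COMPOSE_PROFILES` environment variable:

```shell
# Two equivalent ways to enable multiple profiles
docker compose --profile social --profile ml up -d
COMPOSE_PROFILES=social,ml docker compose up -d
```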
| Service | Repo |
|---|---|
| dbserver | https://github.com/pacefactory/scv2_dbserver.git |
| realtime | https://github.com/pacefactory/scv2_realtime.git |
| relational_dbserver | https://github.com/pacefactory/scv2_relational_dbserver.git |
| services_dtreeserver | https://github.com/pacefactory/scv2_services_dtreeserver.git |
| service_gifwrapper | https://github.com/pacefactory/scv2_services_gifwrapper.git |
| service_classifier | https://github.com/pacefactory/scv2_services_classifier.git |
| services_processing | https://github.com/pacefactory/scv2_services_processing.git |
| webgui | https://github.com/pacefactory/scv2_webgui.git |
| social_web_app | https://github.com/pacefactory/social_web_app.git |
| social_video_server | https://github.com/pacefactory/social_video_server.git |
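As a convenience, the whole table can be cloned in one loop (requires the privileged GitHub access noted earlier):

```shell
# Clone every service repository listed above
for repo in scv2_dbserver scv2_realtime scv2_relational_dbserver \
            scv2_services_dtreeserver scv2_services_gifwrapper \
            scv2_services_classifier scv2_services_processing \
            scv2_webgui social_web_app social_video_server; do
  git clone "https://github.com/pacefactory/${repo}.git"
done
```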
There are a few oddities in the repo:

- Container `webgui` is referred to as `safety-gui2-js` on DockerHub (and, by extension, in the image name too)