
noizwaves.cloud

A self-hosted cloud

Requirements

  1. Install Ansible locally via:
    • brew install ansible ansible-lint (macOS)
    • pacman -S ansible (Arch Linux)
  2. Install Ansible roles via:
    1. ansible-galaxy install willshersystems.sshd

Setup

  1. ansible-playbook -i inventory.yaml playbook.yaml
  2. SSH into nodes and manage individual applications directly

Adding new machines

For each new machine:

  1. Install Ubuntu LTS
    1. Username: cloud
    2. Enable SSH server
    3. Import SSH keys from GitHub
  2. Bootstrap as root using bash <(curl https://raw.githubusercontent.com/noizwaves/noizwaves.cloud/main/ansible/bootstrap.sh)
  3. Connect to Tailscale
  4. Run the Ansible playbook
  5. Populate .envrc
  6. Setup crontabs
  7. Add to the inventory and ssh config
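Step 7 above can be scripted. This is a hypothetical sketch for the SSH config half; the host alias `newnode` and the MagicDNS hostname are assumptions, not values from this repo:

```shell
# Append a new machine to ~/.ssh/config (alias and hostname are placeholders)
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host newnode
    HostName newnode.tailnet-name.ts.net
    User cloud
EOF
```

The matching inventory entry still needs to be added to inventory.yaml by hand.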

Automation

  1. $ crontab -e
  2. Enable automated daily 1am hot backups with this cron config:
SHELL=/bin/bash
MAILTO=""
HOME=/home/cloud

0 1 * * * ./cloud-config/backups/hot/backup.sh >cloud-config/hot_backup.log 2>&1
  3. Enable DNS healthchecks with this cron config:
*/2 * * * * host www.google.com 127.0.0.1 && <healthchecks.io health check>
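Rather than editing interactively with crontab -e, an entry can be appended idempotently. A sketch, using /tmp/crontab.txt as a stand-in for the output of crontab -l (the healthchecks.io URL placeholder is left as-is):

```shell
# Append the healthcheck entry only if it is not already present
CRON_LINE='*/2 * * * * host www.google.com 127.0.0.1 && <healthchecks.io health check>'
CRON_FILE=/tmp/crontab.txt
touch "$CRON_FILE"
grep -qF "$CRON_LINE" "$CRON_FILE" || printf '%s\n' "$CRON_LINE" >> "$CRON_FILE"
# Running it a second time is a no-op
grep -qF "$CRON_LINE" "$CRON_FILE" || printf '%s\n' "$CRON_LINE" >> "$CRON_FILE"
```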

K3s (Kubernetes)

See here

Traefik

  1. $ cd traefik
  2. $ mkdir -p ~/cloud-data/traefik/letsencrypt
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d
  6. Open Traefik dashboard at https://traefik.odroid.noizwaves.cloud

Watchtower

  1. $ cd watchtower
  2. $ cp .env.tmpl .env
  3. Input appropriate values
  4. $ docker-compose up -d

Pi-hole

  1. $ cd pihole
  2. $ mkdir -p ~/cloud-data/pihole/config ~/cloud-data/pihole/dnsmasq
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d

Authelia

  1. $ cd authelia
  2. $ mkdir -p ~/cloud-data/authelia/mariadb ~/cloud-data/authelia/redis
  3. Create a user database
    1. $ cp config/users_database.yml.tmpl config/users_database.yml
    2. Follow steps in comments to create users
  4. Create configuration
    1. $ cp config/configuration.yml.tmpl config/configuration.yml
    2. Replace all ${CLOUD_DOMAIN} with desired value
  5. Create secrets
    1. $ mkdir -p .secrets
    2. $ openssl rand -base64 32 > .secrets/jwt.txt
    3. $ openssl rand -base64 32 > .secrets/session.txt
    4. $ openssl rand -base64 32 > .secrets/mysql_password.txt
    5. $ cp .env.tmpl .env
    6. Fill in appropriate values
  6. $ docker-compose up -d
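Steps 5.2–5.4 can be collapsed into one loop. A sketch, run from within the authelia directory; the three secret names come from the steps above:

```shell
# Generate the three Authelia secrets in one pass
mkdir -p .secrets
for name in jwt session mysql_password; do
  openssl rand -base64 32 > ".secrets/${name}.txt"
done
ls .secrets
```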

Minio

  1. $ cd minio
  2. $ mkdir -p ~/cloud-data/minio/data
  3. $ cp .env.tmpl .env
  4. Fill in appropriate values
  5. $ docker-compose up -d
  6. Navigate to Minio Console
  7. Use the S3-compatible API with:
    • Bucket Name: s3://BUCKET_NAME
    • Endpoint URL: https://s3.dell.noizwaves.cloud
    • Region Name: dell

InfluxDB

  1. $ cd influxdb
  2. $ mkdir -p ~/cloud-data/influxdb/data
  3. $ docker-compose up -d
  4. Visit influxdb and set up data collection

Bitwarden

  1. $ cd bitwarden
  2. $ mkdir -p ~/cloud-data/bitwarden/data
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. Change value of SIGNUPS_ALLOWED to 'true'
  6. $ docker-compose up -d
  7. Create user
  8. Change value of SIGNUPS_ALLOWED to 'false'
  9. $ docker-compose up -d
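Steps 5 and 8 both mean editing SIGNUPS_ALLOWED in .env; that flip can be done with GNU sed. A sketch, using /tmp/demo.env as a stand-in for bitwarden/.env:

```shell
# Stand-in for bitwarden/.env; in practice point sed at .env directly
printf 'SIGNUPS_ALLOWED=true\n' > /tmp/demo.env
# Flip signups off once the first account exists (GNU sed -i syntax)
sed -i 's/^SIGNUPS_ALLOWED=.*/SIGNUPS_ALLOWED=false/' /tmp/demo.env
cat /tmp/demo.env
```

Follow with docker-compose up -d so the container picks up the new value.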

Speedtest

  1. $ cd speedtest
  2. $ docker-compose up -d
  3. Visit Speedtest

UpSnap

  1. $ cd upsnap
  2. $ mkdir -p ~/cloud-data/upsnap/data
  3. $ docker-compose up -d
  4. Visit UpSnap

Seafile

  1. $ cd seafile
  2. $ cp .env.tmpl .env
  3. Input appropriate values
  4. $ docker-compose up -d
  5. Log in, navigate to System Admin > Settings and update SERVICE_URL & FILE_SERVER_ROOT
  6. Edit config files under $SEAFILE_DATA, where SEAFILE_DATA=$(docker volume inspect seafile_data --format '{{ .Mountpoint }}')
    1. Edit the value of FILE_SERVER_ROOT in $SEAFILE_DATA/seafile/conf/seahub_settings.py
    2. Edit the value of enabled in $SEAFILE_DATA/seafile/conf/seafdav.conf
    3. Edit $SEAFILE_DATA/seafile/conf/ccnet.conf
  7. Restart memcached $ docker-compose restart memcached

Nextcloud

  1. $ cd nextcloud
  2. $ mkdir -p ~/cloud-data/nextcloud/data ~/cloud-data/nextcloud/config ~/cloud-data/nextcloud/mariadb
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d
  6. Open Nextcloud
  7. Configure application to use MySQL with the following settings:
    1. Database name: nextcloud
    2. Username: nextcloud
    3. Password: value from .env
    4. Host: mariadb
  8. Edit /config/www/nextcloud/config/config.php
    1. Add trusted_proxies array that includes web network CIDR ($ docker network inspect web)

Resilio Sync

  1. $ cd resilio-sync
  2. $ mkdir -p ~/cloud-data/resilio-sync/data ~/cloud-data/resilio-sync/config
  3. $ docker-compose up -d
  4. Open Resilio Sync
  5. Configure application

FreshRSS

  1. $ cd freshrss
  2. $ mkdir -p ~/cloud-data/freshrss/config ~/cloud-data/freshrss/mariadb
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d
  6. Open FreshRSS
  7. Configure application
    1. Database type: MySQL
    2. Host: mariadb
    3. Database username: freshrss
    4. Database password: <value from .env file>
    5. Database: freshrss
    6. Table prefix: `` (empty string)

Standard Notes

  1. $ cd ~/cloud-config/standardnotes
  2. $ mkdir -p ~/cloud-data/standardnotes/mariadb
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d
  6. Open Standard Notes web
  7. Create account
  8. Install extensions (via Extensions > Import Extension > url > Enter > Install):

Fotos

  1. $ cd fotos

  2. $ mkdir -p ~/cloud-data/fotos/thumbnails/v2 ~/cloud-data/fotos/normals

  3. Create WebDAV credentials via $ htpasswd -c credentials.list <username> and then enter the password.

  4. $ docker-compose up -d

  5. $ cd .../fotos-lauren

  6. $ mkdir -p ~/cloud-data/fotos-lauren/thumbnails/v2 ~/cloud-data/fotos-lauren/normals

  7. Create WebDAV credentials via $ htpasswd -c credentials.list <username> and then enter the password.

  8. $ docker-compose up -d

Firefly III

  1. $ cd firefly-iii
  2. $ mkdir -p ~/cloud-data/firefly-iii/export ~/cloud-data/firefly-iii/upload ~/cloud-data/firefly-iii/mariadb
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d
  6. Open Firefly III

PhotoStructure

  1. $ cd photostructure
  2. $ mkdir -p ~/cloud-data/photostructure/library ~/cloud-data/photostructure/tmp ~/cloud-data/photostructure/config ~/cloud-data/photostructure/logs
  3. $ docker-compose up -d
  4. Open Photostructure

Photoprism

  1. $ cd photoprism
  2. $ mkdir -p ~/cloud-data/photoprism/storage
  3. $ docker-compose up -d
  4. Open Photoprism

Tandoor

  1. $ cd tandoor
  2. $ mkdir -p ~/cloud-data/tandoor/postgres ~/cloud-data/tandoor/staticfiles ~/cloud-data/tandoor/mediafiles
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose up -d
  6. Open Tandoor

Plex

  1. $ cd plex
  2. $ mkdir -p ~/cloud-data/plex/config ~/cloud-data/plex/data ~/cloud-data/plex/transcode
  3. Generate a claim
  4. $ docker-compose up -d
  5. Open Plex

AV1 Direct Play to AppleTV

This moves AV1 transcoding from the server onto the AppleTV.

  1. $ curl -Ls -o ~/cloud-data/plex/config/Library/Application\ Support/Plex\ Media\ Server/Profiles/tvOS.xml https://raw.githubusercontent.com/currifi/plex_av1_tvos/main/tvOS.xml
  2. $ docker-compose restart

Registry (Docker container registry)

  1. $ cd registry
  2. $ mkdir -p ~/cloud-data/registry/data
  3. $ docker compose up -d
  4. $ docker login registry.noizwaves.cloud

Focalboard

  1. $ cd focalboard
  2. $ mkdir -p ~/cloud-data/focalboard/files
  3. $ touch ~/cloud-data/focalboard/focalboard.db
  4. $ cp config.json.tmpl ~/cloud-data/focalboard/config.json
  5. Input appropriate values
  6. $ docker-compose up -d
  7. Open Focalboard

Filebrowser

  1. $ cd filebrowser
  2. $ mkdir -p ~/cloud-data/filebrowser
  3. $ touch ~/cloud-data/filebrowser/database.db
  4. $ cp filebrowser.json.tmpl ~/cloud-data/filebrowser/filebrowser.json
  5. Input appropriate values
  6. Initialize configuration by running
    $ docker run --rm \
        -v /home/cloud/cloud-data/filebrowser/filebrowser.json:/.filebrowser.json \
        -v /home/cloud/cloud-data/filebrowser/database.db:/database.db \
        filebrowser/filebrowser \
        config init
    
  7. Switch to Proxy Header based authentication method by running
    $ docker run --rm \
        -v /home/cloud/cloud-data/filebrowser/filebrowser.json:/.filebrowser.json \
        -v /home/cloud/cloud-data/filebrowser/database.db:/database.db \
        filebrowser/filebrowser \
        config set --auth.method=proxy --auth.header=Remote-User
    
  8. Create admin users by running
    $ docker run --rm \
        -v /home/cloud/cloud-data/filebrowser/filebrowser.json:/.filebrowser.json \
        -v /home/cloud/cloud-data/filebrowser/database.db:/database.db \
        filebrowser/filebrowser \
        users add $USERNAME $PASSWORD --perm.admin=true --perm.execute=false
    
  9. $ docker-compose up -d
  10. Open Filebrowser
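Steps 6–8 repeat the same docker run scaffolding; a small wrapper function keeps the volume mounts in one place. A sketch (the function name fb is an invention; paths are the ones from the steps above):

```shell
# Wrapper around the repeated `docker run` from steps 6-8
fb() {
  docker run --rm \
    -v /home/cloud/cloud-data/filebrowser/filebrowser.json:/.filebrowser.json \
    -v /home/cloud/cloud-data/filebrowser/database.db:/database.db \
    filebrowser/filebrowser "$@"
}
# Usage (commented out here; requires Docker):
# fb config init
# fb config set --auth.method=proxy --auth.header=Remote-User
# fb users add "$USERNAME" "$PASSWORD" --perm.admin=true --perm.execute=false
```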

Adguard Home

For private network DNS resolution

  1. $ cd adguard
  2. $ mkdir -p ~/cloud-data/adguard/work ~/cloud-data/adguard/conf ~/cloud-data/adguard/tailscale
  3. $ docker-compose up -d
  4. Open Adguard Home and set it up

Running

  1. $ cd running
  2. $ docker-compose up -d
  3. Open Running

Vikunja

  1. $ cd vikunja
  2. $ mkdir -p ~/cloud-data/vikunja/files ~/cloud-data/vikunja/mariadb
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. Clone source code
    1. $ mkdir ~/workspace
    2. $ git clone https://kolaente.dev/vikunja/api.git ~/workspace/vikunja-api
    3. $ git clone https://kolaente.dev/vikunja/frontend.git ~/workspace/vikunja-frontend
  6. $ docker-compose up -d
  7. Open Vikunja

Syncthing

  1. $ cd syncthing
  2. $ mkdir -p ~/cloud-data/syncthing/config
  3. $ docker-compose up -d
  4. Open Syncthing

Gitea

  1. $ cd gitea
  2. $ mkdir -p ~/cloud-data/gitea/data
  3. $ docker-compose up -d
  4. Open Gitea and complete initialization
  5. Update configuration
    1. webhook.ALLOWED_HOST_LIST to *.noizwaves.cloud

Drone

  1. $ cd drone
  2. $ mkdir -p ~/cloud-data/drone/data
  3. $ cp .env.tmpl .env
  4. Update values in .env
  5. $ docker-compose up -d
  6. Open Drone and complete installation

Trilio

  1. $ cd trilio
  2. $ mkdir -p ~/cloud-data/trilio
  3. $ docker-compose up -d
  4. Open Trilio

Matrix (Synapse)

  1. $ cd matrix
  2. $ mkdir -p ~/cloud-data/matrix/data ~/cloud-data/matrix/postgres ~/cloud-data/matrix/telegram
  3. $ cp .env.tmpl .env
  4. Input appropriate values
  5. $ docker-compose run --rm -e SYNAPSE_SERVER_NAME=matrix.noizwaves.cloud -e SYNAPSE_REPORT_STATS=no synapse generate
  6. Edit Synapse config using $ vim ~/cloud-data/matrix/data/homeserver.yaml
  7. Generate Telegram config using $ docker-compose run --rm telegram
  8. Edit Telegram config using $ vim ~/cloud-data/matrix/telegram/config.yaml
  9. Generate Telegram appservice registration using $ docker-compose run --rm telegram
  10. $ docker-compose up -d
  11. Register users by running $ docker-compose exec synapse register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008
  12. Open Synapse

iMessage Bridge

Based upon the instructions.

Prepare synapse on server:

  1. $ cd matrix
  2. $ mkdir -p ~/cloud-data/matrix/imessage plugins
  3. $ wget -O plugins/shared_secret_authenticator.py https://raw.githubusercontent.com/devture/matrix-synapse-shared-secret-auth/master/shared_secret_authenticator.py
  4. Edit ~/cloud-data/matrix/data/homeserver.yaml and add a new item to password_providers:
    - module: "shared_secret_authenticator.SharedSecretAuthenticator"
      config:
        sharedSecret: "${SHARED_SECRET_AUTH_SECRET}"
    Where:
    • SHARED_SECRET_AUTH_SECRET = $ pwgen -s 128 1
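
The document generates SHARED_SECRET_AUTH_SECRET with pwgen. Where pwgen is not installed, openssl produces a secret of the same 128-character length; a stand-in, not the documented command:

```shell
# Equivalent of `pwgen -s 128 1` using openssl: 64 random bytes -> 128 hex chars
SHARED_SECRET_AUTH_SECRET=$(openssl rand -hex 64)
echo "${#SHARED_SECRET_AUTH_SECRET}"   # prints 128
```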

Prepare bridge on mac:

  1. Identify mac to use to run bridge and setup iCloud (Messages and Contacts)
  2. Download latest release of mautrix-imessage to mac
  3. Extract to a folder
  4. $ cp example-config.yaml config.yaml and edit values:
    • homeserver.address to https://matrix.noizwaves.cloud
    • homeserver.websocket_proxy to wss://matrix-wsproxy.noizwaves.cloud
    • homeserver.domain to noizwaves.cloud
    • bridge.user to @adam:noizwaves.cloud
    • bridge.login_shared_secret to ${SHARED_SECRET_AUTH_SECRET}
  5. $ ./mautrix-imessage -g
  6. Ensure that config.yaml contains appservice.as_token and appservice.hs_token from registration.yaml
  7. Copy registration.yaml from mac to ~/cloud-data/matrix/imessage/registration.yaml on server

Prepare wsproxy on server:

  1. $ cp .env_wsproxy.tmpl .env_wsproxy
  2. Input appropriate values (from ~/cloud-data/matrix/imessage/registration.yaml)
  3. Ensure that the ~/cloud-data/matrix/imessage:/bridges/imessage volume mount is present for synapse
  4. $ docker-compose up -d synapse wsproxy

Start iMessage bridge on mac:

  1. $ ./mautrix-imessage (and if required, grant permission to read Contacts)

Private SSH-based proxy

Server

  1. Create instance
  2. Allow SSH and enable the firewall via $ ufw allow 22 and $ ufw enable
  3. Configure NGINX with noizwaves-cloud-private-proxy.conf:
    stream {
        upstream web_server {
            server 192.168.196.57:443;
        }
    
        server {
            listen 8443;
            proxy_pass web_server;
        }
    }
    

Client

  1. $ cd proxy_client
  2. $ cp .env.tmpl .env
  3. Input appropriate values for the server
  4. $ docker network create noizwaves_cloud_proxy
  5. $ docker-compose up -d
  6. Find proxy container's IP address ($IP_ADDRESS) in $ docker-compose logs -f
  7. Add an entry to /etc/hosts that resolves to $IP_ADDRESS, e.g.:
    $IP_ADDRESS    bitwarden.noizwaves.cloud nextcloud.noizwaves.cloud seafile.noizwaves.cloud authelia.noizwaves.cloud traefik.noizwaves.cloud
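The /etc/hosts line above can be generated with a short loop; the IP here is a documentation placeholder, not a real container address:

```shell
# Build the /etc/hosts entry from step 7 (IP is a placeholder)
IP_ADDRESS=192.0.2.10
line="$IP_ADDRESS"
for app in bitwarden nextcloud seafile authelia traefik; do
  line="$line ${app}.noizwaves.cloud"
done
echo "$line"
```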
    

Maintenance

Upgrading to newer images

  1. Update tags to desired newer value
  2. Recreate containers via $ docker-compose up --force-recreate --build -d

Disaster Recovery

  1. $ cd cloud-config
  2. $ cp backups/backup.env.tmpl backups/backup.env
  3. Set appropriate values

Hot

Setup

  1. $ cd cloud-config
  2. $ cp backups/hot/hot.env.tmpl backups/hot/hot.env
  3. Set appropriate values

Backup

  1. Install crontab as mentioned above

Restore (partial data loss)

  1. $ cd ~/cloud-config
  2. Edit backups/hot/restore.sh to specify path to restore
  3. Restore backup by $ ./backups/hot/restore.sh
  4. $ git restore backups/hot/restore.sh

Restore (disaster recovery)

  1. Set up restore device
    1. Follow steps for adding new node
    2. Install Docker using sudo apt-get install docker.io
  2. $ git clone https://github.com/noizwaves/noizwaves.cloud.git ~/cloud-config-recovery
  3. $ cd ~/cloud-config-recovery
  4. Obtain secrets pack
  5. Populate config
    1. $ cp backups/hot/hot.env.tmpl backups/hot/hot.env
    2. Set secrets and RESTORE_DIR
  6. $ ./backups/hot/restore.sh
  7. $ mv ~/recovery/cloud-config ~/
  8. $ mv ~/recovery/cloud-data ~/
  9. Update configuration in ~/cloud-config/.envrc
  10. Manually add DNS entry for Adguard to Cloudflare
  11. Start foundational services (traefik, authelia, adguard)
  12. Manually add DNS entries to Adguard
  13. Start applications, adding DNS entries to Adguard

Cold

Setup

  1. $ cd cloud-config
  2. $ cp backups/cold/cold.env.tmpl backups/cold/cold.env
  3. Set appropriate values

Backup

  1. SSH into noizwaves.cloud
  2. Connect cold backup USB drive to host
  3. Mount backup drive via $ pmount /dev/sda backup
  4. Mount bigbackup drive via $ pmount /dev/sdb bigbackup
  5. Run a backup via $ ~/cloud-config/backups/cold/backup.sh
  6. Unmount drives via $ pumount backup and $ pumount bigbackup

Restore (partial data loss)

  1. $ cd ~/cloud-config
  2. Edit backups/cold/restore.sh to specify path to restore
  3. Connect cold backup drive
  4. Mount drive via $ pmount /dev/sda backup
  5. Restore backup by $ ./backups/cold/restore.sh
  6. Unmount drive via $ pumount backup
  7. $ git restore backups/cold/restore.sh

Restore (disaster recovery)

  1. Set up restore device
    1. Follow steps for adding new node
    2. Install Docker using sudo apt-get install docker.io
  2. $ git clone https://github.com/noizwaves/noizwaves.cloud.git ~/cloud-config-recovery
  3. $ cd ~/cloud-config-recovery
  4. Obtain secrets pack
  5. Connect cold backup USB drive to restore target
  6. Populate config
    1. $ cp backups/cold/cold.env.tmpl backups/cold/cold.env
    2. Set RESTORE_DIR
  7. Mount drive via $ pmount /dev/sda backup
  8. $ ./backups/cold/restore.sh
  9. $ pumount backup
  10. Disconnect drive
  11. $ mv ~/recovery/cloud-config ~/
  12. $ mv ~/recovery/cloud-data ~/
  13. Update configuration in ~/cloud-config/.envrc
  14. Manually add DNS entry for Adguard to Cloudflare
  15. Start foundational services (traefik, authelia, adguard)
  16. Manually add DNS entries to Adguard
  17. Start applications, adding DNS entries to Adguard

Recover from disaster

How to recover from total hardware failure/destruction

  1. Obtain secrets pack from secure storage
  2. Prepare restore target device
  3. Perform either a cold or hot restore

Misc

Useful diagnostic tools

iotop

  1. sudo apt install iotop
  2. iotop

iozone

  1. sudo apt install iozone3
  2. sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

hdparm

  1. sudo apt install hdparm
  2. sudo hdparm -t /dev/nvme0n1

inxi

  1. sudo apt install inxi
  2. inxi -Dxxx