shlinkio/shlink-docker-image

SQLite persistent storage

Closed this issue · 3 comments

Hi there,
I'm looking for a way to use Shlink with SQLite, but with persistent storage for at least the database file (database.sqlite). Normally it's stored inside the container's filesystem, which is risky when you're messing around with the stack.

Also, I've created a docker-compose.yml with an additional nginx setup (with Let's Encrypt and so on), so it would be great to be able to define a mount point (a volume) for the data, like this (see the commented part):

version: '3'
services:
  nginx:
    build: .
    ports:
     - "80:80"
     - "443:443"
    volumes:
     - ./data/nginx:/etc/nginx/conf.d
     - ./data/certbot/conf:/etc/letsencrypt
     - ./data/certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: "certbot/certbot"
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  shlink:
    image: "shlinkio/shlink"
    # volumes:
    #   - ./data/shlink/:/etc/shlink/data/
    environment:
      - SHORT_DOMAIN_HOST=1y.yt
      - SHORT_DOMAIN_SCHEMA=https
      - DB_DRIVER=sqlite

Unfortunately it doesn't work, because this volume mount point has to exist before the build, so a change in the main Dockerfile is probably needed.

Do you have any ideas on how to do this right?

Cheers,
Matt

I will try to make the change so that Docker expects that file to be mounted, but I'm not sure if the migrations tool is going to like the file existing already. I will do some tests and see how I can solve this.

However, I would recommend not using SQLite in production. It might become a bottleneck.

Thank you for the quick response. I know that SQLite is not that fast, but this is my personal URL shortener, just for me, and I'm not expecting a heavy load.

Hello @mac-iek. I have been doing some testing and there doesn't seem to be an easy solution for this.

If the database file is exposed via a volume, then Docker creates an empty file when the container is run, and then, when the database tool tries to populate the database, it fails because it is not actually an SQLite-compliant file.
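For illustration, here is a minimal sketch of that failure mode (it assumes you pre-create the file with touch, so that Docker mounts a file rather than a directory; the path comes from the Shlink image):

touch database.sqlite
docker run --rm -v ${PWD}/database.sqlite:/etc/shlink/data/database.sqlite shlinkio/shlink
# The migrations fail because the mounted file is zero bytes, with no SQLite header:
file database.sqlite    # -> database.sqlite: empty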

I have tried different approaches to automate this, but none of them seems to really work.

I would recommend following one of these workarounds instead:

  • Extract a valid database from the container once, and then share it with a volume:
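
    # A sketch of the step this assumes: run a throwaway container first
    # (without the volume), so that the migrations create a valid
    # database.sqlite inside it under the name "shlink_testing":
    docker run -d --name shlink_testing -e SHORT_DOMAIN_HOST=doma.in -e SHORT_DOMAIN_SCHEMA=https shlinkio/shlink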

    docker cp shlink_testing:/etc/shlink/data/database.sqlite ./database.sqlite
    docker run --name shlink -p 8080:8080 -e SHORT_DOMAIN_HOST=doma.in -e SHORT_DOMAIN_SCHEMA=https -v ${PWD}/database.sqlite:/etc/shlink/data/database.sqlite shlinkio/shlink
    
  • Use a more suitable engine for production, which will also simplify sharing the database

    You could add a service like this to your docker-compose definition:

    version: '3'
    services:
        nginx:
            # ...
        certbot:
            # ...
        shlink:
            image: "shlinkio/shlink"
            # volumes:
            #   - ./data/shlink/:/etc/shlink/data/
            environment:
                - SHORT_DOMAIN_HOST=1y.yt
                - SHORT_DOMAIN_SCHEMA=https
                - DB_DRIVER=mysql
                - DB_HOST=shlink_db
                - DB_USER=root
                - DB_PASSWORD=mypass
            links:
                - shlink_db
        shlink_db:
            container_name: shlink_db
            image: mysql:5.7.25
            environment:
                MYSQL_DATABASE: shlink
                MYSQL_ROOT_PASSWORD: mypass
            volumes:
                - ./database:/var/lib/mysql

    This will share the MySQL database with a volume just in case you need to regenerate it, but you could even skip that.
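
    If it helps, bringing that stack up is then just the usual Compose workflow (nothing Shlink-specific here, just a sketch):

    docker-compose up -d
    docker-compose logs -f shlink   # Shlink should run its migrations against MySQL on first start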