spliit-app/spliit

Store images somewhere else than S3

scastiel opened this issue · 17 comments

Since #63, users can attach images to expenses, but they can be stored only on S3. If someone has the need to store images somewhere else (locally, on an FTP server…), feel free to comment here with your need and we’ll see how to implement it.

A relatively simple change could be to support S3 providers other than AWS and allow configuring the endpoint via an additional environment variable like S3_UPLOAD_ENDPOINT.

Support for additional storage methods may not be necessary beyond this. There are FOSS options for S3-compatible storage, so there would be no vendor lock-in. Focusing storage options on a single method may help keep complexity lower, too.

Looks like next-s3-upload has support for the endpoint already:
https://next-s3-upload.codingvalue.com/bucket-config#bucket-endpoint
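
For illustration, a rough sketch of what the resulting configuration could look like, reusing the existing S3_UPLOAD_* variables and assuming the new endpoint variable is named S3_UPLOAD_ENDPOINT (the hostname and bucket name are placeholders for any S3-compatible service, e.g. a self-hosted MinIO):

S3_UPLOAD_KEY=your-access-key
S3_UPLOAD_SECRET=your-secret-key
S3_UPLOAD_BUCKET=spliit-attachments
S3_UPLOAD_REGION=us-east-1
S3_UPLOAD_ENDPOINT=https://minio.example.com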

support for storage on a local file system might be useful for homelab deployments

It could, but you should also be able to use something like MinIO to get the same functionality, and since you are already running multiple containers (postgres + app), I don't think the overhead of adding MinIO would hurt too much.
Then you can choose where to store the MinIO volumes if you wish.

EDIT: it does complicate serving the images (you would need to expose your MinIO instance to the internet), whereas something like storing them locally in a static folder might work well while only leaving the Spliit app open to the internet.

I tried getting Minio running locally and got it to work with uploading the image to the bucket, but not with then showing it.

I think you would have to serve the images via presigned URLs (in addition to uploading via presigned URLs) to get it working. I can continue looking into it another day if that would be a suitable solution?
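
For context, a minimal sketch of what serving via presigned URLs could look like with the AWS SDK v3 (the environment variables mirror the existing S3_UPLOAD_* ones; this is not Spliit's actual code, just the general idea):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Point the client at the custom endpoint (e.g. a local MinIO instance).
// forcePathStyle avoids bucket-as-subdomain URLs, which MinIO typically needs.
const s3 = new S3Client({
  region: process.env.S3_UPLOAD_REGION,
  endpoint: process.env.S3_UPLOAD_ENDPOINT,
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.S3_UPLOAD_KEY!,
    secretAccessKey: process.env.S3_UPLOAD_SECRET!,
  },
});

// Return a time-limited GET URL for an already-uploaded object.
export async function getImageUrl(key: string): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: process.env.S3_UPLOAD_BUCKET,
    Key: key,
  });
  return getSignedUrl(s3, command, { expiresIn: 3600 }); // valid for 1 hour
}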

It seems we both gave it a try in parallel. 🙂

I'd be glad to hear your feedback to my attempt: #71

Looks very similar to my attempt! Except yours handles the Next.js config parts way better than my solution.
It seems my issue was that I did not configure my MinIO bucket correctly, so the only thing I would maybe say could be added is a hint in the documentation about MinIO and how to set up the bucket. But I'm also unsure if that actually should be in the documentation for Spliit… so I did not leave it as a comment on the PR.

Very nice work!

Regarding exposing to the internet, one of my concerns is having too many ports open on a home server, so I have been trying to embed Spliit as a Home Assistant add-on so that I can use a single open port as the entry point, or their Nabu Casa proxy (see #56). The app would then be available from the Home Assistant app, and this, along with all of my other tools, would be available in one place.

This would solve the public URL problem too, as the MinIO URL would be considered local.

Yeah, I run everything behind a reverse proxy, so I only have port 443 open to the internet and use different sub-domains. But I do see the use case for being able to just serve them from a static file directory.

I would also prefer local storage for home-lab environments. MinIO makes the setup more complex overall and adds another service that has to be hardened against external threats.

Keeping documents stored locally would make the most sense for home-lab users looking to keep everything in-house.

Once the Docker Hub image is available, you could look at integrating with other applications. Since most home-lab users are running paperless-ngx, could that integration be used with Spliit?

If, instead of uploading attachments into Spliit, we had the option to link to an outside URL, that would let me use my choice of document repository. Since Paperless can create a document link, when I create an attachment on a receipt in Spliit I could just put that Paperless shared doc URL as the value for that attachment type.

This would be versatile, because Spliit users could then easily store these documents anywhere (Google Drive, Paperless-ngx, Nextcloud, etc.).

@justcallmelarry could you please share how you configured MinIO so the images are actually displayed in Spliit? (PUT works fine.)

Any updates on this?

FYI I am currently in the process of adding local file support (see my fork) so that S3 is not needed anymore.
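
Not the fork's actual implementation, but the general idea is roughly this, assuming a Next.js route handler and a hypothetical UPLOAD_DIR environment variable for the storage location:

// app/api/files/[name]/route.ts — hypothetical route for serving locally stored uploads
import { promises as fs } from "fs";
import path from "path";
import { NextResponse } from "next/server";

const UPLOAD_DIR = process.env.UPLOAD_DIR ?? "./uploads";

export async function GET(
  _req: Request,
  { params }: { params: { name: string } }
) {
  // Only allow plain file names, so a crafted path can't escape the upload directory.
  const filePath = path.join(UPLOAD_DIR, path.basename(params.name));
  try {
    const data = await fs.readFile(filePath);
    return new NextResponse(data, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  } catch {
    return new NextResponse("Not found", { status: 404 });
  }
}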

@swiftbird07 I also had problems integrating the MinIO Docker image with the Spliit Docker image. I could upload an image with PUT, but when the Spliit application tried to do a GET, it failed.

For me, part of the solution was to open port 9000 (the port MinIO was using for the buckets) on my Docker machine using UFW. There could have been other configuration changes that contributed to getting the application working, which I can share if it would help. I also modified the compose file to use the MinIO image so they share a network.

If you are still interested in fixing this, I'm happy to share what I have done! :) It took me a while to troubleshoot this.

Yes I would appreciate that. Especially how did you set up the permissions etc?

The steps I took for setup were as follows:

  1. I created the user access key and secret key on the admin account by selecting access keys under the user menu on the left. Don't forget to set an expiry for less than a year.
  2. I then created the bucket. I did not enable versioning, object locking or the quota.
  3. After creating the bucket, make sure to change the access policy from private to public (the mc commands at the end of this comment show a CLI way to do this). The default bucket options should be to allow all commands.
  4. Then I selected Configuration in the administrator menu and changed the region to us-east-1.
  5. I then cloned the Spliit project and, following the instructions, copied the file container.env.example as container.env and modified the following:
These are the keys generated within minio:
S3_UPLOAD_KEY=Name-of-access-key-generated-in-minio
S3_UPLOAD_SECRET=Secret-key-generated-in-minio

This is the name of the bucket you created in minio:
S3_UPLOAD_BUCKET=name-of-s3-bucket

This is a setting you can set when you click on the settings page in minio:
S3_UPLOAD_REGION=us-east-1

I used the ip address of my docker machine here:
S3_UPLOAD_ENDPOINT=http://123.123.123.123:9000
  6. Then I modified the compose.yaml to add the following into it after the db entry:
  s3-bucket:
    container_name: minio
    image: quay.io/minio/minio:RELEASE.2024-05-01T01-11-10Z-cpuv1
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_PASSWORD: [Add a password here that will be used for the admin account]
      MINIO_ROOT_USER: [Add a username to use for the admin account]
    tty: true
    command: server /data --console-address ":9001"
    volumes:
      - type: bind
        source: [full path of a folder you would like to use]
        target: /data
    restart: unless-stopped

You can change the image to use the latest like this:

 image: quay.io/minio/minio:latest

So my full compose.yaml looks like this:

services:
  app:
    restart: unless-stopped
    image: spliit2:latest
    ports:
      - 3000:3000
    env_file:
      - container.env
    depends_on:
      db:
        condition: service_healthy

  db:
    restart: unless-stopped
    image: postgres:latest
    ports:
      - 5432:5432
    env_file:
      - container.env
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5

  s3-bucket:
    container_name: minio
    image: quay.io/minio/minio:RELEASE.2024-05-01T01-11-10Z-cpuv1
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_PASSWORD: [Add a password here that will be used for the admin account]
      MINIO_ROOT_USER: [Add a username to use for the admin account]
    tty: true
    command: server /data --console-address ":9001"
    volumes:
      - type: bind
        source: [full path of a folder you would like to use]
        target: /data
    restart: unless-stopped

I needed to use an older image due to the age of the machine running the images 😅
I have also used a bind mount for the data as it's easier to modify the files within a bind-mounted folder. You can substitute this with a volume if you wish.

  7. I then opened the port to make my MinIO instance accessible, using ufw allow 9000 (I'm using Ubuntu).
  8. Lastly, run docker compose -f compose.yaml up.

Now that it's all working, I've changed the relevant options to improve security, like not mapping the db ports, creating users for the bucket instead of using the admin account, and locking down permissions, but to start with, the above instructions are all I used.
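
For reference, the bucket policy and the dedicated user can also be set up with MinIO's mc client instead of the web console. A rough sketch, with placeholder alias, user and bucket names (exact subcommands can vary between mc releases):

# register the MinIO endpoint under the alias "local"
mc alias set local http://123.123.123.123:9000 ADMIN_USER ADMIN_PASSWORD
# create the bucket and allow anonymous downloads so images can be displayed
mc mb local/name-of-s3-bucket
mc anonymous set download local/name-of-s3-bucket
# create a dedicated user for Spliit instead of using the admin credentials
mc admin user add local spliit-uploader A_STRONG_SECRET
mc admin policy attach local readwrite --user spliit-uploader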

Hopefully it helps!