volweb-platform "SSLError(SSLCertVerificationError) certificate verify failed: unable to get local issuer certificate" With self signed certificates
Pierredesez opened this issue · 4 comments
Hello,
I've tried every way to launch VolWeb in production, but I can't figure out what the issue is with the SSL certificates.
I'm using an existing entry point for the MinIO instance: a reverse-proxy subdomain with a Let's Encrypt certificate. This vhost is already exposed and proxies directly to MinIO. That being said, MinIO could be HTTP or HTTPS; it's the same for me.
But I keep getting the following errors whenever I try to create a case:
volweb-platform | 2024-06-10 00:44:44,994 WARNING Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))': /71a048d5--4b26-8e7c-
volweb-platform | 2024-06-10 00:44:45,899 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))': /71a048d5--4b26-8e7c-
volweb-platform | 2024-06-10 00:44:47,616 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))': /71a048d5--4b26-8e7c-
volweb-platform | 2024-06-10 00:44:50,935 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))': /71a048d5--4b26-8e7c-
volweb-platform | 2024-06-10 00:44:51,006 ERROR Internal Server Error: /api/cases
I'm lost in the installation process. I don't understand whether we should still generate a self-signed certificate for the local part when we already have an SSL certificate.
If I don't, the Nginx container doesn't start anyway, so I guess that means we should.
Even the step where we accept the risk in the browser and add the certificate to the browser's trusted CAs isn't necessary in this case, I guess.
Also, in the Nginx configuration, I'm not sure whether we should replace volweb-platform with the actual hostname; I only replaced the server_name part.
So the question for this issue would be: should we create a self-signed certificate for MinIO AND Nginx? Because in production, the VolWeb instance is rarely exposed directly; most of the time it sits behind a reverse proxy that handles the HTTPS certificates.
I also notice that in my reverse proxy's Nginx log I don't see any request to the MinIO endpoint that I put in the .env file. That means the VolWeb platform doesn't even try to reach MinIO over HTTPS before throwing that error, so the issue is definitely caused by the self-signed certificate on the local platform.
I guess the issue comes from the MinIO certificate, but I have no way to be sure. I tried to look at the code to understand which request is failing, and to work around it I tried adding ssl=false in the s3fs part of voltools.py, but I'm not sure that's a good idea.
Thank you, and sorry if my questions are dumb :)
Hello @Pierredesez,
First, thank you for your interest in this project!
I think I understand your issue. Let's take a look at the architecture together.
In the proposed production architecture, the MinIO instance is not behind the Nginx proxy, and MinIO itself has the console and the API exposed directly via HTTPS. This choice was made because:
- The web browser of the user needs to perform HTTPS requests to the S3 storage API (uploading and binding evidences).
- The volweb-platform and worker need to talk to MinIO via HTTPS without passing through the Nginx proxy to create new buckets and perform remote analysis with Volatility3.
This is because the storage solution (MinIO or AWS) is considered a dedicated "external" component. Inside the proposed production docker-compose file, the MinIO instance is the default storage solution so that users can get started more easily, and HTTPS is therefore managed on the MinIO server.
In your case, if I understand correctly, you are looking to set up something like the following:
To achieve this setup, you'll need to modify Nginx and set up the appropriate server blocks so that each subdomain's requests are proxied to the right container (a minimal sketch follows the list):

- `volweb.yourdomain.com:443` -> `volweb-platform:8000` (the platform itself, over HTTP)
- `minio.yourdomain.com:443` -> `volweb-minio:9001` (if you want to access the console)
- `api.minio.yourdomain.com:443` -> `volweb-minio:9000` (the API used by VolWeb and the browser)
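For reference, here is a minimal sketch of what those server blocks could look like. This is an assumption based on the setup above, not the project's shipped configuration: the Let's Encrypt certificate paths and the domains are placeholders to adapt to your environment.

```nginx
# Sketch only: adjust domains, certificate paths, and limits to your environment.
server {
    listen 443 ssl;
    server_name volweb.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/volweb.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/volweb.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://volweb-platform:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Daphne serves the websockets on the same port, so allow the upgrade.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    listen 443 ssl;
    server_name api.minio.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/api.minio.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.minio.yourdomain.com/privkey.pem;

    # Evidence uploads go through this endpoint; 0 disables the body size limit.
    client_max_body_size 0;

    location / {
        proxy_pass http://volweb-minio:9000;
        proxy_set_header Host $host;
    }
}
```

The console block for `minio.yourdomain.com` -> `volweb-minio:9001` follows the same pattern.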
With this setup, you should modify the compose file to not expose MinIO on the host and not handle HTTPS on MinIO. Here is an example of what the MinIO service can look like:
```yaml
[OTHER SERVICES]

  volweb-minio:
    container_name: volweb-minio
    image: minio/minio
    restart: always
    volumes:
      - minio_storage:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      - MINIO_ROOT_USER=${AWS_ACCESS_KEY_ID}
      - MINIO_ROOT_PASSWORD=${AWS_SECRET_ACCESS_KEY}
    command: server --console-address ":9001" /data
```
As you are not using a self-signed certificate for MinIO, you should also remove the `REQUESTS_CA_BUNDLE=/etc/ssl/certs/minio.pem` and `SSL_CERT_FILE=/etc/ssl/certs/minio.pem` environment variables from the volweb-workers and platform services in the docker-compose file; since you are using Let's Encrypt, you no longer need a custom certificate to be trusted. The volume `./minio/fullchain.pem:/etc/ssl/certs/minio.pem` is also no longer needed.
Finally, update the `.env` file accordingly:

```
CSRF_TRUSTED_ORIGINS=https://volweb.yourdomain.com
WEBSOCKET_URL=wss://volweb.yourdomain.com
AWS_ENDPOINT_URL=https://api.minio.yourdomain.com
AWS_ENDPOINT_HOST=api.minio.yourdomain.com
```
This is not an exhaustive answer, but I hope it helps. Let me know if you'd like me to provide this use case in the documentation. I could try to provide a compose example using Let's Encrypt.
Let me know how it goes, and do not hesitate to share your solution!
First of all, thank you for the development of this masterpiece. I used it in dev and it saves a lot of time in forensic investigations!!
And secondly, thank you for the answer; it is crystal clear. I don't know why I didn't think of changing the compose file! It's so obvious.
So basically I edited the compose file like you said, and that's it. It's working great.
The modifications are quite clear, but here is the full compose file for lazy people like me:
```yaml
services:
  volweb-postgresdb:
    container_name: volweb-postgresdb
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    image: postgres:14.1
    restart: always
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  volweb-redis:
    container_name: volweb-redis
    image: "redis:latest"
    restart: always
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - "redis-data:/data"
    ports:
      - "6379:6379"
  volweb-minio:
    container_name: volweb-minio
    image: minio/minio
    network_mode: "host"
    restart: always
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio_storage:/data
    environment:
      - MINIO_ROOT_USER=${AWS_ACCESS_KEY_ID}
      - MINIO_ROOT_PASSWORD=${AWS_SECRET_ACCESS_KEY}
    command: server --console-address ":9001" /data
  volweb-platform:
    container_name: volweb-platform
    environment:
      - DATABASE=${DATABASE}
      - DATABASE_HOST=${DATABASE_HOST}
      - DATABASE_PORT=${DATABASE_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - DJANGO_SECRET=${DJANGO_SECRET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_ENDPOINT_URL=${AWS_ENDPOINT_URL}
      - AWS_ENDPOINT_HOST=${AWS_ENDPOINT_HOST}
      - AWS_REGION=${AWS_REGION}
      - WEBSOCKET_URL=${WEBSOCKET_URL}
      - BROKER_HOST=${BROKER_HOST}
      - BROKER_PORT=${BROKER_PORT}
      - CSRF_TRUSTED_ORIGINS=${CSRF_TRUSTED_ORIGINS}
    image: "forensicxlab/volweb:2.1.1"
    command: daphne -u /tmp/daphne.sock -b 0.0.0.0 -p 8000 VolWeb.asgi:application
    expose:
      - 8000
    depends_on:
      - volweb-postgresdb
      - volweb-redis
      - volweb-minio
    restart: always
    volumes:
      - staticfiles:/home/app/web/staticfiles
      - media:/home/app/web/media
  volweb-workers:
    environment:
      - DATABASE=${DATABASE}
      - DATABASE_HOST=${DATABASE_HOST}
      - DATABASE_PORT=${DATABASE_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - DJANGO_SECRET=${DJANGO_SECRET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_ENDPOINT_URL=${AWS_ENDPOINT_URL}
      - AWS_ENDPOINT_HOST=${AWS_ENDPOINT_HOST}
      - AWS_REGION=${AWS_REGION}
      - WEBSOCKET_URL=${WEBSOCKET_URL}
      - BROKER_HOST=${BROKER_HOST}
      - BROKER_PORT=${BROKER_PORT}
      - CSRF_TRUSTED_ORIGINS=${CSRF_TRUSTED_ORIGINS}
    image: "forensicxlab/volweb:2.1.1"
    command: celery -A VolWeb worker --loglevel=INFO
    depends_on:
      - volweb-redis
      - volweb-postgresdb
      - volweb-minio
    restart: always
    volumes:
      - staticfiles:/home/app/web/staticfiles
      - media:/home/app/web/media
    deploy:
      mode: replicated
      replicas: 1 # Warning: see the documentation if you want to add more replicas.
  nginx:
    container_name: volweb_nginx
    image: nginx:mainline-alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./nginx/ssl/:/etc/nginx/certs/
      - staticfiles:/home/app/web/staticfiles
      - media:/home/app/web/media
    depends_on:
      - volweb-platform
volumes:
  minio_storage:
  postgres-data:
  redis-data:
  staticfiles:
  media:
```
Make sure to also increase the upload size limit on Nginx, and that's it.
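If it helps, the relevant directive is `client_max_body_size` (the value below is just an illustrative assumption, not the project's shipped config):

```nginx
# Inside the server (or location) block that handles the uploads:
client_max_body_size 0;   # 0 = no limit; or set an explicit cap, e.g. 20G
```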
Thanks again for the answers !
Wonderful!
Thank you for your feedback and sharing your solution!
I will update the documentation with your solution and this use case in the coming days, and will close the issue then.