thsmi/sieve

Web App not working under docker

Closed this issue · 3 comments

koehn commented

Prerequisites

  • [x] Tried the most recent nightly build
  • [x] Checked if your issue is already reported.
  • [x] Answered all the questions in this template (Or provide a working crystal ball).

What happened?

I'm trying to run the web app from within a docker container generated from this Dockerfile:

FROM python:latest

ARG VERSION=0.6.1

RUN mkdir -p /opt/sieve && \
    chown daemon:daemon /opt/sieve

WORKDIR /opt/sieve
USER daemon

RUN mkdir -p /opt/sieve && \
    cd /opt/sieve && \
    curl -OL https://github.com/thsmi/sieve/releases/download/$VERSION/sieve-$VERSION-web.zip && \
    unzip sieve-$VERSION-web.zip && \
    rm sieve-$VERSION-web.zip && \
    python -m venv .venv

CMD bash -c "source .venv/bin/activate && python main.py"
  1. Build the above Dockerfile into an image: docker build -t koehn/sieve .
  2. Configure a working config.ini, cert, and key:
[DEFAULT]
ServerPort = 8765
ServerAddress = 0.0.0.0
ServerCertFile = /opt/sieve/sieve.cert
ServerKeyFile = /opt/sieve/sieve.key
SieveHost = mail.example.com
SievePort = 4190
SieveSecure = no
AuthType = client
AuthUserHeader = X-Forwarded-User
  3. Launch a docker container with the above mapped into the container and a port forwarded: docker run -v $PWD/sieve.cert:/opt/sieve/sieve.cert -v $PWD/sieve.key:/opt/sieve/sieve.key -v $PWD/config.ini:/opt/sieve/config.ini -p 8765:8765 koehn/sieve
  4. Point a web browser at e.g. http://localhost:8765

What did you expect to happen?

Expected the app to launch. Instead, the HTTP GET never returns.

Logs and Traces

curl -v http://localhost:8765
*   Trying ::1:8765...
* connect to ::1 port 8765 failed: Connection refused
*   Trying 127.0.0.1:8765...
* Connected to localhost (127.0.0.1) port 8765 (#0)
> GET / HTTP/1.1
> Host: localhost:8765
> User-Agent: curl/7.74.0
> Accept: */*
> 

(GET never returns)

No logs from Docker container.

Nothing logged from Dovecot.

Which Version

web-sieve-0.6.1
Dovecot/Pigeonhole 2.3.18

I don't know what's wrong; if I send an invalid config the app correctly errors out. If I run the same curl command inside the container it also hangs. I'd expect some kind of timeout if the app wasn't able to connect to the managesieve server.

thsmi commented

I tried to reproduce your issue and stumbled upon the following problems:

HTTP vs HTTPS

According to your logs you are trying to connect via http to a service running https. This ends up in a TCP timeout, which is typically in the multi-minute range.

Most web servers automatically detect incoming http requests and redirect them to https. But the current implementation is very minimalistic and does not support such goodies.
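A quick way to check this assumption is to let curl talk https to the very same port; the -k flag is only there in case of a self-signed certificate, and the host and port are the ones from your example:

curl -vk https://localhost:8765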

Configuration file

You need to configure at least one account. This means there has to be at least one section besides the default section.

[DEFAULT]

ServerPort = 8765
ServerAddress = 0.0.0.0
ServerCertFile = /opt/sieve/sieve.cert
ServerKeyFile = /opt/sieve/sieve.key
SieveHost = mx2fea.netcup.net
SievePort = 4190
HttpRoot = /opt/sieve/static

[Client Side Example]
AuthType = client
AuthUser = me@example.com
# Or in case you use a reverse proxy, which injects the auth header, use:
# AuthUserHeader = X-Forwarded-User

The section header, in this example named "Client Side Example", is displayed in the UI as the title for the corresponding account:

(screenshot: the section name shown as the account title in the UI)

You can set it to whatever you want, but as said, at least one account needs to be configured. And yes, there is room for improvement in the documentation.

Furthermore, there is a bug in resolving the HttpRoot folder, thus you need to configure the http root directory manually.

Docker file

In case you want to see the output from the docker container, you need to start it with -it or redirect the python output into a file. I have not checked the docker documentation, but it seems to me as if docker dumps stderr by default but not stdout, thus you see the error message in case of a missing config file.
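If you go for the redirect, a minimal sketch would be to change the CMD accordingly; the log file name and location are of course just an example:

CMD python3 main.py > /opt/sieve/webapp.log 2>&1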

Creating a venv inside a docker container is a bit of overkill, because it is a virtualization inside a virtualization. It does not harm, but it also does not offer any benefit.

This is the Dockerfile I used:

FROM alpine:latest

ARG VERSION=0.6.1

RUN apk add python3

RUN mkdir -p /opt/sieve && \
    cd /opt/sieve && \
    wget https://github.com/thsmi/sieve/releases/download/$VERSION/sieve-$VERSION-web.zip -O sieve.zip && \
    unzip sieve.zip && \
    rm sieve.zip

EXPOSE 8765/tcp

WORKDIR /opt/sieve
CMD python3 main.py

This is my docker command; the -it is of course optional and makes it chatty.

docker run -it -v d:\python.cert:/opt/sieve/sieve.cert -v d:\python.key:/opt/sieve/sieve.key -v D:\projekte\sieve\core\tools\Docker\webapp\config.ini:/opt/sieve/config.ini -p 8765:8765/tcp core:latest
koehn commented

Thanks for the reply! I thought the app would work better in a hosted environment; basically like the desktop app served over the web, allowing you to use the same authentication to the sieve server and such.

The lack of http support and common authentication is probably a deal-breaker for me using the web app; I’ve already got primitive sieve support in rainloop that is good enough for my users, and I can use the desktop app for my more sophisticated needs.

thsmi commented

Well, I did not get what you mean by common authentication.
Though it would be interesting to understand your use case.

The webapp was designed for usage with a reverse proxy and SSO. Thus you configure your reverse proxy to do the authentication. If authenticated, the proxy adds the username as a header to the request, and the webapp trusts this header and relies upon this username. If not authenticated, the proxy will reject or challenge the client. So on the client side, in the browser, there is nothing like a username.
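To sketch the idea with nginx as an example reverse proxy (the basic auth setup, certificate paths, htpasswd file and upstream address are just assumptions to illustrate the header injection, not a verified configuration):

server {
    listen 443 ssl;
    server_name sieve.example.com;

    ssl_certificate     /etc/nginx/sieve.cert;
    ssl_certificate_key /etc/nginx/sieve.key;

    location / {
        # the proxy does the actual authentication ...
        auth_basic           "Sieve";
        auth_basic_user_file /etc/nginx/htpasswd;

        # ... and injects the authenticated username for the webapp
        proxy_set_header X-Forwarded-User $remote_user;

        # pass websocket upgrades through to the webapp
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_pass https://127.0.0.1:8765;
    }
}

The webapp then only sees the X-Forwarded-User header and never the credentials themselves.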

Depending on your needs you can then do a proxy authentication or a client side authentication. The latter shows a dialog asking for the password; the former will use a proxy authentication mechanism which is fully transparent to the browser.

The reverse proxy has the huge advantage that the webapp only needs to implement a very basic user authentication logic, as this crucial part is done by the reverse proxy. If you want e.g. GSSAPI, Kerberos, OAuth or two factor stuff, you just need to activate and configure this on your reverse proxy. This dramatically reduces complexity, and the webapp will support whatever authentication your reverse proxy supports. It also reduces the risk of a security flaw: the common reverse proxies can provide way quicker security updates than a one man open source project.

The webapp basically depends on a reverse proxy to do the authentication. Without the reverse proxy it indeed feels strange.

Concerning the "https only": this is because of a websocket related limitation. There are some scenarios where browsers allow websocket connections only via https. Also, all of the reverse proxies I am aware of do not like to downgrade an incoming secure connection to a non-secure one. You can do this, but it is typically more painful than getting a free Let's Encrypt certificate.
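For the Let's Encrypt route, a rough sketch with certbot would be something like the following, assuming port 80 is reachable for the challenge and the domain name is just a placeholder:

certbot certonly --standalone -d sieve.example.com

The resulting fullchain.pem and privkey.pem under /etc/letsencrypt/live/sieve.example.com/ can then be referenced by ServerCertFile and ServerKeyFile or by your reverse proxy.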