Passing the --admin-password option to the portainer command in docker-compose.yml
davidwhthomas opened this issue · 17 comments
Hi there,
Firstly, awesome app.
I've got a question about adding the admin password in the docker-compose.yml
file
I've generated the test password "password" as follows:
htpasswd -nb -B admin password | cut -d ":" -f 2
$2y$05$tv5/3s.O.w0UW08zU6CpO.U.Z9xlchoOetGO91N4z9ZoZjwY/4VOi
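As a quick sanity check of that pipeline: htpasswd prints user:hash, and cut -d ":" -f 2 keeps the second colon-delimited field, i.e. the hash. The same split modelled in Python, using the sample hash above:

```python
# htpasswd prints "user:hash"; cut -d ":" -f 2 keeps the second field.
# bcrypt hashes contain no colons, so the second field is the whole hash.
line = "admin:$2y$05$tv5/3s.O.w0UW08zU6CpO.U.Z9xlchoOetGO91N4z9ZoZjwY/4VOi"
user, pw_hash = line.split(":", 1)

print(pw_hash)  # the part passed to --admin-password
```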
I then enter that password hash in the docker-compose service definition as follows:
portainer:
  image: portainer/portainer
  command: --admin-password '$$2y$$05$$tv5/3s.O.w0UW08zU6CpO.U.Z9xlchoOetGO91N4z9ZoZjwY/4VOi' -H unix:///var/run/docker.sock
Note: Single quotes, with double $$ to escape that char as per https://docs.docker.com/compose/compose-file/#variable-substitution
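Compose's $$ escape happens to follow the same convention as Python's string.Template, so Template can be used to sanity-check that an escaped hash survives variable substitution intact (an illustrative check only; Compose's substitution is its own implementation):

```python
from string import Template

# Compose treats "$$" as a literal "$" during variable substitution;
# Python's string.Template uses the same escape, so it models what
# Compose does to the command string before passing it to the container.
escaped = "$$2y$$05$$tv5/3s.O.w0UW08zU6CpO.U.Z9xlchoOetGO91N4z9ZoZjwY/4VOi"
substituted = Template(escaped).substitute()

print(substituted)  # prints the original hash, starting with $2y$05$
```

If the hash printed here differs from the htpasswd output, the escaping in the compose file is wrong.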
However, on Portainer startup, I get the auth dialog but the password is always rejected, returning only
"Invalid credentials"
It works with the unsecured --no-auth option.
I've tried various things (an env variable, quotes, no quotes, escaped double $$, a single $, etc.) to no avail.
The question is, how can I secure portainer with the admin password in docker-compose for deployment?
Thoughts appreciated.
Heya, I was able to successfully create a Portainer instance using a generated password "password".
Here is what I did:
- Generate the hash for the password "password":
docker run --rm httpd:2.4-alpine htpasswd -nbB admin password | cut -d ":" -f 2
$2y$05$arC5e4UbRPxfR68jaFnAAe1aL7C1U03pqfyQh49/9lB9lqFxLfBqS
- Use the following service definition in a docker-compose.yml file:
version: '2'
services:
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    command: --admin-password "$$2y$$05$$arC5e4UbRPxfR68jaFnAAe1aL7C1U03pqfyQh49/9lB9lqFxLfBqS"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Have you tried using double quotes instead of single quotes?
Thanks for the reply. Good to hear it's possible!
Yes, I tried the double-quotes earlier too.
Unfortunately, I wasn't able to log in with "admin" and "password" using your provided Portainer docker-compose.yml file; I get the same "Invalid credentials" error.
I can see another error in the JS console, perhaps related to this?
app.f319a0fb.js:3 POST http://localhost:9000/api/auth 422 (Unprocessable Entity)
Here are some more details on that failing POST request.
Technical details:
- Portainer version: portainer/portainer
- Target Docker version (the host/cluster you manage): 17.09.0-ce-mac35 (19611)
- Platform (windows/linux): OSX
- Command used to start Portainer:
docker-compose -f docker-compose-portainer.yml up -d
- Browser: Chrome 62.0.3202.94 (Official Build)
I can see another error in the JS console, perhaps related to this?
Nope, this error code is returned by the API when credentials are invalid. This behavior is normal.
Question: how are you restarting the stack each time? Are you sure the container is re-created? The --admin-password flag is only taken into account the first time Portainer is started. If the container is restarted, the flag will be ignored (this is to avoid overwriting the password each time the container is restarted).
Try cleaning up your compose environment (docker-compose down) and then applying the instructions I posted above.
The --admin-password flag is only taken into account the first time Portainer is started. If the container is restarted, the flag will be ignored
That's it!
I had Portainer already started, then stopped it with docker-compose stop.
I adjusted the config and ran docker-compose up -d, which didn't set the password.
So yes, one needs to set the password on the container's first init.
After running docker-compose down, I was able to recreate the Portainer container with the admin password set from the docker-compose.yml file!
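The first-init behaviour described above can be sketched as follows (illustrative pseudologic with my own naming, not Portainer's actual code):

```python
from typing import Optional

def init_admin(user_db: dict, admin_password_flag: Optional[str]) -> None:
    """Apply an --admin-password-style flag only on first init: if an admin
    user already exists in the persisted database, the flag is ignored, so
    a container restart cannot overwrite a previously set password."""
    if "admin" in user_db:
        return  # existing data wins; the flag is ignored on restart
    if admin_password_flag is not None:
        user_db["admin"] = admin_password_flag  # store the bcrypt hash

db = {}                                  # empty /data volume: first init
init_admin(db, "hash-from-first-start")  # flag applied
init_admin(db, "hash-after-restart")     # ignored: admin already exists

print(db["admin"])  # prints: hash-from-first-start
```

This is why docker-compose stop / up -d keeps the old (empty) state, while docker-compose down removes the container so the next up starts from a clean database.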
Many thanks for the help and thanks again for the hard work on the excellent Docker administration app, great stuff.
Great, I'll close the issue.
Just to help anyone who gets here as I did: I had to remove the stack, remove the volume configured in portainer-agent-stack.yml, and restart the stack. Only then did the password work.
Hi,
I cannot get --admin-password to work. On a clean install on Docker Swarm, with no prior Portainer installation, authentication always fails. This is my compose file:
version: '3.7'
services:
  agent:
    image: portainer/agent
    environment:
      # REQUIRED: Should be equal to the service name prefixed by "tasks." when
      # deployed inside an overlay network
      AGENT_CLUSTER_ADDR: tasks.agent
      AGENT_PORT: 9001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer
    container_name: portainer
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    restart: unless-stopped
    command: ["-H", "tcp://tasks.agent:9001", "--tlsskipverify", "--admin-password='$2y$05$Nnetxx5WCP1d44Jwq/dYy.YI6aamFyMYIrW2akzQ.6sh.6Gdch1Hi'"]
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /mnt/efs/web-prod/config/portainer/data:/data
I have also tested escaping the $ characters with double $$, removing the single quotes around the password, and adding double quotes around it, as I have seen in some suggestions, but I cannot get it to work. I can only get it to work by removing --admin-password and setting the password up on the first run.
Hi @juanluisbaptiste, I am having the same issue ATM. Have you maybe managed to solve this?
Hi once again,
I've managed to solve this problem. It turned out that by using node.role == manager I did not properly identify the host that stored the volume.
When I made sure that I ran the service without any volume, it worked.
The docker compose I used:
version: '3.2'
services:
  agent:
    image: portainer/agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
      - /home/ubuntu/.docker:/certs
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer
    command: --admin-password "$$2y$$05$$arC5e4UbRPxfR68jaFnAAe1aL7C1U03pqfyQh49/9lB9lqFxLfBqS" -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == ip-10-10-0-136]
networks:
  agent_network:
    driver: overlay
    attachable: true
volumes:
  portainer_data:
Good luck!
Hi @wsromek ,
I will test it out, but this sounds like a bug. What does @deviantony think?
@juanluisbaptiste this is a matter of default behaviour for docker stack and docker compose. More here: moby/moby#29158
@wsromek in my case I'm using bind volumes on a shared EFS mount point (take a look at my previous post for my docker-compose file), not named volumes that are specific to one host, so the configuration files should be found on any of the manager nodes.
Yeah, mea culpa. Missed that. Sorry :-)
@juanluisbaptiste I just retried the instructions I gave in this comment #1506 (comment) and it's working fine on my side.
I would recommend trying without persisting any data first, to isolate the cause of the problem.
@deviantony yes, those instructions work, the issue is when running on a swarm cluster as mentioned by @wsromek and me.
Could you test it out with my docker-compose.yml file from my comment ?
@juanluisbaptiste I've been successfully deploying Portainer inside a Swarm cluster via the following docker-compose file (adapted from your comment):
version: '3.7'
services:
  agent:
    image: portainer/agent
    environment:
      # REQUIRED: Should be equal to the service name prefixed by "tasks." when
      # deployed inside an overlay network
      AGENT_CLUSTER_ADDR: tasks.agent
      AGENT_PORT: 9001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    command: --admin-password "$$2y$$05$$arC5e4UbRPxfR68jaFnAAe1aL7C1U03pqfyQh49/9lB9lqFxLfBqS" -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /var/run/docker.sock:/var/run/docker.sock
What worked for me:
version: "3.2"
services:
  agent:
    image: portainer/agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer
    command: -H tcp://tasks.agent:9001 --tlsskipverify --admin-password-file /run/secrets/PORTAINER_PASS
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
    secrets:
      - PORTAINER_PASS
networks:
  agent_network:
    driver: overlay
    attachable: true
volumes:
  portainer_data:
secrets:
  PORTAINER_PASS:
    external: true
First, add the password to Docker secrets: printf 'super_secret_password' | docker secret create PORTAINER_PASS -
Then deploy: docker stack deploy --compose-file=portainer-agent-stack.yml portainer, and that's it.
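For reference, a flag like --admin-password-file generally just reads the secret that Swarm mounts under /run/secrets/&lt;name&gt;, trimming any trailing newline (which is why printf is preferred over echo above, since echo appends one). A minimal sketch of that pattern with hypothetical names, not Portainer's actual implementation:

```python
import tempfile
from pathlib import Path

def read_password_file(path: str) -> str:
    """Read a mounted secret file, stripping a trailing newline if present
    (printf writes none; echo would append one)."""
    return Path(path).read_text().rstrip("\n")

# Simulate the file Swarm mounts at /run/secrets/PORTAINER_PASS.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("super_secret_password")
    secret_path = f.name

print(read_password_file(secret_path))  # prints: super_secret_password
```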