garethr/garethr-docker

Use docker::swarm as an exported resource

vide opened this issue · 5 comments

vide commented

`docker::swarm` is a define, which makes it a perfect target (with some boilerplate code in the profile) for an exported resource: every node assigned to the same cluster could import it, so nodes would autojoin via Puppet with no manual intervention. Currently, however, there is no way to retrieve the cluster token from the manager node(s), so this is not possible.

Justin-DynamicD commented

You can always treat the token like a password and pass it in as a variable. You then only have the problem during the initial `swarm init`, when the tokens are generated.

If you want to get fancier, you'd need something like Consul to use as an external keystore. Then you can write a custom function to read/export the tokens.

vide commented

@Justin-DynamicD if I have to treat it like a password, then `docker::swarm` should not be a define but a class (a singleton), and everything would be easier. Its being a define makes me think of exporting it from the manager (where a fact can expose the tokens) and importing it on the worker nodes.

Justin-DynamicD commented

Which is why I made the second suggestion of Consul. You can set up Consul as a Hiera backend and then upload to / gather from the Consul k/v store with a lookup. I was just trying to give you an option to get things going (it's what we did here until our Hiera was set up). I ended up writing a Ruby function that pulled said tokens out of the `docker swarm join-token` output.
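For illustration, a minimal sketch of the parsing step such a function needs, assuming the usual output format of `docker swarm join-token worker` (the helper name `parse_join_token` is illustrative, not part of this module):

```ruby
# Hypothetical helper: extract the join token from the output of
# `docker swarm join-token worker`, which prints a full join command like:
#
#   To add a worker to this swarm, run the following command:
#
#       docker swarm join --token SWMTKN-1-0abcdef 10.0.0.1:2377
#
def parse_join_token(output)
  match = output.match(/--token\s+(\S+)/)
  match && match[1]
end

if __FILE__ == $PROGRAM_NAME
  sample = <<~OUT
    To add a worker to this swarm, run the following command:

        docker swarm join --token SWMTKN-1-0abcdef 10.0.0.1:2377
  OUT
  puts parse_join_token(sample) # => SWMTKN-1-0abcdef
end
```

In practice `docker swarm join-token -q worker` prints only the bare token, which makes the parsing trivial, but the regex above also handles the verbose form.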

vide commented

@Justin-DynamicD setting up a Consul instance just to bring up one (or more) swarm clusters seems a bit overkill to me, at least for my needs. Thanks for your feedback anyway! :)

I've done something similar: adding a custom external fact to expose the worker token, using it in an exported resource, and then just realising it on the worker nodes. It feels a bit ugly having the secret exposed as a fact, but as it's generated on the fly it's pretty tricky to do anything else.
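For what it's worth, the fact half of that approach can be sketched roughly like this; the fact name `docker_swarm_worker_token` and the helper `worker_token_from` are assumptions for illustration, not part of this module's API. The exported resource then reads the fact on the manager and gets realised on the workers:

```ruby
# Hypothetical custom fact exposing the swarm worker join token on a manager.
# Assumes Facter and a docker binary are present; names are illustrative.

# Pure helper so the token-shape check is testable without Docker:
# swarm tokens start with the "SWMTKN-" prefix.
def worker_token_from(cli_output)
  token = cli_output.to_s.strip
  token.start_with?('SWMTKN-') ? token : nil
end

if defined?(Facter)
  Facter.add(:docker_swarm_worker_token) do
    # Only resolve where docker exists (i.e. skip non-Docker nodes).
    confine { Facter::Core::Execution.which('docker') }
    setcode do
      # `-q` prints only the bare token, nothing to parse beyond whitespace.
      worker_token_from(
        Facter::Core::Execution.execute('docker swarm join-token -q worker')
      )
    end
  end
end
```

As noted above, the downside is that the token ends up in the node's facts (and thus in PuppetDB), so it should be treated with the same care as any other secret stored there.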