Vagrantfile: The "ansible" node has no way of connecting to the "web" nodes without manually installing the SSH key.
tima opened this issue · 11 comments
Testing #239 using the current Vagrantfile, I was unable to connect to the nodes in my lab without manually copying and configuring the use of the insecure_private_key file. We need to make this easier by installing the private key on the ansible node and providing a static inventory file that uses it.
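Roughly what I have in mind, as a sketch only (the machine name, key/inventory destinations, and web node IPs below are placeholders, not what lightbulb currently ships):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define "ansible" do |control|
    control.vm.box = "centos/7"
    control.ssh.insert_key = false

    # Copy the shared insecure private key from the host into the control node.
    control.vm.provision "file",
      source: File.expand_path("~/.vagrant.d/insecure_private_key"),
      destination: "/home/vagrant/.ssh/insecure_private_key"

    # Lock its permissions down and drop a static inventory that points at it.
    # (The web node addresses below are placeholders.)
    control.vm.provision "shell", privileged: false, inline: <<-SHELL
      chmod 600 /home/vagrant/.ssh/insecure_private_key
      cat > /home/vagrant/hosts <<'EOF'
[web]
web1 ansible_host=192.168.10.11
web2 ansible_host=192.168.10.12

[web:vars]
ansible_user=vagrant
ansible_ssh_private_key_file=/home/vagrant/.ssh/insecure_private_key
EOF
    SHELL
  end
end
```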
Currently this is fine for the Vagrantfile; however, we may need to add a conditional task that provisions SSH keys to the remote hosts containing both the Vagrant insecure key and the keys provided by the person provisioning.
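Something along these lines could work, assuming a project-local `keys/user.pub` path for the user-supplied key (that path and the node name are made up for the sketch; the box already trusts the insecure key when insert_key is false):

```ruby
# Hypothetical: "keys/user.pub" is wherever the person provisioning drops their public key.
user_pubkey = File.join(File.dirname(__FILE__), "keys", "user.pub")

Vagrant.configure("2") do |config|
  config.vm.define "web1" do |web|
    web.vm.box = "centos/7"
    web.ssh.insert_key = false  # the box already trusts the shared insecure key

    # Only push the extra key when one was actually supplied.
    if File.exist?(user_pubkey)
      web.vm.provision "file", source: user_pubkey, destination: "/tmp/user.pub"
      web.vm.provision "shell", inline: <<-SHELL
        cat /tmp/user.pub >> /home/vagrant/.ssh/authorized_keys
        chown vagrant:vagrant /home/vagrant/.ssh/authorized_keys
        chmod 600 /home/vagrant/.ssh/authorized_keys
      SHELL
    end
  end
end
```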
These lines are relevant:
cluster.vm.box = "centos/7"
cluster.ssh.insert_key = false
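For context, that setting keeps every box on the shared insecure keypair instead of generating per-machine keys; the loop and node names below are only illustrative:

```ruby
Vagrant.configure("2") do |config|
  %w[ansible web1 web2].each do |name|
    config.vm.define name do |cluster|
      cluster.vm.box = "centos/7"
      # false = keep the box's shared insecure key rather than inserting a per-machine key;
      # the matching private key lives on the host at ~/.vagrant.d/insecure_private_key.
      cluster.ssh.insert_key = false
    end
  end
end
```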
Don't install your own key (you might not have it)
Use this: $HOME/.vagrant.d/insecure_private_key
Setting insert_key to false is correct; I use that here:
https://github.com/dfederlein/ansible-tower-demo/blob/master/Vagrantfile
NOTE: My implementation isn't as elegant as the one in lightbulb but may be more readable.
We also avoid the problems that may come up from using Tower Vagrant images on Hashi's registry, because the only box we use is the CentOS 7 minimal. We will, however, need to provision the remote boxes via Ansible with the things that set up "stuff" (keys, etc.), like I did in the linked Vagrantfile.
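Wiring that in could look roughly like this (the playbook name and group layout are placeholders; the `ansible_local` provisioner would also work if the host has no Ansible installed):

```ruby
Vagrant.configure("2") do |config|
  # ... machine definitions ...

  # Hypothetical playbook holding the key/"stuff" setup tasks.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provision.yml"
    ansible.groups = {
      "control" => ["ansible"],
      "web"     => ["web1", "web2"]
    }
  end
end
```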
I'll need to review when I am back at my desk.
I'll have more on whether I'm right about this after I test tomorrow.
Not sure I'm totally following you @dfederlein. Let me recap and see.
So you are suggesting that we keep cluster.ssh.insert_key as false and then run an ansible playbook that installs the private key on the controller? Glancing at the site.yml and the vagrant-common role in your example implementation, it installs the public keys on every VM but not the private key on the controller, which would let it easily remote into the web nodes. Am I mistaken?
Since I'm a n00b with Vagrant, let me ask one more thing I'm not clear on -- why did we have to disable cluster.ssh.insert_key in the first place? It worked well and made things easy for getting up and running. What was the problem? Is that somehow not best practice?
cluster.ssh.insert_key does not handle the private key; it only places the public key. The issue is that we do not provision the private key at all.
Weird. I thought the previous Vagrantfile did place the private key, since it just worked after your nodes were provisioned.
I don't have the history, but at one point with Vagrant in lightbulb we did place the private key if the host provisioned was in a tower group. That may not be there now.
I can create a PR for this soon, but it's private key placement we need to solve. That's easy enough.
BTW, I think this was lost back in 2015 when we ripped Vagrant out in favor of a heavy AWS focus for the teaching labs.
Yeah I must be having a senior moment on this one.
Going through the history, this is the first one that mentions private keys.
That looks like we wanted to keep the insecure Vagrant private key, which means we set that option so the public key was in place, but Vagrant sort of assumes you already have that private key available. It doesn't add the private key itself, to my knowledge. So we just need to add a provisioner task, based on Vagrant group membership, that places the private key as needed.
Two places: 1) at the CLI for the vagrant user on the control node (or the awx user, whichever, if we install Tower), and 2) add it to Tower as a credential (which is what I do in the linked demo repo above).
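A rough sketch of part 1, gated on control/Tower group membership (the group list and key destination are assumptions, and the placement itself is the same idea as the earlier sketch); part 2 would happen in Tower itself, as in the linked demo:

```ruby
Vagrant.configure("2") do |config|
  control_nodes = %w[ansible]  # machines that act as the control/Tower node (assumed name)

  %w[ansible web1 web2].each do |name|
    config.vm.define name do |node|
      node.vm.box = "centos/7"
      node.ssh.insert_key = false

      # Place the private key only on control/Tower nodes so their CLI user
      # can reach the web nodes; the web nodes never receive it.
      if control_nodes.include?(name)
        node.vm.provision "file",
          source: File.expand_path("~/.vagrant.d/insecure_private_key"),
          destination: "/home/vagrant/.ssh/id_rsa"
        node.vm.provision "shell", privileged: false,
          inline: "chmod 600 /home/vagrant/.ssh/id_rsa"
      end
    end
  end
end
```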
I'm an idiot that should have his BASH prompt revoked. I figured it out. I don't think you have to do anything but not be an idiot like me.
LOL!!! Ok let me know if you need me to prototype anything from my demo to lightbulb and do a PR.