Force rerun of proxy configuration
datosh opened this issue · 6 comments
I am currently facing an "issue" when working with Ansible & vagrant-proxyconf. In quotes, since the vagrant-proxyconf behaves as described, and the workaround I have works fine. So let me describe my use case:
I am provisioning an Ubuntu based VM using Vagrant (with the vagrant-proxyconf plugin), and then use Ansible to install packages and bootstrap a k8s cluster. I have written an Ansible playbook which:
- Installs Docker-CE
- Installs kubeadm
- Bootstraps the cluster
The problem is that the proxy configuration only runs before and after the provisioners, as stated in documentation: "The proxy configurations are written just before provisioners are run."
This leads to the problem that my bootstrapping of the cluster fails, since the proxy is not configured in docker and it is not possible to pull images from internet.
My current solution is to split my Ansible playbooks into two. This allows the provisioner to run in between and setup docker proxy as required.
Is there another way around this so I don't have to split my playbook / provisioning step?
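For concreteness, the split-playbook workaround can be sketched in a Vagrantfile roughly like this. The box name and playbook filenames are placeholders, and it assumes vagrant-proxyconf writes its proxy configuration before each provisioner runs:

```ruby
# Hypothetical sketch of the two-playbook workaround; box name and playbook
# filenames are made up. vagrant-proxyconf writes proxy configuration just
# before provisioners run, so Docker (installed by the first playbook) gets
# its proxy configured before the second playbook pulls any images.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  # First pass: install Docker and kubeadm. Docker does not exist yet when
  # vagrant-proxyconf first runs, so its proxy cannot be configured here.
  config.vm.provision "ansible" do |a|
    a.playbook = "install.yml"
  end

  # Second pass: Docker is now installed, so the plugin can detect it and
  # configure its proxy before this playbook bootstraps the cluster.
  config.vm.provision "ansible" do |a|
    a.playbook = "bootstrap.yml"
  end
end
```

The second playbook then runs with Docker's proxy settings already in place.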
Cheers,
And thanks for the great plugin!
Hi @datosh,
I hope you are well.
I do appreciate the time you put into explaining your challenge with the current state of the plugin. However, I won't lie, I'm having a hard time grasping the inception here: does this happen only when bootstrapping for the first time, or does it happen on the second run and every run afterwards?
If it is the first run, then there isn't much we can do that I'm aware of to fix this, as the provisioner has to install and configure the service(s), i.e. Docker in this case, before we can consume that service. So all steps that require a working Docker daemon, like pulling images or building containers, would have to happen after the service has been installed, configured, and restarted.
To clarify in the code, if the VM supports systemd we attempt to restart the service after the proxy is configured here for the first time. However, I won't lie, mileage here will vary as it's difficult to instrument this properly given the way this plugin is currently written as we mostly just "shell" out to the commands.
I've had dreams of replacing all the logic with a true system configuration tool like Puppet, Bolt or Ansible, but I haven't had time to test this theory. Perhaps we could create a separate issue that captures enhancements and a better future for this plugin? I don't want to digress too much on this in this thread, though.
The only other alternative I can think of is to have this plugin install, configure and set up Docker to work behind a proxy. You would then need to test that image pulling and creation work. Once validated, you could halt the instance and repackage it as a new box so that you don't have to worry about the first-time bootstrapping again, and you could just run your Ansible playbooks afterwards.
This is the method I use when I'm in an environment that forces me to use a proxy, but it also has drawbacks, like maintaining a whole separate image and its versioning, patching, etc.
Hey @codylane
First of all, thanks for the in depth feedback!
I'm having a hard time trying to grasp the inception that is happening when bootstrapping for the first time or if this happens on the second run and every run afterwards?
This only happens on the first run, and simply running vagrant up or provision again solves the issue, since vagrant-proxyconf then detects Docker and configures the proxy settings as desired! Therefore it is not a big problem.
you could [...] halt the instance and repackage it as a new box so that you don't have to worry about the first time bootstrapping again
True, and I have thought about that angle as well, but then the proxy configuration is hard-coded into that machine image, and it will only work in a proxy environment, or even worse, only in this proxy environment with the provided proxy configuration. I prefer to keep the proxy configuration in vagrant.d and only apply it on the machines where it is actually required.
Perhaps we could create a separate issue that contains enhancements and better future for this plugin?
Sure, I am open to reviewing such an issue or provide feedback, but I'm not sure that I can really formulate how to enhance the plugin, since I am rather clueless about how it is actually implemented and how it is working in the background.
hey @datosh,
Sorry for the delay. I'm still not certain there is an easy way to fix this, and we may just have to live with this understanding for now. I've been doing orchestration deployments for more than 15 years now, and I've found the most success breaking up my code into chunks like you recommended:
- Configure the state for the first run, knowing that other dependent services cannot be configured until we are in a known good state.
From here I've done some creative things in the past, like writing webhooks or having processes block until a new event is received before continuing to build, but those all get pretty complex at some point and become brittle over time.
So, I've got what I think is an easier solution that I'm brewing up for you, and I'm wondering if you can help me? Please provide your Vagrantfile. I'll show you what I do in our tests so that you can replicate that behavior on your own system. That way, you should have the best of both worlds: using your VMs behind your proxy or at home without making any changes.
Here's an example, if this helps. This example was a copy-and-paste-plus-modify, so it may not be complete or 100% accurate, but it should be a good starting ground for you.
The magic happens in the first three lines. Basically, if you already have HTTP_PROXY-style environment variables set in the environment of the user running vagrant, these variables pass through to vagrant-proxyconf automatically; otherwise they are set to nil, in which case the plugin un-configures all the proxies it supports on the next vagrant provision.
ENV['HTTP_PROXY']  = ENV.fetch('HTTP_PROXY', nil)
ENV['HTTPS_PROXY'] = ENV.fetch('HTTPS_PROXY', nil)
# NOTE: $PROXY_NO_PROXY is assumed to be defined earlier in the Vagrantfile;
# the `|| []` guard keeps the snippet from crashing if it is not.
ENV['NO_PROXY']    = ENV.fetch('NO_PROXY', ($PROXY_NO_PROXY || []).join(","))

puts "HTTP_PROXY  = '#{ENV["HTTP_PROXY"]}'"
puts "HTTPS_PROXY = '#{ENV["HTTPS_PROXY"]}'"
puts "NO_PROXY    = '#{ENV["NO_PROXY"]}'"

Vagrant.configure("2") do |config|
  config.vm.define 'default' do |c|
    c.vm.box = nil

    if Vagrant.has_plugin?('vagrant-proxyconf')
      c.proxy.http     = ENV['HTTP_PROXY']
      c.proxy.https    = ENV['HTTPS_PROXY']
      c.proxy.no_proxy = ENV['NO_PROXY']

      c.proxy.enabled = {
        :apt => {
          :enabled => true,
          :skip    => false,
        },
        :env => {
          :enabled => false,
          :skip    => false,
        }
      }
    end

    c.vm.provider "docker" do |d|
      d.build_dir  = "."
      d.dockerfile = "Dockerfile.bionic"
      d.has_ssh    = true
    end
  end
end
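As a side note on the ENV.fetch pattern above, here is a tiny standalone Ruby illustration of the pass-through-or-default behavior; the proxy URL and no_proxy values are made up:

```ruby
# When the variable is set in the caller's environment it passes through
# unchanged; otherwise the supplied default is used (nil here, meaning the
# plugin would un-configure proxies on the next `vagrant provision`).
ENV.delete('HTTP_PROXY')                        # simulate an unset variable
puts ENV.fetch('HTTP_PROXY', nil).inspect       # => nil

ENV['HTTP_PROXY'] = 'http://proxy.example:3128' # simulate a corporate proxy
puts ENV.fetch('HTTP_PROXY', nil).inspect       # => "http://proxy.example:3128"

no_proxy_defaults = %w[localhost 127.0.0.1]     # stand-in for $PROXY_NO_PROXY
puts ENV.fetch('NO_PROXY', no_proxy_defaults.join(','))
```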
hi @datosh - Just wanted to follow up with you to see if this last recommendation might help you or if you would like to continue discussing other alternatives? I'm open to additional conversation.
The other thing I've done in the past is create snapshots for different states of my application, which works nicely if you need to roll back to a specific version or test environment. I also sometimes repackage my snapshots as new boxes and version them so that I can create linked clones and save on provisioning time. This is especially nice when your code lives on a shared mount using a plugin like vagrant-sshfs, so snapshots can serve as different states of your test environment without sacrificing your latest dev work.