`stack level too deep` when trying to use 'ansible_local' to provision
Closed this issue · 5 comments
Hello,
I am having a strange issue. I wanted to try the new ansible_local provisioner in my LXC boxes and I ran into the following issue:
± vagrant up
Bringing machine 'default' up with 'lxc' provider...
==> default: Importing base box 'jessie64-lxc'...
==> default: Setting up mount entries for shared folders...
default: /vagrant => /home/krtek/Repositories/liip/rawbot-virtualization/playground
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
/opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/loggerfactory.rb:76:in `module_eval': stack level too deep (SystemStackError)
from /opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/loggerfactory.rb:76:in `set_log'
from /opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/loggerfactory.rb:39:in `block in toggle_methods'
from /opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/loggerfactory.rb:35:in `each'
from /opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/loggerfactory.rb:35:in `toggle_methods'
from /opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/loggerfactory.rb:19:in `define_methods'
from /opt/vagrant/embedded/gems/gems/log4r-1.1.10/lib/log4r/logger.rb:37:in `initialize'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/lib/vagrant/action/warden.rb:21:in `new'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/lib/vagrant/action/warden.rb:21:in `initialize'
... 1554 levels...
from /opt/vagrant/embedded/lib/ruby/2.2.0/timeout.rb:32:in `catch'
from /opt/vagrant/embedded/lib/ruby/2.2.0/timeout.rb:103:in `timeout'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/communicators/ssh/communicator.rb:42:in `wait_for_ready'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/lib/vagrant/action/builtin/wait_for_communicator.rb:16:in `block in call'
± vagrant destroy
/opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/guest.rb:40:in `initialize_dup': stack level too deep (SystemStackError)
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/guest.rb:40:in `initialize'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/guest.rb:40:in `new'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/guest.rb:40:in `check_path'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/guest.rb:53:in `check_path_is_a_file'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/base.rb:78:in `validate'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/provisioners/ansible/config/guest.rb:32:in `validate'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/kernel_v2/config/vm.rb:730:in `block in validate'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/kernel_v2/config/vm.rb:720:in `each'
... 8275 levels...
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/plugins/commands/destroy/command.rb:30:in `execute'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/lib/vagrant/cli.rb:42:in `execute'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/lib/vagrant/environment.rb:302:in `cli'
from /opt/vagrant/embedded/gems/gems/vagrant-1.8.1/bin/vagrant:174:in `<main>'
What I don't really understand is that the exception is raised in two different places depending on whether I do an up or a destroy. AFAIK, provision generates the same error as destroy. The file and lines are consistent across multiple runs.
FYI, here's a minimal Vagrantfile that triggers the error for me:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.require_version ">= 1.8.1"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "glenux/jessie64-lxc"
  config.vm.provision 'ansible_local' do |ansible|
    ansible.playbook = 'playbook.yml'
  end
end
I tested using the VirtualBox provider, and everything worked fine, so I am assuming the error is some sort of incompatibility between vagrant-lxc and the new provisioner.
Here is some more information about my setup:
$ vagrant plugin list
vagrant-hostmanager (1.6.1)
vagrant-lxc (1.2.1)
vagrant-share (1.1.5, system)
$ vagrant -v
Vagrant 1.8.1
$ lxc-start --version
1.0.8
Full debug log of vagrant provision: http://pastebin.com/jUmz9zpw
Any input on this? Can I help in any way?
What happens if you try bringing up the box without the ansible block in your Vagrantfile?
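The suggestion above amounts to stripping the Vagrantfile down to just the box. A minimal sketch, reusing the box name from the reporter's Vagrantfile, with the provisioner block simply omitted:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Same LXC box as before, but with no provisioner configured,
  # to check whether ansible_local itself triggers the crash.
  config.vm.box = "glenux/jessie64-lxc"
end
```

If `vagrant up` succeeds with this file, the stack overflow is tied to the ansible_local provisioner configuration rather than to the box or provider alone.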
It works fine... in fact it works great with:
- no provisioner
- the ansible provisioner
- the shell provisioner
Probably others too; the only one having the issue is the ansible_local provisioner.
I should also add that the ansible_local provisioner works great on VirtualBox; I only see the issue with the LXC provider, which is why I opened this here and not on Vagrant itself.
Same issue with the aws provider (vagrant-aws plugin). See hashicorp/vagrant#6984
It seems this was fixed on the Vagrant side in the referenced issue, so I am closing this :)