fgrehm/vagrant-lxc

could not get static ip working

ebayer opened this issue · 1 comment

On openSUSE Tumbleweed, I could not set a static IP on a CentOS 7 container. Here is the relevant info (I think):

$ lsb_release -a
LSB Version:    core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch
Distributor ID: openSUSE
Description:    openSUSE Tumbleweed
Release:        20190305
Codename:       n/a

$ sudo lxc-info --version
2.0.9

$ vagrant --version
Vagrant 2.2.2

$ vagrant plugin list
vagrant-cachier (1.2.1, global)
vagrant-libvirt (0.0.45, global)
vagrant-lxc (1.4.3, global)

Using the following Vagrantfile and starting the container with debug logging:

Vagrant.configure("2") do |config|
  config.vm.define "test1", primary: true do |test1|
    test1.vm.hostname = "test1"
    test1.vm.provider :lxc do |lxc|
      lxc.customize 'network.ipv4', '10.0.3.5/24'
    end
  end
end
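
As far as I understand (an assumption on my part, not verified against the plugin source), vagrant-lxc forwards each customize pair to lxc-start as an -s override, so the setting above should end up on the start command line roughly like this:

# Assumption: each `lxc.customize 'key', 'value'` becomes `-s lxc.key=value`:
lxc-start -d --name test1 -s lxc.network.ipv4=10.0.3.5/24
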
$ REDIR_LOG=1 LXC_START_LOG_FILE=/tmp/lxc-start.log VAGRANT_LOG=debug vagrant up --provider=lxc test1
 INFO global: Vagrant version: 2.2.2
 INFO global: Ruby version: 2.6.1
 INFO global: RubyGems version: 3.0.1
 INFO global: VAGRANT_LOG="debug"

The started container gets a DHCP-assigned IP address, not the one I specified:

$ vagrant ssh                                                                                        
Last login: Wed Mar 13 11:31:57 2019 from gateway
[vagrant@test1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
130: eth0@if131: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:c7:39:ce:31:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.155/24 brd 10.0.3.255 scope global dynamic eth0
       valid_lft 3525sec preferred_lft 3525sec
    inet6 fe80::a8c7:39ff:fece:31e2/64 scope link 
       valid_lft forever preferred_lft forever

Full debug output is attached.

vagrant-output.txt

If instead I start the container with the following config:

Vagrant.configure("2") do |config|
  config.vm.define "test1", primary: true do |test1|
    test1.vm.hostname = "test1"
    test1.vm.provider :lxc do |lxc|
      lxc.customize "network.type", "veth"
      lxc.customize "network.link", "lxcbr0"
      lxc.customize "network.ipv4", "10.0.3.5/24"
    end
  end
end

A new interface (eth1) is created inside the container with the correct IP address set, while eth0 still gets a DHCP lease:

$ vagrant ssh
Last login: Wed Mar 13 11:35:52 2019 from gateway
[vagrant@test1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:ac:18:28:62:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.208/24 brd 10.0.3.255 scope global dynamic eth0
       valid_lft 3544sec preferred_lft 3544sec
    inet6 fe80::acac:18ff:fe28:62cf/64 scope link 
       valid_lft forever preferred_lft forever
134: eth1@if135: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:7d:f2:cd:a3:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.5/24 brd 10.0.3.255 scope global eth1
       valid_lft forever preferred_lft forever

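My guess at why this happens (an assumption about the LXC 2.x config format, not something I have confirmed): each lxc.network.type line begins a new network section, so the three overrides define an additional interface instead of modifying the box's existing eth0. The effective container config would look roughly like:

# Sketch only; assumes the box config already defines eth0 on lxcbr0:
lxc.network.type = veth          # box-provided section -> eth0 (still DHCP)
lxc.network.link = lxcbr0
lxc.network.type = veth          # override starts a new section -> eth1
lxc.network.link = lxcbr0
lxc.network.ipv4 = 10.0.3.5/24   # applies to eth1 only
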
How can I get the static IP working on the primary interface?
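
For reference, one workaround I would expect to help (an untested sketch; it assumes the CentOS network scripts re-run DHCP on eth0 and override whatever LXC assigns, and that 10.0.3.1 is the default lxcbr0 gateway) is to write a static ifcfg for eth0 from a shell provisioner:

Vagrant.configure("2") do |config|
  config.vm.define "test1", primary: true do |test1|
    test1.vm.hostname = "test1"
    # Untested sketch: pin eth0 to a static address inside the guest so the
    # distro's own DHCP client cannot override it. IPADDR/GATEWAY/NETMASK
    # below are assumptions based on the default lxcbr0 setup.
    test1.vm.provision "shell", inline: <<-'SHELL'
      {
        echo "DEVICE=eth0"
        echo "ONBOOT=yes"
        echo "BOOTPROTO=none"
        echo "IPADDR=10.0.3.5"
        echo "NETMASK=255.255.255.0"
        echo "GATEWAY=10.0.3.1"
      } > /etc/sysconfig/network-scripts/ifcfg-eth0
      systemctl restart network
    SHELL
  end
end

Restarting the network service may drop the current SSH session; a subsequent vagrant reload should come up with the static address.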

Hey, sorry for the silence here, but this project is looking for maintainers 😅

As per #499, I've added the ignored label and will close this issue. Thanks for the interest in the project and LMK if you want to step up and take ownership of this project on that other issue 👋