BBVA/kvm

Clean macvtap0 / macvlan0 after container is removed?

pwFoo opened this issue · 4 comments

pwFoo commented

Is it possible to remove interfaces after the container is stopped and removed?

Tried to attach host interfaces:

docker run -p 5900:5900 -p 2222:22 -p 4444:4444 -td --name kvm --privileged -v /home/rancher/image.qcow2:/image/image.qcow2 -e AUTO_ATTACH=n -e ATTACH_IFACES=eth1 --net=host bbvainnotech/kvm:latest

The created interfaces persist after the container is removed. Will this cause problems with newly created KVM containers?

23: macvtap0@eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state LOWERLAYERDOWN group default qlen 500
    link/ether d6:e7:a3:65:21:04 brd ff:ff:ff:ff:ff:ff
24: macvlan0@eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1
    link/ether aa:16:25:51:22:11 brd ff:ff:ff:ff:ff:ff

Yes: by design the container entrypoint is not idempotent. We are not sure about implementing this feature, as it could have side effects when the container is launched with --net=host.

Thus, for now, if you want to launch the container twice, you need to remove it first.
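In the meantime, the leftover devices can also be deleted by hand on the host. An untested sketch, using the container name and interface names from your comment:

docker rm -f kvm
ip link delete macvtap0
ip link delete macvlan0

After that, the same docker run command can be launched again.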

How useful do you see this feature? In which cases do you find it a problem to destroy the container?

pwFoo commented

I have a use case that would need one or more host interfaces (an internet connection mapped to a container), but it isn't important at the moment.

Maybe it's possible to work around the need for --net=host (host interfaces) by using a docker macvlan / ipvlan network, and to use the docker interfaces instead of the host interfaces?

Haven't tested it yet... Just an idea...
Maybe it's possible to map host interfaces with the docker network macvlan driver?

pwFoo commented

Mapping a host interface to a docker macvlan network works (docker 1.12+).

docker network create -d macvlan --subnet=192.168.41.0/24 --gateway=192.168.41.1 -o parent=eth1 macvlan-eth1
docker run --rm -ti --net=macvlan-eth1 alpine sh

The alpine container gets the second IP address in the subnet, 192.168.41.2.
I connected my notebook to eth1 with the gateway IP 192.168.41.1.

Ping works fine in both directions between notebook -> docker host eth1 -> alpine container (docker network "macvlan-eth1").
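In other words, roughly (with the addresses assigned above):

# from the notebook (192.168.41.1)
ping 192.168.41.2
# from inside the alpine container (192.168.41.2)
ping 192.168.41.1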

You're right. We discussed this usage mode internally some weeks ago, and it's definitely the way to go if you have docker 1.12+ and need to connect a docker container to the underlying network.

Unfortunately we haven't tested it yet, but this container should work with no changes at all.
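A rough, untested sketch of what that invocation might look like, reusing the macvlan-eth1 network created above and the image path from the first comment (the port mappings and ATTACH_IFACES are dropped here, since the container gets its own macvlan interface):

docker run -td --name kvm --privileged -v /home/rancher/image.qcow2:/image/image.qcow2 --net=macvlan-eth1 bbvainnotech/kvm:latest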

Thanks to your comments, I have renamed the network devices that are created by the container from macvlan# / macvtap# to kvm-macvlan# / kvm-macvtap# in #d5287d6
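With that prefix, leftover devices are also easier to spot and clean up by hand. An untested sketch:

# delete any remaining kvm-* devices on the host
for dev in $(ip -o link show | awk -F': ' '{print $2}' | grep '^kvm-'); do
    ip link delete "${dev%%@*}"
done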

Feel free to reopen or create a new enhancement issue if you find that the feature described in the first comment has enough use cases to be considered in future versions.