BBVA/kvm

Exposed ports?

pwFoo opened this issue · 16 comments

pwFoo commented

Hi,

I tried to use it with exposed ports to reach the VM (-p 2222:22 -p 8080:8080 -p 5900:5900), but it doesn't work, so I can't reach or enter the VM.
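
For reference, this is roughly the kind of command I'm running (image tag and volume path are from memory, so they may differ slightly):

docker run --privileged -ti --name myVM -p 2222:22 -p 8080:8080 -p 5900:5900 -v $PWD/image.qcow2:/image/image.qcow2 bbvainnotech/kvm:alpine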

Any hints on how to reach the VM's ports from outside?

pwFoo commented

It also doesn't work with the container IP.

telnet 172.17.0.8 22
telnet: can't connect to remote host (172.17.0.8): Connection refused

telnet 172.17.0.8 5900
telnet: can't connect to remote host (172.17.0.8): Connection refused
pwFoo commented

Looks like the IP isn't bound to the VM? ip a output inside the kvm container:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether ee:a3:af:c6:06:9b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::eca3:afff:fec6:69b/64 scope link 
       valid_lft forever preferred_lft forever
3: macvlan0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1
    link/ether f2:d2:15:21:95:b2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f0d2:15ff:fe21:95b2/64 scope link 
       valid_lft forever preferred_lft forever
77: eth0@if78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.8/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:8/64 scope link 
       valid_lft forever preferred_lft forever

The container IP is bound to the eth0 interface inside the container and should be moved to the VM, right?

What the startvm entrypoint does is "steal" the container IP and hand it to the internal DHCP server (dnsmasq), which serves that IP to the VM.
It also creates a new, non-conflicting IP and assigns it to the macvlan# device. This new IP only exists so the container can serve DHCP to the guest VM; it is not reachable from outside the container, nor from the VM itself. (I can provide more details if you want.)
The eth# devices' IP addresses are removed, as they are not reachable by the VM.
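
Simplified, the flow inside the container is something like this (a sketch only, not the literal startvm code; interface names and addresses are illustrative):

ip addr del 172.17.0.8/16 dev eth0                              # free the original container IP
ip link add link eth0 name macvtap0 type macvtap mode bridge    # tap device the VM's NIC is attached to
ip link add link eth0 name macvlan0 type macvlan mode bridge    # device used only to serve DHCP to the guest
ip addr add 172.16.0.8/15 dev macvlan0                          # new non-conflicting, container-internal IP
dnsmasq --dhcp-range=172.17.0.8,172.17.0.8 --dhcp-host=<vm-mac>,,172.17.0.8,infinite   # <vm-mac> = the guest NIC's MAC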

Also, the VM image OS needs to be configured to get its IP via DHCP.

If everything is working right, you should be able to access the guest VM from the docker host using the initial docker container IP, with any method the VM allows (ssh, rdp...).

pwFoo commented

I don't know how I could verify the packet flow to the VM. It looks like the IP is just in use by the container itself and not the VM.

docker host

? (172.17.0.8) at 02:42:ac:11:00:08 [ether]  on docker0

kvm container

21:51:22.554798  In 02:42:5d:38:c6:16 ethertype IPv4 (0x0800), length 76: 172.17.0.1.37654 > 172.17.0.8.4444: Flags [S], seq 1883933213, win 29200, options [mss 1460,sackOK,TS val 7404165 ecr 0,nop,wscale 7], length 0
21:51:22.554952 Out 02:42:ac:11:00:08 ethertype IPv4 (0x0800), length 56: 172.17.0.8.4444 > 172.17.0.1.37654: Flags [R.], seq 0, ack 1883933214, win 0, length 0

kvm container

87: eth0@if88: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:08 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.8/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:8/64 scope link 
       valid_lft forever preferred_lft forever

If the IP were in use by the VM, I should see a different MAC address, right?

First things first: if you see an IP address configured on the eth0 interface and none on the macvlan device, then something failed. IMHO this could be due to the dnsmasq bug described in #4; it should already be gone with the fix I published a few minutes ago.

I have added some debug info when launching the startvm script, so you should be able to see whether the IP is being correctly configured in dnsmasq. If so, the guest OS should be getting that IP.
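
Something like this inside the container should make the state obvious (device names as in your ip a output):

ip -4 addr show dev eth0        # should show no IPv4 address once startvm has run
ip -4 addr show dev macvlan0    # should hold the new replacement IP
ps | grep dnsmasq               # its --dhcp-range/--dhcp-host should contain the original container IP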

Please try again with the latest release and write back with feedback if needed. I will also try it on my side and will close this issue if all my tests succeed.

pwFoo commented

Pulled latest again and it looks like the dnsmasq bug is fixed!

INFO: DHCP configured to serve IP 172.17.0.8/16 via macvtap0
INFO: Lauching dnsmasq                                --dhcp-range=172.17.0.8,172.17.0.8                                    --dhcp-host=4e:fd:6f:a5:5e:6c,,172.17.0.8,infinite                          --dhcp-option=option:netmask,255.255.0.0                                  --dhcp-option=option:dns-server,8.8.8.8,8.8.4.4      --dhcp-option=option:router,172.17.0.1        
INFO: Launching /usr/libexec/qemu-kvm -enable-kvm   -drive file=/image/image.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=writethrough   -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-1,bootindex=1      -machine rhel6.0.0,accel=kvm,usb=off   -nodefaults   -no-acpi   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2   -realtime mlock=off   -msg timestamp=on   -chardev pty,id=charserial0   -device isa-serial,chardev=charserial0,id=serial0   -serial stdio    -m 1024 -smp 4,sockets=4,cores=1,threads=1    -device virtio-net-pci,netdev=net0,mac=4e:fd:6f:a5:5e:6c -netdev tap,id=net0,vhost=on,fd=3 3<>/dev/macvtap0
char device redirected to /dev/pts/0 (label charserial0)
VNC server running on `::1:5900'

IP is moved to macvlan0

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether 4e:fd:6f:a5:5e:6c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4cfd:6fff:fea5:5e6c/64 scope link 
       valid_lft forever preferred_lft forever
3: macvlan0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1
    link/ether da:94:61:9e:c8:aa brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.8/15 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::d894:61ff:fe9e:c8aa/64 scope link 
       valid_lft forever preferred_lft forever
59: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::42:acff:fe11:8/64 scope link 
       valid_lft forever preferred_lft forever

Ping from host

--- 172.17.0.8 ping statistics ---
129 packets transmitted, 0 packets received, 100% packet loss

So the bug should be fixed, but I still can't reach the exposed ports / the VM IP.

pwFoo commented

Exposed ports are still a problem, both with a macvlan docker network and with AUTO_ATTACH=y via the docker bridge.

With the macvlan setup I connected a notebook to a simple alpine container:

notebook eth0 -> docker host eth1 -> docker network (driver macvlan) -> alpine container (container ip)

Ping works fine
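
For reference, the macvlan network was created with something along these lines (the subnet and network name here are placeholders):

docker network create -d macvlan -o parent=eth1 --subnet=192.168.1.0/24 --gateway=192.168.1.1 lan
docker run --rm -ti --network lan alpine sh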

Replaced the alpine container with the KVM container and the connection no longer works. The exposed ports (via the docker host IP) are not available either.
The exposed ports are also not reachable with AUTO_ATTACH=y / the docker bridge.

Is it a bug or am I missing something?

Hi,

The guest VM is probably not booting at all, because it doesn't have a VGA adapter.

We ran into this problem today with a CentOS VM, and we managed to get it working by adding some missing parameters to qemu/kvm when launching the guest VM.
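
Roughly the kind of parameters involved, purely as an illustration (the exact set we end up with may differ; see the issue referenced below):

-vga std          # give the guest a standard VGA adapter so graphical boot works
-vnc 0.0.0.0:0    # make the VNC display reachable, instead of binding it to ::1 only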

Until now we had only used this container to launch a special VM that needed nothing but a serial console, so this problem was hidden from us.

I've created issue #10 to track this new feature, which will be added in less than a day.

Stay tuned!

ahoeg commented

Tested with the latest release today. The VM seems to boot up (RAM usage increases from 0 to 725 MB), but the exposed ports / the VM via the container IP are still not reachable.

ahoeg commented

I have network problems with both the alpine- and centos-based versions:
no ping or telnet to TCP ports is possible, and neither the exposed ports nor the container / VM IP are reachable.

Any hint on how to debug it inside the container?

Hi @ahoeg

Try to configure your guest VM to use the serial port as its console. Then you will be able to access it directly from the container in interactive (-ti) mode. We have tested this method with CentOS and it works perfectly.
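
On a CentOS 7 guest, for example, that boils down to something like this inside the guest (paths and tooling vary per distro):

# /etc/default/grub: add a serial console to the kernel command line (keep the existing options in place of the ...)
GRUB_CMDLINE_LINUX="... console=tty0 console=ttyS0,115200n8"
# regenerate the grub config and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg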

Also check that your qcow is configured to get its IP via DHCP.
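
For a CentOS guest that usually means something like this in the interface config (device name depends on the guest):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes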

ahoeg commented

Hi @methadata,

The qcow is configured to get its IP via DHCP. Could you share your prepared CentOS qcow so I can test it and see how the serial configuration is done?

Hi @ahoeg,

I will prepare a lightweight cirros or similar VM to share it with you.

Meanwhile, I suggest you launch the Alpine version with the -curses flag. Unfortunately, there is no such option in the CentOS version (this is a limitation of CentOS).

Example:

docker run --privileged -ti --name myVM -v $PWD/image.qcow2:/image/image.qcow2 -e AUTO_ATTACH=yes bbvainnotech/kvm:alpine -curses
pwFoo commented

@methadata

I will prepare a lightweight cirros or similar VM to share it with you.
Great, thanks!!!

I tested it with alpine / -curses, but the output stops, I think because of a boot splash image (graphical mode)...
That's why I'd like to try another image that should work (like your prepared one).

Awesome work with this project! I love it!

ahoeg commented

Looks like the network problem is RancherOS-related.

Networking and exposed ports work fine with a CentOS docker host. A virt-builder-generated centos-7.2 image and my own qcow image boot up with the network connected. Serial output is also already set up in a virt-builder image.
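
The centos-7.2 image was built roughly like this (size and password here are just examples):

virt-builder centos-7.2 --format qcow2 --size 10G --root-password password:changeme -o image.qcow2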

It would be great to use it with RancherOS too, but I have no idea what's wrong when running your image on a RancherOS host...
I'll test centos-7.2 on my RancherOS host soon.

Hi @ahoeg, @pwFoo

I have just released an update related to this that fixes DHCP problems with some guest VMs (see #13 for further info).
I've tested it successfully with CentOS, Debian and Cirros guest VMs.

Please notice that:

  1. It doesn't work with the Alpine container.
  2. Another update, related to #14, means the disk image now needs to be at /image/image instead of /image/image.qcow2 (see the adjusted command below).
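
So a run command along the lines of the earlier example would now look something like this (illustrative):

docker run --privileged -ti --name myVM -v $PWD/image.qcow2:/image/image -e AUTO_ATTACH=yes bbvainnotech/kvm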

I will close this issue now. Feel free to reopen it if anything related to it fails.