Unable to find interface members
bartrail opened this issue · 16 comments
Actually it's just a question: when starting dlite I get the message "unable to find interface members". I have a freshly installed MBP with macOS Sierra, so I think something (dependencies) might be missing?

- dlite version in use (run `dlite --version`): dlite version 2.0.0-beta9
- expected behavior: start the virtual machine
- actual behavior: stops with this error message:

  ```
  Starting the virtual machine: ERROR!
  Unable to find interface members
  ```

- steps to reproduce: run `dlite start`
That error is caused by the service trying to create routing entries to allow direct connections to your containers. Let's do a little bit of debugging.

First, run:

```
route -n get local.docker
```

(assuming `local.docker` is the hostname you used when you installed). In the output of that you'll find a line that says `interface:` followed by something, likely `en0` or similar. Can you confirm that line exists?

If so, run:

```
ifconfig en0
```

(or whatever interface you found in the above step). The output of this command should contain a line that says `member:`, again followed by an interface name similar to `en0`. Does that line also exist?
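The two checks above can be chained into one small sketch (the hostname `local.docker` is assumed, as in the instructions; adjust if you installed under a different name):

```shell
# Look up which interface routes traffic to local.docker, then check
# whether that interface is a bridge with "member:" entries.
iface=$(route -n get local.docker | awk '/interface:/ {print $2}')
echo "local.docker routes via: $iface"
ifconfig "$iface" | grep 'member:' \
  || echo "$iface has no member: lines (it is not a bridge)"
```

On a working setup the route should point at something like `bridge100`, whose `ifconfig` output lists one or more `member:` lines.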
Thanks for the quick reply.

First, running `route -n get local.docker`, the only interface I get is `lo0`:

```
~ > route -n get local.docker
   route to: 127.0.0.1
destination: 127.0.0.1
  interface: lo0
      flags: <UP,HOST,DONE,LOCAL>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
   49152     49152         0         0         0         0     16384         0
```

Second, the output of `ifconfig lo0` looks like this, with no `member:` line:

```
~ > ifconfig lo0
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
	nd6 options=201<PERFORMNUD,DAD>
```
Oh shoot, we have to get the vm to start before those will be meaningful, I forgot. I'll need you to edit your config file at `~/.dlite/config.yaml` and change `route: true` to `route: false`. You should then be able to start the vm and run those commands again.
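If you'd rather do that edit from the terminal, a one-liner along these lines should work (the `route: true` key layout is assumed from the comment above; this uses macOS/BSD `sed` syntax, and backs the file up first in case the key differs):

```shell
# Back up the config, then flip the route option off (macOS/BSD sed -i '').
cp ~/.dlite/config.yaml ~/.dlite/config.yaml.bak
sed -i '' 's/^route: true$/route: false/' ~/.dlite/config.yaml
```

Restoring is just copying the `.bak` file back, or running the same substitution in reverse.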
The first try running `dlite start` failed with a `Conflict!` error message; after restarting my machine, dlite is starting up. Thanks!

Interestingly, the output of `ifconfig lo0` doesn't change from the above.
The interface should be different now that the vm is running; you'll have to start with the `route` command again.
Negative, it's not showing anything different.

Anyway, when I just run `ifconfig` there are plenty of adapters, and the bridges do have `member:` entries. Is docker maybe using a wrong interface?
```
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
	nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en5: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether ac:de:48:00:11:22
	inet6 fe80::aede:48ff:fe00:1122%en5 prefixlen 64 scopeid 0x4
	nd6 options=281<PERFORMNUD,INSECURE,DAD>
	media: autoselect
	status: active
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 78:4f:43:60:53:cb
	inet6 fe80::1813:a49e:fb49:9caf%en0 prefixlen 64 secured scopeid 0x5
	inet 192.168.2.108 netmask 0xffffff00 broadcast 192.168.2.255
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
en1: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
	options=60<TSO4,TSO6>
	ether 12:00:18:f8:6b:00
	media: autoselect <full-duplex>
	status: inactive
en2: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
	options=60<TSO4,TSO6>
	ether 12:00:18:f8:6b:04
	media: autoselect <full-duplex>
	status: inactive
en3: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
	options=60<TSO4,TSO6>
	ether 12:00:18:f8:6b:01
	media: autoselect <full-duplex>
	status: inactive
en4: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
	options=60<TSO4,TSO6>
	ether 12:00:18:f8:6b:05
	media: autoselect <full-duplex>
	status: inactive
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
	ether 0a:4f:43:60:53:cb
	media: autoselect
	status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
	ether 36:e4:73:4f:22:8f
	inet6 fe80::34e4:73ff:fe4f:228f%awdl0 prefixlen 64 scopeid 0xb
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 12:00:18:f8:6b:00
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	member: en1 flags=3<LEARNING,DISCOVER>
		ifmaxaddr 0 port 6 priority 0 path cost 0
	member: en2 flags=3<LEARNING,DISCOVER>
		ifmaxaddr 0 port 7 priority 0 path cost 0
	member: en3 flags=3<LEARNING,DISCOVER>
		ifmaxaddr 0 port 8 priority 0 path cost 0
	member: en4 flags=3<LEARNING,DISCOVER>
		ifmaxaddr 0 port 9 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: <unknown type>
	status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
	inet6 fe80::ad21:ef4:464c:2f7%utun0 prefixlen 64 scopeid 0xd
	nd6 options=201<PERFORMNUD,DAD>
en8: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	ether fe:17:5d:a8:9a:d9
	media: autoselect
	status: active
bridge100: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=3<RXCSUM,TXCSUM>
	ether ae:de:48:00:33:64
	inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	member: en8 flags=3<LEARNING,DISCOVER>
		ifmaxaddr 0 port 14 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
```
Edit: also, I can start a container successfully but cannot access it via `localhost:8080` (which is possible using the native docker client).
What does `route -n get local.docker` give you?
Still the same, even after dlite is started:

```
~ > route -n get local.docker
   route to: 127.0.0.1
destination: 127.0.0.1
  interface: lo0
      flags: <UP,HOST,DONE,LOCAL>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
   49152     49152         0         0         0         0     16384         0
```
Very strange; that's definitely the problem. If you ping `local.docker`, what IP does it resolve to?
Yes, it's working.
Oh wait. I had `local.docker` configured in my /etc/hosts to point to 127.0.0.1 because I'm used to that from the native client too. Now that I've commented it out, `route -n get local.docker` gives this:

```
~ > route -n get local.docker
   route to: 192.168.64.2
destination: 192.168.64.2
  interface: bridge100
      flags: <UP,HOST,DONE,LLINFO,WASCLONED,IFSCOPE,IFREF>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0         0         0         0      1500      1171
```

Edit: I think that was the issue the whole time. :/
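A stray /etc/hosts entry like that is easy to miss; a quick grep surfaces any line that would shadow dlite's DNS (hostname `local.docker` assumed):

```shell
# Print any /etc/hosts lines (with line numbers) mentioning local.docker.
grep -n 'local\.docker' /etc/hosts
```

No output means nothing in /etc/hosts is overriding the name.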
Ah ha! Looking at your `ifconfig` output above, I think if you were to turn the `route` config item back on you'd be back in business.

Your choice, though; direct container routing is by no means necessary.
Thank you very much for your help. What does the `route` option actually do? I re-enabled it and dlite starts normally now ;) Loading `local.docker:8080` works in the browser, but `localhost:8080` doesn't. I was hoping the route option somehow takes care of this.

Edit: if you're curious, it's just a simple webserver that's running.
Routing localhost to your docker vm is likely not what you want to do, and is generally a bad idea: most modern unix-based systems (including OSX) rely on localhost pointing to the actual local host, and all sorts of weird/bad things can happen. Pointing your browser to `local.docker:8080` is the better solution, honestly.

The `route` option allows you to connect directly to your containers rather than having to expose ports to the vm. For example, if you were to run an nginx container named `nginx`, like so: `docker run --name nginx nginx`, then dlite's built-in DNS server, combined with the direct routing option, would let you go to `nginx.docker` in your browser without having to expose the port or use the `local.docker` hostname for every service.
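As a hypothetical illustration of what that buys you (container name and image are examples; this assumes `route: true` is set and dlite's DNS is answering for the `.docker` TLD as described above):

```shell
# With direct routing, a container named "nginx" is reachable by name;
# no -p port publishing is needed.
name=nginx
docker run -d --name "$name" nginx
curl -s "http://${name}.docker/"
```

The same pattern works for any container: the container name becomes the hostname under the `.docker` suffix.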
It does have a few limitations right now (there's some weirdness with docker-compose; see the open issues for details), but I'm working on improving it.
Great! Happy to help.