
Flannel pod deployment failing.


Hello Team,

Greetings.
Please find below our stack setup and problem statement, and kindly provide suitable guidance.

Stack Overview
A Kubernetes master node using the Docker runtime, and a separate LXC/LXD worker node attached to the master with the help of LXE.

Problem Statement:
We are not able to successfully establish a Flannel-based pod network across this master and worker node stack. We presume the issue is related to differences in the YAML templates between the Docker and LXC runtimes.

What are we trying to do?
Our ultimate intention is to have LXC containers run on top of the CRI-O runtime. To get there, we are trying to clear smaller milestones such as the one above.

Kindly guide us on how to proceed further along this path and through this issue. Thanks.

The initial issue is that lxd uses different images than docker does. Since this project is pretty fresh, there is no help yet (like finished images ready to use). I'm not aware of an lxd remote offering community images the way docker hub does, but that would help to share images.

So LXE does not provide an LXC runtime for CRI-O to use, as of now. CRI-O explicitly handles OCI images, which LXD can't. Instead, LXE is used in place of CRI-O and attached directly to kubelet.
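For illustration, a minimal sketch of what "attached directly to kubelet" means, assuming LXE listens on a unix socket at /var/run/lxe.sock (the path is an assumption; use whatever socket your lxe process is configured with):

```
# Minimal sketch, not from this thread: kubelet pointed at LXE as a remote CRI
# runtime instead of dockershim or CRI-O. The socket path is an assumption.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/lxe.sock
# ...plus your usual kubelet flags (kubeconfig, node registration, etc.)
```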

It is not impossible to make lxd images based on OCI images, but someone needs to do it. The lxc-create and lxc-oci tools are not available when you install lxd (lxd only uses a subset of what lxc provides, intentionally, by their project goals).

So to be clear, as of now, you can't use docker images, so you can't use CRI-O, and additional work is needed. You have the following options:

  • You can make an lxd image out of an OCI image using plain lxc, making use of the lxc-create and lxc-oci functions (see the sketch after this list). After that, continue with the "lxc remote" step.
  • You make an lxd image yourself containing the binary you want. Continue with the "lxc remote" step.
  • You install your network tooling like flannel directly onto the host instead of into a container. (This is probably the fastest variant, but not so reusable for the community.)
  • You use plain docker for your network tooling. LXE is still configured as the runtime in kubelet, so you can't use dockershim or CRI-O, but that does not prevent you from running docker directly so you can benefit from its images.
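
For the first option, a rough sketch, assuming plain lxc with the lxc-oci template and its dependencies (skopeo, umoci) are installed; the image URL and all names are placeholders:

```
# Create a plain LXC container from an OCI image via the lxc-oci template
# (needs skopeo and umoci; the alpine image is just a placeholder).
lxc-create --name oci-demo --template oci -- --url docker://docker.io/library/alpine:latest

# The resulting rootfs (typically /var/lib/lxc/oci-demo/rootfs) still has to be
# packaged as an lxd image yourself: build a metadata.yaml plus rootfs tarball and
# import them with `lxc image import metadata.tar.gz rootfs.tar.gz --alias my-image`.
```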

lxc remote:
Some of the steps above require that you make the images available to your kubelet nodes running lxe. Once the image is built, you save it onto an lxd remote and configure that remote. I assume you have not provided --lxd-remote-config and the env var $LXD_CONF is empty, so the config file is probably in $HOME/.config/lxc of the user running it. To modify it, you can use lxc remote {add|list|...} as that user (which might be root anyway).
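
For illustration, a hedged sketch of that flow; the remote name, address and image alias are placeholders:

```
# On the machine holding the built image: push it to an lxd remote that
# acts as an image server.
lxc remote add image-server https://images.example.com:8443 --accept-certificate
lxc image copy local:my-image image-server: --alias my-image --public

# On each kubelet node running lxe, as the user lxe runs under (often root),
# so the remote ends up in that user's $HOME/.config/lxc:
lxc remote add image-server https://images.example.com:8443 --accept-certificate
lxc remote list
```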

Hint: You can use hostNetwork in lxe, so there is no reason not to containerize core network binaries for the kubelet host if you want.
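
A minimal sketch of such a pod, assuming kubectl access; the image reference is a placeholder, and how lxe resolves it to an lxd image/remote depends on your setup:

```
# Pod that shares the host's network namespace (hostNetwork: true).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: host-net-example
spec:
  hostNetwork: true
  containers:
  - name: tool
    image: my-image   # placeholder; lxd image alias from your configured remote
EOF
```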

Thank you @dionysius for the detailed suggestion. We will check into it and get back in case of any further queries. Thanks again!

Feel free to reopen if something is missing.