alexellis/k8s-on-raspbian

question - One node cluster - memory footprint

pierreozoux opened this issue · 1 comment

Hello!

Thanks for sharing this nice guide. I have an idea and some questions; maybe you can help me.
There is a project called YunoHost that aims to help home users self-host services like WordPress or Nextcloud on an ARM computer at home.
It is nice, but it is essentially built on bash, and as I love the k8s API, I'm wondering about the k8s memory footprint on such small devices.

In the context of a one-node cluster, what is the memory footprint of:

  • etcd ~ 22 MB, I guess
  • kube-api
  • controller
  • scheduler
  • kubelet

I think our main constraint here is memory, not CPU.
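To replace the guesses above with real numbers, one could sum the resident memory (RSS) of each control-plane process on the node. A minimal sketch, assuming the standard upstream binary names (adjust them if your distribution names the processes differently):

```shell
#!/bin/sh
# Print resident memory (RSS) per Kubernetes control-plane process, in MiB.
# A process that is not running sums to 0.0 MiB.
for proc in etcd kube-apiserver kube-controller-manager kube-scheduler kubelet; do
  ps -C "$proc" -o rss= \
    | awk -v p="$proc" '{ sum += $1 } END { printf "%-24s %6.1f MiB\n", p, sum/1024 }'
done
```

RSS overstates the true cost slightly where processes share pages, but it is close enough to decide whether a 1 GB board is viable.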

Another question: do we also need a network plugin? Or is it unnecessary, since there is only one node?

And then, what about optimization? I think there is a lot of room for improvement in this context:

  • remove a lot of unnecessary code at compile time (like the AWS integration and so on)
  • remove the scheduler, or replace it with a dummy one
  • tune the controller for home use instead of thousands of nodes and millions of pods
  • tune etcd
  • reimplement some features, like CronJobs?
  • socket activation for all the services

I'm just thinking out loud here, if you have any ideas, please share them here, and do not hesitate to close the issue once you answered!

Thanks again and have a nice day :)

Hi, thanks for your suggestion.

An RPi3 is not suitable for this task with Kubernetes (I have tried), but with Docker Swarm it is responsive and useful. The Asus Tinkerboard has 2GB RAM and can run as a single tainted master with Kubernetes, but may be a little overloaded.
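For completeness, running a single "tainted master" like this means removing the scheduling taint that kubeadm puts on control-plane nodes, so that ordinary pods can land there. The kubeadm-era command for that was:

```shell
# Allow workloads to schedule on the (only) master node.
# The trailing "-" removes the taint rather than adding it.
kubectl taint nodes --all node-role.kubernetes.io/master-
```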

Kubernetes is an event-driven distributed system designed for use in production rather than on a single-node, constrained 32-bit ARM device.

You may also want to look at k3s from @ibuildthecloud, which is a cut-down version of Kubernetes. In my experience trying it on an RPi2 it was still very slow, but that may improve in the future.

For a small footprint, you should really just use Docker Swarm or plain Docker containers run with --restart=always.
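As a concrete illustration of the restart-policy approach (the image and ports here are just an example, using Nextcloud since it was mentioned above):

```shell
# Run a container the Docker daemon restarts automatically,
# both after a crash and after a reboot -- no orchestrator needed.
docker run -d --name nextcloud --restart=always -p 8080:80 nextcloud
```

This gives you the "keep my service up" behaviour a home user wants, with only the Docker daemon's memory overhead rather than a full control plane.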

I'll close your issue but feel free to keep commenting.

Alex