Users can access information such as cpuinfo and memory usage of the host machine.
Describe the bug
Users can access host information such as CPU details and memory usage, for example via commands like `lscpu` or `top`, even though I explicitly limit the amount of memory available to the user's container to 1G.
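For illustration, reading `/proc` from inside such a container reports the node's hardware, because `/proc` is provided by the shared host kernel and is not resource-namespaced (commands are illustrative; output varies per node):

```shell
# Run inside the 1G-limited container: both commands report the
# node's totals, not the container's cgroup limit.
grep MemTotal /proc/meminfo          # node's total memory, not 1G
grep -c ^processor /proc/cpuinfo     # node's CPU count
```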
The fact that users can read host state from inside the container is dangerous, and we need to avoid it as much as possible.
Expected behavior
We need reasonable and accurate isolation that limits what users can see, to protect against container-related risks as much as possible.
Cluster provider
Minikube
Version
0.8.1.MS3
Additional information
No response
Hi, thank you for the report. A Kubernetes cluster consists of multiple Nodes, which are the physical or virtual machines the Pods/IDEs will run on in the end.
Multiple pods may be started on the same Node if there is enough capacity. When you use commands like `top` or `lscpu`, they will show information from the node. This is the same behavior as starting your IDE directly with Docker on your machine.
The resource requests are used by Kubernetes to schedule pods onto nodes, so that each pod can get enough CPU/memory. If a pod uses more memory than its limit, it will be killed (OOMKilled), and CPU usage above the limit is throttled, so the limits are enforced at runtime.
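The requests and limits described above are set per container in the Pod spec; a minimal sketch of a 1G-limited IDE pod (all names and the image are illustrative, not Theia Cloud's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ide-pod                    # illustrative name
spec:
  containers:
    - name: ide
      image: example/ide:latest    # placeholder image
      resources:
        requests:                  # used by the scheduler to pick a node
          cpu: "1"
          memory: 1Gi
        limits:                    # enforced at runtime (OOM kill / CPU throttling)
          cpu: "1"
          memory: 1Gi
```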
So I am not sure we can do much from Theia Cloud side.
When setting up your production cluster, you could use Nodes that can only fit one IDE to ensure separation.
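One way to approximate the "one IDE per node" setup mentioned above, without resizing the nodes themselves, is a required pod anti-affinity rule so that two IDE pods never land on the same node; a sketch, assuming the IDE pods carry a hypothetical `app: theia-ide` label:

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: theia-ide                  # hypothetical label on all IDE pods
          topologyKey: kubernetes.io/hostname # at most one such pod per node
```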
Hi @jfaltermeier,
Thanks for your reply!
Multiple pods may be started on the same Node
You are right. In Kubernetes, multiple Pods can run on a single node; they are scheduled automatically based on the resources available on each node and the resources the Pods request.
But when a Theia-Cloud service provider sells IDE Pods as a commodity, priced by their CPU/MEM specifications, the provider expects users to see inside the container exactly the CPU/MEM resources they have purchased.
This would also extend Theia-Cloud's usefulness as a business model for service providers hosting their own applications.
The issue mentioned above also exists in Gitpod's IDE containers. They charge for the CPU/MEM resource specifications of the IDE container as a commodity. However, even with the corresponding resources set, the CPU/MEM resources visible from within the container are those of the host machine. I am unsure whether this is an oversight or whether they believe this approach carries no risk.
When setting up your production cluster, you could use Nodes that can only fit one IDE to ensure separation.
To achieve this, I know we could size each Kubernetes node to match the resources required by a single IDE Pod, but this contradicts the principle of node sharing in Kubernetes. Kubernetes itself cannot dynamically provision such nodes, as this exceeds its capabilities; it would require additional frameworks such as OpenStack or Proxmox, introducing extra maintenance and technical costs.
I am looking for a method to limit the CPU/MEM resources visible from within a container, similar to how users can only see the container's own processes and file system.
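As a side note on where the enforced limit is actually visible: the cgroup filesystem inside the container does expose the real limit, even though `/proc` does not. A sketch, assuming cgroup v2 with a cgroup v1 fallback (it must run inside the limited container to show that container's limit):

```shell
# Read the memory limit Kubernetes actually enforces for this cgroup.
# /proc/meminfo shows the node's memory, but the cgroup files reflect
# the container's own limit (e.g. 1073741824 bytes for a 1G limit).
if [ -r /sys/fs/cgroup/memory.max ]; then
  cat /sys/fs/cgroup/memory.max                     # cgroup v2
else
  cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1
fi
```

Container-aware runtimes (for example, recent JVMs) consult these cgroup files instead of `/proc`, which is why they size themselves to the container limit rather than the host.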
This issue is stale because it has been open for 180 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.