Configure capacity of the worker nodes #877
Can you elaborate a bit more?
@aojea Doing some scheduler work and would like to consider the CPU and memory capacities of each node. I could use labels for this, but was wondering if it is possible to do this when the cluster is set up? Also, if labels are the only option, would it be possible to tag each node with particular labels from the initialisation script?
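If it helps in the meantime, a minimal sketch of labeling nodes from a script after cluster creation (the node name and label keys here are illustrative, not a kind convention):

# Tag each node with the capacities a custom scheduler should consider
kubectl label node kind-worker capacity.example.com/cpu=4
kubectl label node kind-worker capacity.example.com/memory=8Gi

A scheduler extension could then read these labels instead of the kubelet-reported capacity.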
Well, that seems interesting. @BenTheElder what do you think?
I might be wrong, but I don't think setting resource upper bounds will impact the current cgroup architecture. I do see performance issues with starving the node of resources, though. I'm thinking about the UX side of things too; Docker resource constraints are pretty granular. Maybe we only expose some subset of the constraints, or maybe abstract them all together?
Feel free to try this out, but IIRC this doesn't work. Similarly, if swap is enabled on the host, memory limits won't work on your pods either.
I'm working on decoupling us from docker's command line. When that is complete and we experiment again with support for ignite and other backends, some of those can actually limit things, because while they are based around running container images they use VMs :)
/assign |
That of course works but ... does it actually limit everything on the node? Have you deployed a pod trying to use more? What does kubelet report?
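For reference, one way to see what kubelet reports (the node name is assumed to be the default kind-worker):

# Full node description, including the Capacity and Allocatable sections
kubectl describe node kind-worker
# Or just those two fields:
kubectl get node kind-worker -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'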
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
# the control plane node
- role: control-plane
- role: worker
  constraints:
    memory: "100m"
    cpu: "1"

I modified kind to use this directly and tried to use 1.5g of memory: the pod takes more than 4 mins to be created, so it doesn't seem to be a hard limit. Maybe we should tweak something on cgroups, but checking inside the node it really does seem to be limiting the memory.
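For example, a quick check from the host (assuming the default node container name kind-worker and cgroup v1; under cgroup v2 the file is /sys/fs/cgroup/memory.max):

# Memory limit docker applied to the node container, in bytes
docker exec kind-worker cat /sys/fs/cgroup/memory/memory.limit_in_bytes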
Looking at the kernel docs it seems that this is throttling: https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt , check the block I/O stats.
Do we want this? Or is the idea to fail if it overcommits?
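For example, to look at those stats inside the node (cgroup v1 paths from the linked doc; node name assumed to be kind-worker):

# Bytes serviced per device, split by read/write, under the blkio throttle policy
docker exec kind-worker cat /sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes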
I think that there are several options. You can set the limit manually as explained here, but using container constraints (cgroups) is only valid for limiting the resources: kubelet keeps using the whole host memory and CPU resources for its calculations.
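For example, a sketch of the manual approach with plain docker (values are illustrative; this adjusts an existing kind node container's cgroup limits after creation):

# Limit the node container to 2 GiB of memory and 2 CPUs
docker update --memory 2g --memory-swap 2g --cpus 2 kind-worker

As noted above, kubelet will still report the full host capacity, so this only caps actual usage.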
Hello @aojea,
That sounds nice, do you think it has a chance of being approved?
I hope 🤷🏻♂️
Sadly no re: cAdvisor. This doesn't leave us with spectacular options. Maybe we can trick kubelet into reading our own "vfs" or something (like lxcfs?) 😬. Semi-related: #2318's solution.
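In the meantime, one workaround (a sketch, not necessarily #2318's approach; it assumes a kind version whose kubeadmConfigPatches accept a KubeletConfiguration, e.g. the kind.x-k8s.io/v1alpha4 config API) is to shrink what kubelet reports as allocatable via systemReserved:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: worker
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    systemReserved:
      cpu: "1"
      memory: "2Gi"

This caps what the scheduler sees as allocatable on the node, but it doesn't enforce an actual usage limit the way cgroup constraints would.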
@palade Did you mean we can limit a node's CPU and memory capacities provided to the Kubernetes cluster by assigning some labels to the node? Which labels did you use? Can you give me an example? Thanks a lot.
Any progress?
Would it be possible to set the capacity of the worker nodes when the cluster is created?