Question on cluster modeling #2462
Not really. I mean, real laptops are genuinely memory-bound, and the mechanisms in Kubernetes where something would be triggered by memory limits require real memory usage by the applications in the pods. See also: #877
Overhead from cluster offerings will vary, but if you assume no overhead (or calculate it elsewhere) you really just need to know the resource usage of your applications. I don't think you're going to get an accurate take on that by running them in a local cluster anyhow, without the real application load. In the future, please just file the issue and let us triage it as best we can; we're all drowning in notifications already.
I'm sorry @BenTheElder, I did not want to make noise.
Yeah, that is why I asked whether it is possible to trick kubelets (or whatever reports pods' resource consumption) into thinking that certain pods consume that much in resources. That way, perhaps, it would be possible to build such models. The model I wrote in minizinc tries to "pack" those "pods", but who knows how close it is to real life ...
That sounds more like something along the lines of kubemark https://kubernetes.io/blog/2016/07/update-on-kubernetes-for-windows-server-containers/ ; you need a simulator for that. Think of Kind more as an emulator: you can't emulate without using the resources.
If you require resource limits configured on pods, you can estimate the necessary capacity by processing the manifests and combining an estimate of how many pods you need with the limits they set. But I don't think you can reasonably simulate this with a local cluster regardless of feature set; I don't think having a cluster is useful for this purpose unless you intend to actually run a real production workload under realistic load and see how its resource usage behaves.
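A minimal sketch of the manifest-processing idea above, in Python. All names and the input shape are illustrative (pod specs as plain dicts rather than parsed YAML), and only a subset of Kubernetes quantity suffixes is handled; this is not a real tool.

```python
# Hypothetical sketch: estimate aggregate capacity needed by summing the
# declared resource limits across pod specs. Inputs are illustrative.

def parse_cpu(value: str) -> float:
    """Convert a Kubernetes CPU quantity ('500m' or '2') to cores."""
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)

def parse_memory(value: str) -> int:
    """Convert a Kubernetes memory quantity to bytes (subset of suffixes)."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(float(value[:-2]) * factor)
    return int(value)

def estimate_capacity(pod_specs: list, replicas: int = 1) -> dict:
    """Sum declared limits across all containers, scaled by a replica count."""
    total_cpu = 0.0
    total_mem = 0
    for spec in pod_specs:
        for container in spec["containers"]:
            limits = container["resources"]["limits"]
            total_cpu += parse_cpu(limits["cpu"])
            total_mem += parse_memory(limits["memory"])
    return {"cpu_cores": total_cpu * replicas,
            "memory_bytes": total_mem * replicas}

# Example: two single-container pod specs, each run with 3 replicas.
pods = [
    {"containers": [{"resources": {"limits": {"cpu": "500m", "memory": "256Mi"}}}]},
    {"containers": [{"resources": {"limits": {"cpu": "2", "memory": "1Gi"}}}]},
]
print(estimate_capacity(pods, replicas=3))  # 3 * (0.5 + 2) = 7.5 cores
```

This gives an upper bound from limits only; requests, DaemonSets, and system overhead would shift the real number.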
I don't see the purpose of this; kubelet's behavior around resources is pretty simple and predictable (e.g. once limits are reached, no more pods will be admitted). What's not so trivially predictable is the resources actually needed by a production deployment under load.
Hello colleagues.
One of the typical tasks when building Kubernetes-based platforms in the cloud is cost estimation. It seems that at the moment there is no simple way to estimate the cost until you run the full cluster. Many factors influence the size of the target cluster: the number of pods, their sizes, the cloud provider's VM types, their sizes and prices, and so on. This leads to a big space of possible variations of the target solution, which is hard to assess.
What I was looking for is a tool to model a cluster and roughly calculate its minimum cost. I believe it would be beneficial if the model could take multiple VM sizes and their prices, pod sizes, and some load characteristics of the pods, and output the minimum viable cluster size with the selected VM type that could fit all the apps.
For that, I'm trying to put together a simple minizinc model that could do something like that.
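The packing idea behind such a model can be sketched in Python with a first-fit-decreasing heuristic: pack pods (by CPU and memory) onto nodes of one VM type at a time and report the cheapest type. This is only a greedy approximation of what a constraint solver like MiniZinc would optimize, and all VM names, sizes, and prices below are made up for illustration.

```python
# Hypothetical sketch: greedy bin packing of pods onto VMs to estimate
# the cheapest cluster. First-fit-decreasing is a heuristic, not optimal.

def nodes_needed(pods, node_cpu, node_mem):
    """First-fit-decreasing: pods are (cpu, mem) tuples; returns node count."""
    nodes = []  # each node tracks [free_cpu, free_mem]
    for cpu, mem in sorted(pods, reverse=True):
        for node in nodes:
            if node[0] >= cpu and node[1] >= mem:
                node[0] -= cpu
                node[1] -= mem
                break
        else:
            nodes.append([node_cpu - cpu, node_mem - mem])
    return len(nodes)

def cheapest_cluster(pods, vm_types):
    """vm_types: name -> (cpu, mem_gib, hourly_price). Returns (name, count, cost)."""
    best = None
    for name, (cpu, mem, price) in vm_types.items():
        if cpu < max(p[0] for p in pods) or mem < max(p[1] for p in pods):
            continue  # skip VM types too small for the biggest pod
        count = nodes_needed(pods, cpu, mem)
        cost = count * price
        if best is None or cost < best[2]:
            best = (name, count, cost)
    return best

# Illustrative workload: 20 pods as (cores, GiB) tuples, and made-up VM types.
pods = [(0.5, 1), (1.0, 2), (2.0, 4), (0.25, 0.5)] * 5
vm_types = {"small": (2, 4, 0.05), "medium": (4, 8, 0.11), "large": (8, 16, 0.19)}
print(cheapest_cluster(pods, vm_types))
```

A real model would also need to account for kubelet/system reservations per node and per-node pod limits, which shrink the usable capacity of each VM.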
However, I believe the best way to model this is to use Kubernetes itself. But since it is a model, it should not run real loads, only pretend to run them without really consuming resources. I mean an ability to define and run some "virtual nodes" of virtually "any size" and some "virtual pods" that report using a certain amount of RAM and CPU on those "virtual nodes" but in fact fit on a single laptop. Then let Kubernetes do the orchestration, collect metrics, and make decisions based on that.
My question is: is it possible to achieve this with Kind? Or with something else?
@BenTheElder , @munnerz, @aojea, @amwat