
VPA - Document the current recommendation algorithm #2747

Open
bskiba opened this issue Jan 17, 2020 · 30 comments
Assignees
Labels
area/vertical-pod-autoscaler kind/documentation Categorizes issue or PR as related to documentation. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@bskiba
Member

bskiba commented Jan 17, 2020

Document

  • how recommendations are calculated out of raw samples for CPU and Memory.
  • when it is reasonable to expect a stable recommendation for a new workload

Please note that the VPA recommendation algorithm is not part of the API and is subject to change without notice.

@bskiba bskiba self-assigned this Jan 17, 2020
@bskiba bskiba added area/vertical-pod-autoscaler kind/documentation Categorizes issue or PR as related to documentation. labels Jan 17, 2020
@hochuenw-dd

@bskiba any updates on this?

@yashbhutwala
Contributor

This would be awesome!! How can I help?

@yashbhutwala
Contributor

yashbhutwala commented Apr 19, 2020

I've been digging around the code for a bit; this is what I understand so far. Please correct me where I'm wrong 😃

To answer this:

how recommendations are calculated out of raw samples for CPU and Memory.

Recommendations are calculated using a decaying histogram of weighted samples from the metrics server, where newer samples are assigned higher weights; older samples decay and therefore have less and less influence on the recommendation. The CPU recommendation is based on the 90th percentile of all CPU samples, and the memory recommendation on the 90th percentile of daily peak usage over an 8-day window.

when it is reasonable to expect a stable recommendation for a new workload

8 days of history are used for the recommendation (1 memory usage sample per day). Prometheus can be used as a history provider in this calculation. By default, VPA collects data about all controllers, so when new VPA objects are created they can already provide stable recommendations (unless you specify memory-saver=true). All active VPA recommendations are checkpointed.

Please note that the VPA recommendation algorithm is not part of the API and is subject to change without notice.

saw this in the code here 🙃

...I'm not sure if it's possible to get a "stable" recommendation before 8 days...

@djjayeeta

@yashbhutwala Great Summary!!

I am getting a huge upper bound for my recommendation at startup and am trying to understand the behavior.

Below is the VPA object.

Name:         xxx-vpa
Namespace:    xxxx
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"autoscaling.k8s.io/v1beta2","kind":"VerticalPodAutoscaler","metadata":{"annotations":{},"name":"kube-apiserver-vpa","namesp...
API Version:  autoscaling.k8s.io/v1
Kind:         VerticalPodAutoscaler
Metadata:
  Creation Timestamp:  2020-04-30T20:39:43Z
  Generation:          165
  Resource Version:    1683082
  Self Link:           xxxx
  UID:                 <some_number>
Spec:
  Resource Policy:
    Container Policies:
      Container Name:        c_name
      Controlled Resources:  cpu memory
      Min Allowed:
        Cpu:           100m
        Memory:        400Mi
      Container Name:  c_name2
      Mode:            Off
  Target Ref:
    API Version:  apps/v1beta2
    Kind:         StatefulSet
    Name:         name
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2020-04-30T20:40:07Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  c_name
      Lower Bound:
        Cpu:     100m
        Memory:  400Mi
      Target:
        Cpu:     125m
        Memory:  903203073
      Uncapped Target:
        Cpu:     125m
        Memory:  903203073
      Upper Bound:
        Cpu:     2967m
        Memory:  16855113438
Events:  <none>

I don't want to set an upper limit in the VPA object. I don't have checkpoints, as history is loaded from the Prometheus server. But I noticed this huge upper bound (numbers may be slightly different) irrespective of whether I load from a checkpoint or from Prometheus. Can you tell me why the algorithm gives such a high upper bound?

Also, there are no OOM events in VPA recommender logs.

I did the same experiment without the Prometheus server and got similar numbers. I checked the VPA checkpoint:
"status": {
  "cpuHistogram": {
    "bucketWeights": {
      "1": 10000,
      "2": 2891,
      "3": 1009,
      "4": 686,
      "5": 240,
      "8": 41,
      "9": 5436,
      "10": 728,
      "11": 1088,
      "12": 121
    },
    "referenceTimestamp": "2020-05-01T00:00:00Z",
    "totalWeight": 51.24105164098685
  },
  "firstSampleStart": "2020-04-30T20:39:30Z",
  "lastSampleStart": "2020-04-30T22:11:02Z",
  "lastUpdateTime": null,
  "memoryHistogram": {
    "referenceTimestamp": "2020-05-02T00:00:00Z"
  },
  "totalSamplesCount": 552,
  "version": "v3"
}

The surprising part is that there is no memory histogram. Is this because it only appears after 24 hours?

I deleted the VPA object and the checkpoint and then recreated the VPA object, but I am still getting huge upper bounds 2 hours after startup. How is it recommending memory without any histogram?

Can you please answer this?

@yashbhutwala
Contributor

yashbhutwala commented May 1, 2020

@djjayeeta good questions!! I'm not an expert here, but as far as I understand it, the most important value for you to look at is Target. This is the recommendation for what to set the requests to. There is currently no limit recommendation given by VPA.

The lower and upper bounds are only meant to be used by the VPA updater: pods whose requests fall within that range are allowed to keep running and are not evicted. For the upper bound, I suspect this is capped by default at the node's capacity (in your case 16Gi). Just FYI, Uncapped Target gives the recommendation before applying constraints specified in the VPA spec, such as min or max.

With this in mind, in your case, the target of 125m cpu and 0.85Gi (903203073 bytes) mem seems reasonable.

The surprising case is there is no memory histogram. Is this because it will only appear after 24 hr?

Yes, it samples the peak per day

@bskiba
Member Author

bskiba commented May 8, 2020

@yashbhutwala, thanks for taking the time to answer here, your answer is very precise. 👍
If you would like, could you add this answer to the FAQ? I think it would prove very useful to other users.

@avmohan

avmohan commented Jul 23, 2020

@djjayeeta the high upper bound at startup would be due to confidence factor scaling. With more data, it will move closer to the 95th percentile.

upperBoundEstimator = WithConfidenceMultiplier(1.0, 1.0, upperBoundEstimator)

lifespanInDays := float64(s.LastSampleStart.Sub(s.FirstSampleStart)) / float64(time.Hour*24)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 21, 2020
@yashbhutwala
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 21, 2020
@ghost

ghost commented Nov 7, 2020

Hi, can I work on this? I think there is enough information added in the comments: #2747 (comment) and #2747 (comment).

I can rephrase it and add in the FAQ: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 5, 2021
@bskiba
Member Author

bskiba commented Mar 1, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 1, 2021
@Duske

Duske commented Apr 27, 2021

Hi @yashbhutwala, is it still the case that the VPA only recommends resource requests, not limits?

@yashbhutwala
Contributor

@Duske yes.

@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@gosoon

gosoon commented May 2, 2022

I cannot find any documentation of the Recommender component's algorithm in the project, and it is still a little complicated to learn it from the code. Is there no documentation of the algorithm in the community?

For example:

  • how the histogram is initialized and how it is designed
  • how the recommended value is calculated

@ashvinsharma

I agree. I would like to know what heuristics the algorithm applies to ensure that correct target values are produced for different services with different usage patterns.

@alvaroaleman
Member

/reopen

I don't think this is solved and it is very hard to find information about this

@k8s-ci-robot
Contributor

@alvaroaleman: Reopened this issue.

In response to this:

/reopen

I don't think this is solved and it is very hard to find information about this

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this May 12, 2023
@alvaroaleman
Member

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels May 12, 2023
@ManuelMueller1st

ManuelMueller1st commented Dec 29, 2023

I am wondering why the VPA recommender uses targetMemoryPeaksPercentile := 0.9.
Shouldn't it use the maximum observed memory usage to avoid OOM kills?

https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/recommender/logic/recommender.go#L108

@pierreozoux

@ManuelMueller1st it is a request, not a limit.

@mwarkentin

@pierreozoux many people (including @thockin as a general rule, with possible caveats of course) recommend setting memory requests == limits, as memory is an incompressible resource and you may over-allocate memory on a host, resulting in random OOMs anyway.

https://home.robusta.dev/blog/kubernetes-memory-limit


@mwarkentin

I also think a high-level overview of how the algorithm works would be super helpful for making people comfortable with using VPA, particularly for memory recommendations.

@iamzili
Contributor

iamzili commented Jan 17, 2025

@mwarkentin, if you want to maintain a 1:1 ratio between memory requests and limits, you can achieve this by setting both memory requests and limits to the same value when you first deploy the pods:

The target contains the recommended optimal amount of resources, calculated by the Recommender, and is used to set resource requests. Its base is the 90th percentile, with an additional safety margin (default 15%, configurable via the --recommendation-margin-fraction flag). The Recommender doesn't compute the limits ... instead, the limits are set based on the request-to-limit ratio we define when we first deploy the pods with resource requests and limits. For example, when you deploy a Deployment with 2 pods and set both memory requests and limits to 100MiB, if the memory request is updated to 200MiB (because of a new recommended value), the limit is automatically adjusted to 200MiB as well.

Additionally, it's crucial to note that the target, lower bound, and upper bound reach maximum accuracy once there are 8 days of historical data.
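The proportional limit scaling described above can be sketched like this (plain int64 byte counts for illustration; the actual admission controller does the equivalent arithmetic on resource.Quantity values):

```go
package main

import "fmt"

// proportionalLimit keeps the original limit-to-request ratio when the
// request is updated to a new recommended value.
func proportionalLimit(origRequest, origLimit, newRequest int64) int64 {
	return newRequest * origLimit / origRequest
}

func main() {
	const miB = 1024 * 1024
	// Deployed with requests == limits == 100MiB (1:1): a new 200MiB
	// request drags the limit to 200MiB as well.
	fmt.Println(proportionalLimit(100*miB, 100*miB, 200*miB) / miB)
	// Deployed with a 1:2 ratio: a new 150MiB request yields a 300MiB limit.
	fmt.Println(proportionalLimit(100*miB, 200*miB, 150*miB) / miB)
}
```

So whatever ratio you deploy with is the ratio VPA preserves; deploying with requests == limits is what keeps them equal after updates.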

@sftim
Contributor

sftim commented Feb 11, 2025

BTW, it would be great to add VPA to the official documentation

@iamzili
Contributor

iamzili commented Feb 12, 2025

BTW, it would be great to add VPA to the official documentation

I did some debugging, mainly in the Recommender, because I was interested in how the CPU/memory samples are aggregated and how the recommendations are calculated. I posted my findings on my personal webpage; those who are interested can find the URL in my GitHub profile.
