VPA: deleting a VPA and recreating it still gives a recommendation #4682
Please add more information, preferably step-by-step reproduction instructions, and explain what happened vs. what you expected.
Hi, what I did earlier is the following:
Actually, it became even weirder today: after the weekend I checked the recommendation again and also checked the top pods:
while the VPA describe output shows this:
The target memory is 105 MB, but there has been no load on this pod for two days. Now I deleted the VPA, scaled my deployment to 0, scaled it back to 1, recreated the VPA, and the result is the following:
I see quite a big difference between top and the recommendation, so what is the right way to "reset" the VPA and start a new measurement? We just want to get a recommendation when the load is 100 tps, but it now seems that I cannot reproduce the same result twice. Is it explained anywhere how the recommendation is calculated?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I didn't have time to look into the problem, but I think it would be good to do that.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/remove-lifecycle stale
/remove-lifecycle rotten
/lifecycle frozen
Curious about this too. My pod has changed, and so have its resource needs, but deleting and recreating the VPA still uses the historical data.
By default, historical data is stored in a `VerticalPodAutoscalerCheckpoint` object per VPA; deleting that checkpoint clears the stored history.
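For example (a sketch with placeholder names: `my-namespace` and the checkpoint name are illustrative), the stored history can be inspected and cleared like this:

```
# List stored VPA checkpoints in the namespace of the VPA object
kubectl get verticalpodautoscalercheckpoints -n my-namespace

# Delete the checkpoint for a specific VPA to discard its recorded history
kubectl delete verticalpodautoscalercheckpoint my-vpa-checkpoint -n my-namespace
```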
Did not work for me. I had to delete the VPA itself (and before that, disable Argo CD auto-sync to prevent instant recreation), then restart the recommender.
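Assuming the default install (a `vpa-recommender` deployment in `kube-system`; adjust if yours differs), the restart can be done with a rollout:

```
# Restart the recommender so it drops its in-memory state
kubectl rollout restart deployment/vpa-recommender -n kube-system
```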
Here's what I do (based on the others' answers):
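Putting the workarounds from this thread together, a full reset might look like the sketch below (all names are placeholders, and the recommender deployment name and namespace assume the default install):

```
# 1. Pause anything (e.g. Argo CD auto-sync) that would instantly recreate the VPA

# 2. Delete the VPA object itself
kubectl delete vpa my-vpa -n my-namespace

# 3. Delete its stored history (checkpoint)
kubectl delete verticalpodautoscalercheckpoint my-vpa-checkpoint -n my-namespace

# 4. Restart the recommender to drop its in-memory state
kubectl rollout restart deployment/vpa-recommender -n kube-system

# 5. Recreate the VPA and let it gather fresh samples
kubectl apply -f my-vpa.yaml
```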
Which component are you using?:
vertical-pod-autoscaler
What version of the component are you using?:
Component version: 0.10.0
What k8s version are you using (`kubectl version`)?: