VPA Helm Chart #3068

Open
techdragon opened this issue Apr 19, 2020 · 45 comments
Labels: area/vertical-pod-autoscaler, help wanted, lifecycle/rotten

Comments

@techdragon

It would be very useful to have a Helm chart for deploying the Vertical Pod Autoscaler. At the moment it's one of the few components I can't deploy with any kind of idempotent/config-management toolchain (other than using shell execution, of course 😄).

If there is a recommended chart, it should be documented somewhere; if not, could someone more familiar with the install scripting look at whether there are any blockers to making a Helm chart?
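For context, here is a minimal sketch of the difference, assuming the repo's existing hack scripts; the chart name and repo in the second step are hypothetical, since no official chart exists yet:

```sh
# Today: imperative, script-based install from a checkout of this repo
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler
./hack/vpa-up.sh

# With a chart: a declarative install/upgrade that config-management
# tooling can drive idempotently (chart coordinates are hypothetical)
helm upgrade --install vpa example-repo/vertical-pod-autoscaler \
  --namespace vpa --create-namespace
```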

@bskiba added the help wanted and area/vertical-pod-autoscaler labels Apr 20, 2020
@dbirks

dbirks commented Apr 22, 2020

I've noticed a chart by @sebastien-prudhomme that he may want to contribute:
https://github.com/cowboysysop/charts/tree/master/charts/vertical-pod-autoscaler
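For anyone who wants to try it, installing that chart should look roughly like this, assuming it is published to the cowboysysop Helm repository (verify the repo URL against the chart's README):

```sh
helm repo add cowboysysop https://cowboysysop.github.io/charts/
helm repo update
helm install my-vpa cowboysysop/vertical-pod-autoscaler \
  --namespace vpa --create-namespace
```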

@jkbschmid

Any update on this issue? Is a Helm chart, or support for any other deployment tool, planned or generally desired?

@sebastien-prudhomme

@jkbschmid I'm the author of the chart mentioned above.

If you look at the cluster-autoscaler Helm chart, it's maintained by the Helm community, not the Kubernetes organization.

My feeling is that Kubernetes developers don't want to take care of application packaging. They just provide raw YAML, the same way some open source products just provide source code without DEB or RPM packages.

I can understand that, because some people, like me, prefer Helm, others prefer Kustomize, and so on.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 24, 2020
@yashbhutwala
Contributor

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Aug 24, 2020
@sudermanjr

Found this issue after I created my own chart today. It seems very similar to the other one mentioned here. We will continue to maintain ours for now, since it is a component needed by another tool of ours.

FairwindsOps/charts#358
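For reference, installing the Fairwinds chart should look roughly like this, assuming the repo URL and chart name from Fairwinds' public chart repository (check the chart's README to confirm):

```sh
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm repo update
helm install vpa fairwinds-stable/vpa \
  --namespace vpa --create-namespace
```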

@mkjmkumar

/remove-lifecycle stale

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 22, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 21, 2021
@techdragon
Author

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Feb 3, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 4, 2021
@techdragon
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label May 5, 2021
@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 3, 2021
@techdragon
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Aug 10, 2021
@jonkerj

jonkerj commented Sep 7, 2021

We (Equinix Managed Services NL) have rolled our own chart for VPA and would love to submit it for inclusion upstream. What do the devs think about this?

@rabidscorpio

@jonkerj Did anything happen with this? Did you submit your chart? I'd love to see something official adopted.

@jonkerj

jonkerj commented Nov 23, 2021

No, it went dead after my message on 7 Sep. If the VPA devs would like a PR, we'd be glad to submit ours.

We have mixed experience with putting a lot of effort into polishing something "internal" for PR/publication, so I'd like confirmation from the devs before we put time into it (which we'd love to do).

@fredgate

Maybe @bskiba has an answer.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

@borats: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@imageschool

Still no confirmation...?!

@rileyai-dev

@jbartosik @kgolab @bskiba A project already exists (https://github.com/cowboysysop/charts/tree/master/charts/vertical-pod-autoscaler) and AFAIK the owner is willing to contribute to its integration into your repo. @jonkerj is willing to submit a chart. All we need from the maintainers is an approval or at least an explanation...

It's been 2.5 years and still nothing! Can't we, at least, have a message from the team about this?

@jbartosik
Collaborator

@rileyai-dev I'm happy to accept contributions, but I don't have the capacity to maintain Helm charts.

@martinnirtl

Hi @sebastien-prudhomme!
I just read the history of this issue (including your comment) and wonder where you stand on moving the chart to this repository and contributing to it here. I think it would gain much more traction here, and the community is definitely waiting for a Helm chart for VPA 😃

@mrclrchtr

mrclrchtr commented May 21, 2024

I landed here because I was looking for the easiest way to install VPA. Would you rather use https://github.com/cowboysysop/charts/tree/master/charts/vertical-pod-autoscaler or https://github.com/FairwindsOps/charts/tree/master/stable/vpa?

It would be really great if there were an official chart.

@nikimanoledaki
Contributor

nikimanoledaki commented Aug 20, 2024

@sebastien-prudhomme, with 55+ 👍s, it seems that the community would really appreciate this contribution! Would you be willing to donate the chart that you have built to this project? That way we can all help add best practices, etc.

Alternatively, @sebastien-prudhomme, would you accept another contributor taking the initiative to do this with the chart that you created and are maintaining, as offered by @jonkerj?

@techdragon or a maintainer - would it be possible to reopen this issue, please?

@techdragon
Author

/reopen

@k8s-ci-robot
Contributor

@techdragon: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot reopened this Aug 20, 2024
@techdragon
Author

I really hate auto-close bots... glad it was just closed, not locked.

@nikimanoledaki
Contributor

nikimanoledaki commented Aug 20, 2024

Thank you @techdragon! 🎉 It would be fantastic to have an official, up-to-date Helm chart with best practices, maintained by the community, similar to the cluster-autoscaler one - https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler

@sebastien-prudhomme, if you would prefer to not go ahead with this for any reason, that would be okay as well. Please let us know so that we know our options and whether we need to look into alternatives. Thank you! ☺️

@marcin-wlodarczyk

@jonkerj is this quality VPA chart already published?

@sebastien-prudhomme

@nikimanoledaki feel free to use my chart to build an official chart.

@nikimanoledaki
Contributor

Thank you @sebastien-prudhomme for offering to contribute the chart 🌟

Since @sebastien-prudhomme gave the green light - @jonkerj, does your offer still stand? 😄

@MattSurabian

MattSurabian commented Sep 19, 2024

Been a while since @jonkerj was active on this thread, but in case they're not interested in taking up this effort, I'd be happy to do so. I have a branch started on my own autoscaler fork, but it seems the biggest constraint is going to be reconciling how the deployment assets are managed today in this repo vs. how they were being managed in @sebastien-prudhomme's chart repo, which I imagine is the reason this effort has languished.

While it's possible to almost drop the old chart into the charts folder and call this done (so tempting!), it's worth noting that for cluster-autoscaler, Helm is the first-class citizen and all deployment assets are managed at the root /charts level. Since VPA hasn't had a chart, once one is added, something needs to be done with the hack and deploy folders; otherwise there will be two locations for maintaining manifest assets, which is a nightmare.

I think the priority for this effort should be not totally blowing up the workflow of the existing VPA project developers, while also ensuring minimal/no values-file changes for folks who have been on the unofficial community chart. Some of the existing hack scripts could be modified to accomplish things via Helm, which might be a start on that front. Success seems easy to measure, since the output of ./vertical-pod-autoscaler/hack/vpa-process-yamls.sh print should be able to be compared against helm template calls to ensure correctness.
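A minimal sketch of that correctness check (the chart path is hypothetical until a chart actually lands in this repo; both sides would likely need normalization, e.g. sorting and stripping comments, before a raw diff is clean):

```sh
# Render manifests both ways and compare them
diff \
  <(./vertical-pod-autoscaler/hack/vpa-process-yamls.sh print) \
  <(helm template vpa ./charts/vertical-pod-autoscaler)
```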

Smaller considerations for manifest assets are things like how, in the existing chart repo, RBAC resources are managed per component, while the existing VPA deploy logic lumps them all into one file (vpa-rbac). Personally, I like the way the chart has things broken out; it seems desirable for long-term maintenance and clarity. There's also the management of the base CRDs to consider.

@nikimanoledaki, curious to hear your and others' reactions to the above. Happy to help with the implementation effort, but I want to make sure we start in the right direction and that I don't step on anyone's toes.

@mohag

mohag commented Dec 30, 2024

> I think the priority for this effort should be not totally blowing up the workflow of the existing VPA project developers, while also ensuring minimal/no values-file changes for folks who have been on the unofficial community chart. Some of the existing hack scripts could be modified to accomplish things via Helm, which might be a start on that front. Success seems easy to measure, since the output of ./vertical-pod-autoscaler/hack/vpa-process-yamls.sh print should be able to be compared against helm template calls to ensure correctness.

It might make sense to get it to match and then use helm template to generate the YAMLs from that script... (I'm not sure how the maintainers feel about centralising the workflow on Helm, though; it is commonly done, e.g. for Calico.)
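As a rough sketch of what that could look like, with the script delegating manifest generation to helm template (the chart path and function name here are hypothetical, since no in-repo chart exists yet):

```sh
# Hypothetical refactor of vpa-process-yamls.sh: render YAML from the chart
# instead of concatenating static manifests from the deploy/ folder
print_manifests() {
  helm template vpa ./charts/vertical-pod-autoscaler --namespace kube-system
}

# "print" keeps its current behavior; "apply" reuses the same render
case "${1:-print}" in
  print) print_manifests ;;
  apply) print_manifests | kubectl apply -f - ;;
esac
```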

(This is mainly a comment to keep the bot from closing the issue)
/remove-lifecycle rotten

@sftim
Contributor

sftim commented Jan 15, 2025

A datapoint: the Kubernetes project doesn't routinely provide Helm charts for other components.

@sftim
Contributor

sftim commented Jan 15, 2025

/remove-lifecycle rotten

@jonkerj

jonkerj commented Jan 15, 2025

I've been zoned out of discussion for far too long, sorry for that.

> @jonkerj is this quality VPA chart already published?

Only internally within our company

> Thank you @sebastien-prudhomme for offering to contribute the chart 🌟
>
> Since @sebastien-prudhomme gave the green light - @jonkerj, does your offer still stand? 😄

VPA used to be a key ingredient in one of our products, but that product's future has taken a wild detour and the team structure has been rearranged a bit, reducing our capacity to maintain external projects. Combined with the pace of decision making/discussion in this project (years have passed since our offer), this makes us withdraw our offer, I'm afraid.
