RFC: move decision-making of desired VM size to VM monitor #8111

Open

wants to merge 1 commit into main
Conversation

@hlinnaka hlinnaka commented Jun 19, 2024

@hlinnaka hlinnaka requested review from a team, sharnoff, kelvich, problame and stradig and removed request for a team June 19, 2024 13:50
hlinnaka added a commit to neondatabase/autoscaling that referenced this pull request Jun 19, 2024
The new protocol message allows the vm-monitor to specify the desired
size of the VM directly. With that, the agent doesn't need the metrics
anymore; it simply tries to fulfill the vm-monitor's request.

This is the autoscaler agent implementation of the RFC I proposed
here: neondatabase/neon#8111. In order to use
the new API, see the corresponding VM monitor changes at:
https://github.com/neondatabase/neon/tree/heikki/wip-autoscale-api
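To make the shape of the change concrete, here is a minimal sketch of a monitor-to-agent message carrying the desired size directly. The type name `ScaleRequest` comes from the review discussion below, but the fields, the compute-unit granularity, and the clamping behavior are assumptions for illustration; the actual definitions live in the linked branches.

```rust
// Hypothetical sketch, not the actual protocol definition from the PR.
#[derive(Debug, Clone, PartialEq)]
pub struct ScaleRequest {
    /// Desired number of compute units, as decided by the vm-monitor.
    pub desired_cu: u32,
}

/// Presumably the agent would clamp the monitor's wish to the configured
/// min/max bounds before acting on it (assumed behavior, for illustration).
pub fn clamp_to_bounds(desired: u32, min: u32, max: u32) -> u32 {
    desired.clamp(min, max)
}

fn main() {
    let req = ScaleRequest { desired_cu: 10 };
    // With bounds of 1..=8 CU, a wish for 10 is granted as 8.
    println!("granting {} CU", clamp_to_bounds(req.desired_cu, 1, 8));
}
```

The key point is that the metrics stay inside the VM; only the conclusion (the desired size) crosses the protocol boundary.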
hlinnaka added a commit that referenced this pull request Jun 19, 2024
This is the VM monitor implementation of the RFC at
#8111.

I tried to keep the user-visible behavior unchanged from what we have
today. Improving the autoscaling algorithm is a separate topic; the
point of this work is just to move the algorithm from the autoscaler
agent to the VM monitor. That lays the groundwork for improving it
later, based on more metrics and signals inside the VM.

Some notable changes:

- I removed all the cgroup managing stuff. Instead of polling the
  cgroup memory threshold, this polls the overall system memory usage.

- The scaling algorithm is based on a sliding window of load average and
  memory usage over the last minute. I'm not sure how close that is to
  the algorithm used by the autoscaler agent; I couldn't find a
  description of exactly what algorithm is used there. I think this is
  close, but if not, it can be changed to match the agent's current
  algorithm more closely. I copied the LoadAverageFractionTarget
  and MemoryUsageFractionTarget settings from the autoscaler agent, with
  the defaults I found in the repo, but I'm not sure whether we use
  different settings in production.

- I also didn't fully understand how the memory history logging in the
  VM monitor, which was used to trigger upscaling, worked. There is only
  one memory scaling codepath now, based on the max over a 1-minute
  sliding window.
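The sliding-window sizing described above can be sketched roughly as follows. This is illustrative only: the sample cadence, the window handling, and the 0.9 / 0.75 fraction targets are assumptions, not necessarily the agent's production defaults.

```rust
// Illustrative sketch of "max over a 1-minute sliding window" sizing.

/// Max over the samples currently in the window.
pub fn window_max(samples: &[f64]) -> f64 {
    samples.iter().fold(f64::MIN, |acc, &s| acc.max(s))
}

/// CPUs needed so that load_avg / cpus stays near the fraction target
/// (the agent's LoadAverageFractionTarget knob).
pub fn goal_cpus(peak_load: f64, load_fraction_target: f64) -> f64 {
    peak_load / load_fraction_target
}

/// Memory needed so that usage / total stays near the fraction target
/// (the agent's MemoryUsageFractionTarget knob).
pub fn goal_mem_bytes(peak_usage: f64, mem_fraction_target: f64) -> f64 {
    peak_usage / mem_fraction_target
}

fn main() {
    // Last minute of load-average samples (made-up values).
    let load_window = [1.2, 2.5, 3.0, 2.8];
    let peak = window_max(&load_window);
    // 0.9 is an assumed target, not a confirmed production setting.
    println!("peak load {:.1} -> want {:.2} CPUs", peak, goal_cpus(peak, 0.9));
}
```

Using the window max (rather than the latest sample) makes downscaling lag behind a load spike by up to a minute, which matches the conservative behavior described in the bullet list.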
@problame (Contributor) left a comment


+1 on the high-level idea that the workload should request the compute size, not an external observer.

I'm missing details on the ScaleRequest semantics. Is it a synchronous call? Is it just a "would be nice to have but until you give it to me, I will work with existing resources"? Is the response to the ScaleRequest an estimate for how long it's going to take until the upscaling is complete?

@hlinnaka (Contributor, Author) replied:

+1 on the high-level idea that the workload should request the compute size, not an external observer.

I'm missing details on the ScaleRequest semantics. Is it a synchronous call? Is it just a "would be nice to have but until you give it to me, I will work with existing resources"? Is the response to the ScaleRequest an estimate for how long it's going to take until the upscaling is complete?

It's "would be nice to have but until you give it to me, I will work with existing resources". The agent doesn't send any response to the ScaleRequest. If the ScaleRequest results in upscaling or downscaling, however, the agent will send a DownScaleRequest or UpscaleNotification to the VM monitor, just like it does today when it decides to perform an upscale or downscale.

I'm not sure that's the best protocol, but I think it's the path of least resistance, because it's very close to how the current UpscaleRequest message works.
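The fire-and-forget semantics described in the reply could be sketched like this. The message names `DownScaleRequest` and `UpscaleNotification` are taken from the comment above; their payloads and the monitor's exact reactions are assumptions for illustration.

```rust
// Illustrative sketch of the flow: the monitor sends ScaleRequest and keeps
// running on its current resources; no direct response is expected. If the
// agent acts on the wish, the monitor later sees one of these messages.
#[derive(Debug, PartialEq)]
pub enum AgentMessage {
    DownScaleRequest { cpu: u32, mem_mib: u64 },
    UpscaleNotification { cpu: u32, mem_mib: u64 },
}

/// The monitor's reaction to each agent message (assumed, for illustration).
pub fn handle(msg: &AgentMessage) -> &'static str {
    match msg {
        AgentMessage::DownScaleRequest { .. } => "release resources, then ack",
        AgentMessage::UpscaleNotification { .. } => "start using new resources",
    }
}

fn main() {
    let msg = AgentMessage::UpscaleNotification { cpu: 8, mem_mib: 32 * 1024 };
    println!("{}", handle(&msg));
}
```

Because the ScaleRequest itself gets no reply, the monitor must keep working with its current size until one of these messages arrives, exactly as described above.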


3228 tests run: 3111 passed, 0 failed, 117 skipped (full report)


Code coverage* (full report)

  • functions: 32.4% (6848 of 21165 functions)
  • lines: 49.7% (53410 of 107360 lines)

* collected from Rust tests only


The comment gets automatically updated with the latest test results
8d8c728 at 2024-06-19T15:40:45.689Z :recycle:
