
[clusteragent/autoscaling] Defer autoscaling stack startup until first DPA #50305

Open
davidor wants to merge 6 commits into main from davidor/contp-1632-autoscaling-lazy-start

Conversation

@davidor (Member) commented May 4, 2026

What does this PR do?

The goal of this PR is to allow enabling workload autoscaling without extra cost when it's not in use.

Right now, when autoscaling is enabled, the kubeapiserver workloadmeta collector starts a pod reflector among other things. In large clusters, this can use a lot of memory. This happens even if no DPA is deployed.

We want to avoid this memory usage when no DPAs are deployed.

This will let us enable workload autoscaling by default without extra cost when it's not in use. Users who want it will be able to create DPAs directly, without having to enable the option in the Cluster Agent.

This PR does not flip the autoscaling.workload.enabled default to true. That will be done in a separate PR so it can be reverted independently if needed.

Describe how you validated your changes

Unit tests plus tests on a local kind cluster.

For the kind tests, I used kwok to simulate a large number of pods so the memory impact of the pod reflector would be measurable.

First, I deployed with autoscaling disabled to get a memory baseline. Then I deployed with autoscaling enabled but no DPAs, and checked that memory usage stayed close to the baseline. Finally, I created a DPA and checked that memory usage went up as expected.

@davidor davidor added this to the 7.80.0 milestone May 4, 2026
@davidor davidor added the qa/done QA done before merge and regressions are covered by tests label May 4, 2026
@dd-octo-sts dd-octo-sts Bot added the internal Identify a non-fork PR label May 4, 2026
@davidor davidor added the changelog/no-changelog No changelog entry needed label May 4, 2026
@dd-octo-sts dd-octo-sts Bot added the team/container-platform The Container Platform Team label May 4, 2026
@github-actions github-actions Bot added the long review PR is complex, plan time to review it label May 4, 2026
dd-octo-sts Bot (Contributor) commented May 4, 2026

Go Package Import Differences

Baseline: 4bb5357
Comparison: 4a2acf7

| binary | os | arch | change |
| --- | --- | --- | --- |
| agent | linux | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |
| agent | linux | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |
| agent | windows | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |
| agent | darwin | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |
| agent | darwin | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |
| cluster-agent | linux | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |
| cluster-agent | linux | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate |

datadog-official Bot (Contributor) commented May 4, 2026

🎯 Code Coverage (details)
Patch Coverage: 37.61%
Overall Coverage: 50.26% (-0.00%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: 4a2acf7 | Docs | Datadog PR Page | Give us feedback!

dd-octo-sts Bot (Contributor) commented May 4, 2026

Files inventory check summary

File checks results against ancestor 4bb5357e:

Results for datadog-agent_7.80.0~devel.git.474.4a2acf7.pipeline.111694927-1_amd64.deb:

No change detected

dd-octo-sts Bot (Contributor) commented May 4, 2026

Static quality checks

✅ Please find below the results from static quality gates
Comparison made with ancestor 4bb5357
📊 Static Quality Gates Dashboard
🔗 SQG Job

Successful checks

Info

Quality gate Change Size (prev → curr → max)
agent_deb_amd64 +4.6 KiB (0.00% increase) 740.963 → 740.968 → 750.310
agent_deb_amd64_fips +4.6 KiB (0.00% increase) 699.151 → 699.156 → 702.690
agent_rpm_amd64 +4.6 KiB (0.00% increase) 740.947 → 740.952 → 750.280
agent_rpm_amd64_fips +4.6 KiB (0.00% increase) 699.135 → 699.140 → 702.670
agent_rpm_arm64 +8.03 KiB (0.00% increase) 719.018 → 719.026 → 724.050
agent_rpm_arm64_fips +8.03 KiB (0.00% increase) 680.293 → 680.301 → 684.460
agent_suse_amd64 +4.6 KiB (0.00% increase) 740.947 → 740.952 → 750.280
agent_suse_amd64_fips +4.6 KiB (0.00% increase) 699.135 → 699.140 → 702.670
agent_suse_arm64 +8.03 KiB (0.00% increase) 719.018 → 719.026 → 724.050
agent_suse_arm64_fips +8.03 KiB (0.00% increase) 680.293 → 680.301 → 684.460
docker_agent_amd64 +4.59 KiB (0.00% increase) 801.343 → 801.347 → 805.870
docker_agent_arm64 +8.04 KiB (0.00% increase) 804.242 → 804.250 → 809.730
docker_agent_jmx_amd64 +4.6 KiB (0.00% increase) 992.262 → 992.267 → 996.590
docker_agent_jmx_arm64 +8.03 KiB (0.00% increase) 983.941 → 983.949 → 989.410
docker_cluster_agent_amd64 +12.12 KiB (0.01% increase) 206.603 → 206.615 → 207.600
17 successful checks with minimal change (< 2 KiB)
Quality gate Current Size
agent_heroku_amd64 309.076 MiB
docker_cluster_agent_arm64 220.634 MiB
docker_cws_instrumentation_amd64 7.142 MiB
docker_cws_instrumentation_arm64 6.689 MiB
docker_host_profiler_amd64 301.106 MiB
docker_host_profiler_arm64 312.620 MiB
docker_dogstatsd_amd64 39.370 MiB
docker_dogstatsd_arm64 37.628 MiB
dogstatsd_deb_amd64 30.028 MiB
dogstatsd_deb_arm64 28.169 MiB
dogstatsd_rpm_amd64 30.028 MiB
dogstatsd_suse_amd64 30.028 MiB
iot_agent_deb_amd64 44.458 MiB
iot_agent_deb_arm64 41.439 MiB
iot_agent_deb_armhf 42.179 MiB
iot_agent_rpm_amd64 44.459 MiB
iot_agent_suse_amd64 44.459 MiB
On-wire sizes (compressed)
Quality gate Change Size (prev → curr → max)
agent_deb_amd64 -40.04 KiB (0.02% reduction) 175.290 → 175.251 → 179.160
agent_deb_amd64_fips +20.47 KiB (0.01% increase) 167.006 → 167.026 → 174.440
agent_heroku_amd64 neutral 74.959 MiB → 80.310
agent_rpm_amd64 +58.68 KiB (0.03% increase) 177.288 → 177.345 → 182.080
agent_rpm_amd64_fips +41.11 KiB (0.02% increase) 168.356 → 168.397 → 174.140
agent_rpm_arm64 +20.63 KiB (0.01% increase) 159.386 → 159.406 → 163.610
agent_rpm_arm64_fips +41.11 KiB (0.03% increase) 151.711 → 151.751 → 156.850
agent_suse_amd64 +58.68 KiB (0.03% increase) 177.288 → 177.345 → 182.080
agent_suse_amd64_fips +41.11 KiB (0.02% increase) 168.356 → 168.397 → 174.140
agent_suse_arm64 +20.63 KiB (0.01% increase) 159.386 → 159.406 → 163.610
agent_suse_arm64_fips +41.11 KiB (0.03% increase) 151.711 → 151.751 → 156.850
docker_agent_amd64 neutral 267.708 MiB → 272.990
docker_agent_arm64 +3.72 KiB (0.00% increase) 254.708 → 254.712 → 261.470
docker_agent_jmx_amd64 -4.26 KiB (0.00% reduction) 336.352 → 336.348 → 341.610
docker_agent_jmx_arm64 +3.28 KiB (0.00% increase) 319.350 → 319.353 → 326.050
docker_cluster_agent_amd64 neutral 72.427 MiB → 73.460
docker_cluster_agent_arm64 neutral 67.882 MiB → 68.680
docker_cws_instrumentation_amd64 neutral 2.999 MiB → 3.330
docker_cws_instrumentation_arm64 neutral 2.729 MiB → 3.090
docker_host_profiler_amd64 neutral 110.753 MiB → 125.600
docker_host_profiler_arm64 neutral 105.076 MiB → 120.000
docker_dogstatsd_amd64 neutral 15.237 MiB → 15.870
docker_dogstatsd_arm64 neutral 14.558 MiB → 14.890
dogstatsd_deb_amd64 neutral 7.941 MiB → 8.830
dogstatsd_deb_arm64 neutral 6.826 MiB → 7.750
dogstatsd_rpm_amd64 neutral 7.952 MiB → 8.840
dogstatsd_suse_amd64 neutral 7.952 MiB → 8.840
iot_agent_deb_amd64 neutral 11.703 MiB → 13.210
iot_agent_deb_arm64 neutral 9.997 MiB → 11.620
iot_agent_deb_armhf +3.25 KiB (0.03% increase) 10.207 → 10.210 → 11.780
iot_agent_rpm_amd64 +3.07 KiB (0.03% increase) 11.718 → 11.721 → 13.230
iot_agent_suse_amd64 +3.07 KiB (0.03% increase) 11.718 → 11.721 → 13.230

cit-pr-commenter-54b7da Bot commented May 4, 2026

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: d257c4ca-3535-4200-9d98-b3ec4f5c840d

Baseline: 4bb5357
Comparison: 4a2acf7
Diff

Optimization Goals: ✅ No significant changes detected

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

perf experiment goal Δ mean % Δ mean % CI trials links
docker_containers_cpu % cpu utilization -1.53 [-4.52, +1.45] 1 Logs

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
quality_gate_metrics_logs memory utilization +0.93 [+0.68, +1.19] 1 Logs bounds checks dashboard
otlp_ingest_logs memory utilization +0.87 [+0.77, +0.97] 1 Logs
quality_gate_idle_all_features memory utilization +0.46 [+0.41, +0.50] 1 Logs bounds checks dashboard
ddot_metrics_sum_delta memory utilization +0.28 [+0.10, +0.46] 1 Logs
ddot_metrics_sum_cumulativetodelta_exporter memory utilization +0.16 [-0.07, +0.40] 1 Logs
ddot_logs memory utilization +0.14 [+0.08, +0.20] 1 Logs
uds_dogstatsd_20mb_12k_contexts_20_senders memory utilization +0.12 [+0.07, +0.18] 1 Logs
ddot_metrics memory utilization +0.02 [-0.18, +0.22] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput +0.01 [-0.10, +0.11] 1 Logs
file_to_blackhole_1000ms_latency egress throughput -0.01 [-0.47, +0.44] 1 Logs
file_to_blackhole_0ms_latency egress throughput -0.02 [-0.54, +0.49] 1 Logs
uds_dogstatsd_to_api_v3 ingress throughput -0.02 [-0.22, +0.18] 1 Logs
uds_dogstatsd_to_api ingress throughput -0.03 [-0.23, +0.17] 1 Logs
file_to_blackhole_500ms_latency egress throughput -0.03 [-0.44, +0.37] 1 Logs
docker_containers_memory memory utilization -0.05 [-0.15, +0.05] 1 Logs
file_to_blackhole_100ms_latency egress throughput -0.05 [-0.19, +0.09] 1 Logs
ddot_metrics_sum_cumulative memory utilization -0.11 [-0.27, +0.05] 1 Logs
quality_gate_idle memory utilization -0.14 [-0.19, -0.09] 1 Logs bounds checks dashboard
quality_gate_logs % cpu utilization -0.15 [-1.13, +0.82] 1 Logs bounds checks dashboard
otlp_ingest_metrics memory utilization -0.19 [-0.34, -0.03] 1 Logs
file_tree memory utilization -0.20 [-0.25, -0.15] 1 Logs
tcp_syslog_to_blackhole ingress throughput -1.11 [-1.31, -0.92] 1 Logs
docker_containers_cpu % cpu utilization -1.53 [-4.52, +1.45] 1 Logs

Bounds Checks: ✅ Passed

perf experiment bounds_check_name replicates_passed observed_value links
docker_containers_cpu simple_check_run 10/10 716 ≥ 26
docker_containers_memory memory_usage 10/10 241.90MiB ≤ 370MiB
docker_containers_memory simple_check_run 10/10 734 ≥ 26
file_to_blackhole_0ms_latency memory_usage 10/10 0.16GiB ≤ 1.20GiB
file_to_blackhole_0ms_latency missed_bytes 10/10 0B = 0B
file_to_blackhole_1000ms_latency memory_usage 10/10 0.20GiB ≤ 1.20GiB
file_to_blackhole_1000ms_latency missed_bytes 10/10 0B = 0B
file_to_blackhole_100ms_latency memory_usage 10/10 0.17GiB ≤ 1.20GiB
file_to_blackhole_100ms_latency missed_bytes 10/10 0B = 0B
file_to_blackhole_500ms_latency memory_usage 10/10 0.18GiB ≤ 1.20GiB
file_to_blackhole_500ms_latency missed_bytes 10/10 0B = 0B
quality_gate_idle intake_connections 10/10 3 ≤ 4 bounds checks dashboard
quality_gate_idle memory_usage 10/10 143.50MiB ≤ 147MiB bounds checks dashboard
quality_gate_idle_all_features intake_connections 10/10 3 ≤ 4 bounds checks dashboard
quality_gate_idle_all_features memory_usage 10/10 472.27MiB ≤ 495MiB bounds checks dashboard
quality_gate_logs intake_connections 10/10 4 ≤ 6 bounds checks dashboard
quality_gate_logs memory_usage 10/10 178.56MiB ≤ 195MiB bounds checks dashboard
quality_gate_logs missed_bytes 10/10 0B = 0B bounds checks dashboard
quality_gate_metrics_logs cpu_usage 10/10 351.57 ≤ 2000 bounds checks dashboard
quality_gate_metrics_logs intake_connections 10/10 4 ≤ 6 bounds checks dashboard
quality_gate_metrics_logs memory_usage 10/10 389.38MiB ≤ 430MiB bounds checks dashboard
quality_gate_metrics_logs missed_bytes 10/10 0B = 0B bounds checks dashboard

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.

@davidor davidor changed the title [clusteragent/autoscaling] Defer pod reflector startup until first DPA [clusteragent/autoscaling] Defer autoscaling stack startup until first DPA May 4, 2026
@davidor davidor force-pushed the davidor/contp-1632-autoscaling-lazy-start branch from 2e6d3d3 to 6592584 on May 5, 2026 at 07:11
@davidor davidor marked this pull request as ready for review May 5, 2026 07:49
@davidor davidor requested review from a team as code owners May 5, 2026 07:49
Comment on lines +128 to +133
w.patcherMutex.RLock()
p := w.patcher
w.patcherMutex.RUnlock()
if p == nil {
return false, nil
}
Contributor:

minor: if I understand correctly here we check if the webhook is essentially active. We can move this check earlier into the WebhookFunc such that we avoid serializing and deserializing admission payload until patcher is set.

Member Author:

Yes. I fixed this 👍

}

// SetPatcher installs the PodPatcher used to apply recommendations.
func (w *Webhook) SetPatcher(p workload.PodPatcher) {
Contributor:

Since you are doing a runtime check for the webhook enablement, I think you would need to set isEnabled=true unconditionally.

Member Author:

I think the current check is correct.
Although, in the future if we flip autoscaling.workload.enabled to true by default, it would be equivalent to what you mentioned.

gate *autoscalinggate.Gate,
) error {
enable := func(_ any) { gate.Enable() }
handlers := cache.ResourceEventHandlerFuncs{AddFunc: enable}
Contributor:

minor: it's worth checking whether initial load calls add function or update function. Also I guess we could shut down informers after the gate is enabled.

Member Author:

Initial lists call the handlers defined. We cannot shut down the informers, because the autoscaling controller needs them.


// WaitForEnable blocks until Enable is called or ctx is cancelled. Returns true
// if Enable was called, false if ctx was cancelled first.
func (g *Gate) WaitForEnable(ctx context.Context) bool {
Contributor:

nit: alternatively you could make it return context error to simplify the call sites and error logging from

if !gate.WaitForEnable(ctx) || ctx.Err() != nil {
	return
}

to

if err := gate.WaitForEnable(ctx); err != nil {
    log.Errorf("...%v", err)
	return
}

Member Author:

I left this one as-is. The reason is that not all context cancellations are errors (for example when DCA is shutting down) so I don't think we should always log.

Comment on lines +692 to +697
for _, webhook := range webhooks {
if aw, ok := webhook.(*admissionautoscaling.Webhook); ok {
autoscalingWebhook = aw
break
}
}
Contributor:

nit: here you could introduce admissionautoscaling.GetWebhook([]Webhook) *admissionautoscaling.Webhook helper to extract auto-scaling webhook and use it in place where it's needed:

if config.GetBool("autoscaling.workload.enabled") {
  autoscalingWebhook := admissionautoscaling.GetWebhook(webhooks)
  ...
}

Member Author:

Extracting the helper to the admissionautoscaling package is not possible because it would create a circular dependency.

I extracted the helper to the same file, to clean this part a bit.

Comment on lines +219 to +234
gateWired := c.autoscalingGate != nil
needsGateSync := gateWired && c.config.GetBool("autoscaling.workload.enabled")

// Pod store handling
if shouldStartPodStoreEagerly(c.config, gateWired) {
reflector, store := newPodStore(ctx, wlmetaStore, c.config, client)
objectStores = append(objectStores, store)
go reflector.Run(ctx.Done())
if needsGateSync {
go c.markPodCollectionSyncedWhenReady(ctx, store)
}
} else if needsGateSync {
// Defer starting the pod reflector until the first DatadogPodAutoscaler
// or DatadogPodAutoscalerClusterProfile has been deployed.
go c.startPodStoreOnGate(ctx, wlmetaStore, client, newPodStore)
}
Contributor:

Here I find the logic hard to follow: inside shouldStartPodStoreEagerly, gateWired effectively combines into (c.config.GetBool("autoscaling.workload.enabled") && c.autoscalingGate == nil), while needsGateSync is (c.config.GetBool("autoscaling.workload.enabled") && c.autoscalingGate != nil), and both appear across the branches.
I think it should be possible to refactor these conditions to simplify this.

Contributor:

I am wondering if we can come up with something like EnabledGate (enabled by default) in case c.autoscalingGate is nil and then avoid branching at all

gate := c.autoscalingGate
if gate == nil {
    gate = autoscalinggate.NewEnabledGate()
}
...
// use gate which could delay pod store or create it right away

Member Author:

I agree this part was a bit hard to follow.
I've rewritten it a little to make it simpler to understand.

} else if needsGateSync {
// Defer starting the pod reflector until the first DatadogPodAutoscaler
// or DatadogPodAutoscalerClusterProfile has been deployed.
go c.startPodStoreOnGate(ctx, wlmetaStore, client, newPodStore)
Contributor:

Here we do not add the created store to objectStores; is that correct?

Member Author:

Yes, it's correct. objectStores is only used for startup readiness. When the store is started lazily, we don't want startup to wait for it.

@davidor davidor force-pushed the davidor/contp-1632-autoscaling-lazy-start branch from 6592584 to 4a2acf7 on May 6, 2026 at 10:20
@davidor (Member Author) commented May 6, 2026

@AlexanderYastrebov thanks for the review. I addressed your comments.


Labels

changelog/no-changelog No changelog entry needed internal Identify a non-fork PR long review PR is complex, plan time to review it qa/done QA done before merge and regressions are covered by tests team/container-integrations team/container-platform The Container Platform Team
