[autoscaling] HPA migration controller for DPA workload autoscaler #50210
clamoriniere wants to merge 3 commits into main
🎯 Code Coverage — Commit SHA: 79ae686
Files inventory check summary
File checks results against ancestor 9d5f741f. Results for datadog-agent_7.80.0~devel.git.502.79ae686.pipeline.111614420-1_amd64.deb: no change detected.
Static quality checks
✅ Please find below the results from the static quality gates.
Successful checks
30 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector Results
Metrics dashboard — Baseline: 9d5f741
Optimization Goals: ✅ No significant changes detected
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +1.98 | [-0.99, +4.94] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +1.57 | [+0.60, +2.54] | 1 | Logs bounds checks dashboard |
| ➖ | docker_containers_memory | memory utilization | +1.16 | [+1.03, +1.30] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.68 | [+0.63, +0.73] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.26 | [+0.07, +0.44] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | +0.23 | [-0.01, +0.48] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_logs | memory utilization | +0.15 | [+0.09, +0.20] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.08 | [-0.07, +0.24] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.04 | [-0.36, +0.44] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.01 | [-0.08, +0.11] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.20, +0.20] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | -0.01 | [-0.20, +0.18] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.04 | [-0.48, +0.41] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.06 | [-0.62, +0.51] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.06 | [-0.25, +0.13] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.06 | [-0.11, -0.02] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.08 | [-0.21, +0.06] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.14 | [-0.18, -0.10] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.31 | [-0.55, -0.07] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.32 | [-0.37, -0.27] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -0.37 | [-0.47, -0.27] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.53 | [-0.73, -0.32] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.56 | [-0.72, -0.41] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | observed_value | links |
|---|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | 721 ≥ 26 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | 246.33MiB ≤ 370MiB | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | 696 ≥ 26 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | 0.16GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_0ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | 0.20GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_1000ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | 0.17GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_100ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | 0.18GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_500ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | 141.95MiB ≤ 147MiB | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | 471.57MiB ≤ 495MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | 178.09MiB ≤ 195MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | 354.24 ≤ 2000 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | 3 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | 380.18MiB ≤ 430MiB | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we classify a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between the baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
Force-pushed from 8da5acd to bbbf9b2.
Force-pushed from 05fe08e to 315ade0.
Adds a new HPA migration controller that detects existing HPAs for a DPA target, imports their configuration (metrics, replica bounds, custom queries) into the DPA spec as objectives/constraints, neutralises the HPA by setting its replica count to a very high value, and adds an HPAMigrationFinalizer to the DPA. On DPA deletion the finalizer handler restores the HPA to its original state before completing the deletion.

Two correctness fixes are included:

- Preserve HPA-imported objectives and constraints across profile template updates: when a ClusterProfile template hash changes, UpdateFromProfile() replaces the full DPA spec with the profile-derived one (which has no objectives). RestoreHPAImportedSpec() is now called before writing the updated spec to Kubernetes so the one-shot-imported fields survive.
- Resolve %%tag_kube_cluster_name%% and %%env_VAR%% placeholders in DatadogMetric queries before storing them in the DPA spec. The Datadog backend cannot resolve cluster-agent-side template variables, so ResolveMetricQuery() (exported from externalmetrics/model) is called at import time inside resolveExternalMetricConfig().

Assisted-by: Claude:claude-sonnet-4-6
Force-pushed from 315ade0 to c4a69bd.
…heck logic

Two bugs prevented the HPA admission webhook from working correctly during HPA migration:

1. apiVersions hardcoded to ["v1"] — the MutatingWebhookConfiguration rule only targeted autoscaling/v1, so the API server never forwarded autoscaling/v2 HPA UPDATE requests to the webhook. Fixed by introducing an optional webhookWithResourceAPIVersions interface; HPAWebhook returns ["v1","v2"].
2. Webhook blocked the cluster-agent's own disableHPA patch — the admission handler was checking only the incoming object for the managed-by-dpa annotation. Since disableHPA adds the annotation in the same patch, the webhook saw the annotation, treated it as an external edit, and reverted the spec — leaving the HPA with no selectPolicy: Disabled. Fixed by checking BOTH oldObject and object: disableHPA (old has no annotation) and restoreHPA (new has no annotation) pass through; external edits (annotation on both sides) are correctly reverted.

MatchConditions (CEL) were removed from the webhook registration — all filtering is now done in the Go handler (revertHPASpec), keeping the logic in one place and avoiding CEL evaluation overhead on the API server.

Assisted-by: Claude:claude-sonnet-4-6
Force-pushed from c4a69bd to 79ae686.
Summary
- New HPA migration controller (controller_hpa_migration.go) that detects existing HPAs targeting the same workload as a DPA, imports their configuration (metrics, replica bounds, custom queries) as DPA objectives/constraints, neutralises the HPA by setting its replica count to a sentinel value, and adds an HPAMigrationFinalizer to the DPA. On DPA deletion the finalizer handler restores the HPA to its original state.
- New admission webhook (hpa_webhook.go) that intercepts UPDATE operations on HPA resources managed by a DPA, reverts any spec change to keep the HPA neutralised, and surfaces a warning to the user.
- RestoreHPAImportedSpec() is called before writing an updated profile spec to Kubernetes so one-shot-imported fields survive profile hash changes.
- Fix for %%tag_kube_cluster_name%% / %%env_VAR%% placeholders being stored raw in the DPA spec: ResolveMetricQuery() (exported from externalmetrics/model) is now called at import time inside resolveExternalMetricConfig().

Test plan

- HPAWebhook in hpa_webhook_test.go (8 tests, all passing)
- RestoreHPAImportedSpec in pod_autoscaler_test.go
- ResolveMetricQuery in utils_test.go and template variable resolution in controller_hpa_migration_test.go
- End-to-end on a kind cluster (clamoriniere-hpa-migration) with the Datadog Operator and a real HPA targeting an nginx deployment

🤖 PR description and code assisted by Claude:claude-sonnet-4-6