
dyninst/symdb: stream JSON into gzip, chunk by compressed size #50396

Merged
gh-worker-dd-mergequeue-cf854d[bot] merged 2 commits into main from ajwerner/symdb-streaming on May 6, 2026

Conversation

ajwerner (Contributor) commented May 5, 2026

What does this PR do?

The SymDB upload pipeline previously held each batch in memory three times over: as a []Scope slice, as a marshalled JSON []byte, and as a gzipped []byte. Batches were flushed by buffered function count (default 10000), which let the in-memory []Scope grow large before any compression.

Replace UploadBatch([]Scope) with a streaming BatchEncoder that owns the gzip writer and a json.Encoder wrapping it. Scopes are encoded straight into the gzip stream as they arrive, the caller no longer accumulates a slice, and flushes are triggered when the compressed buffer reaches a threshold (DefaultFlushThresholdBytes = 2 MiB). The envelope is written inside the gzip stream as {service,version,language,upload_id,batch_num, scopes:[...],final}, with final written at flush time.
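In sketch form (illustrative only, not the actual implementation: BatchEncoder, Scope, AddScope, and Size are this PR's names, the bodies and field names are assumptions):

```go
package uploader

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
)

// Scope stands in for the real SymDB scope type.
type Scope struct {
	Name string `json:"name"`
}

// BatchEncoder owns the gzip writer and a json.Encoder wrapping it.
type BatchEncoder struct {
	buf    bytes.Buffer // compressed bytes accumulated so far
	gz     *gzip.Writer
	enc    *json.Encoder
	scopes int // scopes written into the current batch
}

func newBatchEncoder() *BatchEncoder {
	b := &BatchEncoder{}
	b.gz = gzip.NewWriter(&b.buf)
	b.enc = json.NewEncoder(b.gz)
	b.enc.SetEscapeHTML(false) // see the review thread below
	return b
}

// AddScope encodes one scope straight into the gzip stream; the caller
// never holds a []Scope slice or an uncompressed JSON []byte.
func (b *BatchEncoder) AddScope(s Scope) error {
	if b.scopes > 0 {
		if _, err := b.gz.Write([]byte{','}); err != nil {
			return err
		}
	}
	b.scopes++
	return b.enc.Encode(s)
}

// Size reports the compressed bytes buffered so far. Bytes still inside
// gzip's window are invisible here, which is why the threshold is soft.
func (b *BatchEncoder) Size() int { return b.buf.Len() }
```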

The threshold is soft: gzip's internal window means the flushed payload may overshoot by up to ~32 KiB. A threshold <= 0 forces per-scope flushing, preserving the cancel-between-flushes test behaviour previously achieved with maxBufferFuncs=1.
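The overshoot comes from data sitting in gzip's window that is not yet visible in the output buffer. A tiny standalone demonstration (standard compress/gzip behaviour, not code from this PR):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)

	gz.Write(make([]byte, 16<<10)) // 16 KiB written into the stream...
	fmt.Println(buf.Len())         // ...but little is visible yet (typically just the gzip header)

	gz.Flush()             // force the window out
	fmt.Println(buf.Len()) // now the compressed bytes land in buf
	gz.Close()
}
```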

ErrUpload is exposed as a sentinel so callers can distinguish HTTP-side failures (retryable) from local encoder errors via errors.Is.
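A hedged sketch of what that enables on the caller side; only the errors.Is(err, ErrUpload) check is from this PR, while the Flush signature and the retry policy are assumptions:

```go
package caller

import (
	"context"
	"errors"
	"fmt"

	"github.com/DataDog/datadog-agent/pkg/dyninst/symdb/uploader" // import path assumed from the diff
)

// flushBatch treats HTTP-side failures as retryable and local encoder
// errors as fatal.
func flushBatch(ctx context.Context, enc *uploader.BatchEncoder) error {
	err := enc.Flush(ctx) // Flush signature assumed
	switch {
	case err == nil:
		return nil
	case errors.Is(err, uploader.ErrUpload):
		// Payload encoded fine, the HTTP upload failed: worth retrying.
		return fmt.Errorf("retryable symdb upload: %w", err)
	default:
		// Local encoder failure: retrying would just reproduce it.
		return err
	}
}
```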

Motivation

We've seen some OOMs uploading symdb data.

Describe how you validated your changes

There's some testing, but there could be more, I suppose.

Additional Notes

https://datadoghq.atlassian.net/browse/DEBUG-5553

ajwerner requested a review from a team as a code owner May 5, 2026 18:59
ajwerner marked this pull request as draft May 5, 2026 19:00
dd-octo-sts (bot) added the internal (Identify a non-fork PR) label May 5, 2026
github-actions (bot) added the medium review (PR review might take time) label May 5, 2026
chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 73310b4273


Comment on lines +132 to +137
// BatchEncoder streams Scope objects into a gzip-compressed SymDB JSON
// envelope and uploads chunks to the SymDB intake whenever the compressed
// payload reaches the configured threshold. A single BatchEncoder
// corresponds to one logical upload (one UploadID) and may emit multiple
// batches.
type BatchEncoder struct {
P2: Restore the exported batch upload API or update CLI

This replaces the exported UploadBatch/UploadInfo API, but pkg/dyninst/symdb/cli/main.go still imports this package and calls up.UploadBatch(..., uploader.UploadInfo{...}) under the same linux_bpf build tag. I checked the repo with rg "UploadBatch|UploadInfo"; that CLI is the remaining caller, so building the SymDB CLI package now fails until it is migrated to NewBatchEncoder or a compatibility wrapper is kept.


ajwerner (Contributor, author) replied:
Addressed in the latest push: the CLI is migrated to NewBatchEncoder in the same commit, so the linux_bpf build is no longer broken.

Comment thread on pkg/dyninst/symdb/uploader/symdb.go (outdated)
// the gzip writer's internal window may not yet be reflected in the buffer
// length, so the actual flushed payload may overshoot by up to the gzip
// window size (~32 KiB).
func WithFlushThreshold(bytes int) BatchEncoderOption {
A reviewer (Contributor) commented:
it looks like this option is not really optional; each caller needs to provide it, otherwise they get a weird behavior for 0. Let's get rid of the options and pass it explicitly to the ctor.

ajwerner (Contributor, author) replied:
Done. Dropped WithFlushThreshold and the flushThreshold field; the encoder no longer manages a threshold. Replaced ShouldFlush with Size() and let the caller decide when to flush.
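Roughly, the resulting caller-side loop (a sketch under the names discussed in this thread; the channel-fed shape, threshold constant, and error handling are illustrative):

```go
package caller

import (
	"context"

	"github.com/DataDog/datadog-agent/pkg/dyninst/symdb/uploader" // import path assumed
)

// encodeAll streams scopes into the encoder and flushes whenever the
// compressed size crosses the caller's own threshold. AddScope, Size,
// and Flush are the names from this thread; the rest is an assumption.
func encodeAll(ctx context.Context, enc *uploader.BatchEncoder, scopes <-chan uploader.Scope) error {
	const flushThreshold = 2 << 20 // 2 MiB, the former DefaultFlushThresholdBytes
	for scope := range scopes {
		if err := enc.AddScope(scope); err != nil {
			return err
		}
		if enc.Size() >= flushThreshold {
			if err := enc.Flush(ctx); err != nil {
				return err
			}
		}
	}
	return enc.Flush(ctx) // final flush for whatever remains buffered
}
```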

}
b.gz = gzip.NewWriter(&b.buf)
b.enc = json.NewEncoder(b.gz)
b.enc.SetEscapeHTML(false)
A reviewer (Contributor) commented:
what's up with this SetEscapeHTML? We didn't have it before?

ajwerner (Contributor, author) replied:
It isn't needed, and it is weird to have added it, but I think it is technically what we want.

String values encode as JSON strings coerced to valid UTF-8, replacing invalid bytes with the Unicode replacement rune. So that the JSON will be safe to embed inside HTML <script> tags, the string is encoded using HTMLEscape, which replaces "<", ">", "&", U+2028, and U+2029 with "\u003c", "\u003e", "\u0026", "\u2028", and "\u2029". This replacement can be disabled when using an Encoder, by calling Encoder.SetEscapeHTML(false).

from https://pkg.go.dev/encoding/json#Marshal

ajwerner (Contributor, author) added:
Kept it but added a comment. The previous json.Marshal path also escaped (it's the default for both Marshal and Encoder), so disabling it here is a small intentional wire-format change rather than a preservation of the old behaviour: without it, a Go type name like chan<- would be emitted as chan\u003c- in the SymDB payload, which nobody wants.
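For reference, the escaping difference in isolation (standard encoding/json behaviour, not code from this PR):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	v := map[string]string{"type": "chan<- int"}

	def, _ := json.Marshal(v) // HTML escaping is on by default
	fmt.Println(string(def))  // {"type":"chan\u003c- int"}

	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false) // emit '<', '>', '&' literally
	_ = enc.Encode(v)
	fmt.Print(buf.String()) // {"type":"chan<- int"}
}
```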

Comment thread on pkg/dyninst/symdb/uploader/symdb.go (outdated)
// this upload.
Final bool
// BatchEncoder streams Scope objects into a gzip-compressed SymDB JSON
// envelope and uploads chunks to the SymDB intake whenever the compressed
A reviewer (Contributor) commented:
this comment is not really accurate, is it? The Encoder doesn't upload anything by itself; you need to call Flush(). If it doesn't do the flushing on its own, consider letting the caller deal with the threshold too (so only provide a Size() function)

ajwerner (Contributor, author) replied:
Did both. Fixed the doc comment (the encoder no longer claims to flush on its own; the caller drives Flush). Dropped the threshold from the encoder in favour of Size() — the caller now owns the threshold check.

dd-octo-sts (bot) commented May 5, 2026

Files inventory check summary

File checks results against ancestor 84ef6841:

Results for datadog-agent_7.80.0~devel.git.482.efbc8af.pipeline.111607510-1_amd64.deb:

No change detected

Comment thread on pkg/dyninst/symdb/uploader/symdb.go (outdated)
}

// NewBatchEncoder creates a BatchEncoder for a single logical upload.
func (s *SymDBUploader) NewBatchEncoder(uploadID uuid.UUID, opts ...BatchEncoderOption) *BatchEncoder {
A reviewer (Contributor) commented:
actually, massage the flow some more -- as it stands, the caller creates a SymDBUploader only to call NewBatchEncoder() on it, and uses the Encoder from then on. Either make the caller only aware of the uploader, or only aware of the encoder.

ajwerner (Contributor, author) replied:
Went with encoder-only. SymDBUploader is now an unexported helper inside BatchEncoder; callers construct the encoder directly via NewBatchEncoder(url, service, version, runtimeID, uploadID, headers...).
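A sketch of the resulting construction (the parameter list follows the signature quoted above; the URL, service/version strings, runtimeID type, and header shape are placeholders and assumptions):

```go
package caller

import (
	"net/http"

	"github.com/google/uuid"

	"github.com/DataDog/datadog-agent/pkg/dyninst/symdb/uploader" // import path assumed
)

func newEncoder(runtimeID string) *uploader.BatchEncoder {
	// Parameter order follows the signature quoted above; the
	// concrete values are placeholders.
	return uploader.NewBatchEncoder(
		"https://example.com/symdb/v1/input", // url (placeholder)
		"my-service", "1.2.3", // service, version
		runtimeID, uuid.New(), // runtime ID, upload ID
		http.Header{"DD-API-KEY": {"<redacted>"}}, // headers... (shape assumed)
	)
}
```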

dd-octo-sts (bot) commented May 5, 2026

Static quality checks

✅ Please find below the results from static quality gates
Comparison made with ancestor 84ef684
📊 Static Quality Gates Dashboard
🔗 SQG Job

Successful checks

Info

Quality gate Change Size in MiB (prev → curr → max)
agent_deb_amd64 +3.86 KiB (0.00% increase) 740.963 → 740.967 → 750.310
agent_rpm_amd64 +3.86 KiB (0.00% increase) 740.947 → 740.950 → 750.280
agent_rpm_arm64 +2.34 KiB (0.00% increase) 719.026 → 719.028 → 724.050
agent_suse_amd64 +3.86 KiB (0.00% increase) 740.947 → 740.950 → 750.280
agent_suse_arm64 +2.34 KiB (0.00% increase) 719.026 → 719.028 → 724.050
docker_agent_amd64 +3.84 KiB (0.00% increase) 801.342 → 801.346 → 805.870
docker_agent_arm64 +2.34 KiB (0.00% increase) 804.251 → 804.253 → 809.730
docker_agent_jmx_amd64 +3.86 KiB (0.00% increase) 992.262 → 992.266 → 996.590
docker_agent_jmx_arm64 +2.34 KiB (0.00% increase) 983.949 → 983.951 → 989.410
23 successful checks with minimal change (< 2 KiB)
Quality gate Current Size
agent_deb_amd64_fips 699.151 MiB
agent_heroku_amd64 309.076 MiB
agent_rpm_amd64_fips 699.135 MiB
agent_rpm_arm64_fips 680.300 MiB
agent_suse_amd64_fips 699.135 MiB
agent_suse_arm64_fips 680.300 MiB
docker_cluster_agent_amd64 206.603 MiB
docker_cluster_agent_arm64 220.634 MiB
docker_cws_instrumentation_amd64 7.142 MiB
docker_cws_instrumentation_arm64 6.689 MiB
docker_host_profiler_amd64 301.101 MiB
docker_host_profiler_arm64 312.622 MiB
docker_dogstatsd_amd64 39.374 MiB
docker_dogstatsd_arm64 37.628 MiB
dogstatsd_deb_amd64 30.032 MiB
dogstatsd_deb_arm64 28.173 MiB
dogstatsd_rpm_amd64 30.032 MiB
dogstatsd_suse_amd64 30.032 MiB
iot_agent_deb_amd64 44.462 MiB
iot_agent_deb_arm64 41.442 MiB
iot_agent_deb_armhf 42.179 MiB
iot_agent_rpm_amd64 44.462 MiB
iot_agent_suse_amd64 44.462 MiB
On-wire sizes (compressed)
Quality gate Change Size in MiB (prev → curr → max)
agent_deb_amd64 -3.12 KiB (0.00% reduction) 175.268 → 175.265 → 179.160
agent_deb_amd64_fips -27.21 KiB (0.02% reduction) 167.002 → 166.976 → 174.440
agent_heroku_amd64 neutral 74.956 MiB → 80.310
agent_rpm_amd64 -16.61 KiB (0.01% reduction) 177.309 → 177.293 → 182.080
agent_rpm_amd64_fips +49.55 KiB (0.03% increase) 168.365 → 168.413 → 174.140
agent_rpm_arm64 -13.51 KiB (0.01% reduction) 159.426 → 159.412 → 163.610
agent_rpm_arm64_fips +6.19 KiB (0.00% increase) 151.734 → 151.740 → 156.850
agent_suse_amd64 -16.61 KiB (0.01% reduction) 177.309 → 177.293 → 182.080
agent_suse_amd64_fips +49.55 KiB (0.03% increase) 168.365 → 168.413 → 174.140
agent_suse_arm64 -13.51 KiB (0.01% reduction) 159.426 → 159.412 → 163.610
agent_suse_arm64_fips +6.19 KiB (0.00% increase) 151.734 → 151.740 → 156.850
docker_agent_amd64 -3.66 KiB (0.00% reduction) 267.710 → 267.707 → 272.990
docker_agent_arm64 neutral 254.708 MiB → 261.470
docker_agent_jmx_amd64 +3.67 KiB (0.00% increase) 336.348 → 336.351 → 341.610
docker_agent_jmx_arm64 +7.88 KiB (0.00% increase) 319.350 → 319.357 → 326.050
docker_cluster_agent_amd64 neutral 72.426 MiB → 73.460
docker_cluster_agent_arm64 neutral 67.880 MiB → 68.680
docker_cws_instrumentation_amd64 neutral 2.999 MiB → 3.330
docker_cws_instrumentation_arm64 neutral 2.729 MiB → 3.090
docker_host_profiler_amd64 neutral 110.739 MiB → 125.600
docker_host_profiler_arm64 neutral 105.089 MiB → 120.000
docker_dogstatsd_amd64 neutral 15.240 MiB → 15.870
docker_dogstatsd_arm64 neutral 14.558 MiB → 14.890
dogstatsd_deb_amd64 -2.17 KiB (0.03% reduction) 7.944 → 7.942 → 8.830
dogstatsd_deb_arm64 neutral 6.826 MiB → 7.750
dogstatsd_rpm_amd64 neutral 7.953 MiB → 8.840
dogstatsd_suse_amd64 neutral 7.953 MiB → 8.840
iot_agent_deb_amd64 -2.75 KiB (0.02% reduction) 11.703 → 11.700 → 13.210
iot_agent_deb_arm64 neutral 9.996 MiB → 11.620
iot_agent_deb_armhf +4.71 KiB (0.05% increase) 10.204 → 10.209 → 11.780
iot_agent_rpm_amd64 neutral 11.716 MiB → 13.230
iot_agent_suse_amd64 neutral 11.716 MiB → 13.230

cit-pr-commenter-54b7da (bot) commented May 5, 2026

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: f68e072b-b9bc-48d3-a9ec-8718241c9da3

Baseline: e8100bb
Comparison: 1f3b62b
Diff

Optimization Goals: ✅ No significant changes detected

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

perf experiment goal Δ mean % Δ mean % CI trials links
docker_containers_cpu % cpu utilization +0.04 [-2.89, +2.97] 1 Logs

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
quality_gate_metrics_logs memory utilization +1.20 [+0.95, +1.45] 1 Logs bounds checks dashboard
ddot_metrics_sum_cumulativetodelta_exporter memory utilization +0.79 [+0.55, +1.03] 1 Logs
ddot_metrics memory utilization +0.56 [+0.36, +0.76] 1 Logs
ddot_logs memory utilization +0.52 [+0.45, +0.59] 1 Logs
ddot_metrics_sum_delta memory utilization +0.51 [+0.33, +0.69] 1 Logs
otlp_ingest_metrics memory utilization +0.49 [+0.34, +0.64] 1 Logs
quality_gate_idle memory utilization +0.29 [+0.24, +0.34] 1 Logs bounds checks dashboard
quality_gate_security_idle memory utilization +0.27 [+0.19, +0.35] 1 Logs bounds checks dashboard
otlp_ingest_logs memory utilization +0.22 [+0.13, +0.31] 1 Logs
quality_gate_security_mean_fs_load memory utilization +0.17 [+0.13, +0.21] 1 Logs bounds checks dashboard
ddot_metrics_sum_cumulative memory utilization +0.16 [-0.00, +0.31] 1 Logs
file_to_blackhole_500ms_latency egress throughput +0.05 [-0.35, +0.45] 1 Logs
docker_containers_cpu % cpu utilization +0.04 [-2.89, +2.97] 1 Logs
file_to_blackhole_1000ms_latency egress throughput +0.02 [-0.42, +0.46] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput +0.00 [-0.09, +0.10] 1 Logs
uds_dogstatsd_to_api ingress throughput +0.00 [-0.19, +0.20] 1 Logs
file_to_blackhole_100ms_latency egress throughput -0.02 [-0.17, +0.12] 1 Logs
uds_dogstatsd_to_api_v3 ingress throughput -0.03 [-0.23, +0.18] 1 Logs
quality_gate_security_no_fs_load memory utilization -0.03 [-0.14, +0.07] 1 Logs bounds checks dashboard
quality_gate_logs % cpu utilization -0.04 [-1.01, +0.92] 1 Logs bounds checks dashboard
uds_dogstatsd_20mb_12k_contexts_20_senders memory utilization -0.04 [-0.10, +0.01] 1 Logs
file_to_blackhole_0ms_latency egress throughput -0.05 [-0.61, +0.51] 1 Logs
docker_containers_memory memory utilization -0.11 [-0.21, -0.00] 1 Logs
quality_gate_idle_all_features memory utilization -0.51 [-0.55, -0.47] 1 Logs bounds checks dashboard
tcp_syslog_to_blackhole ingress throughput -1.86 [-2.07, -1.64] 1 Logs

Bounds Checks: ✅ Passed

perf experiment bounds_check_name replicates_passed observed_value links
docker_containers_cpu simple_check_run 10/10 713 ≥ 26
docker_containers_memory memory_usage 10/10 244.86MiB ≤ 370MiB
docker_containers_memory simple_check_run 10/10 685 ≥ 26
file_to_blackhole_0ms_latency memory_usage 10/10 0.16GiB ≤ 1.20GiB
file_to_blackhole_0ms_latency missed_bytes 10/10 0B = 0B
file_to_blackhole_1000ms_latency memory_usage 10/10 0.21GiB ≤ 1.20GiB
file_to_blackhole_1000ms_latency missed_bytes 10/10 0B = 0B
file_to_blackhole_100ms_latency memory_usage 10/10 0.17GiB ≤ 1.20GiB
file_to_blackhole_100ms_latency missed_bytes 10/10 0B = 0B
file_to_blackhole_500ms_latency memory_usage 10/10 0.19GiB ≤ 1.20GiB
file_to_blackhole_500ms_latency missed_bytes 10/10 0B = 0B
quality_gate_idle intake_connections 10/10 3 ≤ 4 bounds checks dashboard
quality_gate_idle memory_usage 10/10 142.36MiB ≤ 147MiB bounds checks dashboard
quality_gate_idle_all_features intake_connections 10/10 3 ≤ 4 bounds checks dashboard
quality_gate_idle_all_features memory_usage 10/10 471.54MiB ≤ 495MiB bounds checks dashboard
quality_gate_logs intake_connections 10/10 4 ≤ 6 bounds checks dashboard
quality_gate_logs memory_usage 10/10 177.55MiB ≤ 195MiB bounds checks dashboard
quality_gate_logs missed_bytes 10/10 0B = 0B bounds checks dashboard
quality_gate_metrics_logs cpu_usage 10/10 353.16 ≤ 2000 bounds checks dashboard
quality_gate_metrics_logs intake_connections 10/10 3 ≤ 6 bounds checks dashboard
quality_gate_metrics_logs memory_usage 10/10 376.60MiB ≤ 430MiB bounds checks dashboard
quality_gate_metrics_logs missed_bytes 10/10 0B = 0B bounds checks dashboard
quality_gate_security_idle cpu_usage 10/10 25.23 ≤ 40 bounds checks dashboard
quality_gate_security_idle memory_usage 10/10 290.62MiB ≤ 330MiB bounds checks dashboard
quality_gate_security_mean_fs_load cpu_usage 10/10 52.56 ≤ 70 bounds checks dashboard
quality_gate_security_mean_fs_load memory_usage 10/10 267.21MiB ≤ 320MiB bounds checks dashboard
quality_gate_security_no_fs_load cpu_usage 10/10 19.68 ≤ 40 bounds checks dashboard
quality_gate_security_no_fs_load memory_usage 10/10 279.78MiB ≤ 320MiB bounds checks dashboard

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_security_idle, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_security_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_security_mean_fs_load, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_security_mean_fs_load, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_security_no_fs_load, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_security_no_fs_load, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.

ajwerner added the changelog/no-changelog (No changelog entry needed) and qa/done (QA done before merge and regressions are covered by tests) labels May 5, 2026
- Drop the WithFlushThreshold functional option. The threshold isn't truly
  optional (every caller needs to set it, and the 0 sentinel for
  flush-after-every-scope was a wart), and the encoder doesn't auto-flush
  anyway. Replace ShouldFlush with a Size() accessor and let the caller
  decide when to flush.
- Hide SymDBUploader inside the BatchEncoder: callers now construct an
  encoder directly with NewBatchEncoder(url, service, version, runtimeID,
  uploadID, headers...) rather than going through SymDBUploader as an
  intermediary they then never use again.
- Comment why we SetEscapeHTML(false): without it, '<', '>' and '&' inside
  string values get emitted as their six-character \u00XX escape, which
  would corrupt Go type names like 'chan<-' in the SymDB payload.
- Fix the BatchEncoder doc comment: the encoder doesn't ship batches on
  its own; the caller must call Flush.
- Migrate the symdb CLI to the new API so the package keeps building. As a
  side effect, this fixes a pre-existing bug where the CLI always sent
  BatchNum: 0 (the local batchNum was declared but never incremented);
  BatchEncoder increments it correctly per batch.
ajwerner force-pushed the ajwerner/symdb-streaming branch from b4a69e2 to efbc8af May 5, 2026 20:31
github-actions (bot) added the long review (PR is complex, plan time to review it) label and removed the medium review (PR review might take time) label May 5, 2026
ajwerner marked this pull request as ready for review May 5, 2026 20:51
ajwerner (Contributor, author) commented May 6, 2026

/merge

gh-worker-devflow-routing-ef8351 (bot) commented May 6, 2026

View all feedbacks in Devflow UI.

2026-05-06 16:20:44 UTC ℹ️ Start processing command /merge


2026-05-06 16:20:48 UTC ℹ️ MergeQueue: pull request added to the queue

The expected merge time in main is approximately 1h (p90).


2026-05-06 17:09:02 UTC ℹ️ MergeQueue: This merge request was merged

gh-worker-dd-mergequeue-cf854d (bot) merged commit 1f3b62b into main May 6, 2026
330 checks passed
gh-worker-dd-mergequeue-cf854d (bot) deleted the ajwerner/symdb-streaming branch May 6, 2026 17:09
github-actions (bot) added this to the 7.80.0 milestone May 6, 2026
ajwerner added the backport/7.79.x (Automatically create a backport PR to the 7.79.x branch once the PR is merged) label May 6, 2026