
NETOBSERV-2471: TLS usage tracking#815

Merged
jotak merged 12 commits into netobserv:main from jotak:tls
Apr 1, 2026

Conversation

@jotak
Member

@jotak jotak commented Oct 14, 2025

  • Start implementing TLS, by reading the TLS header when present
  • Extract SSL version
  • Report the TLS version in output records

Fixes netobserv/netobserv-operator#2492

Dependencies

Summary by CodeRabbit

  • New Features

    • TLS/SSL usage tracking added (config option ENABLE_TLS_TRACKING, default off).
    • Flow records now include TLS metadata: version, cipher suite, key-exchange/curve, TLS message types, and SSL-mismatch indication.
    • Export/protobuf and decoding now surface TLS fields so downstream consumers and UIs can display TLS info.
  • Chores

    • Updated vendored dependencies including OpenTelemetry and various golang modules.

@openshift-ci

openshift-ci bot commented Oct 14, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@jotak jotak changed the title from "WIP TLS" to "NETOBSERV-2471: TLS usage tracking" Oct 30, 2025
@openshift-ci-robot
Collaborator

openshift-ci-robot commented Oct 30, 2025

@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.21.0" version, but no target version was set.

Details

In response to this:

(WIP status: pretty much done, except for the handshake version (see code comment); maybe needs an experimental feature gate. Correctness to be verified, as I'm getting fewer TLS flows than expected; am I missing something?)

use tls.VersionName

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jotak jotak force-pushed the tls branch 2 times, most recently from 9797fe9 to abc2fd7 on October 30, 2025 16:04
@openshift-ci-robot
Collaborator

openshift-ci-robot commented Oct 30, 2025

@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.21.0" version, but no target version was set.

Details

In response to this:

  • Start implementing TLS, by reading the TLS header when present
  • Extract SSL version (not done yet for the handshake message)
  • Report the TLS version in output records

TODO: handshake message: version is stored a couple of bits further

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Collaborator

openshift-ci-robot commented Oct 31, 2025

@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.21.0" version, but no target version was set.

Details

In response to this:

  • Start implementing TLS, by reading the TLS header when present
  • Extract SSL version (not done yet for the handshake message)
  • Report the TLS version in output records

TODO: handshake message: version is stored a couple of bits further

Dependencies

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jotak jotak marked this pull request as ready for review October 31, 2025 15:27
@openshift-ci-robot
Collaborator

openshift-ci-robot commented Oct 31, 2025

@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.21.0" version, but no target version was set.

Details

In response to this:

  • Start implementing TLS, by reading the TLS header when present
  • Extract SSL version
  • Report the TLS version in output records

Dependencies

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.



// Extract TLS info
static inline void track_tls_version(struct __sk_buff *skb, pkt_info *pkt) {
    if (pkt->id->transport_protocol == IPPROTO_TCP) {
Contributor


Do you think this works for the SCTP protocol too? Not sure what the story is for DTLS?

Member Author


I haven't dug into other protocols; it might be something to do if someone asks for it, but I believe TCP covers most expectations.

Contributor

@msherif1234 msherif1234 left a comment


Nice work! I left some comments/suggestions, nothing major, just small enhancements. Also, please share some screenshots of this new feature.

@jotak jotak added the ok-to-test label (set manually when a PR is safe to test; triggers image build on PR) Nov 3, 2025
@jotak jotak added the needs-review label (tells that the PR needs a review) Mar 19, 2026
@jotak jotak requested a review from leandroberetta March 19, 2026 15:59
@jotak jotak added the ok-to-test label (set manually when a PR is safe to test; triggers image build on PR) Mar 19, 2026
@github-actions

New images:

quay.io/netobserv/netobserv-ebpf-agent:5132e43

quay.io/netobserv/ebpf-bytecode:5132e43

They will expire in two weeks.

To deploy this build, run from the operator repo, assuming the operator is running:

USER=netobserv VERSION=5132e43 make set-agent-image

Comment on lines +16 to +23
var typesMap map[string]uint8

func init() {
    typesMap = make(map[string]uint8, len(tlsTypes))
    for _, t := range tlsTypes {
        typesMap[t.name] = t.value
    }
}
Member


Is that still used?

Comment on lines +53 to +73
if (handshake_version == 0x0303) {
    // Check extensions to discriminate 1.2 and 1.3
    u8 session_len, compr_len;
    u16 cipher_len, exts_len;
    offset += 32; /*skip random*/
    // Read session
    if (bpf_skb_load_bytes(skb, offset, &session_len, sizeof(session_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 1 + session_len;
    // Read cipher suites
    if (bpf_skb_load_bytes(skb, offset, &cipher_len, sizeof(cipher_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 2 + bpf_ntohs(cipher_len);
    // Read compression
    if (bpf_skb_load_bytes(skb, offset, &compr_len, sizeof(compr_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 1 + compr_len;
    // Read extensions
Member


It would be good to be able to see pre-TLS-1.2 connections here, even if they're not expected in recent environments. That's especially useful for auditing legacy TLS usage.

Something along those lines should work:

Suggested change
if (handshake_version == 0x0303) {
    // Check extensions to discriminate 1.2 and 1.3
    u8 session_len, compr_len;
    u16 cipher_len, exts_len;
    offset += 32; /*skip random*/
    // Read session
    if (bpf_skb_load_bytes(skb, offset, &session_len, sizeof(session_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 1 + session_len;
    // Read cipher suites
    if (bpf_skb_load_bytes(skb, offset, &cipher_len, sizeof(cipher_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 2 + bpf_ntohs(cipher_len);
    // Read compression
    if (bpf_skb_load_bytes(skb, offset, &compr_len, sizeof(compr_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 1 + compr_len;
    // Read extensions

u8 session_len;
offset += 32; /*skip random*/
// Read session
if (bpf_skb_load_bytes(skb, offset, &session_len, sizeof(session_len)) < 0) {
    return TLSTRACKER_UNKNOWN;
}
offset += 1 + session_len;
// Read cipher suite (same position in all versions)
if (bpf_skb_load_bytes(skb, offset, &tls->cipher_suite, sizeof(tls->cipher_suite)) < 0) {
    return TLSTRACKER_UNKNOWN;
}
tls->cipher_suite = bpf_ntohs(tls->cipher_suite);
offset += 3; // Skip also compression (1B)
if (handshake_version == 0x0303) {
    // Check extensions to discriminate 1.2 and 1.3
    u16 exts_len;

Member Author


Pre-1.2 should already be handled: before 1.2, the TLS version was not provided through an extension, it was in the handshake version field (read from lines 41-46 above).
So it would skip the whole if (handshake_version == 0x0303) block and return correctly with the handshake version.
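The version-selection logic described in this exchange can be sketched outside the eBPF code; a minimal Go illustration (function name and inputs are hypothetical, not from the agent):

```go
package main

import "fmt"

// chooseTLSVersion mirrors the logic described above: pre-1.2 ClientHellos
// carry their version directly in the handshake version field, while 0x0303
// is ambiguous (TLS 1.2 keeps it as-is; TLS 1.3 retains it for middlebox
// compatibility and signals 0x0304 via the supported_versions extension).
func chooseTLSVersion(handshakeVersion uint16, supportedVersions []uint16) uint16 {
	if handshakeVersion != 0x0303 {
		// SSL 3.0 (0x0300), TLS 1.0 (0x0301), TLS 1.1 (0x0302)
		return handshakeVersion
	}
	for _, v := range supportedVersions {
		if v == 0x0304 {
			return 0x0304 // TLS 1.3 advertised in the extension
		}
	}
	return 0x0303 // plain TLS 1.2
}

func main() {
	fmt.Printf("0x%04x\n", chooseTLSVersion(0x0301, nil))              // TLS 1.0 passthrough
	fmt.Printf("0x%04x\n", chooseTLSVersion(0x0303, []uint16{0x0304})) // resolved to TLS 1.3
	fmt.Printf("0x%04x\n", chooseTLSVersion(0x0303, nil))              // stays TLS 1.2
}
```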

if ((bpf_skb_load_bytes(skb, offset, &rec, sizeof(rec))) < 0) {
    return TLSTRACKER_NOTLS;
}
offset += 5;
Member


Can you please add a comment explaining this?

if (bpf_skb_load_bytes(skb, offset, &handshake, sizeof(handshake)) < 0) {
    return TLSTRACKER_NOTLS;
}
offset += 4;
Member


Same here. A comment would help.
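For context on the two offsets these comments ask about: per RFC 8446, a TLS record starts with a 5-byte header (content type, legacy version, 2-byte length) and each handshake message with a 4-byte header (message type, 3-byte length), which is what offset += 5 and offset += 4 skip. A hedged Go illustration of the same layouts (types and parsing are mine, not the agent's):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// TLS record header: 5 bytes (what `offset += 5` skips in the eBPF code).
type recordHeader struct {
	ContentType uint8  // 0x16 = handshake
	Version     uint16 // legacy record version
	Length      uint16 // payload length
}

// Handshake header: 4 bytes (what `offset += 4` skips).
type handshakeHeader struct {
	MsgType uint8  // 1 = ClientHello, 2 = ServerHello
	Length  uint32 // 24-bit big-endian length
}

func parseHeaders(b []byte) (recordHeader, handshakeHeader, bool) {
	if len(b) < 9 {
		return recordHeader{}, handshakeHeader{}, false
	}
	rec := recordHeader{
		ContentType: b[0],
		Version:     binary.BigEndian.Uint16(b[1:3]),
		Length:      binary.BigEndian.Uint16(b[3:5]),
	}
	hs := handshakeHeader{
		MsgType: b[5],
		Length:  uint32(b[6])<<16 | uint32(b[7])<<8 | uint32(b[8]),
	}
	return rec, hs, true
}

func main() {
	// First 9 bytes of a typical ClientHello-bearing record.
	sample := []byte{0x16, 0x03, 0x01, 0x00, 0xc4, 0x01, 0x00, 0x00, 0xc0}
	rec, hs, ok := parseHeaders(sample)
	fmt.Println(ok, rec.ContentType, hs.MsgType) // true 22 1
}
```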

@github-actions github-actions bot removed the ok-to-test label Mar 30, 2026
@coderabbitai

coderabbitai bot commented Mar 30, 2026

📝 Walkthrough

Walkthrough

Adds kernel-space TLS usage tracking via a new eBPF TLS tracker, extends flow metrics with TLS fields and a runtime toggle, wires the toggle from agent config through eBPF variables, updates protobuf/schema and Go model/encoding to carry TLS metadata, and upgrades several vendored dependencies.

Changes

Cohort / File(s) Summary
eBPF TLS tracker & integration
bpf/tls_tracker.h, bpf/flows.c, bpf/configs.h, bpf/types.h
New TLS parsing/metadata extraction header; per-packet TLS tracking integrated into flow processing; new enable_tls_usage_tracking config and flow metric TLS fields.
Generated eBPF bindings (all arches)
pkg/ebpf/bpf_arm64_bpfel.go, pkg/ebpf/bpf_powerpc_bpfel.go, pkg/ebpf/bpf_s390_bpfeb.go, pkg/ebpf/bpf_x86_bpfel.go
Extended BpfFlowMetricsT with TLS fields and added EnableTlsUsageTracking variable spec/variable.
Agent config & tracer wiring
pkg/config/config.go, pkg/tracer/tracer.go
Added EnableTLSTracking config flag and set the eBPF compile-time variable when loading flows.
Model, PB, encoding, export
proto/flow.proto, pkg/pbflow/proto.go, pkg/decode/decode_protobuf.go, pkg/decode/decode_protobuf_test.go, pkg/exporter/converters_test.go, pkg/model/record.go, pkg/model/record_test.go, pkg/model/tls_types.go
Extended protobuf Record with TLS fields and SSL-mismatch, mapped TLS fields to/from model and to map outputs, added TLS-to-string helpers and tests updated/added.
eBPF-to-Go integration sites
bpf/... callsites (flows update calls) and generated bindings
Updated callsites to pass per-packet TLS info to flow update logic and initialize TLS fields for new flows.
Proto/format tests
pkg/decode/*, pkg/exporter/*, pkg/model/* tests
Updated/added tests asserting TLS field propagation and SSL-mismatch formatting.
go.mod & vendor upgrades
go.mod, vendor/modules.txt, vendor/go.opentelemetry.io/otel/..., vendor/golang.org/x/*, vendor/github.com/minio/minio-go/v7/*, vendor/github.com/grpc-ecosystem/grpc-gateway/v2/*
Multiple dependency bumps and vendored updates (OpenTelemetry v1.40→v1.42, x/net updates with httpsfv & HTTP/2 changes, minio and grpc-gateway tweaks).
Vendor functional & library changes
vendor/... (many files across otel, x/net, x/text, minio, grpc-gateway)
Various behavioral, API, doc, and test changes in vendored packages (see diff for per-file details).

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant Cfg as Config
    participant Tracer as Tracer
    participant eBPF as eBPF Kernel
    participant Flow as Flow Monitor
    participant Export as Exporter

    App->>Cfg: set ENABLE_TLS_TRACKING
    Cfg->>Tracer: provide config
    Tracer->>eBPF: set enable_tls_usage_tracking variable
    Flow->>eBPF: packet (TCP, ACK)
    eBPF->>eBPF: track_tls() parse record & hello
    eBPF->>Flow: populate tls_info in flow update
    Flow->>Export: export BpfFlowMetrics (with TLS fields)
    Export->>Export: encode to protobuf / map (TLS strings)
    Export->>App: deliver enriched flow record

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


@jotak
Member Author

jotak commented Mar 30, 2026

Thanks @jpinsonneau, comments addressed.

@openshift-ci-robot
Collaborator

openshift-ci-robot commented Mar 30, 2026

@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

  • Start implementing TLS, by reading the TLS header when present
  • Extract SSL version
  • Report the TLS version in output records

Fixes netobserv/netobserv-operator#2492

Dependencies

Summary by CodeRabbit

Release Notes

  • New Features

  • Added TLS/SSL usage tracking and monitoring for network flows with configurable ENABLE_TLS_TRACKING option (default: disabled).

  • Flow records now capture TLS metadata including version, cipher suite, key exchange information, and SSL mismatch detection.

  • Chores

  • Updated dependencies including OpenTelemetry (v1.40.0→v1.42.0), grpc-gateway, and golang.org packages.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jpinsonneau
Member

LGTM


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go (1)

355-355: ⚠️ Potential issue | 🟡 Minor

Variable name typo in error check.

The condition checks err != io.EOF but the variable from readFull is rerr, not err. This should be rerr != io.EOF for consistency with line 528 and correctness.

🔧 Proposed fix
-		if rerr != nil && rerr != io.ErrUnexpectedEOF && err != io.EOF {
+		if rerr != nil && rerr != io.ErrUnexpectedEOF && rerr != io.EOF {
			return UploadInfo{}, rerr
		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go` at line 355,
The error-check condition mistakenly compares the wrong variable: change the
conditional that reads "if rerr != nil && rerr != io.ErrUnexpectedEOF && err !=
io.EOF" to use rerr for the EOF check so it becomes "rerr != io.EOF"; locate the
check in the streaming upload flow (in the same block that handles readFull
returning rerr) in api-put-object-streaming.go and replace the err reference
with rerr to match the readFull variable and the similar check around line 528.
vendor/golang.org/x/net/http2/server.go (1)

2168-2173: ⚠️ Potential issue | 🟡 Minor

Apply the same priority initialization on the h2c upgrade path.

processHeaders now derives initial priority and intermediary detection from request headers, but upgradeRequest still hard-codes the default. The first upgraded request will ignore an explicit Priority header and proxy markers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/golang.org/x/net/http2/server.go` around lines 2168 - 2173, The h2c
upgrade path in serverConn.upgradeRequest is still hard-coding the default
priority and intermediary flag; update upgradeRequest to derive initial priority
and intermediary detection from the incoming req headers the same way
processHeaders does: parse the Priority header and proxy markers from req.Header
(or call the same helper used by processHeaders), set sc.priorityAware and
sc.hasIntermediary accordingly, compute the initial priority (instead of calling
defaultRFC9218Priority unconditionally), and pass that computed priority into
newStream when creating st (also ensure sc.maxClientStreamID and st.reqTrailer
remain set as before).
vendor/golang.org/x/net/http2/http2.go (1)

172-183: ⚠️ Potential issue | 🟡 Minor

Keep Setting.Valid() in sync with the new setting.

Adding SettingNoRFC7540Priorities here without a matching case in Setting.Valid() means Valid() still returns nil for invalid values greater than 1. serverConn.processSetting compensates locally, but any other caller that relies on Valid() will still accept malformed settings.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/golang.org/x/net/http2/http2.go` around lines 172 - 183, The new
SettingNoRFC7540Priorities constant was added to the settingName map but not
included in Setting.Valid(), so Valid() will incorrectly accept values above the
previous max; update the Setting.Valid() switch (or its validation logic) to
include a case for SettingNoRFC7540Priorities (the same symbol) so it returns a
non-nil error for unknown values, ensuring callers of Setting.Valid() (not just
serverConn.processSetting) reject malformed settings.
🧹 Nitpick comments (6)
pkg/decode/decode_protobuf_test.go (1)

103-103: Test correctly validates TLS 1.2 version mapping.

0x0303 → "TLS 1.2" is accurate. Consider adding test cases for other versions (0x0301=TLS 1.0, 0x0302=TLS 1.1, 0x0304=TLS 1.3) and edge cases (0x0000 or invalid values) to ensure robust coverage of the new TLS tracking feature.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/decode/decode_protobuf_test.go` at line 103, The test currently asserts
SslVersion: 0x0303 (TLS 1.2) but lacks coverage for other mappings; extend the
test in pkg/decode/decode_protobuf_test.go to include additional cases that map
SslVersion values to their expected strings (e.g., 0x0301 -> "TLS 1.0", 0x0302
-> "TLS 1.1", 0x0304 -> "TLS 1.3") and add edge cases for invalid/zero values
(e.g., 0x0000 and an out-of-range value) to verify the decoding logic; locate
the test that sets the SslVersion field (SslVersion) and either expand it into a
table-driven test or add separate assertions to validate each mapping and the
handling of invalid values.
pkg/tracer/tracer.go (1)

2169-2185: Keep NewPacketFetcher constant-pruning in sync with Line 2169.

You added constEnableTLSUsageTracking to flow variables, but it is not pruned in NewPacketFetcher’s cleanup list. Sync both paths to match the in-file contract.

Proposed fix (outside this range, in NewPacketFetcher)
 	delete(spec.Programs, constEnableDirectFlowRingbuf)
 	delete(spec.Programs, constEnableOpenSSLTracking)
+	delete(spec.Programs, constEnableTLSUsageTracking)
 	delete(spec.Programs, dnsNameMap)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/tracer/tracer.go` around lines 2169 - 2185, The variablesMapping list now
includes constEnableTLSUsageTracking but NewPacketFetcher’s constant-pruning
list wasn’t updated; update NewPacketFetcher to remove/prune
constEnableTLSUsageTracking alongside the other constants so the two lists
remain in sync (search for NewPacketFetcher and the cleanup/prune constants
block and add constEnableTLSUsageTracking to that cleanup list to match the
variablesMapping declaration).
pkg/model/record_test.go (1)

55-59: Assert non-zero cipher suite/key share to fully validate layout decoding.

Line 56 and Line 57 are zeroed, so the test cannot catch mis-decoding of TlsCipherSuite and TlsKeyShare. Use non-zero fixture values and assert both fields in expected BpfFlowMetrics.

Proposed test hardening diff
-		0x00, 0x00, // u16 tls_cipher_suite
-		0x00, 0x00, // u16 tls_key_share
+		0x01, 0x13, // u16 tls_cipher_suite (0x1301)
+		0x1d, 0x00, // u16 tls_key_share (0x001d)

...
 		Metrics: ebpf.BpfFlowMetrics{
 ...
 			SslVersion:         0x0303,
+			TlsCipherSuite:     0x1301,
+			TlsKeyShare:        0x001d,
 			TlsTypes:           0x21,
 		},

Also applies to: 91-92

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/model/record_test.go` around lines 55 - 59, Update the test fixture bytes
in pkg/model/record_test.go to use non-zero values for the two u16 fields
currently zeroed (tls_cipher_suite and tls_key_share) so the decoder must parse
them, then add assertions to the expected BpfFlowMetrics for the TlsCipherSuite
and TlsKeyShare fields to validate their decoded values; locate the byte slice
in the test that contains 0x00,0x00 for those fields and replace with distinct
non-zero pairs, and update the expected BpfFlowMetrics object (and any
Assert/Equal checks) to include the corresponding TlsCipherSuite and TlsKeyShare
values.
bpf/tls_tracker.h (3)

145-150: Clarify the offset advancement comment.

Line 150's comment says "Skip also compression (1B)" but offset += 3 actually skips cipher suite (2B) + compression (1B). The code is correct, but the comment could be clearer:

-        offset += 3; // Skip also compression (1B)
+        offset += 3; // Skip past cipher suite (2B) + compression (1B)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bpf/tls_tracker.h` around lines 145 - 150, Offset advancement comment is
ambiguous: after reading tls->cipher_suite via bpf_skb_load_bytes, offset += 3
advances past the 2-byte cipher suite and the 1-byte compression; update the
comment next to offset += 3 (and mention tls->cipher_suite and
bpf_skb_load_bytes) to explicitly state "skip cipher suite (2B) and compression
(1B)" so it matches the actual increment.

92-92: Consider named constants for TLS extension types.

Magic numbers 0x002b (supported_versions) and 0x0033 (key_share) appear multiple times. Named constants would improve readability:

#define TLS_EXT_SUPPORTED_VERSIONS 0x002b
#define TLS_EXT_KEY_SHARE          0x0033

Also applies to: 171-171, 179-179

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bpf/tls_tracker.h` at line 92, Replace the magic numeric TLS extension
literals with named constants: add definitions like TLS_EXT_SUPPORTED_VERSIONS
(0x002b) and TLS_EXT_KEY_SHARE (0x0033) near the top of tls_tracker.h (or in an
appropriate header) and replace all direct comparisons of ext_hdr.type == 0x002b
and ext_hdr.type == 0x0033 with ext_hdr.type == TLS_EXT_SUPPORTED_VERSIONS and
ext_hdr.type == TLS_EXT_KEY_SHARE respectively (also update the other
occurrences noted). Use macros or an enum to define the constants so all
references (ext_hdr.type checks in this file) are clear and maintainable.

279-282: ACK-only filter may miss some TLS packets.

Filtering on flags & 0x10 (ACK) is a reasonable optimization, but TLS data could appear in non-ACK packets (e.g., pure PSH). This is likely intentional to reduce overhead, but worth documenting as a known limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bpf/tls_tracker.h` around lines 279 - 282, The ACK-only filter (checking
flags & 0x10) in the TCP path can miss TLS in non-ACK packets (e.g., PSH);
either broaden the condition or document this limitation: update the conditional
around proto/IPPROTO_TCP that calls track_tls_tcp(skb, l4_hdr, tls) to also
allow packets with PSH (flags & 0x08) or better, allow any TCP packet with
payload length > 0 to be inspected, and if you keep the ACK-only optimization
add a clear comment in tls_tracker.h explaining this known limitation and the
tradeoff in overhead versus completeness.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@go.mod`:
- Line 166: The go.mod currently contains a replace directive pointing to the
personal fork "github.com/jotak/flowlogs-pipeline
v0.0.0-20260319155004-bff214fed5a0" that overrides
"github.com/netobserv/flowlogs-pipeline"; remove or replace that directive so
the module resolves to the upstream package instead of the fork. Locate the
replace line containing "replace github.com/netobserv/flowlogs-pipeline =>
github.com/jotak/flowlogs-pipeline" and either delete that replace statement or
update it to the official upstream module/version you intend to use, then run go
mod tidy to update go.sum and verify builds.

In `@vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go`:
- Line 528: This change modifies a vendored third-party file (the condition
using rerr in api-put-object-streaming.go) which must not be edited in-tree;
either revert this edit and upstream the fix or add a go.mod replace directive
to point to a fork that contains the change. Locate the conditional using the
symbol rerr (and related checks against io.ErrUnexpectedEOF and io.EOF) and
remove/revert your modifications in the vendor copy; instead implement the
correction in your fork and update go.mod with a replace to that fork or submit
the patch upstream to minio-go. Do not attempt to patch other pre-existing bugs
in the vendored file (for example the rerr vs err issue mentioned) in this PR.

In `@vendor/go.opentelemetry.io/otel/.golangci.yml`:
- Line 197: The var-naming rule's arguments contain an extra nested list marker
("- -") so the revive option is passed as a single-item list instead of a map;
edit the var-naming arguments to remove the extra dash so the entry for
skip-package-name-collision-with-go-std appears as a normal map entry (i.e.,
change the "- - skip-package-name-collision-with-go-std: true" line to a
single-level "- skip-package-name-collision-with-go-std: true") so revive will
correctly recognize the option.

In
`@vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/client.go`:
- Around line 266-267: The request's GetBody is currently returning the
uncompressed `body` while `req.bodyReader` uses the compressed bytes
`b.Bytes()`, so when the request is retried GetBody will replay the wrong
(uncompressed) payload; change `req.GetBody` to return the same compressed
reader used by `req.bodyReader` (i.e. use `bodyReaderErr(b.Bytes())` or an
equivalent helper that wraps the compressed buffer) so `bodyReader`,
`req.bodyReader`, and `req.GetBody` are all consistent for gzip requests (refer
to `req.bodyReader`, `req.GetBody`, `bodyReader`, `bodyReaderErr`, `body`, and
`b.Bytes()`).

In `@vendor/go.opentelemetry.io/otel/semconv/v1.40.0/README.md`:
- Around line 1-3: The vendored semconv package
(vendor/go.opentelemetry.io/otel/semconv/v1.40.0, e.g., README.md for semconv
v1.40.0) appears unused; either remove the entire vendor directory for
go.opentelemetry.io/otel/semconv/v1.40.0 from the repo or add it as an explicit
dependency in go.mod (run `go get go.opentelemetry.io/otel/semconv@v1.40.0` or
the appropriate version) and regenerate the vendor folder; update or remove any
related references in build scripts as needed so the project compiles cleanly
without stale vendored files.

In `@vendor/golang.org/x/net/http2/frame.go`:
- Around line 1651-1665: The rfc9218Priority method on MetaHeadersFrame
currently overwrites multiple "priority" header instances by assigning s =
field.Value; change it to combine repeated priority header values the same way
HTTP does (comma-join instances) before calling parseRFC9218Priority. Locate
MetaHeadersFrame.rfc9218Priority and where s is set from mh.Fields and, instead
of replacing s, append field.Value to s separated by a comma when s is non-empty
(trimming whitespace as needed) so parseRFC9218Priority receives the combined
header string; keep priorityAware and hasIntermediary logic unchanged.

In `@vendor/golang.org/x/net/http2/server.go`:
- Around line 1831-1834: When handling SettingNoRFC7540Priorities, besides
validating s.Val, treat s.Val==1 as the peer signalling RFC7540
priority-awareness: set the connection-level flag/state so the server uses
RFC7540-style priorities for new requests (instead of the legacy incremental
default) and avoid waiting for a per-stream Priority header or PRIORITY_UPDATE;
keep the existing ConnectionError(ErrCodeProtocol) return for s.Val>1. Locate
the switch case on SettingNoRFC7540Priorities and update the connection state
(the flag used by stream creation/priority logic) and call the codepath that
applies/recalculates initial priorities for streams that have no explicit
Priority yet so the first requests on the connection use RFC7540 semantics.

In `@vendor/golang.org/x/net/internal/httpsfv/httpsfv.go`:
- Around line 244-254: ParseDictionary currently advances past a parsed item
then trims whitespace but allows fall-through when the next byte isn't a comma,
letting inputs like "u=0 i" be accepted; in ParseDictionary (the loop handling
s, countLeftWhitespace and commas) update the logic so that after trimming
whitespace and confirming len(s)>0 you explicitly check if s[0] != ',' and if so
return false (reject the dictionary member), otherwise consume the comma and
continue; ensure you still trim whitespace after consuming the comma as in the
existing flow.

TLS_EXT_KEY_SHARE (0x0033) near the top of tls_tracker.h (or in an appropriate
header) and replace all direct comparisons of ext_hdr.type == 0x002b and
ext_hdr.type == 0x0033 with ext_hdr.type == TLS_EXT_SUPPORTED_VERSIONS and
ext_hdr.type == TLS_EXT_KEY_SHARE respectively (also update the other
occurrences noted). Use macros or an enum to define the constants so all
references (ext_hdr.type checks in this file) are clear and maintainable.
- Around line 279-282: The ACK-only filter (checking flags & 0x10) in the TCP
path can miss TLS in non-ACK packets (e.g., PSH); either broaden the condition
or document this limitation: update the conditional around proto/IPPROTO_TCP
that calls track_tls_tcp(skb, l4_hdr, tls) to also allow packets with PSH (flags
& 0x08) or better, allow any TCP packet with payload length > 0 to be inspected,
and if you keep the ACK-only optimization add a clear comment in tls_tracker.h
explaining this known limitation and the tradeoff in overhead versus
completeness.
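For illustration, here is a user-space Go sketch of the extension walk with named constants, as the nitpick suggests (the actual tracker code is C in `bpf/tls_tracker.h`; the function and variable names here are illustrative, but the extension IDs and the 2-byte type / 2-byte length wire layout come from RFC 8446):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Named constants instead of magic literals, as the comment suggests.
// The IDs come from the IANA TLS registry / RFC 8446.
const (
	tlsExtSupportedVersions = 0x002b // supported_versions
	tlsExtKeyShare          = 0x0033 // key_share
)

// findExtensions walks a TLS extensions block: each extension is a
// 2-byte big-endian type, a 2-byte length, then `length` bytes of data.
func findExtensions(b []byte) (hasSupportedVersions, hasKeyShare bool) {
	for len(b) >= 4 {
		extType := binary.BigEndian.Uint16(b[0:2])
		extLen := int(binary.BigEndian.Uint16(b[2:4]))
		switch extType {
		case tlsExtSupportedVersions:
			hasSupportedVersions = true
		case tlsExtKeyShare:
			hasKeyShare = true
		}
		if len(b) < 4+extLen {
			break // truncated extension, stop parsing
		}
		b = b[4+extLen:]
	}
	return hasSupportedVersions, hasKeyShare
}

func main() {
	// supported_versions (3 bytes of data) followed by an empty key_share
	blob := []byte{0x00, 0x2b, 0x00, 0x03, 0x02, 0x03, 0x04, 0x00, 0x33, 0x00, 0x00}
	sv, ks := findExtensions(blob)
	fmt.Println(sv, ks) // prints: true true
}
```

Comparisons against `tlsExtSupportedVersions` / `tlsExtKeyShare` read the same in C once the macros exist, which is the point of the suggestion.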
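The ACK-versus-payload tradeoff can also be sketched in user-space Go (the flag masks are the standard TCP header bits; the function name and signature are illustrative, not the agent's API):

```go
package main

import "fmt"

// Standard TCP header flag bits (RFC 9293).
const (
	tcpFlagSYN = 0x02
	tcpFlagPSH = 0x08
	tcpFlagACK = 0x10
)

// shouldInspect sketches the broadened filter: inspect any segment that
// carries payload, or failing that, any segment with ACK or PSH set.
func shouldInspect(flags uint8, payloadLen int) bool {
	return payloadLen > 0 || flags&(tcpFlagACK|tcpFlagPSH) != 0
}

func main() {
	fmt.Println(shouldInspect(tcpFlagACK, 0))   // true: pure ACK, matches the current filter
	fmt.Println(shouldInspect(tcpFlagPSH, 100)) // true: data-bearing PSH, missed by an ACK-only check
	fmt.Println(shouldInspect(tcpFlagSYN, 0))   // false: empty SYN, nothing to inspect
}
```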

In `@pkg/decode/decode_protobuf_test.go`:
- Line 103: The test currently asserts SslVersion: 0x0303 (TLS 1.2) but lacks
coverage for other mappings; extend the test in
pkg/decode/decode_protobuf_test.go to include additional cases that map
SslVersion values to their expected strings (e.g., 0x0301 -> "TLS 1.0", 0x0302
-> "TLS 1.1", 0x0304 -> "TLS 1.3") and add edge cases for invalid/zero values
(e.g., 0x0000 and an out-of-range value) to verify the decoding logic; locate
the test that sets the SslVersion field (SslVersion) and either expand it into a
table-driven test or add separate assertions to validate each mapping and the
handling of invalid values.
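A table-driven sketch of such a mapping using the standard library's `tls.VersionName` (Go 1.21+), which the PR description also mentions; `decodeSSLVersion` is a hypothetical stand-in for the agent's decoder, not its actual function:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// decodeSSLVersion renders the raw 2-byte version from a flow record
// as a display string (hypothetical helper for illustration).
func decodeSSLVersion(v uint16) string {
	if v == 0 {
		return "" // zero means no TLS was observed on the flow
	}
	// tls.VersionName covers known versions and falls back to a hex
	// string such as "0x1234" for unknown values.
	return tls.VersionName(v)
}

func main() {
	for _, v := range []uint16{0x0301, 0x0302, 0x0303, 0x0304, 0x0000, 0x1234} {
		fmt.Printf("0x%04x -> %q\n", v, decodeSSLVersion(v))
	}
	// 0x0301 -> "TLS 1.0", 0x0302 -> "TLS 1.1", 0x0303 -> "TLS 1.2",
	// 0x0304 -> "TLS 1.3", 0x0000 -> "", 0x1234 -> "0x1234"
}
```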

In `@pkg/model/record_test.go`:
- Around line 55-59: Update the test fixture bytes in pkg/model/record_test.go
to use non-zero values for the two u16 fields currently zeroed (tls_cipher_suite
and tls_key_share) so the decoder must parse them, then add assertions to the
expected BpfFlowMetrics for the TlsCipherSuite and TlsKeyShare fields to
validate their decoded values; locate the byte slice in the test that contains
0x00,0x00 for those fields and replace with distinct non-zero pairs, and update
the expected BpfFlowMetrics object (and any Assert/Equal checks) to include the
corresponding TlsCipherSuite and TlsKeyShare values.

In `@pkg/tracer/tracer.go`:
- Around line 2169-2185: The variablesMapping list now includes
constEnableTLSUsageTracking but NewPacketFetcher’s constant-pruning list wasn’t
updated; update NewPacketFetcher to remove/prune constEnableTLSUsageTracking
alongside the other constants so the two lists remain in sync (search for
NewPacketFetcher and the cleanup/prune constants block and add
constEnableTLSUsageTracking to that cleanup list to match the variablesMapping
declaration).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a9487391-33b4-430d-8918-cd655ebfb014

📥 Commits

Reviewing files that changed from the base of the PR and between 29e32fd and fb32bfe.

⛔ Files ignored due to path filters (6)
  • go.sum is excluded by !**/*.sum
  • pkg/ebpf/bpf_arm64_bpfel.o is excluded by !**/*.o
  • pkg/ebpf/bpf_powerpc_bpfel.o is excluded by !**/*.o
  • pkg/ebpf/bpf_s390_bpfeb.o is excluded by !**/*.o
  • pkg/ebpf/bpf_x86_bpfel.o is excluded by !**/*.o
  • pkg/pbflow/flow.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (128)
  • bpf/configs.h
  • bpf/flows.c
  • bpf/tls_tracker.h
  • bpf/types.h
  • go.mod
  • pkg/config/config.go
  • pkg/decode/decode_protobuf.go
  • pkg/decode/decode_protobuf_test.go
  • pkg/ebpf/bpf_arm64_bpfel.go
  • pkg/ebpf/bpf_powerpc_bpfel.go
  • pkg/ebpf/bpf_s390_bpfeb.go
  • pkg/ebpf/bpf_x86_bpfel.go
  • pkg/exporter/converters_test.go
  • pkg/model/record.go
  • pkg/model/record_test.go
  • pkg/model/tls_types.go
  • pkg/pbflow/proto.go
  • pkg/tracer/tracer.go
  • proto/flow.proto
  • vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/handler.go
  • vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/mux.go
  • vendor/github.com/minio/minio-go/v7/.gitignore
  • vendor/github.com/minio/minio-go/v7/api-get-object.go
  • vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go
  • vendor/github.com/minio/minio-go/v7/api-update-object-encryption.go
  • vendor/github.com/minio/minio-go/v7/functional_tests.go
  • vendor/go.opentelemetry.io/otel/.golangci.yml
  • vendor/go.opentelemetry.io/otel/CHANGELOG.md
  • vendor/go.opentelemetry.io/otel/CONTRIBUTING.md
  • vendor/go.opentelemetry.io/otel/Makefile
  • vendor/go.opentelemetry.io/otel/README.md
  • vendor/go.opentelemetry.io/otel/baggage/baggage.go
  • vendor/go.opentelemetry.io/otel/dependencies.Dockerfile
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/client.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/version.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/client.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/version.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/client.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/observ/instrumentation.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/version.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/client.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/internal/observ/instrumentation.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/internal/version.go
  • vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/version.go
  • vendor/go.opentelemetry.io/otel/internal/errorhandler/errorhandler.go
  • vendor/go.opentelemetry.io/otel/internal/global/handler.go
  • vendor/go.opentelemetry.io/otel/internal/global/state.go
  • vendor/go.opentelemetry.io/otel/metric/asyncfloat64.go
  • vendor/go.opentelemetry.io/otel/metric/asyncint64.go
  • vendor/go.opentelemetry.io/otel/metric/meter.go
  • vendor/go.opentelemetry.io/otel/metric/syncfloat64.go
  • vendor/go.opentelemetry.io/otel/metric/syncint64.go
  • vendor/go.opentelemetry.io/otel/propagation/baggage.go
  • vendor/go.opentelemetry.io/otel/propagation/trace_context.go
  • vendor/go.opentelemetry.io/otel/requirements.txt
  • vendor/go.opentelemetry.io/otel/sdk/metric/internal/observ/instrumentation.go
  • vendor/go.opentelemetry.io/otel/sdk/metric/periodic_reader.go
  • vendor/go.opentelemetry.io/otel/sdk/metric/version.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/builtin.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/container.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/env.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/host_id.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/host_id_readfile.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/os.go
  • vendor/go.opentelemetry.io/otel/sdk/resource/process.go
  • vendor/go.opentelemetry.io/otel/sdk/trace/batch_span_processor.go
  • vendor/go.opentelemetry.io/otel/sdk/trace/internal/observ/batch_span_processor.go
  • vendor/go.opentelemetry.io/otel/sdk/trace/internal/observ/simple_span_processor.go
  • vendor/go.opentelemetry.io/otel/sdk/trace/internal/observ/tracer.go
  • vendor/go.opentelemetry.io/otel/sdk/trace/provider.go
  • vendor/go.opentelemetry.io/otel/sdk/trace/span.go
  • vendor/go.opentelemetry.io/otel/sdk/version.go
  • vendor/go.opentelemetry.io/otel/semconv/v1.39.0/MIGRATION.md
  • vendor/go.opentelemetry.io/otel/semconv/v1.39.0/README.md
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/MIGRATION.md
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/README.md
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/attribute_group.go
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/doc.go
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/error_type.go
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/exception.go
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/otelconv/metric.go
  • vendor/go.opentelemetry.io/otel/semconv/v1.40.0/schema.go
  • vendor/go.opentelemetry.io/otel/trace/auto.go
  • vendor/go.opentelemetry.io/otel/trace/trace.go
  • vendor/go.opentelemetry.io/otel/trace/tracestate.go
  • vendor/go.opentelemetry.io/otel/version.go
  • vendor/go.opentelemetry.io/otel/versions.yaml
  • vendor/golang.org/x/net/html/iter.go
  • vendor/golang.org/x/net/html/node.go
  • vendor/golang.org/x/net/html/nodetype_string.go
  • vendor/golang.org/x/net/http2/client_priority_go126.go
  • vendor/golang.org/x/net/http2/client_priority_go127.go
  • vendor/golang.org/x/net/http2/frame.go
  • vendor/golang.org/x/net/http2/http2.go
  • vendor/golang.org/x/net/http2/server.go
  • vendor/golang.org/x/net/http2/transport.go
  • vendor/golang.org/x/net/http2/writesched_priority_rfc7540.go
  • vendor/golang.org/x/net/internal/httpsfv/httpsfv.go
  • vendor/golang.org/x/net/internal/socket/empty.s
  • vendor/golang.org/x/net/internal/socket/msghdr_solaris_64bit.go
  • vendor/golang.org/x/net/internal/socket/socket.go
  • vendor/golang.org/x/net/internal/socket/sys.go
  • vendor/golang.org/x/net/internal/socket/sys_posix.go
  • vendor/golang.org/x/net/ipv4/header.go
  • vendor/golang.org/x/net/ipv6/control_rfc2292_unix.go
  • vendor/golang.org/x/net/ipv6/control_rfc3542_unix.go
  • vendor/golang.org/x/net/publicsuffix/data/children
  • vendor/golang.org/x/net/publicsuffix/data/nodes
  • vendor/golang.org/x/net/publicsuffix/data/text
  • vendor/golang.org/x/net/publicsuffix/table.go
  • vendor/golang.org/x/text/secure/bidirule/bidirule.go
  • vendor/golang.org/x/text/secure/bidirule/bidirule10.0.0.go
  • vendor/golang.org/x/text/secure/bidirule/bidirule9.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables10.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables11.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables12.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables13.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables15.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables17.0.0.go
  • vendor/golang.org/x/text/unicode/bidi/tables9.0.0.go
  • vendor/golang.org/x/text/unicode/norm/forminfo.go
  • vendor/golang.org/x/text/unicode/norm/tables10.0.0.go
  • vendor/golang.org/x/text/unicode/norm/tables11.0.0.go
  • vendor/golang.org/x/text/unicode/norm/tables12.0.0.go
  • vendor/golang.org/x/text/unicode/norm/tables15.0.0.go
  • vendor/golang.org/x/text/unicode/norm/tables17.0.0.go
  • vendor/golang.org/x/text/unicode/norm/tables9.0.0.go
  • vendor/modules.txt
💤 Files with no reviewable changes (6)
  • vendor/go.opentelemetry.io/otel/semconv/v1.39.0/MIGRATION.md
  • vendor/go.opentelemetry.io/otel/semconv/v1.39.0/README.md
  • vendor/golang.org/x/net/html/iter.go
  • vendor/golang.org/x/net/internal/socket/sys.go
  • vendor/golang.org/x/text/secure/bidirule/bidirule10.0.0.go
  • vendor/golang.org/x/text/secure/bidirule/bidirule9.0.0.go

go.mod Outdated

replace github.com/vmware/go-ipfix => github.com/jotak/go-ipfix v0.0.0-20250708115123-407c539ea101

replace github.com/netobserv/flowlogs-pipeline => github.com/jotak/flowlogs-pipeline v0.0.0-20260319155004-bff214fed5a0

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Checking replace directives for flowlogs-pipeline..."
rg -n '^\s*replace\s+github.com/netobserv/flowlogs-pipeline' go.mod || true

echo
echo "Checking references to forked module..."
rg -n 'github.com/jotak/flowlogs-pipeline' go.mod go.sum || true

Repository: netobserv/netobserv-ebpf-agent

Length of output: 687


Remove the forked replace directive before merging to main.

Line 166 in go.mod redirects github.com/netobserv/flowlogs-pipeline to the personal fork github.com/jotak/flowlogs-pipeline v0.0.0-20260319155004-bff214fed5a0. Using a personal fork on the default branch creates supply-chain risk and maintenance burden; it must be replaced with either the official upstream or removed entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@go.mod` at line 166, The go.mod currently contains a replace directive
pointing to the personal fork "github.com/jotak/flowlogs-pipeline
v0.0.0-20260319155004-bff214fed5a0" that overrides
"github.com/netobserv/flowlogs-pipeline"; remove or replace that directive so
the module resolves to the upstream package instead of the fork. Locate the
replace line containing "replace github.com/netobserv/flowlogs-pipeline =>
github.com/jotak/flowlogs-pipeline" and either delete that replace statement or
update it to the official upstream module/version you intend to use, then run go
mod tidy to update go.sum and verify builds.

}

-	if rerr != nil && rerr != io.ErrUnexpectedEOF && err != io.EOF {
+	if rerr != nil && rerr != io.ErrUnexpectedEOF && rerr != io.EOF {

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify if other vendor files are modified and check go.mod for replace directives

# Check for other modified vendor files in this PR
echo "=== Other modified vendor files in this changeset ==="
git diff --name-only HEAD~1 | grep '^vendor/' || echo "None found in last commit"

# Check if go.mod has replace directive for minio-go
echo -e "\n=== Checking go.mod for minio-go replace directives ==="
rg -n "replace.*minio-go" go.mod || echo "No replace directive found"

# Search for similar error handling pattern inconsistencies
echo -e "\n=== Checking for similar error handling patterns in vendor file ==="
rg -n "err != io\.EOF" vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go

Repository: netobserv/netobserv-ebpf-agent

Length of output: 723


🏁 Script executed:

# Read the file to understand context around lines 355 and 528
cat -n vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go | sed -n '340,365p'

Repository: netobserv/netobserv-ebpf-agent

Length of output: 874


🏁 Script executed:

# Also read around line 528 to see the full context
cat -n vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go | sed -n '515,540p'

Repository: netobserv/netobserv-ebpf-agent

Length of output: 842


Remove vendor file modification or establish proper dependency override.

This vendored third-party file from minio-go/v7 is being modified in a PR about TLS tracking—these are unrelated. Manual patches to vendor/ will be lost on the next go mod vendor or dependency update.

Either:

  • Contribute the change upstream to minio-go, or
  • Use a replace directive in go.mod to point to your fork

Also note: Line 355 in the same file has a pre-existing bug (err != io.EOF should be rerr != io.EOF), but that's a separate issue for upstream, not to be fixed here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go` at line 528,
This change modifies a vendored third-party file (the condition using rerr in
api-put-object-streaming.go) which must not be edited in-tree; either revert
this edit and upstream the fix or add a go.mod replace directive to point to a
fork that contains the change. Locate the conditional using the symbol rerr (and
related checks against io.ErrUnexpectedEOF and io.EOF) and remove/revert your
modifications in the vendor copy; instead implement the correction in your fork
and update go.mod with a replace to that fork or submit the patch upstream to
minio-go. Do not attempt to patch other pre-existing bugs in the vendored file
(for example the rerr vs err issue mentioned) in this PR.

arguments:
- ["ID"] # AllowList
- ["Otel", "Aws", "Gcp"] # DenyList
- - skip-package-name-collision-with-go-std: true

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

python - <<'PY'
import sys, subprocess, pathlib
try:
    import yaml
except Exception:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "pyyaml", "-q"])
    import yaml

p = pathlib.Path("vendor/go.opentelemetry.io/otel/.golangci.yml")
cfg = yaml.safe_load(p.read_text())
rules = cfg["linters"]["settings"]["revive"]["rules"]
vn = next(r for r in rules if r.get("name") == "var-naming")
args = vn.get("arguments", [])
print("arguments:", args)
print("third_arg_type:", type(args[2]).__name__ if len(args) > 2 else "missing")
PY

Repository: netobserv/netobserv-ebpf-agent

Length of output: 191


Remove nested list in var-naming arguments at line 197.

The double dash (- -) wraps the argument in an extra list layer, making the revive rule receive [{...}] instead of {...}. This prevents the option from taking effect.

Fix
-            - - skip-package-name-collision-with-go-std: true
+            - skip-package-name-collision-with-go-std: true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/go.opentelemetry.io/otel/.golangci.yml` at line 197, The var-naming
rule's arguments contain an extra nested list marker ("- -") so the revive
option is passed as a single-item list instead of a map; edit the var-naming
arguments to remove the extra dash so the entry for
skip-package-name-collision-with-go-std appears as a normal map entry (i.e.,
change the "- - skip-package-name-collision-with-go-std: true" line to a
single-level "- skip-package-name-collision-with-go-std: true") so revive will
correctly recognize the option.

Comment on lines 266 to +267
req.bodyReader = bodyReader(b.Bytes())
req.GetBody = bodyReaderErr(body)

⚠️ Potential issue | 🟠 Major

Fix GetBody mismatch for gzip requests.

req.bodyReader uses compressed bytes, but req.GetBody is wired to uncompressed body. This can resend the wrong payload when GetBody is used.

Suggested fix
  req.bodyReader = bodyReader(b.Bytes())
- req.GetBody = bodyReaderErr(body)
+ req.GetBody = bodyReaderErr(b.Bytes())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/client.go`
around lines 266 - 267, The request's GetBody is currently returning the
uncompressed `body` while `req.bodyReader` uses the compressed bytes
`b.Bytes()`, so when the request is retried GetBody will replay the wrong
(uncompressed) payload; change `req.GetBody` to return the same compressed
reader used by `req.bodyReader` (i.e. use `bodyReaderErr(b.Bytes())` or an
equivalent helper that wraps the compressed buffer) so `bodyReader`,
`req.bodyReader`, and `req.GetBody` are all consistent for gzip requests (refer
to `req.bodyReader`, `req.GetBody`, `bodyReader`, `bodyReaderErr`, `body`, and
`b.Bytes()`).

Comment on lines +1 to +3
# Semconv v1.40.0

[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/semconv/v1.40.0)](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.40.0)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify semconv v1.40.0 was properly vendored and check go.mod for usage

# Check if the full package exists in vendor
fd -t d 'v1.40.0' vendor/go.opentelemetry.io/otel/semconv/

# Check go.mod for semconv references
rg -n 'go.opentelemetry.io/otel/semconv' go.mod

# Look for actual usage of semconv v1.40.0 in non-vendor code
rg -l 'semconv/v1.40.0' --glob '!vendor/**'

Repository: netobserv/netobserv-ebpf-agent

Length of output: 122


Remove unused vendored semconv v1.40.0 or justify its presence.

The semconv v1.40.0 package is vendored but has no references in go.mod and is not used in the codebase. Either remove it from vendor if it's not needed, or add it as an explicit dependency in go.mod if it is required.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/go.opentelemetry.io/otel/semconv/v1.40.0/README.md` around lines 1 -
3, The vendored semconv package
(vendor/go.opentelemetry.io/otel/semconv/v1.40.0, e.g., README.md for semconv
v1.40.0) appears unused; either remove the entire vendor directory for
go.opentelemetry.io/otel/semconv/v1.40.0 from the repo or add it as an explicit
dependency in go.mod (run `go get go.opentelemetry.io/otel/semconv@v1.40.0` or
the appropriate version) and regenerate the vendor folder; update or remove any
related references in build scripts as needed so the project compiles cleanly
without stale vendored files.

Comment on lines +1651 to +1665
func (mh *MetaHeadersFrame) rfc9218Priority(priorityAware bool) (p PriorityParam, priorityAwareAfter, hasIntermediary bool) {
var s string
for _, field := range mh.Fields {
if field.Name == "priority" {
s = field.Value
priorityAware = true
}
if slices.Contains([]string{"via", "forwarded", "x-forwarded-for"}, field.Name) {
hasIntermediary = true
}
}
// No need to check for ok. parseRFC9218Priority will return a default
// value if there is no priority field or if the field cannot be parsed.
p, _ = parseRFC9218Priority(s, priorityAware && !hasIntermediary)
return p, priorityAware, hasIntermediary

⚠️ Potential issue | 🟡 Minor

Combine repeated Priority headers instead of overwriting them.

HTTP field instances are comma-joined. With the current s = field.Value, priority: u=0 followed by priority: i is parsed as only i, which changes the computed stream priority.

Proposed fix
 	for _, field := range mh.Fields {
 		if field.Name == "priority" {
-			s = field.Value
+			if s == "" {
+				s = field.Value
+			} else {
+				s += "," + field.Value
+			}
 			priorityAware = true
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/golang.org/x/net/http2/frame.go` around lines 1651 - 1665, The
rfc9218Priority method on MetaHeadersFrame currently overwrites multiple
"priority" header instances by assigning s = field.Value; change it to combine
repeated priority header values the same way HTTP does (comma-join instances)
before calling parseRFC9218Priority. Locate MetaHeadersFrame.rfc9218Priority and
where s is set from mh.Fields and, instead of replacing s, append field.Value to
s separated by a comma when s is non-empty (trimming whitespace as needed) so
parseRFC9218Priority receives the combined header string; keep priorityAware and
hasIntermediary logic unchanged.

Comment on lines +1831 to +1834
case SettingNoRFC7540Priorities:
if s.Val > 1 {
return ConnectionError(ErrCodeProtocol)
}

⚠️ Potential issue | 🟠 Major

Treat NO_RFC7540_PRIORITIES=1 as a priority-awareness signal.

Right now we only validate the setting. A peer that already sent SETTINGS_NO_RFC7540_PRIORITIES=1 still gets the legacy incremental default until it also sends a Priority header or PRIORITY_UPDATE, which defeats this negotiation for the first requests on the connection.

Proposed fix
 	case SettingNoRFC7540Priorities:
 		if s.Val > 1 {
 			return ConnectionError(ErrCodeProtocol)
 		}
+		if s.Val == 1 {
+			sc.priorityAware = true
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/golang.org/x/net/http2/server.go` around lines 1831 - 1834, When
handling SettingNoRFC7540Priorities, besides validating s.Val, treat s.Val==1 as
the peer signalling RFC7540 priority-awareness: set the connection-level
flag/state so the server uses RFC7540-style priorities for new requests (instead
of the legacy incremental default) and avoid waiting for a per-stream Priority
header or PRIORITY_UPDATE; keep the existing ConnectionError(ErrCodeProtocol)
return for s.Val>1. Locate the switch case on SettingNoRFC7540Priorities and
update the connection state (the flag used by stream creation/priority logic)
and call the codepath that applies/recalculates initial priorities for streams
that have no explicit Priority yet so the first requests on the connection use
RFC7540 semantics.

Comment on lines +244 to +254
s = s[countLeftWhitespace(s):]
if len(s) == 0 {
break
}
if s[0] == ',' {
s = s[1:]
}
s = s[countLeftWhitespace(s):]
if len(s) == 0 {
return false
}

⚠️ Potential issue | 🟠 Major

Reject dictionary members without a comma.

ParseDictionary falls through when more input remains but the next byte is not a comma. That makes malformed structured fields like u=0 i parse successfully, so invalid Priority / PRIORITY_UPDATE values can be accepted and applied instead of rejected.

Proposed fix
 		if len(s) == 0 {
 			break
 		}
-		if s[0] == ',' {
-			s = s[1:]
-		}
+		if s[0] != ',' {
+			return false
+		}
+		s = s[1:]
 		s = s[countLeftWhitespace(s):]
 		if len(s) == 0 {
 			return false
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vendor/golang.org/x/net/internal/httpsfv/httpsfv.go` around lines 244 - 254,
ParseDictionary currently advances past a parsed item then trims whitespace but
allows fall-through when the next byte isn't a comma, letting inputs like "u=0
i" be accepted; in ParseDictionary (the loop handling s, countLeftWhitespace and
commas) update the logic so that after trimming whitespace and confirming
len(s)>0 you explicitly check if s[0] != ',' and if so return false (reject the
dictionary member), otherwise consume the comma and continue; ensure you still
trim whitespace after consuming the comma as in the existing flow.

@openshift-ci-robot
Collaborator

openshift-ci-robot commented Mar 30, 2026

@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

  • Start implementing TLS, by reading the TLS header when present
  • Extract SSL version
  • Report the TLS version in output records

Fixes netobserv/netobserv-operator#2492

Dependencies

Summary by CodeRabbit

  • New Features

  • TLS/SSL usage tracking added (config option ENABLE_TLS_TRACKING, default off).

  • Flow records now include TLS metadata: version, cipher suite, key-exchange/curve, TLS message types, and SSL-mismatch indication.

  • Export/protobuf and decoding now surface TLS fields so downstream consumers and UIs can display TLS info.

  • Chores

  • Updated vendored dependencies including OpenTelemetry and various golang modules.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jotak jotak added the ok-to-test To set manually when a PR is safe to test. Triggers image build on PR. label Mar 31, 2026
@github-actions

New images:

quay.io/netobserv/netobserv-ebpf-agent:893fd75

quay.io/netobserv/ebpf-bytecode:893fd75

They will expire in two weeks.

To deploy this build, run from the operator repo, assuming the operator is running:

USER=netobserv VERSION=893fd75 make set-agent-image


@Amoghrd
Member

Amoghrd commented Mar 31, 2026

/label qe-approved

@openshift-ci openshift-ci bot added the qe-approved QE has approved this pull request label Mar 31, 2026

@Amoghrd
Member

Amoghrd commented Mar 31, 2026

/test qe-e2e-tests

@openshift-ci

openshift-ci bot commented Apr 1, 2026

@jotak: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/qe-e2e-tests
Commit: a7b3374
Required: false
Rerun command: /test qe-e2e-tests

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@github-actions github-actions bot removed the ok-to-test To set manually when a PR is safe to test. Triggers image build on PR. label Apr 1, 2026
@jotak jotak merged commit 7004f08 into netobserv:main Apr 1, 2026
6 of 9 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Apr 2, 2026
11 tasks
Amoghrd pushed a commit to Amoghrd/netobserv-ebpf-agent that referenced this pull request Apr 2, 2026
* NETOBSERV-2471: TLS usage tracking

- Start implementing TLS, by reading the TLS header when present
- Extract SSL version (not done yet for the handshake message)
- Report the TLS version in output records

* Check handshake version, use bpf_skb_load_bytes

* Show ~ on version mismatch

* Implement 1.3 / handshake extension reading

* bump flp

* Manage various edge cases on version mismatch

* Add curves, polishing

* fix format

* Make TLS tracking configurable

* remove unused constants

* add comments, remove unused code

* remove replace directive

Labels

jira/valid-reference needs-review Tells that the PR needs a review qe-approved QE has approved this pull request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

TLS and QUIC observability

7 participants