NETOBSERV-2471: TLS usage tracking #815
Conversation
Skipping CI for Draft Pull Request.
@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.21.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
9797fe9 to abc2fd7 (Compare)
bpf/tls_tracker.h (outdated)

```c
// Extract TLS info
static inline void track_tls_version(struct __sk_buff *skb, pkt_info *pkt) {
    if (pkt->id->transport_protocol == IPPROTO_TCP) {
```
> Do you think this works for SCTP too? Not sure what the story is for DTLS.

I haven't dug into other protocols; might be something to do if someone asks for it, but I believe TCP covers most of the expectations.
New images:
quay.io/netobserv/netobserv-ebpf-agent:5132e43
quay.io/netobserv/ebpf-bytecode:5132e43
They will expire in two weeks. To deploy this build, run from the operator repo, assuming the operator is running:
USER=netobserv VERSION=5132e43 make set-agent-image
pkg/model/tls_types.go (outdated)

```go
var typesMap map[string]uint8

func init() {
    typesMap = make(map[string]uint8, len(tlsTypes))
    for _, t := range tlsTypes {
        typesMap[t.name] = t.value
    }
}
```
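For illustration, here is a self-contained sketch of the same name-to-value map pattern. The `tlsTypes` entries below are hypothetical placeholders, not the actual values from `pkg/model/tls_types.go`:

```go
package main

import "fmt"

// Hypothetical entries standing in for the real tlsTypes slice in
// pkg/model/tls_types.go; names and values here are illustrative only.
type tlsType struct {
	name  string
	value uint8
}

var tlsTypes = []tlsType{
	{name: "ClientHello", value: 1},
	{name: "ServerHello", value: 2},
}

var typesMap map[string]uint8

func init() {
	// Build the name -> value lookup once, as in the PR snippet above.
	typesMap = make(map[string]uint8, len(tlsTypes))
	for _, t := range tlsTypes {
		typesMap[t.name] = t.value
	}
}

func main() {
	fmt.Println(typesMap["ClientHello"], typesMap["ServerHello"])
}
```

Building the map once in `init()` keeps per-flow lookups O(1) instead of scanning the slice.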
```c
if (handshake_version == 0x0303) {
    // Check extensions to discriminate 1.2 and 1.3
    u8 session_len, compr_len;
    u16 cipher_len, exts_len;
    offset += 32; /*skip random*/
    // Read session
    if (bpf_skb_load_bytes(skb, offset, &session_len, sizeof(session_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 1 + session_len;
    // Read cipher suites
    if (bpf_skb_load_bytes(skb, offset, &cipher_len, sizeof(cipher_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 2 + bpf_ntohs(cipher_len);
    // Read compression
    if (bpf_skb_load_bytes(skb, offset, &compr_len, sizeof(compr_len)) < 0) {
        return TLSTRACKER_UNKNOWN;
    }
    offset += 1 + compr_len;
    // Read extensions
```
It would be good to be able to see pre-TLS-1.2 connections here, even if it's not expected in recent environments. That's especially useful for auditing legacy TLS usage.
Something along those lines should work:

Suggested change:

```diff
-    if (handshake_version == 0x0303) {
-        // Check extensions to discriminate 1.2 and 1.3
-        u8 session_len, compr_len;
-        u16 cipher_len, exts_len;
-        offset += 32; /*skip random*/
-        // Read session
-        if (bpf_skb_load_bytes(skb, offset, &session_len, sizeof(session_len)) < 0) {
-            return TLSTRACKER_UNKNOWN;
-        }
-        offset += 1 + session_len;
-        // Read cipher suites
-        if (bpf_skb_load_bytes(skb, offset, &cipher_len, sizeof(cipher_len)) < 0) {
-            return TLSTRACKER_UNKNOWN;
-        }
-        offset += 2 + bpf_ntohs(cipher_len);
-        // Read compression
-        if (bpf_skb_load_bytes(skb, offset, &compr_len, sizeof(compr_len)) < 0) {
-            return TLSTRACKER_UNKNOWN;
-        }
-        offset += 1 + compr_len;
-        // Read extensions
+    u8 session_len;
+    offset += 32; /*skip random*/
+    // Read session
+    if (bpf_skb_load_bytes(skb, offset, &session_len, sizeof(session_len)) < 0) {
+        return TLSTRACKER_UNKNOWN;
+    }
+    offset += 1 + session_len;
+    // Read cipher suite (same position in all versions)
+    if (bpf_skb_load_bytes(skb, offset, &tls->cipher_suite, sizeof(tls->cipher_suite)) < 0) {
+        return TLSTRACKER_UNKNOWN;
+    }
+    tls->cipher_suite = bpf_ntohs(tls->cipher_suite);
+    offset += 3; // Skip also compression (1B)
+    if (handshake_version == 0x0303) {
+        // Check extensions to discriminate 1.2 and 1.3
+        u16 exts_len;
```
Pre-1.2 should already be handled: before 1.2, the TLS version was not provided through an extension, it was in the handshake version field (read from lines 41-46 above). So it would skip the whole if (handshake_version == 0x0303) block and return correctly with the handshake version.
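To make the reply above concrete, here is a hedged user-space sketch of the version logic being discussed (illustrative helper, not the function in bpf/tls_tracker.h): pre-1.2 ClientHellos carry their version directly in the handshake version field, while 0x0303 is ambiguous and the supported_versions extension, when present, decides between 1.2 and 1.3.

```go
package main

import "fmt"

// effectiveVersion mirrors the logic discussed above: before TLS 1.2 the
// handshake version field is authoritative; 0x0303 is ambiguous and the
// supported_versions extension value (0 when absent) decides between
// 1.2 and 1.3. Illustrative sketch only.
func effectiveVersion(handshakeVersion, supportedVersionsExt uint16) uint16 {
	if handshakeVersion == 0x0303 && supportedVersionsExt != 0 {
		return supportedVersionsExt
	}
	return handshakeVersion
}

func main() {
	fmt.Printf("0x%04x\n", effectiveVersion(0x0301, 0))      // TLS 1.0, no extension needed
	fmt.Printf("0x%04x\n", effectiveVersion(0x0303, 0))      // plain TLS 1.2
	fmt.Printf("0x%04x\n", effectiveVersion(0x0303, 0x0304)) // TLS 1.3 via extension
}
```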
```c
if ((bpf_skb_load_bytes(skb, offset, &rec, sizeof(rec))) < 0) {
    return TLSTRACKER_NOTLS;
}
offset += 5;
```
Can you please add a comment to explain this?
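For context on the `offset += 5` above: a TLS record starts with a 5-byte header (content type 1B, legacy version 2B, length 2B), which the parser skips before reading the handshake. A hedged user-space sketch of that layout (illustrative, not the eBPF code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// A TLS record header is 5 bytes: type (1B), legacy version (2B, big-endian),
// and payload length (2B, big-endian). That is why the tracker advances the
// offset by 5 after reading it. This parser is an illustrative sketch only.
type recordHeader struct {
	contentType uint8
	version     uint16
	length      uint16
}

func parseRecordHeader(b []byte) (recordHeader, bool) {
	if len(b) < 5 {
		return recordHeader{}, false
	}
	return recordHeader{
		contentType: b[0],
		version:     binary.BigEndian.Uint16(b[1:3]),
		length:      binary.BigEndian.Uint16(b[3:5]),
	}, true
}

func main() {
	// 0x16 = handshake record, legacy version 0x0301, length 0x00f4
	hdr, ok := parseRecordHeader([]byte{0x16, 0x03, 0x01, 0x00, 0xf4})
	fmt.Println(ok, hdr.contentType, hdr.version, hdr.length)
}
```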
```c
if (bpf_skb_load_bytes(skb, offset, &handshake, sizeof(handshake)) < 0) {
    return TLSTRACKER_NOTLS;
}
offset += 4;
```
Same here, a comment would help.
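Likewise for the `offset += 4` above: a TLS handshake message starts with a 4-byte header (msg type 1B, 24-bit big-endian length). A hedged sketch of that header (illustrative names, not the agent's code):

```go
package main

import "fmt"

// A TLS handshake message header is 4 bytes: msg_type (1B) followed by a
// 24-bit big-endian length, hence the offset += 4 after reading it.
// Illustrative sketch; not the code in bpf/tls_tracker.h.
func parseHandshakeHeader(b []byte) (msgType uint8, length uint32, ok bool) {
	if len(b) < 4 {
		return 0, 0, false
	}
	msgType = b[0]
	length = uint32(b[1])<<16 | uint32(b[2])<<8 | uint32(b[3])
	return msgType, length, true
}

func main() {
	// 0x01 = ClientHello, length 0x0000f0
	mt, ln, ok := parseHandshakeHeader([]byte{0x01, 0x00, 0x00, 0xf0})
	fmt.Println(ok, mt, ln)
}
```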
📝 Walkthrough

Adds kernel-space TLS usage tracking via a new eBPF TLS tracker, extends flow metrics with TLS fields and a runtime toggle, wires the toggle from agent config through eBPF variables, updates protobuf/schema and Go model/encoding to carry TLS metadata, and upgrades several vendored dependencies.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as Application
    participant Cfg as Config
    participant Tracer as Tracer
    participant eBPF as eBPF Kernel
    participant Flow as Flow Monitor
    participant Export as Exporter
    App->>Cfg: set ENABLE_TLS_TRACKING
    Cfg->>Tracer: provide config
    Tracer->>eBPF: set enable_tls_usage_tracking variable
    Flow->>eBPF: packet (TCP, ACK)
    eBPF->>eBPF: track_tls() parse record & hello
    eBPF->>Flow: populate tls_info in flow update
    Flow->>Export: export BpfFlowMetrics (with TLS fields)
    Export->>Export: encode to protobuf / map (TLS strings)
    Export->>App: deliver enriched flow record
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Thanks @jpinsonneau, comments addressed.
@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
LGTM
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go (1)

Line 355: ⚠️ Potential issue | 🟡 Minor: Variable name typo in error check.

The condition checks `err != io.EOF`, but the variable from `readFull` is `rerr`, not `err`. This should be `rerr != io.EOF` for consistency with line 528 and correctness.

🔧 Proposed fix

```diff
-	if rerr != nil && rerr != io.ErrUnexpectedEOF && err != io.EOF {
+	if rerr != nil && rerr != io.ErrUnexpectedEOF && rerr != io.EOF {
 		return UploadInfo{}, rerr
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go` at line 355: the error-check condition mistakenly compares the wrong variable; change the conditional that reads `if rerr != nil && rerr != io.ErrUnexpectedEOF && err != io.EOF` to use `rerr` for the EOF check so it becomes `rerr != io.EOF`. Locate the check in the streaming upload flow (in the same block that handles readFull returning rerr) and replace the err reference with rerr to match the readFull variable and the similar check around line 528.

vendor/golang.org/x/net/http2/server.go (1)

Lines 2168-2173: ⚠️ Potential issue | 🟡 Minor: Apply the same priority initialization on the h2c upgrade path.

`processHeaders` now derives initial priority and intermediary detection from request headers, but `upgradeRequest` still hard-codes the default. The first upgraded request will ignore an explicit `Priority` header and proxy markers.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@vendor/golang.org/x/net/http2/server.go` around lines 2168-2173: the h2c upgrade path in serverConn.upgradeRequest is still hard-coding the default priority and intermediary flag; update upgradeRequest to derive initial priority and intermediary detection from the incoming req headers the same way processHeaders does: parse the Priority header and proxy markers from req.Header (or call the same helper used by processHeaders), set sc.priorityAware and sc.hasIntermediary accordingly, compute the initial priority (instead of calling defaultRFC9218Priority unconditionally), and pass that computed priority into newStream when creating st (also ensure sc.maxClientStreamID and st.reqTrailer remain set as before).

vendor/golang.org/x/net/http2/http2.go (1)

Lines 172-183: ⚠️ Potential issue | 🟡 Minor: Keep `Setting.Valid()` in sync with the new setting.

Adding `SettingNoRFC7540Priorities` here without a matching case in `Setting.Valid()` means `Valid()` still returns nil for invalid values greater than 1. `serverConn.processSetting` compensates locally, but any other caller that relies on `Valid()` will still accept malformed settings.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@vendor/golang.org/x/net/http2/http2.go` around lines 172-183: the new SettingNoRFC7540Priorities constant was added to the settingName map but not included in Setting.Valid(), so Valid() will incorrectly accept values above the previous max; update the Setting.Valid() switch (or its validation logic) to include a case for SettingNoRFC7540Priorities so it returns a non-nil error for unknown values, ensuring callers of Setting.Valid() (not just serverConn.processSetting) reject malformed settings.
🧹 Nitpick comments (6)

pkg/decode/decode_protobuf_test.go (1)

Line 103: Test correctly validates TLS 1.2 version mapping. 0x0303 → "TLS 1.2" is accurate. Consider adding test cases for other versions (0x0301 = TLS 1.0, 0x0302 = TLS 1.1, 0x0304 = TLS 1.3) and edge cases (0x0000 or invalid values) to ensure robust coverage of the new TLS tracking feature.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pkg/decode/decode_protobuf_test.go` at line 103: the test currently asserts SslVersion: 0x0303 (TLS 1.2) but lacks coverage for other mappings; extend the test to include additional cases that map SslVersion values to their expected strings (e.g., 0x0301 -> "TLS 1.0", 0x0302 -> "TLS 1.1", 0x0304 -> "TLS 1.3") and add edge cases for invalid/zero values (e.g., 0x0000 and an out-of-range value) to verify the decoding logic; locate the test that sets the SslVersion field and either expand it into a table-driven test or add separate assertions to validate each mapping and the handling of invalid values.

pkg/tracer/tracer.go (1)
Lines 2169-2185: Keep `NewPacketFetcher` constant-pruning in sync with line 2169.

You added `constEnableTLSUsageTracking` to flow variables, but it is not pruned in `NewPacketFetcher`'s cleanup list. Sync both paths to match the in-file contract.

Proposed fix (outside this range, in `NewPacketFetcher`):

```diff
 	delete(spec.Programs, constEnableDirectFlowRingbuf)
 	delete(spec.Programs, constEnableOpenSSLTracking)
+	delete(spec.Programs, constEnableTLSUsageTracking)
 	delete(spec.Programs, dnsNameMap)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pkg/tracer/tracer.go` around lines 2169-2185: the variablesMapping list now includes constEnableTLSUsageTracking but NewPacketFetcher's constant-pruning list wasn't updated; update NewPacketFetcher to remove/prune constEnableTLSUsageTracking alongside the other constants so the two lists remain in sync (search for NewPacketFetcher and the cleanup/prune constants block and add constEnableTLSUsageTracking to that cleanup list to match the variablesMapping declaration).

pkg/model/record_test.go (1)
Lines 55-59: Assert non-zero cipher suite/key share to fully validate layout decoding.

Lines 56 and 57 are zeroed, so the test cannot catch mis-decoding of `TlsCipherSuite` and `TlsKeyShare`. Use non-zero fixture values and assert both fields in the expected `BpfFlowMetrics`.

Proposed test hardening diff:

```diff
-	0x00, 0x00, // u16 tls_cipher_suite
-	0x00, 0x00, // u16 tls_key_share
+	0x01, 0x13, // u16 tls_cipher_suite (0x1301)
+	0x1d, 0x00, // u16 tls_key_share (0x001d)
 ...
 	Metrics: ebpf.BpfFlowMetrics{
 		...
 		SslVersion: 0x0303,
+		TlsCipherSuite: 0x1301,
+		TlsKeyShare:    0x001d,
 		TlsTypes: 0x21,
 	},
```

Also applies to: lines 91-92.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pkg/model/record_test.go` around lines 55-59: update the test fixture bytes to use non-zero values for the two u16 fields currently zeroed (tls_cipher_suite and tls_key_share) so the decoder must parse them, then add assertions to the expected BpfFlowMetrics for the TlsCipherSuite and TlsKeyShare fields to validate their decoded values; locate the byte slice in the test that contains 0x00,0x00 for those fields and replace with distinct non-zero pairs, and update the expected BpfFlowMetrics object (and any Assert/Equal checks) to include the corresponding TlsCipherSuite and TlsKeyShare values.

bpf/tls_tracker.h (3)
Lines 145-150: Clarify the offset advancement comment.

Line 150's comment says "Skip also compression (1B)" but `offset += 3` actually skips cipher suite (2B) + compression (1B). The code is correct, but the comment could be clearer:

```diff
-	offset += 3; // Skip also compression (1B)
+	offset += 3; // Skip past cipher suite (2B) + compression (1B)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@bpf/tls_tracker.h` around lines 145-150: the offset advancement comment is ambiguous: after reading tls->cipher_suite via bpf_skb_load_bytes, offset += 3 advances past the 2-byte cipher suite and the 1-byte compression; update the comment next to offset += 3 to explicitly state "skip cipher suite (2B) and compression (1B)" so it matches the actual increment.

Line 92: Consider named constants for TLS extension types.

Magic numbers 0x002b (supported_versions) and 0x0033 (key_share) appear multiple times. Named constants would improve readability:

```c
#define TLS_EXT_SUPPORTED_VERSIONS 0x002b
#define TLS_EXT_KEY_SHARE 0x0033
```

Also applies to: lines 171, 179.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@bpf/tls_tracker.h` at line 92: replace the magic numeric TLS extension literals with named constants such as TLS_EXT_SUPPORTED_VERSIONS (0x002b) and TLS_EXT_KEY_SHARE (0x0033) near the top of tls_tracker.h (or in an appropriate header) and replace all direct comparisons of ext_hdr.type == 0x002b and ext_hdr.type == 0x0033 with the named constants (also update the other occurrences noted); use macros or an enum so all ext_hdr.type checks in this file are clear and maintainable.

Lines 279-282: ACK-only filter may miss some TLS packets.

Filtering on `flags & 0x10` (ACK) is a reasonable optimization, but TLS data could appear in non-ACK packets (e.g., pure PSH). This is likely intentional to reduce overhead, but worth documenting as a known limitation.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@bpf/tls_tracker.h` around lines 279-282: the ACK-only filter (checking flags & 0x10) in the TCP path can miss TLS in non-ACK packets (e.g., PSH); either broaden the condition or document this limitation: update the conditional around proto/IPPROTO_TCP that calls track_tls_tcp(skb, l4_hdr, tls) to also allow packets with PSH (flags & 0x08), or better, allow any TCP packet with payload length > 0 to be inspected; if you keep the ACK-only optimization, add a clear comment in tls_tracker.h explaining this known limitation and the tradeoff between overhead and completeness.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@go.mod`:
- Line 166: The go.mod currently contains a replace directive pointing to the
personal fork "github.com/jotak/flowlogs-pipeline
v0.0.0-20260319155004-bff214fed5a0" that overrides
"github.com/netobserv/flowlogs-pipeline"; remove or replace that directive so
the module resolves to the upstream package instead of the fork. Locate the
replace line containing "replace github.com/netobserv/flowlogs-pipeline =>
github.com/jotak/flowlogs-pipeline" and either delete that replace statement or
update it to the official upstream module/version you intend to use, then run go
mod tidy to update go.sum and verify builds.
In `@vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go`:
- Line 528: This change modifies a vendored third-party file (the condition
using rerr in api-put-object-streaming.go) which must not be edited in-tree;
either revert this edit and upstream the fix or add a go.mod replace directive
to point to a fork that contains the change. Locate the conditional using the
symbol rerr (and related checks against io.ErrUnexpectedEOF and io.EOF) and
remove/revert your modifications in the vendor copy; instead implement the
correction in your fork and update go.mod with a replace to that fork or submit
the patch upstream to minio-go. Do not attempt to patch other pre-existing bugs
in the vendored file (for example the rerr vs err issue mentioned) in this PR.
In `@vendor/go.opentelemetry.io/otel/.golangci.yml`:
- Line 197: The var-naming rule's arguments contain an extra nested list marker
("- -") so the revive option is passed as a single-item list instead of a map;
edit the var-naming arguments to remove the extra dash so the entry for
skip-package-name-collision-with-go-std appears as a normal map entry (i.e.,
change the "- - skip-package-name-collision-with-go-std: true" line to a
single-level "- skip-package-name-collision-with-go-std: true") so revive will
correctly recognize the option.
In
`@vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/client.go`:
- Around line 266-267: The request's GetBody is currently returning the
uncompressed `body` while `req.bodyReader` uses the compressed bytes
`b.Bytes()`, so when the request is retried GetBody will replay the wrong
(uncompressed) payload; change `req.GetBody` to return the same compressed
reader used by `req.bodyReader` (i.e. use `bodyReaderErr(b.Bytes())` or an
equivalent helper that wraps the compressed buffer) so `bodyReader`,
`req.bodyReader`, and `req.GetBody` are all consistent for gzip requests (refer
to `req.bodyReader`, `req.GetBody`, `bodyReader`, `bodyReaderErr`, `body`, and
`b.Bytes()`).
In `@vendor/go.opentelemetry.io/otel/semconv/v1.40.0/README.md`:
- Around line 1-3: The vendored semconv package
(vendor/go.opentelemetry.io/otel/semconv/v1.40.0, e.g., README.md for semconv
v1.40.0) appears unused; either remove the entire vendor directory for
go.opentelemetry.io/otel/semconv/v1.40.0 from the repo or add it as an explicit
dependency in go.mod (run `go get go.opentelemetry.io/otel/semconv@v1.40.0` or
the appropriate version) and regenerate the vendor folder; update or remove any
related references in build scripts as needed so the project compiles cleanly
without stale vendored files.
In `@vendor/golang.org/x/net/http2/frame.go`:
- Around line 1651-1665: The rfc9218Priority method on MetaHeadersFrame
currently overwrites multiple "priority" header instances by assigning s =
field.Value; change it to combine repeated priority header values the same way
HTTP does (comma-join instances) before calling parseRFC9218Priority. Locate
MetaHeadersFrame.rfc9218Priority and where s is set from mh.Fields and, instead
of replacing s, append field.Value to s separated by a comma when s is non-empty
(trimming whitespace as needed) so parseRFC9218Priority receives the combined
header string; keep priorityAware and hasIntermediary logic unchanged.
In `@vendor/golang.org/x/net/http2/server.go`:
- Around line 1831-1834: When handling SettingNoRFC7540Priorities, besides
validating s.Val, treat s.Val==1 as the peer signalling RFC7540
priority-awareness: set the connection-level flag/state so the server uses
RFC7540-style priorities for new requests (instead of the legacy incremental
default) and avoid waiting for a per-stream Priority header or PRIORITY_UPDATE;
keep the existing ConnectionError(ErrCodeProtocol) return for s.Val>1. Locate
the switch case on SettingNoRFC7540Priorities and update the connection state
(the flag used by stream creation/priority logic) and call the codepath that
applies/recalculates initial priorities for streams that have no explicit
Priority yet so the first requests on the connection use RFC7540 semantics.
In `@vendor/golang.org/x/net/internal/httpsfv/httpsfv.go`:
- Around line 244-254: ParseDictionary currently advances past a parsed item
then trims whitespace but allows fall-through when the next byte isn't a comma,
letting inputs like "u=0 i" be accepted; in ParseDictionary (the loop handling
s, countLeftWhitespace and commas) update the logic so that after trimming
whitespace and confirming len(s)>0 you explicitly check if s[0] != ',' and if so
return false (reject the dictionary member), otherwise consume the comma and
continue; ensure you still trim whitespace after consuming the comma as in the
existing flow.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a9487391-33b4-430d-8918-cd655ebfb014
⛔ Files ignored due to path filters (6)
go.sum is excluded by !**/*.sum
pkg/ebpf/bpf_arm64_bpfel.o is excluded by !**/*.o
pkg/ebpf/bpf_powerpc_bpfel.o is excluded by !**/*.o
pkg/ebpf/bpf_s390_bpfeb.o is excluded by !**/*.o
pkg/ebpf/bpf_x86_bpfel.o is excluded by !**/*.o
pkg/pbflow/flow.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (128)
bpf/configs.h
bpf/flows.c
bpf/tls_tracker.h
bpf/types.h
go.mod
pkg/config/config.go
pkg/decode/decode_protobuf.go
pkg/decode/decode_protobuf_test.go
pkg/ebpf/bpf_arm64_bpfel.go
pkg/ebpf/bpf_powerpc_bpfel.go
pkg/ebpf/bpf_s390_bpfeb.go
pkg/ebpf/bpf_x86_bpfel.go
pkg/exporter/converters_test.go
pkg/model/record.go
pkg/model/record_test.go
pkg/model/tls_types.go
pkg/pbflow/proto.go
pkg/tracer/tracer.go
proto/flow.proto
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/handler.go
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/mux.go
vendor/github.com/minio/minio-go/v7/.gitignore
vendor/github.com/minio/minio-go/v7/api-get-object.go
vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go
vendor/github.com/minio/minio-go/v7/api-update-object-encryption.go
vendor/github.com/minio/minio-go/v7/functional_tests.go
vendor/go.opentelemetry.io/otel/.golangci.yml
vendor/go.opentelemetry.io/otel/CHANGELOG.md
vendor/go.opentelemetry.io/otel/CONTRIBUTING.md
vendor/go.opentelemetry.io/otel/Makefile
vendor/go.opentelemetry.io/otel/README.md
vendor/go.opentelemetry.io/otel/baggage/baggage.go
vendor/go.opentelemetry.io/otel/dependencies.Dockerfile
vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/client.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/version.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/client.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/version.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/client.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/observ/instrumentation.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/version.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/client.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/internal/observ/instrumentation.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/internal/version.go
vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/version.go
vendor/go.opentelemetry.io/otel/internal/errorhandler/errorhandler.go
vendor/go.opentelemetry.io/otel/internal/global/handler.go
vendor/go.opentelemetry.io/otel/internal/global/state.go
vendor/go.opentelemetry.io/otel/metric/asyncfloat64.go
vendor/go.opentelemetry.io/otel/metric/asyncint64.go
vendor/go.opentelemetry.io/otel/metric/meter.go
vendor/go.opentelemetry.io/otel/metric/syncfloat64.go
vendor/go.opentelemetry.io/otel/metric/syncint64.go
vendor/go.opentelemetry.io/otel/propagation/baggage.go
vendor/go.opentelemetry.io/otel/propagation/trace_context.go
vendor/go.opentelemetry.io/otel/requirements.txt
vendor/go.opentelemetry.io/otel/sdk/metric/internal/observ/instrumentation.go
vendor/go.opentelemetry.io/otel/sdk/metric/periodic_reader.go
vendor/go.opentelemetry.io/otel/sdk/metric/version.go
vendor/go.opentelemetry.io/otel/sdk/resource/builtin.go
vendor/go.opentelemetry.io/otel/sdk/resource/container.go
vendor/go.opentelemetry.io/otel/sdk/resource/env.go
vendor/go.opentelemetry.io/otel/sdk/resource/host_id.go
vendor/go.opentelemetry.io/otel/sdk/resource/host_id_readfile.go
vendor/go.opentelemetry.io/otel/sdk/resource/os.go
vendor/go.opentelemetry.io/otel/sdk/resource/process.go
vendor/go.opentelemetry.io/otel/sdk/trace/batch_span_processor.go
vendor/go.opentelemetry.io/otel/sdk/trace/internal/observ/batch_span_processor.go
vendor/go.opentelemetry.io/otel/sdk/trace/internal/observ/simple_span_processor.go
vendor/go.opentelemetry.io/otel/sdk/trace/internal/observ/tracer.go
vendor/go.opentelemetry.io/otel/sdk/trace/provider.go
vendor/go.opentelemetry.io/otel/sdk/trace/span.go
vendor/go.opentelemetry.io/otel/sdk/version.go
vendor/go.opentelemetry.io/otel/semconv/v1.39.0/MIGRATION.md
vendor/go.opentelemetry.io/otel/semconv/v1.39.0/README.md
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/MIGRATION.md
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/README.md
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/attribute_group.go
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/doc.go
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/error_type.go
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/exception.go
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/otelconv/metric.go
vendor/go.opentelemetry.io/otel/semconv/v1.40.0/schema.go
vendor/go.opentelemetry.io/otel/trace/auto.go
vendor/go.opentelemetry.io/otel/trace/trace.go
vendor/go.opentelemetry.io/otel/trace/tracestate.go
vendor/go.opentelemetry.io/otel/version.go
vendor/go.opentelemetry.io/otel/versions.yaml
vendor/golang.org/x/net/html/iter.go
vendor/golang.org/x/net/html/node.go
vendor/golang.org/x/net/html/nodetype_string.go
vendor/golang.org/x/net/http2/client_priority_go126.go
vendor/golang.org/x/net/http2/client_priority_go127.go
vendor/golang.org/x/net/http2/frame.go
vendor/golang.org/x/net/http2/http2.go
vendor/golang.org/x/net/http2/server.go
vendor/golang.org/x/net/http2/transport.go
vendor/golang.org/x/net/http2/writesched_priority_rfc7540.go
vendor/golang.org/x/net/internal/httpsfv/httpsfv.go
vendor/golang.org/x/net/internal/socket/empty.s
vendor/golang.org/x/net/internal/socket/msghdr_solaris_64bit.go
vendor/golang.org/x/net/internal/socket/socket.go
vendor/golang.org/x/net/internal/socket/sys.go
vendor/golang.org/x/net/internal/socket/sys_posix.go
vendor/golang.org/x/net/ipv4/header.go
vendor/golang.org/x/net/ipv6/control_rfc2292_unix.go
vendor/golang.org/x/net/ipv6/control_rfc3542_unix.go
vendor/golang.org/x/net/publicsuffix/data/children
vendor/golang.org/x/net/publicsuffix/data/nodes
vendor/golang.org/x/net/publicsuffix/data/text
vendor/golang.org/x/net/publicsuffix/table.go
vendor/golang.org/x/text/secure/bidirule/bidirule.go
vendor/golang.org/x/text/secure/bidirule/bidirule10.0.0.go
vendor/golang.org/x/text/secure/bidirule/bidirule9.0.0.go
vendor/golang.org/x/text/unicode/bidi/tables10.0.0.go
vendor/golang.org/x/text/unicode/bidi/tables11.0.0.go
vendor/golang.org/x/text/unicode/bidi/tables12.0.0.go
vendor/golang.org/x/text/unicode/bidi/tables13.0.0.go
vendor/
golang.org/x/text/unicode/bidi/tables15.0.0.govendor/golang.org/x/text/unicode/bidi/tables17.0.0.govendor/golang.org/x/text/unicode/bidi/tables9.0.0.govendor/golang.org/x/text/unicode/norm/forminfo.govendor/golang.org/x/text/unicode/norm/tables10.0.0.govendor/golang.org/x/text/unicode/norm/tables11.0.0.govendor/golang.org/x/text/unicode/norm/tables12.0.0.govendor/golang.org/x/text/unicode/norm/tables15.0.0.govendor/golang.org/x/text/unicode/norm/tables17.0.0.govendor/golang.org/x/text/unicode/norm/tables9.0.0.govendor/modules.txt
💤 Files with no reviewable changes (6)
- vendor/go.opentelemetry.io/otel/semconv/v1.39.0/MIGRATION.md
- vendor/go.opentelemetry.io/otel/semconv/v1.39.0/README.md
- vendor/golang.org/x/net/html/iter.go
- vendor/golang.org/x/net/internal/socket/sys.go
- vendor/golang.org/x/text/secure/bidirule/bidirule10.0.0.go
- vendor/golang.org/x/text/secure/bidirule/bidirule9.0.0.go
go.mod (Outdated)

    replace github.com/vmware/go-ipfix => github.com/jotak/go-ipfix v0.0.0-20250708115123-407c539ea101

    replace github.com/netobserv/flowlogs-pipeline => github.com/jotak/flowlogs-pipeline v0.0.0-20260319155004-bff214fed5a0
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Checking replace directives for flowlogs-pipeline..."
rg -n '^\s*replace\s+github.com/netobserv/flowlogs-pipeline' go.mod || true
echo
echo "Checking references to forked module..."
rg -n 'github.com/jotak/flowlogs-pipeline' go.mod go.sum || true

Repository: netobserv/netobserv-ebpf-agent
Length of output: 687
Remove the forked replace directive before merging to main.
Line 166 in go.mod redirects `github.com/netobserv/flowlogs-pipeline` to the personal fork `github.com/jotak/flowlogs-pipeline v0.0.0-20260319155004-bff214fed5a0`. Using a personal fork on the default branch creates supply-chain risk and maintenance burden; the directive should either point to the official upstream or be removed entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@go.mod` at line 166, The go.mod currently contains a replace directive
pointing to the personal fork "github.com/jotak/flowlogs-pipeline
v0.0.0-20260319155004-bff214fed5a0" that overrides
"github.com/netobserv/flowlogs-pipeline"; remove or replace that directive so
the module resolves to the upstream package instead of the fork. Locate the
replace line containing "replace github.com/netobserv/flowlogs-pipeline =>
github.com/jotak/flowlogs-pipeline" and either delete that replace statement or
update it to the official upstream module/version you intend to use, then run go
mod tidy to update go.sum and verify builds.
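The kind of gate this review asks for could be automated before merge. The sketch below is a hypothetical heuristic (not part of this repo): it scans go.mod text for single-line replace directives whose target module path differs from the module being replaced, which usually means "points at a fork". It does not handle `replace (...)` block syntax.

```go
package main

import (
	"fmt"
	"strings"
)

// forkedReplaces returns single-line replace directives whose target module
// path differs from the module being replaced — a rough "personal fork"
// heuristic. Hypothetical helper for illustration only.
func forkedReplaces(gomod string) []string {
	var out []string
	for _, line := range strings.Split(gomod, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "replace ") {
			continue
		}
		parts := strings.Split(strings.TrimPrefix(line, "replace "), "=>")
		if len(parts) != 2 {
			continue
		}
		orig := strings.Fields(parts[0])
		target := strings.Fields(parts[1])
		if len(orig) == 0 || len(target) == 0 {
			continue
		}
		if orig[0] != target[0] { // module path changed: likely a fork
			out = append(out, line)
		}
	}
	return out
}

func main() {
	sample := `module example.com/agent

replace github.com/netobserv/flowlogs-pipeline => github.com/jotak/flowlogs-pipeline v0.0.0-20260319155004-bff214fed5a0
replace github.com/foo/bar => github.com/foo/bar v1.2.3
`
	for _, r := range forkedReplaces(sample) {
		fmt.Println(r)
	}
}
```

In practice the fix itself is just `go mod edit -dropreplace=github.com/netobserv/flowlogs-pipeline` followed by `go mod tidy`, as the prompt above describes.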
	}

-	if rerr != nil && rerr != io.ErrUnexpectedEOF && err != io.EOF {
+	if rerr != nil && rerr != io.ErrUnexpectedEOF && rerr != io.EOF {
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify if other vendor files are modified and check go.mod for replace directives
# Check for other modified vendor files in this PR
echo "=== Other modified vendor files in this changeset ==="
git diff --name-only HEAD~1 | grep '^vendor/' || echo "None found in last commit"
# Check if go.mod has replace directive for minio-go
echo -e "\n=== Checking go.mod for minio-go replace directives ==="
rg -n "replace.*minio-go" go.mod || echo "No replace directive found"
# Search for similar error handling pattern inconsistencies
echo -e "\n=== Checking for similar error handling patterns in vendor file ==="
rg -n "err != io\.EOF" vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go

Repository: netobserv/netobserv-ebpf-agent
Length of output: 723
🏁 Script executed:
# Read the file to understand context around lines 355 and 528
cat -n vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go | sed -n '340,365p'

Repository: netobserv/netobserv-ebpf-agent
Length of output: 874
🏁 Script executed:
# Also read around line 528 to see the full context
cat -n vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go | sed -n '515,540p'

Repository: netobserv/netobserv-ebpf-agent
Length of output: 842
Remove vendor file modification or establish proper dependency override.
This vendored third-party file from minio-go/v7 is being modified in a PR about TLS tracking; the two are unrelated. Manual patches to vendor/ will be lost on the next `go mod vendor` or dependency update.
Either:
- Contribute the change upstream to minio-go, or
- Use a `replace` directive in go.mod to point to your fork
Also note: Line 355 in the same file has a pre-existing bug (`err != io.EOF` should be `rerr != io.EOF`), but that's a separate issue for upstream, not to be fixed here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go` at line 528,
This change modifies a vendored third-party file (the condition using rerr in
api-put-object-streaming.go) which must not be edited in-tree; either revert
this edit and upstream the fix or add a go.mod replace directive to point to a
fork that contains the change. Locate the conditional using the symbol rerr (and
related checks against io.ErrUnexpectedEOF and io.EOF) and remove/revert your
modifications in the vendor copy; instead implement the correction in your fork
and update go.mod with a replace to that fork or submit the patch upstream to
minio-go. Do not attempt to patch other pre-existing bugs in the vendored file
(for example the rerr vs err issue mentioned) in this PR.
arguments:
  - ["ID"] # AllowList
  - ["Otel", "Aws", "Gcp"] # DenyList
  - - skip-package-name-collision-with-go-std: true
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
python - <<'PY'
import sys, subprocess, pathlib
try:
import yaml
except Exception:
subprocess.check_call([sys.executable, "-m", "pip", "install", "pyyaml", "-q"])
import yaml
p = pathlib.Path("vendor/go.opentelemetry.io/otel/.golangci.yml")
cfg = yaml.safe_load(p.read_text())
rules = cfg["linters"]["settings"]["revive"]["rules"]
vn = next(r for r in rules if r.get("name") == "var-naming")
args = vn.get("arguments", [])
print("arguments:", args)
print("third_arg_type:", type(args[2]).__name__ if len(args) > 2 else "missing")
PY

Repository: netobserv/netobserv-ebpf-agent
Length of output: 191
Remove nested list in var-naming arguments at line 197.
The double dash (- -) wraps the argument in an extra list layer, making the revive rule receive [{...}] instead of {...}. This prevents the option from taking effect.
Fix
- - - skip-package-name-collision-with-go-std: true
+ - skip-package-name-collision-with-go-std: true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vendor/go.opentelemetry.io/otel/.golangci.yml` at line 197, The var-naming
rule's arguments contain an extra nested list marker ("- -") so the revive
option is passed as a single-item list instead of a map; edit the var-naming
arguments to remove the extra dash so the entry for
skip-package-name-collision-with-go-std appears as a normal map entry (i.e.,
change the "- - skip-package-name-collision-with-go-std: true" line to a
single-level "- skip-package-name-collision-with-go-std: true") so revive will
correctly recognize the option.
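For reference, this is the shape revive expects: each argument is one list item, and the third is a map entry at the same nesting level as the first two (a sketch of the intended fragment, not the full upstream config):

```yaml
arguments:
  - ["ID"]                 # AllowList
  - ["Otel", "Aws", "Gcp"] # DenyList
  - skip-package-name-collision-with-go-std: true
```

With the extra `- -`, a YAML parser yields `[[{...}]]` for the third argument instead of `{...}`, which is exactly what the verification script above observed.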
req.bodyReader = bodyReader(b.Bytes())
req.GetBody = bodyReaderErr(body)
Fix GetBody mismatch for gzip requests.
req.bodyReader uses compressed bytes, but req.GetBody is wired to uncompressed body. This can resend the wrong payload when GetBody is used.
Suggested fix
  req.bodyReader = bodyReader(b.Bytes())
- req.GetBody = bodyReaderErr(body)
+ req.GetBody = bodyReaderErr(b.Bytes())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@vendor/go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/client.go`
around lines 266 - 267, The request's GetBody is currently returning the
uncompressed `body` while `req.bodyReader` uses the compressed bytes
`b.Bytes()`, so when the request is retried GetBody will replay the wrong
(uncompressed) payload; change `req.GetBody` to return the same compressed
reader used by `req.bodyReader` (i.e. use `bodyReaderErr(b.Bytes())` or an
equivalent helper that wraps the compressed buffer) so `bodyReader`,
`req.bodyReader`, and `req.GetBody` are all consistent for gzip requests (refer
to `req.bodyReader`, `req.GetBody`, `bodyReader`, `bodyReaderErr`, `body`, and
`b.Bytes()`).
# Semconv v1.40.0

[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.40.0)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify semconv v1.40.0 was properly vendored and check go.mod for usage
# Check if the full package exists in vendor
fd -t d 'v1.40.0' vendor/go.opentelemetry.io/otel/semconv/
# Check go.mod for semconv references
rg -n 'go.opentelemetry.io/otel/semconv' go.mod
# Look for actual usage of semconv v1.40.0 in non-vendor code
rg -l 'semconv/v1.40.0' --glob '!vendor/**'

Repository: netobserv/netobserv-ebpf-agent
Length of output: 122
Remove unused vendored semconv v1.40.0 or justify its presence.
The semconv v1.40.0 package is vendored but has no references in go.mod and is not used in the codebase. Either remove it from vendor if it's not needed, or add it as an explicit dependency in go.mod if it is required.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vendor/go.opentelemetry.io/otel/semconv/v1.40.0/README.md` around lines 1 -
3, The vendored semconv package
(vendor/go.opentelemetry.io/otel/semconv/v1.40.0, e.g., README.md for semconv
v1.40.0) appears unused; either remove the entire vendor directory for
go.opentelemetry.io/otel/semconv/v1.40.0 from the repo or add it as an explicit
dependency in go.mod (run `go get go.opentelemetry.io/otel/semconv@v1.40.0` or
the appropriate version) and regenerate the vendor folder; update or remove any
related references in build scripts as needed so the project compiles cleanly
without stale vendored files.
func (mh *MetaHeadersFrame) rfc9218Priority(priorityAware bool) (p PriorityParam, priorityAwareAfter, hasIntermediary bool) {
	var s string
	for _, field := range mh.Fields {
		if field.Name == "priority" {
			s = field.Value
			priorityAware = true
		}
		if slices.Contains([]string{"via", "forwarded", "x-forwarded-for"}, field.Name) {
			hasIntermediary = true
		}
	}
	// No need to check for ok. parseRFC9218Priority will return a default
	// value if there is no priority field or if the field cannot be parsed.
	p, _ = parseRFC9218Priority(s, priorityAware && !hasIntermediary)
	return p, priorityAware, hasIntermediary
}
Combine repeated Priority headers instead of overwriting them.
HTTP field instances are comma-joined. With the current `s = field.Value`, `priority: u=0` followed by `priority: i` is parsed as only `i`, which changes the computed stream priority.
Proposed fix
for _, field := range mh.Fields {
if field.Name == "priority" {
- s = field.Value
+ if s == "" {
+ s = field.Value
+ } else {
+ s += "," + field.Value
+ }
priorityAware = true
		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vendor/golang.org/x/net/http2/frame.go` around lines 1651 - 1665, The
rfc9218Priority method on MetaHeadersFrame currently overwrites multiple
"priority" header instances by assigning s = field.Value; change it to combine
repeated priority header values the same way HTTP does (comma-join instances)
before calling parseRFC9218Priority. Locate MetaHeadersFrame.rfc9218Priority and
where s is set from mh.Fields and, instead of replacing s, append field.Value to
s separated by a comma when s is non-empty (trimming whitespace as needed) so
parseRFC9218Priority receives the combined header string; keep priorityAware and
hasIntermediary logic unchanged.
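The comma-joining rule is independent of HTTP/2 specifics and can be sketched with a plain helper (hypothetical types, not x/net's): all instances of a repeated field combine into one comma-separated value before structured-field parsing, rather than last-writer-wins.

```go
package main

import (
	"fmt"
	"strings"
)

// headerField is a minimal stand-in for an HPACK-decoded header field.
type headerField struct {
	Name, Value string
}

// combineFieldValues joins every instance of name with commas, following the
// HTTP field-combining rule (RFC 9110 §5.2), instead of keeping only the
// last instance seen.
func combineFieldValues(fields []headerField, name string) string {
	var vals []string
	for _, f := range fields {
		if f.Name == name {
			vals = append(vals, f.Value)
		}
	}
	return strings.Join(vals, ",")
}

func main() {
	fields := []headerField{
		{"priority", "u=0"},
		{"content-type", "text/html"},
		{"priority", "i"},
	}
	fmt.Println(combineFieldValues(fields, "priority")) // u=0,i
}
```

Parsing the combined `u=0,i` yields both the urgency and the incremental flag, matching what a compliant sender intended when it split the field across two instances.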
case SettingNoRFC7540Priorities:
	if s.Val > 1 {
		return ConnectionError(ErrCodeProtocol)
	}
Treat NO_RFC7540_PRIORITIES=1 as a priority-awareness signal.
Right now we only validate the setting. A peer that already sent SETTINGS_NO_RFC7540_PRIORITIES=1 still gets the legacy incremental default until it also sends a Priority header or PRIORITY_UPDATE, which defeats this negotiation for the first requests on the connection.
Proposed fix
case SettingNoRFC7540Priorities:
if s.Val > 1 {
return ConnectionError(ErrCodeProtocol)
}
+ if s.Val == 1 {
+ sc.priorityAware = true
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vendor/golang.org/x/net/http2/server.go` around lines 1831 - 1834, When
handling SettingNoRFC7540Priorities, besides validating s.Val, treat s.Val==1 as
the peer signalling RFC7540 priority-awareness: set the connection-level
flag/state so the server uses RFC7540-style priorities for new requests (instead
of the legacy incremental default) and avoid waiting for a per-stream Priority
header or PRIORITY_UPDATE; keep the existing ConnectionError(ErrCodeProtocol)
return for s.Val>1. Locate the switch case on SettingNoRFC7540Priorities and
update the connection state (the flag used by stream creation/priority logic)
and call the codepath that applies/recalculates initial priorities for streams
that have no explicit Priority yet so the first requests on the connection use
RFC7540 semantics.
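The negotiation described above reduces to a small state transition, sketched here as a toy model (applyNoRFC7540Priorities is hypothetical, not x/net's actual code): 0 and 1 are the only legal values, and receiving 1 should immediately mark the connection as priority-aware rather than waiting for a later Priority header.

```go
package main

import (
	"errors"
	"fmt"
)

var errProtocol = errors.New("PROTOCOL_ERROR")

// applyNoRFC7540Priorities models handling of SETTINGS_NO_RFC7540_PRIORITIES:
// values above 1 are a connection error, and 1 flips the connection's
// priority-awareness flag as a side effect of validation.
func applyNoRFC7540Priorities(val uint32, priorityAware bool) (bool, error) {
	if val > 1 {
		return priorityAware, errProtocol
	}
	if val == 1 {
		priorityAware = true
	}
	return priorityAware, nil
}

func main() {
	aware, err := applyNoRFC7540Priorities(1, false)
	fmt.Println(aware, err) // true <nil>
}
```

Folding the flag update into the validation step is what makes the very first requests on the connection use the negotiated scheme instead of the legacy default.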
s = s[countLeftWhitespace(s):]
if len(s) == 0 {
	break
}
if s[0] == ',' {
	s = s[1:]
}
s = s[countLeftWhitespace(s):]
if len(s) == 0 {
	return false
}
Reject dictionary members without a comma.
ParseDictionary falls through when more input remains but the next byte is not `,`. That makes malformed structured fields like `u=0 i` parse successfully, so invalid Priority / PRIORITY_UPDATE values can be accepted and applied instead of rejected.
Proposed fix
if len(s) == 0 {
break
}
- if s[0] == ',' {
- s = s[1:]
- }
+ if s[0] != ',' {
+ return false
+ }
+ s = s[1:]
s = s[countLeftWhitespace(s):]
if len(s) == 0 {
return false
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vendor/golang.org/x/net/internal/httpsfv/httpsfv.go` around lines 244 - 254,
ParseDictionary currently advances past a parsed item then trims whitespace but
allows fall-through when the next byte isn't a comma, letting inputs like "u=0
i" be accepted; in ParseDictionary (the loop handling s, countLeftWhitespace and
commas) update the logic so that after trimming whitespace and confirming
len(s)>0 you explicitly check if s[0] != ',' and if so return false (reject the
dictionary member), otherwise consume the comma and continue; ensure you still
trim whitespace after consuming the comma as in the existing flow.
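The strict separator rule can be stripped down to a toy parser (parseDictKeys below, not the actual httpsfv code): after each member, the only legal continuations are end-of-input or a comma, and anything else rejects the whole field. A trailing comma with nothing after it is also malformed.

```go
package main

import (
	"fmt"
	"strings"
)

// parseDictKeys is a toy strict parser for a structured-field dictionary's
// member list. It returns the member keys and whether the input is valid:
// members must be separated by commas, so "u=0 i" is rejected while
// "u=0, i" is accepted.
func parseDictKeys(s string) ([]string, bool) {
	var keys []string
	for {
		s = strings.TrimLeft(s, " \t")
		// Read one member (key or key=value) up to whitespace or comma.
		i := strings.IndexAny(s, " \t,")
		var member string
		if i < 0 {
			member, s = s, ""
		} else {
			member, s = s[:i], s[i:]
		}
		if member == "" {
			return nil, false // empty member (e.g. leading comma)
		}
		key := member
		if j := strings.IndexByte(member, '='); j >= 0 {
			key = member[:j]
		}
		keys = append(keys, key)
		s = strings.TrimLeft(s, " \t")
		if s == "" {
			return keys, true // clean end of input
		}
		if s[0] != ',' {
			return nil, false // strict: only a comma may follow a member
		}
		s = strings.TrimLeft(s[1:], " \t")
		if s == "" {
			return nil, false // trailing comma is malformed
		}
	}
}

func main() {
	keys, ok := parseDictKeys("u=0, i")
	fmt.Println(keys, ok) // [u i] true
}
```

The vendored bug is the missing `s[0] != ','` rejection: skipping the comma check when it is absent silently fuses two members.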
|
@jotak: This pull request references NETOBSERV-2471 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set. DetailsIn response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
New images: quay.io/netobserv/netobserv-ebpf-agent:893fd75
quay.io/netobserv/ebpf-bytecode:893fd75
They will expire in two weeks. To deploy this build, run from the operator repo, assuming the operator is running: USER=netobserv VERSION=893fd75 make set-agent-image |
|
|
/label qe-approved |
|
|
/test qe-e2e-tests |
|
@jotak: The following test failed, say
Full PR test history. Your PR dashboard. DetailsInstructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
* NETOBSERV-2471: TLS usage tracking
  - Start implementing TLS, by reading the TLS header when present
  - Extract SSL version (not done yet for the handshake message)
  - Report the TLS version in output records
* Check handshake version, use bpf_skb_load_bytes
* Show ~ on version mismatch
* Implement 1.3 / handshake extension reading
* bump flp
* Manage various edge cases on version mismatch
* Add curves, polishing
* fix format
* Make TLS tracking configurable
* remove unused constants
* add comments, remove unused code
* remove replace directive
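The record-header inspection these commits describe can be sketched in userspace Go (the eBPF version reads the packet via bpf_skb_load_bytes instead; parseTLSRecordVersion is an illustrative helper, not the agent's code):

```go
package main

import "fmt"

// parseTLSRecordVersion inspects the first bytes of a TCP payload for a TLS
// handshake record header (content type 0x16) and returns the legacy record
// version, e.g. 0x0301, which ClientHellos commonly advertise there. For
// TLS 1.3 the real version is negotiated via the supported_versions
// extension, which is why the commits above also parse handshake extensions.
func parseTLSRecordVersion(payload []byte) (uint16, bool) {
	if len(payload) < 5 {
		return 0, false // record header is type(1) + version(2) + length(2)
	}
	if payload[0] != 0x16 { // 0x16 = handshake content type
		return 0, false
	}
	v := uint16(payload[1])<<8 | uint16(payload[2])
	if v>>8 != 0x03 { // all SSL3/TLS versions have major byte 0x03
		return 0, false
	}
	return v, true
}

func main() {
	v, ok := parseTLSRecordVersion([]byte{0x16, 0x03, 0x03, 0x00, 0x2f})
	fmt.Printf("%v 0x%04x\n", ok, v) // true 0x0303
}
```

The "show ~ on version mismatch" commit corresponds to the case where the record-layer version and the handshake-advertised version disagree, which this sketch does not attempt to resolve.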
Fixes netobserv/netobserv-operator#2492