
fix: add input validation for protocol decoding and fragment reassembly #666

Open
0x0v1 wants to merge 6 commits into permissionlesstech:main from 0x0v1:main

Conversation


@0x0v1 0x0v1 commented Feb 6, 2026

No description provided.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 85a0499ae3


0x0v1 and others added 2 commits February 6, 2026 14:03
…cate/retransmitted fragments don't inflate the counter
fix: subtract the old entry's size before adding the new one
Collaborator

@callebtc callebtc left a comment


Thank you for this PR! The OOM protections and fragment caps are exactly what is needed to prevent memory exhaustion and malicious payload attacks. The fix in BinaryProtocol.kt handling Throwable for OOM errors is a great touch.
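For readers outside the codebase, the Throwable-based OOM guard praised above can be sketched in plain Java (the app itself is Kotlin; `decodeOrNull` and the decoder lambdas here are hypothetical illustrations, not the actual BinaryProtocol API):

```java
import java.util.function.Function;

public class SafeDecode {
    // Catching Throwable (not just Exception) also intercepts Errors such as
    // OutOfMemoryError thrown while allocating buffers for an oversized,
    // attacker-controlled payload. Malformed or hostile input fails closed
    // (returns null) instead of crashing the process.
    static <T> T decodeOrNull(byte[] packet, Function<byte[], T> decoder) {
        try {
            return decoder.apply(packet);
        } catch (Throwable t) {
            return null;
        }
    }

    public static void main(String[] args) {
        // A decoder that blows up with an Error, as a huge allocation might.
        Function<byte[], String> hostile = b -> { throw new OutOfMemoryError("simulated"); };
        Function<byte[], String> benign = b -> "ok";

        System.out.println(decodeOrNull(new byte[0], hostile)); // null
        System.out.println(decodeOrNull(new byte[0], benign));  // ok
    }
}
```

Note that catching Throwable this broadly is only appropriate at a boundary that parses untrusted input; elsewhere it would mask genuine programming errors.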

However, there is a critical concurrency issue in FragmentManager.kt that must be addressed before this can be merged. handleFragment is a suspend function executed concurrently, and the current implementation introduces check-then-act race conditions and uses a non-thread-safe inner collection (mutableMapOf()), which can lead to ConcurrentModificationExceptions, dropped fragments, or bypassed size caps.

The minimal fix:

Replace the 29 lines of size tracking and insertion with an atomic compute block, and initialize the inner map as a ConcurrentHashMap.

```kotlin
val fragmentIDString = fragmentPayload.getFragmentIDString()

Log.d(TAG, "Received fragment ${fragmentPayload.index}/${fragmentPayload.total} for fragmentID: $fragmentIDString, originalType: ${fragmentPayload.originalType}")

val maxFragments = com.bitchat.android.util.AppConstants.Fragmentation.MAX_FRAGMENTS_PER_ID
if (fragmentPayload.total > maxFragments) {
    Log.w(TAG, "Rejecting fragment with excessive total count: ${fragmentPayload.total} > $maxFragments")
    return null
}

val maxTotalBytes = com.bitchat.android.util.AppConstants.Fragmentation.MAX_FRAGMENT_TOTAL_BYTES
var rejected = false

// Use compute to atomically update the maps for this specific fragmentID
incomingFragments.compute(fragmentIDString) { _, currentMap ->
    val map = currentMap ?: java.util.concurrent.ConcurrentHashMap<Int, ByteArray>().also {
        // Initialize metadata for new fragments
        fragmentMetadata[fragmentIDString] = Triple(
            fragmentPayload.originalType,
            fragmentPayload.total,
            System.currentTimeMillis()
        )
        fragmentCumulativeSize[fragmentIDString] = 0
    }

    val currentSize = fragmentCumulativeSize[fragmentIDString] ?: 0
    val oldEntrySize = map[fragmentPayload.index]?.size ?: 0
    val newSize = currentSize - oldEntrySize + fragmentPayload.data.size

    if (newSize > maxTotalBytes) {
        Log.w(TAG, "Rejecting fragment for $fragmentIDString: cumulative size $newSize exceeds cap $maxTotalBytes")
        fragmentMetadata.remove(fragmentIDString)
        fragmentCumulativeSize.remove(fragmentIDString)
        rejected = true
        return@compute null // Returning null removes the entry from incomingFragments
    }

    fragmentCumulativeSize[fragmentIDString] = newSize
    map[fragmentPayload.index] = fragmentPayload.data
    map
}

if (rejected) return null

val fragmentMap = incomingFragments[fragmentIDString]
if (fragmentMap != null && fragmentMap.size == fragmentPayload.total) {
    Log.d(TAG, "All fragments received for $fragmentIDString, reassembling...")
```
This guarantees atomicity per fragmentIDString and eliminates the risk of ConcurrentModificationException while correctly applying the byte size caps under concurrent load. Please apply this change and I'll gladly approve!
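For intuition, the per-key atomicity of `ConcurrentHashMap.compute` that this fix relies on can be exercised in a small, self-contained Java sketch (class, method, and key names are illustrative, not from the codebase):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ComputeAtomicityDemo {
    // Hammer one key from many threads via compute(). The remapping function
    // runs atomically per key, so this read-modify-write cannot lose updates.
    static int hammer(int threads, int perThread) throws InterruptedException {
        ConcurrentHashMap<String, Integer> sizes = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    sizes.compute("frag-id", (k, v) -> (v == null ? 0 : v) + 1);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return sizes.get("frag-id");
    }

    public static void main(String[] args) throws InterruptedException {
        // A non-atomic get-then-put version would typically fall short here.
        System.out.println(hammer(8, 10_000)); // 80000
    }
}
```

One caveat worth noting for the real code: the Javadoc warns that the remapping function must be short and must not update other mappings of the *same* map; updating the sibling `fragmentMetadata`/`fragmentCumulativeSize` maps from inside the block is fine, but those side maps are only protected while every writer goes through this `compute` path.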

0x0v1 added 2 commits March 5, 2026 16:19
…g., fragments for the same fragmentID arriving from multiple peers/relays); triggering this requires multiple devices connected to the mesh. Even with a per-fragmentID byte cap, an attacker could open many fragment IDs at once and force the device to buffer a large amount of fragment data overall, risking memory pressure or OOM. This fix makes fragment handling atomic and thread-safe. When a fragment index is retransmitted, we compute the size delta (new - old) so duplicates don't inflate counters and can't be used to bypass limits. Under heavy load or attack, the app now drops or rejects fragment sets earlier instead of growing memory without bound, reducing OOM risk. Tested on Pixel 6 and Pixel 8.
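The size-delta rule described in that commit message can be illustrated with a minimal, single-threaded Java sketch (names are hypothetical; the real code tracks bytes per fragmentID in concurrent maps and applies the cap inside the atomic `compute` block):

```java
import java.util.HashMap;
import java.util.Map;

public class FragmentSizeAccounting {
    // Store fragment `index` and return the updated cumulative byte count,
    // charging only the delta (new - old) so a retransmitted duplicate
    // cannot inflate the counter and be used to trip the cap artificially.
    static int store(Map<Integer, byte[]> fragments, int cumulative,
                     int index, byte[] data) {
        byte[] old = fragments.put(index, data);
        int oldSize = (old == null) ? 0 : old.length;
        return cumulative - oldSize + data.length;
    }

    public static void main(String[] args) {
        Map<Integer, byte[]> fragments = new HashMap<>();
        int cumulative = 0;
        cumulative = store(fragments, cumulative, 0, new byte[100]);
        cumulative = store(fragments, cumulative, 1, new byte[50]);
        // Retransmit index 0 with the same payload size: no net growth.
        cumulative = store(fragments, cumulative, 0, new byte[100]);
        System.out.println(cumulative); // 150
    }
}
```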
@0x0v1 0x0v1 requested a review from callebtc March 5, 2026 17:49
