Improve Dataflash Save To File #4627
Conversation
Walkthrough
Protocol handler stops logging unknown PROTO_IDENTIFIER chunks and defaults to IDLE.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
actor User
participant UI as Onboard Logging UI
participant MSPH as MSPHelper.dataflashRead
participant MSP as MSP Transport
participant Codec as Huffman Decoder
User->>UI: Start flash save
loop For each address block
UI->>MSPH: dataflashRead(address, blockSize, onDataCallback)
MSPH->>MSP: Send request {address, blockSize, compressionFlag}
MSP-->>MSPH: Response {chunkAddress, dataSize, dataCompressionType, payload}
alt Address matches
alt dataCompressionType == 1
MSPH->>Codec: Decode(payload)
Codec-->>MSPH: decompressed DataView
MSPH-->>UI: onDataCallback(address, DataView, bytesCompressed)
else dataCompressionType == 0
MSPH-->>UI: onDataCallback(address, DataView(payload), dataSize)
end
else Address mismatch / missing
MSPH-->>UI: onDataCallback(address, null, 0)
end
alt payload valid and non-empty
UI->>UI: Write to file, update progress and totals
else if empty (EOF)
UI->>UI: Complete and close file
else invalid/null
UI->>UI: Retry up to MAX_SIMPLE_RETRIES with backoff, else skip ahead by blockSize and log warning
end
end
UI-->>User: Final stats (kB / s, mean compression factor)
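The retry/skip policy in the diagram above can be sketched as a pure decision function. This is a hypothetical sketch, not code from the PR: `nextAction` and its return shape are illustrative, and only `MAX_SIMPLE_RETRIES` and `blockSize` mirror names actually used; the backoff delay is omitted for brevity.

```javascript
// Hypothetical sketch of the UI loop's per-chunk decision:
// non-empty -> write and advance; empty -> EOF; null -> retry, then skip.
const MAX_SIMPLE_RETRIES = 1;

function nextAction(chunk, retryCount, address, blockSize) {
    if (chunk && chunk.byteLength > 0) {
        // Valid payload: write it and advance by the delivered length.
        return { action: "write", nextAddress: address + chunk.byteLength, retryCount: 0 };
    }
    if (chunk && chunk.byteLength === 0) {
        // Zero-length chunk signals end of flash contents.
        return { action: "done" };
    }
    if (retryCount < MAX_SIMPLE_RETRIES) {
        // Null/missing block: retry the same address (with backoff in the PR).
        return { action: "retry", nextAddress: address, retryCount: retryCount + 1 };
    }
    // Give up on this block: skip ahead by blockSize to keep the download moving.
    return { action: "skip", nextAddress: address + blockSize, retryCount: 0 };
}
```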
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes
Suggested labels
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests
📜 Recent review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/js/msp/MSPHelper.js (3)
2467-2472
: Minor: simplify MSP instance lookup. Given MSP is imported at module scope, the typeof check is unnecessary. Use the imported MSP directly and keep the optional instance override.
- const mspObj = this.msp || (typeof MSP !== "undefined" ? MSP : null);
+ const mspObj = this.msp || MSP;
2482-2488
: Handle unknown compression types by delivering raw payload. If dataCompressionType is neither 0 nor 1, we currently return null. Safer to deliver the raw payload so the caller can still advance.
 if (chunkAddress === address) {
     try {
         if (dataCompressionType === 0) {
             payloadView = new DataView(
                 response.data.buffer,
                 response.data.byteOffset + headerSize,
                 dataSize,
             );
             bytesCompressed = dataSize; // treat uncompressed as same size
-        } else if (dataCompressionType === 1) {
+        } else if (dataCompressionType === 1) {
             // ... (compressed handling as above)
+        } else {
+            // Unknown compression: deliver raw payload
+            payloadView = new DataView(
+                response.data.buffer,
+                response.data.byteOffset + headerSize,
+                dataSize,
+            );
+            bytesCompressed = dataSize;
         }
     } catch (e) {

Also applies to: 2513-2521
2529-2534
: Downgrade noisy logs or gate behind DEBUG flag. These per‑block logs can flood the console during large downloads. Consider using a DEBUG guard or a throttled counter.
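A throttled counter along the lines suggested could look like the following. This is a hypothetical helper, not part of the PR; it prints the first occurrence and then only every Nth one (assumes `everyN > 1`), counting the rest.

```javascript
// Hypothetical throttled-warning helper for per-block messages:
// prints the 1st, (N+1)th, (2N+1)th... occurrence and suppresses the rest.
function makeThrottledWarn(everyN) {
    let count = 0;
    return (message) => {
        count++;
        if (count % everyN === 1) {
            console.warn(`${message} (${count} occurrences so far)`);
            return true; // printed
        }
        return false; // suppressed
    };
}
```

For example, a download with thousands of CRC warnings would emit one console line per hundred blocks instead of one per block.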
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/js/msp.js (1 hunks)
src/js/msp/MSPHelper.js (1 hunks)
src/js/tabs/onboard_logging.js (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/js/msp/MSPHelper.js (2)
src/js/msp.js (1)
MSP (5-468)
src/js/default_huffman_tree.js (4)
defaultHuffmanTree (3-261)
defaultHuffmanTree (3-261)
defaultHuffmanLenIndex (263-273)
defaultHuffmanLenIndex (263-273)
src/js/tabs/onboard_logging.js (2)
src/js/msp/MSPHelper.js (2)
bytesCompressed (2480-2480)
mspHelper (2882-2882)
src/js/serial_backend.js (1)
mspHelper (32-32)
🔇 Additional comments (5)
src/js/msp.js (1)
125-127
: LGTM: removed noisy identifier logging. Silent fallback to IDLE reduces log spam without changing behavior.
src/js/tabs/onboard_logging.js (4)
446-447
: LGTM: progress log formatting fix. Spacing/format consistency looks good.
449-456
: Compression factor logging will be accurate after fixing the bytesCompressed source. The computation is correct; it relies on MSPHelper providing true transmitted-byte counts for each chunk.
After applying the MSPHelper fix, verify a sample download logs a mean compression factor > 1 when compression is active.
495-516
: Good: always writing non-empty chunks and advancing ensures progress. This matches the PR goal to avoid stalls. Progress updates and termination checks look correct.
516-531
: Graceful handling of EOF and null blocks. EOF on zero-length and skip-ahead on null yields resilient behavior.
let outData = [
    address & 0xff,
    (address >> 8) & 0xff,
    (address >> 16) & 0xff,
    (address >> 24) & 0xff,
    blockSize & 0xff,
    (blockSize >> 8) & 0xff,
    1, // allow compression
];
🛠️ Refactor suggestion
🧩 Analysis chain
Gate the new request format by API version (avoid breaking older FCs).
You’ve changed MSP_DATAFLASH_READ to include blockSize and a compression flag. Some firmware builds expect only the 4‑byte address. Add an API guard and fall back to the legacy 4‑byte request for older targets.
- let outData = [
- address & 0xff,
- (address >> 8) & 0xff,
- (address >> 16) & 0xff,
- (address >> 24) & 0xff,
- blockSize & 0xff,
- (blockSize >> 8) & 0xff,
- 1, // allow compression
- ];
+ const supportsLenAndCompression = semver.gte(FC.CONFIG.apiVersion, API_VERSION_1_47); // confirm required version
+ let outData = [
+ address & 0xff,
+ (address >> 8) & 0xff,
+ (address >> 16) & 0xff,
+ (address >> 24) & 0xff,
+ ];
+ if (supportsLenAndCompression) {
+ outData.push(
+ blockSize & 0xff,
+ (blockSize >> 8) & 0xff,
+ 1, // allow compression
+ );
+ }
If you’re unsure of the minimal API version that accepts (addr, len, compressFlag), please confirm before merging.
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo ">>> rg for MSP_DATAFLASH_READ"
rg -n --hidden "MSP_DATAFLASH_READ" || true
echo -e "\n>>> locate MSPHelper.js"
rg -n --hidden "MSPHelper.js" || true
echo -e "\n>>> print src/js/msp/MSPHelper.js around 2440-2475 if present"
if [ -f "src/js/msp/MSPHelper.js" ]; then
sed -n '2438,2475p' src/js/msp/MSPHelper.js || true
else
echo "file not found: src/js/msp/MSPHelper.js"
fi
echo -e "\n>>> search for API_VERSION_* and apiVersion"
rg -n --hidden "API_VERSION_" || true
rg -n --hidden "\bapiVersion\b" || true
echo -e "\n>>> search for FC.CONFIG"
rg -n --hidden "FC\.CONFIG" || true
echo -e "\n>>> search for semver.gte usage"
rg -n --hidden "semver\.gte" || true
echo -e "\n>>> search for dataflash references"
rg -n --hidden "dataflash|DATAFLASH" || true
Length of output: 148916
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo ">>> show MSPHelper.js 1408-1465"
sed -n '1408,1465p' src/js/msp/MSPHelper.js
Length of output: 2817
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo ">>> show test/js/msp/MSPHelper.test.js"
sed -n '1,240p' test/js/msp/MSPHelper.test.js || true
Length of output: 7050
Gate MSP_DATAFLASH_READ format behind an API-version check (avoid breaking older FCs).
New format adds blockSize + compression flag; older firmware can expect only the 4‑byte address — guard and fall back to the legacy 4‑byte request.
Location: src/js/msp/MSPHelper.js — dataflashRead (around lines 2456–2475)
- let outData = [
- address & 0xff,
- (address >> 8) & 0xff,
- (address >> 16) & 0xff,
- (address >> 24) & 0xff,
- blockSize & 0xff,
- (blockSize >> 8) & 0xff,
- 1, // allow compression
- ];
+ const supportsLenAndCompression = semver.gte(FC.CONFIG.apiVersion, API_VERSION_1_47); // confirm required version
+ let outData = [
+ address & 0xff,
+ (address >> 8) & 0xff,
+ (address >> 16) & 0xff,
+ (address >> 24) & 0xff,
+ ];
+ if (supportsLenAndCompression) {
+ outData.push(
+ blockSize & 0xff,
+ (blockSize >> 8) & 0xff,
+ 1, // allow compression
+ );
+ }
Confirm the minimal API version that accepts (addr, len, compressFlag) before merging.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/js/msp/MSPHelper.js around lines 2456–2475 the dataflashRead currently
always builds a 7‑byte request (addr + blockSize + compression flag) which will
break older flight controllers that expect only a 4‑byte address; change
dataflashRead to gate the new format behind an API/version check and fall back
to the legacy 4‑byte request when the FC reports an older API: detect the
MSP/API version available (or use the agreed minimal API constant — replace the
placeholder with the confirmed minimal version before merging), and if version
>= minimal use the current 7‑byte layout (address low->high, blockSize
low->high, compression flag) otherwise send only the 4 bytes of address; ensure
the code paths are explicit and documented with a TODO to confirm and set the
exact minimal API version.
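As a self-contained illustration of the version-gated layout proposed above, the request bytes can be built by a pure function. This is a hypothetical sketch: `buildDataflashReadRequest` is not a function in the codebase, and the gating predicate (here a plain boolean) stands in for the API-version check whose minimal version is still to be confirmed.

```javascript
// Hypothetical sketch: build the MSP_DATAFLASH_READ payload, falling back
// to the legacy 4-byte address-only request for older firmware.
function buildDataflashReadRequest(address, blockSize, supportsLenAndCompression) {
    const out = [
        address & 0xff,           // address, little-endian
        (address >> 8) & 0xff,
        (address >> 16) & 0xff,
        (address >> 24) & 0xff,
    ];
    if (supportsLenAndCompression) {
        out.push(
            blockSize & 0xff,     // requested length, little-endian
            (blockSize >> 8) & 0xff,
            1,                    // allow compression
        );
    }
    return out;
}
```

Older targets then receive exactly the 4 bytes they expect, while newer ones get the 7-byte (addr, len, compressFlag) form.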
const compressedCharCount = response.data.readU16();
const compressedArray = new Uint8Array(
    response.data.buffer,
    response.data.byteOffset + headerSize + 2,
    dataSize - 2,
);
const decompressedArray = huffmanDecodeBuf(
    compressedArray,
    compressedCharCount,
    defaultHuffmanTree,
    defaultHuffmanLenIndex,
);
payloadView = new DataView(decompressedArray.buffer);
bytesCompressed = compressedCharCount;
}
bytesCompressed is wrong for compressed chunks (breaks compression stats).
For type 1 (Huffman), bytesCompressed should reflect on‑wire payload size, not the uncompressed char count. This skews the mean compression factor.
- } else if (dataCompressionType === 1) {
- const compressedCharCount = response.data.readU16();
+ } else if (dataCompressionType === 1) {
+ const uncompressedLen = response.data.readU16();
const compressedArray = new Uint8Array(
response.data.buffer,
response.data.byteOffset + headerSize + 2,
- dataSize - 2,
+ dataSize - 2,
);
const decompressedArray = huffmanDecodeBuf(
compressedArray,
- compressedCharCount,
+ uncompressedLen,
defaultHuffmanTree,
defaultHuffmanLenIndex,
);
- payloadView = new DataView(decompressedArray.buffer);
- bytesCompressed = compressedCharCount;
+ payloadView = new DataView(decompressedArray.buffer, 0, uncompressedLen);
+ // Count the transmitted payload bytes (includes the 2-byte uncompressedLen header)
+ bytesCompressed = dataSize;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const uncompressedLen = response.data.readU16();
const compressedArray = new Uint8Array(
    response.data.buffer,
    response.data.byteOffset + headerSize + 2,
    dataSize - 2,
);
const decompressedArray = huffmanDecodeBuf(
    compressedArray,
    uncompressedLen,
    defaultHuffmanTree,
    defaultHuffmanLenIndex,
);
payloadView = new DataView(decompressedArray.buffer, 0, uncompressedLen);
// Count the transmitted payload bytes (includes the 2-byte uncompressedLen header)
bytesCompressed = dataSize;
}
🤖 Prompt for AI Agents
In src/js/msp/MSPHelper.js around lines 2498 to 2512, bytesCompressed is
incorrectly set to the uncompressed character count for Huffman-compressed
chunks; update the assignment so bytesCompressed reflects the on-wire compressed
payload size (use the compressedArray length or dataSize - 2) after constructing
compressedArray/decompressedArray so compression stats use the actual
transmitted byte count.
i'm curious to the source of the issue, because i only ever use the "unsupported" "save as file" and do not have issues.
On MacOSX, as a Chrome web app, saving the flash file stalls nearly every time, unless I use this PR. Over the last year or so it has gone from being usable to being totally impossible. It seems like it chokes when attempting to retry after a crc failure.
@ctzsnooze please check latest build of the configurator app
two Linux variants
@nerdCopter this is a typical console log on mac osx for a save to file using app.betaflight.com in Chrome. hangs every time.
with the PR, it sometimes works. the console log:
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/js/tabs/onboard_logging.js (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/js/tabs/onboard_logging.js (2)
src/js/msp/MSPHelper.js (3)
chunkAddress (2484-2484)
bytesCompressed (2480-2480)
mspHelper (2882-2882)
src/js/serial_backend.js (1)
mspHelper (32-32)
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/js/tabs/onboard_logging.js (3)
449-456
: Compression metric can be misleading when blocks are skipped. meanCompressionFactor uses totalBytes (nextAddress), which also advances on skipped blocks. Track delivered (actually written) bytes and pass that to mark_saving_dialog_done to keep the metric and kB/s accurate.
Apply this minimal change:
@@
-    let nextAddress = 0;
-    let totalBytesCompressed = 0;
+    let nextAddress = 0;
+    let deliveredBytes = 0;
+    let totalBytesCompressed = null;
@@
-    if (chunkDataView && chunkDataView.byteLength > 0) {
+    if (chunkDataView && chunkDataView.byteLength > 0) {
@@
-        FileSystem.writeChunck(openedFile, blob);
+        // write, then account delivered bytes (see writeChunck typo fix below)
+        FileSystem.writeChunk(openedFile, blob);
+        deliveredBytes += chunkDataView.byteLength;
@@
-        if (typeof bytesCompressed === "number") {
-            if (totalBytesCompressed == null) totalBytesCompressed = 0;
+        if (typeof bytesCompressed === "number") {
+            if (totalBytesCompressed == null) totalBytesCompressed = 0;
             totalBytesCompressed += bytesCompressed;
         }
@@
-        if (saveCancelled || nextAddress >= maxBytes) {
-            mark_saving_dialog_done(startTime, nextAddress, totalBytesCompressed);
+        if (saveCancelled || nextAddress >= maxBytes) {
+            mark_saving_dialog_done(startTime, deliveredBytes, totalBytesCompressed);
             FileSystem.closeFile(openedFile);
         } else {
@@
-    } else if (chunkDataView && chunkDataView.byteLength === 0) {
+    } else if (chunkDataView && chunkDataView.byteLength === 0) {
@@
-        mark_saving_dialog_done(startTime, nextAddress, totalBytesCompressed);
+        mark_saving_dialog_done(startTime, deliveredBytes, totalBytesCompressed);
         FileSystem.closeFile(openedFile);
@@
-        if (saveCancelled || nextAddress >= maxBytes) {
-            mark_saving_dialog_done(startTime, nextAddress, totalBytesCompressed);
+        if (saveCancelled || nextAddress >= maxBytes) {
+            mark_saving_dialog_done(startTime, deliveredBytes, totalBytesCompressed);
             FileSystem.closeFile(openedFile);
         } else {
490-513
: Initialize totalBytesCompressed coherently. You set totalBytesCompressed = 0 and later guard "== null". Initialize to null so the guard is meaningful, or drop the guard. The diff above switches to null.
498-499
: Consider validating chunkAddress vs expected offset. Ignoring chunkAddress removes a sanity check that could detect out‑of‑sequence or duplicate chunks. At minimum, warn on mismatch; optionally pad gaps with zeros if preserving offsets matters.
Example:
- async function onChunkRead(chunkAddress, chunkDataView, bytesCompressed) {
+ async function onChunkRead(chunkAddress, chunkDataView, bytesCompressed) {
+     if (chunkDataView && chunkDataView.byteLength > 0 && chunkAddress !== nextAddress) {
+         console.warn(`Address mismatch: expected ${nextAddress}, got ${chunkAddress}`);
+     }
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/js/tabs/onboard_logging.js (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/js/tabs/onboard_logging.js (1)
src/js/msp/MSPHelper.js (3)
chunkAddress (2484-2484)
bytesCompressed (2480-2480)
mspHelper (2882-2882)
🔇 Additional comments (3)
src/js/tabs/onboard_logging.js (3)
446-446
: Log formatting tweak — LGTM. The "kB / s" spacing/readability change looks good.
551-551
: startTime placement — likely fine; verify no TDZ/lint issues. onChunkRead closes over startTime but won't be invoked until after it's assigned. Ensure linters don't flag use-before-define.
528-546
: Cancellation still falls through the retry/skip path. When chunkDataView is null you still issue a retry before checking saveCancelled. Bail out early to make "Cancel" responsive. This mirrors a prior review; repeating here for the updated code.
- } else {
-     // Null/missing block
+ } else {
+     // Null/missing block
+     if (saveCancelled || nextAddress >= maxBytes) {
+         mark_saving_dialog_done(startTime, deliveredBytes, totalBytesCompressed);
+         FileSystem.closeFile(openedFile);
+         return;
+     }
      if (simpleRetryCount < MAX_SIMPLE_RETRIES) {
          simpleRetryCount++;
          console.warn(`Null/missing block at ${nextAddress}, retry ${simpleRetryCount}`);
          mspHelper.dataflashRead(nextAddress, self.blockSize, onChunkRead);
      } else {
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/js/FileSystem.js (1 hunks)
src/js/tabs/logging.js (1 hunks)
src/js/tabs/onboard_logging.js (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-28T12:38:59.502Z
Learnt from: ctzsnooze
PR: betaflight/betaflight-configurator#4627
File: src/js/tabs/onboard_logging.js:504-506
Timestamp: 2025-09-28T12:38:59.502Z
Learning: In the Betaflight Configurator codebase, the FileSystem class has a method named `writeChunck` (with 'ck') - this is the correct method name to use despite the apparent typo. The FileSystem API includes writeChunck(writable, chunk), openFile(file), and closeFile(writable).
Applied to files:
src/js/FileSystem.js
src/js/tabs/logging.js
src/js/tabs/onboard_logging.js
🧬 Code graph analysis (1)
src/js/tabs/onboard_logging.js (2)
src/js/msp/MSPHelper.js (3)
chunkAddress (2484-2484)
bytesCompressed (2480-2480)
mspHelper (2882-2882)
src/js/serial_backend.js (1)
mspHelper (32-32)
🔇 Additional comments (1)
src/js/tabs/onboard_logging.js (1)
528-545
: Cancellation still bypassed on retry path. We still re-issue `dataflashRead` for null blocks without first re-checking `saveCancelled`/`maxBytes`, so cancelling during a stall continues to loop until flash is exhausted. Please bail out before retrying, matching the guard on the success path. (Same issue noted earlier.)

 // Null/missing block
 if (simpleRetryCount < MAX_SIMPLE_RETRIES) {
+    if (saveCancelled || nextAddress >= maxBytes) {
+        mark_saving_dialog_done(startTime, nextAddress, totalBytesCompressed);
+        FileSystem.closeFile(openedFile);
+        return;
+    }
+
     simpleRetryCount++;
     console.warn(`Null/missing block at ${nextAddress}, retry ${simpleRetryCount}`);
     mspHelper.dataflashRead(nextAddress, self.blockSize, onChunkRead);
 async writeChunk(writable, chunk) {
     writable.write(chunk);
 }

 async closeFile(writable) {
-    await writable.close();
+    writable.close();
 }
Restore promise chaining in FileSystem.writeChunk/closeFile

By dropping the `await` (or a returned promise) we now resolve these async wrappers immediately, so downstream `.catch` handlers never fire and any `write`/`close` failure becomes an unhandled rejection. Please keep propagating the original promises so callers can observe failures and so we don't mark the transfer complete before the data actually hits disk.
async writeChunk(writable, chunk) {
- writable.write(chunk);
+ return writable.write(chunk);
}
async closeFile(writable) {
- writable.close();
+ return writable.close();
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async writeChunk(writable, chunk) {
    return writable.write(chunk);
}

async closeFile(writable) {
    return writable.close();
}
🤖 Prompt for AI Agents
In src/js/FileSystem.js around lines 101 to 107, the async wrappers drop the
underlying promises by not returning the results of writable.write/close;
restore promise chaining so errors propagate by returning (or awaiting) the
original promises instead of resolving immediately — e.g. change writeChunk to
return writable.write(chunk) (or await and return) and change closeFile to
return writable.close() (or await and return) so callers can catch write/close
failures.
I tried to fix the async/await as above, but this just made it stall much more frequently, with logs like this:
Block at 3328 received (3584 bytes)
main-a-P9nJjl.js:3959 Block at 6912 received (11520 bytes)
main-a-P9nJjl.js:3959 Block at 18432 received (11264 bytes)
main-a-P9nJjl.js:3959 Block at 29696 received (11520 bytes)
main-a-P9nJjl.js:3959 Block at 41216 received (11776 bytes)
main-a-P9nJjl.js:3959 Block at 52992 received (11520 bytes)
main-a-P9nJjl.js:3871 MSP: data request timed-out: 71 ID: undefined TAB: onboard_logging TIMEOUT: 200 QUEUE: 3 (71,116,110)
(anonymous) @ main-a-P9nJjl.js:3871
setTimeout
send_message @ main-a-P9nJjl.js:3871
St.dataflashRead @ main-a-P9nJjl.js:3959
P @ onboard_logging-BWWMMWsx.js:1
E @ onboard_logging-BWWMMWsx.js:1
setTimeout
(anonymous) @ onboard_logging-BWWMMWsx.js:1
await in (anonymous)
(anonymous) @ main-a-P9nJjl.js:3959
St.process_data @ main-a-P9nJjl.js:3959
(anonymous) @ main-a-P9nJjl.js:3871
notify @ main-a-P9nJjl.js:3871
_dispatch_message @ main-a-P9nJjl.js:3871
read @ main-a-P9nJjl.js:3871
uk @ main-a-P9nJjl.js:3925
um @ main-a-P9nJjl.js:3925
(anonymous) @ main-a-P9nJjl.js:3871
readLoop @ main-a-P9nJjl.js:3871
await in readLoop
connect @ main-a-P9nJjl.js:3871
await in connect
connect @ main-a-P9nJjl.js:3871
ko @ main-a-P9nJjl.js:3925
dispatch @ DarkTheme-DuP9Rj_R.js:10
nt.handle @ DarkTheme-DuP9Rj_R.js:10Understand this warning
main-a-P9nJjl.js:3871 MSP: data request timed-out: 116 ID: undefined TAB: onboard_logging TIMEOUT: 202 QUEUE: 3 (71,116,110)
(anonymous) @ main-a-P9nJjl.js:3871
setTimeout
send_message @ main-a-P9nJjl.js:3871
(anonymous) @ main-a-P9nJjl.js:3871
promise @ main-a-P9nJjl.js:3871
dk @ main-a-P9nJjl.js:3925
await in dk
fk @ main-a-P9nJjl.js:3925
setInterval
hk @ main-a-P9nJjl.js:3925
_1 @ main-a-P9nJjl.js:3925
ck @ main-a-P9nJjl.js:3925
St.process_data @ main-a-P9nJjl.js:3959
(anonymous) @ main-a-P9nJjl.js:3871
notify @ main-a-P9nJjl.js:3871
_dispatch_message @ main-a-P9nJjl.js:3871
read @ main-a-P9nJjl.js:3871
uk @ main-a-P9nJjl.js:3925
um @ main-a-P9nJjl.js:3925
(anonymous) @ main-a-P9nJjl.js:3871
readLoop @ main-a-P9nJjl.js:3871
await in readLoop
connect @ main-a-P9nJjl.js:3871
await in connect
connect @ main-a-P9nJjl.js:3871
ko @ main-a-P9nJjl.js:3925
dispatch @ DarkTheme-DuP9Rj_R.js:10
nt.handle @ DarkTheme-DuP9Rj_R.js:10Understand this warning
main-a-P9nJjl.js:3959 code: 71 (MSP_DATAFLASH_READ) - crc failed
St.process_data @ main-a-P9nJjl.js:3959
(anonymous) @ main-a-P9nJjl.js:3871
notify @ main-a-P9nJjl.js:3871
_dispatch_message @ main-a-P9nJjl.js:3871
read @ main-a-P9nJjl.js:3871
uk @ main-a-P9nJjl.js:3925
um @ main-a-P9nJjl.js:3925
(anonymous) @ main-a-P9nJjl.js:3871
readLoop @ main-a-P9nJjl.js:3871
await in readLoop
connect @ main-a-P9nJjl.js:3871
await in connect
connect @ main-a-P9nJjl.js:3871
ko @ main-a-P9nJjl.js:3925
dispatch @ DarkTheme-DuP9Rj_R.js:10
nt.handle @ DarkTheme-DuP9Rj_R.js:10Understand this warning
main-a-P9nJjl.js:3959 Expected address 64512 but received null
onboard_logging-BWWMMWsx.js:1 Null/missing block at 64512, retry 1, backoff 50ms
(anonymous) @ onboard_logging-BWWMMWsx.js:1
(anonymous) @ main-a-P9nJjl.js:3959
St.process_data @ main-a-P9nJjl.js:3959
(anonymous) @ main-a-P9nJjl.js:3871
notify @ main-a-P9nJjl.js:3871
_dispatch_message @ main-a-P9nJjl.js:3871
read @ main-a-P9nJjl.js:3871
uk @ main-a-P9nJjl.js:3925
um @ main-a-P9nJjl.js:3925
(anonymous) @ main-a-P9nJjl.js:3871
readLoop @ main-a-P9nJjl.js:3871
await in readLoop
connect @ main-a-P9nJjl.js:3871
await in connect
connect @ main-a-P9nJjl.js:3871
ko @ main-a-P9nJjl.js:3925
dispatch @ DarkTheme-DuP9Rj_R.js:10
nt.handle @ DarkTheme-DuP9Rj_R.js:10Understand this warning
main-a-P9nJjl.js:3959 CRC error or missing data at address 64512 - delivering whatever we got
main-a-P9nJjl.js:3871 MSP: data request timed-out: 110 ID: undefined TAB: onboard_logging TIMEOUT: 201 QUEUE: 3 (116,110,71)
(anonymous) @ main-a-P9nJjl.js:3871
setTimeout
send_message @ main-a-P9nJjl.js:3871
(anonymous) @ main-a-P9nJjl.js:3871
promise @ main-a-P9nJjl.js:3871
dk @ main-a-P9nJjl.js:3925
fk @ main-a-P9nJjl.js:3925
setInterval
hk @ main-a-P9nJjl.js:3925
_1 @ main-a-P9nJjl.js:3925
ck @ main-a-P9nJjl.js:3925
St.process_data @ main-a-P9nJjl.js:3959
(anonymous) @ main-a-P9nJjl.js:3871
notify @ main-a-P9nJjl.js:3871
_dispatch_message @ main-a-P9nJjl.js:3871
read @ main-a-P9nJjl.js:3871
uk @ main-a-P9nJjl.js:3925
um @ main-a-P9nJjl.js:3925
(anonymous) @ main-a-P9nJjl.js:3871
readLoop @ main-a-P9nJjl.js:3871
await in readLoop
connect @ main-a-P9nJjl.js:3871
await in connect
connect @ main-a-P9nJjl.js:3871
ko @ main-a-P9nJjl.js:3925
dispatch @ DarkTheme-DuP9Rj_R.js:10
nt.handle @ DarkTheme-DuP9Rj_R.js:10Understand this warning
main-a-P9nJjl.js:3871 MSP: data request timed-out: 71 ID: undefined TAB: onboard_logging TIMEOUT: 201 QUEUE: 4 (116,110,71,110)
(anonymous) @ main-a-P9nJjl.js:3871
setTimeout
send_message @ main-a-P9nJjl.js:3871
St.dataflashRead @ main-a-P9nJjl.js:3959
P @ onboard_logging-BWWMMWsx.js:1
setTimeout
(anonymous) @ onboard_logging-BWWMMWsx.js:1
(anonymous) @ main-a-P9nJjl.js:3959
St.process_data @ main-a-P9nJjl.js:3959
(anonymous) @ main-a-P9nJjl.js:3871
notify @ main-a-P9nJjl.js:3871
_dispatch_message @ main-a-P9nJjl.js:3871
read @ main-a-P9nJjl.js:3871
uk @ main-a-P9nJjl.js:3925
um @ main-a-P9nJjl.js:3925
(anonymous) @ main-a-P9nJjl.js:3871
readLoop @ main-a-P9nJjl.js:3871
await in readLoop
connect @ main-a-P9nJjl.js:3871
await in connect
connect @ main-a-P9nJjl.js:3871
ko @ main-a-P9nJjl.js:3925
dispatch @ DarkTheme-DuP9Rj_R.js:10
nt.handle @ DarkTheme-DuP9Rj_R.js:10Understand this warning
@ctzsnooze, thank you for testing the suggested change! Your experience confirms something important about this specific use case. The logs show exactly what the retrieved learnings indicated - the MSP queue becomes overwhelmed when the download loop is slowed down.
By removing the `await`, you've made the file write operations non-blocking from the perspective of the download loop. This keeps the loop running fast enough to prevent the MSP queue from backing up. When you restored the `await`, it introduced delays that allowed the queue to grow faster than it could be processed, leading to the timeouts and exponential queue growth you observed.
In this specific context, the "fire-and-forget" approach for file operations is actually the correct solution. The download process prioritizes throughput over immediate error detection from file writes. Since the FileStream API is designed to handle writes asynchronously anyway, and you're processing blocks sequentially, the risk of unhandled write failures is minimal compared to the benefit of preventing MSP queue stalls.
This is a good example of where the "correct" async/await pattern can actually be detrimental to performance in a specific use case. Your original implementation correctly prioritizes keeping the download loop responsive.
✏️ Learnings added
Learnt from: ctzsnooze
PR: betaflight/betaflight-configurator#4627
File: src/js/FileSystem.js:101-107
Timestamp: 2025-09-28T16:39:04.087Z
Learning: In the Betaflight Configurator dataflash download implementation, removing await from FileSystem.writeChunk() and closeFile() is intentional to prevent MSP queue overwhelm. Awaiting file I/O operations slows the download loop enough to cause MSP timeouts and queue growth, leading to stalls. The non-blocking approach prioritizes download throughput over immediate file write error detection.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: ctzsnooze
PR: betaflight/betaflight-configurator#4627
File: src/js/tabs/onboard_logging.js:504-506
Timestamp: 2025-09-28T12:38:59.502Z
Learning: In the Betaflight Configurator codebase, the FileSystem class has a method named `writeChunck` (with 'ck') - this is the correct method name to use despite the apparent typo. The FileSystem API includes writeChunck(writable, chunk), openFile(file), and closeFile(writable).
Learnt from: ctzsnooze
PR: betaflight/betaflight-configurator#4627
File: src/js/tabs/onboard_logging.js:504-506
Timestamp: 2025-09-28T12:56:15.651Z
Learning: The Betaflight Configurator MSP queue can become overwhelmed during dataflash reads when retry logic is used, even with minimal retries (MAX_SIMPLE_RETRIES = 1), because failed blocks cause exponential queue growth. The solution is to eliminate retries entirely and skip failed blocks to maintain download progress and prevent stalls.
mac specific variable backoff queue based attempt, try a retry delay
(force-pushed e087714 to 83e8efd)
Preview URL: https://pr4627.betaflight-app.pages.dev
For a very long time, the 'unsupported' 'Save Flash to File' button in the Blackbox tab would randomly stall while saving.
This PR, written with help from ChatGPT, doesn't stall nearly as often. By saving all blocks even if the CRC fails, and not attempting retries, the download progresses quickly and without hanging. Although the data may contain some errors, the logs appear the same in Blackbox log viewer.
The main end-user benefits include a standard file dialog in which the file can be pre-named at save time; the resulting file is not locked and can easily be renamed; and there is no need to exit Mass Storage Mode, so the Configurator session can remain active and there is no need to re-connect the FC to continue working in Configurator.
If we can retry on CRC errors without locking up the save process, that would be worth looking into.
It may be possible to improve the 'retry' logic, but most logs made this way appear to contain the same data as those copied by the Mass Storage method, when reviewed in Blackbox log viewer.
I removed the console logging of the protocol character errors, which seemed generally unhelpful, but am happy to reverse that if recommended.
Summary by CodeRabbit
New Features
Bug Fixes
Style