
fix: replace VLA with std::vector in LoRa driver #54

Merged
gdellis merged 1 commit into main from fix/core-driver-fixes on Apr 19, 2026

Conversation

@gdellis (Owner) commented Apr 17, 2026

Summary

Fixes issue #42: VLA (Variable Length Array) in LoRa driver receive()

Problem: receive() used a C99 VLA, uint8_t rx_data[*actual_len + 1], which is not standard C++ and is dangerous on embedded systems with limited stack space.

Solution: Replaced VLA with std::vector<uint8_t> which provides dynamic allocation without stack pressure.

Changes

  • firmware/main/lora/sx1262.cpp:
    • Added #include <vector>
    • Changed uint8_t rx_data[*actual_len + 1] to std::vector<uint8_t> rx_data(*actual_len + 1)
    • Updated read_reg and memcpy calls to use .data() method


Testing

Build passes for ESP32-S3 target.

@gdellis force-pushed the fix/core-driver-fixes branch from 727b2d7 to ae86afe on April 18, 2026 at 23:58
@github-actions

PR #54 Review: fix: replace VLA with std::vector in LoRa driver

CI Status

Some checks are still IN_PROGRESS (Build Firmware, Host Tests, CodeQL). The completed checks passed:

  • ✅ Lint (pre-commit)
  • ✅ Lint C++
  • ✅ Check Firmware Changed
  • ⚠️ CodeQL (neutral - no findings)

Code Changes Review

-		uint8_t rx_data[*actual_len + 1];
-		read_reg (cmd, rx_data, *actual_len + 1);
-		memcpy (data, rx_data + rx_start, *actual_len);
+		std::vector<uint8_t> rx_data (*actual_len + 1);
+		read_reg (cmd, rx_data.data (), *actual_len + 1);
+		memcpy (data, rx_data.data () + rx_start, *actual_len);

Issues Found

[BEST PRACTICE] [LOW] - Consider heap usage on embedded device

The PR title notes this is about avoiding stack pressure, but std::vector still allocates from the heap. On embedded systems with a fragmented heap, or in interrupt context, allocation can fail or take unbounded time. The change is still an improvement over a VLA, but be aware that std::vector allocation is not deterministic.


Positive Observations

  1. [BEST PRACTICE] [LOW] VLA removal is correct: VLAs are not part of any C++ standard (they are a C99 feature supported only as a compiler extension) and are dangerous on embedded systems with limited stack
  2. [BEST PRACTICE] [LOW] Proper use of .data() method for accessing vector contents
  3. [BEST PRACTICE] [LOW] Memory sizing matches original behavior (*actual_len + 1)
  4. [BEST PRACTICE] [LOW] The +1 accounts for the implicit length byte that chip returns, which is correct

No Critical Issues

  • No buffer overflow concerns with the vector size calculation
  • rx_start offset handling is correct (chip provides this value)
  • memcpy size parameter matches *actual_len correctly
  • read_reg signature accepts uint8_t* so .data() is appropriate

Summary

No blocking issues found; the single LOW-severity note above is informational. The VLA replacement is a solid fix. CI is still running, but no failures have been detected. The change is minimal, focused, and achieves its goal of eliminating stack-based VLA usage.

New session - 2026-04-18T23:58:56.336Z
opencode session  |  github run

@gdellis (Owner, Author) commented Apr 19, 2026

Acknowledged the heap usage consideration. std::vector was chosen over static arrays because:

  1. The receive buffer size is only known at runtime from the chip
  2. It avoids stack pressure that VLAs cause on embedded systems
  3. Heap allocation, while not deterministic, fails in a detectable way (a null return or allocation fault), whereas a VLA-driven stack overflow on a limited-memory device can silently corrupt memory

For this specific use case (not in interrupt context), the trade-off is acceptable. If deterministic allocation becomes critical, a static buffer with max size could be used instead.

Thanks for the review!

@gdellis gdellis merged commit ece6748 into main Apr 19, 2026
11 checks passed
@gdellis gdellis deleted the fix/core-driver-fixes branch on April 19, 2026 at 00:27