
Conversation

@wafgo

wafgo commented Aug 13, 2025

Previously, a new buffer pool was allocated for every received frame-sized packet. This led to several critical issues:

  • Excessive memory consumption: H.264 frames vary widely in size, causing significant memory usage over time.

  • File descriptor leaks: Each invocation of `gstreamer::BufferPool::new()` created a new socketpair, resulting in a steady increase in open file descriptors. This can be observed with:

    watch -n1 "ls -l /proc/<PID>/fd | wc -l"

    Over time, this would exhaust available file descriptors.

  • Application instability: On devices such as the Lumus cam, memory usage would continuously rise (over 7-8 GiB after 5 hours), eventually leading to a crash.

This commit resolves these issues by reusing buffer pools where possible and preventing unnecessary allocation of resources. It allocates slightly more memory per frame than strictly needed, since buffer sizes are rounded up to the next power of two, but that overhead is worth it to stabilize the application.

Tested-by: Wadim Mueller <[email protected]>
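
A minimal sketch of the reuse idea described above, assuming pools are cached in a `HashMap` keyed by the rounded-up size. The `PoolCache` type and its methods are illustrative, not the actual code in this PR, and the gstreamer-specific pool configuration is omitted:

```rust
use std::collections::HashMap;

/// Illustrative cache of buffer pools, keyed by the power-of-two size class
/// a frame falls into.
struct PoolCache {
    pools: HashMap<usize, gstreamer::BufferPool>,
}

impl PoolCache {
    fn new() -> Self {
        Self { pools: HashMap::new() }
    }

    /// Return a pool sized for `len` bytes, creating one only the first time
    /// a given size class is seen. Rounding up to the next power of two wastes
    /// a little memory per frame but keeps the number of distinct pools (and
    /// therefore socketpairs / file descriptors) small in practice.
    fn pool_for(&mut self, len: usize) -> &gstreamer::BufferPool {
        let class = len.next_power_of_two();
        self.pools
            .entry(class)
            .or_insert_with(gstreamer::BufferPool::new)
    }
}
```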

@jmoney7823956789378

Seems like even the max buffer isn't large enough for some 4K cameras, as it filled instantly and was still unable to stream. I have 16 of them though, so maybe my use case is an outlier.

@jsonn

jsonn commented Oct 12, 2025

I've seen packets up to ~500 KB as well, so the 64 KB size seems too limited.

@wafgo
Author

wafgo commented Oct 12, 2025

Fair enough! I increased the bucket size to 1M; hopefully that is enough for all frames. But is this project dead? It looks like nobody is planning to merge this :(

@jmoney7823956789378

jmoney7823956789378 commented Oct 12, 2025

@wafgo I'll give it a try today! Don't lose hope :)

edit: no luck, still running into an absolutely obliterated buffer even when testing 1 camera:

[2025-10-12T17:11:16Z WARN  neolink_core::bc_protocol::connection::bcconn] Reaching limit of channel
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] Buffer full on vidsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] Failed to send to source: App source is closed
[2025-10-12T17:11:16Z WARN  neolink_core::bc_protocol::connection::bcconn] Remaining: 0 of 100 message space for 28 (ID: 3)
[2025-10-12T17:11:16Z WARN  neolink_core::bc_protocol::connection::bcconn] Reaching limit of channel
[2025-10-12T17:11:16Z WARN  neolink_core::bc_protocol::connection::bcconn] Remaining: 0 of 100 message space for 28 (ID: 3)
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=65536
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO  neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:18Z INFO  neolink::rtsp::factory] Failed to send to source: App source is closed
[2025-10-12T17:11:18Z INFO  neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=65536
[2025-10-12T17:11:18Z INFO  neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=1024
[2025-10-12T17:11:18Z INFO  neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=512

This is even after attempting `const MAX_BUCKET: usize = 131072 * 1024;`, so I'm unsure whether this means we need to go even bigger or we're approaching it wrong. (I know nothing about Rust.)

    }
    Ok(buf)
} else {
    // Fallback without pooling


I'm wondering if we should not pool at all. In my case, with a single camera (Reolink Doorbell), there are SO many different sizes coming out of `data.len()` that the hash map ends up growing like crazy over time. Eventually it will become so saturated that it will no longer be possible to hold buffers in the hash map.

We could GC the hash map once in a while but it sort of defeats the pooling a bit. It's also a tradeoff on extra cycles as the map will need to be cleaned out and resized periodically which is not great.

Is there a strong reason to track buffers in the hash map to begin with? I don't think I understand the reason for why we got here in the first place.


Some other thoughts: if the pooling is in place to reduce memory pressure and the cycles spent creating buffers over and over (how bad is this, though?), then maybe we could keep a fixed set of reusable buffers in the map. For example:

  1. Say you have 800,000 different sizes coming at you from `data.len()` over the course of several hours.
  2. The hash map holds buckets of buffers, with buffer sizes of 256 bytes, 512 bytes, 1 KB, 2 KB, 4 KB, and so on, all the way up to 1 MB.
  3. When data comes in, it is placed into the smallest bucket that fits it. For example, if 342 KB comes in, it goes into the buffer in the 512 KB bucket.
  4. Once the data in this buffer has been used, it is cleared out, but the buffer itself is not deallocated. It stays in the map, waiting to handle the next round of data that requires it.

We could extend the max size of buffers in the hash map to 8 MB or whatever's needed. What's important is that the hash map never grows unbounded. This would also allow getting rid of the logic that falls back to not pooling anything.
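
A rough sketch of that bounded-bucket idea, assuming power-of-two size classes between a fixed minimum and maximum. The names `BoundedBuckets`, `MIN_BUCKET`, and `MAX_BUCKET` are hypothetical, not from this PR, and the actual fallback and pool configuration logic is left out:

```rust
use std::collections::HashMap;

// Illustrative bounds only; the real cap is whatever the project settles on.
const MIN_BUCKET: usize = 256;
const MAX_BUCKET: usize = 1024 * 1024;

/// Hypothetical bounded pool map: at most one entry per power-of-two size
/// class between MIN_BUCKET and MAX_BUCKET, so it can never grow past
/// log2(MAX_BUCKET / MIN_BUCKET) + 1 entries no matter how many distinct
/// `data.len()` values show up.
struct BoundedBuckets {
    buckets: HashMap<usize, gstreamer::BufferPool>,
}

impl BoundedBuckets {
    fn new() -> Self {
        Self { buckets: HashMap::new() }
    }

    /// Map `len` to its size class; `None` means the payload is larger than
    /// the cap and should fall back to a one-off, unpooled allocation.
    fn class_for(len: usize) -> Option<usize> {
        let class = len.next_power_of_two().max(MIN_BUCKET);
        (class <= MAX_BUCKET).then_some(class)
    }

    /// Reuse the pool for this size class, creating it lazily the first time.
    fn pool_for(&mut self, len: usize) -> Option<&gstreamer::BufferPool> {
        let class = Self::class_for(len)?;
        Some(
            self.buckets
                .entry(class)
                .or_insert_with(gstreamer::BufferPool::new),
        )
    }
}
```

Keeping the key set fixed caps the map at a handful of entries, which addresses the unbounded-growth concern above without giving up reuse.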
