-
@vaimer TTL is a massive can of worms and a set of completely different code paths. One major difference, namely that expiration is not "an equivalent of acknowledgement", has been documented for years: messages are discarded due to expiration only when they reach the head of the queue, which implies an active consumer. There is also documentation on quorum queue tuning for large messages. 4.1.0 made segment cleanup significantly more aggressive for the typical case where messages are consumed. I can see how slow segment cleanup in that case could be considered a bug, but message expiration per TTL is a completely different beast, and I somewhat doubt that our team will get to it before 4.4 or so. It is simply not a common enough operational problem (the common one was addressed in 4.1.0). That said, RabbitMQ is open source software and you are welcome to investigate what can be done, convince our team that a specific approach is worth its downsides, and submit a PR.
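A minimal sketch of that distinction, assuming pika, a local broker, and made-up queue/exchange names (`ttl.demo`, `dlx.demo`):

```python
# Sketch only: why per-message TTL is not "an equivalent of acknowledgement".
# Queue/exchange names and the local connection settings are assumptions.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# A quorum queue that dead-letters expired messages to a separate exchange.
ch.exchange_declare("dlx.demo", exchange_type="direct", durable=True)
ch.queue_declare(
    "ttl.demo",
    durable=True,
    arguments={
        "x-queue-type": "quorum",
        "x-dead-letter-exchange": "dlx.demo",
    },
)

# Per-message TTL: the message is discarded (and dead-lettered) only once it
# reaches the head of the queue, not the moment the 16 s elapse. A consumer
# acknowledgement, by contrast, removes the message as soon as it is acked.
ch.basic_publish(
    exchange="",
    routing_key="ttl.demo",
    body=b"payload",
    properties=pika.BasicProperties(delivery_mode=2, expiration="16000"),
)
conn.close()
```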
-
Could you run
-
Describe the bug
RabbitMQ version 4.1.3
raft.wal_max_size_bytes = 64000000
raft.segment_max_entries = 4096
Set-up:
`fair.retryTest.1` is a durable quorum queue with TTL = 16000; after expiration, messages are moved to the `deadletter-fair-test` durable quorum queue by routing key `fair-test`.
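A sketch of how this topology might be declared with pika; the dead-letter exchange name `dlx.fair` is an assumption, since only the queue names and the routing key are given above:

```python
# Sketch of the described set-up, assuming a direct dead-letter exchange
# named "dlx.fair" and a local broker.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.exchange_declare("dlx.fair", exchange_type="direct", durable=True)

# Target queue for expired messages, bound by the routing key "fair-test".
ch.queue_declare("deadletter-fair-test", durable=True,
                 arguments={"x-queue-type": "quorum"})
ch.queue_bind("deadletter-fair-test", "dlx.fair", routing_key="fair-test")

# Source queue: durable quorum queue with a 16 s message TTL; expired messages
# are dead-lettered to "dlx.fair" with routing key "fair-test".
ch.queue_declare(
    "fair.retryTest.1",
    durable=True,
    arguments={
        "x-queue-type": "quorum",
        "x-message-ttl": 16000,
        "x-dead-letter-exchange": "dlx.fair",
        "x-dead-letter-routing-key": "fair-test",
    },
)
conn.close()
```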
Issue:
Segments for `fair.retryTest.1` aren't cleaned up after the messages are moved, which causes a disk space leak. In some cases, segments are cleaned up only when the queue is removed.
Purging messages from `deadletter-fair-test` doesn't release all of the consumed memory. The same behaviour is not reproducible on version 3.11.3.
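One way to observe the reported behaviour is to compare each queue's `messages` and `memory` fields from the management HTTP API before and after expiration or a purge; a sketch assuming the management plugin on localhost and default credentials:

```python
# Sketch: read queue message counts and reported memory from the management API
# (assumes the management plugin on localhost:15672 and guest/guest credentials).
# Note: "memory" is the queue process's reported memory; on-disk segment files
# are tracked separately.
import requests

def queue_stats(name, vhost="%2F"):
    r = requests.get(
        f"http://localhost:15672/api/queues/{vhost}/{name}",
        auth=("guest", "guest"),
        timeout=10,
    )
    r.raise_for_status()
    q = r.json()
    return q["messages"], q["memory"]

for queue in ("fair.retryTest.1", "deadletter-fair-test"):
    messages, memory = queue_stats(queue)
    print(f"{queue}: {messages} messages, {memory / 1024 / 1024:.1f} MiB reported")
```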
Reproduction steps
Expected behavior
Segments are cleaned up after messages are moved to another queue (the same behaviour as with acknowledgement).
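For comparison, the acknowledgement path that this expectation refers to looks roughly like the following sketch, assuming pika and the queue declared in the set-up above:

```python
# Sketch of the "consume and acknowledge" path the report compares against:
# once messages are acked, the broker can eventually truncate the segments
# that held them.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

def handle(channel, method, properties, body):
    # ... process the message ...
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="fair.retryTest.1", on_message_callback=handle)
ch.start_consuming()
```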
Additional context
First test case
Publishing messages with an approximate message size of 300 KB.
Leader for `fair.retryTest.1`: memory consumption before and after the publishing process.
Almost 150 MB is still in use by the queue and its segments.
Second test case
Publishing messages with an approximate message size of more than 15 MB.
The same situation as in the first case, with higher memory consumption.
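The publishing side of the first test case could be reproduced with something like the following sketch; the message count is an assumption, only the approximate message size is stated above:

```python
# Sketch reproducing the first test case: publish a number of ~300 KB persistent
# messages to fair.retryTest.1 and let the 16 s TTL dead-letter them.
# The message count (500) is an assumption; the post does not state one.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

payload = b"x" * 300_000  # roughly 300 KB

for _ in range(500):
    ch.basic_publish(
        exchange="",
        routing_key="fair.retryTest.1",
        body=payload,
        properties=pika.BasicProperties(delivery_mode=2),
    )
conn.close()
```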