Add optional async API for non-lossy non-blocking writes #3238
Comments
I don't think what you're describing here makes sense. [...] So, unfortunately, I don't think this is something that we could do.
Thanks for the eyes @hawkw! I work with Amberly/@rcoh, so jumping in to share some context. To be clear: she/I are glad to spend the time to make any such contributions, make sure they are groomed to reduce overhead for you and other maintainers, etc. And if you don't have time to discuss / review this right now, completely understood! Mostly just wanted to float the idea to get feedback :)

Intended use case

The use case this is targeting is less about interfacing with the tracing APIs, which can't be async. It's more about cases where somebody is using `NonBlocking` directly. I see this use case frequently in my company outside of logging contexts, where there are some other sorts of records being written to disk. That also tends to be when the non-lossy mode is used: log records you generally want to drop in the case of backpressure, but other records might be more critical.

How it would be used

The usage mode would need to be a net new API, presumably behind a feature flag. Meaning, the default behavior of `NonBlocking` would be unchanged. I personally have projects that would use this, where I currently am using non-lossy mode to flush non-log records in an async context that is important but in the background compared to the request/response flow. When my application is under heavy load, I am nervous about backpressure degrading latency on tasks that sit in the end-to-end path, but I can't afford to drop records, so I accept that for now. So this API would be useful for me, and I imagine for others. We could clearly document the cases where the non-lossy `NonBlocking` appender will block, and what API to use instead in async contexts.

Is this crate the right place for this?

It seems like this could live in a standalone crate instead. But it just felt like that would fragment the ecosystem and not be very useful for other consumers of `tracing-appender`. Clearly we shouldn't do anything that adds overhead / dependencies to the existing usage or otherwise worsens the tracing integration. But this seems like an additive change if it is opt-in under a feature flag?

Future extensions (that do interop with tracing)

I'd love to see something along the lines of another usage mode for the lossy appender that is just a generic sync `Write`.
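The backpressure concern above can be modeled with the standard library alone. This is a simplified sketch of the pattern (a bounded channel drained by a writer thread), not `tracing-appender`'s actual implementation; it shows why a non-lossy `send` stalls the caller once the buffer fills:

```rust
use std::io::Write;
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::thread;

// Simplified model of a non-lossy non-blocking writer: callers push
// records into a bounded channel, and a worker thread drains it into
// the underlying writer (here, an in-memory Vec instead of a file).
fn spawn_writer(capacity: usize) -> (SyncSender<Vec<u8>>, thread::JoinHandle<Vec<u8>>) {
    let (tx, rx): (SyncSender<Vec<u8>>, Receiver<Vec<u8>>) = sync_channel(capacity);
    let handle = thread::spawn(move || {
        let mut sink: Vec<u8> = Vec::new();
        // Iteration ends when all senders are dropped.
        for record in rx {
            sink.write_all(&record).unwrap();
        }
        sink
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_writer(2);
    for i in 0..10u8 {
        // `send` blocks whenever the channel already holds `capacity`
        // unread records -- the backpressure discussed above. Nothing
        // is dropped, but the caller stalls until the worker catches up.
        tx.send(vec![b'0' + i]).unwrap();
    }
    drop(tx); // close the channel so the worker thread exits
    let sink = handle.join().unwrap();
    println!("wrote {} bytes", sink.len());
}
```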
Oh, I see, I hadn't realized people were using just the non-blocking writer part of `tracing-appender` directly.
Yeah, this is really the key question. We kind of went in circles on it. At the end of the day we felt: if we were going to go the standalone library route, it would probably start as a fork of this package. Which to me was a yellow flag that it might be nicer to figure out how to play nicely with the existing API rather than fragment. That way we can continue to contribute to `tracing-appender`. It also would mean that we are keeping an eye on the PR/issue queue and could try to help with triage or other maintenance overhead (if that is useful to you). Of course I would defer to you and the other maintainers on whether any of this makes sense!
Feature Request
Optional Async API for Non-lossy Non-blocking Writes
Crates
tracing-appender
Motivation
Currently, the `NonBlocking` writer's non-lossy mode blocks when the channel is full. Users have two sub-optimal choices: accept that writes may block the calling task, or switch to lossy mode and drop records under backpressure. We want to provide a third option for users who cannot afford to drop records but also cannot afford to block inside an async runtime.
Users who want to avoid async runtime dependencies and are ok with the blocking behavior can stick to the current flow.
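The two existing choices show up directly on a bounded channel. This stdlib sketch contrasts them (it models the channel inside the writer, with no consumer draining it, i.e. worst-case backpressure):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // A bounded channel with room for 2 in-flight records, and no
    // consumer reading from it (worst-case backpressure).
    let (tx, _rx) = sync_channel::<&str>(2);

    tx.send("record 1").unwrap(); // fits
    tx.send("record 2").unwrap(); // fits; the channel is now full

    // Choice 1 (lossy): `try_send` never blocks, but the record is
    // dropped on the floor when the channel is full.
    let mut dropped = 0;
    if let Err(TrySendError::Full(_)) = tx.try_send("record 3") {
        dropped += 1;
    }

    // Choice 2 (non-lossy): `send` keeps every record, but would block
    // right here until the consumer catches up. (Not called, since
    // nothing reads from `_rx` above -- it would block forever.)
    println!("dropped {} record(s) under backpressure", dropped);
}
```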
Proposal
Add an optional Tokio-based async API behind a feature flag while maintaining the existing synchronous implementation:
In the `Cargo.toml` file, add:
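The original snippet is not preserved in this thread. As a hedged sketch, the feature-gated dependency could look something like the following; the feature name and the exact tokio feature selection are illustrative assumptions, not a settled design:

```toml
# Sketch only: an opt-in "async" feature that pulls in Tokio's I/O
# traits solely for users who enable it.
[features]
async = ["dep:tokio"]

[dependencies]
tokio = { version = "1", features = ["io-util"], optional = true }
```

Gating the dependency as `optional` keeps the default build of `tracing-appender` free of any async-runtime dependency, matching the "additive, opt-in" constraint discussed above.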
- `AsyncWrite` trait behind the "async" feature flag
- `std::io::Write` implementation unchanged

Sample usage:
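The proposed API surface is not shown in this extract. As a rough stdlib model of the contract an `AsyncWrite` implementation could follow (function name and types are hypothetical), a non-lossy write can report `Pending` on a full channel instead of blocking the runtime:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};
use std::task::Poll;

// Hypothetical sketch: the core state machine a poll-based
// `AsyncWrite` impl could use. Instead of blocking on a full
// channel, it reports Pending so the async runtime can yield.
fn poll_send(tx: &SyncSender<Vec<u8>>, record: &[u8]) -> Poll<std::io::Result<usize>> {
    match tx.try_send(record.to_vec()) {
        Ok(()) => Poll::Ready(Ok(record.len())),
        // A real implementation would register the task's waker before
        // returning Pending, so the task is woken once the worker
        // thread drains the channel.
        Err(TrySendError::Full(_)) => Poll::Pending,
        Err(TrySendError::Disconnected(_)) => Poll::Ready(Err(std::io::Error::new(
            std::io::ErrorKind::BrokenPipe,
            "writer thread has shut down",
        ))),
    }
}

fn main() {
    let (tx, _rx) = sync_channel::<Vec<u8>>(1);
    assert!(poll_send(&tx, b"first").is_ready()); // buffered
    assert!(poll_send(&tx, b"second").is_pending()); // full: yield, don't block
    println!("full channel reported Pending instead of blocking");
}
```

This is still non-lossy: the record that hit `Pending` is not dropped; the caller simply retries after being woken, rather than parking its whole thread.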