fix(p2p): cache responses to serve without roundtrip to db #2352
base: master
Conversation
Force-pushed dcdaaf7 to 8079739
crates/services/p2p/src/service.rs (outdated)
```rust
impl CachedView {
    fn new(metrics: bool) -> Self {
        Self {
            sealed_block_headers: DashMap::new(),
            transactions_on_blocks: DashMap::new(),
            metrics,
        }
    }
```
we probably want to also support sub-ranges or even partial ranges here, but can be in the future :)
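As an illustrative aside on what "support sub-ranges" could mean with range-keyed storage: a request for a sub-range can be sliced out of a cached superset range. This is a minimal sketch with assumed names and `u64` standing in for a cached header type; it is not the PR's code.

```rust
use std::collections::HashMap;
use std::ops::Range;

// Sketch: serve a requested sub-range from any cached superset range by
// slicing the stored Vec. `u64` is a stand-in for SealedBlockHeader.
fn lookup_subrange(
    cache: &HashMap<Range<u32>, Vec<u64>>,
    wanted: Range<u32>,
) -> Option<Vec<u64>> {
    for (cached, items) in cache {
        // A cached range covers the request if it contains both endpoints.
        if cached.start <= wanted.start && wanted.end <= cached.end {
            let offset = (wanted.start - cached.start) as usize;
            let len = (wanted.end - wanted.start) as usize;
            return Some(items[offset..offset + len].to_vec());
        }
    }
    None
}
```

The linear scan over keys is the obvious drawback of this shape; an interval tree or height-keyed map avoids it.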
Force-pushed 009dc14 to 5b03fa0
Thanks for implementing this. I hate to be annoying here, but to approve this I need to:
- See the Changelog updated.
- Understand the reasoning behind the current caching strategy and the benefits/drawbacks over an LRU cache.
- Be certain that we don't open the door to OOM attacks by allowing our cache to be overloaded.
Let me know your thoughts on 2 and 3. I'm happy to jump on a call to discuss this and figure out a good path forward.
```rust
pub struct CachedView {
    sealed_block_headers: DashMap<Range<u32>, Vec<SealedBlockHeader>>,
    transactions_on_blocks: DashMap<Range<u32>, Vec<Transactions>>,
    metrics: bool,
}
```
I'm a bit hesitant about the current approach of storing everything and clearing on a regular interval. Right now, there is no memory limit on the cache, and we use ranges as keys. So if someone queries the ranges (1..=4, 1..=2, 3..=4), we'd store all blocks in the 1..=4 range twice, and this could theoretically grow quadratically for larger ranges.
I would assume that the most popular queries at a given time are quite similar. Why not use a normal LRU cache with fixed memory size? Alternatively, just maintain a cache over the last
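To make the LRU suggestion concrete, here is a minimal std-only sketch of a fixed-capacity cache keyed by block height (all names are illustrative, not from the PR). Keying by height avoids the duplicate storage that range keys allow, and the capacity bound caps memory regardless of query patterns, which also addresses the OOM concern.

```rust
use std::collections::{HashMap, VecDeque};

// Sketch: fixed-capacity LRU keyed by block height.
// `order` tracks recency; front = least recently used.
struct LruByHeight<V> {
    capacity: usize,
    map: HashMap<u32, V>,
    order: VecDeque<u32>,
}

impl<V> LruByHeight<V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, height: u32) -> Option<&V> {
        if self.map.contains_key(&height) {
            self.touch(height);
        }
        self.map.get(&height)
    }

    fn put(&mut self, height: u32, value: V) {
        if self.map.insert(height, value).is_some() {
            // Updating an existing key just refreshes its recency.
            self.touch(height);
        } else {
            self.order.push_back(height);
            if self.order.len() > self.capacity {
                // Evict the least recently used height to stay bounded.
                if let Some(evicted) = self.order.pop_front() {
                    self.map.remove(&evicted);
                }
            }
        }
    }

    fn touch(&mut self, height: u32) {
        self.order.retain(|h| *h != height);
        self.order.push_back(height);
    }
}
```

A production version would use an O(1) recency list (or an existing LRU crate) rather than `retain`, but the memory-bounding behavior is the point here.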
yup, it's still WIP.
Ah right, I see this PR is still a draft :)
we now use block height as the key in 6422210
we will retain the time-based eviction strategy for now
… instead of a per-range basis
Co-authored-by: Mårten Blankfors <[email protected]>
Added a suggestion to get rid of an unwrap that's bugging me right now; otherwise looks good to me.
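The suggestion itself isn't quoted in this thread. As an illustrative aside (not the reviewer's actual diff), a common way to remove an `unwrap()` on a range endpoint is to propagate the empty case instead of panicking:

```rust
// Hypothetical helper, not from the PR: last height covered by a half-open
// range, without `unwrap()` and without underflow on an empty range like 0..0.
fn last_height(range: &std::ops::Range<u32>) -> Option<u32> {
    range.end.checked_sub(1).filter(|last| *last >= range.start)
}
```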
```rust
let mut items = Vec::new();
let mut missing_start = None;

for height in range.clone() {
```
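The diff above is truncated. To show the technique the loop is setting up, here is a self-contained sketch of the same idea (assumed shapes, not the PR's exact code): walk the requested range, serve each height from the height-keyed cache, and record the contiguous gaps that still need a db fetch.

```rust
use std::collections::HashMap;
use std::ops::Range;

// Sketch: `u64` stands in for a cached header. Returns the cached items plus
// the contiguous sub-ranges that were missing from the cache.
fn collect_cached(
    cache: &HashMap<u32, u64>,
    range: Range<u32>,
) -> (Vec<u64>, Vec<Range<u32>>) {
    let mut items = Vec::new();
    let mut missing = Vec::new();
    let mut missing_start = None;

    for height in range.clone() {
        match cache.get(&height) {
            Some(item) => {
                // A hit closes any currently open gap.
                if let Some(start) = missing_start.take() {
                    missing.push(start..height);
                }
                items.push(*item);
            }
            None => {
                // A miss opens a gap unless one is already open.
                missing_start.get_or_insert(height);
            }
        }
    }
    // A gap still open at the end runs to the end of the request.
    if let Some(start) = missing_start {
        missing.push(start..range.end);
    }
    (items, missing)
}
```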
It came as a surprise that Range isn't Copy, but it seems it's deliberately not: rust-lang/rust#21846 (comment)
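A minimal demonstration of the point: `std::ops::Range` is `Clone` but not `Copy`, which is why the loop above needs `range.clone()` to keep using the range after iterating.

```rust
// Passing a Range by value moves it, since Range is not Copy.
fn range_len(range: std::ops::Range<u32>) -> usize {
    range.len()
}

fn demo() -> (usize, u32) {
    let range = 1u32..5;
    // Without clone(), `range` would be moved here and unusable below.
    let len = range_len(range.clone());
    (len, range.start)
}
```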
LGTM 👍
Co-authored-by: Rafał Chabowski <[email protected]>
Linked Issues/PRs
Description
When we request transactions for a given block range, we shouldn't keep using the same peer, putting pressure on it; we should pick a random peer at the same height and try to get the transactions from it instead. This PR caches p2p responses (TTL 10 seconds by default) and serves requests from the cache, falling back to the db for the rest.
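The time-based eviction described above can be sketched as follows; the struct and method names are illustrative, not the PR's actual API.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch: cache entries carry their insertion time; reads ignore entries
// older than the TTL (10s by default in the PR), and a periodic sweep
// removes them so memory is reclaimed over time.
struct TtlCache<V> {
    ttl: Duration,
    entries: HashMap<u32, (Instant, V)>,
}

impl<V> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn insert(&mut self, height: u32, value: V) {
        self.entries.insert(height, (Instant::now(), value));
    }

    fn get(&self, height: u32) -> Option<&V> {
        self.entries
            .get(&height)
            .filter(|(inserted, _)| inserted.elapsed() < self.ttl)
            .map(|(_, value)| value)
    }

    // Called on a timer: drop expired entries.
    fn evict_expired(&mut self) {
        let ttl = self.ttl;
        self.entries.retain(|_, (inserted, _)| inserted.elapsed() < ttl);
    }
}
```

Note that TTL alone bounds staleness, not size; the review thread's OOM concern is why a size bound (e.g. the LRU discussed above) matters in addition to expiry.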
Checklist
Before requesting review
After merging, notify other teams