Walkthrough

The changes remove redundant backup file handling in the in-memory share chain, simplifying initialization by directly using the current block cache. The LMDB block storage's `insert` method now takes a `force` parameter that controls whether an existing block is overwritten.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant P2Chain
    participant BlockCache
    Client->>P2Chain: try_load(new_chain.block_cache)
    P2Chain->>BlockCache: Load chain data
    P2Chain->>P2Chain: Record and log load duration
    P2Chain->>Client: Trigger panic("close")
```

```mermaid
sequenceDiagram
    participant Client
    participant ChainLevel
    participant BlockCache
    Client->>ChainLevel: add_block(block, force)
    ChainLevel->>BlockCache: insert(hash, block, force)
    BlockCache-->>ChainLevel: Return insert result
    ChainLevel-->>Client: Block addition complete
```
Actionable comments posted: 2
🔭 Outside diff range comments (3)
p2pool/src/sharechain/p2chain.rs (1)
235-263: ⚠️ Potential issue — Critical: panic statement will prevent the application from functioning properly

The changes in the `try_load` method add timing and logging functionality, but the `panic!("close")` statement on line 263 will unconditionally terminate the program after loading the chain. This appears to be a debugging artifact that shouldn't be in production code.

```diff
-        panic!("close");
```

p2pool/src/sharechain/p2chain_level.rs (1)
139-163: ⚠️ Potential issue — Method updated to support force parameter

The `add_block` method has been updated to accept and propagate the `force` parameter to the block cache insertion. This change provides more control over whether to overwrite existing blocks. However, the test code is not updated to match the new signature. The tests need to be fixed:

```diff
- chain_level.add_block(block_2.clone()).unwrap();
+ chain_level.add_block(block_2.clone(), false).unwrap();
- chain_level.add_block(block_3.clone()).unwrap();
+ chain_level.add_block(block_3.clone(), false).unwrap();
```

p2pool/src/sharechain/lmdb_block_storage.rs (1)
276-278: ⚠️ Potential issue — Tests need updating to use the new insert signature

The test cases call the `insert` method without the new `force` parameter. These need to be updated:

```diff
- cache.insert(hash, block.clone());
+ cache.insert(hash, block.clone(), false);
- cache.insert(hash, block.clone());
+ cache.insert(hash, block.clone(), false);
```
🧹 Nitpick comments (1)
p2pool/src/sharechain/in_memory.rs (1)
92-107: Removed legacy backup file handling logic

The lines that previously constructed, managed, and renamed a backup block cache file have been entirely commented out. This simplification aligns with the PR objective of "Don't recreate the database on start" by relying directly on the current block cache file without making a backup.

Points to consider:
- Ensure that this removal does not compromise recovery in situations where the block cache might be damaged or inconsistent, as there is no longer a fallback backup.
- Verify that any related error handling upstream (e.g., in `P2Chain::try_load`) appropriately handles potential database corruption.
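With the backup file gone, the only remaining safety net is to detect an unreadable cache at load time and recreate it empty rather than crash on startup. A minimal sketch of that "detect and recreate" pattern — all names here (`BlockCacheFile`, `load_cache`, `open_or_recreate`) are hypothetical stand-ins, not the actual p2pool API:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the LMDB-backed block cache.
struct BlockCacheFile {
    blocks: HashMap<u64, Vec<u8>>,
}

// Pretend parser: any non-empty payload "parses"; empty input simulates corruption.
fn load_cache(raw: &[u8]) -> Result<BlockCacheFile, String> {
    if raw.is_empty() {
        return Err("unreadable block cache".to_string());
    }
    Ok(BlockCacheFile {
        blocks: HashMap::from([(1u64, raw.to_vec())]),
    })
}

// Fall back to an empty cache when the stored one cannot be read.
fn open_or_recreate(raw: &[u8]) -> BlockCacheFile {
    match load_cache(raw) {
        Ok(cache) => cache,
        Err(err) => {
            eprintln!("block cache unreadable ({err}); starting with an empty cache");
            BlockCacheFile { blocks: HashMap::new() }
        },
    }
}

fn main() {
    // A readable cache is loaded; a corrupt one is replaced with an empty cache.
    assert_eq!(open_or_recreate(&[7u8]).blocks.len(), 1);
    assert_eq!(open_or_recreate(&[]).blocks.len(), 0);
}
```

The key design point is that corruption surfaces as a logged recovery rather than a startup panic.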
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- p2pool/src/sharechain/in_memory.rs (1 hunks)
- p2pool/src/sharechain/lmdb_block_storage.rs (3 hunks)
- p2pool/src/sharechain/p2chain.rs (5 hunks)
- p2pool/src/sharechain/p2chain_level.rs (3 hunks)
🧰 Additional context used
🧬 Code Definitions (2)
p2pool/src/sharechain/in_memory.rs (2)
- p2pool/src/sharechain/lmdb_block_storage.rs (1)
  - new_from_path (68-114)
- p2pool/src/sharechain/p2chain.rs (1)
  - try_load (202-265)

p2pool/src/sharechain/p2chain.rs (4)
- p2pool/src/sharechain/in_memory.rs (2)
  - new_chain (953-991)
  - new (74-176)
- p2pool/src/sharechain/p2chain_level.rs (1)
  - new (64-92)
- p2pool/src/sharechain/p2block.rs (2)
  - new (163-180)
  - new (299-301)
- p2pool/src/sharechain/mod.rs (1)
  - new (63-73)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: ci
- GitHub Check: cargo check with stable
- GitHub Check: test (esmeralda)
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (5)
p2pool/src/sharechain/in_memory.rs (1)
108-109: Direct initialization of the block cache

With the legacy backup code removed, the block cache is now initialized using:

```rust
let block_cache = LmdbBlockStorage::new_from_path(&data_path);
```

This ensures that the system loads the block cache directly from the current data path without attempting to manage an old backup.

Action:
- Double-check that this direct initialization meets the overall system resilience requirements and that any issues (e.g., corruption) can still be detected and handled gracefully.
p2pool/src/sharechain/p2chain.rs (2)
712-712: Update to `add_block` call with new force parameter

The addition of the `true` parameter to `level.add_block` ensures that during block verification, updated blocks will always be inserted into the cache regardless of whether they already exist, which is necessary for updating verification status.

1089-1089: Update to `add_block` call with new force parameter

The addition of the `false` parameter to `level.add_block` indicates that during normal block addition, we don't want to force-write blocks that already exist in the cache, which prevents overwriting blocks unnecessarily.

p2pool/src/sharechain/p2chain_level.rs (1)

84-84: Update to `insert` call with new force parameter

The code now passes `false` to the block cache insert method, indicating that during initialization of a new chain level, blocks should not be force-inserted if they already exist.

p2pool/src/sharechain/lmdb_block_storage.rs (1)

231-231: Updated trait definition to include force parameter

The `BlockCache` trait's `insert` method now includes a `force` parameter, which is a good interface change that allows for more flexible block insertion behavior.
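The call-site pattern above — verification paths pass `force = true`, normal addition passes `false`, and the level simply forwards the flag — can be sketched with a toy model. The types below are simplified stand-ins, not the real `P2ChainLevel` or `BlockCache`:

```rust
use std::collections::HashMap;

// Toy cache: skip existing entries unless the caller forces an overwrite.
struct Cache {
    blocks: HashMap<u64, &'static str>,
}

impl Cache {
    fn insert(&mut self, hash: u64, block: &'static str, force: bool) {
        if !force && self.blocks.contains_key(&hash) {
            return; // keep the existing block
        }
        self.blocks.insert(hash, block);
    }
}

// Toy chain level: the force flag flows straight through to the cache.
struct Level {
    cache: Cache,
}

impl Level {
    fn add_block(&mut self, hash: u64, block: &'static str, force: bool) {
        self.cache.insert(hash, block, force);
    }
}

fn main() {
    let mut level = Level { cache: Cache { blocks: HashMap::new() } };
    level.add_block(1, "original", false);
    level.add_block(1, "ignored", false); // normal addition: existing block kept
    assert_eq!(level.cache.blocks[&1], "original");
    level.add_block(1, "verified", true); // verification path: overwritten
    assert_eq!(level.cache.blocks[&1], "verified");
}
```

The level stays policy-free: only the call sites decide whether a rewrite is warranted.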
```rust
fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
    //First we check if the block already exists, if it does we don't insert it, if force is set, we overwrite it
    if force {
        let env = self.file_handle.read().expect("reader");
        let store = env.open_single("block_cache_v2", StoreOptions::create()).unwrap();
        let reader = env.read().expect("reader");
        let block = store.get(&reader, hash.as_bytes()).unwrap();
        // we dont want to deserialise this block, so we just check if it exists
        if block.is_some() {
            return;
        }
    }
```
Improved block insertion logic with force parameter
The updated insert method now includes a force parameter that determines whether to check for existing blocks before insertion. This is the key change that prevents recreating the database on start.
However, the implementation logic seems inverted:
- If `force` is true, it checks if the block exists and returns early if it does
- This means `force` is actually preventing insertion rather than forcing it

The parameter should be renamed for clarity, or the logic should be inverted:
```diff
-    fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
-        //First we check if the block already exists, if it does we don't insert it, if force is set, we overwrite it
-        if force {
+    fn insert(&self, hash: BlockHash, block: Arc<P2Block>, skip_if_exists: bool) {
+        //First we check if the block already exists, if it does we don't insert it
+        if skip_if_exists {
         let env = self.file_handle.read().expect("reader");
         let store = env.open_single("block_cache_v2", StoreOptions::create()).unwrap();
         let reader = env.read().expect("reader");
         let block = store.get(&reader, hash.as_bytes()).unwrap();
         // we dont want to deserialise this block, so we just check if it exists
         if block.is_some() {
             return;
         }
     }
```

Alternatively, invert the logic if "force" should mean "overwrite existing":
```diff
-    fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
-        //First we check if the block already exists, if it does we don't insert it, if force is set, we overwrite it
-        if force {
+    fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
+        //Check if block exists and skip insertion unless force is set
+        if !force {
         let env = self.file_handle.read().expect("reader");
         let store = env.open_single("block_cache_v2", StoreOptions::create()).unwrap();
         let reader = env.read().expect("reader");
         let block = store.get(&reader, hash.as_bytes()).unwrap();
         // we dont want to deserialise this block, so we just check if it exists
         if block.is_some() {
             return;
         }
     }
```
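The naming ambiguity can be made concrete with a toy cache (illustrative only; `u64` keys stand in for `BlockHash`): under the two readings, the same flag value produces opposite outcomes.

```rust
use std::collections::HashMap;

// Semantics as currently written in the PR: the flag *skips* existing entries.
fn insert_skip_if_exists(cache: &mut HashMap<u64, &'static str>, key: u64, val: &'static str, skip_if_exists: bool) {
    if skip_if_exists && cache.contains_key(&key) {
        return; // flag set: existing entry wins
    }
    cache.insert(key, val);
}

// Conventional "force" semantics: existing entries are kept unless force is set.
fn insert_force(cache: &mut HashMap<u64, &'static str>, key: u64, val: &'static str, force: bool) {
    if !force && cache.contains_key(&key) {
        return; // flag clear: existing entry wins
    }
    cache.insert(key, val);
}

fn main() {
    let mut a = HashMap::from([(1u64, "old")]);
    insert_skip_if_exists(&mut a, 1, "new", true);
    assert_eq!(a[&1], "old"); // flag = true skipped the write

    let mut b = HashMap::from([(1u64, "old")]);
    insert_force(&mut b, 1, "new", true);
    assert_eq!(b[&1], "new"); // flag = true forced the overwrite
}
```

Passing `true` yields opposite results in the two versions, which is why a rename (or inversion) matters for every call site.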
```rust
fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
    self.blocks.write().unwrap().insert(hash, block);
}
```
Needs implementation of force parameter
The InMemoryBlockCache implementation of the insert method includes the new force parameter but doesn't implement the conditional behavior seen in the LmdbBlockStorage implementation. This inconsistency could lead to different behavior between test and production code.
```diff
-    fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
-        self.blocks.write().unwrap().insert(hash, block);
+    fn insert(&self, hash: BlockHash, block: Arc<P2Block>, force: bool) {
+        if !force {
+            if self.blocks.read().unwrap().contains_key(&hash) {
+                return;
+            }
+        }
+        self.blocks.write().unwrap().insert(hash, block);
     }
```
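A runnable sketch of the suggested in-memory semantics, with `u64` and `String` standing in for `BlockHash` and `Arc<P2Block>` (illustrative, not the actual p2pool types):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Simplified stand-in for InMemoryBlockCache implementing the suggested behavior.
struct InMemoryCache {
    blocks: RwLock<HashMap<u64, String>>,
}

impl InMemoryCache {
    fn insert(&self, hash: u64, block: String, force: bool) {
        // Take only a read lock for the existence check; mirrors the LMDB semantics.
        if !force && self.blocks.read().unwrap().contains_key(&hash) {
            return; // keep the existing block unless the caller forces an overwrite
        }
        self.blocks.write().unwrap().insert(hash, block);
    }

    fn get(&self, hash: u64) -> Option<String> {
        self.blocks.read().unwrap().get(&hash).cloned()
    }
}

fn main() {
    let cache = InMemoryCache { blocks: RwLock::new(HashMap::new()) };
    cache.insert(1, "first".into(), false);
    cache.insert(1, "second".into(), false); // skipped: block already present
    assert_eq!(cache.get(1).as_deref(), Some("first"));
    cache.insert(1, "third".into(), true); // forced overwrite
    assert_eq!(cache.get(1).as_deref(), Some("third"));
}
```

Note the check-then-insert is not atomic: the read lock is dropped before the write lock is taken, so two racing non-force inserts could both pass the check. For a test double that is usually acceptable, but a single write-lock scope would close the gap.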
Description
Don't recreate the database on start.