Context: crates/backend/fiat-shamir/src/merkle_pruning.rs
Description
restore() performs allocations and work proportional to attacker-controlled proof fields: n = self.paths.len(), h = self.merkle_height, and the lengths of leaf_data.
In particular, it allocates subtree_hashes for all n paths and then pushes up to levels(i)+1 hashes per path, yielding O(n·h) additional memory and hashing work beyond what was sent in the proof.
A malicious prover can therefore submit a very large proof (many paths and/or large leaf_data vectors) to drive the verifier into excessive allocation and CPU use, potentially causing an out-of-memory condition or severe slowdown.
let n = self.paths.len();
let h = self.merkle_height;
self.leaf_data
    .iter_mut()
    .for_each(|d| d.resize(d.len() + self.n_trailing_zeros, Data::default()));
let mut subtree_hashes: Vec<Vec<[F; DIGEST_LEN_FE]>> = vec![vec![]; n];
for i in (0..n).rev() {
    ...
    subtree_hashes[i].push(hash.clone());
    for lvl in 0..levels(i) {
        ...
        subtree_hashes[i].push(hash.clone());
    }
}
let mut restored: Vec<MerklePath<Data, F>> = Vec::with_capacity(n);
for i in 0..n {
    ...
    let mut siblings = Vec::with_capacity(h);
    for lvl in 0..levels(i) {
        ...
        siblings.push(sibling);
    }
    ...
}
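To make the blow-up concrete, the memory held by subtree_hashes alone can be bounded as roughly n·(h+1) digests of DIGEST_LEN_FE field elements each. The sketch below computes this bound for illustrative parameters; the specific values of n, h, DIGEST_LEN_FE, and the field-element size are assumptions, not taken from the crate:

```rust
// Rough upper bound on memory held by `subtree_hashes` alone:
// each of the n paths can push up to levels(i) + 1 <= h + 1 digests,
// each digest consisting of `digest_len_fe` field elements.
fn subtree_hashes_bytes(n: u64, h: u64, digest_len_fe: u64, fe_bytes: u64) -> u64 {
    n * (h + 1) * digest_len_fe * fe_bytes
}

fn main() {
    // Illustrative (hypothetical) parameters: 2^20 paths, height 32,
    // 4 field elements per digest, 32 bytes per field element.
    let bytes = subtree_hashes_bytes(1 << 20, 32, 4, 32);
    // Prints "~4.125 GiB" for these parameters.
    println!("~{} GiB", bytes as f64 / (1u64 << 30) as f64);
}
```

Even a proof of a few megabytes can thus force the verifier into gigabytes of intermediate allocation, which is exactly the asymmetry the finding describes.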
Recommendation
Enforce explicit bounds before restoration (ideally at proof decoding/verification entry points):
- Maximum number of paths (n), maximum merkle_height (h), and maximum total leaf_data elements across all leaves.
- Consider changing the algorithm to avoid storing subtree_hashes for all paths at once (stream/compute only what’s necessary).
- Reject proofs whose serialized size exceeds a configured limit.
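A minimal sketch of such up-front bounds checking follows. All constants, the ProofBounds struct, and check_proof_bounds are hypothetical names for illustration; the real limits must be derived from the protocol's parameters:

```rust
// Hypothetical limits; real values must come from the protocol parameters.
const MAX_PATHS: usize = 1 << 16;
const MAX_MERKLE_HEIGHT: usize = 40;
const MAX_TOTAL_LEAF_ELEMS: usize = 1 << 22;

/// Sizes extracted from a decoded proof before restoration begins.
struct ProofBounds {
    n_paths: usize,
    merkle_height: usize,
    total_leaf_elems: usize,
}

fn check_proof_bounds(b: &ProofBounds) -> Result<(), &'static str> {
    if b.n_paths > MAX_PATHS {
        return Err("too many paths");
    }
    if b.merkle_height > MAX_MERKLE_HEIGHT {
        return Err("merkle height too large");
    }
    if b.total_leaf_elems > MAX_TOTAL_LEAF_ELEMS {
        return Err("leaf data too large");
    }
    // Also guard the O(n·h) restore cost explicitly, with overflow-safe math.
    let work = b
        .n_paths
        .checked_mul(b.merkle_height + 1)
        .ok_or("size overflow")?;
    if work > MAX_PATHS * (MAX_MERKLE_HEIGHT + 1) {
        return Err("restore work too large");
    }
    Ok(())
}
```

Calling such a check at the proof-decoding entry point rejects oversized proofs before any O(n·h) allocation occurs, which composes naturally with a serialized-size cap in the deserializer.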