diff --git a/CHIPs/chip-0048.md b/CHIPs/chip-0048.md
new file mode 100644
index 00000000..fb50e351
--- /dev/null
+++ b/CHIPs/chip-0048.md
@@ -0,0 +1,811 @@
+
+| CHIP Number | 0048 |
+| ------------ | --------------------------------------------------------------------------- |
+| Title | New Proof of Space |
+| Description | Add a new Proof of Space and format to the Chia blockchain's core consensus |
+| Author | [Dr. Nick](https://github.com/drnick23) |
+| Editor | [Dan Perry](https://github.com/danieljperry) |
+| Comments-URI | [CHIPs repo, PR #160](https://github.com/Chia-Network/chips/pull/160) |
+| Status | Review |
+| Category | Standards Track |
+| Sub-Category | Core |
+| Created | 2025-05-19 |
+| Requires | None |
+
+
+## 1. Abstract
+
+We propose a new Proof of Space protocol for the Chia blockchain that addresses weaknesses in energy efficiency, rental attack resistance, and plot format stability. Our design shifts security costs to infrequent events—winning a block or pool partial—allowing honest farmers to operate with minimal ongoing energy use. Farmers store Proof Fragments (bit-dropped, encrypted x-values from a proof) that are chained into a Quality Chain. The design achieves near-zero viable plot compression on current hardware, eliminating the economic advantages of compressed plots and securing long-term fairness. Farmers select plot configurations based on desired disk access frequency, trading off plotting time for lower challenge activity and enabling long disk idle times. The format creates small plots that can be grouped to optimize disk load, and remains accessible to low-end devices like the Raspberry Pi 5. A hard fork, followed by a phase-out of legacy plots, will establish a durable, future-proof format that reinforces Chia's commitment to a green, decentralized blockchain.
+
+## 2. Definitions
+
+Throughout this document, we'll use the following terms:
+
+- **Proof of Space**: The underlying protocol mechanism that enables storage-based consensus. It is a cryptographic primitive that proves a prover has reserved a certain amount of storage space. This mechanism underpins block creation by requiring farmers to demonstrate and validate storage commitments as part of consensus.
+- **Plot Format**: A specific structure and method for organizing data in a file so it can be used as a valid `Proof of Space`. Plot formats define how data is laid out, how proofs are generated, and the computational requirements for plotting and farming. Multiple plot formats can exist that all implement the same Proof of Space protocol, but with varying performance characteristics and energy efficiency.
+- **PoS1**: The original proof of space and plot formats.
+- **PoS2**: The new proof of space and plot formats presented in this CHIP.
+- **Plot Stability Index (PSI)**:
+A measure of a plot format's long-term economic viability without replotting. High PSI indicates resilience to protocol changes, hardware evolution, energy/storage trade-offs, and market shifts (e.g., XCH price, netspace, or block rewards). Conversely, a low PSI implies a higher likelihood of replotting based on those factors.
+- **X-value**: A k-bit value that forms the fundamental building block of a proof. X-values are initially generated and then recursively paired and matched until a full proof is formed.
+- **Proof**: An ordered set of 32 k-bit values that satisfy the matching requirements for a proof.
+- **Quality String**: Data used to determine how good a proof is and whether it wins a block or pool partial.
+- **Proof Fragments**: A partial, encrypted summary of a proof's child x-values with embedded partition routing. Comprises a 2 k-bit value derived from 8 k-bit x-values that are partially bit-dropped and then encrypted.
+- **Honest Plot**: A plot stored in the format the Proof of Space is designed around, representing the network-approved storage requirement; it cannot be compressed further without resorting to attacks.
+- **Farmer**: An entity (human or otherwise) that performs the tasks required to participate in securing the PoST-backed blockchain, such as reserving storage space with plots and responding to challenges with proofs using those plots.
+- **Honest Farmer**: A farmer using only honest plots.
+- **Compressed Plot**: Generally, all plots use compression methods to losslessly reduce data and then decompress that data to restore the original. In this document, compressed plots refer to plots that have reduced data compared to the honest plot, typically through the use of bit dropping.
+- **Bit Dropping**: Removing some bits of information from a stored plot, that is later re-created using compute to respond to challenges. Typically each bit dropped doubles the compute required to restore the information.
+- **Plot Grinding**: Creating new plots on the fly, possibly that specifically target plot IDs that can respond to challenges from the network, requiring no storage and just compute.
+- **Rental Attacks**: Specifically use plot grinding to take control over the network by renting enough compute power to simulate a majority of the netspace.
+- **Table-Dropping Attack**: Some tables from the plot format are dropped, and then reconstructed using hints or full plot grind.
+- **Attacker**: An individual or entity looking to exploit weaknesses in Proof of Space to gain a financially incentivized advantage over an honest farmer, typically through the use of high-performance, efficient compute (e.g., GPUs) to reduce or eliminate storage requirements.
+- **Log 2 Rule**: A compute–storage tradeoff lets an attacker drop a number of bits proportional to `log₂(attacker perf / baseline perf)`.
+- **Block Filter**: Determines whether a Proof of Space qualifies to win a block based on its quality (Quality Chain). It applies a global difficulty target that reflects the network's difficulty setting. Only proofs whose Quality Chain hashes below this dynamically adjusted threshold are eligible to create new blocks. This ensures consistent block timing (e.g. every ~18.75 seconds) and periodically adjusts with changes in netspace to maintain consensus.
+- **Pool Partial Filter**: Used in pooled farming to assess whether a submitted proof (a "partial") meets the pool's custom difficulty setting. This filter is usually less strict than the block filter and allows farmers to prove ongoing participation and receive proportional rewards. Pools set this threshold to balance network bandwidth and fairness.
+- **Bit Drop Saturation**: An event that occurs when the effort to recompute dropped bits exceeds the effort of a full plot grind.
+- **Compression Resistance**: A measure of how effective the Proof of Space is against an attacker trading increased compute for reduced storage.
+- **ANS**: Asymmetrical Numerical Systems, a type of entropy coding used in data compression.
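To make the **Log 2 Rule** concrete, here is a minimal sketch (function name and parameters are hypothetical, not part of the protocol) of how an attacker's compute advantage translates into droppable bits:

```python
import math

def droppable_bits(attacker_perf: float, baseline_perf: float) -> int:
    """Log 2 Rule: each doubling of compute over the network baseline lets an
    attacker profitably drop roughly one more bit per stored value."""
    return max(0, math.floor(math.log2(attacker_perf / baseline_perf)))

# A GPU ~1000x the baseline can drop ~9 bits, paying ~2^9 recomputes per lookup.
assert droppable_bits(1000, 1) == 9
assert droppable_bits(1, 1) == 0
```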
+
+## Motivation
+
+This proposal addresses three critical issues inherited from the original Proof of Space (PoS1):
+
+1. **High energy consumption from compressed plots.** Compression techniques such as DrPlotter significantly increase ongoing energy requirements during farming, undermining Chia's commitment to energy efficiency. DrPlotter plots are ~22.1 GiB — roughly 22% of the 101.4 GiB originals — but require continuous GPU compute to reconstruct proofs at each challenge.
+2. **Rental attack risks.** Advances in GPU technology raise the concern that an attacker could rent enough powerful GPUs to spoof a majority of the netspace. In PoS1, plot ID grinding provides attackers up to 3.5× leverage, making rental attacks disproportionately cost-effective.
+3. **Low Plot Stability Index (PSI).** The coexistence of multiple plot formats (DrPlotter, Bladebit, Gigahorse, NoSSD) creates fragmentation and uncertainty. Farmers are pressured to continually reassess whether replotting to a newer format yields better efficiency or profitability. This erodes confidence in plot permanence and long-term sustainability.
+
+The design is guided by three constraints:
+
+- **HDD-first storage.** HDD remains the most economical storage medium. Existing farmers must be able to transition without replacing their storage hardware.
+- **Low-compute harvesting.** A Raspberry Pi 5 must be capable of harvesting over 1 PB of plots, keeping the network accessible and decentralized.
+- **Long-term adaptability.** The protocol must accommodate future advances in compute and storage without requiring fundamental redesign.
+
+## Backwards Compatibility
+
+### Protocol Impact
+
+The proposed new Proof of Space is not backwards compatible with the PoS1 plots in use today. A consensus change is therefore required so that:
+
+- Blocks and pool partials generated with Quality Chain proofs are valid after the fork height, and
+- Proofs that rely on legacy plots become invalid after a defined sunset date.
+
+Because the validity rules change and must accept a new Proof of Space that was previously invalid, this upgrade must be introduced by a hard fork, and will require a full netspace replot by the end of a phase-out period for the legacy plots.
+
+### Alternatives to Hard Fork
+
+[CHIP-12](https://github.com/Chia-Network/chips/blob/main/CHIPs/chip-0012.md) which reduces the plot filter on a schedule, is already in effect. However, this update merely keeps compression resistance on pace with developing technology, and does not sufficiently address the risk of rental attacks. It does not resolve any of the three critical issues we identified in [Motivation](#motivation). Changing the plot filter to a more aggressive schedule would also result in a hard fork.
+
+We also considered increasing the minimum k-size as an alternative. However, this approach would not adequately address the rental attack problem, would increase hardware requirements (making farming less accessible), and would not resolve the plot format fragmentation issues that reduce the Plot Stability Index.
+
+### Easing Transition
+
+A separate CHIP is proposed to schedule the hard fork date and ease the transition from the existing Proof of Space to the new one.
+
+Production-level plotters and harvesters will be released in advance of the hard fork date.
+
+Regardless of the ease of transition, we still expect some netspace will be left behind, but the majority will be incentivized to replot.
+
+## Rationale
+
+### Understanding Proof of Space
+
+Proofs of Space work by layering multiple successive rounds of compute-intensive matching to construct a sequence of values that can be validated quickly. The chance of randomly creating a sequence of values that validates is negligible. Responding to a challenge must therefore rely on precomputed work stored on disk, with a fast path to retrieve the proof.
+
+Concretely, for a pair of values L and R, the prover hashes components of each and checks whether they collide within a defined domain. When a collision occurs, the match constitutes a valid proof element. Verification is fast: the verifier re-hashes L and R and confirms the collision.
+
+Because the search space is large, finding all matches from scratch is expensive. Farmers precompute and store matches during a one-time *plotting* phase. At challenge time, the farmer looks up stored solutions rather than recomputing them. For PoS to hold, this lookup must be dramatically cheaper than recomputation; the cost to recreate matches for each challenge must exceed the cost of storing and looking up the results. This is the fundamental guarantee of proof of space: storage is the cheapest solution to a challenge.
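A toy sketch of this collision-matching idea, using a stand-in hash and a simplified matching condition (hash equality over a small domain) rather than the real Chia functions:

```python
import hashlib

def h(value: int) -> int:
    """Toy stand-in for the matching hash (not the real Chia function)."""
    digest = hashlib.sha256(b"plot" + value.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:2], "big") % 64    # small collision domain

def find_matches(k: int = 8):
    """Plotting: bucket every x in 0..2^k by h(x); colliding (L, R) pairs
    become stored proof elements."""
    buckets = {}
    for x in range(2 ** k):
        buckets.setdefault(h(x), []).append(x)
    return [(L, R) for vals in buckets.values()
            for i, L in enumerate(vals) for R in vals[i + 1:]]

def verify(L: int, R: int) -> bool:
    """Verification is cheap: re-hash both values and confirm the collision."""
    return h(L) == h(R)

pairs = find_matches()
assert pairs and all(verify(L, R) for L, R in pairs)
```

The expensive step is `find_matches` (done once, at plotting time); `verify` is two hashes, which is why validation stays fast for the whole network.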
+
+#### Compression Attacks via Bit-Dropping
+
+However, compute-time trade-offs allow attackers to reduce storage at the cost of reconstruction work. The most direct method is *bit-dropping*: storing truncated values and reconstructing the missing bits at challenge time.
+
+*Figure: Matching and Bit-Dropping*
+
+Consider a pair L and R, where L and R can be in the domain space 0..2^k. In (A) we find all (L,R) pairs where `hash(L)` and `hash(R)` satisfy some matching requirements on the hashes. In (B) we find one such pair (L = `0b10101101`, R = `0b01101010`). Validation hashes the full values of L and R and checks for the match. In (C) an attacker drops 1 bit from both L and R, storing only 7 of the 8 bits (e.g., `0b1010110` instead of `0b10101101`). At challenge time, the attacker reconstructs two candidates per value (for L: `0b10101100` and `0b10101101`) and validates each combination. One fails; one passes. The attacker has saved 1 bit per 8 bits of storage but doubled the reconstruction work per challenge. The attacker's effective farm size is therefore bounded by how many reconstructions their hardware can complete within the block time — the more powerful the compute, the larger the farm the attacker can support. This shifts the protocol toward proof of work.
+
+More formally, under the **log₂ rule**, the number of bits an attacker can profitably drop is proportional to `log₂(attacker_perf / baseline_perf)`. The wider the performance gap between the baseline supported system and an attacker's hardware, the more bits can be dropped.
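The bit-dropping trade-off can be sketched as follows; validating a candidate against a stored hash here is a simplification of the real match check:

```python
import hashlib

def h(x: int) -> bytes:
    """Toy validation hash (the real check re-hashes L and R for a match)."""
    return hashlib.sha256(x.to_bytes(4, "big")).digest()

def drop_bits(x: int, n: int) -> int:
    """Store only the high bits; the low n bits are forgotten."""
    return x >> n

def reconstruct(stored: int, n: int, target: bytes):
    """Try all 2^n completions; work doubles with every extra bit dropped."""
    for low in range(2 ** n):
        candidate = (stored << n) | low
        if h(candidate) == target:
            return candidate
    return None

x = 0b10101101
assert reconstruct(drop_bits(x, 1), 1, h(x)) == x   # 2 candidates to try
assert reconstruct(drop_bits(x, 3), 3, h(x)) == x   # 8 candidates to try
```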
+
+Hellman attacks represent another class of compute-storage trade-off, discussed extensively in the original PoS paper.
+
+#### Tables and Layered Matching
+
+To mitigate bit-dropping, PoS can match on previous matches, creating layers of matching organized into *tables*. With each additional table, the proof comprises twice the number of matched elements. For example, 3 tables produce 2³ = 8 original values per proof. With layered tables, dropping 1 bit per value saves a proportionally smaller share of the total proof, since each additional table adds more underlying bits per proof.
+
+However, plotting and validation time also increases with each layer: each matched pair must be successively constructed, and proofs must verify through all table layers. Table structure must also be optimized to store back-pointers to disparate locations (requiring random access) in previous tables for maximal compression efficiency (see **Pos1** paper for details).
+
+In PoS1, 7 tables produced proofs of 32 pairs (64 x-values). To minimize expensive HDD lookups, only 7 random accesses were needed to retrieve a *quality string* — a compact summary that determined whether the farmer held a winning proof. Only upon finding a winner did the farmer fetch the full proof, requiring additional lookups.
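The table/proof-size relationship above can be stated in one line, using the convention in which the first table already stores matched pairs; PoS1's 7-table, 64-x-value count follows the other convention, where table 1 holds single x-values, giving 2^(7−1) = 64:

```python
def xvalues_per_proof(tables: int) -> int:
    """Each matching layer pairs the outputs of the previous one, doubling the
    number of x-values one proof spans (first table stores matched pairs)."""
    return 2 ** tables

assert xvalues_per_proof(3) == 8    # the 3-table example above
assert 2 ** (7 - 1) == 64           # PoS1: 7 tables, table 1 holds single x-values
```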
+
+#### Plot ID Filter
+
+The Plot ID Filter determines how often a given proof of space (or plot) passes the filter for a block and must perform a challenge lookup. The higher this filter, the less frequently plots must look up proofs for a challenge.
+
+### Limitations of the Original PoS Design
+
+Several approaches to strengthening PoS1 were evaluated and found inadequate:
+
+**More tables.** Each additional table doubles the proof size and increases lookup times (linearly for quality strings, exponentially for full proofs). Eventually, fetching a full proof from an HDD becomes infeasible within the challenge response window, and on-chain proof storage becomes impractically large. Critically, this only *lessens* compression effectiveness without eliminating it — a similar variety of compressed plot formats would persist.
+
+**Higher hash difficulty.** Increasing the hash function's compute cost proportionally reduces the bits an attacker can drop, but also proportionally increases validation time and network energy consumption. Several orders of magnitude increase would be needed for meaningful impact, requiring much higher baseline hardware and thus reducing accessibility. The protocol would still only *lessen* compression, not remove it.
+
+**Tighter plot filter.** Reducing the Plot ID Filter proportionally increases attacker workload, since it directly influences how many plots an attacker can support. However, it also increases disk access frequency for the honest farmer, eventually exceeding HDD random-access limits and forcing farmers onto SSDs. The filter also has a hard floor: once minimized, the protocol loses adaptability to future hardware improvements.
+
+**Default-compressed plots.** Defining a compressed format as the baseline simply shifts the problem. Farmers require GPUs for reconstruction at every challenge, farm size remains compute-limited with a significant percentage of energy used as proof-of-work, and ecosystem fragmentation persists with the **log 2 rule**.
+
+### The Fundamental Trade-off
+
+PoS must balance inclusivity with security. Design choices that reduce disk I/O for slow HDDs, minimize energy, or support low-end devices also benefit attackers who can scale up with fast SSDs and GPUs that are *thousands* of times more performant.
+
+Two key assumptions inform the attacker model:
+
+- **Attackers have unlimited random-access performance; honest farmers are limited by HDDs.** The format assumes adversaries can scan large proof ranges instantly (from SSD or RAM), whereas honest farmers rely on slow, sequential HDD reads.
+- **Attackers have virtually unlimited memory; honest farmers have consumer hardware.** The format assumes attackers can load entire plots into memory, so security cannot rely on k-size increases alone.
+
+Two apparent fixes illustrate the impossibility of a naive solution:
+
+
+| PoS variant | log₂ gap | Drawback |
+| ------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
+| **GPU-centric** (every farmer must own a high-end GPU) | Baseline ≈ attacker, so almost no bits can be dropped | Becomes Proof of Work: high energy, expensive hardware, defeats green and accessibility goals |
+| **SSD-only** (full proof returned every challenge) | Boosts compression resistance up to 30,000× but compute gap remains; log₂ rule leaves ~10% compressibility | Destroys the entire HDD plot market. Grinding speed and rental attack vulnerability are unchanged |
+
+
+Neither option meets the green, cheap, universal target, and hybrids merely shuffle the same drawbacks.
+
+## New PoS2
+
+Our solution addresses bit-drop reconstruction limitations for low-end CPUs and random-access limitations for HDDs through five interconnected innovations:
+
+1. **Dynamically generated partial-proof quality strings with lazy proof reconstruction** — Compression attack resistance with Pi5-class CPU harvesting.
+2. **Challenges with coalesced single-table data access** — Reduced disk I/O through sequential reads.
+3. **Look-ahead grouped responses to challenges** — Further reduced disk I/O, plus time for HDDs to wake into full spin when needed.
+4. **Compute and memory-bound matching algorithm with CPU-hardware support and tunable difficulty** — Better compression resistance with reduced impact on network validation time.
+5. **Effective plot filter based on variable plot strength** — Long-term security scaling and farmer-chosen disk access frequency.
+
+Before detailing each innovation, we provide a high-level overview of the challenge-to-proof flow.
+
+### System Overview: Challenge to Proof
+
+*Figure: PoS2 System Overview*
+
+The following steps describe how a challenge from the network produces a valid proof:
+
+1. The incoming challenge must pass an **Effective Plot Filter** associated with the plot's **strength**, **group id**, and **meta group**. These components determine which groups of plots pass into challenge lookups, optimizing for harvester CPU and disk load.
+2. For passing plots, the challenge defines two ranges of **Proof Fragments** (sets A and B). Proof Fragments within each range form the candidate pool for the chaining filter.
+3. The **Chaining Filter** constructs a chain of 16 Proof Fragments by alternately selecting fragments from sets A and B that pass threshold hashing tests.
+4. The 16 chained Proof Fragments form a **Quality String**. There may be zero or multiple Quality Strings per challenge.
+5. Each Quality String is tested against the block difficulty (or pool partial difficulty). If it passes, the farmer has a winning Proof of Space.
+6. Each Proof Fragment in the winning Quality String is decrypted to recover bit-dropped x-values, forming a **Partial Proof**.
+7. The **Solver** reconstructs the full proof from the Partial Proof.
+8. The final full proof comprises 128 complete x-values (k = 28 bits each) plus additional parameters, and can be validated quickly by the network.
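Step 5 above can be sketched as a standard hash-below-target test; the hash choice and target derivation here are illustrative, not the consensus rules:

```python
import hashlib

def quality_passes(quality_string: bytes, difficulty: int) -> bool:
    """Step 5, sketched: hash the Quality String and compare it against a
    difficulty-derived target; higher difficulty means a smaller target."""
    qhash = int.from_bytes(hashlib.sha256(quality_string).digest(), "big")
    return qhash < (1 << 256) // difficulty

assert quality_passes(b"chained-fragments", 1)            # difficulty 1: always passes
assert not quality_passes(b"chained-fragments", 1 << 255) # near-impossible target
```

Because only Quality Strings that pass this test trigger full proof reconstruction, the expensive solver step stays rare.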
+
+The following table summarizes how each mechanism contributes to security:
+
+
+| Mechanism | What it does | Why it hurts attackers |
+| ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Proof Fragments** (encrypted, bit-dropped x-values) | Stores only a partial reveal of each proof, with 75% of bits already dropped. Building blocks for challenge responses. | Extra bit-dropping requires expensive on-the-fly reconstruction at every challenge. |
+| **Chaining Filter** | Efficiently constructs a chain of Proof Fragments into a Quality String | Dropping bits on Proof Fragments creates a combinatorial explosion of candidate chains that will not validate. Cost scales out of reach. |
+| **Quality String** | Increases recompute timing headroom for honest farmers up to the full 30-second block interval for any farm size | Attacker's response window is typically milliseconds per challenge, severely limiting supported plots. The log₂ advantage is reduced by the asymmetry between a farmer's 30-second window for rare block wins and an attacker's per-challenge compute across all plots passing effective plot filter. |
+| **Single Table** | All proofs stored as a single stream of sorted, encrypted data | Eliminates alternative data structures for attackers to exploit. Coalesced data reduces HDD random-access limitations, enabling more resistant plot filters. |
+| **Plot Groups** | Colocates challenge-response data across multiple plots | Reduces disk load, enabling more resistant plot filters with smaller challenge intervals. |
+| **Plot Strength** | Increases work to construct a proof; influences effective plot filter | Negligible difference for attacker in choosing one strength over another, but enables long-term network-level security adjustments. |
+
+
+Network-wide energy is dramatically reduced: full recomputes run only on block wins or the farmer's own infrequent pool-partial wins. Compression resistance is significantly stronger because 75% of bits are already dropped in Proof Fragments (each fragment keeps only 14 bits from each 56-bit pair of x-values), limiting attackers to costly plot reconstruction and targeted lazy-evaluation attacks.
+
+### Innovation 1: Proof Fragments, Chaining, and the Quality String
+
+The first major innovation constructs the Quality String from bit-dropped *partial proofs* rather than full proofs.
+
+**PoS1 approach.** The quality string was derived from a single pair of x-values retrieved from the full proof. For compressed plots, these values had bits dropped, requiring reconstruction at every challenge. The more bits dropped, the more compute required per challenge — progressively shifting the system toward proof of work.
+
+**PoS2 approach.** Consensus is designed so that bit-dropped partial proofs *form* the **Quality String**. Thus, reconstruction is not needed to determine how good a proof is.
+
+The building blocks for a **Quality String** are **Proof Fragments**. Each Proof Fragment encapsulates a matched set of 8 x-values (4 pairs), drops bits, and encrypts the result. Specifically, a Proof Fragment represents x1 through x8 of a proof. The even-indexed values (x2, x4, x6, x8) are removed entirely, and the remaining odd-indexed values (x1, x3, x5, x7) are each bit-dropped by k/2 bits, yielding 2k bits total. This is then encrypted with the plot ID to randomize structure and prevent attackers from exploiting sequential patterns.
+
+*Figure: Proof Fragment*
+
+The number of bits dropped and the selection criteria are calibrated against the matching algorithm's solve times on a Pi5 combined with optimal table structure. The result is a workload sufficient to prevent high-end GPUs from grinding compression attacks, yet feasible for a Pi5 to solve in under 10 seconds.
+
+Because partial proofs are *expected*, there is no need to reconstruct fragments to evaluate the quality string. Full proof reconstruction is deferred until a winning proof is found. Since winning proofs are rare (roughly 1 in billions depending on netspace), the amortized reconstruction cost is negligible and farm size is no longer compute-limited. This is orders of magnitude less work than PoS1's per-challenge reconstruction requirement for compressed plots.
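A rough sketch of Proof Fragment packing as described above: keep the odd-indexed x-values, drop the low k/2 bits of each, then mask with a plot-ID-derived keystream. The XOR "encryption" and the key derivation are placeholders for the real scheme:

```python
import hashlib

K = 28  # bits per x-value in PoS2

def make_fragment(xs: list, plot_id: bytes) -> int:
    """Pack x1, x3, x5, x7 (even-indexed x2..x8 are discarded entirely),
    keep only the top k/2 bits of each, and XOR with a plot-ID-derived
    keystream (a stand-in for the real encryption). Result: 2k bits."""
    assert len(xs) == 8
    packed = 0
    for x in xs[0::2]:                                    # x1, x3, x5, x7
        packed = (packed << (K // 2)) | (x >> (K // 2))   # drop low k/2 bits
    key = int.from_bytes(hashlib.sha256(plot_id).digest()[:7], "big")  # 56 bits
    return packed ^ key

xs = [i << 20 for i in range(1, 9)]          # eight 28-bit-range x-values
frag = make_fragment(xs, b"example-plot-id")
assert 0 <= frag < 2 ** (2 * K)              # a fragment is 2k = 56 bits
```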
+
+**The Chaining Filter.** Pre-determined bit-dropped fragments alone are insufficient against further compression. An attacker could drop *n* additional bits from a Proof Fragment, generating 2^n candidate quality strings, and only reconstruct the full proof when a candidate indicates a winner. Since block wins are rare, the attacker's amortized cost stays low — mirroring the honest farmer's advantage of deferred reconstruction.
+
+To counter this, PoS2 introduces the **Chaining Filter**, which dynamically builds the quality string as a concatenated sequence of 16 Proof Fragments. For a given challenge, a hashing function tests which fragments from sets A and B to chain, alternating between the two sets. If an attacker drops additional bits, each fragment in the chain multiplies the candidate space. With 16 chain elements, even a single bit drop produces a combinatorial explosion — (2^bits_dropped)^16 candidate chains — that quickly becomes computationally infeasible.
+
+*Figure: Chaining Filter*
+
+All Proof Fragments in set A start a new chain. Each subsequent Proof Fragment has a 1 in S chance of chaining, where S is the expected set size of Proof Fragment sets A and B. With a chaining factor of 1.1 and 16 links, an attacker dropping 1 bit can support only ~62 TiB of plots. With 2 bits dropped, the attacker's supported farm collapses to 0.01 TiB, while also requiring hundreds of millions of false-positive proof recomputes. The current recommendation is 16 Quality Links with a chaining factor of 1.1, providing security against even a 100× GPU performance improvement over current hardware.
+
+One caveat: longer chains cause the distribution of valid responses to become bursty — most challenges produce zero valid Quality Strings, with occasional bursts of multiple Quality Strings. This is alleviated somewhat by front-loading high probabilities and tapering them down the chain to achieve a tighter distribution.
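The deterrent is easy to quantify: with 16 chain links, dropping b extra bits per fragment multiplies the candidate chains by (2^b)^16.

```python
def candidate_chains(bits_dropped: int, links: int = 16) -> int:
    """Each extra dropped bit doubles the candidates per fragment; a chain of
    `links` fragments multiplies those choices together."""
    return (2 ** bits_dropped) ** links

assert candidate_chains(0) == 1         # honest storage: one chain to check
assert candidate_chains(1) == 2 ** 16   # 1 dropped bit: 65,536 candidate chains
assert candidate_chains(2) == 2 ** 32   # 2 bits: ~4.3 billion per challenge
```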
+
+### Innovation 2: Single-Table Data Access for Challenges
+
+The second innovation restructures challenges to support coalesced data access, dramatically reducing HDD random-access requirements.
+
+Rather than increasing table count for compression resistance, PoS2 reduces to 3 tables during plotting (T1, T2, T3), then drops T1 and T2 from the stored plot, leaving a single on-disk table (T3) of sorted Proof Fragments. T1 and T2 serve as the basis for every recompute and are tuned so that a Pi5 can rebuild a k=28 proof from the Quality String in under 9 seconds. T3 enforces structural ordering and provides the grinding resistance layer, with data compressed via ANS near the theoretical limit for random data.
+
+
+| Table | Security purpose | Details | Stored bits |
+| -------------- | ------------------------------------------------------------------------------- | ------------------------------------------------- | ---------------------------------- |
+| T1–T2 (hidden) | Basis for every recompute. Inflicts bit-dropping saturation on Proof Fragments. | Tuned for Pi5 to rebuild k=28 proof in <9 seconds | 0 (not stored) |
+| T3 | Structural ordering. Plot strength and grinding resistance layer. | Challenges originate on Proof Fragment ranges. | ~2k bits per entry, ANS compressed |
+
+
+Challenges define two *ranged* sets of Proof Fragments as candidates for quality string construction. Because Proof Fragments are sorted, each range requires only one random access to locate the start, followed by a sequential read. A full challenge lookup therefore requires just **two random accesses and two sequential reads** — a dramatic improvement over PoS1's multi-table random lookups.
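The two-access pattern can be sketched in a few lines, with a binary search standing in for the single seek, followed by a sequential scan of the range:

```python
from bisect import bisect_left

def challenge_lookup(sorted_fragments, lo: int, hi: int):
    """One 'random access' (binary search to the range start) followed by a
    sequential scan; on disk this is a single seek plus one streamed read."""
    out = []
    for frag in sorted_fragments[bisect_left(sorted_fragments, lo):]:
        if frag >= hi:          # the sequential read ends at the range top
            break
        out.append(frag)
    return out

fragments = sorted([5, 1, 9, 3, 7, 2, 8])   # Proof Fragments are stored sorted
assert challenge_lookup(fragments, 3, 8) == [3, 5, 7]
```

A full challenge performs this twice, once for set A and once for set B.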
+
+The fragment set (scan range) size is tuned to balance compression resistance against disk bandwidth. Rather than targeting bit-drop saturation (where reconstruction exceeds a full plot grind — reached at range size ~8192 for k=28), the set size is calibrated so that no attack exceeds the effectiveness of the lazy-evaluation chaining attack (see Security section). This allows a smaller scan range with reduced disk I/O while maintaining equivalent security, since security is always anchored to the most effective known attack.
+
+The single-table design also eliminates alternative data structures for attackers to exploit. With one continuous blob of sorted data, there are no additional table pointers, back-references, or structural redundancies.
+
+**Challenge on Proof Fragments.** In PoS1, challenges began at the root (last table), which stored a redundant final hash that could be recomputed from the x-values. Attackers could rearrange leaves to open significant exploits. In PoS2, challenges begin at the leaves via an ordered-scan filter over sorted Proof Fragments. Neighboring entries decrypt to statistically unrelated x-values, so attackers cannot harvest similar neighbors or reuse partial work. If an attacker re-orders data, they must add bits to restore ordering for the scan, negating compression gains.
+
+### Innovation 3: Grouped Plot ID Filter
+
+The third innovation replaces the previous plot filter with a **grouped plot ID filter** that guarantees all plots within a group pass a challenge simultaneously.
+
+With grouped filtering, the plot format interleaves data from all plots in a group. When a group passes a challenge, all *N* plots respond using the same two random-access lookups and correspondingly larger sequential reads. HDD load thus depends on *sequential* read bandwidth rather than random access count — an orders-of-magnitude improvement.
+
+Up to 65,536 plots can be grouped together (see our specifications for recommended plot groupings). Example grouped plot sizes:
+
+
+| # plots | Grouped plot size |
+| ------- | ----------------- |
+| 1 | 1 GB |
+| 16 | 16 GB |
+| 32 | 32 GB |
+| 100 | 100 GB |
+
+
+Grouping is effectively required on HDDs to manage harvester startup time and ongoing disk access load. Individual plots would impose high pre-loading and indexing demands: for example, on startup it could take ~130 seconds and additional memory overhead to read the headers of 13,000 individual plots on a 20 TiB disk.
+
+The grouped filter provides three additional benefits:
+
+- **Rental attack resistance.** The grouped plot ID selection pattern limits grinding leverage to just 1.25× (down from 3.5× in PoS1).
+- **HDD spin-up time.** Farmers receive at least 40 seconds advance notice of which plot groups will pass the next challenge, allowing sleeping disks to spin up.
+- **Energy efficiency.** Farmers can sleep disks between challenges and wake them only when a response is required.
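A toy version of the grouped filter decision; the hash function and parameters here are illustrative only:

```python
import hashlib

def group_passes(challenge: bytes, group_id: int, filter_bits: int) -> bool:
    """Toy grouped filter: one hash decides for the whole group, so every plot
    in a passing group answers the same challenge with shared range lookups."""
    digest = hashlib.sha256(challenge + group_id.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big") % (2 ** filter_bits) == 0

# filter_bits = 0 admits every group; each added bit halves pass frequency,
# and a passing group's N plots share two lookups instead of N random accesses.
assert group_passes(b"challenge-1", 7, 0)
```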
+
+### Innovation 4: Tunable-Difficulty Matching Algorithm with Fast Validation
+
+The fourth innovation is a novel matching algorithm that decouples construction difficulty from validation cost, with CPU hardware support to narrow the CPU/GPU performance gap.
+
+The matching algorithm is novel in two respects:
+
+- **Per-table tunable strength.** Plotting cost (and thus resistance to rental and compression attacks) scales as O(2^N) with strength, while verification remains O(1). The benefit of additional matching bits is that difficulty can be set very high — increasing plotting time — while incurring negligible cost during proof validation. Since validation is effectively free, difficulty can be tuned as needed without adding compute expense to the network.
+- **Asymmetric hashing for pairing.** Honest farmers can recompute full proof pairs efficiently: solving the g function across the domain of x-values is expensive, but subsequent pairings are cheap because the results are amortized. Conversely, an attacker solving for a subset of the proof must incur the initial expensive step repeatedly. Solving 16 Proof Fragments together takes less than 1 ms per fragment, whereas solving just 2 fragments in isolation takes over 10 ms each.
+
+**G-function design.** The g-function is the initial hash applied to x-values to generate table entries:
+
+```
+g(x) = custom_aesenc(plot_id || x) → k-bit match_info
+```
+
+Inspired by RandomX hashing, the design leverages hardware-accelerated AES instructions (AESENC), which are supported on the Pi5 and most modern CPUs. GPUs face higher overhead for these operations. Benchmarks show this narrows the performance gap between GPU and CPU by approximately 10×, meaning work performed by a low-end consumer CPU (the Pi5) is much closer to an attacker's GPU throughput than with conventional hashing (e.g., ChaCha or Blake).
+
+Constructing and grinding plots using the matching algorithm is both compute-bound (intensive hashing) and memory-bound (memory sort to find matches). However, increasing plot strength with perfectly constant validation time would create an exploitable asymmetry (see Security section). The solution applies a fraction of the increased difficulty to validation, scaling it linearly with plot strength. Even with this concession, PoS2 achieves an order-of-magnitude improvement in the ratio of construction cost to verification cost compared to PoS1.
+
+Some plot reconstruction attacks can store matching bits to accelerate re-grinding, so at least one table stage applies difficulty via hashing rounds rather than matching bits alone. The proposed match parameters are:
+
+```
+T1: 2 section bits, 2 matching bits, 24 target bits, extra_hashing_rounds = strength
+T2/T3: 2 section bits, (2 + strength) matching bits, (24 − strength) target bits
+```
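+
+For example, substituting strength = 4 directly into the parameters above (illustrative only):
+
+```
+T1: 2 section bits, 2 matching bits, 24 target bits, extra_hashing_rounds = 4
+T2/T3: 2 section bits, 6 matching bits, 20 target bits
+```
+
+Each added matching bit doubles the 2^(match_bits) search work during plotting while validation remains O(1).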
+
+### Innovation 5: Effective Plot Filter Combines Plot Strength and Network Plot ID Filter
+
+The final innovation allows farmers to trade plotting time for reduced challenge frequency. The Effective Plot Filter combines the network-specified Network Plot ID Filter with a plot's Strength. Plots created at higher strengths take longer to generate but receive a more favorable effective plot filter, lengthening the interval between challenges. The Effective Plot Filter is described in detail in a separate CHIP: [https://github.com/Chia-Network/chips/pull/161](https://github.com/Chia-Network/chips/pull/161).
+
+A farmer can choose to plot at higher strengths to reduce disk load to as little as one expected challenge per plot group per day. By placing all grouped plots on a single disk, the farmer can wake the disk once per day and sleep it the rest of the time.
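+
+To make the disk-idle claim concrete, assume Chia's 9.375-second signage-point interval (the same 9,375 ms constant used in the rental-attack formulas in the Security section):
+
+$$
+\frac{86{,}400\,\text{s/day}}{9.375\,\text{s}} \approx 9{,}216 \text{ challenges/day}, \qquad \frac{9{,}216}{8{,}192} \approx 1.1 \text{ expected passes/day at effective filter } 8192
+$$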
+
+This mechanism also provides indefinite long-term security. As technology advances, the protocol can reduce the Network Plot ID Filter, and farmers keep pace by replotting at higher strengths. The filter is expected to adjust approximately once every three years, and plot strength increases are scheduled to begin in 2036, occurring approximately every two years; see [https://github.com/Chia-Network/chips/pull/161](https://github.com/Chia-Network/chips/pull/161) for the exact schedule. Note that plots don't expire, but each reduction in the Network Plot ID Filter will double the challenge frequency, and thus harvester and disk loads.
+
+Two main methods for long-term security interact:
+
+
+| Factor | Network Plot ID Filter Reduction | Plot Strength Increase |
+| ----------------- | ------------------------------------------- | ---------------------- |
+| **Harvester** | Increased energy, reduced farm size support | No impact |
+| **Solver** | No impact | Increases solve time |
+| **HDD activity** | Increases, eventually impractical | Proportional decrease |
+| **Plotting time** | No impact | Proportional increase |
+
+
+The default plot strength is recommended for most farmers, providing fast plotting with sufficient grouping to keep disk load minimal. Farmers wanting to sleep their HDDs can select higher strengths. The network plot ID filter schedule is pre-determined — refer to [https://github.com/Chia-Network/chips/pull/161](https://github.com/Chia-Network/chips/pull/161) for the exact schedule.
+
+### Tuning PoS2 Parameters
+
+- **Proof Fragment challenge set size** is the fundamental trade-off: the smaller the set, the lower the HDD usage for large groupings, but the higher the variance in chaining and the lower the average construction time required for chaining.
+- If the construction time for chaining is too low, we increase the chaining hash work needed. The goal is a balance between the maximum farm size a single-CPU Pi5 can support and the bit-drop size available to an attacker (we will construct a table of these trade-offs).
+- **Front-loading chaining** increases the hashes required for chaining (more work) but reduces variance in chaining. Front-loading makes the work proportionally more difficult, at the cost of higher memory requirements during chaining.
+
+### PoS2 vs. PoS1 Comparison
+
+**PoS1 (compressed plots).** A challenge response requires identifying all plots passing the plot ID filter, then reconstructing quality string proofs for each. Farm size is directly limited by reconstruction throughput. With a GPU capable of 40 ms reconstruction, a farmer supports ~234 plots passing the filter. With a plot filter of 256, this permits roughly 60,000 plots (~3 PiB at 50% compression) at full GPU load. The farmer must continuously weigh compression levels against storage costs, energy costs, GPU costs, XCH price, and netspace.
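+
+As a consistency check on these PoS1 figures:
+
+$$
+\frac{60{,}000 \text{ plots}}{256} \approx 234 \text{ plots per signage point}, \qquad 234 \times 40\,\text{ms} \approx 9.4\,\text{s}
+$$
+
+which saturates the ~9.4-second signage-point interval at full GPU load.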
+
+**PoS2 (partial proofs with chaining).** A challenge response identifies all grouped plots passing the filter and dynamically constructs the Quality String for each — a lightweight operation. Full proof reconstruction occurs only for winning proofs. The farmer has up to 20 seconds to complete reconstruction; a Pi5 finishes in under 8 seconds. Because winning proofs are rare, farm size is not reconstruction-limited. A single-threaded Pi5 can support a farm exceeding 1 PB. The farmer's optimization focus shifts to energy efficiency: choosing plot strengths and disk sleep/wake strategies.
+
+Resistance to rental attacks is over 100× higher than in PoS1, with additional scheduled adjustments to the plot ID filter (see CHIP-49).
+
+## Specification
+
+### Technical Specifications and Performance
+
+#### Plot Sizes
+
+Only k=28 is supported. The on-disk format stores only T3 (sorted Proof Fragments), producing plots of approximately 0.92 GiB each — dramatically smaller than PoS1's 101.4 GiB uncompressed plots.
+
+Only even k-sizes are viable due to symmetric properties of the format. Additional k-sizes were rejected because the weakest k-size is always the attack target, larger k-sizes impose higher hardware demands, and plot grouping already addresses the need for varied storage granularities.
+
+Grouping at least 32 plots is recommended; up to 65,536 plots can be grouped together. Individual 1 GiB plots impose excessive indexing demands on the OS and RAM.
+
+
+| # plots | Grouped plot size |
+| ------- | ----------------- |
+| 1 | 0.92 GiB |
+| 16 | 14.7 GiB |
+| 32 | 29.4 GiB |
+| 100 | 92.0 GiB |
+
+
+The smaller per-plot size also reduces all-RAM plotting requirements, allowing most low-end systems to plot without temporary disk storage.
+
+#### Plotting Performance and Requirements
+
+CPU plotting is supported but less efficient than GPU. All times below are for all-RAM plotting; farmers may substitute temporary SSD storage for RAM at a modest speed penalty.
+
+Plotting time must always exceed the minimum threshold for rental attack resistance, making CPU plotting practical only for small numbers of plots.
+
+
+| | Raspberry Pi 5 8GB | Mac M3 Pro 12GB | Ryzen 5600 (6-core) | Nvidia 5090 |
+| --------------------------------- | ------------------ | --------------- | ------------------- | ----------- |
+| **Time per plot** (base strength) | 240s | 60s | 30s | 2s |
+| **Plotted space/day** | ~360 GiB/day | ~2 TiB/day | ~4 TiB/day | ~40 TiB/day |
+
+
+Plotting time approximately doubles with each strength increment. At lower strengths, general memory-management overhead is still a factor; at higher strengths, matching performance dominates and doubling is consistent.
+
+
+| Strength | CPU Time (s) | GPU Time (s) |
+| -------- | ------------ | ------------ |
+| base | 21.8 | 1.1 |
+| +1 | 35.5 | 1.9 |
+| +2 | 64.7 | 3.7 |
+| +3 | 124.4 | 7.3 |
+| +4 | 243.1 | 14.4 |
+| +5 | 481.3 | 28.6 |
+| +6 | 955.8 | 57.0 |
+| +7 | 1904.7 | 113.7 |
+| +8 | 3802.5 | 227.3 |
+
+
+RAM and storage requirements depend on group size:
+
+
+| # plots in group | Min CPU RAM | Min GPU RAM (optional) | Total RAM + storage needed |
+| ---------------- | ----------- | ---------------------- | -------------------------- |
+| 1 | 4 GB | 2 GB | 12 GB |
+| 2 | 4 GB | 2 GB | 13 GB |
+| 5 | 4 GB | 2 GB | 16 GB |
+| 21 | 4 GB | 2 GB | 32 GB |
+| 53 | 4 GB | 2 GB | 64 GB |
+| 117 | 4 GB | 2 GB | 128 GB |
+| 245 | 4 GB | 2 GB | 256 GB |
+| 1 + n | 4 GB | 2 GB | (12 + n) GB |
+
+
+Temporary storage can substitute for RAM. At higher strengths the relative impact of swap latency decreases.
+
+#### Proof Solving Times
+
+After a sufficiently high-quality Quality String is found, the solver reconstructs the full 128 x-values for network verification. Solve time depends on the maximum plot strength in the farm. The Pi5 solves a k=28 proof in under 8 seconds at strengths up to +4.
+
+
+| Strength | Raspberry Pi 5 | M3 Pro | Ryzen 9 9950X |
+| -------- | -------------- | ----------------- | ---------------- |
+| base | ~4.3 s | ~340 ms | ~220 ms |
+| +1 | ~4.5 s | ~370 ms | ~240 ms |
+| +2 | ~4.9 s | ~450 ms | ~280 ms |
+| +3 | ~5.7 s | ~660 ms | ~400 ms |
+| +4 | ~7.3 s | ~1.1 s | ~615 ms |
+| +5 | ~10 s | ~1.9 s | ~1.1 s |
+| +6 | ~17 s | ~3.7 s | ~1.9 s |
+| +7 | ~30 s | ~7.3 s | ~3.7 s |
+| +8 | — | ~14.5 s | ~7.3 s |
+| +n | — | ~2^(n−8) × 14.5 s | ~2^(n−8) × 7.3 s |
+
+
+Strengths +5 and above are capped at effective plot filter 8192 until the scheduled filter adjustments take effect.
+
+#### HDD Activity
+
+HDD activity depends on plot grouping, plot strength, and disk capacity. The table below assumes 10 ms random access and 250 MB/s sequential read. Higher plot strength increases the Effective Plot Filter proportionally, reducing average load.
+
+
+| Disk Capacity | Strength (eff. plot filter) | Group Size | Max load/challenge | Avg load | Bandwidth/day |
+| ------------- | --------------------------- | ---------- | ------------------ | -------- | ------------- |
+| 5 TB | base (512) | 1 | ~4.48% | ~2.09% | ~42 MB |
+| 5 TB | base (512) | 16 | ~0.85% | ~0.13% | ~42 MB |
+| 5 TB | +1 (1024) | 16 | ~0.64% | ~0.07% | ~21 MB |
+| 5 TB | +2 (2048) | 16 | ~0.43% | ~0.03% | ~10 MB |
+| 5 TB | +3 (4096) | 16 | ~0.21% | ~0.01% | ~5 MB |
+| 20 TB | base (512) | 1 | ~12.4% | ~8.4% | ~170 MB |
+| 20 TB | base (512) | 2 | ~7.4% | ~4.2% | ~170 MB |
+| 20 TB | base (512) | 16 | ~2.2% | ~0.52% | ~170 MB |
+| 20 TB | base (512) | 32 | ~1.31% | ~0.27% | ~170 MB |
+| 20 TB | base (512) | 64 | ~0.89% | ~0.14% | ~170 MB |
+| 20 TB | base (512) | 100 | ~0.69% | ~0.09% | ~170 MB |
+| 20 TB | base (512) | 1000 | ~0.47% | <0.01% | ~170 MB |
+| 100 TB | base (512) | 32 | ~3.42% | ~1.32% | ~850 MB |
+| 100 TB | base (512) | 64 | ~2.15% | ~0.65% | ~850 MB |
+| 100 TB | base (512) | 100 | ~1.85% | ~0.46% | ~850 MB |
+| 100 TB | base (512) | 1000 | ~0.47% | ~0.05% | ~850 MB |
+
+
+Plots in a group can be assigned one of up to 256 `meta_group` values. The effective plot filter guarantees that grouped plots with different meta groups never pass the same challenge, reducing peak load toward the average. For example, 202 meta groups × 100 grouped plots = 20,200 plots, where peak load converges to ~0.09%.
+
+Total daily bandwidth is low, so even large group counts can be read well within the challenge response window.
+
+A tool to estimate HDD activity is available: `./analytics simdiskusage`
+
+#### Harvester Farm Size Support
+
+Chaining Proof Fragments at challenge time is the primary CPU cost. The table below shows single-thread Pi5 limits (conservative, since the Pi5 has 4 threads and other harvester tasks are comparatively light).
+
+
+| CPU | Avg Plot Strength | Supported Farm Size (PiB) |
+| ----------------- | ----------------- | ------------------------------------------------- |
+| Pi5 single-thread | base | 1.2 |
+| Pi5 single-thread | +1 | 2.4 |
+| Pi5 single-thread | +2 | 4.8 |
+| Pi5 single-thread | +3 | 9.6 |
+| Pi5 single-thread | +4 | 19.2 |
+| Pi5 single-thread | +n | 1.2 × 2^n (capped by effective plot filter at +4) |
+
+
+### Proof of Space Specification
+
+A full PoS2 proof, for a given 32-byte plot ID with k = 28, comprises a chain of 16 sub-proofs of 8 x-values each: 128 k-bit values, an 8-bit strength, a 16-bit plot index, and an 8-bit meta_group (128 × 28 + 8 + 16 + 8 = 3,616 bits).
+
+```
+Proof 1 = x1, x2, ..., x8 (x ∈ [0, 2^k))
+Proof 2 = x9, x10, ..., x16
+...
+Proof 16 = x121, x122, ..., x128
+```
+
+For the full specification, refer to the ProofValidator in the reference implementation: [https://github.com/Chia-Network/pos2-chip](https://github.com/Chia-Network/pos2-chip)
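+
+An illustrative pseudocode sketch of the validation flow (names are hypothetical and simplified; the ProofValidator linked above is authoritative):
+
+```
+validate(plot_id, k, strength, proof):
+    for i in 1..16:
+        sub_proof = proof.x[8*(i-1)+1 .. 8*i]            # 8 x-values
+        check pairings of sub_proof via g(plot_id || x)  # O(1), no 2^(match_bits) search
+    check that the 16 sub-proofs chain into a valid Quality Chain
+    check the resulting Quality String against the challenge and difficulty
+```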
+
+### Pooling Protocol Specification
+
+> **Note:** The pooling protocol is not part of the hard fork and is a work in progress.
+
+Key requirements:
+
+- Pools accept only one proof per plot ID (group) per challenge, reducing farmer solving load.
+- Solving timings are relaxed when a farmer wins multiple pool partials for a single challenge, preventing solver overload.
+- Software prioritizes block wins over pool partials for solving.
+- Optionally, pool operators may accept only the Quality String without requiring full proof reconstruction, operating as a trust-or-verify service.
+
+### Transition Period
+
+The transition period — during which new plots are accepted and legacy plots are phased out — is proposed in a separate CHIP. Production-level plotters and harvesters will be released before the hard fork date. Some netspace will inevitably be left behind, but the majority of farmers will be incentivized to replot.
+
+## Test Cases
+
+A test suite is available in the [reference implementation repository](https://github.com/Chia-Network/pos2-chip).
+
+## Reference Implementation
+
+Full reference implementations are available at: [https://github.com/Chia-Network/pos2-chip](https://github.com/Chia-Network/pos2-chip)
+
+- **C/C++ CPU plotter** — generates a single plot
+- **C/C++ CPU solver** — reconstructs full proofs from partial proofs
+- **C/C++ prover** — responds to challenges and produces proofs from stored plots
+- **C/C++ CUDA benchmark** — provides GPU performance estimates for plotting
+
+## Security
+
+This section analyzes the security of PoS2 against all known attack vectors. For cost comparisons, we use a simplified model comparing energy (Watts) and upfront cost ($). When both energy and cost for an attack exceed those of honest farming, the attack is clearly uneconomical.
+
+The following tunable parameters govern proof-of-space work:
+
+- **AESENC hashing rounds**: Controls CPU-boundedness. Higher values narrow the CPU/GPU performance gap but increase validation time.
+- **Match bits (T1/T2/T3)**: Each match requires 2^(match_bits) rounds of memory-bound sorting and compute-bound hashing. Validation remains O(1).
+- **Extra round multiplier (T1)**: Increases CPU-bound resistance to compression, but raises validation cost. Pi5 must validate in < 1 ms.
+- **Proof Fragment challenge set size**: Larger sets increase compression resistance for certain attacks and harvester compute, while decreasing HDD usage.
+
+These parameters have been iteratively tuned against all known attacks. The main attack categories are:
+
+1. **Rental Attack (Full Grind)** — No plot data retained; all data reconstructed to T3.
+2. **Match Bits Storage** — Attacker stores which match bits survive each matching round for deterministic reconstruction, trading storage for faster grind time.
+3. **Proof Fragment Bit Dropping** — Attacker solves all Proof Fragments in the challenge set at challenge time, prior to chaining.
+4. **Lazy-Solve Chaining** — Proof Fragments are bit-dropped; during chaining the attacker resolves only candidates that could extend the current chain, caching results.
+5. **Collected Xs** — Attacker collects all x-values feeding into T3 within a challenge range, using them as seeds for plot reconstruction.
+6. **Collected LXs** — Attacker collects only left-pair x-values feeding into T3, exploiting the asymmetric matching structure.
+
+### Rental Attacks
+
+A rental attack (full plot grind) has the advantage of pre-filtering plot generation to always pass the Effective Plot ID Filter. In PoS1, attackers could grind plot IDs to pass 3–4 successive challenges, giving ~3.5× leverage. In PoS2, the Effective Plot ID Filter guarantees a plot ID can only pass once in 16 successive signage points. However, a 4-signage-point look-ahead allows an attacker to grind a plot ID that passes once in those 4 points and once in the next 16, yielding 1.25× leverage.
+
+$$
+\text{Spoofed Plots} = \frac{\text{Effective Plot ID Filter} \times 1.25 \times 9375}{\text{T3 reconstruction time (ms)}}
+$$
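+
+As a worked example, the first row of the table below follows from base strength with the benchmarked 5090 T3 time of 1,100 ms:
+
+$$
+\frac{512 \times 1.25 \times 9{,}375}{1{,}100} \approx 5{,}455 \text{ spoofed plots}, \qquad \frac{5{,}455 \times 0.92\,\text{GiB}}{1{,}024} \approx 4.90\ \text{eTiB}
+$$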
+
+
+| Eff. Plot Filter | Strength | 5090 T3 time (ms) | Spoofed plots | Spoofed eTiB | Attacker W/TB | Attacker $/eTiB |
+| ---------------- | -------- | ----------------- | ------------- | ------------ | ------------- | --------------- |
+| 512 | base | 1,100 | 5,455 | 4.90 | 61.22 | $408.12 |
+| 1024 | +1 | 1,900 | 6,316 | 5.67 | 52.87 | $352.46 |
+| 2048 | +2 | 3,700 | 6,486 | 5.83 | 51.48 | $343.19 |
+| 4096 | +3 | 7,300 | 6,575 | 5.91 | 50.78 | $338.55 |
+| 8192 | +4 | 14,400 | 6,667 | 5.99 | 50.09 | $333.91 |
+
+
+These figures conservatively assume a 5090 consuming 300 W at $2,000. An honest farmer typically operates at under 6 W/TB for HDD storage and ~$10/TB, compared to ~60 W/TB and over $300/TB for the GPU attacker. On-the-fly recompute for spoofing is far too expensive for regular farming rewards; the main incentive is a network-level attack.
+
+For network attack resistance, we target a lower bound of 5% of the projected 100M 2027 H100e production (5,000,000 GPUs). Since T3 reconstruction time is the resistance metric, the minimum required T3 time for an H100e is:
+
+$$
+\text{Time to T3}_{\mathrm{ms}} = \frac{\text{GPUs} \times \text{PlotIdFilter} \times 1.25 \times \text{BlockInterval(ms)} \times \frac{\text{PlotSize GiB}}{1024}}{\text{TotalNetspace TiB}}
+$$
+
+For a 2027 target netspace of 10 EiB:
+
+
+| # H100e | Netspace to spoof (EiB) | Base Filter | Strength | Eff. Filter | Required H100e T3 (ms) | Est. H100e time | Benchmarked 5090 T3 (ms) | 5090 TiB/day |
+| --------- | ----------------------- | ----------- | -------- | ----------- | ---------------------- | --------------- | ------------------------ | ------------ |
+| 5,000,000 | 10 | 512 | base | 512 | 2,570 | 2,000 | 1,100 | 70.57 |
+| 5,000,000 | 10 | 512 | +4 | 8192 | 41,127 | 25,000 | 14,400 | 5.39 |
+
+
+A 512 Base Network Plot ID Filter requires ~2.5-second H100e plotting time to prevent rental attacks targeting 10 EiB. Our benchmarked 5090 times and estimated H100e times are close to these requirements. Lowering the filter to 256 would increase resistance but double harvesting energy and require higher-strength plotting for equivalent effective filters.
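+
+As a worked check of the base-strength row above, taking BlockInterval = 9,375 ms and 10 EiB = 10,485,760 TiB:
+
+$$
+\frac{5{,}000{,}000 \times 512 \times 1.25 \times 9{,}375 \times \frac{0.92}{1{,}024}}{10{,}485{,}760} \approx 2{,}570\,\text{ms}
+$$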
+
+Rental attack resistance will increase with expected netspace growth and when the base network plot filter adjusts.
+
+### Match Bits Storage and Deterministic Reconstruction Attack
+
+The matching algorithm uses match bits to perform 2^(match_bits) iterations to find valid pairs. These iterations require both sorting (memory-bound) and hashing (compute-bound). Plot strength influences the number of match bits.
+
+To keep the solver efficient on a Pi5, T1 construction is capped at 2 match bits. T2 and T3 use 2 match bits at base strength, plus one additional match bit per strength increment.
+
+Validation is fast because the specific match bit used to produce a successful pairing is implicitly encoded in the left side of the pair, eliminating the need to search 2^(match_bits) hashes during verification.
+
+**Attack mechanism.** An attacker constructs a plot and stores which of the 2^(match_bits) iterations produced each match. On reconstruction, the attacker replays only the stored iteration instead of searching all possibilities. For T1 (2 match bits), this requires 2 bits per entry across 2^k entries. For T2/T3, the stored bits range from 2 to 6 depending on strength. At full strength (+4), the attacker stores 2 bits in T1 and 6 bits each in T2 and T3 — a total of 14 bits per entry versus 29.45 bits for an honest plot, representing significant compression.
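+
+The compression ratio at full strength follows directly:
+
+$$
+\frac{2 + 6 + 6}{29.45} = \frac{14}{29.45} \approx 47.5\%
+$$
+
+of the honest per-entry storage, matching the figure reported for Match Bits Storage in the Attack Summary table.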
+
+At challenge time, the attacker reads all stored data and reconstructs the plot with only 2^k pairings per table instead of 2^k × 2^(match_bits).
+
+This attack grows more effective at higher strengths because the effective plot filter halves the work per strength increment, while the attacker needs only one additional stored bit per table.
+
+**Mitigation.** T1 is fixed at 2 match bits regardless of strength. Instead, T1 increases the number of AESENC hashing rounds with strength. This means the attacker cannot avoid the compute cost by storing match bits for T1 — the hashing work must be repeated. The trade-off is increased validation time at higher strengths, but this is necessary and effective against this attack.
+
+### Proof Fragment Bit Dropping
+
+An attacker dropping n bits from a Proof Fragment produces 2^n candidate fragments. Decrypting a candidate yields bit-dropped x-values to solve. Only one candidate will produce a valid match (false positives on solving 8 x-values are extremely rare for incorrect fragments).
+
+For each challenge, the attacker must validate every Proof Fragment in set A, since each starts a potential chain. For set B, only chaining candidates need validation — but since set A generates chains equal to the set size, most of set B will be candidates.
+
+This is why all Proof Fragments in set A start a chain: otherwise an attacker could selectively solve only the subset that forms Quality Strings.
+
+An efficient attacker minimizes work. For a 1-bit drop, only one of the two candidate x-value sets needs solving; if it fails, the other is correct by elimination. The bit-dropped x-values always have 14 missing bits, yielding 2^14 = 16,384 possible seed ranges. At higher bit-drop levels, the attacker can cache already-computed seed ranges that overlap across different Proof Fragments.
+
+### Chaining False Positives Attack
+
+Rather than resolving each Proof Fragment before chaining, an attacker can propagate false positives through the chain, collecting all candidate Quality Strings and only resolving those that pass block difficulty.
+
+#### Difficulty and Block Win Probability
+
+A 0.92 GiB plot at 10 EiB netspace has approximately a 1 in 10 billion chance of winning per challenge. An attacker producing billions of false-positive Quality Strings could in theory avoid reconstruction if none win. Resistance depends on the combinatorial cost of maintaining all candidate chains.
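+
+The win probability follows from the plot's share of netspace:
+
+$$
+\frac{0.92\,\text{GiB}}{10 \times 2^{30}\,\text{GiB}} \approx 8.6 \times 10^{-11},
+$$
+
+i.e. roughly 1 in 10 billion per challenge.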
+
+#### Combinatorial Analysis
+
+
+| Chain link | Honest hashes | 0.5-bit drop | 1-bit drop | 2-bit drop |
+| ---------- | ----------------- | -------------------- | ---------------------- | ----------------------------- |
+| 1 | 64 | 91 | 128 | 256 |
+| 2 | 4,096 | 8,192 | 16,384 | 65,536 |
+| 3 | 4,096 | 11,585 | 32,768 | 262,144 |
+| 4 | 4,096 | 16,384 | 65,536 | 1,048,576 |
+| 5 | 4,096 | 23,170 | 131,072 | 4,194,304 |
+| 6 | 4,096 | 32,768 | 262,144 | 16,777,216 |
+| 7 | 4,096 | 46,341 | 524,288 | 67,108,864 |
+| 8 | 4,096 | 65,536 | 1,048,576 | 268,435,456 |
+| 9 | 4,096 | 92,682 | 2,097,152 | 1,073,741,824 |
+| 10 | 4,096 | 131,072 | 4,194,304 | 4,294,967,296 |
+| 11 | 4,096 | 185,364 | 8,388,608 | 17,179,869,184 |
+| 12 | 4,096 | 262,144 | 16,777,216 | 68,719,476,736 |
+| 13 | 4,096 | 370,728 | 33,554,432 | 274,877,906,944 |
+| 14 | 4,096 | 524,288 | 67,108,864 | 1,099,511,627,776 |
+| 15 | 4,096 | 741,455 | 134,217,728 | 4,398,046,511,104 |
+| 16 | 4,096 | 1,048,576 | 268,435,456 | 17,592,186,044,416 |
+| | **Total: 20,544** | **Total: 3,560,376** | **Total: 536,854,656** | **Total: 23,456,248,037,632** |
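+
+The table follows a closed form: for b bits dropped, the hashes at chain link n are the honest hashes scaled by 2^(b·n):
+
+$$
+\text{hashes}_b(n) = \text{hashes}_{\text{honest}}(n) \times 2^{bn}
+$$
+
+For example, at a 1-bit drop, link 3 requires 4,096 × 2³ = 32,768 hashes.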
+
+
+Chaining hash rates (tuned custom AES hashing):
+
+
+| Compute Model | Chain Hashes/ms |
+| ------------------- | --------------- |
+| Pi5 (single-thread) | 18,000 |
+| 5090 | 10,000,000 |
+
+
+
+| Compute Model | Bits Dropped | Compression | Chain Hashes | ms/chain (avg) | Eff. Plot Filter | Supported Farm (eTiB) | TiB gained | W/TB gained | $/TB gained |
+| ------------- | ------------ | ----------- | ------------------ | -------------- | ---------------- | --------------------- | ---------- | ----------- | ----------- |
+| Pi5 | 0 (honest) | — | 20,544 | 1.11 | 512 | 3,902.43 | — | — | — |
+| 5090 | 1 | 3.40% | 536,854,656 | 54.09 | 512 | 79.73 | 2.71 | 147.74 | $738.71 |
+| 5090 | 1 | 3.40% | 536,854,656 | 54.09 | 8192 | 1,275.74 | 43.32 | 9.23 | $46.17 |
+| 5090 | 2 | 6.79% | 23,456,248,037,632 | 2,363,129.45 | — | — | — | — | — |
+| 5090 | 0.5 | 1.70% | 3,560,376 | 0.36 | 512 | 12,022.76 | 204.12 | 1.96 | $9.80 |
+| 5090 | 0.83 | 2.82% | 93,113,396 | 9.38 | 8192 | 7,355.43 | 207.30 | 1.93 | $9.65 |
+| 5090 | 0.93 | 3.16% | 259,917,748 | 26.19 | 8192 | 2,635.02 | 83.21 | 4.81 | $24.04 |
+
+
+The attacker is incentivized to plot at maximum strength to leverage the largest effective plot filter (8192), which minimizes per-challenge chaining work.
+
+At 2 bits dropped, the required chain hashes are computationally infeasible. At 1 bit dropped, both W/TB and $/TB on the gained storage exceed honest farming costs.
+
+At fractional bit drops (interleaving full and 1-bit-dropped Proof Fragments), at 0.93 bits the attacker's W/TB approaches honest levels but $/TB is more than double. At 0.5 bits with filter 512 and 0.83 bits with filter 8192, the attacker's $/TB is comparable to honest farming with a small W/TB advantage on ~200 TB of effective farm — under 1% energy savings on a 7+ PiB effective farm.
+
+These calculations use best-case hash-only benchmarks and exclude memory-management overhead for constructing millions of additional chains. At 0.83 bits dropped, the attacker must manage ~40 million chains of 16 Proof Fragments each.
+
+A GPU-based implementation of this attack would quantify the additional practical overhead and determine whether increasing per-chain hash work would eliminate this marginal attack. However, any additional chaining work would proportionally reduce the maximum farm size a Pi5 can support.
+
+### Proof Fragment Components Bit Dropping
+
+This attack decomposes Proof Fragments into their constituent left-side x-values, sorts them, and compresses the result.
+
+Each Proof Fragment contains 4 left-side x-values bit-dropped by 14 bits, so a set of N Proof Fragments yields N × 4 sorted x-values. The number of distinct bit-dropped x-value groups is 2^(14−b) for a bit-drop of b. When N is large and b is large, collisions among groups increase, improving compression.
+
+Sorting N × 4 values provides approximately log₂(N × 4) bits of compression. At challenge time, the attacker decompresses and solves for the set.
+
+The optimal attack set size is independent of the challenge set size, since the attacker can choose any grouping. For example, grouping with 512 entries at 1-bit drop compresses to the same size and incurs the same reconstruction cost as grouping with 256 entries at 2-bit drops.
+
+### Collected Xs and Collected Left-Pair Xs
+
+Instead of decomposing Proof Fragments, the attacker extracts all individual x-values from Proof Fragments in a range, sorts them, and at challenge time reconstructs Proof Fragments covering the challenge range. Chaining then proceeds normally.
+
+**Collected Xs Attack**
+
+The attacker benefits from x-value reuse: pairings in one table propagate to the next, so the same x-value appears in multiple entries. The unique x-value ratio decreases as set size grows:
+
+
+| Num Entries | Unique Xs | Unique LXs | Xs/Entry | LXs/Entry |
+| ----------- | ----------- | ---------- | -------- | --------- |
+| 4,075 | 32,595 | 16,299 | 7.99 | 4.00 |
+| 8,119 | 64,941 | 32,472 | 8.00 | 4.00 |
+| 16,197 | 129,503 | 64,762 | 7.99 | 4.00 |
+| 32,735 | 261,619 | 130,850 | 7.99 | 4.00 |
+| 65,278 | 521,316 | 260,778 | 7.99 | 3.99 |
+| 131,140 | 1,045,410 | 523,183 | 7.97 | 3.99 |
+| 262,153 | 2,082,761 | 1,043,362 | 7.94 | 3.98 |
+| 523,812 | 4,134,025 | 2,074,963 | 7.89 | 3.96 |
+| 1,047,648 | 8,158,293 | 4,110,615 | 7.79 | 3.92 |
+| 2,096,448 | 15,897,282 | 8,069,569 | 7.58 | 3.85 |
+| 4,193,845 | 30,201,842 | 15,550,490 | 7.20 | 3.71 |
+| 8,390,753 | 54,777,027 | 28,947,232 | 6.53 | 3.45 |
+| 16,775,355 | 91,861,059 | 50,722,920 | 5.48 | 3.02 |
+| 33,541,075 | 137,090,460 | 80,664,455 | 4.09 | 2.40 |
+
+
+With 2^28 entries in a plot, the unique ratio drops at high coverage. In practice this requires enormous bandwidth, but our analysis assumes infinite attacker bandwidth. The LXs variant (collecting only left-pair x-values) is the most theoretically effective but still requires substantial overhead.
+
+### Attack Summary
+
+For each attack method, computations were run from base strength to the capped strength of +4 (effective plot filter capped at 8192), across all compression levels. The table below shows the most effective configuration for each attack.
+
+
+| Attack Method | Strength / Eff. Filter | Plot % of Honest | Challenge Time (ms) | Supported Farm (TB) | Saved Size (TB) | Attacker W/TB saved | Attacker $/TB saved | Energy Ratio (attack/honest) | Cost Ratio (attack/honest) |
+| ---------------------------------- | ---------------------- | ---------------- | ------------------- | ------------------- | --------------- | ------------------- | ------------------- | ---------------------------- | -------------------------- |
+| PF Bit Drop (1 bit) | +6 / 8192 | 96.60% | 137 | 536.04 | 18.84 | 21.23 | $106.15 | 38.60 | 10.62 |
+| Chaining PF Bit Drop (0.83 bit)* | +6 / 8192 | 97.18% | 9.38 | 7,355.43 | 207.30 | 1.93 | $9.64 | 0.35 | 0.96 |
+| PF Components (64 set, 2-bit drop) | +6 / 8192 | 72.40% | 682 | 80.55 | 30.78 | 13.00 | $64.99 | 23.63 | 6.50 |
+| Collected Xs (33.5M set) | +6 / 8192 | 33.60% | 3,353 | 7.60 | 15.04 | 26.60 | $133.02 | 48.37 | 13.30 |
+| Collected LXs (4.2M set) | +6 / 8192 | 70.00% | 584 | 90.96 | 38.99 | 10.26 | $51.30 | 18.66 | 5.13 |
+| Match Bits Storage | +6 / 8192 | 47.50% | 1,001 | 36.06 | 39.79 | 10.05 | $50.26 | 18.28 | 5.03 |
+
+Using our base Network Plot ID Filter of 512, we are substantially resistant to all compression attacks that reduce plot size below 97% of the uncompressed PoS2 plot: the attacker's energy ratio is around 20× that of an honest farmer or more, and costs are 5× higher or more. Using our analysis tool, we remain resistant up to a Network Plot ID Filter of 4096, which represents 3 doublings of compute efficiency. In technology terms, this suggests we do not need to adjust our base Network Plot ID Filter until GPUs are 8 times more efficient on a performance-per-watt basis than today, which could be more than 5 years away.
+
+The Chaining PF Bit Drop attack is only potentially viable at very small compression levels (under 3%), since difficulty increases quadratically with each bit dropped. Further analysis with an optimized GPU implementation is needed to determine the practical limit of this attack. For HDDs at high plot strengths, honest farmers can sleep their drives to save energy, making it more cost effective to store honest plots than to incur the extra energy costs of this attack.
+
+### Statistical Attacks
+
+#### Removing Underperforming Partitions
+
+Partitions may exhibit slight variance in the number of Proof Fragments available for Quality String construction. For k=28, the average Proof Fragment count in a challenge scan range can differ by up to 1% from the expected value.
+
+The relative proof effectiveness of a partition is:
+
+$$
+\left( \frac{\text{Proof Fragments in partition}}{\text{Expected Proof Fragments}} \right)^{\text{Quality String length}} = \text{Relative effectiveness}
+$$
+
+If an attacker drops a partition with 32 fragments (expected: 64):
+
+$$
+\left( \frac{32}{64} \right)^{16} = \left( \frac{1}{2} \right)^{16} \approx 0.0015\%
+$$
+
+That partition produces ~0.0015% as many Quality Strings as average — effectively zero due to the 16-element chain compounding. Yet dropping it only removes 32 entries from a total of 2^28, a sacrifice of 32/2^28 ≈ 0.0000119% of the plot. The storage and challenge savings are negligible, making the attack pointless even for a severely underperforming partition.
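The compounding effect can be checked numerically (a minimal sketch using the k=28 numbers above; function names are illustrative):

```python
def relative_effectiveness(fragments: int, expected: int, chain_len: int = 16) -> float:
    """Relative Quality String output of a partition vs. an average partition."""
    return (fragments / expected) ** chain_len

def storage_sacrifice(fragments: int, k: int = 28) -> float:
    """Fraction of the plot given up by dropping a partition with `fragments` entries."""
    return fragments / 2 ** k

# A severely underperforming partition (32 fragments vs. 64 expected):
eff = relative_effectiveness(32, 64)   # (1/2)^16, roughly 0.0015%
saved = storage_sacrifice(32)          # 32 / 2^28, roughly 0.0000119% of the plot
```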
+
+#### Favoring Larger Plots
+
+Plot entry counts vary slightly due to statistical distribution. In the Quality String design, variance compounds across the 16-element chain, amplifying small deviations.
+
+A plot's relative output is approximately:
+
+$$
+\frac{\text{Number of Proof Fragments in Quality String}^{(\frac{\text{Number of Entries in Plot}}{\text{Expected Number of Entries}})} }{ \text{Number of Proof Fragments in Quality String} } = \text{Relative Output}
+$$
+
+For k=28 plots, raw size may deviate up to ~0.1% from expected, and compounded chain variance could raise proof output by up to ~0.3% by selecting only the largest plots. Farmers wanting to reclaim space should remove their smallest plots first.
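The relative-output formula can be evaluated directly; under the stated ~0.1% size deviation it yields roughly the ~0.3% figure quoted above (a sketch using the formula as written, with 16 Proof Fragments per Quality String; the entry counts are illustrative):

```python
def relative_output(entries: int, expected: int, qs_len: int = 16) -> float:
    """Relative proof output of a plot whose entry count deviates from expected."""
    return qs_len ** (entries / expected) / qs_len

# A plot 0.1% larger than expected produces roughly 0.28% more proofs.
boost = relative_output(entries=100_100, expected=100_000)
```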
+
+### Non-Viable Attacks
+
+#### Theoretical Compression
+
+There is no meaningful room for further data compression. ANS compression on sorted Proof Fragments in T3 is already near-optimal.
+
+#### Block Difficulty Filtering
+
+In PoS1, attackers could test partial proofs against global difficulty to prune candidates before full reconstruction. In PoS2, full proof reconstruction is required for each candidate, making such pruning impossible.
+
+##### Why Limit to 2k Bits?
+
+Each Proof Fragment encodes 4 of 8 x-values at k/2 bits each, totaling 2k bits. Exceeding this would make storing two full k-bit back-pointers more efficient, defeating the compression intent. Such a structure might compress slightly better but would impose unacceptable HDD load, which would force less resistant Plot ID Filter settings or advantage SSD-equipped attackers.
+
+#### Hellman Attacks
+
+Hellman attacks are most effective on early tables (already dropped in PoS2) or on final output values in the last table (not used in PoS2). Quality String requirements would force thousands of Hellman recompute passes, erasing any precomputation benefit.
+
+#### Table Restructuring
+
+With only one stored table, the plot is a single continuous blob of sorted data with no table pointers or back-references. There are no alternative data structures to exploit.
+
+Index pointer overhead exists but is negligible relative to total plot size.
+
+#### Underweighted Data
+
+If certain plot data were rarely used, an attacker could drop and reconstruct it on demand. However, scan ranges for challenges are uniformly random, and statistical attacks on underused ranges are ineffective (see [Statistical Attacks](#statistical-attacks)). The format stores no redundant or underused data.
+
+#### Parameter Tuning for Optimal Settings
+
+**Fixed constraints:**
+
+- Pi5 must solve a k=28 proof in under 8 seconds
+- HDD activity on a fully loaded 20 TB disk must not exceed 10%
+- Rental attack resistance must defend against 5 million H100e GPUs
+- Harvester must support at least 200 TiB on a single CPU
+- No economically viable compression
+
+**Relaxed constraints (optimized where possible):**
+
+- Lower HDD activity
+- Faster plotting times
+- Lower harvester energy
+
+**Proposed parameter settings:**
+
+
+| Parameter | Value |
+| --------------------------------- | --------------------- |
+| Network Plot ID Filter | 512 |
+| Proof Fragment Challenge Set Size | 64 |
+| Number of Chains | 16 |
+| Chaining Factors | 64/1/1/1/.../1/(1/64) |
+| T1 Match Bits | 2 |
+| T1 Hashing Extra Rounds | × 2^strength |
+| T2/T3 Match Bits | 2 + strength |
+| T2/T3 Hashing Extra Rounds | 0 |
+
+
+These settings meet all fixed constraints. Further tuning may improve relaxed constraints as additional tooling and benchmarks become available.
+
+### Future Proofing
+
+Separate CHIPs present solutions for future-proofing through scheduled filter reductions and minimum plot strength increases. These automated changes can be delayed or prevented via soft forks as the community responds to technological advances over time.
+
+## Copyright
+
+Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
\ No newline at end of file
diff --git a/assets/chip-0048/chaining-set-of-paths-two-partitions.png b/assets/chip-0048/chaining-set-of-paths-two-partitions.png
new file mode 100644
index 00000000..372aca42
Binary files /dev/null and b/assets/chip-0048/chaining-set-of-paths-two-partitions.png differ
diff --git a/assets/chip-0048/chaining-set-of-paths.png b/assets/chip-0048/chaining-set-of-paths.png
new file mode 100644
index 00000000..ed416ed4
Binary files /dev/null and b/assets/chip-0048/chaining-set-of-paths.png differ
diff --git a/assets/chip-0048/chip-specification-pos.md b/assets/chip-0048/chip-specification-pos.md
new file mode 100644
index 00000000..df898d7b
--- /dev/null
+++ b/assets/chip-0048/chip-specification-pos.md
@@ -0,0 +1,610 @@
+>[!NOTE]
+>This section is still a work in progress, pending the Prover reference implementation that will clarify the Chaining specification sections.
+
+# Table of Contents
+
+- [Table of Contents](#table-of-contents)
+- [Definitions](#definitions)
+- [Proof Format](#proof-format)
+- [New Matching Algorithm](#new-matching-algorithm)
+ - [Matching Bits](#matching-bits)
+ - [match\_info](#match_info)
+ - [sub\_k and partition bits](#sub_k-and-partition-bits)
+ - [g function](#g-function)
+ - [matching\_target hash](#matching_target-hash)
+- [matching section](#matching-section)
+ - [M1(x1,x2)](#m1x1x2)
+ - [Matching algorithm diagram](#matching-algorithm-diagram)
+ - [M2(x1,x2,x3,x4)](#m2x1x2x3x4)
+ - [M3 (x1,x2,...x8)](#m3-x1x2x8)
+ - [Proof Fragments specify partitions](#proof-fragments-specify-partitions)
+ - [M4 (x1...x16)](#m4-x1x16)
+ - [M5 (x1...x32)](#m5-x1x32)
+ - [Collation functions](#collation-functions)
+ - [get partition bits](#get-partition-bits)
+- [Proof Fragments and Feistel Cipher](#proof-fragments-and-feistel-cipher)
+- [Compressing Tables](#compressing-tables)
+ - [Benes Compression](#benes-compression)
+ - [Optimizing Fanouts](#optimizing-fanouts)
+- [Chaining Specification](#chaining-specification)
+ - [T4 Input Filtering](#t4-input-filtering)
+ - [Paths between Partitions](#paths-between-partitions)
+ - [Challenge to Chains](#challenge-to-chains)
+
+
+# Definitions
+
+- `x || y` denotes zero-padded concatenation: for the implied domain y ∈ [0, 2^z), it equals `(x << z) + y`
+- `<<`, `>>` denote the bitwise left shift and right shift operators
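The concatenation operator can be made concrete as (a minimal sketch of the definition above; `concat` is an illustrative name):

```python
def concat(x: int, y: int, z: int) -> int:
    """x || y with y drawn from [0, 2^z): zero-padded concatenation."""
    assert 0 <= y < (1 << z)
    return (x << z) + y

# 0b101 || 0b0011 (z = 4) -> 0b1010011
assert concat(0b101, 0b0011, 4) == 0b1010011
```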
+
+# Proof Format
+
+For a given 32-byte plot id, a space parameter k ∈ {28, 30, 32}, a sub-k parameter ∈ {20, 21, 22}, and a 32-byte challenge chosen by a verifier, a proof of space is a chain of 16 proofs of 32 x-values each, for a total of 512 x-values (512 × k bits), where:
+
+    Proof 1  = x1, x2, ... x32     (with each x in [0..<2^k])
+    Proof 2  = x33, x34, ... x64
+    ...
+    Proof 16 = x481, x482, ... x512
+
+**Initialization**
+ \(c_0\) = 32-bit challenge provided by the verifier.
+
+**Chain Growth**
+For \(i = 1,2,\dots,16\), the \(i\)-th proof consists of the x-values \(x_{32(i-1)+1},\,x_{32(i-1)+2},\,\dots,\,x_{32i}\), and
+\[
+  c_i \;=\; \mathrm{ChainLink}\bigl(c_{i-1},\,x_{32(i-1)+1},\dots,x_{32i}\bigr),
+\]
+
+which internally runs the cascade of \(M_5\to M_4\to M_3\to M_2\to M_1\), Feistel, and collation functions \(C_1\)–\(C_5\).
+
+**Threshold Check**
+ \[
+ \text{if } c_i \;\ge\; T_i
+ \quad\text{then the proof fails,}
+ \]
+ where
+ \[
+ T_1 = \text{first\_threshold},
+ \quad
+ T_{2\ldots16} = \text{next\_threshold}.
+ \]
+
+**Validation**
+ The proof is **valid** if and only if
+ \[
+ c_i < T_i
+ \quad\text{for all }i=1,\dots,16.
+ \]
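The chain growth, threshold check, and validation steps above can be sketched in Python (a sketch only: `ChainLink` is the cascade described in this specification and is left abstract here as a callable; threshold values are assumed inputs):

```python
from typing import Callable, Sequence

def validate_proof(challenge: int,
                   xs: Sequence[int],
                   chain_link: Callable[[int, Sequence[int]], int],
                   first_threshold: int,
                   next_threshold: int) -> bool:
    """Valid iff every chain value c_i falls below its threshold T_i."""
    assert len(xs) == 512          # 16 proofs of 32 x-values each
    c = challenge                  # c_0
    for i in range(16):
        c = chain_link(c, xs[32 * i: 32 * (i + 1)])
        threshold = first_threshold if i == 0 else next_threshold
        if c >= threshold:         # threshold check fails -> proof fails
            return False
    return True
```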
+
+
+
+
+
+# New Matching Algorithm
+
+The matching algorithm for all tables has changed, and now forms the basis of security. It is a memory-hard algorithm that can be tuned to take more or less time by adjusting the number of match indexes tested to determine whether two entries match.
+
+The benefit of this algorithm is that we can set the difficulty very high so that plotting will take longer and compression attacks will be more expensive, yet it incurs negligible cost when validating a proof. Since validation is “free”, we can tune this to be as difficult as we need, without adding extra compute expense to the network.
+
+## Matching Bits
+
+The matching algorithm takes an additional index number which is used to show that a match works. The left value and the index result in a bucket, which must match a bucket that the right value hashes to; the matching combination of the two must also pass an additional filter. Index bits are included in proofs to make verification fast. To keep memory requirements down, entries are sorted into sections, and all buckets to which a left value hashes land in the same section.
+
+## match_info
+
+Matching functions operate over section bits that define particular sections, match_key bits that define how many iterations are used to find matches, and match_target bits that define collisions for matching.
+
+ match_info comprises k bits that define [section | match_key | match_target ]
+
+```mermaid
+flowchart TD
+ A[hash#40;x#41;]
+ A --> B[Match Info: k bits]
+ B --> C[Section Bits]
+ B --> D[Match Key Bits]
+ B --> E[Match Target Bits]
+```
+
+The number of bits used in match info for each table is:
+
+ t1,t2,t3 : num section bits = (k-26)
+ t4,t5 : num section bits = 2
+ t1 : num match key bits = 4
+ t2,t3,t4,t5 : num match key bits = 2
+ t1,t2,t3,t4,t5 : match target bits are remaining k - num_section_bits - num_match_key_bits
+
+Note that section bits and match key bits can be tuned and may be subject to change.
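Splitting a k-bit match_info value into its fields can be sketched as (a sketch only; bit counts per table are taken from the listing above, and `split_match_info` is an illustrative helper):

```python
def split_match_info(match_info: int, k: int,
                     num_section_bits: int, num_match_key_bits: int):
    """Decompose k bits of match_info into [section | match_key | match_target]."""
    num_target_bits = k - num_section_bits - num_match_key_bits
    match_target = match_info & ((1 << num_target_bits) - 1)
    match_key = (match_info >> num_target_bits) & ((1 << num_match_key_bits) - 1)
    section = match_info >> (num_target_bits + num_match_key_bits)
    return section, match_key, match_target

# t1 at k=28: (k-26)=2 section bits, 4 match key bits, 22 match target bits.
section, key, target = split_match_info((0b10 << 26) | (0b0101 << 22) | 7,
                                        k=28, num_section_bits=2, num_match_key_bits=4)
assert (section, key, target) == (0b10, 0b0101, 7)
```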
+
+## sub_k and partition bits
+
+T4 and T5 are partitioned tables, where each partition holds a unique range of 2^sub_k values within [0, 2^k), and there are 2^partition_bits total partitions, where partition_bits = k - sub_k.
+
+ The value for sub_k depends on k size. For:
+ k = 28 : sub_k is 16, partition bits is 12
+ k = 30 : sub_k is 17, partition bits is 13
+ k = 32 : sub_k is 18, partition bits is 14
+
+## g function
+
+The g function is applied to the initial x-values in the range [0, 2^k) and starts the generation of tables.
+
+ g(x) = chacha(plot_id || x) -> match_info k bit result
+
+
+## matching_target hash
+
+The matching target forms the basis of randomly pairing a left side entry with a right side entry's `matchInfo` bits. It takes the left side entry's metadata (meta_l) and the right side entry's key, and checks whether their hash matches the right side's target bits from `matchInfo`.
+
+ matching_target(meta_l, key_r) = match_target_bits of blake3(plot_id, meta_l, key_r)
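A sketch of the target check in Python (an illustration only: the standard library's `hashlib.blake2b` stands in for blake3, which the specification actually uses, and the byte widths chosen for serializing `meta_l` and `key_r` are assumptions):

```python
import hashlib

def matching_target(plot_id: bytes, meta_l: int, key_r: int,
                    num_target_bits: int) -> int:
    """Lower num_target_bits of the hash over (plot_id, meta_l, key_r).
    blake2b stands in for blake3 here; byte widths are illustrative."""
    h = hashlib.blake2b(plot_id + meta_l.to_bytes(8, "big") + key_r.to_bytes(1, "big"),
                        digest_size=8)
    return int.from_bytes(h.digest(), "big") & ((1 << num_target_bits) - 1)

def targets_match(plot_id, meta_l, key_r, match_target_r, num_target_bits) -> bool:
    return matching_target(plot_id, meta_l, key_r, num_target_bits) == match_target_r
```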
+
+# matching section
+
+A match must occur between two different sections, with the "left" side match being in the lower half of the sections, and the "right" side match being in a corresponding section in the upper half of the sections.
+
+```python
+def matching_section(self, table_id, section):
+    num_section_bits = self.params.get_num_section_bits(table_id)
+    num_sections = self.params.get_num_sections(table_id)
+    rotated_left = (section << 1) | (section >> (num_section_bits - 1))
+    rotated_left_plus_1 = (rotated_left + 1) & (num_sections - 1)
+    section = (rotated_left_plus_1 >> 1) | (rotated_left_plus_1 << (num_section_bits - 1))
+    return section & (num_sections - 1)
+
+# A section always matches 2 other sections from the opposite lower/higher range;
+# use this to find the other section from a given one.
+def inverse_matching_section(self, table_id, section):
+    num_section_bits = self.params.get_num_section_bits(table_id)
+    num_sections = self.params.get_num_sections(table_id)
+    # Invert the right rotation by rotating left by 1.
+    rotated_left = ((section << 1) | (section >> (num_section_bits - 1))) & (num_sections - 1)
+    # Invert the addition (subtract 1 modulo num_sections).
+    rotated_left_minus_1 = (rotated_left - 1) & (num_sections - 1)
+    # Invert the left rotation by rotating right by 1.
+    section_l = ((rotated_left_minus_1 >> 1) | (rotated_left_minus_1 << (num_section_bits - 1))) & (num_sections - 1)
+    return section_l
+
+def get_matching_sections(self, table_id, section):
+    return self.matching_section(table_id, section), self.inverse_matching_section(table_id, section)
+```
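The two rotations are exact inverses, which can be verified exhaustively for small section counts (a standalone sketch of the functions above, with `num_section_bits` passed directly instead of being looked up via `self.params`):

```python
def matching_section(section: int, num_section_bits: int) -> int:
    num_sections = 1 << num_section_bits
    rotated_left = (section << 1) | (section >> (num_section_bits - 1))
    rotated_left_plus_1 = (rotated_left + 1) & (num_sections - 1)
    return (((rotated_left_plus_1 >> 1)
             | (rotated_left_plus_1 << (num_section_bits - 1))) & (num_sections - 1))

def inverse_matching_section(section: int, num_section_bits: int) -> int:
    num_sections = 1 << num_section_bits
    rotated_left = ((section << 1) | (section >> (num_section_bits - 1))) & (num_sections - 1)
    rotated_left_minus_1 = (rotated_left - 1) & (num_sections - 1)
    return (((rotated_left_minus_1 >> 1)
             | (rotated_left_minus_1 << (num_section_bits - 1))) & (num_sections - 1))

# Every section round-trips through the forward and inverse maps.
for s in range(1 << 4):
    assert inverse_matching_section(matching_section(s, 4), 4) == s
```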
+
+## M1(x1,x2)
+
+x1 and x2 match iff:
+
+ section_1, _, _ = g(x1)
+ section_2, match_key_2, match_target_2 = g(x2)
+
+ where:
+
+ matching_section(section_1, section_2) must be true
+
+ and
+
+ matching_target(x1, match_key_2) == match_target_2
+
+ and
+
+ match_filter_16(x1 & 15, x2 & 15) is true
+
+
+
+### Matching algorithm diagram
+```mermaid
+flowchart TD
+ A[M1#40;x1, x2#41;]
+ A --> B[g#40;x1#41;:
section_1,
_, _]
+ A --> C[g#40;x2#41;:
section_2,
match_key_2,
match_target_2]
+ B --> D[matching_section
#40;section_1, section_2#41;?]
+ C --> D
+ D -- Yes --> E[matching_target
#40;x1, match_key_2#41;
== match_target_2?]
+ D -- No --> F[No Match]
+ E -- Yes --> G[match_filter_16
#40;x1, x2#41;
true?]
+ E -- No --> F
+ G -- Yes --> H[Match]
+ G -- No --> F
+```
+
+
+
+
+## M2(x1,x2,x3,x4)
+
+x1,x2,x3,x4 match iff:
+
+ M1(x1,x2) and M1(x3,x4)
+
+ match_info_l, meta_l = C1(x1,x2)
+ match_info_r, meta_r = C1(x3,x4)
+
+ section_L, _, _ = match_info_l where match_key_bits is 2
+ section_R, match_key_R, match_target_R = match_info_r where match_key_bits is 2
+
+ where:
+
+ matching_section(section_L, section_R) must be true
+
+ and
+
+ matching_target(meta_l, match_key_R) == match_target_R
+
+ and
+
+ match_filter_4(meta_l & 15, meta_r & 15) is true
+
+## M3 (x1,x2,...x8)
+
+x1,x2,x3,x4,x5,x6,x7,x8 match iff:
+
+ M2(x1,x2,x3,x4) and M2(x5,x6,x7,x8)
+
+ match_info_l, meta_l = C2(C1(x1,x2), C1(x3,x4))
+ match_info_r, meta_r = C2(C1(x5,x6), C1(x7,x8))
+
+ section_L, _, _ = match_info_l where match_key_bits is 2
+ section_R, match_key_R, match_target_R = match_info_r where match_key_bits is 2
+
+ where:
+
+ matching_section(section_L, section_R) must be true
+
+ and
+
+ matching_target(meta_l, match_key_R) == match_target_R
+
+ and
+
+ match_filter_4(meta_l & 15, meta_r & 15) is true
+
+### Proof Fragments specify partitions
+
+- Proof Fragment is 2k bits
+- partition_bits = (k-sub_k) bits
+- first partition_bits of an encryptedXs specify the lateral partition
+- next 2 bits specify order bits
+- lower partition_bits specify cross-over partition (R)
+- if the top order bit is 0:
+  - lateral goes into the upper partition, cross-over goes into the lower partition
+- otherwise:
+  - lateral goes into the lower partition, cross-over goes into the upper partition
+
+
+| **Segment** | **Bit Length** | **Description** | **Placement Based on Top Order Bit** |
+|-----------------------------|----------------------------------|------------------------------------------------------------------|--------------------------------------------------------|
+| **Lateral Partition (L)** | \( \text{partition\_bits} = k - \text{sub\_k} \) bits | The first \( \text{partition\_bits} \) of the encryptedXs specify the lateral partition. | If top order bit = 0: goes to **Upper Partition**; if top order bit = 1: goes to **Lower Partition**. |
+| **Order Bits** | 2 bits | The next 2 bits that determine the ordering. | Top order bit determines the placement of lateral and cross-over partitions. |
+| **Cross-over Partition (R)**| \( \text{partition\_bits} = k - \text{sub\_k} \) bits | The lower \( \text{partition\_bits} \) of the encryptedXs specify the cross-over partition (R). | If top order bit = 0: goes to **Lower Partition**; if top order bit = 1: goes to **Upper Partition**. |
+
+
+## M4 (x1...x16)
+
+x1,x2,x3,x4,x5,x6,x7,x8,...x16 match iff:
+
+
+ # this is commented out as it was an earlier matching function (may revert)
+
+ M3(x1...x8) and M3(x9...x16)
+
+ proof_fragment_l = encrypt(x1 >> k/2 || x3 >> k/2 || x5 >> k/2 || x7 >> k/2)
+ proof_fragment_r = encrypt(x9 >> k/2 || x11 >> k/2 || x13 >> k/2 || x15 >> k/2)
+
+ num_partition_bits =
+ if k = 28: 8
+ if k = 30: 9
+ if k = 32: 10
+
+ top_order_bit_l = proof_fragment_l >> (k - num_partition_bits - 1) & 1
+ top_order_bit_r = proof_fragment_r >> (k - num_partition_bits - 1) & 1
+
+ partition_l = get_l_partition_bits(proof_fragment_l)
+ partition_r = get_r_partition_bits(proof_fragment_r)
+
+ match_info_l, meta_l = C3(top_order_bit_l, C2(C1(x1,x2), C1(x3,x4)), C2(C1(x5,x6), C1(x7,x8)))
+ match_info_r, meta_r = C3(top_order_bit_r, C2(C1(x9,x10), C1(x11,x12)), C2(C1(x13,x14), C1(x15,x16)))
+
+ section_L, _, _ = match_info_l where match_key_bits is 2
+ section_R, match_key_R, match_target_R = match_info_r where match_key_bits is 2
+
+ where:
+ (note top_order_bit_l/r are redundant tests)
+ top_order_bit_l == 0
+ top_order_bit_r == 1
+ partition_l == partition_r
+
+ and
+
+ matching_section(section_L, section_R) must be true
+
+ and
+
+ matching_target(meta_l, match_key_R) == match_target_R
+
+ and
+
+ match_filter_4(meta_l & 15, meta_r & 15) is true
+
+
+## M5 (x1...x32)
+
+x1,x2,x3,x4,x5,x6,x7,x8,...x32 match iff:
+
+ M4(x1...x16) and M3(x17...x32)
+
+ proof_fragment_ll = encrypt(x1 >> k/2 || x3 >> k/2 || x5 >> k/2 || x7 >> k/2)
+ proof_fragment_rl = encrypt(x16 >> k/2 || x17 >> k/2 || x18 >> k/2 || x19 >> k/2)
+
+ num_partition_bits =
+ if k = 28: 12
+ if k = 30: 13
+ if k = 32: 14
+
+ second_order_bit_ll = proof_fragment_ll >> (k - num_partition_bits - 2) & 1
+ second_order_bit_rl = proof_fragment_rl >> (k - num_partition_bits - 2) & 1
+
+ match_info_l, meta_l = C4(second_order_bit_ll,
+ C3(0, C2(C1(x1,x2), C1(x3,x4)), C2(C1(x5,x6), C1(x7,x8))),
+ C3(1, C2(C1(x9,x10), C1(x11,x12)), C2(C1(x13,x14), C1(x15,x16))
+ )
+ match_info_r, meta_r = C4(second_order_bit_rl,
+ C3(0, C2(C1(x17,x18), C1(x19,x20)), C2(C1(x21,x22), C1(x23,x24))),
+ C3(1, C2(C1(x25,x26), C1(x27,x28)), C2(C1(x29,x30), C1(x31,x32))
+ )
+
+ section_L, _, _ = match_info_l where match_key_bits is 2
+ section_R, match_key_R, match_target_R = match_info_r where match_key_bits is 2
+
+ where:
+ (note: these are redundant tests)
+ second_order_bit_ll == 0
+ second_order_bit_rl == 1
+
+ and
+
+ matching_section(section_L, section_R) must be true
+
+ and
+
+ matching_target(meta_l, match_key_R) == match_target_R
+
+ and
+
+ C5(meta_l, meta_r) < 855570511
+
+Note that since T5 is the final table, the collation function C5 returns 32 filter bits. The chance of passing is 0.1992030328, or 855570511 out of 2^32. This results in T3, T4, and T5 having approximately the same number of entries after pruning.
+
+## Collation functions
+
+    C1(x1,x2) = blake3(x1 || x2) -> k bits as match info
+ (x1 || x2) 2k bits as meta
+
+ C2(l,r) = blake3(l || r) -> k bits match_info, 2k bits meta
+
+ C3(order_bit, l,r) = blake3(l || r) ->
+ order_bit || (sub_k - 1) bits as match_info
+ 2k bits as meta
+
+ C4(order_bit, l, r) = blake3(l || r) ->
+ order_bit || (sub_k - 1) bits as match_info
+ 2k bits as meta
+
+ C5(l, r) = blake3(l || r) -> 32 bit filter value
+
+
+## get partition bits
+
+A Proof Fragment defines which partitions of T4 and T5 it moves into.
+
+ num_partition_bits = k - sub_k
+
+ Proof Fragment 2k bits: num_partition_bits partition_L || 2 order bits || (k - 2 - num_partition_bits * 2) bits || num_partition_bits partition_R
+
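Extracting the partition fields from a Proof Fragment can be sketched as (a sketch under the assumption that the middle filler bits occupy the remainder of the 2k-bit fragment; `get_partition_fields` is an illustrative helper):

```python
def get_partition_fields(proof_fragment: int, k: int, sub_k: int):
    """Split a 2k-bit Proof Fragment into (partition_L, order_bits, partition_R)."""
    p = k - sub_k                                   # num_partition_bits
    partition_r = proof_fragment & ((1 << p) - 1)   # lowest partition_bits
    order_bits = (proof_fragment >> (2 * k - p - 2)) & 0b11
    partition_l = proof_fragment >> (2 * k - p)     # highest partition_bits
    return partition_l, order_bits, partition_r

# k=28, sub_k=20 -> 8 partition bits on each end of a 56-bit fragment.
frag = (0xAB << 48) | (0b10 << 46) | 0x34
assert get_partition_fields(frag, 28, 20) == (0xAB, 0b10, 0x34)
```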
+
+# Proof Fragments and Feistel Cipher
+
+The ciphertext is formed from 2k bits: the top k/2 bits of each of the four odd-numbered x-values. It is encrypted with the plot id bytes and can be decrypted with the same plot id bytes. Refer to the reference implementation.
+
+
+# Compressing Tables
+
+## Benes Compression
+
+Will be added later.
+
+## Optimizing Fanouts
+
+Benes will include two fanouts that represent the L and R side pointers from T4 back to T3. Either side could include a fanout of 0, meaning the entry was not used for that side (pruned) but was kept for the other side pointer. Note that both sides cannot have a fanout of 0, as otherwise that entry would have been pruned and removed from the plot. We use this to compress additional bits as follows:
+
+| L fanout | R fanout | notes | encoding L fanout | encoding R fanout | decompression notes |
+|----------|----------|-------|-------------------|-------------------|---------------------|
+| 0 | 0 | No occurrences due to pruning | | | |
+| 1..n | 0..n | L has fanout of 1 or more, R is zero or more, no change to encoding | as is | as is | if L fanout > 0, use values as is |
+| 0 | 1..n | L has fanout of 0, R must be 1 or more. Use -1 in R encoding | 0 | (n-1) | if L is 0, R is value + 1 |
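The encoding in the table above can be sketched as an encode/decode pair (a minimal sketch; function names are illustrative):

```python
def encode_fanouts(l_fanout: int, r_fanout: int) -> tuple:
    """Encode (L, R) fanouts; (0, 0) never occurs because such entries are pruned."""
    assert (l_fanout, r_fanout) != (0, 0)
    if l_fanout == 0:
        return (0, r_fanout - 1)   # R >= 1 is guaranteed, so store R - 1
    return (l_fanout, r_fanout)    # unchanged when L > 0

def decode_fanouts(enc_l: int, enc_r: int) -> tuple:
    if enc_l == 0:
        return (0, enc_r + 1)      # undo the -1 applied when L was 0
    return (enc_l, enc_r)
```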
+
+
+# Chaining Specification
+
+>[!NOTE]
+>This section is still a work in progress, pending the Prover reference implementation.
+
+
+In the following section we describe how to chain Quality Links to form the Quality Chain. A Quality Link comprises four Proof Fragments in order, denoted `LL`, `LR`, `RL`, `RR`. Consider these the leaves of a proof tree.
+
+### T4 Input Filtering
+
+Note that during construction, since each T3 entry is used by both partition and partition', we use 2^(sub_k+1) inputs into each partition of T4 and a matching function to reduce matches by 50%, to produce 2^sub_k outputs.
+
+This is done to increase the number of unique T3 entries that are matched in a T4 partition, and it significantly helps against T4 partition attacks. Without this measure, an attacker could group fewer inputs targeted at recomputing the two partitions used in the chaining process and potentially drop T4 and T5 entirely. The list of Proof Fragments that receive R pointers used for the initial challenge lookup benefits from being sorted, so an attacker would also have to use more of these entries to do a lookup, adding data to their attack. See the security section for a deeper analysis.
+
+
+## Paths between Partitions
+
+
+
+*Figure: partitions organized into logical stripes on disk.*
+
+We are able to organize partitions as shown in the Figure above, such that all pointers in T5, and all lateral (L) pointers from T4 through to T3 are contained in the same logical disk sector. The R pointers from T4 bridge to other disk stripes. Following this representation, we can now look at how data flows across two partitions that will be used in chaining.
+
+*Figure: Back pointer connections between any two partitions A and B. Each box in T3 represents a Proof Fragment. The left boxes in T4 and T5 are L back pointers, and the right boxes are R pointers respectively. Note that all L pointers from T4 to T3 point back to the same partition in T3. All R pointers from T4 to T3 point from a lower partition to any of the upper partitions, or from an upper partition to any of the lower partitions. The number of R pointers from T4 partition A to partition B will be about ~2^sub_k / 2^(k - sub_k).*
+
+
+
+*Figure: LR/RL paths. Contains Proof Fragments LL, RL, and LR. The RR Proof Fragment will be in another partition. Approximately 2^sub_k / 2^(k-sub_k) paths.*
+
+
+*Figure: LL/RR paths. Contains Proof Fragments LL, RL, and RR. The LR Proof Fragment will be in another partition. Approximately 2^sub_k / 2^(k-sub_k) paths.*
+
+Note that all the above paths pass through the parent node. This is critical since an attacker could drop bits on back pointers in the parent node, so every path is made to pass through all points where bit dropping could occur.
+
+Currently we expect the following k and sub_k sizes, subject to further security analysis.
+
+| k size | sub_k size | # partitions | Expected # possible paths (2^sub_k / 2^(k-sub_k)) |
+|--------|------------|--------------|---------------------------------------------------|
+| 28     | 20         | 256 * 2      | 4096 * 2                                          |
+| 30     | 21         | 512 * 2      | 4096 * 2                                          |
+| 32     | 22         | 1024 * 2     | 4096 * 2                                          |
+
+
+## Challenge to Chains
+
+*Figure: In this example (1) a challenge finds a passing Proof Fragment in partition 1. In (2) the Proof Fragment specifies an R' partition, in this case partition 2', and we find the pointer in T4 that points back to our selected Proof Fragment. In (3) we find the T5 parent node so that in (4) we traverse down to find the other child node for the Proof Fragments that are part of the Quality Link. Thus, we have two partitions between which to use our chain of Quality Links. The R path from L2',R is restricted to just those in the origin partition 1. The R' path from L1 is restricted to partition 2' that was defined in the passing first Proof Fragment. Note that if we had additional Proof Fragments that passed the first stage of the filter, this would result in additional Quality Chains but only 1 additional partition per additional Proof Fragment as the origin partition remains the same.*
+
+
+*Figure: Full set of possible Quality Link patterns.*
+
+
+```mermaid
+flowchart TD
+
+ challenge[Challenge]
+
+ subgraph SELECTEDPARTITIONS[Selected Partitions]
+ partitionA[Partition A]
+ partitionB[Partition B]
+ partitionC[Partition C]
+ end
+
+ %% subgraph QUALITYCHAIN[Quality Chain, 8 Links]
+
+ subgraph QUALITYLINK1[Proof Quality Link 1]
+ pfLL[Proof Fragment
#40;LL#41;
in partition B]
+ pfRL[Proof Fragment
#40;LR#41;
in partition B]
+ pfLR[Proof Fragment
#40;RL or RR#41;
in partition A]
+ pfRR[Proof Fragment
#40;RR or RL#41;
in partition C]
+ end
+
+ subgraph ALLCANDIDATES[Build All Candidate Paths]
+ path1[LL/RL/LR paths]
+ path2[LL/RL/RR paths]
+ end
+
+ SELECTEDPARTITIONS -- "build" --> ALLCANDIDATES
+ ALLCANDIDATES --> hash1
+ ALLCANDIDATES --> hash2
+
+ hash1[Challenge 2:
hash
#40;challenge,LL,LR,RL,RR#41;]
+
+ challenge -- "selection filter in partition A
~ 1/Gn selections" --> pfLR
+ challenge -- "specifies" --> partitionA
+ pfLR -- "specifies" --> partitionB
+ pfRR -- "specifies" --> partitionC
+ pfLR -- "path" --> pfLL
+ pfLR -- "path" --> pfRL
+ pfLR -- "path" --> pfRR
+ pfLL --> hash1
+ pfRL --> hash1
+ pfLR --> hash1
+ challenge --> hash1
+
+ subgraph QUALITYLINK2[Quality Link]
+ pfRL2[Proof Fragment
LL + LR + #40;RL or RR#41;]
+ end
+
+ hash1 -- "filter all candidate paths
~3 selections
" --> QUALITYLINK2
+ pfRL2 --> hash2
+
+ hash2[Challenge N:
hash
#40;challenge N-1,LL,LR,RL#41;
or
#40;challenge N-1,LL,LR,RR#41;]
+
+ hash2 -- "filter all candidate paths
~3 selections
" --> QUALITYLINK2
+
+
+ %% end
+```
diff --git a/assets/chip-0048/fragment-plotid-cost.png b/assets/chip-0048/fragment-plotid-cost.png
new file mode 100644
index 00000000..e099e785
Binary files /dev/null and b/assets/chip-0048/fragment-plotid-cost.png differ
diff --git a/assets/chip-0048/fragment-plotid-effort.png b/assets/chip-0048/fragment-plotid-effort.png
new file mode 100644
index 00000000..177df2a6
Binary files /dev/null and b/assets/chip-0048/fragment-plotid-effort.png differ
diff --git a/assets/chip-0048/fragment-scan-filter-resistance-gain.png b/assets/chip-0048/fragment-scan-filter-resistance-gain.png
new file mode 100644
index 00000000..d8470282
Binary files /dev/null and b/assets/chip-0048/fragment-scan-filter-resistance-gain.png differ
diff --git a/assets/chip-0048/partitions-challenge-to-chain.png b/assets/chip-0048/partitions-challenge-to-chain.png
new file mode 100644
index 00000000..c6f4d97b
Binary files /dev/null and b/assets/chip-0048/partitions-challenge-to-chain.png differ
diff --git a/assets/chip-0048/partitions-data-relationship.png b/assets/chip-0048/partitions-data-relationship.png
new file mode 100644
index 00000000..95ca6e2b
Binary files /dev/null and b/assets/chip-0048/partitions-data-relationship.png differ
diff --git a/assets/chip-0048/partitions-disk-stripes.png b/assets/chip-0048/partitions-disk-stripes.png
new file mode 100644
index 00000000..84638919
Binary files /dev/null and b/assets/chip-0048/partitions-disk-stripes.png differ
diff --git a/assets/chip-0048/partitions-mappings-example1.png b/assets/chip-0048/partitions-mappings-example1.png
new file mode 100644
index 00000000..cf875ced
Binary files /dev/null and b/assets/chip-0048/partitions-mappings-example1.png differ
diff --git a/assets/chip-0048/partitions-mappings-example2.png b/assets/chip-0048/partitions-mappings-example2.png
new file mode 100644
index 00000000..d9739867
Binary files /dev/null and b/assets/chip-0048/partitions-mappings-example2.png differ
diff --git a/assets/chip-0048/partitions-mappings-example3.png b/assets/chip-0048/partitions-mappings-example3.png
new file mode 100644
index 00000000..ea58f7d3
Binary files /dev/null and b/assets/chip-0048/partitions-mappings-example3.png differ
diff --git a/assets/chip-0048/partitions-mappings.png b/assets/chip-0048/partitions-mappings.png
new file mode 100644
index 00000000..95e82f35
Binary files /dev/null and b/assets/chip-0048/partitions-mappings.png differ
diff --git a/assets/chip-0048/partitions-path-ll-rl.png b/assets/chip-0048/partitions-path-ll-rl.png
new file mode 100644
index 00000000..f389e213
Binary files /dev/null and b/assets/chip-0048/partitions-path-ll-rl.png differ
diff --git a/assets/chip-0048/partitions-path-ll-rr.png b/assets/chip-0048/partitions-path-ll-rr.png
new file mode 100644
index 00000000..1819b9f5
Binary files /dev/null and b/assets/chip-0048/partitions-path-ll-rr.png differ
diff --git a/assets/chip-0048/partitions-path-lr-rl.png b/assets/chip-0048/partitions-path-lr-rl.png
new file mode 100644
index 00000000..ef8f4af4
Binary files /dev/null and b/assets/chip-0048/partitions-path-lr-rl.png differ
diff --git a/assets/chip-0048/partitions-path-rl-ll.png b/assets/chip-0048/partitions-path-rl-ll.png
new file mode 100644
index 00000000..3aaee3a3
Binary files /dev/null and b/assets/chip-0048/partitions-path-rl-ll.png differ
diff --git a/assets/chip-0048/pos-challenge-to-proofs.png b/assets/chip-0048/pos-challenge-to-proofs.png
new file mode 100644
index 00000000..087a4dc3
Binary files /dev/null and b/assets/chip-0048/pos-challenge-to-proofs.png differ
diff --git a/assets/chip-0048/proof-fragments-solve-times.png b/assets/chip-0048/proof-fragments-solve-times.png
new file mode 100644
index 00000000..0f0725aa
Binary files /dev/null and b/assets/chip-0048/proof-fragments-solve-times.png differ
diff --git a/assets/chip-0048/proof-of-space-proof-fragments.png b/assets/chip-0048/proof-of-space-proof-fragments.png
new file mode 100644
index 00000000..3c5a2e7c
Binary files /dev/null and b/assets/chip-0048/proof-of-space-proof-fragments.png differ
diff --git a/assets/chip-0048/proof-of-space-visual-representation.png b/assets/chip-0048/proof-of-space-visual-representation.png
new file mode 100644
index 00000000..703fab2e
Binary files /dev/null and b/assets/chip-0048/proof-of-space-visual-representation.png differ
diff --git a/assets/chip-0048/security-chaining-bit-dropping.png b/assets/chip-0048/security-chaining-bit-dropping.png
new file mode 100644
index 00000000..25855778
Binary files /dev/null and b/assets/chip-0048/security-chaining-bit-dropping.png differ
diff --git a/assets/chip-0048/security-partition-grinding-attack.png b/assets/chip-0048/security-partition-grinding-attack.png
new file mode 100644
index 00000000..5b01a5f4
Binary files /dev/null and b/assets/chip-0048/security-partition-grinding-attack.png differ
diff --git a/assets/chip-0048/security-partition-grinding-collect-fragments.png b/assets/chip-0048/security-partition-grinding-collect-fragments.png
new file mode 100644
index 00000000..459dcafb
Binary files /dev/null and b/assets/chip-0048/security-partition-grinding-collect-fragments.png differ
diff --git a/assets/chip-0048/security-partition-grinding-collect.png b/assets/chip-0048/security-partition-grinding-collect.png
new file mode 100644
index 00000000..4ca97f7b
Binary files /dev/null and b/assets/chip-0048/security-partition-grinding-collect.png differ
diff --git a/assets/chip-0048/security-partition-grinding-rebuild-fragments.png b/assets/chip-0048/security-partition-grinding-rebuild-fragments.png
new file mode 100644
index 00000000..7d300fe5
Binary files /dev/null and b/assets/chip-0048/security-partition-grinding-rebuild-fragments.png differ
diff --git a/assets/chip-0048/security-partition-grinding-rebuild.png b/assets/chip-0048/security-partition-grinding-rebuild.png
new file mode 100644
index 00000000..44efef6c
Binary files /dev/null and b/assets/chip-0048/security-partition-grinding-rebuild.png differ