diff --git a/docs/page-allocation-and-random-io.md b/docs/page-allocation-and-random-io.md new file mode 100644 index 0000000..3ed9c23 --- /dev/null +++ b/docs/page-allocation-and-random-io.md @@ -0,0 +1,268 @@ +# Page Allocation & Random I/O Design + +This document explains how RamDrive allocates fixed-size 64KB pages and supports random read/write at arbitrary byte offsets. + +## Overview + +RamDrive stores all file data in **native memory** (outside the .NET GC heap) using a **page pool** architecture. Each file's content is backed by a **page table** — an array of pointers to fixed-size pages — enabling O(1) random access to any byte offset. + +``` +User I/O request (read/write at offset + length) + │ + │ offset → pageIndex = offset / pageSize + │ pageOffset = offset % pageSize + ▼ +PagedFileContent (nint[] _pages — per-file page table) + │ + │ _pages[pageIndex] → native memory pointer + ▼ +PagePool (ConcurrentStack free list + NativeMemory) + │ + ▼ +OS Native Memory (NativeMemory.AllocZeroed) +``` + +## Page Pool (`PagePool.cs`) + +The `PagePool` manages a fixed-capacity pool of identically-sized memory pages. + +### Allocation Strategy + +``` +┌─────────────────────────────────────────────────────────┐ +│ PagePool │ +│ │ +│ Configuration: │ +│ pageSize = PageSizeKb × 1024 (default: 65536) │ +│ maxPages = CapacityMb × 1M / pageSize │ +│ │ +│ ┌───────────────────────┐ ┌───────────────────────┐ │ +│ │ ConcurrentStack │ │ ConcurrentStack │ │ +│ │ _freePages (LIFO) │ │ _allPages (cleanup) │ │ +│ │ ┌──┬──┬──┬──┐ │ │ tracks every alloc │ │ +│ │ │p5│p3│p1│..│ │ │ for Dispose() │ │ +│ │ └──┴──┴──┴──┘ │ └───────────────────────┘ │ +│ └───────────────────────┘ │ +│ │ +│ Counters: │ +│ _allocatedCount (total pages allocated from OS) │ +│ _rentedCount (pages currently in use) │ +└─────────────────────────────────────────────────────────┘ +``` + +### Two Allocation Modes + +1. 
**Lazy (default, `PreAllocate=false`)**: Pages are allocated from the OS on first demand via `NativeMemory.AllocZeroed`. The CAS loop in `AllocateNewPageIfUnderCapacity` ensures thread-safe capacity enforcement without locks. + +2. **Pre-allocate (`PreAllocate=true`)**: All pages are allocated at startup and pushed onto `_freePages`. Subsequent `Rent` calls only pop from the stack — no OS calls on the hot path. + +### Rent/Return Flow + +``` +Rent(): + 1. Try _freePages.TryPop() ← lock-free CAS, O(1) + 2. If empty → AllocateNewPageIfUnderCapacity() + └─ CAS loop on _allocatedCount ← ensures total ≤ maxPages + └─ NativeMemory.AllocZeroed() ← OS allocation (only first time) + 3. Return nint.Zero if capacity exhausted + +Return(page): + 1. NativeMemory.Clear(page) ← zero the page (security + correctness) + 2. _freePages.Push(page) ← back to free list + +RentBatch(buffer, count): + 1. _freePages.TryPopRange() ← single CAS for multiple pages + 2. Allocate remainder if needed + +ReturnBatch(pages, count): + 1. Clear all pages + 2. _freePages.PushRange() ← single CAS for multiple pages +``` + +### CAS-Based Capacity Enforcement + +Instead of using a lock, `AllocateNewPageIfUnderCapacity` uses a compare-and-swap loop: + +```csharp +long current = Volatile.Read(ref _allocatedCount); +while (current < _maxPages) +{ + long next = Interlocked.CompareExchange(ref _allocatedCount, current + 1, current); + if (next == current) // Won the race + { + return AllocateNativePage(); // Safe to allocate + } + current = next; // Lost the race, retry with updated value +} +return nint.Zero; // Capacity exhausted +``` + +This is lock-free and scales well under concurrent access. + +## Paged File Content (`PagedFileContent.cs`) + +Each file has a `PagedFileContent` object that maps byte offsets to pages. 
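The offset arithmetic at the heart of this mapping can be sketched in isolation. The following is a hypothetical standalone helper (not the actual `PagedFileContent` code) that splits an `(offset, length)` request into per-page chunks, the same walk the read and write paths perform against the page table:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: 64 KB matches the default page size.
const int PageSize = 64 * 1024;

// Decompose a request into (pageIndex, pageOffset, chunkSize) triples,
// advancing one page at a time until the requested length is covered.
static IEnumerable<(long PageIndex, int PageOffset, int ChunkSize)> Chunks(long offset, int length)
{
    while (length > 0)
    {
        long pageIndex = offset / PageSize;                       // which page
        int pageOffset = (int)(offset % PageSize);                // offset within that page
        int chunkSize = Math.Min(length, PageSize - pageOffset);  // bytes left in this page
        yield return (pageIndex, pageOffset, chunkSize);
        offset += chunkSize;
        length -= chunkSize;
    }
}

// A 100-byte request at offset 65500 crosses a page boundary:
foreach (var c in Chunks(65500, 100))
    Console.WriteLine(c); // (0, 65500, 36) then (1, 0, 64)
```

The same arithmetic underlies both reads and writes; only the direction of the copy differs.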
+### Page Table Structure + +``` +File: "data.bin" (150,000 bytes written) + +_length = 150000 +_pages[] (nint array — page table): + +Index: [0] [1] [2] +Pointer: 0x7F...A0 0x7F...B0 0x7F...C0 + │ │ │ + ▼ ▼ ▼ + ┌──────┐ ┌──────┐ ┌──────┐ + │64 KB │ │64 KB │ │64 KB │ + │page 0│ │page 1│ │page 2│ + └──────┘ └──────┘ └──────┘ + +Byte 0─65535 → page 0 +Byte 65536─131071 → page 1 +Byte 131072─149999 → page 2 (partial, rest is zero) +``` + +### Random Read at Arbitrary Offset + +To read `N` bytes starting at byte offset `O`: + +``` +pageIndex = O / pageSize ← which page +pageOffset = O % pageSize ← offset within that page +chunkSize = min(N, pageSize - pageOffset) ← bytes available in this page + +Copy chunkSize bytes from: + _pages[pageIndex] + pageOffset → destination buffer + +If chunkSize < N: + advance to next page, repeat +``` + +**Example**: Read 100 bytes at offset 65500 (page size = 65536): + +``` +Step 1: pageIndex=0, pageOffset=65500, chunkSize=min(100, 36)=36 + → copy 36 bytes from page 0, offset 65500 + +Step 2: pageIndex=1, pageOffset=0, chunkSize=min(64, 65536)=64 + → copy 64 bytes from page 1, offset 0 + +Total: 36 + 64 = 100 bytes ✓ +``` + +### Random Write — Three-Phase Protocol + +Write operations use a three-phase approach to minimize write-lock hold time: + +``` +Phase 1 — Read Lock: Scan page table + ┌─────────────────────────────────────────┐ + │ Identify which pages need allocation │ + │ (i.e., _pages[i] == nint.Zero) │ + │ Count = neededCount │ + └─────────────────────────────────────────┘ + │ + ▼ +Phase 2 — No Lock: Batch allocate from PagePool + ┌─────────────────────────────────────────┐ + │ pool.RentBatch(buffer, neededCount) │ + │ OS allocation happens here, │ + │ outside any file lock │ + └─────────────────────────────────────────┘ + │ + ▼ +Phase 3 — Write Lock: Assign pages + memcpy + ┌─────────────────────────────────────────┐ + │ For each chunk: │ + │ if _pages[idx] == Zero: │ + │ _pages[idx] = preAllocated[j++] │ + │ memcpy: source → 
page + offset │ + │ Update _length if extended │ + └─────────────────────────────────────────┘ +``` + +**Why three phases?** Phase 2 (the expensive part — OS memory allocation) runs without holding any lock. The write lock in phase 3 only does pointer assignments and `memcpy`, which are fast. + +### Sparse File Support + +Pages are allocated **on demand**. If you write to offset 1,000,000 without writing offsets 0–999,999, only the page(s) covering offset 1,000,000 are allocated: + +``` +_pages[]: [Zero] [Zero] ... [Zero] [0x7F..] [Zero] ... + │ │ │ │ + ▼ ▼ ▼ ▼ + (not (not (not (allocated — + allocated) allocated) allocated) data here) +``` + +Reading from an unallocated page returns zeroes — the same behavior as a sparse file on NTFS. + +### Truncation + +`SetLength(newLength)` when shrinking: +1. Zero the partial data in the last retained page (security) +2. Collect all pages beyond `newLength` +3. `ReturnBatch` them to the pool (zeroed on return) +4. Shrink the `_pages[]` array + +### Locking Model + +``` +Per-file ReaderWriterLockSlim: + + Read(): EnterReadLock ← concurrent reads don't block each other + Write(): Phase 1 = EnterReadLock (scan) + Phase 2 = no lock (allocate) + Phase 3 = EnterWriteLock (assign + copy) + SetLength(): EnterWriteLock + +Global _structureLock (in RamFileSystem): + CreateFile/CreateDirectory/Delete/Move + ← only structural operations, not I/O +``` + +## End-to-End: A Random Write Example + +Writing "Hello" at offset 70,000 in a new file: + +``` +1. DokanRamAdapter.WriteFile(offset=70000, data="Hello") + │ +2. node.Content.Write(70000, "Hello") + │ +3. Phase 1 (read lock): + │ pageIndex 1 (offset 65536-...) → need? yes (Zero) + │ neededCount = 1 + │ +4. Phase 2 (no lock): + │ pool.RentBatch([p1], 1) + │ ← allocates 1 × 64KB from native memory + │ +5. Phase 3 (write lock): + │ _pages[1] = p1 + │ + │ Chunk 1: pageIndex=1, pageOffset=4464, chunkSize=5 + │ memcpy "Hello" → p1 + 4464 + │ + │ _length = 70005 + │ +6. 
Done — 1 page allocated, 5 bytes written +``` + +The page-table index range `firstPage = offset / pageSize` through `lastPage = (offset + length - 1) / pageSize` determines which pages are touched. Here `70000 / 65536 = 1` and `70004 / 65536 = 1`, so only page 1 is allocated. The `_pages[]` array is resized to length 2 (to hold index 0 and 1), but `_pages[0]` remains `nint.Zero` (sparse) — no memory is consumed for it. + +## Performance Characteristics + +| Operation | Complexity | Lock Held | +|-----------|-----------|-----------| +| Random read (single page) | O(1) | Read lock (shared) | +| Random read (cross-page) | O(pages touched) | Read lock (shared) | +| Random write (pages pre-allocated) | O(pages touched) | Write lock (exclusive, memcpy only) | +| Random write (new pages needed) | O(pages touched) | Phase 1: read, Phase 2: none, Phase 3: write | +| Page rent from free list | O(1) | Lock-free (CAS) | +| Page batch rent | O(1) amortized | Lock-free (single CAS via TryPopRange) | +| SetLength (truncate) | O(freed pages) | Write lock | +| SetLength (extend, sparse) | O(1) | Write lock | diff --git a/tests/RamDrive.Core.Tests/PagePoolTests.cs b/tests/RamDrive.Core.Tests/PagePoolTests.cs new file mode 100644 index 0000000..65dfec6 --- /dev/null +++ b/tests/RamDrive.Core.Tests/PagePoolTests.cs @@ -0,0 +1,283 @@ +using FluentAssertions; +using Microsoft.Extensions.Logging.Abstractions; +using Microsoft.Extensions.Options; +using RamDrive.Core.Configuration; +using RamDrive.Core.Memory; +using Xunit; + +namespace RamDrive.Core.Tests; + +/// +/// Tests for <see cref="PagePool"/> — the fixed-size native page allocator. +/// Validates 64KB page allocation, batch operations, capacity enforcement, and disposal. 
+/// +public sealed class PagePoolTests : IDisposable +{ + private const int DefaultPageSizeKb = 64; + private const long DefaultCapacityMb = 1; // 1 MB = 16 pages of 64KB + + private readonly PagePool _pool; + + public PagePoolTests() + { + _pool = CreatePool(DefaultCapacityMb, DefaultPageSizeKb); + } + + public void Dispose() => _pool.Dispose(); + + // ==================== Page Size ==================== + + [Fact] + public void PageSize_Is64KBByDefault() + { + _pool.PageSize.Should().Be(64 * 1024, "default page size is 64KB"); + } + + [Fact] + public void MaxPages_CalculatedFromCapacityAndPageSize() + { + // 1 MB / 64KB = 16 pages + _pool.MaxPages.Should().Be(16); + } + + // ==================== Single Rent/Return ==================== + + [Fact] + public void Rent_ReturnsNonZeroPointer() + { + nint page = _pool.Rent(); + page.Should().NotBe(nint.Zero, "a rented page should have a valid native pointer"); + _pool.Return(page); + } + + [Fact] + public void Rent_IncrementsRentedCount() + { + _pool.RentedCount.Should().Be(0); + + nint page = _pool.Rent(); + _pool.RentedCount.Should().Be(1); + + _pool.Return(page); + _pool.RentedCount.Should().Be(0); + } + + [Fact] + public void Rent_LazyAllocates_OnlyWhenNeeded() + { + _pool.AllocatedCount.Should().Be(0, "no pages allocated before first rent"); + + nint page = _pool.Rent(); + _pool.AllocatedCount.Should().Be(1, "one page allocated on first rent"); + + _pool.Return(page); + _pool.AllocatedCount.Should().Be(1, "returning a page does not deallocate it"); + } + + [Fact] + public void Return_PutsPageBackOnFreeList() + { + nint page = _pool.Rent(); + _pool.FreeCount.Should().Be(0, "rented page is not free"); + + _pool.Return(page); + _pool.FreeCount.Should().Be(1, "returned page is back on free list"); + } + + [Fact] + public unsafe void Rent_ReturnsZeroedMemory() + { + nint page = _pool.Rent(); + + // Write non-zero data + byte* ptr = (byte*)page; + for (int i = 0; i < _pool.PageSize; i++) + ptr[i] = 0xFF; + + 
_pool.Return(page); // Return zeroes the page + + // Re-rent — should get the same (zeroed) page from free list + nint page2 = _pool.Rent(); + byte* ptr2 = (byte*)page2; + for (int i = 0; i < _pool.PageSize; i++) + ptr2[i].Should().Be(0, "re-rented page should be zeroed"); + + _pool.Return(page2); + } + + // ==================== Capacity Enforcement ==================== + + [Fact] + public void Rent_ReturnsZero_WhenCapacityExhausted() + { + var pages = new nint[_pool.MaxPages]; + for (int i = 0; i < _pool.MaxPages; i++) + { + pages[i] = _pool.Rent(); + pages[i].Should().NotBe(nint.Zero); + } + + // One more should fail + nint extra = _pool.Rent(); + extra.Should().Be(nint.Zero, "should return Zero when capacity exhausted"); + + // Return all + foreach (nint p in pages) + _pool.Return(p); + } + + [Fact] + public void Rent_SucceedsAgain_AfterReturningPages() + { + var pages = new nint[_pool.MaxPages]; + for (int i = 0; i < _pool.MaxPages; i++) + pages[i] = _pool.Rent(); + + // Capacity exhausted + _pool.Rent().Should().Be(nint.Zero); + + // Return one page + _pool.Return(pages[0]); + + // Now can rent again + nint newPage = _pool.Rent(); + newPage.Should().NotBe(nint.Zero, "should succeed after returning a page"); + _pool.Return(newPage); + + // Cleanup + for (int i = 1; i < pages.Length; i++) + _pool.Return(pages[i]); + } + + // ==================== Batch Rent/Return ==================== + + [Fact] + public void RentBatch_RentsMultiplePagesAtOnce() + { + var buffer = new nint[4]; + int rented = _pool.RentBatch(buffer, 4); + + rented.Should().Be(4); + _pool.RentedCount.Should().Be(4); + + foreach (nint p in buffer) + p.Should().NotBe(nint.Zero); + + _pool.ReturnBatch(buffer, 4); + _pool.RentedCount.Should().Be(0); + } + + [Fact] + public void RentBatch_ReturnsPartialCount_WhenCapacityInsufficient() + { + // Pool has 16 pages max; try to rent 20 + var buffer = new nint[20]; + int rented = _pool.RentBatch(buffer, 20); + + rented.Should().Be(16, "can only rent up to 
capacity"); + _pool.RentedCount.Should().Be(16); + + _pool.ReturnBatch(buffer, rented); + } + + [Fact] + public void ReturnBatch_ReturnsAllPages() + { + var buffer = new nint[8]; + _pool.RentBatch(buffer, 8); + + _pool.RentedCount.Should().Be(8); + _pool.ReturnBatch(buffer, 8); + _pool.RentedCount.Should().Be(0); + _pool.FreeCount.Should().Be(8); + } + + // ==================== Pre-allocation ==================== + + [Fact] + public void PreAllocate_AllocatesAllPagesAtStartup() + { + using var prePool = CreatePool(1, 64, preAllocate: true); + + prePool.AllocatedCount.Should().Be(16, "all 16 pages pre-allocated"); + prePool.RentedCount.Should().Be(0, "none rented yet"); + prePool.FreeCount.Should().Be(16, "all on free list"); + } + + // ==================== Capacity Reporting ==================== + + [Fact] + public void CapacityBytes_ReportsCorrectTotal() + { + _pool.CapacityBytes.Should().Be(1 * 1024 * 1024, "1 MB capacity"); + } + + [Fact] + public void UsedBytes_And_FreeBytes_TrackCorrectly() + { + _pool.UsedBytes.Should().Be(0); + _pool.FreeBytes.Should().Be(1 * 1024 * 1024); + + nint page = _pool.Rent(); + _pool.UsedBytes.Should().Be(64 * 1024); + _pool.FreeBytes.Should().Be(1 * 1024 * 1024 - 64 * 1024); + + _pool.Return(page); + _pool.UsedBytes.Should().Be(0); + _pool.FreeBytes.Should().Be(1 * 1024 * 1024); + } + + // ==================== Thread Safety ==================== + + [Fact] + public void ConcurrentRentReturn_DoesNotCorruptState() + { + using var pool = CreatePool(4, 64); // 4 MB = 64 pages + + const int threadCount = 8; + const int opsPerThread = 100; + var barrier = new Barrier(threadCount); + var exceptions = new List<Exception>(); + + var threads = Enumerable.Range(0, threadCount).Select(_ => new Thread(() => + { + try + { + barrier.SignalAndWait(); + for (int i = 0; i < opsPerThread; i++) + { + nint page = pool.Rent(); + if (page != nint.Zero) + { + Thread.Yield(); + pool.Return(page); + } + } + } + catch (Exception ex) + { + lock (exceptions) + 
exceptions.Add(ex); + } + })).ToList(); + + foreach (var t in threads) t.Start(); + foreach (var t in threads) t.Join(); + + exceptions.Should().BeEmpty("no exceptions during concurrent rent/return"); + pool.RentedCount.Should().Be(0, "all pages returned after concurrent operations"); + } + + // ==================== Helper ==================== + + private static PagePool CreatePool(long capacityMb, int pageSizeKb, bool preAllocate = false) + { + var options = Options.Create(new RamDriveOptions + { + CapacityMb = capacityMb, + PageSizeKb = pageSizeKb, + PreAllocate = preAllocate, + }); + return new PagePool(options, NullLogger.Instance); + } +} diff --git a/tests/RamDrive.Core.Tests/PagedFileContentTests.cs b/tests/RamDrive.Core.Tests/PagedFileContentTests.cs new file mode 100644 index 0000000..ce8b16f --- /dev/null +++ b/tests/RamDrive.Core.Tests/PagedFileContentTests.cs @@ -0,0 +1,440 @@ +using FluentAssertions; +using Microsoft.Extensions.Logging.Abstractions; +using Microsoft.Extensions.Options; +using RamDrive.Core.Configuration; +using RamDrive.Core.Memory; +using Xunit; + +namespace RamDrive.Core.Tests; + +/// +/// Tests for <see cref="PagedFileContent"/> — the per-file page table that supports +/// random read/write at arbitrary byte offsets over fixed-size 64KB pages. 
+/// +public sealed class PagedFileContentTests : IDisposable +{ + private const int PageSizeKb = 64; + private const int PageSize = PageSizeKb * 1024; // 65536 bytes + + private readonly PagePool _pool; + + public PagedFileContentTests() + { + var options = Options.Create(new RamDriveOptions + { + CapacityMb = 4, // 4 MB = 64 pages of 64KB + PageSizeKb = PageSizeKb, + }); + _pool = new PagePool(options, NullLogger.Instance); + } + + public void Dispose() => _pool.Dispose(); + + // ==================== Basic Write/Read ==================== + + [Fact] + public void Write_ThenRead_SmallData_WithinSinglePage() + { + using var content = new PagedFileContent(_pool); + + byte[] data = [1, 2, 3, 4, 5]; + int written = content.Write(0, data); + written.Should().Be(5); + content.Length.Should().Be(5); + + var readBuf = new byte[5]; + int read = content.Read(0, readBuf); + read.Should().Be(5); + readBuf.Should().Equal(data); + } + + [Fact] + public void Write_ThenRead_ExactlyOnePage() + { + using var content = new PagedFileContent(_pool); + + var data = new byte[PageSize]; + FillPattern(data, 0xAB); + + int written = content.Write(0, data); + written.Should().Be(PageSize); + content.Length.Should().Be(PageSize); + + var readBuf = new byte[PageSize]; + int read = content.Read(0, readBuf); + read.Should().Be(PageSize); + readBuf.Should().Equal(data); + } + + // ==================== Random Read/Write at Arbitrary Offsets ==================== + + [Fact] + public void RandomWrite_AtMiddleOfPage() + { + using var content = new PagedFileContent(_pool); + + // Write at an arbitrary offset inside the first page + int offset = 1000; + byte[] data = [0xDE, 0xAD, 0xBE, 0xEF]; + content.Write(offset, data); + + content.Length.Should().Be(offset + data.Length); + + var readBuf = new byte[4]; + content.Read(offset, readBuf); + readBuf.Should().Equal(data); + } + + [Fact] + public void RandomWrite_AtArbitraryOffset_InSecondPage() + { + using var content = new PagedFileContent(_pool); + + // 
Write into the second page (offset >= 64KB) + int offset = PageSize + 500; + byte[] data = [0x11, 0x22, 0x33]; + content.Write(offset, data); + + content.Length.Should().Be(offset + data.Length); + + var readBuf = new byte[3]; + content.Read(offset, readBuf); + readBuf.Should().Equal(data); + } + + [Fact] + public void RandomWrite_SpanningTwoPages() + { + using var content = new PagedFileContent(_pool); + + // Write data that starts near the end of page 0 and spans into page 1 + int offset = PageSize - 3; // 3 bytes before page boundary + byte[] data = [0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF]; // 6 bytes crosses boundary + content.Write(offset, data); + + content.Length.Should().Be(offset + data.Length); + + var readBuf = new byte[6]; + content.Read(offset, readBuf); + readBuf.Should().Equal(data, "cross-page write/read should preserve all bytes"); + } + + [Fact] + public void RandomWrite_SpanningThreePages() + { + using var content = new PagedFileContent(_pool); + + // Write starting in page 0, crossing page 1 entirely, ending in page 2 + int offset = PageSize - 10; + int writeSize = PageSize + 20; // spans from page 0 into page 2 + var data = new byte[writeSize]; + FillPattern(data, 0x42); + + content.Write(offset, data); + + var readBuf = new byte[writeSize]; + content.Read(offset, readBuf); + readBuf.Should().Equal(data, "write spanning 3 pages should be fully readable"); + } + + [Fact] + public void MultipleRandomWrites_ToNonContiguousOffsets() + { + using var content = new PagedFileContent(_pool); + + // Write to page 0 + content.Write(100, new byte[] { 0x01 }); + // Write to page 3 (skipping pages 1 and 2) + content.Write(3 * PageSize + 500, new byte[] { 0x03 }); + // Write to page 7 + content.Write(7 * PageSize + 200, new byte[] { 0x07 }); + + // Verify each write + var buf = new byte[1]; + content.Read(100, buf); + buf[0].Should().Be(0x01); + + content.Read(3 * PageSize + 500, buf); + buf[0].Should().Be(0x03); + + content.Read(7 * PageSize + 200, buf); + 
buf[0].Should().Be(0x07); + } + + [Fact] + public void OverwriteExistingData_AtSameOffset() + { + using var content = new PagedFileContent(_pool); + + int offset = 2048; + content.Write(offset, new byte[] { 0xAA, 0xBB, 0xCC }); + content.Write(offset, new byte[] { 0x11, 0x22, 0x33 }); + + var buf = new byte[3]; + content.Read(offset, buf); + buf.Should().Equal([0x11, 0x22, 0x33], "overwrite should replace previous data"); + } + + // ==================== Sparse Reads ==================== + + [Fact] + public void Read_UnallocatedPage_ReturnsZeroes() + { + using var content = new PagedFileContent(_pool); + + // Write to page 2, but don't touch page 0 or 1 + content.Write(2 * PageSize, new byte[] { 0xFF }); + + // Read from page 0: the file length is 2*PageSize+1, so pages 0 and 1 are + // within bounds, but they were never written and therefore never allocated + var buf = new byte[10]; + content.Read(0, buf); + buf.Should().AllBeEquivalentTo((byte)0, "unallocated sparse pages should read as zeroes"); + } + + [Fact] + public void Read_BeyondFileLength_ReturnsZeroBytes() + { + using var content = new PagedFileContent(_pool); + + content.Write(0, new byte[] { 1, 2, 3 }); + + var buf = new byte[10]; + int read = content.Read(100, buf); // offset beyond length + read.Should().Be(0, "reading past file length returns 0 bytes"); + } + + [Fact] + public void Read_PartiallyBeyondLength_ClampsToBounds() + { + using var content = new PagedFileContent(_pool); + + content.Write(0, new byte[] { 1, 2, 3, 4, 5 }); + + var buf = new byte[10]; + int read = content.Read(3, buf); // offset 3, length 5, so only 2 bytes left + read.Should().Be(2); + buf[0].Should().Be(4); + buf[1].Should().Be(5); + } + + // ==================== SetLength / Truncation ==================== + + [Fact] + public void SetLength_Truncate_FreesPages() + { + using var content = new PagedFileContent(_pool); + + // Write 3 full pages + var data = new byte[3 * PageSize]; + FillPattern(data, 0xAB); + 
content.Write(0, data); + + long usedBefore = _pool.RentedCount; + usedBefore.Should().Be(3); + + // Truncate to 1 page + content.SetLength(PageSize).Should().BeTrue(); + content.Length.Should().Be(PageSize); + + _pool.RentedCount.Should().Be(1, "2 pages should have been freed by truncation"); + } + + [Fact] + public void SetLength_Truncate_ZerosPartialPage() + { + using var content = new PagedFileContent(_pool); + + var data = new byte[PageSize]; + FillPattern(data, 0xFF); + content.Write(0, data); + + // Truncate to half a page + int halfPage = PageSize / 2; + content.SetLength(halfPage); + + // Extend back without writing + content.SetLength(PageSize); + + var readBuf = new byte[PageSize]; + content.Read(0, readBuf); + + // First half should have original data + for (int i = 0; i < halfPage; i++) + readBuf[i].Should().Be(0xFF, $"byte {i} in retained portion should be preserved"); + + // Second half should be zero (cleared during truncation) + for (int i = halfPage; i < PageSize; i++) + readBuf[i].Should().Be(0, $"byte {i} beyond truncation should be zero"); + } + + [Fact] + public void SetLength_Extend_DoesNotAllocate() + { + using var content = new PagedFileContent(_pool); + + content.SetLength(10 * PageSize); + content.Length.Should().Be(10 * PageSize); + _pool.RentedCount.Should().Be(0, "extending with SetLength does not allocate pages (sparse)"); + } + + [Fact] + public void SetLength_ToZero_FreesAllPages() + { + using var content = new PagedFileContent(_pool); + + content.Write(0, new byte[2 * PageSize]); + _pool.RentedCount.Should().Be(2); + + content.SetLength(0); + content.Length.Should().Be(0); + _pool.RentedCount.Should().Be(0); + } + + // ==================== Capacity Exhaustion ==================== + + [Fact] + public void Write_ReturnsNegativeOne_WhenDiskFull() + { + using var content = new PagedFileContent(_pool); + + // Fill the entire pool (4 MB = 64 pages) + var bigData = new byte[4 * 1024 * 1024]; + int written = content.Write(0, bigData); + 
written.Should().Be(bigData.Length); + + // Try to write one more page + int result = content.Write(bigData.Length, new byte[PageSize]); + result.Should().Be(-1, "should return -1 when pool is exhausted"); + } + + // ==================== Empty Write ==================== + + [Fact] + public void Write_EmptySpan_ReturnsZero() + { + using var content = new PagedFileContent(_pool); + + int written = content.Write(0, ReadOnlySpan<byte>.Empty); + written.Should().Be(0); + content.Length.Should().Be(0); + } + + // ==================== Concurrent Access ==================== + + [Fact] + public void ConcurrentReads_DoNotBlock() + { + using var content = new PagedFileContent(_pool); + + var data = new byte[PageSize]; + FillPattern(data, 0xBE); + content.Write(0, data); + + const int threadCount = 8; + var barrier = new Barrier(threadCount); + var exceptions = new List<Exception>(); + + var threads = Enumerable.Range(0, threadCount).Select(_ => new Thread(() => + { + try + { + barrier.SignalAndWait(); + var buf = new byte[PageSize]; + int read = content.Read(0, buf); + read.Should().Be(PageSize); + buf[0].Should().Be(0xBE); + } + catch (Exception ex) + { + lock (exceptions) + exceptions.Add(ex); + } + })).ToList(); + + foreach (var t in threads) t.Start(); + foreach (var t in threads) t.Join(); + + exceptions.Should().BeEmpty("concurrent reads should succeed without errors"); + } + + [Fact] + public void ConcurrentRandomWrites_ToDistinctPages_Succeed() + { + using var content = new PagedFileContent(_pool); + + // Pre-extend so page table is sized + content.SetLength(8 * PageSize); + + const int threadCount = 8; + var barrier = new Barrier(threadCount); + var exceptions = new List<Exception>(); + + var threads = Enumerable.Range(0, threadCount).Select(i => new Thread(() => + { + try + { + barrier.SignalAndWait(); + long offset = i * PageSize; + var data = new byte[100]; + FillPattern(data, (byte)(i + 1)); + content.Write(offset, data); + } + catch (Exception ex) + { + lock (exceptions) + 
exceptions.Add(ex); + } + })).ToList(); + + foreach (var t in threads) t.Start(); + foreach (var t in threads) t.Join(); + + exceptions.Should().BeEmpty(); + + // Verify each thread's write + for (int i = 0; i < threadCount; i++) + { + var buf = new byte[100]; + content.Read(i * PageSize, buf); + buf[0].Should().Be((byte)(i + 1), $"thread {i}'s write should be preserved"); + } + } + + // ==================== Page Boundary Edge Cases ==================== + + [Fact] + public void Write_ExactlyAtPageBoundary_AllocatesNewPage() + { + using var content = new PagedFileContent(_pool); + + // Write exactly to fill page 0 + content.Write(0, new byte[PageSize]); + _pool.RentedCount.Should().Be(1); + + // Write one byte at the start of page 1 + content.Write(PageSize, new byte[] { 0xFF }); + _pool.RentedCount.Should().Be(2, "writing at page boundary allocates a new page"); + } + + [Fact] + public void Read_CrossingPageBoundary_IsContiguous() + { + using var content = new PagedFileContent(_pool); + + // Fill last 2 bytes of page 0 and first 2 bytes of page 1 + content.Write(PageSize - 2, new byte[] { 0xAA, 0xBB, 0xCC, 0xDD }); + + var buf = new byte[4]; + content.Read(PageSize - 2, buf); + buf.Should().Equal([0xAA, 0xBB, 0xCC, 0xDD], + "reading across page boundary should return contiguous data"); + } + + // ==================== Helpers ==================== + + private static void FillPattern(byte[] buffer, byte value) + { + Array.Fill(buffer, value); + } +} diff --git a/tests/RamDrive.Core.Tests/RamDrive.Core.Tests.csproj b/tests/RamDrive.Core.Tests/RamDrive.Core.Tests.csproj index 3d32c51..697a79f 100644 --- a/tests/RamDrive.Core.Tests/RamDrive.Core.Tests.csproj +++ b/tests/RamDrive.Core.Tests/RamDrive.Core.Tests.csproj @@ -6,6 +6,7 @@ enable false true + true diff --git a/tests/RamDrive.Core.Tests/RamFileSystemTests.cs b/tests/RamDrive.Core.Tests/RamFileSystemTests.cs new file mode 100644 index 0000000..aebe291 --- /dev/null +++ 
b/tests/RamDrive.Core.Tests/RamFileSystemTests.cs @@ -0,0 +1,207 @@ +using FluentAssertions; +using Microsoft.Extensions.Logging.Abstractions; +using Microsoft.Extensions.Options; +using RamDrive.Core.Configuration; +using RamDrive.Core.FileSystem; +using RamDrive.Core.Memory; +using Xunit; + +namespace RamDrive.Core.Tests; + +/// +/// Integration tests for <see cref="RamFileSystem"/> — file/directory CRUD through the +/// in-memory file system layer, verifying that page-backed file I/O works end-to-end. +/// +public sealed class RamFileSystemTests : IDisposable +{ + private const int PageSizeKb = 64; + private const int PageSize = PageSizeKb * 1024; + + private readonly PagePool _pool; + private readonly RamFileSystem _fs; + + public RamFileSystemTests() + { + var options = Options.Create(new RamDriveOptions + { + CapacityMb = 2, + PageSizeKb = PageSizeKb, + }); + _pool = new PagePool(options, NullLogger.Instance); + _fs = new RamFileSystem(_pool); + } + + public void Dispose() + { + _fs.Dispose(); + _pool.Dispose(); + } + + // ==================== File CRUD ==================== + + [Fact] + public void CreateFile_InRoot_Succeeds() + { + var node = _fs.CreateFile(@"\test.txt"); + node.Should().NotBeNull(); + node!.Name.Should().Be("test.txt"); + node.IsFile.Should().BeTrue(); + } + + [Fact] + public void CreateFile_InSubdirectory_Succeeds() + { + _fs.CreateDirectory(@"\subdir"); + var node = _fs.CreateFile(@"\subdir\file.txt"); + node.Should().NotBeNull(); + node!.Name.Should().Be("file.txt"); + } + + [Fact] + public void CreateFile_DuplicateName_ReturnsNull() + { + _fs.CreateFile(@"\dup.txt"); + _fs.CreateFile(@"\dup.txt").Should().BeNull(); + } + + [Fact] + public void CreateFile_ParentNotFound_ReturnsNull() + { + _fs.CreateFile(@"\nonexistent\file.txt").Should().BeNull(); + } + + [Fact] + public void FindNode_Root() + { + _fs.FindNode(@"\").Should().NotBeNull(); + _fs.FindNode(@"\")!.IsDirectory.Should().BeTrue(); + } + + [Fact] + public void FindNode_CaseInsensitive() + { + 
_fs.CreateFile(@"\MyFile.TXT"); + _fs.FindNode(@"\myfile.txt").Should().NotBeNull(); + _fs.FindNode(@"\MYFILE.TXT").Should().NotBeNull(); + } + + // ==================== File I/O through FileSystem ==================== + + [Fact] + public void WriteAndRead_ThroughFileNode() + { + var node = _fs.CreateFile(@"\data.bin")!; + + byte[] data = [10, 20, 30, 40, 50]; + node.Content!.Write(0, data); + + var buf = new byte[5]; + node.Content.Read(0, buf); + buf.Should().Equal(data); + + node.Size.Should().Be(5); + } + + [Fact] + public void RandomWrite_ThroughFileNode_SpanningPages() + { + var node = _fs.CreateFile(@"\large.bin")!; + + // Write spanning two pages + int offset = PageSize - 10; + var data = new byte[20]; + Array.Fill(data, (byte)0xCD); + node.Content!.Write(offset, data); + + var buf = new byte[20]; + node.Content.Read(offset, buf); + buf.Should().Equal(data); + } + + // ==================== Directory Operations ==================== + + [Fact] + public void CreateDirectory_Succeeds() + { + var dir = _fs.CreateDirectory(@"\mydir"); + dir.Should().NotBeNull(); + dir!.IsDirectory.Should().BeTrue(); + } + + [Fact] + public void ListDirectory_ShowsChildren() + { + _fs.CreateFile(@"\a.txt"); + _fs.CreateFile(@"\b.txt"); + _fs.CreateDirectory(@"\subdir"); + + var children = _fs.ListDirectory(@"\"); + children.Should().NotBeNull(); + children!.Count.Should().Be(3); + children.Select(c => c.Name).Should().Contain(["a.txt", "b.txt", "subdir"]); + } + + // ==================== Delete ==================== + + [Fact] + public void Delete_File_FreesPages() + { + var node = _fs.CreateFile(@"\temp.txt")!; + node.Content!.Write(0, new byte[PageSize]); + _pool.RentedCount.Should().Be(1); + + _fs.Delete(@"\temp.txt").Should().BeTrue(); + _pool.RentedCount.Should().Be(0, "deleting a file should free its pages"); + } + + [Fact] + public void Delete_NonEmptyDirectory_Fails() + { + _fs.CreateDirectory(@"\dir"); + _fs.CreateFile(@"\dir\file.txt"); + + 
_fs.Delete(@"\dir").Should().BeFalse(); + } + + [Fact] + public void Delete_EmptyDirectory_Succeeds() + { + _fs.CreateDirectory(@"\dir"); + _fs.Delete(@"\dir").Should().BeTrue(); + _fs.FindNode(@"\dir").Should().BeNull(); + } + + // ==================== Move ==================== + + [Fact] + public void Move_RenameFile() + { + var node = _fs.CreateFile(@"\old.txt")!; + node.Content!.Write(0, new byte[] { 1, 2, 3 }); + + _fs.Move(@"\old.txt", @"\new.txt", replace: false).Should().BeTrue(); + _fs.FindNode(@"\old.txt").Should().BeNull(); + + var moved = _fs.FindNode(@"\new.txt"); + moved.Should().NotBeNull(); + + var buf = new byte[3]; + moved!.Content!.Read(0, buf); + buf.Should().Equal([1, 2, 3], "data should be preserved after move"); + } + + // ==================== Capacity Tracking ==================== + + [Fact] + public void TotalBytes_And_FreeBytes_ReportCorrectly() + { + _fs.TotalBytes.Should().Be(2 * 1024 * 1024); + _fs.FreeBytes.Should().Be(2 * 1024 * 1024); + + var node = _fs.CreateFile(@"\file.dat")!; + node.Content!.Write(0, new byte[PageSize]); + + _fs.UsedBytes.Should().Be(PageSize); + _fs.FreeBytes.Should().Be(2 * 1024 * 1024 - PageSize); + } +}