Releases: Perseusxrltd/mnemion
v3.5.0 — Studio: Connect Agents + systematic bug fixes
Highlights
🔌 Connect Agents — one-click MCP setup
Studio now ships a Connect Agents view (/connect, keyboard G C) that detects installed MCP-capable AI clients and wires Mnemion into each one with a single click.
Supported: Claude Code (user + project), Claude Desktop, OpenAI Codex CLI (TOML), Google Gemini CLI, Cursor, Codeium Windsurf, Zed. Legacy mempalace entries are detected and auto-replaced. Every install writes a timestamped backup to .mnemion_backups/. The installed command uses the absolute path of Studio's own Python interpreter — no PATH surprises.
Any unlisted client (OpenClaw, Nemoclaw, Hermes, Cline, custom agents…) can connect via the universal JSON snippet in the view.
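For reference, an MCP server entry for such clients typically takes the following shape. The exact snippet is generated in the Connect Agents view; the command path and module name below are illustrative assumptions, not the real generated values:

```json
{
  "mcpServers": {
    "mnemion": {
      "command": "/absolute/path/to/studio/python",
      "args": ["-m", "mnemion.mcp_server"]
    }
  }
}
```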
🔴 Critical Studio bug fixes
- Timestamp field mismatch — readers queried `timestamp`; writers saved `filed_at`. Every "Created" row, Recently Added sort, and agent last-seen was silently broken.
- DrawerCreateModal didn't navigate — read `data.id` when backend returns `drawer_id`.
- Empty search previews — `hybrid_searcher` returns `text`, frontend rendered `content`; now mapped at the API boundary.
- Vault export crashed on click — referenced undefined `_col`; now uses `_get_collection()` and streams via `FileResponse` with a temp file.
- CORS blocked non-5173 ports — widened to any localhost port plus `file://`.
- CommandPalette showed literal "undefined" — operator-precedence bug in fallback.
- Hardcoded port 5173 in Settings, Electron dev mode, and `start.bat` — all now dynamic.
🟢 Studio features
- Graph hover highlight — Obsidian-style: hover a node, non-neighbours dim, adjacent edges brighten.
- Dashboard recent drawers + quick-capture button.
- Search `wing:`/`room:` inline operators; contested trust badges on results.
- `<ErrorBoundary>` wraps the router `<Outlet />`.
- New endpoint: `GET /api/drawers/recent`.
🟡 Cleanup
- Dead `RightSidebar.tsx` (162 lines, never imported) removed.
- Dead `/ws` WebSocket stub + `broadcast()` removed.
- `hooks/mnemion_save_hook.py` no longer hardcodes `~/projects/mnemion`.
- Shortcut modal reflects only wired shortcuts; added `G C` → Connect Agents.
Upgrade
```bash
pip install --upgrade mnemion
uvicorn studio.backend.main:app --port 7891
cd studio/frontend && npm install && npm run dev
```

Open http://localhost:5173 → Connect Agents → click Install on each AI client.
Compatibility
Drop-in upgrade from v3.4.x. Legacy mempalace MCP entries are auto-replaced on install. All 120 Python tests pass.
🤖 Co-authored with Claude Code
Mnemion v3.4.7 — Surgical Hardening & Ghost Purge
This release marks a final transition to a pure, production-ready Mnemion core by purging technical debt and legacy ghosts.
🛠️ Key Improvements
- Legacy Ghost Purge: Excised all remaining hardcoded references to mempalace, palace, and MemPal from config systems, miners, and benchmarks.
- Dependency Hardening: Added explicit logger.warning notifications when orch is missing but latent space grooming (LeWM) or context prediction (JEPA) is configured. No more silent feature degradation.
- Onboarding Cleanup: Removed nonexistent palace_facts.py references from generated metadata.
- Benchmark Synchronization: Realigned all internal benchmarks (ConvoMem, LongMemEval, LoCoMo) to the native repository and Mnemion namespace.
- Version Manifest Parity: Fully synchronized versions across .molthub/project.md, README.md, .claude-plugin, and .codex-plugin.
🧹 Maintenance
- Removed MempalaceConfig compatibility alias from config.py.
- Stripped legacy ~/.mempalace migration fallbacks for architectural purity.
- Updated project version to v3.4.7.
v3.4.1 — Mnemion Anaktoron Hardening & Truth Audit
Finalized Anaktoron-native architecture, synchronized FTS5/ChromaDB/Trust pipelines, and integrated LeWM predictive latent trajectory.
v3.3.5 — Streaming restore, O(batch) peak memory
Root cause of SIGKILL
restore called json.load() on the full export before writing anything. A 58 MB file materialises as ~500 MB–1 GB of Python objects (dict/string overhead). Combined with ChromaDB loading its sentence-transformer model (~90 MB) and running embedding inference, the OOM killer was triggered consistently at ~950/33 433 drawers — before even 3% of the archive was written.
Reducing batch size didn't help because the full list was already in RAM.
What changed
_stream_json_array(filepath)
Yields one drawer object at a time using json.JSONDecoder.raw_decode() with a 512 KB rolling file buffer. At any point during restore, only the current batch is in memory — regardless of archive size.
_count_json_objects(filepath)
Fast byte scan (b'"id":') that counts drawers in ~20 ms without JSON parsing. Used at startup so % progress still works with the streaming path.
cmd_restore
Now feeds _stream_json_array() into the batch loop. The full export never exists as a Python list.
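The streaming path described above can be sketched roughly as follows. This is a hypothetical re-implementation: function names mirror the release notes, but the actual code may differ in details such as buffer handling and error recovery.

```python
import json


def stream_json_array(filepath, chunk_size=512 * 1024):
    """Yield one drawer object at a time from a JSON array on disk.

    A rolling buffer is refilled in fixed-size chunks, and
    json.JSONDecoder.raw_decode() peels one object off the front,
    so only the current object (plus the buffer) is ever in memory.
    """
    decoder = json.JSONDecoder()
    buf = ""
    with open(filepath, "r", encoding="utf-8") as f:
        # Consume everything up to and including the opening '['.
        while "[" not in buf:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            buf += chunk
        buf = buf[buf.index("[") + 1:]
        while True:
            buf = buf.lstrip().lstrip(",").lstrip()
            if buf.startswith("]"):
                return  # end of array
            try:
                obj, end = decoder.raw_decode(buf)
            except json.JSONDecodeError:
                chunk = f.read(chunk_size)  # need more bytes for a full object
                if not chunk:
                    return
                buf += chunk
                continue
            yield obj
            buf = buf[end:]


def count_json_objects(filepath, key=b'"id":'):
    """Count drawers by scanning for the id key as raw bytes, no JSON parsing.

    The overlap tail keeps a key that straddles a chunk boundary countable.
    """
    count = 0
    tail = b""
    with open(filepath, "rb") as f:
        while True:
            chunk = f.read(1 << 20)
            if not chunk:
                return count
            block = tail + chunk
            count += block.count(key)
            tail = block[-(len(key) - 1):]
```

The byte-scan counter can over- or under-count if `"id":` appears inside drawer text or is serialized with different spacing, which is acceptable for a progress estimate.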
Memory profile (33 k drawer archive)
| Version | Peak RAM during restore |
|---|---|
| v3.3.2 (batch GC) | ~500–1 000 MB (full list in RAM) |
| v3.3.5 (streaming) | ~90–150 MB (transformer + one batch) |
v3.3.2 — Restore OOM fix
What's fixed
SIGKILL during large archive restore
Restoring a 33k-drawer, 58MB export was dying with SIGKILL (OOM killer) because:
- ChromaDB embeds every document on write using a sentence transformer model
- Batch size of 500 × ~22k char average = ~11MB of text per embedding round
- The full JSON array stayed in memory throughout
Changes
- Default batch size 500 → 50 — bounds the embedding memory per round
- `--batch-size` flag — `mnemion restore archive/drawers_export.json --batch-size 20` for very tight hosts
- Per-batch GC — each batch is cleared from the in-memory list and `gc.collect()` is called after every ChromaDB write
- `flush=True` on all `print()` — progress is now visible before any OOM event
- Progress shows `%` + drawer count — `[35%] 11700/33433 ...`
- README updated — all `mnemion mine archive/...` references replaced with `mnemion restore`, OOM note added with `--batch-size` workaround
v3.3.0 — restore command + collection name fixes
What's new
mnemion restore — new command for importing a JSON export
The mnemion mine archive/drawers_export.json path documented in the README was broken — mine expects a directory, not a JSON file, and would crash with No mnemion.yaml found. This release adds the actual command:
```bash
# New machine setup from shared export
mnemion restore archive/drawers_export.json

# Add to an existing palace without wiping it
mnemion restore archive/drawers_export.json --merge

# Replace an existing palace entirely
mnemion restore archive/drawers_export.json --replace
```
Collection name resolution fixed across all commands
Several commands (search, wake-up, status, repair, compress, and the miner) hardcoded "mnemion_drawers" instead of reading collection_name from ~/.mnemion/config.json. This caused silent failures for palaces created with the legacy "mempalace_drawers" collection name.
All read/write paths now resolve the collection name from MempalaceConfig().collection_name.
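The resolution logic amounts to roughly this. It is a hypothetical sketch: the real code goes through `MempalaceConfig` rather than reading the file directly, and the `config_path` parameter here exists only to make the sketch testable.

```python
import json
from pathlib import Path


def resolve_collection_name(config_path=None, default="mnemion_drawers"):
    """Return collection_name from ~/.mnemion/config.json, else the default.

    Palaces created under the legacy name store "mempalace_drawers" in
    config.json, so every command that touches ChromaDB must read it
    instead of hardcoding the new default.
    """
    cfg = Path(config_path) if config_path else Path.home() / ".mnemion" / "config.json"
    if cfg.exists():
        return json.loads(cfg.read_text()).get("collection_name", default)
    return default
```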
Files changed
- `cli.py` — `restore` command, `repair` and `compress` collection name fix
- `searcher.py` — `search()` and `search_memories()` collection name fix
- `layers.py` — `Layer1`, `Layer2`, `Layer3`, `MemoryStack.status()` collection name fix
- `miner.py` — `get_collection()` and `status()` collection name fix
- `convo_miner.py` — `get_collection()` collection name fix
v3.2.23 — Multi-agent palace sync
Multi-Agent Palace Sync
Multiple agents on different machines can now share the same memory palace automatically — no human coordination, no corrupted JSON, no silent failures.
New files
sync/merge_exports.py
Pure-Python merge utility. Produces a clean union of two drawers_export.json files without git merge markers. When the same drawer ID exists in both, the newer filed_at timestamp wins (remote wins on tie). Handles first-sync (missing file) gracefully.
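A minimal sketch of the merge rule, assuming `filed_at` is an ISO-8601 string so lexicographic comparison matches chronological order (the real `merge_exports.py` presumably handles file I/O and missing-file first sync as well):

```python
def merge_exports(local, remote):
    """Union of two drawer lists by id.

    On a duplicate id, the entry with the newer filed_at wins;
    remote wins on an exact tie.
    """
    merged = {d["id"]: d for d in local}
    for d in remote:
        existing = merged.get(d["id"])
        if existing is None or d.get("filed_at", "") >= existing.get("filed_at", ""):
            merged[d["id"]] = d
    return list(merged.values())
```

Letting remote win ties makes the merge deterministic regardless of which agent runs it, so two agents converge on the same export after one sync round.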
sync/SyncMemories.sh
Bash implementation of the sync algorithm for Linux/macOS agents (openclaw, headless servers).
Rewritten: sync/SyncMemories.ps1
| Before | After |
|---|---|
| Blind push — fails silently if remote is ahead | git fetch before every push |
| No merge — git conflict markers corrupt the JSON | merge_exports.py produces a valid merged JSON |
| No retry — one rejection and it gave up | Up to 5 retries with 2–9 s random jitter |
| git push origin master hardcoded | Branch auto-detected; overridable via MNEMION_BRANCH |
| No agent identity in commits | MNEMION_AGENT_ID stamped in every commit |
| No concurrency guard | Lock file; stale locks > 10 min auto-cleared |
Environment variables
| Variable | Default | Description |
|---|---|---|
| MNEMION_AGENT_ID | hostname | Name in commit messages |
| MNEMION_BRANCH | auto-detected | Branch to sync |
| MNEMION_REPO_DIR | ~/.mnemion | Memory git repo path |
| MNEMION_SOURCE_DIR | auto-detected | Mnemion package path |
| MNEMION_PYTHON | python3 | Python binary (Linux/Mac) |
Known v1 limitation
Drawer deletions do not propagate. A drawer deleted on one agent will be re-added after the next merge from another agent that still has it. Tombstone propagation is planned for a future version.
v3.2.22 — Entity detection quality, search ranking, Makefile
What's fixed
Entity detection false positives
- ~120 new stopwords added: status adjectives (`current`, `verified`, `pending`, `active`…), common tech/business nouns (`stage`, `trust`, `hybrid`, `call`, `notes`, `auto`…), adjective-nouns that appear capitalised in project docs (`lexical`, `semantic`, `abstract`…). Directly addresses the reported noisy candidates.
- Frequency threshold raised 3 → 5: words that appear fewer than 5 times no longer become candidates, reducing sentence-start capitalisation noise.
- Uncertain list filtered: zero-signal entries with confidence < 0.3 are now dropped before presentation; uncertain cap tightened 8 → 6.
Search ranking (conversational queries)
_fts_search previously ran only a strict phrase match (entire query in double-quotes). For conversational queries like "Which APIs are next after Shopify and CJ?" the phrase never matched anything, leaving ranking entirely to vector search — which pulled broad overview docs (USER.md, TOOLS.md) ahead of the directly relevant ads.md.
Now runs two passes: phrase search (preserved for exact-phrase priority) + keyword search (stop-words stripped, AND-of-terms). Both candidate sets are merged before RRF fusion.
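Illustratively, the two query passes and the fusion step might look like this. The stopword list, function names, and the RRF constant `k=60` are assumptions for the sketch, not the actual implementation:

```python
import re

# Tiny illustrative stopword list; the real one is much larger.
STOPWORDS = {"which", "are", "is", "the", "a", "of", "to", "next", "after", "and"}


def fts_queries(user_query):
    """Build the two FTS5 query strings.

    Pass 1: the whole query as an exact phrase (preserves exact-phrase
    priority). Pass 2: stopwords stripped, remaining terms ANDed, so
    conversational queries still match keyword-bearing documents.
    """
    phrase = '"' + user_query.replace('"', '""') + '"'
    terms = [t for t in re.findall(r"\w+", user_query.lower()) if t not in STOPWORDS]
    keyword = " AND ".join(terms) if terms else None
    return phrase, keyword


def rrf_merge(ranked_lists, k=60):
    """Reciprocal Rank Fusion over candidate id lists from each pass."""
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Running both passes and merging the candidate sets before fusion means a failed phrase match no longer leaves ranking entirely to vector search.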
Pytest environment binding
New top-level Makefile with install, test, test-fast, lint, format, clean targets. All test targets invoke $(VENV_PY) -m pytest, so pytest always runs in the project venv.
Fix for: ConftestImportFailure: No module named 'chromadb' — caused by running the system-level pytest binary instead of the venv one.
```bash
make test       # full suite
make test-fast  # skip benchmark/slow/stress marks
```

Tag alignment
Tags v3.2.19, v3.2.20, v3.2.21 now point to their correct version-bump commits. Previously v3.2.19 was misplaced at HEAD (the 3.2.21 commit), causing the reported version inconsistency.
v3.2.21 — version bump
Version bump only. No code changes.
v3.2.20 — version bump
Version bump only. No code changes.