
Welcome to the In Memoria Wiki!

This wiki provides a comprehensive guide to the "In Memoria" project, its architecture, and how to contribute. "In Memoria" is a local-first tool designed to provide persistent intelligence to AI coding assistants, effectively giving them long-term memory about your codebase. Note: this wiki is automatically generated by AI agents (Gemini CLI, Claude Code, etc.), so inaccuracies may be present; please report any you find.


1. The Problem: Session Amnesia

AI coding assistants like Claude, Copilot, and Cursor are powerful, but they suffer from a critical limitation: amnesia. Each new session starts from a blank slate. You must repeatedly explain your project's architecture, coding conventions, and established patterns. This is not only repetitive but also highly inefficient, consuming valuable time and tokens as the AI re-analyzes your code from scratch in every interaction.

The result is generic advice that often clashes with your project's specific style, leading to a frustrating and disjointed developer experience.

2. The Solution: Persistent, Local-First Intelligence

"In Memoria" solves this problem by acting as a Model Context Protocol (MCP) server. It runs entirely on your local machine, ensuring your code never leaves your control.

What is the Model Context Protocol (MCP)?

MCP is a standardized communication interface that allows AI models to connect with external tools and data sources. "In Memoria" implements this protocol to provide a persistent knowledge base about your code that AI agents can query. It acts as a specialized sidecar, equipping the AI with deep, contextual knowledge about your project.
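
For a concrete sense of the protocol, here is a minimal sketch of an MCP client launching In Memoria over stdio and asking it to list its tools, using the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`). The `npx in-memoria server` launch command is an assumption here; see Getting Started for the exact invocation.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the MCP server as a child process and talk to it over stdio.
// The launch command is an assumption; check Getting Started for the real one.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["in-memoria", "server"],
});

const client = new Client({ name: "example-agent", version: "0.1.0" });
await client.connect(transport);

// Ask the server which tools it exposes (a standard MCP request).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```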

How It Works

  1. Analyze: In Memoria performs a deep analysis of your codebase, parsing the source code to understand its structure, semantics, and style.
  2. Learn: It identifies and learns recurring patterns, naming conventions, architectural choices, and even your personal coding habits.
  3. Store: This learned intelligence is saved persistently in a local SQLite database within your project directory.
  4. Serve: It runs a local MCP server that exposes this knowledge base to AI agents through a rich set of specialized tools.

When an AI agent needs context, it queries "In Memoria" to get instant, relevant, and style-consistent information, eliminating the need for repeated explanations and enabling a truly collaborative partnership.
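
As an illustration, a connected agent could pull project context with a single tool call. The sketch below reuses the `client` from the previous snippet; `get_project_blueprint` is a tool named later on this page, but the argument shape is an assumption (the MCP Tools page documents the actual schemas).

```typescript
// Assumes `client` is the connected MCP Client from the previous sketch.
// The arguments object is illustrative; see the MCP Tools page for the
// real input schema.
const blueprint = await client.callTool({
  name: "get_project_blueprint",
  arguments: { path: "." },
});

console.log(blueprint.content); // a token-efficient project summary
```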


3. Core Features in Detail

  • Deep Code Analysis: Using tree-sitter, In Memoria parses 11 languages to build a detailed Abstract Syntax Tree (AST) for each file. This goes beyond simple text matching, allowing the engine to understand the code's grammatical structure; a simplified parsing sketch follows this list. See the Rust Core page for more.

  • Intelligent Pattern Learning: The system uses statistical analysis to identify recurring patterns in your code (a toy convention-tallying sketch follows this list). This includes:

    • Naming Conventions: Learns whether you use camelCase, snake_case, or PascalCase for functions, classes, and variables.
    • Structural Patterns: Detects architectural choices like MVC, Layered Architecture, or if you prefer a component-based structure.
    • Implementation Patterns: Recognizes common software design patterns like the Singleton, Factory, or Observer patterns.
  • Project Blueprints: The get_project_blueprint tool provides a token-efficient summary of the entire project, including the detected technology stack, primary language, application entry points, and key directories. This allows an AI agent to get oriented in seconds.

  • Smart File Routing: The predict_coding_approach tool can map a high-level task (e.g., "add password reset functionality") to the specific files that are most relevant to that feature, guiding the AI to the correct location for the changes.

  • Semantic Search: By generating vector embeddings for your code, In Memoria enables semantic search, letting you find code by its meaning and intent rather than by keywords. For example, searching for "function to get user data" can find a function named fetchUserProfile. A similarity sketch follows this list.

  • Persistent Work Memory: The system tracks your work across sessions, including which files you were working on, what feature you were implementing, and architectural decisions you made. This allows an AI to pick up right where you left off.
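
The sketches below illustrate three of the features above. First, deep code analysis: a simplified stand-in for tree-sitter parsing, written against the `tree-sitter` and `tree-sitter-javascript` npm packages. In Memoria's real parsing happens in its Rust core, so this Node version only shows the shape of the approach.

```typescript
import Parser from "tree-sitter";
import JavaScript from "tree-sitter-javascript";

// Parse a snippet into a syntax tree, as any tree-sitter-based analyzer
// would. (tree-sitter-javascript ships without type definitions; a
// sketch-level untyped import is fine here.)
const parser = new Parser();
parser.setLanguage(JavaScript);

const tree = parser.parse(
  "function fetchUserProfile(id) { return db.get(id); }"
);

// Walk the tree and report each named function declaration — the raw
// material for learning naming conventions and structural patterns.
function walk(node: Parser.SyntaxNode): void {
  if (node.type === "function_declaration") {
    console.log(node.childForFieldName("name")?.text); // "fetchUserProfile"
  }
  node.children.forEach(walk);
}
walk(tree.rootNode);
```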

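Second, a toy version of naming-convention learning: classify each identifier's style and tally which one dominates. The real statistical analysis is richer than this; the sketch only shows the core idea.

```typescript
// Classify an identifier into a naming style — a toy stand-in for the
// statistical convention learning described above.
type Style = "camelCase" | "snake_case" | "PascalCase" | "other";

function classify(name: string): Style {
  if (/^[a-z]+(?:[A-Z][a-z0-9]*)+$/.test(name)) return "camelCase";
  if (/^[a-z]+(?:_[a-z0-9]+)+$/.test(name)) return "snake_case";
  if (/^(?:[A-Z][a-z0-9]*)+$/.test(name)) return "PascalCase";
  return "other";
}

// Tally styles across identifiers pulled from the AST and pick the winner.
const names = ["fetchUserProfile", "getUserData", "UserService", "retry_count"];
const tally = new Map<Style, number>();
for (const n of names) {
  const style = classify(n);
  tally.set(style, (tally.get(style) ?? 0) + 1);
}
console.log([...tally.entries()].sort((a, b) => b[1] - a[1])[0]); // dominant style
```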

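Third, the idea behind semantic search: compare embedding vectors with cosine similarity. The `embed` function below is a deliberately crude stand-in for a real embedding model, included only so the sketch runs.

```typescript
// Toy embedding: a 26-dimension character-frequency vector. A real pipeline
// uses a learned model that produces dense *semantic* vectors.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

// Cosine similarity: 1.0 means the vectors point the same way (similar
// meaning); values near 0 mean unrelated.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank stored snippets by similarity to a natural-language query.
const query = embed("function to get user data");
const index = [
  { name: "fetchUserProfile", vec: embed("fetch the user profile record") },
  { name: "renderFooter", vec: embed("render the page footer markup") },
];
index.sort((a, b) => cosine(b.vec, query) - cosine(a.vec, query));
console.log(index.map((e) => e.name)); // names ranked by semantic closeness
```
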
4. Who Is It For?

  • Individual Developers: Enhance your personal productivity by giving your AI assistant a perfect memory of your own coding style and architectural decisions.
  • Development Teams: Share the in-memoria.db file to ensure every team member (and their AI assistant) is working with the same set of conventions and patterns, promoting consistency across the entire project.
  • AI Agent Developers: Leverage In Memoria as a powerful infrastructure component to build more context-aware and intelligent developer tools.

5. Wiki Navigation

To learn more, explore the detailed documentation on the following pages:

  • Getting Started: Instructions on how to install, build, and run the project.
  • Architecture: A high-level overview of the system architecture, including the Rust Core, TypeScript Server, and storage solutions.
  • TypeScript Application: A deep dive into the Node.js/TypeScript application that runs the MCP server and orchestrates the analysis.
  • Rust Core: Detailed information about the high-performance Rust engine that powers the code analysis.
  • Walkthrough-Adding-A-New-Language: A step-by-step tutorial for contributors.
  • Database Schema: A detailed breakdown of the SQLite database schema.
  • Configuration: An in-depth guide to configuring In Memoria.
  • MCP Tools: A comprehensive guide for AI agents on how to use the tools provided by the In Memoria MCP server.
  • MCP Integration: A guide for connecting AI tools and editors to the In Memoria server.
  • Contributing: Guidelines for contributing to the In Memoria project.
