
Commit aa09b36 (parent 83501d7)

docs: update architecture and api references for cognitive memory

Update AGENTS.md with four-crate architecture (adding llmem-index and llmem-quant), mention Python training pipeline, and update commit guidelines. Update llms.txt project summary with cognitive CLI metaphors and new crate descriptions. Update SKILL.md CLI examples to use new cognitive commands.

3 files changed: +26 additions, −13 deletions

AGENTS.md

Lines changed: 9 additions & 5 deletions

```diff
@@ -2,14 +2,18 @@
 ## Project Overview
 
-A specification and Rust implementation defining a convention for storing AI agent memory as plain markdown files. Two levels: project (`~/.llmem/{project}/`) and global (`~/.llmem/global/`).
+A specification and Rust implementation defining a convention for storing AI agent memory as plain markdown files. Two levels: project (`~/.llmem/{project}/`) and global (`~/.llmem/global/`). Uses cognitive metaphors for its CLI: memorize, remember, note, learn, consolidate, reflect, forget.
 
 ## Architecture
 
-Rust workspace with three crates:
-- `llmem-core` — spec types, file parsing, index operations
-- `llmem-cli` — CLI binary (`llmem`) for managing memory
+Rust workspace with four crates:
+- `llmem-core` — spec types, file parsing, index operations, inbox, config, embeddings
+- `llmem-cli` — CLI binary (`llmem`) with cognitive commands
 - `llmem-server` — RAG HTTP server for semantic search
+- `llmem-index` — ANN indices (HNSW, IVF-Flat) and tree-sitter code indexing
+- `llmem-quant` — TurboQuant vector quantization (1-4 bit)
+
+Training pipeline in `training/` (Python): data generation, model distillation, ONNX export.
 
 ## Discovering Structure
 
@@ -33,7 +37,7 @@ cargo run -p llmem-cli # run the CLI
 ## Commit Guidelines
 
 Conventional commits via `sr commit`:
-- `feat(core):` / `feat(cli):` / `feat(server):` — scoped by crate
+- `feat(core):` / `feat(cli):` / `feat(server):` / `feat(quant):` — scoped by crate
 - `docs(spec):` — specification changes
 - `docs(readme):` — documentation changes
```
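The new `llmem-quant` crate is described as 1-4 bit vector quantization. The 1-bit extreme of that range can be illustrated with a minimal sign-quantization sketch; this is a hypothetical example of the technique, not the crate's actual API (`quantize_1bit` and `hamming` are illustrative names):

```rust
// Hypothetical sketch of 1-bit (sign) vector quantization, the extreme
// end of the 1-4 bit range mentioned above. Not llmem-quant's real API.

/// Quantize each f32 component to a single sign bit, packed into bytes.
fn quantize_1bit(v: &[f32]) -> Vec<u8> {
    let mut packed = vec![0u8; (v.len() + 7) / 8];
    for (i, &x) in v.iter().enumerate() {
        if x >= 0.0 {
            packed[i / 8] |= 1 << (i % 8);
        }
    }
    packed
}

/// Hamming distance between packed sign vectors approximates the
/// angular distance between the original embeddings.
fn hamming(a: &[u8], b: &[u8]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let a = quantize_1bit(&[0.3, -1.2, 0.8, 0.1]);
    let b = quantize_1bit(&[0.4, -0.9, -0.5, 0.2]);
    println!("{}", hamming(&a, &b)); // prints 1: only the third sign differs
}
```

At one bit per dimension this shrinks a 384-dimension f32 embedding from 1536 bytes to 48, at the cost of ranking precision, which is why real schemes usually rescore top candidates with full-precision vectors.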

llms.txt

Lines changed: 6 additions & 4 deletions

```diff
@@ -1,18 +1,20 @@
 # llmem
 
-> An open standard for tool-agnostic, project-scoped AI agent memory. Defines a two-level memory convention — project (`~/.llmem/{project}/`) and global (`~/.llmem/global/`) — using typed markdown files with YAML frontmatter. Includes a Rust CLI for managing memory and a RAG HTTP server for semantic search. Works with Claude Code, Codex, Gemini, Copilot, Cursor, or any AI tool.
+> An open standard for tool-agnostic AI agent memory with a cognitive CLI. Defines a two-level memory convention — project (`~/.llmem/{project}/`) and global (`~/.llmem/global/`) — using typed markdown files with YAML frontmatter. CLI commands mirror memory processes: memorize, remember, note, learn, consolidate, reflect, forget. Features working memory inbox, attention scoring, Hebbian reinforcement, consolidation cycles, semantic search (HNSW/IVF-Flat), and TurboQuant vector quantization. Includes a RAG HTTP server. Works with Claude Code, Codex, Gemini, Copilot, Cursor, or any AI tool.
 
 ## Docs
 
-- [README](https://github.com/urmzd/llmem/blob/main/README.md): Quick start, CLI usage, and memory types
+- [README](https://github.com/urmzd/llmem/blob/main/README.md): Quick start, cognitive CLI usage, and memory types
 - [AGENTS.md](https://github.com/urmzd/llmem/blob/main/AGENTS.md): Architecture and development guide
 - [Specification](https://github.com/urmzd/llmem/blob/main/SPECIFICATION.md): Full specification (levels, file format, dynamic loading, integration)
 
 ## API / Key Files
 
 - [SPECIFICATION.md](https://github.com/urmzd/llmem/blob/main/SPECIFICATION.md): The formal llmem specification
-- [crates/llmem-core/](https://github.com/urmzd/llmem/blob/main/crates/llmem-core/): Core types and operations
-- [crates/llmem-cli/](https://github.com/urmzd/llmem/blob/main/crates/llmem-cli/): CLI binary
+- [crates/llmem-core/](https://github.com/urmzd/llmem/blob/main/crates/llmem-core/): Core types, inbox, config, embeddings
+- [crates/llmem-cli/](https://github.com/urmzd/llmem/blob/main/crates/llmem-cli/): Cognitive CLI binary
+- [crates/llmem-index/](https://github.com/urmzd/llmem/blob/main/crates/llmem-index/): ANN indices and tree-sitter code indexing
+- [crates/llmem-quant/](https://github.com/urmzd/llmem/blob/main/crates/llmem-quant/): TurboQuant vector quantization
 - [crates/llmem-server/](https://github.com/urmzd/llmem/blob/main/crates/llmem-server/): RAG HTTP server
 - [skills/llmem/SKILL.md](https://github.com/urmzd/llmem/blob/main/skills/llmem/SKILL.md): Portable agent skill
```
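The summary above names Hebbian reinforcement and consolidation among the memory processes. The core idea, strengthen on recall, decay when unused, can be sketched in a few lines; this is a hypothetical illustration of the principle, not llmem's actual implementation (function names and constants are assumptions):

```rust
// Hypothetical sketch of Hebbian-style reinforcement with decay.
// Not llmem's real code; names and constants are illustrative.

/// Reinforce a memory's strength toward 1.0 each time it is recalled.
fn reinforce(strength: f64, learning_rate: f64) -> f64 {
    strength + learning_rate * (1.0 - strength)
}

/// Exponentially decay the strength of memories that go unrecalled.
fn decay(strength: f64, half_life_days: f64, elapsed_days: f64) -> f64 {
    strength * 0.5f64.powf(elapsed_days / half_life_days)
}

fn main() {
    let mut s = 0.2;
    s = reinforce(s, 0.5);    // recalled once: 0.2 + 0.5 * 0.8 = 0.6
    s = decay(s, 30.0, 30.0); // one half-life later: 0.3
    println!("{s}");
}
```

A consolidation cycle could then use such a score as a threshold: inbox items above it get promoted to long-term files, items far below it become candidates for `forget`.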

skills/llmem/SKILL.md

Lines changed: 11 additions & 4 deletions

````diff
@@ -117,10 +117,17 @@ Under 200 lines per level. Loaded at every conversation start.
 ## Optional CLI
 
+The CLI uses cognitive metaphors for its commands:
+
 ```bash
 cargo install llmem-cli
-llmem init # project memory
-llmem init --global # global memory
-llmem add feedback <name> -d "<description>"
-llmem search "<query>"
+llmem init                                             # project memory
+llmem init --global                                    # global memory
+llmem memorize "prefer Rust for CLI tools" -t feedback # encode to long-term memory
+llmem note "check async runtime choices"               # quick capture to inbox
+llmem learn .                                          # ingest codebase into inbox
+llmem consolidate                                      # promote inbox → long-term memory
+llmem remember "rust"                                  # semantic + text search
+llmem reflect --all                                    # review memories + inbox
+llmem forget feedback_old-approach.md                  # deliberately forget
 ```
````
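For context, the files these commands manage are the typed markdown files with YAML frontmatter that the specification describes. A minimal sketch of what a `memorize`d feedback entry might look like (the frontmatter field names here are illustrative assumptions, not the spec's required schema):

```markdown
---
type: feedback
created: 2025-01-15
---

# Prefer Rust for CLI tools

New command-line tools in this project should be written in Rust rather
than Python, so they ship as a single static binary.
```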
