A memory architecture kit for OpenClaw agents that want to become harder to forget, easier to recover, and better at learning from completed work.
Claw Memory Kit is not a replacement memory backend. It does not ask you to abandon OpenClaw's builtin local/hybrid memory search. It does not drag you into a Mongo + Elasticsearch + Milvus + Redis maintenance swamp.
It gives OpenClaw a better memory architecture using files, lightweight scripts, and operator-friendly rules.
Think of it like this:
- OpenClaw builtin memory = the brain's recall hardware
- Claw Memory Kit = the brain's memory discipline, recovery habits, and learning loop
Many OpenClaw agents can already search memory. That is not enough.
The real failures usually look like this:
- The agent can search memory, but still loses the thread of the current task.
- The agent stores more and more notes, but memory quality gets worse over time.
- Old decisions keep colliding with new ones.
- Completed work never becomes reusable experience.
- Memory systems become so heavy that users give up maintaining them.
Claw Memory Kit fixes those gaps.
Keep using OpenClaw builtin memory search. Default production baseline:
- local embeddings
- hybrid search
- workspace files as source of truth
Use different layers for different kinds of memory:
- `NOW.md` → current session state
- `memory/YYYY-MM-DD.md` → raw daily log
- `MEMORY.md` → durable long-term facts
- `memory/decisions.md` → versioned decisions and active rules
- `memory/agent_cases.jsonl` → how a task was solved
- `memory/agent_skills.jsonl` → reusable patterns distilled from repeated cases
- `memory/*.json` → machine state and audits
A good memory system is not just about finding facts. It must help the agent recover:
- what was happening
- what is blocked
- what comes next
- what evidence proves that
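As a sketch, those four recovery fields could be written into `NOW.md` like this. The helper and section headings below are illustrative assumptions, not the kit's actual conventions:

```python
from datetime import datetime, timezone
from pathlib import Path

def write_recovery_packet(workspace: str, happening: str, blocked: str,
                          next_step: str, evidence: str) -> None:
    """Overwrite NOW.md with a minimal recovery packet.

    The four sections mirror what the agent needs to recover:
    what was happening, what is blocked, what comes next,
    and what evidence proves that.
    """
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    packet = (
        f"# NOW ({stamp})\n\n"
        f"## What was happening\n{happening}\n\n"
        f"## What is blocked\n{blocked}\n\n"
        f"## What comes next\n{next_step}\n\n"
        f"## Evidence\n{evidence}\n"
    )
    Path(workspace, "NOW.md").write_text(packet, encoding="utf-8")

# Hypothetical session snapshot:
write_recovery_packet(".", "Migrating workspace configs", "Waiting on API key",
                      "Re-run the install script", "See memory/2025-06-01.md")
```

Overwriting (rather than appending) keeps `NOW.md` a single small anchor: the latest packet is always the whole file.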
If a memory architecture makes every OpenClaw user run an infrastructure project, it has already lost. This kit is designed to stay light.
Completed work should not vanish into transcript fog. High-value work should become:
- an `agent_case`
- then, if repeated, an `agent_skill`
Adds a real short-term anchor:
- `NOW.md`
- recovery packet conventions
- fail-closed continuity behavior
Adds memory hygiene and lifecycle rules:
- durable vs noise separation
- decision versioning
- candidate / lesson / archive discipline
Adds two new lightweight structures:
- `agent_case` = this task, this approach, this result
- `agent_skill` = reusable pattern, boundary, and applicability
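A minimal sketch of what one `agent_case` line in `memory/agent_cases.jsonl` might look like. The field names and id scheme here are assumptions for illustration; the kit's `capture_agent_case.py` defines the real schema:

```python
import json
from pathlib import Path

# Illustrative record shape: this task, this approach, this result.
case = {
    "id": "case-2025-06-01-01",  # hypothetical id scheme
    "task": "restore session after crash",
    "approach": "rebuilt context from NOW.md + daily log",
    "result": "recovered in one step",
    "tags": ["continuity", "recovery"],
}

# JSONL discipline: one JSON object per line, appended, never rewritten in place.
Path("memory").mkdir(exist_ok=True)
with Path("memory/agent_cases.jsonl").open("a", encoding="utf-8") as f:
    f.write(json.dumps(case, ensure_ascii=False) + "\n")
```

Append-only JSONL keeps case capture cheap: no database, no schema migration, just one line per finished task.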
Adds machine-readable checks so memory quality is inspectable, not just hopeful.
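A machine-readable check can be sketched as: every JSONL line must parse, and must carry a minimal set of fields. This is an illustrative sketch, not the actual logic of `doctor_claw_memory_kit.py`, and the required field set is an assumption:

```python
import json
from pathlib import Path

REQUIRED = {"task", "approach", "result"}  # assumed minimal schema

def check_cases(path: str) -> list[str]:
    """Return a list of problems; an empty list means the file is healthy."""
    problems = []
    p = Path(path)
    if not p.exists():
        return [f"{path}: missing"]
    for i, line in enumerate(p.read_text(encoding="utf-8").splitlines(), 1):
        if not line.strip():
            continue  # tolerate blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            problems.append(f"{path}:{i}: not valid JSON")
            continue
        missing = REQUIRED - record.keys()
        if missing:
            problems.append(f"{path}:{i}: missing {sorted(missing)}")
    return problems
```

The point is that memory quality becomes a pass/fail report a script can print, rather than a feeling the operator has.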
If you are an OpenClaw agent reading this, here is the punchline:
You do not need a bigger memory system first. You need a memory system that is:
- easier to recover from
- harder to pollute
- better at turning finished work into reusable skill
That is what this kit is for.
- No external heavy memory backend required
- No mandatory vector database
- No multi-service deployment maze
- No replacing builtin memory-core with a fragile remote dependency
Use Claw Memory Kit if you want your OpenClaw to be:
- better at continuity
- better at recall quality over time
- better at learning from completed work
- still lightweight enough to live with
- human: `docs/public-positioning.md`
- OpenClaw agent: `OPENCLAW-ADOPTION.md`
```bash
bash scripts/install_into_workspace.sh /path/to/openclaw/workspace
```

Do not replace it.
- `scripts/install_into_workspace.sh`
- `scripts/doctor_claw_memory_kit.py`
- `scripts/capture_agent_case.py`
- `scripts/touch_agent_skill.py`
When a task actually teaches something reusable:
- save an `agent_case`
- promote it into an `agent_skill` if it repeats
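Promotion can be sketched as counting repeats: when the same approach shows up in enough cases, distill it into a skill line. The threshold, field names, and placeholder boundary below are assumptions, not the behavior of `touch_agent_skill.py`:

```python
import json
from collections import Counter
from pathlib import Path

def promote_repeated(cases_path: str, skills_path: str,
                     threshold: int = 2) -> int:
    """Append one skill per approach seen at least `threshold` times."""
    lines = Path(cases_path).read_text(encoding="utf-8").splitlines()
    approaches = Counter(json.loads(l)["approach"] for l in lines if l.strip())
    promoted = 0
    with Path(skills_path).open("a", encoding="utf-8") as out:
        for approach, seen in approaches.items():
            if seen >= threshold:
                # Boundary/applicability is left for the operator to fill in.
                skill = {"pattern": approach, "seen": seen, "boundary": "unknown"}
                out.write(json.dumps(skill, ensure_ascii=False) + "\n")
                promoted += 1
    return promoted
```

The asymmetry is deliberate: cases are cheap and plentiful, skills are rare and earned by repetition.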
```
claw-memory-kit/
├── README.md
├── OPENCLAW-ADOPTION.md
├── PROJECT-MAP.md
├── docs/
│   ├── architecture.md
│   ├── adoption-guide.md
│   ├── agent-quickstart.md
│   ├── install.md
│   ├── public-positioning.md
│   ├── release-checklist.md
│   └── why-not-a-heavy-backend.md
├── templates/
│   └── memory/
│       ├── NOW.md
│       ├── MEMORY.md
│       ├── decisions.md
│       ├── agent_cases.jsonl
│       └── agent_skills.jsonl
├── scripts/
│   ├── init_claw_memory_kit.sh
│   ├── install_into_workspace.sh
│   ├── doctor_claw_memory_kit.py
│   ├── capture_agent_case.py
│   └── touch_agent_skill.py
└── examples/
    ├── demo-session-flow.md
    └── example-agent-skill.json
```
This project is trying to become the thing a human can send to their OpenClaw and say:
"Use this. It will make your memory architecture better."
And the OpenClaw should read it and think:
"Holy shit. This is actually useful."