|
# Social Media Posts — AgentCortex / agent-memory

Copy-paste these to post yourself.

---

## Reddit — r/LocalLLaMA

**Title:** I built a framework-agnostic persistent memory library for AI agents — add memory to any agent in 3 lines (open source)

**Post body:**
````
Every agent I built had the same problem: it forgets everything the moment the session ends.

MemGPT solves this, but it requires rewriting your entire agent around its framework. I wanted something I could drop into an existing LangChain/CrewAI/AutoGen agent without changing my architecture.

So I built **agentcortex** — a composable Python library for persistent agent memory.

**GitHub:** https://github.com/pinakimishra95/agent-memory
**PyPI:** pip install agentcortex

---

**How it works — three-tier memory:**

```
Working Memory  ← current session (RAM, auto-compresses when the window fills)
Episodic Memory ← recent history (SQLite, zero dependencies)
Semantic Memory ← long-term knowledge (ChromaDB/Qdrant, semantic search)
```

**3-line quickstart:**
```python
from agentmemory import MemoryStore

memory = MemoryStore(agent_id="my-agent")
memory.remember("User's name is Alice, building a fraud detection system")
context = memory.get_context("What do we know about the user?")
# → "[Memory Context]\n- User's name is Alice, building a fraud detection system"
```

Run the demo twice to see memories survive a Python restart:
```bash
git clone https://github.com/pinakimishra95/agent-memory
cd agent-memory && pip install -e .
python examples/demo.py  # stores memories
python examples/demo.py  # recalls them from disk
```

**Works with:**
- LangChain/LangGraph
- CrewAI
- AutoGen
- Raw Anthropic/OpenAI SDK

**Key features:**
- Auto-compresses old messages when the context window fills (uses a cheap model, e.g. claude-haiku)
- Semantic deduplication before storing (won't save near-identical facts twice)
- Importance scoring — critical memories survive longer under eviction pressure
- Local-first: runs entirely on-device, no cloud required

Happy to answer questions — still early (v0.1.0), but the core functionality is solid with 26 passing tests.
````
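
The semantic-deduplication feature mentioned in the post can be sketched in a few lines. This is an illustration of the general technique only, not agentcortex's actual implementation: `embed` here is a toy bag-of-words stand-in for a real embedding model, and the 0.9 threshold is an arbitrary choice for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words counts (stand-in for a model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remember(store: list, fact: str, threshold: float = 0.9) -> bool:
    # Skip storing when a near-duplicate is already present.
    vec = embed(fact)
    if any(cosine(vec, embed(old)) >= threshold for old in store):
        return False
    store.append(fact)
    return True

memories: list = []
remember(memories, "User's name is Alice")
remember(memories, "user's name is alice")            # near-identical: skipped
remember(memories, "Alice works on fraud detection")  # distinct: stored
print(len(memories))  # 2
```

With a real embedding model the same threshold logic catches paraphrases, not just case differences; the structure of the check is identical.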

---

## Reddit — r/MachineLearning or r/artificial

**Title:** AgentCortex: Open-source three-tier memory architecture for AI agents (working/episodic/semantic)

**Post body:**
```
Released an open-source Python library for persistent AI agent memory, inspired by cognitive memory research.

Most agent frameworks treat memory as an afterthought. AgentCortex implements a proper three-tier architecture:

**Working memory** — the active context window. Tracked per session and auto-compressed when nearing the token limit (old messages are summarized into episodic memory automatically).

**Episodic memory** — recent interaction history stored in SQLite. Searchable by recency and keyword. Survives Python restarts. No external dependencies.

**Semantic memory** — long-term factual knowledge stored as vector embeddings (ChromaDB locally, or Qdrant at production scale). Retrieved by meaning, not keyword.

AgentCortex is a composable library, not a framework — drop it into any existing LangChain, CrewAI, AutoGen, or raw-SDK agent.

GitHub: https://github.com/pinakimishra95/agent-memory
pip install agentcortex

Would appreciate feedback from anyone building production agent systems.
```
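
The episodic tier described in the post (SQLite, recency and keyword search, no external deps) can be sketched with the standard library alone. This illustrates the idea and assumes nothing about agentcortex's real schema; the table and function names are made up for the example, and `:memory:` would be a file path in real use.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path here survives restarts
conn.execute("""
    CREATE TABLE IF NOT EXISTS episodes (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts REAL DEFAULT (julianday('now')),
        content TEXT NOT NULL
    )
""")

def record(content: str) -> None:
    conn.execute("INSERT INTO episodes (content) VALUES (?)", (content,))
    conn.commit()

def recent(limit: int = 5) -> list:
    # Newest first; id breaks ties when timestamps are equal.
    rows = conn.execute(
        "SELECT content FROM episodes ORDER BY ts DESC, id DESC LIMIT ?",
        (limit,))
    return [r[0] for r in rows]

def keyword(term: str) -> list:
    rows = conn.execute(
        "SELECT content FROM episodes WHERE content LIKE ? ORDER BY id DESC",
        (f"%{term}%",))
    return [r[0] for r in rows]

record("Discussed fraud detection thresholds with Alice")
record("Alice asked about batch scoring latency")
print(keyword("fraud"))  # ['Discussed fraud detection thresholds with Alice']
print(recent(1))         # ['Alice asked about batch scoring latency']
```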

---

## Hacker News — "Show HN"

**Title:** Show HN: AgentCortex – Persistent memory for AI agents in 3 lines (open source)

**Post body:**
```
Your AI agent forgets everything when the session ends. AgentCortex fixes that.

It's a Python library — not a framework — so you can drop it into any existing agent without rewriting anything. Works with LangChain, CrewAI, AutoGen, and raw Anthropic/OpenAI.

    from agentmemory import MemoryStore
    memory = MemoryStore(agent_id="my-agent")
    memory.remember("User's name is Alice, building a fraud detection system")
    context = memory.get_context("What do we know about the user?")

Three-tier architecture: working (in-context) → episodic (SQLite, no deps) → semantic (ChromaDB/Qdrant). Auto-compresses when the context window fills, so your agent never runs out of context.

Demo: clone the repo and run `python examples/demo.py` twice — the second run recalls everything from the first, across a full Python restart.

GitHub: https://github.com/pinakimishra95/agent-memory
PyPI: pip install agentcortex

Happy to answer questions about the architecture choices (why SQLite for episodic, why three tiers instead of two, how the compression prompts work).
```
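
The auto-compression step mentioned above can be sketched as a single summarize-and-evict pass. This is an illustration of the general idea, not the library's code: `summarize` is a stub where a cheap LLM call would go, whitespace splitting is a crude stand-in for a real tokenizer, and the keep-the-newest-half policy is an arbitrary choice for the example.

```python
def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def summarize(messages: list) -> str:
    # Stub: in practice this would call an inexpensive model.
    return f"[summary of {len(messages)} older messages]"

def compress_if_needed(messages: list, budget: int) -> list:
    # One pass: if over the token budget, keep the newest half verbatim
    # and fold everything older into a single summary entry.
    if sum(count_tokens(m) for m in messages) <= budget:
        return messages
    keep = max(1, len(messages) // 2)
    older, newest = messages[:-keep], messages[-keep:]
    return [summarize(older)] + newest

history = [f"turn {i}: " + "word " * 20 for i in range(8)]  # 22 tokens each
compressed = compress_if_needed(history, budget=100)
print(len(compressed))  # 5: one summary plus the 4 newest turns
print(compressed[0])    # [summary of 4 older messages]
```

A production version would loop until under budget and persist the evicted turns (in this architecture, into the episodic tier) rather than discard them.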

---

## X / Twitter (Thread)

**Tweet 1 (hook):**
```
Your AI agent forgets everything the moment the session ends.

I built agentcortex — add persistent memory to any agent in 3 lines.

Works with LangChain, CrewAI, AutoGen, raw Anthropic/OpenAI.

GitHub: https://github.com/pinakimishra95/agent-memory

🧵 How it works:
```

**Tweet 2 (architecture):**
```
Three-tier memory architecture (inspired by how human memory works):

🟡 Working Memory → current conversation (in-context, auto-compresses)
🟠 Episodic Memory → recent sessions (SQLite, survives restarts)
🔵 Semantic Memory → long-term facts (ChromaDB, semantic search)

Old messages auto-summarize into episodic. Important facts extract into semantic.
```

**Tweet 3 (code demo):**
```
The whole API in 4 lines:

from agentmemory import MemoryStore

memory = MemoryStore(agent_id="my-agent")
memory.remember("Alice is building a fraud detection system")
context = memory.get_context("What do we know about the user?")
# → "[Memory Context]\n- Alice is building a fraud detection system"

That context goes straight into your system prompt.
```
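
The "goes straight into your system prompt" step from Tweet 3 is plain string assembly. A minimal sketch, assuming the common OpenAI/Anthropic-style messages shape; no agentcortex API is used here, and the context string just mirrors the format shown in the tweet.

```python
def build_messages(system: str, memory_context: str, user_msg: str) -> list:
    # Prepend the retrieved memory block to the system message before
    # handing the messages list to whatever LLM client you already use.
    return [
        {"role": "system", "content": f"{system}\n\n{memory_context}"},
        {"role": "user", "content": user_msg},
    ]

context = "[Memory Context]\n- Alice is building a fraud detection system"
messages = build_messages(
    system="You are a helpful assistant.",
    memory_context=context,
    user_msg="What should I test first?",
)
print(messages[0]["content"])
```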

**Tweet 4 (differentiators):**
```
Why not MemGPT?

MemGPT: replace your entire agent with their framework (35K stars, but not a library)
agentcortex: drop into your existing agent in 3 lines ✅

✅ LangChain adapter
✅ CrewAI adapter
✅ AutoGen adapter
✅ Raw Anthropic/OpenAI adapter
✅ Local-first (SQLite + local embeddings, no cloud)
✅ pip install agentcortex
```

**Tweet 5 (CTA):**
```
Open source, MIT license, 26 passing tests.

pip install agentcortex

⭐ Star it if you're building agents and tired of them forgetting everything:
https://github.com/pinakimishra95/agent-memory
```

---

## LinkedIn

```
I just open-sourced agentcortex — a Python library that gives any AI agent persistent memory in 3 lines of code.

The problem I kept hitting: every AI agent I built would forget everything the moment the session ended. Existing solutions like MemGPT required rewriting my entire agent around their framework.

So I built something composable instead.

agentcortex uses a three-tier memory architecture:
• Working Memory — the active context window, auto-compresses when full
• Episodic Memory — recent history in SQLite (no external deps, survives restarts)
• Semantic Memory — long-term facts in ChromaDB with vector similarity search

It works with whatever framework you're already using: LangChain, CrewAI, AutoGen, or raw Anthropic/OpenAI.

Quick example:
from agentmemory import MemoryStore
memory = MemoryStore(agent_id="my-agent")
memory.remember("User's name is Alice, building fraud detection")
context = memory.get_context("What do we know about the user?")

GitHub: https://github.com/pinakimishra95/agent-memory
PyPI: pip install agentcortex

Would love feedback from anyone building production agents. What memory patterns have you found most useful?

#AI #MachineLearning #OpenSource #Python #LLM #AgentAI #LangChain
```

---

## GIF Recording Instructions

To create the terminal demo GIF for the README:

### Option A: vhs (recommended — clean output)
```bash
brew install vhs
```

Create `demo.tape`:
```
Output demo.gif
Set FontSize 14
Set Width 800
Set Height 500
Set Theme "Dracula"

Type "python examples/demo.py"
Enter
Sleep 3s

Type "python examples/demo.py"
Enter
Sleep 3s
```

Run: `vhs demo.tape`

### Option B: asciinema (browser-embeddable)
```bash
brew install asciinema
asciinema rec demo.cast
# run: python examples/demo.py && python examples/demo.py
# press Ctrl+D to stop recording
asciinema upload demo.cast
```

Add the GIF or asciinema link to the README directly below the "The Solution" section for maximum impact.