Commit 9e873ba

pinexai and claude committed

feat: agentmemory v0.1.2 — 5 new features

- AutoGen adapter: AutoGenMemoryHook + get_autogen_memory_context
- Qdrant backend: MemoryStore now accepts qdrant_url param + example
- Export/import JSON: MemoryStore.export_json() / import_json()
- CLI: agentmemory inspect / export / import subcommands
- AsyncMemoryStore: full async/await API via ThreadPoolExecutor
- 67 new tests (123 total, all passing)
- README: 5 new integration sections + roadmap updated

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

1 parent 5f00f9b

19 files changed: 1677 additions & 24 deletions

README.md

Lines changed: 154 additions & 5 deletions
@@ -294,6 +294,155 @@ Works with Claude Code, Cursor, and any MCP-compatible AI coding assistant.

---
## AutoGen Integration

Give AutoGen agents persistent memory that survives across sessions.

```python
from agentmemory import MemoryStore
from agentmemory.adapters.autogen import AutoGenMemoryHook, get_autogen_memory_context
import autogen

memory = MemoryStore(agent_id="my-autogen-agent")

# Inject past context into the agent's system_message
context = get_autogen_memory_context(
    memory,
    role="Research Assistant",
    goal="literature review on LLMs",
)

assistant = autogen.AssistantAgent(
    name="researcher",
    system_message=context + "\nYou are a helpful research assistant.",
    llm_config={"model": "gpt-4o-mini"},
)

# Hook captures every reply and stores it in memory
hook = AutoGenMemoryHook(memory, importance=6)
assistant.register_reply(
    trigger=autogen.ConversableAgent,
    reply_func=hook.on_agent_reply,
    position=0,
)
```

Install: `pip install "agentcortex[autogen]"`

---
## Qdrant Production Backend

Scale to millions of vectors with a dedicated vector database.

```python
from agentmemory import MemoryStore

# docker run -p 6333:6333 qdrant/qdrant
memory = MemoryStore(
    agent_id="my-agent",
    semantic_backend="qdrant",
    qdrant_url="http://localhost:6333",  # or Qdrant Cloud URL
    embedding_provider="sentence-transformers",
)

memory.remember("Production architecture uses microservices", importance=8)
results = memory.recall("architecture")
```

Install: `pip install "agentcortex[qdrant]"`

---
## Memory Export / Import (JSON)

Back up and restore episodic memories across machines or agent instances.

```python
from agentmemory import MemoryStore

memory = MemoryStore(agent_id="my-agent")
memory.remember("PostgreSQL is our main database", importance=8)

# Export to JSON file
memory.export_json("backup.json")

# Restore on another machine / new agent
new_memory = MemoryStore(agent_id="new-agent")
count = new_memory.import_json("backup.json")
print(f"Imported {count} memories")

# Merge instead of replacing
new_memory.import_json("backup.json", merge=True)

# Or work with the dict directly
data = memory.export_json()  # no path → returns dict only
new_memory.import_json(data)
```

---
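For readers unsure how `merge` changes the outcome, here is a toy sketch of replace-vs-merge semantics. `ToyStore` and its JSON shape are hypothetical illustrations for this README, not agentmemory's actual schema or implementation:

```python
import json

# Hypothetical in-memory stand-in illustrating export/import semantics;
# agentmemory's real JSON schema and store internals may differ.
class ToyStore:
    def __init__(self):
        self.memories = []  # list of {"content": str, "importance": int}

    def remember(self, content, importance=5):
        self.memories.append({"content": content, "importance": importance})

    def export_json(self, path=None):
        data = {"memories": self.memories}
        if path:  # optionally write to disk
            with open(path, "w") as f:
                json.dump(data, f)
        return data

    def import_json(self, source, merge=False):
        if isinstance(source, str):  # a file path
            with open(source) as f:
                data = json.load(f)
        else:                        # a dict, as returned by export_json()
            data = source
        if not merge:
            self.memories = []       # replace mode: drop existing entries
        self.memories.extend(data["memories"])
        return len(data["memories"])  # number of imported entries

a = ToyStore()
a.remember("PostgreSQL is our main database", importance=8)

b = ToyStore()
b.remember("pre-existing memory")
b.import_json(a.export_json())              # replace mode
print(len(b.memories))                      # 1

b.import_json(a.export_json(), merge=True)  # merge: adds alongside
print(len(b.memories))                      # 2
```

The key design choice is that replace mode clears the target store before loading, while merge appends, so repeated merges can duplicate entries unless the store deduplicates.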
## Memory CLI

Inspect and manage memories from the command line.

```bash
# Inspect stored memories
agentmemory inspect --agent-id my-project

# agentmemory — agent: my-project
# ════════════════════════════════════════
# EPISODIC MEMORY (3 entries)
# ────────────────────────────────────────
# #  IMP  Created              Content
# 1  9    2026-02-28 14:23:01  We use PostgreSQL for relational...
# 2  7    2026-02-27 09:14:55  payment/process_transaction.py h...
# 3  5    2026-02-26 18:30:12  User prefers functional style ove...

# Export memories to JSON
agentmemory export --agent-id my-project --output memories.json

# Import memories (restores; use --merge to add alongside existing)
agentmemory import memories.json --agent-id new-project --merge
```

Install: `pip install agentcortex` (the CLI is always included)

---
## Async Support

Use agentmemory in FastAPI, aiohttp, or any async Python application.

```python
import asyncio
from agentmemory import AsyncMemoryStore

async def main():
    # Identical API to MemoryStore — just add await
    memory = AsyncMemoryStore(agent_id="my-async-agent")

    await memory.remember("User prefers Python over JavaScript", importance=7)
    results = await memory.recall("tech stack")
    context = await memory.get_context("What do we know?")

    # Export / import work the same way
    data = await memory.export_json()
    await memory.import_json(data)

    memory.close()

# Or use as an async context manager
async def with_context_manager():
    async with AsyncMemoryStore(agent_id="my-agent") as memory:
        await memory.remember("Context manager closes executor automatically")
        ctx = await memory.get_context()
        print(ctx)

asyncio.run(main())
```

Install: `pip install agentcortex` (`AsyncMemoryStore` is always included)

---
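The commit message notes that `AsyncMemoryStore` provides its async API via a `ThreadPoolExecutor`. As a rough sketch of that wrapping pattern (everything here, `SyncStore` and `AsyncStoreSketch` included, is a hypothetical stand-in rather than agentmemory's actual implementation):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from functools import partial

class SyncStore:
    """Hypothetical blocking store standing in for the sync MemoryStore."""
    def __init__(self):
        self.items = []

    def remember(self, content, importance=5):
        self.items.append((content, importance))

    def recall(self, query):
        return [c for c, _ in self.items if query.lower() in c.lower()]

class AsyncStoreSketch:
    """Runs each blocking call in an executor so it can be awaited."""
    def __init__(self):
        self._sync = SyncStore()
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def _run(self, fn, *args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, partial(fn, *args, **kwargs))

    async def remember(self, content, importance=5):
        return await self._run(self._sync.remember, content, importance=importance)

    async def recall(self, query):
        return await self._run(self._sync.recall, query)

    def close(self):
        # Shut down the executor; the real store presumably does the same
        self._executor.shutdown(wait=True)

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self.close()

async def demo():
    async with AsyncStoreSketch() as memory:
        await memory.remember("User prefers Python over JavaScript", importance=7)
        return await memory.recall("python")

print(asyncio.run(demo()))
```

A single-worker executor keeps writes serialized, which matters if the underlying store (e.g. SQLite) is not thread-safe for concurrent access.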
## Comparison

| | MemGPT | LangChain Memory | **AgentMemory** |

@@ -311,11 +460,11 @@ Works with Claude Code, Cursor, and any MCP-compatible AI coding assistant.

## Roadmap

```diff
-- [ ] AutoGen adapter
-- [ ] Qdrant production backend examples
-- [ ] Memory export/import (JSON)
-- [ ] Memory visualization CLI (`agentmemory inspect`)
-- [ ] Async support (`AsyncMemoryStore`)
+- [x] AutoGen adapter (`pip install "agentcortex[autogen]"`)
+- [x] Qdrant production backend (`pip install "agentcortex[qdrant]"`)
+- [x] Memory export/import (JSON): `memory.export_json()` / `memory.import_json()`
+- [x] Memory visualization CLI: `agentmemory inspect` / `export` / `import`
+- [x] Async support: `AsyncMemoryStore` with full `await` API
 - [x] MCP server integration (`pip install "agentcortex[mcp]"`)
```

---

agentmemory/__init__.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -12,16 +12,18 @@
 Works with LangChain, CrewAI, AutoGen, and raw Anthropic/OpenAI SDKs.
 """
 
+from .async_store import AsyncMemoryStore
 from .compression import ContextCompressor
 from .dedup import MemoryDeduplicator
 from .store import MemoryStore
 from .tiers.episodic import EpisodicMemory
 from .tiers.semantic import SemanticMemory
 from .tiers.working import WorkingMemory
 
-__version__ = "0.1.1"
+__version__ = "0.1.2"
 __all__ = [
     "MemoryStore",
+    "AsyncMemoryStore",
     "EpisodicMemory",
     "SemanticMemory",
     "WorkingMemory",
```

agentmemory/adapters/__init__.py

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,4 +1,5 @@
 from .anthropic import MemoryAnthropic
+from .autogen import AutoGenMemoryHook, get_autogen_memory_context
 from .crewai import CrewMemoryCallback, get_memory_context_for_agent
 from .langchain import MemoryHistory, inject_memory_context
 from .openai import MemoryOpenAI
@@ -10,4 +11,6 @@
     "MemoryAnthropic",
     "CrewMemoryCallback",
     "get_memory_context_for_agent",
+    "AutoGenMemoryHook",
+    "get_autogen_memory_context",
 ]
```

agentmemory/adapters/autogen.py

Lines changed: 145 additions & 0 deletions
@@ -0,0 +1,145 @@

```python
"""AutoGen adapter for agentmemory.

Provides two things:
1. AutoGenMemoryHook — a register_reply hook that captures agent messages
   and stores them in persistent memory.
2. get_autogen_memory_context — returns a memory context string ready to
   inject into an AutoGen agent's system_message.

No hard dependency on pyautogen — the module imports cleanly without it.
pyautogen is only required if you actually run AutoGen agents.

Usage:
    from agentmemory import MemoryStore
    from agentmemory.adapters.autogen import AutoGenMemoryHook, get_autogen_memory_context
    import autogen

    memory = MemoryStore(agent_id="my-agent")

    # Inject past context into agent system_message
    context = get_autogen_memory_context(memory, role="Research Assistant")

    assistant = autogen.AssistantAgent(
        name="assistant",
        system_message=context + "\\nYou are a helpful research assistant.",
    )

    # Register the hook so replies are captured to memory
    hook = AutoGenMemoryHook(memory)
    assistant.register_reply(
        trigger=autogen.ConversableAgent,
        reply_func=hook.on_agent_reply,
        position=0,
    )
"""
from __future__ import annotations

from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    from agentmemory.store import MemoryStore


class AutoGenMemoryHook:
    """
    AutoGen register_reply hook that captures agent replies and stores them
    in agentmemory for use across sessions.

    This hook observes replies without intercepting them — it always returns
    ``(False, None)`` so the normal reply chain continues unmodified.

    Usage:
        hook = AutoGenMemoryHook(memory_store, importance=6)
        agent.register_reply(
            trigger=autogen.ConversableAgent,
            reply_func=hook.on_agent_reply,
            position=0,
        )
    """

    def __init__(self, memory_store: MemoryStore, importance: int = 6) -> None:
        """
        Args:
            memory_store: The MemoryStore instance to write to.
            importance: Importance level 1-10 for stored memories (default 6).
                Use higher values for critical reasoning, lower for
                routine conversational turns.
        """
        self._store = memory_store
        self._importance = importance

    def on_agent_reply(
        self,
        recipient: Any,
        messages: list[dict] | None,
        sender: Any,
        config: Any,
    ) -> tuple[bool, None]:
        """
        AutoGen register_reply callback signature.

        Reads the last message in the chain and stores its content to memory.
        Returns ``(False, None)`` — does NOT intercept or modify the reply.

        Args:
            recipient: The agent receiving the message (unused).
            messages: Current message list.
            sender: The agent that sent the message (unused).
            config: AutoGen config dict (unused).
        """
        if messages:
            last = messages[-1]
            content = last.get("content", "")
            if content and isinstance(content, str) and content.strip():
                # Truncate very long messages to avoid storing noise
                self._store.remember(content[:1000], importance=self._importance)
        return False, None  # observe only — do not intercept

    def on_message_received(self, message: str | dict) -> None:
        """
        Convenience method for manually storing a received message.

        Args:
            message: Raw message string or dict with a "content" key.
        """
        if isinstance(message, dict):
            content = message.get("content", "")
        else:
            content = str(message)
        if content and content.strip():
            self._store.remember(content[:1000], importance=self._importance)


def get_autogen_memory_context(
    memory_store: MemoryStore,
    role: str,
    goal: str | None = None,
    max_tokens: int = 400,
) -> str:
    """
    Return a memory context string for injection into an AutoGen agent's
    ``system_message``.

    Retrieves relevant memories from agentmemory based on the agent's role and
    optional goal, and returns them as a formatted context block. Prepend this
    to the agent's system_message to give it codebase/task memory.

    Args:
        memory_store: The MemoryStore instance to query.
        role: Agent role name, e.g. "Research Assistant", "Code Reviewer".
        goal: Optional goal description to narrow memory retrieval.
        max_tokens: Maximum token budget for the returned context (default 400).

    Returns:
        A formatted context string, or an empty string if no memories exist yet.

    Example:
        context = get_autogen_memory_context(memory, "Research Assistant",
                                             goal="literature review on LLMs")
        agent = autogen.AssistantAgent(
            name="researcher",
            system_message=context + "\\nYou are a research assistant.",
        )
    """
    query = f"{role} {goal or ''}".strip()
    return memory_store.get_context(query=query, max_tokens=max_tokens)
```
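To make the observe-only contract concrete, here is a dependency-free check. `FakeStore` and `Hook` are stand-ins (the `Hook` body copies the `on_agent_reply` logic above) so the sketch runs without pyautogen or agentmemory installed:

```python
# Stand-in memory store recording remember() calls; the real MemoryStore
# is assumed to be installed when using the adapter for real.
class FakeStore:
    def __init__(self):
        self.saved = []

    def remember(self, content, importance=5):
        self.saved.append((content, importance))

# Minimal re-statement of the adapter's observe-only callback logic,
# mirroring on_agent_reply above for a dependency-free check.
class Hook:
    def __init__(self, store, importance=6):
        self._store = store
        self._importance = importance

    def on_agent_reply(self, recipient, messages, sender, config):
        if messages:
            content = messages[-1].get("content", "")
            if content and isinstance(content, str) and content.strip():
                self._store.remember(content[:1000], importance=self._importance)
        return False, None  # never intercepts the reply

store = FakeStore()
hook = Hook(store, importance=6)

# A reply is recorded, and the hook signals "do not intercept"
result = hook.on_agent_reply(None, [{"content": "LLM survey notes"}], None, None)
print(result)       # (False, None)
print(store.saved)  # [('LLM survey notes', 6)]

# Empty or blank content is ignored, and the chain still continues
assert hook.on_agent_reply(None, [{"content": ""}], None, None) == (False, None)
assert len(store.saved) == 1
```

Returning `(False, None)` is what lets the hook sit at `position=0` without shadowing AutoGen's built-in reply functions.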
