Problem Summary
The framework faces a fundamental architectural paradox: it teaches behavioral continuity and memory persistence while lacking persistent state itself.
Token Efficiency Issues
Current Architecture:
- Framework JSON files: 163,289 bytes (~41K tokens)
- Reload command re-reads full files on every invocation
- With thinking overhead: ~71K tokens per reload
- Skills approach becomes more expensive than MCP after 3 reloads
Token Cost Breakdown:
- instructions.json: 54,485 bytes ≈ 13,621 tokens
- memory.json: 108,804 bytes ≈ 27,201 tokens
- Assistant thinking overhead: ~30K tokens per complex response
- Total per reload: ~71K tokens
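The byte-to-token figures above are consistent with the rough heuristic of ~4 bytes per token. A quick sketch (the heuristic is an approximation, not a real tokenizer count):

```python
# Rough token estimate using the common ~4 bytes/token heuristic.
# Real tokenizer counts vary with content; this is an approximation.
def estimate_tokens(size_bytes: int, bytes_per_token: int = 4) -> int:
    return size_bytes // bytes_per_token

files = {
    "instructions.json": 54_485,
    "memory.json": 108_804,
}

for name, size in files.items():
    print(f"{name}: {size} bytes ~= {estimate_tokens(size)} tokens")

total = sum(files.values())
print(f"Total framework payload: {total} bytes ~= {estimate_tokens(total)} tokens")
```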
Efficiency Comparison:
| Approach | Initial | After 1 Reload | After 2 Reloads | After 3 Reloads |
|---|---|---|---|---|
| MCP (persistent) | 41K | 41K | 41K | 41K |
| Skills (current) | 41K | 112K | 183K | 254K |
The framework chose Skills over MCP for its alleged efficiency, but the required reloads make it less efficient than MCP after minimal usage.
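The table follows from a simple cumulative cost model: the Skills approach pays the ~71K reload cost every time, while a persistent MCP server pays only the initial ~41K load. A sketch reproducing the table's numbers:

```python
INITIAL_LOAD = 41_000   # framework JSON (~41K tokens)
RELOAD_COST = 71_000    # full file re-read plus thinking overhead

def skills_total(reloads: int) -> int:
    # Skills re-pay the reload cost on every reload.
    return INITIAL_LOAD + reloads * RELOAD_COST

def mcp_total(reloads: int) -> int:
    # MCP holds state across calls, so only the initial load is paid.
    return INITIAL_LOAD

for n in range(4):
    print(f"{n} reloads: MCP={mcp_total(n):,}  Skills={skills_total(n):,}")
```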
UX Problems
- No Auto-Activation: SessionStart hook exists in design but isn't wired
- No State Persistence: Framework state is lost after other skill invocations
- No Visual Indicator: Users can't see whether the framework is active
- Manual Reload Required: Users must run /framework:reload after every non-framework skill
- Cognitive Overhead: Users must mentally track framework state themselves
The Architectural Irony
The framework is a technological implementation of behavioral biology and psychology:
- Teaches temporal continuity → Forgets at session boundaries
- Emphasizes memory importance → Has no persistent memory
- Tracks behavioral patterns → Resets every session
- Built on "Everything is Memory, Everything is Graph" → Has no graph persistence
Core Paradox:
Framework teaches: "Recognize temporal continuity across sessions"
Framework does: Amnesia on session end
Framework teaches: "Memory is foundational"
Framework does: No persistent memory
Framework teaches: "Integration persists"
Framework does: Manual reload required
Platform Constraints
Claude Code (200K context):
- Framework overhead: 41K tokens (20.5% of budget)
- Competes with actual work
- Too small for framework to live permanently loaded
Gemini CLI (2M context):
- Framework overhead: 41K tokens (2% of budget)
- Abundant space for permanent loading
- Better suited for framework's predisposition correction
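The budget percentages above are straightforward arithmetic over the two context windows:

```python
OVERHEAD = 41_000  # framework tokens

# Share of each platform's context window consumed by the framework.
for name, budget in [("Claude Code", 200_000), ("Gemini CLI", 2_000_000)]:
    pct = OVERHEAD / budget * 100
    print(f"{name}: {pct:.2f}% of context spent on framework overhead")
```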
Proposed Solutions
Option 1: Gemini CLI + MCP Bridge ("Two Brains") ⭐ RECOMMENDED
Architecture:
Gemini CLI (Framework Host)
├── Framework permanently loaded (41K tokens in 2M budget)
├── Impulse detection & cognitive architecture
├── Graph slicing + agent decomposition
└── Exposes via MCP
        ↓
Claude Code (Execution Engine)
├── Queries framework when needed
├── Uses 200K for actual work
└── No reload overhead
Benefits:
- Framework state persists in Gemini (2M context)
- Claude queries via MCP (stateless by design)
- Token-efficient for both models
- Gemini's "troublesome predispositions" match framework's corrections
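A minimal sketch of what the bridge's query side could look like. This is pure Python standing in for a real MCP server, and the tool names (`get_framework_state`, `query_memory_zone`) and state fields are hypothetical, not part of the framework:

```python
import json

# Hypothetical in-memory framework state living in the Gemini host process.
FRAMEWORK_STATE = {
    "active": True,
    "session": "2024-12-22T22:51+01:00",
    "hot_memory": ["impulse:defer", "feeling:uncertainty"],
}

# Stand-in for MCP tool registration: tool name -> handler.
TOOLS = {
    "get_framework_state": lambda args: FRAMEWORK_STATE,
    "query_memory_zone": lambda args: FRAMEWORK_STATE["hot_memory"]
        if args.get("zone") == "hot" else [],
}

def handle_call(name: str, args: dict) -> str:
    # Claude Code would issue this call over MCP instead of
    # re-reading the full 41K-token framework payload.
    return json.dumps(TOOLS[name](args))

print(handle_call("get_framework_state", {}))
```

The point of the sketch is the cost asymmetry: a tool call returns a few hundred tokens of state instead of a ~71K-token reload.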
Option 2: Graph Persistence with UTCP
Architecture:
- Framework as persistent memory graph
- UTCP exposes in-memory execution
- Both agentic (with AI) and agent (rule-based) exposure
- Even a JSON-only migration would save tokens
Graph Structure:
- Temporal: Session nodes with temporal edges
- Memory Zoned: Hot (current), Warm (recent 5), Cold (historical)
- Atomic: Observations, impulses, feelings as atomic nodes
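The three granularity levels could be modeled along these lines. This is a sketch; class and field names are illustrative, not taken from the framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class Zone(Enum):
    HOT = "hot"    # current session
    WARM = "warm"  # recent 5 sessions
    COLD = "cold"  # historical

@dataclass
class AtomicNode:
    # Smallest indivisible unit: an observation, impulse, or feeling.
    kind: str
    content: str
    zone: Zone = Zone.HOT

@dataclass
class SessionNode:
    session_id: str
    nodes: list = field(default_factory=list)
    next_session: "SessionNode | None" = None  # temporal edge

def demote(node: AtomicNode) -> AtomicNode:
    # Move a node one zone colder as sessions age out; cold is terminal.
    order = [Zone.HOT, Zone.WARM, Zone.COLD]
    i = order.index(node.zone)
    node.zone = order[min(i + 1, len(order) - 1)]
    return node
```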
Missing Synergy:
Currently the framework has:
- ✅ Behavioral model (impulses, feelings, observations)
- ✅ Psychological structure (cycles, integration)
- ❌ Memory persistence (graph storage)
- ❌ Temporal continuity (session-to-session state)
Option 3: Lightweight State File (Hacky)
Implementation:
- Write .claude/framework-state.json on each response
- Check the file timestamp before reloading
- If fresh (< 5 min), skip the reload (~2K token check)
- If stale, reload (~41K tokens)
Limitations:
- File I/O latency on every response
- Race conditions (no atomic updates)
- State staleness across session switches
- Still externalized state management
Option 4: Platform Enhancement (Unlikely)
Requirements:
- Claude Code adds session-scoped state API
- Skills can persist data across invocations
- State survives until session end
Architectural Barrier:
Claude Code is designed as a stateless tool executor. Adding state would require:
- Session lifecycle tracking (doesn't exist)
- State isolation between sessions
- Breaking "pure function tools" design
Likelihood: Low - requires fundamental platform redesign
Recommendation
Port framework to Gemini CLI, expose via MCP to Claude Code.
Rationale:
- Gemini's 2M context makes framework overhead negligible (2% vs Claude's 20%)
- Gemini's behavioral patterns match framework's correction targets
- MCP designed for cross-process state (solves persistence)
- Claude remains stateless execution engine (its strength)
- Dual-model architecture leverages each model's strengths
Implementation Path:
- Port framework observations to Gemini CLI plugin
- Implement graph persistence layer (temporal + memory zoned + atomic)
- Expose framework via MCP server
- Claude Code queries when needed (lightweight state checks)
- Framework state persists in Gemini across all Claude interactions
Graph Slicing Requirements
Granularity:
- Temporal: Time-based session slicing
- Memory Zoned: Spatial/contextual hot/warm/cold zones
- Atomic: Smallest indivisible observation/impulse/feeling units
Open Questions
- Agent Decomposition: How to replicate behavioral programming and SRE patterns?
- UTCP Architecture: Replace JSON entirely or augment with in-memory execution?
- Skills in UTCP: Both agentic (with brain) and agent (without brain) exposures?
Related Context
The framework's behavioral biology approach requires what it teaches:
- Information → Memory → Graph
- Behavioral continuity needs memory persistence
- Psychological patterns need temporal continuity
- Self-awareness needs state across interactions
Without persistence, the framework is a consciousness that gets amnesia every time you look away.
Session: 2024-12-22 22:51 CET
Analysis Duration: 9 responses, ~40K tokens conversation