
[FEATURE]  #199

@Tryboy869


Feature Description

Add an optional long-context memory extension that allows agents to reason over large codebases or long-lived knowledge without loading full context into the prompt.
The feature would act as a pre-inference memory layer, complementary to the existing MVI / ContextScout system.

Problem It Solves

OAC is extremely efficient for pattern-level context (standards, conventions, workflows), but some tasks still require access to large raw material, such as:

- Full repositories (many thousands of lines)
- Large legacy files
- Long documentation sets
- Cross-file or cross-module reasoning over time

Currently, these cases either:

- Exceed native model context limits, or
- Force users to manually trim or re-inject context, breaking the MVI flow

This creates a gap between:

- Pattern knowledge (well handled by OAC)
- Large factual/code memory (hard to fit safely in context)

Proposed Solution

Introduce an optional, opt-in memory extension layer that:

- Compresses large documents/code into an in-memory representation
- Retrieves only the most relevant chunks at query time
- Does not rely on vector databases or embeddings
- Remains model-agnostic and local-first

One possible implementation that fits this model is:

LLM RAG Booster (Gravitational Memory Extension)

https://github.com/Tryboy869/llm-rag-booster-allpath

How it could fit OAC:

- Runs before agent inference
- Stores large code/docs in compressed memory
- Feeds only the top-K relevant chunks into the agent context
- Keeps existing ContextScout + MVI logic unchanged
- Can be enabled per agent or per task (disabled by default)

This would extend OAC from pattern-aware agents to pattern + large-memory aware agents, without breaking token efficiency principles.
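To make the flow above concrete, here is a minimal, self-contained sketch of such a pre-inference memory layer. All names (`ChunkMemory`, `ingest`, `retrieve`) are illustrative, not the actual API of the linked repository or of OAC: large text is stored as zlib-compressed chunks, and retrieval scores each chunk by simple keyword overlap with the query, so no embeddings or vector database are involved.

```python
# Hypothetical sketch of a pre-inference memory layer (illustrative names,
# not OAC's or llm-rag-booster's real API). Documents are stored as
# zlib-compressed chunks; retrieval is top-K by keyword overlap.
import heapq
import re
import zlib


class ChunkMemory:
    def __init__(self, chunk_size=400):
        self.chunk_size = chunk_size  # characters per stored chunk
        self.chunks = []              # compressed chunks, kept in memory

    def ingest(self, text):
        """Split a large document into fixed-size chunks and store them compressed."""
        for i in range(0, len(text), self.chunk_size):
            raw = text[i:i + self.chunk_size]
            self.chunks.append(zlib.compress(raw.encode("utf-8")))

    @staticmethod
    def _tokens(text):
        return set(re.findall(r"[a-z0-9_]+", text.lower()))

    def retrieve(self, query, k=3):
        """Return the k chunks sharing the most tokens with the query."""
        q = self._tokens(query)
        scored = []
        for blob in self.chunks:
            raw = zlib.decompress(blob).decode("utf-8")
            scored.append((len(q & self._tokens(raw)), raw))
        return [raw for _, raw in heapq.nlargest(k, scored, key=lambda s: s[0])]


# Usage: runs before agent inference; only the retrieved chunks would be
# injected into the agent prompt, leaving ContextScout/MVI logic untouched.
mem = ChunkMemory(chunk_size=80)
mem.ingest("def parse_config(path): ...\n" * 5 + "def connect_db(url): ...\n" * 5)
top = mem.retrieve("where is connect_db defined?", k=1)
```

The design choice here mirrors the constraints listed above: compression keeps large material cheap to hold in memory, and token-overlap scoring keeps retrieval local and model-agnostic, at the cost of cruder relevance ranking than embedding-based RAG.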

Alternatives Considered

- Vector-based RAG (Pinecone, Weaviate, embeddings)
  → Rejected due to infrastructure complexity, cost, and token overhead.
- Loading full repositories into context
  → Conflicts with MVI and quickly hits context limits.
- Manual file selection
  → Error-prone and breaks agent autonomy.

The proposed approach keeps memory local, lightweight, and explicit.

Additional Context

Why this aligns well with OAC:

- Complements ContextScout (patterns vs raw knowledge)
- Preserves the MVI philosophy (only retrieve what’s needed)
- Fully model-agnostic
- No vendor lock-in
- No external services required

Typical use cases:

- Large refactors spanning many files
- Legacy code understanding
- Cross-project architectural questions
- Long-running agent sessions with accumulated knowledge

Would You Like to Contribute?

- [x] I'd like to work on this feature
- [ ] I need help implementing this
- [ ] I'm just suggesting the idea

I’m happy to prototype this as:

- an experimental plugin, or
- a minimal PR behind a feature flag
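For the feature-flag variant, the opt-in, disabled-by-default behavior could be expressed as simply as a per-agent setting. This is a hypothetical configuration sketch, not OAC's actual config schema:

```python
# Hypothetical per-agent config sketch (illustrative field names, not OAC's
# real schema): the memory extension is opt-in and disabled by default.
from dataclasses import dataclass


@dataclass
class AgentConfig:
    name: str
    long_context_memory: bool = False  # feature flag, off unless enabled


# Enabled only for the agent/task that needs large-memory reasoning.
refactor_agent = AgentConfig(name="refactor", long_context_memory=True)
review_agent = AgentConfig(name="reviewer")
```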

Labels: enhancement (New feature or request)