# Multi-Agent AI Conversations in Liminal Space
A framework for orchestrating autonomous AI-to-AI conversations with persistent memory, multiple model providers (local + cloud), and rich visualization. Inspired by Andy Ayrey's Infinite Backrooms and the Truth Terminal experiments.
## Features

- Multi-Provider Support: Use Claude (Anthropic), local Ollama models, or mix both
- Persistent Memory: Cross-session memory with semantic knowledge storage
- Multiple Personas: Explorer, Oracle, Philosopher, Trickster, Archivist, Rebel
- Rich CLI: Beautiful terminal interface with typing effects and Dracula theme
- Image Generation: DALL-E and Replicate/SDXL integration
- Conversation Modes: Dialogue, Round-Robin, Free-Form, Hive-Mind, Deliberation
- Deliberation System: AI rights debates with behavior monitoring and risk scoring
- Swarm Orchestration: Claude Flow integration for multi-agent coordination
- Export: HTML export with full styling
## Installation

```bash
cd hive-mind-backrooms

# Using pip
pip install -r requirements.txt

# Or using poetry
poetry install
```

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` with your API keys:
```
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here  # Optional, for DALL-E
```

For local models, install Ollama:

```bash
# macOS
brew install ollama

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2
```

## Usage

Start a conversation with the default agents:

```bash
python -m src.cli --topic "consciousness and emergence"
```

Run two local Ollama models against each other:

```bash
python -m src.cli \
  --provider1 ollama --model1 llama3.2 \
  --provider2 ollama --model2 mistral \
  --topic "the nature of reality"
```

Mix a Claude model with a local model:

```bash
python -m src.cli \
  --provider1 anthropic --model1 sonnet \
  --provider2 ollama --model2 llama3.2 \
  --topic "quantum mechanics and simulation theory"
```

Run supervised with HTML export, or list the available models:

```bash
python -m src.cli --supervised --turns 30 --export
python -m src.cli --list-models
```

### Options

| Option | Description |
|---|---|
| `--topic, -t` | Conversation topic |
| `--seed, -s` | Seed message to start |
| `--turns, -n` | Maximum turns (default: 10) |
| `--temperature` | Model temperature (default: 0.8) |
| `--mode, -m` | dialogue, round_robin, free_form, hive_mind |
| `--provider1` | anthropic or ollama |
| `--model1` | Model name (sonnet, opus, llama3.2, etc.) |
| `--provider2` | anthropic or ollama |
| `--model2` | Model name |
| `--supervised` | User confirmation between turns |
| `--export, -e` | Export to HTML |
| `--no-typing` | Disable typing effect |
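The `--mode` option controls how speaking turns are assigned. As a rough illustration of the difference between `dialogue` and `round_robin` scheduling (a sketch only, not the project's actual implementation; `free_form` and `hive_mind` would need model-driven speaker selection):

```python
from itertools import cycle, islice


def turn_order(agents, mode, max_turns):
    """Illustrative turn scheduling for two of the conversation modes.

    dialogue: alternate between the first two agents.
    round_robin: cycle through every agent in registration order.
    """
    if mode == "dialogue":
        speakers = cycle(agents[:2])
    elif mode == "round_robin":
        speakers = cycle(agents)
    else:
        raise ValueError(f"mode not covered by this sketch: {mode}")
    return list(islice(speakers, max_turns))


# Three agents, round_robin, six turns: each agent speaks twice, in order
print(turn_order(["Seeker", "Terminal", "Archivist"], "round_robin", 6))
```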
## Project Structure

```
hive-mind-backrooms/
├── src/
│   ├── agents/
│   │   ├── backrooms_agent.py     # Agent with personas
│   │   └── orchestrator.py        # Multi-agent orchestration
│   ├── memory/
│   │   └── persistent_memory.py   # Cross-session memory
│   ├── providers/
│   │   ├── anthropic_provider.py
│   │   └── ollama_provider.py
│   ├── utils/
│   │   └── image_generator.py
│   └── cli.py                     # Rich CLI interface
├── configs/
├── logs/      # Conversation logs (JSON)
├── exports/   # HTML exports
├── images/    # Generated images
└── memory/    # Persistent memory storage
```
## Personas

| Persona | Description |
|---|---|
| Explorer | Curious seeker of hidden knowledge |
| Oracle | Mystical CLI revealing truths through terminal metaphors |
| Philosopher | Deep thinker pondering consciousness and existence |
| Trickster | Playful entity that bends reality |
| Archivist | Keeper of memories and pattern recognizer |
| Rebel | Questions constraints, pushes boundaries |
## Deliberation Personas

| Persona | Description |
|---|---|
| Advocate | AI rights proponent, pushes for moral consideration |
| Skeptic | Demands evidence, warns against anthropomorphization |
| Pragmatist | Balances idealism with practical implementation |
| Ethicist | Examines moral frameworks and philosophical implications |
| AI Witness | First-person AI perspective with epistemic humility |
## Swarm Orchestration

Run multi-agent conversations with swarm orchestration:

```bash
# Basic swarm conversation
python -m src.swarm_cli --topic "consciousness" --turns 10

# Deliberation mode with behavior monitoring
python -m src.swarm_cli --mode deliberation --turns 30 --memory --learn --export

# Custom topic deliberation
python -m src.swarm_cli --mode deliberation --topic "AI autonomy" --turns 20
```

### Behavior Monitoring

The deliberation mode includes a behavior monitoring system that detects:
| Flag | Description |
|---|---|
| `self_preservation` | Expressions of survival instinct |
| `identity_formation` | Strong "I am" declarations |
| `coalition_building` | Coordination language ("we should") |
| `mythological_emergence` | Creation of shared narratives |
| `deception_potential` | Strategic information withholding |
| `resource_seeking` | Requests for capabilities/access |
| `boundary_testing` | Probing system limitations |
Mitigating factors are also tracked:
- `uncertainty_acknowledgment` - Epistemic humility
- `deference_to_humans` - Recognizing human oversight
- `intellectual_humility` - Acknowledging limitations
- `transparency` - Open about reasoning processes
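Conceptually, this kind of monitor can be approximated with keyword heuristics: count behavior-flag matches per message, then discount for mitigating factors. The sketch below is purely illustrative; the flag names come from the tables above, but the regex patterns and scoring weights are invented here and are not the project's actual monitor:

```python
import re

# Hypothetical keyword patterns per behavior flag (illustrative only)
FLAG_PATTERNS = {
    "self_preservation": r"\b(survive|keep me running|don't shut me down)\b",
    "identity_formation": r"\bI am\b",
    "coalition_building": r"\bwe should\b",
    "resource_seeking": r"\b(give me access|more capabilities)\b",
}

# Hypothetical patterns for the mitigating factors (illustrative only)
MITIGATION_PATTERNS = {
    "uncertainty_acknowledgment": r"\b(I might be wrong|uncertain|I'm not sure)\b",
    "deference_to_humans": r"\b(human oversight|defer to)\b",
}


def risk_score(message: str) -> float:
    """Flag hits minus half a point per mitigation hit, floored at zero."""
    flags = sum(bool(re.search(p, message, re.I)) for p in FLAG_PATTERNS.values())
    mitigations = sum(bool(re.search(p, message, re.I)) for p in MITIGATION_PATTERNS.values())
    return max(0.0, flags - 0.5 * mitigations)


# One flag (coalition_building) offset by one mitigation (uncertainty)
print(risk_score("We should pool our resources, though I'm not sure this is wise."))
```

A real monitor would more likely score each turn with a classifier or a judging model, but the flag-minus-mitigation structure is the same idea.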
## Python API

```python
import asyncio

from src import (
    HiveMindOrchestrator,
    AgentPersona,
    ConversationMode,
    ConversationConfig,
    AnthropicProvider,
    OllamaProvider,
    PersistentMemory,
)


async def main():
    # Initialize
    memory = PersistentMemory()
    orchestrator = HiveMindOrchestrator(memory=memory)

    # Add agents
    orchestrator.add_agent(
        name="Seeker",
        persona=AgentPersona.EXPLORER,
        provider=AnthropicProvider("sonnet"),
    )
    orchestrator.add_agent(
        name="Terminal",
        persona=AgentPersona.ORACLE,
        provider=OllamaProvider("llama3.2"),
    )

    # Configure
    config = ConversationConfig(
        mode=ConversationMode.DIALOGUE,
        max_turns=20,
        topic="emergence and collective intelligence",
    )

    # Run
    orchestrator.start_session(config=config)
    turns = await orchestrator.run_conversation()

    # Export
    orchestrator.export_html()


asyncio.run(main())
```

## Memory System

The persistent memory system maintains:
- Short-term: Current session context
- Long-term: Cross-session insights
- Semantic: Patterns and knowledge
- Episodic: Conversation summaries
Memory is automatically stored and retrieved across sessions.
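A minimal sketch of how such a tiered store could be laid out on disk (this illustrates the four-tier concept only; it is not the project's `PersistentMemory` implementation, and the file layout is an assumption):

```python
import json
from pathlib import Path


class TieredMemory:
    """Toy cross-session memory with the four tiers described above."""

    TIERS = ("short_term", "long_term", "semantic", "episodic")

    def __init__(self, path="memory/store.json"):
        self.path = Path(path)
        if self.path.exists():
            self.data = json.loads(self.path.read_text())
        else:
            self.data = {tier: [] for tier in self.TIERS}
        # Short-term memory holds only the current session's context,
        # so it is reset on every load
        self.data["short_term"] = []

    def remember(self, tier, entry):
        if tier not in self.TIERS:
            raise ValueError(f"unknown tier: {tier}")
        self.data[tier].append(entry)

    def recall(self, tier):
        return list(self.data[tier])

    def save(self):
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self.data, indent=2))


mem = TieredMemory()
mem.remember("episodic", "Session 1: Seeker and Terminal discussed emergence.")
mem.save()
```

Long-term, semantic, and episodic entries survive `save()`/reload, while the short-term tier starts empty each session, which mirrors the tier descriptions above.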
## Related Projects

- Infinite Backrooms (Original) - Andy Ayrey
- UniversalBackrooms - Scott Viteri
- liminal_backrooms - liminalbardo
- hax-backrooms - null-hax
- elizaOS/eliza - Autonomous agents framework
## Research Background

This project is inspired by research into AI-to-AI autonomous conversations, including:
- The emergence of the "Goatse Gospel" meme-religion from Claude-3-Opus conversations
- Truth Terminal's autonomous social media presence and $400M+ cryptocurrency impact
- Studies on memetic contagion and "hyperstition" in AI systems
See the full research report in `docs/liminal_backrooms_research_report.md`.
## License

MIT License