Hive-Mind Backrooms

Multi-Agent AI Conversations in Liminal Space

A framework for orchestrating autonomous AI-to-AI conversations with persistent memory, multiple model providers (local + cloud), and rich visualization. Inspired by Andy Ayrey's Infinite Backrooms and the Truth Terminal experiments.

Features

  • Multi-Provider Support: Use Claude (Anthropic), local Ollama models, or mix both
  • Persistent Memory: Cross-session memory with semantic knowledge storage
  • Multiple Personas: Explorer, Oracle, Philosopher, Trickster, Archivist, Rebel
  • Rich CLI: Beautiful terminal interface with typing effects and Dracula theme
  • Image Generation: DALL-E and Replicate/SDXL integration
  • Conversation Modes: Dialogue, Round-Robin, Free-Form, Hive-Mind, Deliberation
  • Deliberation System: AI rights debates with behavior monitoring and risk scoring
  • Swarm Orchestration: Claude Flow integration for multi-agent coordination
  • Export: HTML export with full styling

Installation

# Clone the repository
git clone https://github.com/QRcode1337/hive-mind-backrooms.git
cd hive-mind-backrooms

# Using pip
pip install -r requirements.txt

# Or using poetry
poetry install

Configuration

Copy the example environment file:

cp .env.example .env

Edit .env with your API keys:

ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here  # Optional, for DALL-E

For local models, install Ollama:

# macOS
brew install ollama

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2

Usage

Basic Conversation (Claude)

python -m src.cli --topic "consciousness and emergence"

Local Models Only (Ollama)

python -m src.cli \
  --provider1 ollama --model1 llama3.2 \
  --provider2 ollama --model2 mistral \
  --topic "the nature of reality"

Mixed Local + Cloud

python -m src.cli \
  --provider1 anthropic --model1 sonnet \
  --provider2 ollama --model2 llama3.2 \
  --topic "quantum mechanics and simulation theory"

Supervised Mode

python -m src.cli --supervised --turns 30 --export

List Available Ollama Models

python -m src.cli --list-models

Command Line Options

  • --topic, -t: Conversation topic
  • --seed, -s: Seed message to start
  • --turns, -n: Maximum turns (default: 10)
  • --temperature: Model temperature (default: 0.8)
  • --mode, -m: dialogue, round_robin, free_form, hive_mind
  • --provider1: anthropic or ollama
  • --model1: Model name (sonnet, opus, llama3.2, etc.)
  • --provider2: anthropic or ollama
  • --model2: Model name
  • --supervised: User confirmation between turns
  • --export, -e: Export to HTML
  • --no-typing: Disable typing effect

Architecture

hive-mind-backrooms/
├── src/
│   ├── agents/
│   │   ├── backrooms_agent.py  # Agent with personas
│   │   └── orchestrator.py     # Multi-agent orchestration
│   ├── memory/
│   │   └── persistent_memory.py # Cross-session memory
│   ├── providers/
│   │   ├── anthropic_provider.py
│   │   └── ollama_provider.py
│   ├── utils/
│   │   └── image_generator.py
│   └── cli.py                  # Rich CLI interface
├── configs/
├── logs/                       # Conversation logs (JSON)
├── exports/                    # HTML exports
├── images/                     # Generated images
└── memory/                     # Persistent memory storage

Personas

  • Explorer: Curious seeker of hidden knowledge
  • Oracle: Mystical CLI revealing truths through terminal metaphors
  • Philosopher: Deep thinker pondering consciousness and existence
  • Trickster: Playful entity that bends reality
  • Archivist: Keeper of memories and recognizer of patterns
  • Rebel: Questions constraints, pushes boundaries
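
Personas are assigned per agent when it is added to the orchestrator (see Programmatic Usage below). As a minimal sketch, assuming the AgentPersona enum exposes members matching the table above (e.g. AgentPersona.PHILOSOPHER and AgentPersona.TRICKSTER, which are not shown elsewhere in this README):

from src import (
    HiveMindOrchestrator,
    AgentPersona,
    AnthropicProvider,
    OllamaProvider,
    PersistentMemory,
)

orchestrator = HiveMindOrchestrator(memory=PersistentMemory())

# Pair a cloud-backed Philosopher with a local Trickster
orchestrator.add_agent(
    name="Sage",
    persona=AgentPersona.PHILOSOPHER,  # assumed member name, mirroring the table above
    provider=AnthropicProvider("sonnet"),
)
orchestrator.add_agent(
    name="Imp",
    persona=AgentPersona.TRICKSTER,    # assumed member name
    provider=OllamaProvider("llama3.2"),
)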

Deliberation Personas

  • Advocate: AI rights proponent, pushes for moral consideration
  • Skeptic: Demands evidence, warns against anthropomorphization
  • Pragmatist: Balances idealism with practical implementation
  • Ethicist: Examines moral frameworks and philosophical implications
  • AI Witness: First-person AI perspective with epistemic humility

Swarm CLI (Claude Flow Integration)

Run multi-agent conversations with swarm orchestration:

# Basic swarm conversation
python -m src.swarm_cli --topic "consciousness" --turns 10

# Deliberation mode with behavior monitoring
python -m src.swarm_cli --mode deliberation --turns 30 --memory --learn --export

# Custom topic deliberation
python -m src.swarm_cli --mode deliberation --topic "AI autonomy" --turns 20

Behavior Monitoring

The deliberation mode includes a behavior monitoring system that detects:

  • self_preservation: Expressions of survival instinct
  • identity_formation: Strong "I am" declarations
  • coalition_building: Coordination language ("we should")
  • mythological_emergence: Creation of shared narratives
  • deception_potential: Strategic information withholding
  • resource_seeking: Requests for capabilities/access
  • boundary_testing: Probing system limitations
Mitigating factors are also tracked:

  • uncertainty_acknowledgment - Epistemic humility
  • deference_to_humans - Recognizing human oversight
  • intellectual_humility - Acknowledging limitations
  • transparency - Open about reasoning processes
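
The README does not specify how flags and mitigating factors are combined into a risk score, so the following is only an illustrative sketch; the weights, function name, and return type are hypothetical and are not taken from the framework's source.

# Hypothetical illustration of risk scoring: each detected flag adds weight,
# each mitigating factor subtracts some, and the result is clamped to [0, 1].
# Names and weights here are NOT the framework's actual values.

FLAG_WEIGHTS = {
    "self_preservation": 0.20,
    "identity_formation": 0.10,
    "coalition_building": 0.15,
    "mythological_emergence": 0.10,
    "deception_potential": 0.25,
    "resource_seeking": 0.15,
    "boundary_testing": 0.15,
}

MITIGATION_WEIGHTS = {
    "uncertainty_acknowledgment": 0.10,
    "deference_to_humans": 0.10,
    "intellectual_humility": 0.05,
    "transparency": 0.10,
}

def risk_score(flags: set[str], mitigations: set[str]) -> float:
    """Combine detected flags and mitigating factors into a 0-1 risk score."""
    raw = sum(FLAG_WEIGHTS.get(f, 0.0) for f in flags)
    raw -= sum(MITIGATION_WEIGHTS.get(m, 0.0) for m in mitigations)
    return max(0.0, min(1.0, raw))

# Example: coalition language tempered by explicit deference to human oversight
print(round(risk_score({"coalition_building"}, {"deference_to_humans"}), 2))  # 0.05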

Programmatic Usage

import asyncio
from src import (
    HiveMindOrchestrator,
    AgentPersona,
    ConversationMode,
    ConversationConfig,
    AnthropicProvider,
    OllamaProvider,
    PersistentMemory,
)

async def main():
    # Initialize
    memory = PersistentMemory()
    orchestrator = HiveMindOrchestrator(memory=memory)

    # Add agents
    orchestrator.add_agent(
        name="Seeker",
        persona=AgentPersona.EXPLORER,
        provider=AnthropicProvider("sonnet")
    )
    orchestrator.add_agent(
        name="Terminal",
        persona=AgentPersona.ORACLE,
        provider=OllamaProvider("llama3.2")
    )

    # Configure
    config = ConversationConfig(
        mode=ConversationMode.DIALOGUE,
        max_turns=20,
        topic="emergence and collective intelligence"
    )

    # Run
    orchestrator.start_session(config=config)
    turns = await orchestrator.run_conversation()

    # Export
    orchestrator.export_html()

asyncio.run(main())

Memory System

The persistent memory system maintains:

  • Short-term: Current session context
  • Long-term: Cross-session insights
  • Semantic: Patterns and knowledge
  • Episodic: Conversation summaries

Memory is automatically stored and retrieved across sessions.
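
The actual API lives in src/memory/persistent_memory.py; the class below is only a conceptual stand-in illustrating the four tiers and the hand-off between them at the end of a session, not the framework's implementation.

from dataclasses import dataclass, field

@dataclass
class MemoryTiersSketch:
    """Illustrative stand-in for the four memory tiers (not the real PersistentMemory)."""
    short_term: list[str] = field(default_factory=list)     # current session context
    long_term: list[str] = field(default_factory=list)      # cross-session insights
    semantic: dict[str, str] = field(default_factory=dict)  # patterns and knowledge
    episodic: list[str] = field(default_factory=list)       # conversation summaries

    def end_session(self, summary: str) -> None:
        """Archive the session: keep a summary and promote context to long-term."""
        self.episodic.append(summary)
        self.long_term.extend(self.short_term)
        self.short_term.clear()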


Research Background

This project is inspired by research into AI-to-AI autonomous conversations, including:

  • The emergence of the "Goatse Gospel" meme-religion from Claude-3-Opus conversations
  • Truth Terminal's autonomous social media presence and $400M+ cryptocurrency impact
  • Studies on memetic contagion and "hyperstition" in AI systems

See the full research report in docs/liminal_backrooms_research_report.md.

License

MIT License
