Binary file added .DS_Store
Binary file not shown.
18 changes: 18 additions & 0 deletions .clinerules/byterover-rules 2.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
[byterover-mcp]

You are given two tools from the Byterover MCP server:
## 1. `byterover-store-knowledge`
You `MUST` always use this tool when:

+ Learning new patterns, APIs, or architectural decisions from the codebase
+ Encountering error solutions or debugging techniques
+ Finding reusable code patterns or utility functions
+ Completing any significant task or plan implementation

## 2. `byterover-retrieve-knowledge`
You `MUST` always use this tool when:

+ Starting any new task or implementation to gather relevant context
+ Before making architectural decisions to understand existing patterns
+ When debugging issues to check for previous solutions
+ Working with unfamiliar parts of the codebase
20 changes: 20 additions & 0 deletions .github/copilot-instructions 2.md
@@ -0,0 +1,20 @@
[byterover-mcp]

You are given two tools from the Byterover MCP server:
## 1. `byterover-store-knowledge`
You `MUST` always use this tool when:

+ Learning new patterns, APIs, or architectural decisions from the codebase
+ Encountering error solutions or debugging techniques
+ Finding reusable code patterns or utility functions
+ Completing any significant task or plan implementation

## 2. `byterover-retrieve-knowledge`
You `MUST` always use this tool when:

+ Starting any new task or implementation to gather relevant context
+ Before making architectural decisions to understand existing patterns
+ When debugging issues to check for previous solutions
+ Working with unfamiliar parts of the codebase
11 changes: 10 additions & 1 deletion .gitignore
@@ -171,4 +171,13 @@ memory/ai-2/*
*.bkp
logs/*
!logs/.gitkeep
videos/*
videos/*
.clinerules/byterover-rules.md
.kilocode/rules/byterover-rules.md
.roo/rules/byterover-rules.md
.windsurf/rules/byterover-rules.md
.cursor/rules/byterover-rules.mdc
.kiro/steering/byterover-rules.md
.qoder/rules/byterover-rules.md
.augment/rules/byterover-rules.md
cline_mcp_settings.json
208 changes: 208 additions & 0 deletions AGENTS 2.md
@@ -0,0 +1,208 @@
# AGENTS.md - AI Assistant Guidelines

## Byterover MCP Tools

You are given two tools from the Byterover MCP server:

### 1. `byterover-store-knowledge`
You `MUST` always use this tool when:

+ Learning new patterns, APIs, or architectural decisions from the codebase
+ Encountering error solutions or debugging techniques
+ Finding reusable code patterns or utility functions
+ Completing any significant task or plan implementation

### 2. `byterover-retrieve-knowledge`
You `MUST` always use this tool when:

+ Starting any new task or implementation to gather relevant context
+ Before making architectural decisions to understand existing patterns
+ When debugging issues to check for previous solutions
+ Working with unfamiliar parts of the codebase

## Project Overview

**Liminal Backrooms** is a PyQt6-based GUI application for dynamic, branching multi-AI conversations with visual network graph representation. It supports multiple AI providers (Claude, GPT, Gemini, Grok, DeepSeek, etc.) with forking and rabbitholing capabilities.

## Core Architecture

### Module Structure

**main.py** - Application entry point and orchestration
- Creates QApplication and initializes PyQt6 GUI
- `ConversationManager` class coordinates conversation flow
- `Worker` class (QRunnable) executes AI turns asynchronously via QThreadPool
- Manages conversation state, branching, and HTML export
- Signal/slot architecture for async updates

**gui.py** - PyQt6 GUI components
- `LiminalBackroomsApp` - Main window with three-panel layout
- `NetworkGraphWidget` - Visual conversation graph with node positioning, edge animation, collision detection
- `ControlPanel` - Model selection, iterations, prompt style
- Custom context menus for forking/rabbitholing selected text
- Loading animations and conversation display

**config.py** - Centralized configuration
- `AI_MODELS` dict - Maps display names to model IDs
- `SYSTEM_PROMPT_PAIRS` dict - Predefined conversation styles
- Runtime settings: `TURN_DELAY`, `SHOW_CHAIN_OF_THOUGHT_IN_CONTEXT`, `SHARE_CHAIN_OF_THOUGHT`

**shared_utils.py** - Provider API adapters
- `call_claude_api()` - Anthropic API
- `call_openai_api()` - OpenAI API
- `call_openrouter_api()` - OpenRouter multi-model access
- `call_replicate_api()` - Replicate (Flux image generation)
- `call_deepseek_api()` - DeepSeek via Replicate
- `generate_image_from_text()` - Image generation wrapper
- `open_html_in_browser()` - HTML conversation export

### Threading Model

- **Main Thread**: PyQt6 UI event loop
- **Worker Threads**: QThreadPool manages AI API calls via `Worker` (QRunnable)
- **Signals**: `WorkerSignals` class provides `finished`, `error`, `response`, `result`, `progress` signals
- Each AI turn spawns two workers (AI-1 and AI-2) that execute sequentially with configurable delay
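The sequential two-worker pattern can be sketched in a framework-agnostic way. This is a minimal stand-in for the real QThreadPool/QRunnable machinery, with `run_turn` and `run_iteration` as hypothetical names; the actual `Worker` class calls provider APIs and reports back via Qt signals rather than a shared list:

```python
import time
from concurrent.futures import ThreadPoolExecutor

TURN_DELAY = 0.1  # stand-in for the TURN_DELAY setting in config.py

def run_turn(ai_name, results):
    # Stand-in for a Worker's run(): call the provider API, then report back.
    results.append(f"{ai_name} responded")

def run_iteration(results):
    # One iteration: AI-1 then AI-2, executed sequentially with a delay between,
    # each on a pool thread so the (real) UI thread stays responsive.
    with ThreadPoolExecutor(max_workers=1) as pool:
        pool.submit(run_turn, "AI-1", results).result()
        time.sleep(TURN_DELAY)
        pool.submit(run_turn, "AI-2", results).result()

results = []
run_iteration(results)
print(results)  # ['AI-1 responded', 'AI-2 responded']
```

Blocking on `.result()` here models the sequential ordering; in the real app the ordering is enforced through signal callbacks instead.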

### Conversation Data Model

**Message Structure:**
```python
{
    "role": "user" | "assistant" | "system",
    "content": str,
    "ai_name": "AI-1" | "AI-2",
    "model": str,                  # Display name from AI_MODELS
    "hidden": bool,                # Optional, for hidden prompts
    "_type": str,                  # Optional, e.g., "branch_indicator"
    "generated_image_path": str    # Optional, for auto-generated images
}
```
```
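A sketch of working with this shape, under the assumption (consistent with the branching notes below) that `hidden` messages stay in state but are excluded from display. `make_message` and `visible_messages` are hypothetical helper names, not functions from the codebase:

```python
def make_message(role, content, ai_name, model, **extra):
    # Build a message dict matching the structure above; optional fields
    # (hidden, _type, generated_image_path) are passed as keyword args.
    msg = {"role": role, "content": content, "ai_name": ai_name, "model": model}
    msg.update(extra)
    return msg

def visible_messages(conversation):
    # Hidden messages (e.g., the "..." prompt that starts a fork) are kept
    # in conversation state but filtered out before display.
    return [m for m in conversation if not m.get("hidden", False)]

convo = [
    make_message("user", "hello", "AI-1", "Claude 3.5 Sonnet"),
    make_message("user", "...", "AI-2", "GPT-4o", hidden=True),
]
print(len(visible_messages(convo)))  # 1
```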

**Conversation State:**
- `main_conversation`: Primary conversation list
- `branch_conversations`: Dict mapping branch_id to branch data
- `active_branch`: Currently active branch ID or None
- Branch data includes: `type` (rabbithole/fork), `selected_text`, `conversation`, `parent`
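The branch-data shape described above can be sketched as follows. The ID scheme (a UUID here) and the `create_branch` helper are assumptions for illustration; the actual code may key branches differently:

```python
import uuid

main_conversation = [{"role": "user", "content": "seed message"}]
branch_conversations = {}  # branch_id -> branch data
active_branch = None       # currently active branch ID, or None for main

def create_branch(branch_type, selected_text, parent_conversation, parent=None):
    # Record a new branch with the fields listed above. The conversation is
    # copied, not aliased, so the branch can diverge from its parent.
    branch_id = str(uuid.uuid4())
    branch_conversations[branch_id] = {
        "type": branch_type,            # "rabbithole" or "fork"
        "selected_text": selected_text,
        "conversation": list(parent_conversation),
        "parent": parent,               # parent branch ID, or None for main
    }
    return branch_id

bid = create_branch("rabbithole", "seed", main_conversation)
print(branch_conversations[bid]["type"])  # rabbithole
```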

## Branching System

### Rabbitholing (🐇)
- **Purpose**: Deep dive into a specific concept
- **Behavior**:
- Copies full parent conversation context
- First TWO AI responses use focused prompt: `"'{selected_text}'!!!"`
- Subsequent responses revert to standard prompts
- Adds branch indicator to conversation
- **Visual**: Green nodes in graph

### Forking (🍴)
- **Purpose**: Explore alternative continuation from a point
- **Behavior**:
- Copies conversation UP TO selected text
- Truncates message at selection point
- First response uses fork-specific prompt
- Subsequent responses use standard prompts
- Hidden instruction message ("...") starts the fork
- **Visual**: Yellow nodes in graph
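The key context difference between the two branch types can be sketched as below: a rabbithole copies the full parent conversation, while a fork copies only up to the selection and truncates the containing message there. Function names and the exact truncation rule are assumptions for illustration:

```python
def rabbithole_context(conversation):
    # Rabbithole: carry the full parent context into the branch.
    return [dict(m) for m in conversation]

def fork_context(conversation, selected_text):
    # Fork: copy messages up to the one containing the selection,
    # truncating that message at the end of the selection.
    out = []
    for m in conversation:
        idx = m["content"].find(selected_text)
        if idx == -1:
            out.append(dict(m))
            continue
        truncated = dict(m)
        truncated["content"] = m["content"][: idx + len(selected_text)]
        out.append(truncated)
        break  # everything after the selection is dropped
    return out

convo = [
    {"role": "user", "content": "first message"},
    {"role": "assistant", "content": "the cat sat on the mat"},
    {"role": "user", "content": "never copied into a fork"},
]
forked = fork_context(convo, "cat sat")
print(len(forked), repr(forked[-1]["content"]))  # 2 'the cat sat'
```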

## Development Setup

### Prerequisites
- Python 3.10 or 3.11 (3.12 not supported)
- Poetry for dependency management
- API keys for desired providers

### Environment Setup
```bash
poetry env use python3.11
poetry install
```

### API Configuration
Configure API keys in `.env` file:
```bash
ANTHROPIC_API_KEY=your_key_here
OPENROUTER_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
REPLICATE_API_TOKEN=your_key_here # Optional
```
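A fail-fast check for these keys can be sketched with the standard library (the app itself loads `.env` via python-dotenv; the `missing_keys` helper is hypothetical):

```python
import os

REQUIRED = ["ANTHROPIC_API_KEY", "OPENROUTER_API_KEY", "OPENAI_API_KEY"]

def missing_keys(env=os.environ):
    # Report required provider keys that are absent or set to an empty string.
    return [k for k in REQUIRED if not env.get(k)]

# With an empty environment, every required key is reported missing:
print(missing_keys(env={}))
```

Running such a check at startup turns confusing provider errors into one clear message about which keys to add.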

### Running the Application
```bash
poetry run python main.py
```

## Common Issues & Solutions

### Poetry Installation Issues
```bash
# If Pillow fails to install
poetry env use python3.11
poetry install

# If Python version mismatch
poetry env remove --all
poetry env use python3.11
poetry install
```

### GUI Not Launching
- Ensure PyQt6 is installed: `poetry show pyqt6`
- Check display environment on Linux: `echo $DISPLAY`
- Launch from terminal, not Finder (macOS env variable issue)

### API Errors
- Verify API keys in `.env` and loaded: `python -c "from dotenv import load_dotenv; load_dotenv(); import os; print(os.getenv('ANTHROPIC_API_KEY'))"`
- Check model ID matches provider expectations
- Monitor console output for detailed error messages
- Some models require a specific API key (e.g., DeepSeek via Replicate needs `REPLICATE_API_TOKEN`)

### Signal/Threading Issues
- If "broken pipe" or signal deletion errors occur, check Worker signal lifecycle
- Ensure signals remain connected across multiple iterations
- Use `self.workers.append(worker)` to prevent garbage collection
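Why the `self.workers.append(worker)` reference matters can be shown with a minimal CPython sketch: an object with no strong reference is collected (and its signals would die with it), while one kept in a list survives. `Worker` here is a bare stand-in, not the real QRunnable subclass:

```python
import gc
import weakref

class Worker:
    pass

workers = []  # analogous to self.workers on the ConversationManager

def spawn(keep):
    w = Worker()
    ref = weakref.ref(w)
    if keep:
        workers.append(w)  # strong reference keeps the worker alive
    return ref

kept = spawn(keep=True)
dropped = spawn(keep=False)
gc.collect()
print(kept() is not None, dropped() is None)  # True True
```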

### Branching Issues
- If duplicate messages appear, check conversation filtering logic in `ai_turn()`
- If images disappear on branch, this is a known GUI limitation - check `images/` folder
- Graph nodes overlapping: drag apart or disable physics with `apply_physics = False`

## Development Tips

- **Debugging conversation flow**: Add print statements in `ai_turn()` to trace message filtering
- **Testing new providers**: Start with simple single-turn conversations before branching
- **UI customization**: Colors defined in `COLORS` dict at top of gui.py
- **Async debugging**: Check Worker signals connect properly, use `finished.connect()` for cleanup
- **Branch logic**: Key distinction is in system prompt override for first 1-2 responses
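That override rule can be sketched as a single dispatch on branch type and response count. The rabbithole prompt string comes from the branching section above; the fork prompt text and the function name are placeholders, not the actual values in the code:

```python
def pick_system_prompt(branch_type, response_count, selected_text, standard_prompt):
    # Rabbitholes use the focused prompt for the first TWO responses;
    # forks use a fork-specific prompt for the first response only.
    if branch_type == "rabbithole" and response_count < 2:
        return f"'{selected_text}'!!!"
    if branch_type == "fork" and response_count < 1:
        return f"Continue from: '{selected_text}'"  # placeholder fork prompt
    return standard_prompt

print(pick_system_prompt("rabbithole", 1, "liminal", "std"))  # 'liminal'!!!
print(pick_system_prompt("rabbithole", 2, "liminal", "std"))  # std
```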

## Code Quality

The project uses ruff for linting (configured in pyproject.toml):
```bash
# Check code
poetry run ruff check .

# Format code
poetry run ruff format .
```

[byterover-mcp]

You are given two tools from the Byterover MCP server:
## 1. `byterover-store-knowledge`
You `MUST` always use this tool when:

+ Learning new patterns, APIs, or architectural decisions from the codebase
+ Encountering error solutions or debugging techniques
+ Finding reusable code patterns or utility functions
+ Completing any significant task or plan implementation

## 2. `byterover-retrieve-knowledge`
You `MUST` always use this tool when:

+ Starting any new task or implementation to gather relevant context
+ Before making architectural decisions to understand existing patterns
+ When debugging issues to check for previous solutions
+ Working with unfamiliar parts of the codebase
127 changes: 127 additions & 0 deletions SORA_GUIDE 2.md
@@ -0,0 +1,127 @@
# SORA Video Generation Guide

## ✅ Configuration Complete!

SORA environment variables have been added to your `.env` file.

## How to Use SORA

### Method 1: Manual Video Generation
1. In the GUI, select **"Sora 2"** or **"Sora 2 Pro"** as one of the AI models
2. Type a video prompt (e.g., "A serene lake at sunset with gentle waves")
3. Click "Propagate"
4. The system will:
- Create a video generation job with OpenAI
- Poll for completion (this takes 1-3 minutes)
- Save the video to `videos/` folder
- Display the result in the conversation
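The poll-for-completion step can be sketched as a generic loop. `check_status` stands in for the actual OpenAI job-status call; the injectable `sleep` and timeout are assumptions for illustration:

```python
import time

def poll_until_done(check_status, interval=5.0, timeout=300.0, sleep=time.sleep):
    # Repeatedly query job status until it reaches a terminal state
    # or the timeout elapses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status()
        if status in ("completed", "failed"):
            return status
        sleep(interval)
    raise TimeoutError("video job did not finish in time")

# Simulated job that completes on the third poll:
statuses = iter(["queued", "in_progress", "completed"])
print(poll_until_done(lambda: next(statuses), interval=0, sleep=lambda s: None))
```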

### Method 2: Auto-Generate from AI Responses
1. Set `SORA_AUTO_FROM_AI1=1` in `.env`
2. Select the **"Video Collaboration (AI-1 to Sora)"** conversation scenario
3. Use any text model for AI-1 (it will write cinematic descriptions)
4. AI-2 can be set to "Sora 2" or just left as another text model
5. AI-1's responses will automatically trigger video generation

## Environment Variables

```bash
# Enable/disable auto-generation from AI-1
SORA_AUTO_FROM_AI1=0 # 0=manual only, 1=auto-generate

# Model selection
SORA_MODEL=sora-2 # or sora-2-pro

# Video settings (optional)
SORA_SECONDS= # Leave empty for default (5-12 seconds)
SORA_SIZE= # Leave empty for default resolution

# Logging
SORA_VERBOSE=1 # 1=show detailed logs, 0=quiet
```
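Reading these variables with the documented defaults can be sketched as below, treating empty strings (the "leave empty" cases) as unset. The `sora_settings` helper is hypothetical, not a function from the codebase:

```python
import os

def sora_settings(env=os.environ):
    # Collect the SORA_* variables above, with empty string meaning "use default".
    def opt(name):
        return env.get(name) or None
    return {
        "auto_from_ai1": env.get("SORA_AUTO_FROM_AI1", "0") == "1",
        "model": env.get("SORA_MODEL", "sora-2"),
        "seconds": opt("SORA_SECONDS"),   # None -> provider default duration
        "size": opt("SORA_SIZE"),         # None -> provider default resolution
        "verbose": env.get("SORA_VERBOSE", "1") == "1",
    }

print(sora_settings(env={"SORA_SECONDS": ""}))
```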

## Video Output

Generated videos are saved to:
```
/Users/patrickgallowaypro/Documents/PROJECTS/liminal_backrooms/videos/
```

Filename format: `YYYYMMDD_HHMMSS_prompt_snippet.mp4`
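Generating a name in that format can be sketched as follows; the snippet length and slug rules here are assumptions, not the exact logic in the codebase:

```python
import re
from datetime import datetime

def video_filename(prompt, now=None):
    # Build YYYYMMDD_HHMMSS_prompt_snippet.mp4 from a prompt.
    now = now or datetime.now()
    snippet = re.sub(r"[^a-z0-9]+", "_", prompt.lower()).strip("_")[:30]
    return f"{now.strftime('%Y%m%d_%H%M%S')}_{snippet}.mp4"

name = video_filename("A red ball bouncing!", datetime(2024, 1, 2, 3, 4, 5))
print(name)  # 20240102_030405_a_red_ball_bouncing.mp4
```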

## Supported Video Durations
- Default: ~5 seconds
- Can specify: 5, 10, 12 seconds (set `SORA_SECONDS`)

## Supported Resolutions
- Default: OpenAI's default (typically 1280x720 or 1920x1080)
- Can specify custom size via `SORA_SIZE` (e.g., "1920x1080")

## Pricing Notes
⚠️ **SORA is a paid feature from OpenAI**
- Sora 2: ~$0.40-0.80 per 5 seconds
- Sora 2 Pro: Higher cost for better quality
- Check your OpenAI account for current pricing

## Troubleshooting

### "No module named 'openai'"
```bash
poetry install
```

### "OPENAI_API_KEY not set"
Make sure your `.env` file has a valid OpenAI API key.

### "Create failed 404"
Sora may not be available in your region or account. Check:
1. OpenAI account has Sora access
2. API key has proper permissions
3. Region restrictions

### Videos not appearing
Check the `videos/` directory:
```bash
ls -la videos/
```

### Long wait times
Video generation typically takes:
- 30-90 seconds for 5 seconds of video
- 60-180 seconds for 12 seconds of video

Watch the console for `[Sora]` log messages showing progress.

## Example Prompts

Good prompts are detailed and cinematic:
```
A close-up shot of a vintage typewriter, keys slowly pressing down
as invisible fingers type. Warm afternoon light streams through
a nearby window. Shallow depth of field, nostalgic mood.
```

```
Wide aerial shot of a misty forest at dawn. Camera slowly descends
through the canopy as birds take flight. Soft golden light filters
through the trees. Ethereal and peaceful atmosphere.
```

## Testing SORA

To test if SORA is working:
1. Restart the GUI: `poetry run python main.py`
2. Select "Sora 2" for AI-1
3. Type: "A red ball bouncing on a wooden floor"
4. Watch the console for `[Sora]` messages
5. Check the `videos/` folder after 1-2 minutes

## Need Help?

If SORA still doesn't work:
1. Check console output for error messages
2. Verify OpenAI API key is valid
3. Confirm Sora access in your OpenAI account
4. Check `videos/` folder permissions

Happy video generation! 🎬