The World Model is organized in layers, from atomic data to emergent dynamics.
┌─────────────────────────────────────────────────────────────┐
│ DYNAMICS LAYER │
│ Arena orchestrates adversarial competition │
│ Trainer manages epochs, convergence, validation │
├─────────────────────────────────────────────────────────────┤
│ AGENTS LAYER │
│ 7 tendencies compete: propose, stake, win/lose │
├─────────────────────────────────────────────────────────────┤
│ TREES LAYER │
│ Value hierarchies with weight propagation │
│ net_score = direct + sum(pro) - sum(con) │
├─────────────────────────────────────────────────────────────┤
│ OBSERVATIONS LAYER │
│ Atomic facts (~280 bytes), no inherent polarity │
└─────────────────────────────────────────────────────────────┘
Atomic units of information. Sentence-sized, capped. No inherent polarity.
```python
@dataclass
class Observation:
    id: str
    content: str          # ~280 bytes max
    source_id: str        # Which document
    timestamp: datetime
    metadata: dict
```

Examples:
- "He lives paycheck to paycheck at age 42"
- "Delivered Etherlink solo in 3 months"
- "Uses ayahuasca for psychological calibration"
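As a sketch of how an observation might be constructed, assuming the dataclass above; the byte-cap check, id scheme, and source id are illustrative, not the project's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    id: str
    content: str                      # ~280 bytes max, checked below
    source_id: str                    # Which document it came from
    timestamp: datetime
    metadata: dict = field(default_factory=dict)

    def __post_init__(self):
        # Illustrative guard: enforce the ~280-byte cap on UTF-8 content
        if len(self.content.encode("utf-8")) > 280:
            raise ValueError("observation content exceeds 280 bytes")

obs = Observation(
    id="obs-001",                     # hypothetical id scheme
    content="He lives paycheck to paycheck at age 42",
    source_id="doc-journal",          # hypothetical source document
    timestamp=datetime.now(timezone.utc),
)
```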
Seven generic drives that exist in every human:
| Tendency | Optimizes For |
|---|---|
| SURVIVAL | Physical safety, resources, risk mitigation |
| STATUS | Social standing, achievement, recognition |
| MEANING | Significance, impact, legacy, purpose |
| CONNECTION | Relationships, belonging, community |
| AUTONOMY | Independence, self-determination, freedom |
| COMFORT | Ease, pleasure, avoiding pain |
| CURIOSITY | Knowledge, understanding, exploration |
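In code, the seven tendencies map naturally onto an enum. A minimal sketch; the member values are assumptions:

```python
from enum import Enum

class Tendency(Enum):
    SURVIVAL = "survival"
    STATUS = "status"
    MEANING = "meaning"
    CONNECTION = "connection"
    AUTONOMY = "autonomy"
    COMFORT = "comfort"
    CURIOSITY = "curiosity"
```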
Each agent has an allocation (0.0-1.0) representing influence. Allocations sum to 1.0.
```python
@dataclass
class Agent:
    tendency: Tendency
    allocation: float     # Starts at human average
    description: str
```

Default human-average allocations:

```python
DEFAULT_ALLOCATIONS = {
    Tendency.SURVIVAL: 0.18,
    Tendency.STATUS: 0.12,
    Tendency.MEANING: 0.10,
    Tendency.CONNECTION: 0.20,
    Tendency.AUTONOMY: 0.12,
    Tendency.COMFORT: 0.18,
    Tendency.CURIOSITY: 0.10,
}
```

Binary tree structure where observations are positioned PRO or CON relative to a root claim.
```python
@dataclass
class Tree:
    id: str
    root_value: str       # The claim ("Financial security matters")
    root_node: Node

@dataclass
class Node:
    observation_id: str
    content: str
    position: Position            # ROOT, PRO, or CON
    stakes: dict[str, float]      # tendency -> weight
    pro_children: list[Node]
    con_children: list[Node]
```

The core formula from the debate model:

```
net_score = direct_weight + sum(pro_children.score) - sum(con_children.score)
```

A node's strength isn't just its own stakes; it's adjusted by how well its sub-arguments hold up.
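The propagation can be sketched recursively, using a minimal stand-in `Node` with only the fields the formula needs:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    direct_weight: float
    pro_children: list = field(default_factory=list)
    con_children: list = field(default_factory=list)

def net_score(node: Node) -> float:
    # net_score = direct_weight + sum(pro scores) - sum(con scores)
    return (node.direct_weight
            + sum(net_score(c) for c in node.pro_children)
            - sum(net_score(c) for c in node.con_children))

# A PRO child weakened by its own CON sub-argument contributes less:
root = Node(1.0, pro_children=[Node(0.5, con_children=[Node(0.2)])])
score = net_score(root)   # 1.0 + (0.5 - 0.2) = 1.3
```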
Where "life" happens. The Arena orchestrates three phases: proposal, staking, and resolution.

1. Proposal. Each agent proposes a claim (tree root) based on its tendency:
   - SURVIVAL: "Financial security is foundational to wellbeing"
   - MEANING: "Building infrastructure for posthumous continuity is the most significant work"
   - AUTONOMY: "True freedom comes from systems that can't be controlled by power structures"

2. Staking. Agents stake observations on ALL claims:
   - Support own claims: stake PRO on your tree
   - Undermine competitors: stake CON on their trees

   The same observation gets staked multiple times with different positions:

   "Lives paycheck to paycheck at 42"
   -> PRO on SURVIVAL's claim (evidence of financial risk)
   -> PRO on MEANING's claim (sacrifice for purpose)
   -> CON on COMFORT's claim (unsustainable)

3. Resolution.
   - Compute final scores for each claim (weight propagation)
   - Determine winner (highest score)
   - Reallocate influence based on scores

Winners gain allocation. Losers lose it. The equilibrium shifts.
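A minimal sketch of what score-proportional reallocation could look like. This is a hypothetical helper, not the project's `_reallocate`; the blend-then-clamp details are assumptions based on the trainer's `min_allocation`/`max_allocation` bounds:

```python
def reallocate(allocations, scores, learning_rate=0.15,
               min_alloc=0.03, max_alloc=0.50):
    """Blend current allocations toward score-proportional targets (sketch)."""
    total = sum(scores.values())
    # Normalize scores into target allocations that sum to 1.0
    targets = {t: s / total for t, s in scores.items()}
    # Move each allocation a fraction (learning_rate) toward its target
    blended = {t: a + learning_rate * (targets[t] - a)
               for t, a in allocations.items()}
    # Clamp to the configured bounds, then renormalize to sum to 1.0
    clamped = {t: min(max(a, min_alloc), max_alloc) for t, a in blended.items()}
    norm = sum(clamped.values())
    return {t: a / norm for t, a in clamped.items()}

before = {"SURVIVAL": 0.4, "MEANING": 0.3, "COMFORT": 0.3}
after = reallocate(before, {"SURVIVAL": 4.0, "MEANING": 1.0, "COMFORT": 1.0})
# SURVIVAL won the debate, so its share grows; the shares still sum to 1.0
```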
```python
def _reallocate(self, agents, scores, learning_rate):
    # Normalize scores to target allocations
    # Blend current toward target
    # The person "learns" - becomes more oriented toward winning tendencies
    ...
```

The Trainer manages multiple epochs with ML training patterns:
```python
@dataclass
class TrainConfig:
    max_epochs: int = 10
    min_epochs: int = 2
    convergence_threshold: float = 0.005
    patience: int = 3
    initial_lr: float = 0.15
    lr_decay: float = 0.9
    validation_split: float = 0.2
    min_allocation: float = 0.03
    max_allocation: float = 0.50
```

The training loop:

- Split observations into train/validation sets
- For each epoch:
  - Run full debate on training observations
  - Check for convergence (allocation delta < threshold)
  - Decay learning rate
  - Run validation on held-out observations
- Return history with metrics
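Those steps might be wired together like this. A sketch under stated assumptions: `run_debate` is a stand-in for the Arena call, and per-epoch validation on the held-out set is omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:                    # mirrors the config above (subset)
    max_epochs: int = 10
    min_epochs: int = 2
    convergence_threshold: float = 0.005
    patience: int = 3
    initial_lr: float = 0.15
    lr_decay: float = 0.9
    validation_split: float = 0.2

def train(observations, run_debate, config=TrainConfig()):
    # Split observations into train/validation sets
    split = int(len(observations) * (1 - config.validation_split))
    train_obs, val_obs = observations[:split], observations[split:]

    lr, prev_alloc, stall, history = config.initial_lr, None, 0, []
    for epoch in range(config.max_epochs):
        allocations = run_debate(train_obs, lr)   # full debate on the training set
        delta = (max(abs(allocations[t] - prev_alloc[t]) for t in allocations)
                 if prev_alloc else float("inf"))
        history.append({"epoch": epoch, "lr": lr, "delta": delta})
        # Converged once allocations stop moving for `patience` epochs in a row
        stall = stall + 1 if delta < config.convergence_threshold else 0
        if epoch + 1 >= config.min_epochs and stall >= config.patience:
            break
        prev_alloc, lr = allocations, lr * config.lr_decay
    return history
```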
Statistical significance testing:

```python
class Validator:
    def validate(self, test_obs, claims, agents) -> ValidationResult:
        # For each observation, predict which claim/tendency owns it
        # Compare to random baseline (1/7 ≈ 14.3%)
        # Compute p-value via binomial test
        ...
```

Observations (JSON)
│
▼
┌──────────────────┐
│ ObservationStore │
└────────┬─────────┘
│
▼
┌──────────────────┐ ┌──────────────────┐
│ Arena │────▶│ AgentSet │
│ │ │ (7 tendencies) │
│ 1. Proposal │ └──────────────────┘
│ 2. Staking │
│ 3. Resolution │
└────────┬─────────┘
│
▼
┌──────────────────┐
│ TreeStore │──── Claims with evidence trees
└────────┬─────────┘
│
▼
┌──────────────────┐
│ DebateResult │──── Winner, scores, allocation changes
└──────────────────┘
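The binomial test the Validator's comment describes can be computed with the standard library alone. A sketch of the one-sided tail probability; the function name is illustrative:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p0: float = 1 / 7) -> float:
    """One-sided P(X >= successes) under the random-guess baseline p0 = 1/7."""
    return sum(comb(trials, k) * p0 ** k * (1 - p0) ** (trials - k)
               for k in range(successes, trials + 1))

# e.g. 30 correct tendency predictions out of 100 is far above the 14.3% baseline
p = binomial_p_value(30, 100)
```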
```python
model = WorldModel(name="Person")
model.save("person.json")
model = WorldModel.load("person.json")
```

Firestore persistence:

```python
adapter = FirestoreAdapter(db)
await adapter.save_world_model(model)
model = await adapter.load_world_model("person_id")
```

FastAPI service exposing:

| Method | Endpoint | Description |
|---|---|---|
| GET | `/profile/{name}` | Load a world model |
| POST | `/profile/{name}/observations` | Add observations |
| POST | `/profile/{name}/debate` | Run adversarial debate |
| GET | `/profile/{name}/allocations` | Current agent allocations |
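Example calls against the endpoints above. A sketch only: the local host/port and the observation payload shape are assumptions:

```shell
# Hypothetical local deployment on port 8000
curl -s http://localhost:8000/profile/Person
curl -s -X POST http://localhost:8000/profile/Person/observations \
     -H "Content-Type: application/json" \
     -d '[{"content": "Delivered Etherlink solo in 3 months", "source_id": "doc-1"}]'
curl -s -X POST http://localhost:8000/profile/Person/debate
curl -s http://localhost:8000/profile/Person/allocations
```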