90 changes: 90 additions & 0 deletions llm/llm_opinion_dynamics/README.md
# LLM Opinion Dynamics

An agent-based model of opinion dynamics powered by large language model (LLM) agents, built with [Mesa](https://github.com/projectmesa/mesa) and [Mesa-LLM](https://github.com/projectmesa/mesa-llm).

## Overview

Classical opinion dynamics models like [Deffuant-Weisbuch](../deffuant_weisbuch/) use mathematical rules to update agent opinions — if two agents are close enough in opinion, they converge by a fixed amount. While elegant, this misses the richness of real human persuasion.
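The classical bounded-confidence update can be sketched in a few lines (a minimal illustration using the standard μ convergence-rate and ε threshold parameters; this is not code from this repository):

```python
def dw_update(x_i: float, x_j: float, mu: float = 0.5, eps: float = 1.0):
    """One Deffuant-Weisbuch encounter: both opinions move toward each
    other by a fixed fraction mu, but only if they differ by less than
    the confidence threshold eps."""
    if abs(x_i - x_j) < eps:
        return x_i + mu * (x_j - x_i), x_j + mu * (x_i - x_j)
    return x_i, x_j  # too far apart: no interaction at all
```

Note the hard cutoff: an agent at 0.0 can never move an agent at 10.0, no matter how good its argument is. That is exactly the limitation the LLM-based model below removes.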

This model replaces the math with genuine LLM reasoning. Each agent:
1. **Observes** its neighbors' current opinion scores
2. **Reasons** about whether their arguments are convincing
3. **Updates** its opinion score based on the quality of reasoning — not just proximity

This produces emergent behaviors that classical models cannot capture:
- Agents can be **stubbornly resistant** to persuasion even when numerically close
- Agents can **leap across** opinion gaps if an argument is compelling enough
- **Polarization** and **consensus** emerge from genuine reasoning, not formulas

## The Model

Agents are placed on a grid. At each step:
- Each agent observes its Moore neighborhood (up to 8 neighbors)
- It constructs a prompt summarizing neighbor opinions
- The LLM (e.g. Gemini Flash) reasons about whether to update its opinion
- The new opinion score (0-10) is extracted and stored
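The extraction step amounts to pulling the first number out of the model's reply and clamping it to the valid range, as in this simplified sketch of the parsing logic in `agent.py`:

```python
import re

def extract_opinion(response_text: str, fallback: float) -> float:
    """Return the first number found in an LLM reply, clamped to [0, 10].
    Falls back to the agent's current opinion if no number is present."""
    numbers = re.findall(r"\b\d+\.?\d*\b", response_text)
    if not numbers:
        return fallback
    return max(0.0, min(10.0, float(numbers[0])))
```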

### Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `n_agents` | Number of agents | 9 |
| `width` | Grid width | 5 |
| `height` | Grid height | 5 |
| `topic` | The debate topic | AI regulation |
| `llm_model` | LLM model string | `gemini/gemini-2.0-flash` |

## Running the Model

Set your API key:
```bash
export GEMINI_API_KEY=your_key_here
```

Install dependencies:
```bash
pip install -r requirements.txt
```

Run the visualization:
```bash
solara run app.py
```

## Visualization

The Solara dashboard shows three live panels:

| Panel | What it shows |
|-------|--------------|
| **Agent Opinion Grid** | Heatmap of agent opinions on the spatial grid (red = against, green = for) |
| **Opinion Trajectories** | Per-agent opinion over time — reveals convergence, divergence, and stable minorities |
| **Population Dynamics** | Mean opinion + variance — declining variance signals emergent consensus |

**Initial state (Step 0):**

![Initial opinions — random spread across the grid](llm_opinion_dynamics_initial.png)

**After 4 steps of LLM-driven persuasion:**

![Step 4 — two agents converged to 3.8, variance dropped from 15 to 7](llm_opinion_dynamics_dashboard.png)

Notable emergent behaviors visible above:
- Agents 2 & 3 independently converged to **3.8** — emergent clustering, no hardcoded rule
- Agent 4 started at **9.6**, was persuaded by a neighbor at **0.5**, and moved to **2.0** in one step
- Agent 1 (spatially isolated, top of grid) held at **9.8** throughout — isolation preserves extreme opinions
- Variance declined from ~15 → ~7 across 4 steps
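The variance plotted in the dashboard is the plain population variance over all agent opinions, matching `_variance` in `model.py`:

```python
def opinion_variance(opinions: list[float]) -> float:
    """Population variance of the agents' opinion scores; 0.0 if empty."""
    if not opinions:
        return 0.0
    mean = sum(opinions) / len(opinions)
    return sum((o - mean) ** 2 for o in opinions) / len(opinions)
```

A drop from ~15 toward 0 means opinions are bunching together; a plateau at a high value indicates stable polarized clusters.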

## Relationship to Classical Models

| Feature | Deffuant-Weisbuch | LLM Opinion Dynamics |
|---------|-------------------|----------------------|
| Opinion update rule | Mathematical (μ parameter) | LLM reasoning |
| Bounded confidence | Hard threshold (ε) | Emergent from argument quality |
| Agent memory | None | Short-term memory of past interactions |
| Persuasion mechanism | Numeric proximity | Natural language argument |

## References

- Deffuant, G., et al. (2000). *Mixing beliefs among interacting agents*. Advances in Complex Systems.
- Mesa-LLM: [github.com/projectmesa/mesa-llm](https://github.com/projectmesa/mesa-llm)
80 changes: 80 additions & 0 deletions llm/llm_opinion_dynamics/app.py
import matplotlib.pyplot as plt
import solara
from llm_opinion_dynamics.model import LLMOpinionDynamicsModel
from mesa.visualization import SolaraViz, make_plot_component
from mesa.visualization.utils import update_counter

model_params = {
    "n_agents": {
        "type": "SliderInt",
        "value": 9,
        "label": "Number of agents",
        "min": 4,
        "max": 20,
        "step": 1,
    },
    "width": {
        "type": "SliderInt",
        "value": 5,
        "label": "Grid width",
        "min": 3,
        "max": 10,
        "step": 1,
    },
    "height": {
        "type": "SliderInt",
        "value": 5,
        "label": "Grid height",
        "min": 3,
        "max": 10,
        "step": 1,
    },
    "topic": {
        "type": "InputText",
        "value": "Should artificial intelligence be regulated by governments?",
        "label": "Debate topic",
    },
}


def OpinionTrajectoriesPlot(model):
    """Plot opinion trajectories for all agents over time."""
    update_counter.get()

    df = model.datacollector.get_agent_vars_dataframe()

    if df.empty:
        fig, ax = plt.subplots()
        ax.set_title("No data yet — run the model")
        return solara.FigureMatplotlib(fig)

    opinions = df["opinion"].unstack("AgentID")

    fig, ax = plt.subplots(figsize=(7, 5))
    for agent_id in opinions.columns:
        ax.plot(opinions.index, opinions[agent_id], linewidth=1.5, alpha=0.8)

    ax.set_xlabel("Time step")
    ax.set_ylabel("Opinion (0=against, 10=for)")
    # Only add an ellipsis when the topic is actually truncated
    topic_preview = model.topic if len(model.topic) <= 60 else model.topic[:60] + "..."
    ax.set_title(f"Opinion Trajectories\nTopic: {topic_preview}")
    ax.set_ylim(-0.5, 10.5)
    ax.xaxis.set_major_locator(plt.MaxNLocator(integer=True))

    return solara.FigureMatplotlib(fig)


MeanOpinionPlot = make_plot_component("mean_opinion")
VariancePlot = make_plot_component("opinion_variance")

model = LLMOpinionDynamicsModel()

page = SolaraViz(
    model,
    components=[
        OpinionTrajectoriesPlot,
        MeanOpinionPlot,
        VariancePlot,
    ],
    model_params=model_params,
    name="LLM Opinion Dynamics",
)
76 changes: 76 additions & 0 deletions llm/llm_opinion_dynamics/llm_opinion_dynamics/agent.py
import re

from mesa_llm.llm_agent import LLMAgent
from mesa_llm.reasoning.reasoning import Reasoning


class OpinionAgent(LLMAgent):
    """
    An LLM-powered agent that holds an opinion on a topic and can be
    persuaded by neighboring agents through natural language debate.

    Attributes:
        opinion (float): Current opinion score between 0.0 and 10.0.
        topic (str): The topic being debated.
    """

    def __init__(self, model, reasoning: type[Reasoning], opinion: float, topic: str):
        system_prompt = f"""You are an agent in a social simulation debating the topic: '{topic}'.
Your current opinion score is {opinion:.1f} out of 10 (0=strongly against, 10=strongly for).
When you interact with neighbors:
1. Read their opinion and argument carefully.
2. If their argument is convincing, update your internal_state 'opinion' score closer to theirs.
3. If unconvincing, keep your score or move slightly away.
4. Always respond with your updated opinion score as a number between 0 and 10.
Be concise. Your reasoning should reflect genuine persuasion dynamics."""

        super().__init__(
            model=model,
            reasoning=reasoning,
            system_prompt=system_prompt,
            vision=1,
            internal_state=["opinion"],
        )
        self.opinion = opinion
        self.topic = topic

    def step(self):
        """Each step, observe neighbors and potentially update opinion."""
        obs = self.generate_obs()

        # Only debate if there are neighbors
        if not obs.local_state:
            return

        # Build a prompt summarizing neighbor opinions
        neighbor_summary = "\n".join(
            f"- Agent {uid}: opinion={info['internal_state']}"
            for uid, info in obs.local_state.items()
        )

        step_prompt = f"""Your current opinion on '{self.topic}' is {self.opinion:.1f}/10.

Your neighbors' opinions:
{neighbor_summary}

Based on these interactions, decide whether to update your opinion score.
Respond with ONLY a single number between 0.0 and 10.0 representing your new opinion."""

        plan = self.reasoning.plan(obs, step_prompt=step_prompt)

        # Parse the LLM response to extract the updated opinion
        try:
            response_text = ""
            if hasattr(plan, "llm_plan") and plan.llm_plan:
                for block in plan.llm_plan:
                    if hasattr(block, "text"):
                        response_text += block.text
            # Extract the first float in the response and clamp to [0, 10]
            numbers = re.findall(r"\b\d+\.?\d*\b", response_text)
            if numbers:
                new_opinion = float(numbers[0])
                self.opinion = max(0.0, min(10.0, new_opinion))
                self.internal_state = [f"opinion:{self.opinion:.1f}"]
        except (ValueError, IndexError):
            pass  # Keep current opinion if parsing fails
84 changes: 84 additions & 0 deletions llm/llm_opinion_dynamics/llm_opinion_dynamics/model.py
import mesa
from mesa.discrete_space import OrthogonalMooreGrid
from mesa_llm.reasoning.cot import CoTReasoning

from .agent import OpinionAgent


class LLMOpinionDynamicsModel(mesa.Model):
    """
    An agent-based model of opinion dynamics powered by LLM agents.

    Unlike classical opinion dynamics models (e.g. Deffuant-Weisbuch) that use
    mathematical convergence rules, this model lets agents genuinely reason about
    their neighbors' arguments using a large language model, producing more
    nuanced and emergent opinion change patterns.

    Each agent holds a numeric opinion score (0-10) on a given topic.
    At each step, agents observe their neighbors and decide whether to update
    their opinion based on LLM-driven reasoning about the arguments presented.

    Args:
        n_agents (int): Number of agents in the simulation.
        width (int): Width of the grid.
        height (int): Height of the grid.
        topic (str): The debate topic agents will discuss.
        llm_model (str): LLM model string in 'provider/model' format.
        rng: Random number generator seed.
    """

    def __init__(
        self,
        n_agents: int = 9,
        width: int = 5,
        height: int = 5,
        topic: str = "Should artificial intelligence be regulated by governments?",
        llm_model: str = "gemini/gemini-2.0-flash",
        rng=None,
    ):
        super().__init__(rng=rng)

        self.topic = topic
        self.grid = OrthogonalMooreGrid((width, height), torus=True, random=self.random)

        self.datacollector = mesa.DataCollector(
            agent_reporters={"opinion": "opinion"},
            model_reporters={
                "mean_opinion": lambda m: sum(a.opinion for a in m.agents)
                / len(m.agents),
                "opinion_variance": lambda m: self._variance(m),
            },
        )

        # Place agents on random cells
        cells = list(self.grid.all_cells)
        self.random.shuffle(cells)
        selected_cells = cells[:n_agents]

        for cell in selected_cells:
            initial_opinion = self.random.uniform(0.0, 10.0)
            agent = OpinionAgent(
                model=self,
                reasoning=CoTReasoning,
                opinion=initial_opinion,
                topic=topic,
            )
            agent.cell = cell
            agent.pos = cell.coordinate

        self.running = True
        self.datacollector.collect(self)

    def step(self):
        """Advance the model by one step."""
        self.agents.shuffle_do("step")
        self.datacollector.collect(self)

    @staticmethod
    def _variance(model):
        """Calculate opinion variance across all agents."""
        opinions = [a.opinion for a in model.agents]
        if not opinions:
            return 0.0
        mean = sum(opinions) / len(opinions)
        return sum((o - mean) ** 2 for o in opinions) / len(opinions)
2 changes: 2 additions & 0 deletions llm/llm_opinion_dynamics/requirements.txt
mesa[viz]>=3.0
mesa-llm>=0.1.0
1 change: 1 addition & 0 deletions llm/llm_prisoners_dilemma/.env
GEMINI_API_KEY=your_key_here