
feat: add LLM Schelling Segregation example using mesa-llm#363

Open
abhinavk0220 wants to merge 9 commits into mesa:main from abhinavk0220:add/llm-schelling-segregation

Conversation


@abhinavk0220 abhinavk0220 commented Mar 6, 2026

Summary

An LLM-powered reimplementation of Schelling's (1971) classic segregation model where agents reason in natural language about their neighborhood before deciding to stay or move — instead of following a fixed satisfaction threshold.

The Model

Agents of two groups (A and B) are placed on a 5×5 grid. Each step:

  1. Each agent observes its Moore neighborhood (up to 8 neighbors on a torus grid)
  2. It describes the neighborhood composition to the LLM
  3. The LLM decides: `happy` (stay) or `unhappy` (move to a random empty cell)

The simulation tracks happiness levels and a segregation index over time. The model stops when all agents are happy.
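The per-step loop above can be sketched roughly as follows. This is a minimal sketch, not the PR's actual code: `StubModel`, `get_moore_neighbors`, `ask_llm`, and `move_to_random_empty_cell` are illustrative names standing in for the real Mesa/mesa-llm API.

```python
class StubModel:
    """Stand-in for the Mesa model: fixed neighbors and a canned LLM reply."""

    def __init__(self, neighbors, llm_reply):
        self._neighbors = neighbors
        self._reply = llm_reply
        self.moved = False

    def get_moore_neighbors(self, agent):
        return self._neighbors

    def ask_llm(self, prompt):
        return self._reply  # a real model would call the configured LLM here

    def move_to_random_empty_cell(self, agent):
        self.moved = True


class SchellingAgent:
    """Illustrative sketch of the three-step logic described above."""

    def __init__(self, group, model):
        self.group = group  # "A" or "B"
        self.model = model
        self.is_happy = False

    def step(self):
        # 1. Observe the Moore neighborhood (up to 8 neighbors on a torus).
        neighbors = self.model.get_moore_neighbors(self)
        same = sum(1 for n in neighbors if n.group == self.group)
        other = len(neighbors) - same

        # 2. Describe the neighborhood composition to the LLM.
        prompt = (
            f"I belong to Group {self.group}. My neighborhood has "
            f"{same} same-group and {other} other-group neighbors. "
            "Answer 'happy' (stay) or 'unhappy' (move)."
        )

        # 3. Act on the LLM's one-word decision.
        self.is_happy = self.model.ask_llm(prompt) == "happy"
        if not self.is_happy:
            self.model.move_to_random_empty_cell(self)
```

The stub model makes the sketch testable without API calls; in the real example the decision comes from the configured LLM provider.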

What makes this different from classical Schelling

Classical Schelling uses a fixed threshold rule — an agent moves if fewer than X% of its neighbors share its group. The outcome is mathematically determined by that threshold.
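For contrast, the classical rule fits in a few lines. The 30% threshold below is an assumed example value, not a parameter from this PR:

```python
def classical_is_happy(same, other, threshold=0.3):
    """Classical Schelling rule: stay iff at least `threshold` of the
    neighbors share the agent's group. An agent with no neighbors is
    conventionally happy."""
    total = same + other
    if total == 0:
        return True
    return same / total >= threshold
```

With this rule the outcome follows mechanically from the ratio; there is no room for the contextual judgment the LLM agents show below.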

Here, agents reason at each step:

"I belong to Group A. My neighborhood has 3 Group A and 4 Group B neighbors.
The mix is manageable — I feel comfortable here. I'll stay."

This produces qualitatively different dynamics. LLM agents weigh context, not just ratios.

Visualization

Initial state (Step 0) — random placement:

Screenshot 2026-03-21 115513

After 2 LLM reasoning steps — stable integration:

Screenshot 2026-03-21 120234
| Step | Happy | Unhappy | Segregation Index | What happened |
|------|-------|---------|-------------------|---------------|
| 0    |       |         |                   | Random initial placement |
| 1    | ~16   | ~0      | 0.00              | LLM agents assess neighbors, most decide to stay |
| 2    | 19    | 0       | 0.00              | All agents happy — simulation stops |

Why this matters: The classical Schelling model always produces segregation from even mild preferences. The LLM version produced zero segregation — agents reasoned their way to comfort in a diverse neighborhood. No hardcoded tolerance parameter. The LLM considered context and decided the mixed state was acceptable. This is something a fixed-threshold agent cannot do.
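The PR tracks a segregation index but its exact formula isn't shown in this description. One plausible definition, sketched purely for illustration, measures how far the mean share of same-group neighbors exceeds the 50% expected in a fully mixed two-group population:

```python
def segregation_index(agents, neighbors_of):
    """Illustrative segregation index (assumed, not the PR's formula):
    mean excess of same-group neighbor share over the 0.5 expected in a
    fully mixed population, rescaled to [0, 1].
    0.0 = mixed or better, 1.0 = fully segregated."""
    shares = []
    for agent in agents:
        nbrs = neighbors_of(agent)
        if not nbrs:
            continue  # isolated agents contribute nothing
        same = sum(1 for n in nbrs if n.group == agent.group)
        shares.append(same / len(nbrs))
    if not shares:
        return 0.0
    mean_share = sum(shares) / len(shares)
    return max(0.0, (mean_share - 0.5) * 2)
```

Under a definition like this, the 0.00 readings in the table would mean the grid stayed at (or below) the fully mixed baseline.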

How to Run

```shell
cp .env.example .env  # fill in your API key
pip install -r requirements.txt
solara run app.py
```

Supported LLM Providers

Gemini, OpenAI, Anthropic, Ollama (local) — configured via `.env`.
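A minimal `.env` might look like the fragment below. The variable names are illustrative only; check the repository's `.env.example` for the actual keys it expects.

```
# Illustrative .env fragment — see .env.example for the real variable names
LLM_PROVIDER=gemini          # or: openai, anthropic, ollama
GEMINI_API_KEY=your-key-here
```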

Reference

Schelling, T.C. (1971). Dynamic models of segregation.
Journal of Mathematical Sociology, 1(2), 143–186.


GSoC contributor checklist

Context & motivation

The Schelling Segregation model is one of the most studied models in ABM — it shows how individual preferences produce macro-level segregation. Adding LLM agents replaces the fixed satisfaction threshold with genuine reasoning, letting agents weigh multiple factors before deciding to move.

What I learned

LLM agents produced zero segregation in this run — all 19 agents chose to stay happy in their mixed neighborhoods after just 2 reasoning steps. The model stopped because `all(agent.is_happy for agent in model.agents)` became True. The classical Schelling model never stops this way — it keeps segregating until equilibrium. The LLM version found a stable, integrated equilibrium through reasoning, not rules.
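The stopping check quoted above would typically live in the model's `step` method. A sketch, following the common Mesa convention that setting `running = False` halts the simulation (method and attribute names other than the quoted check are assumptions):

```python
class LLMSchellingModel:
    """Sketch showing only the stopping logic; not the PR's full model."""

    def __init__(self, agents):
        self.agents = agents
        self.running = True

    def step(self):
        for agent in self.agents:
            agent.step()
        # The check from the PR description: stop once every agent is happy.
        if all(agent.is_happy for agent in self.agents):
            self.running = False
```

In the run described above, this condition became True after step 2, with all 19 agents happy.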

Learning repo

🔗 My learning repo: https://github.com/abhinavk0220/GSoC-learning-space
🔗 Relevant model(s): https://github.com/abhinavk0220/GSoC-learning-space/tree/main/models/llm_schelling

Readiness checks

  • This PR addresses an agreed-upon problem (linked issue or discussion with maintainer approval), or is a small/trivial fix
  • I have read the contributing guide and deprecation policy
  • I have performed a self-review of my own PR
  • Another GSoC contributor has reviewed this PR:
  • Tests pass locally
  • Code is formatted (`ruff check . --fix`)
  • Screenshots added showing live run output

AI Assistance Disclosure

This PR was developed with AI assistance (Claude) for code generation and debugging. All code has been reviewed, tested, and understood by the contributor.


Mesa Examples Review Checklist (#390)

Does it belong?

  • No significant overlap with existing examples
  • Well-scoped simplest model that demonstrates the idea
  • Showcases Mesa features not already well-covered
  • Showcases interesting ABM mechanics (LLM reasoning vs rule-based dynamics)

Is it correct?

  • Uses current Mesa APIs (OrthogonalMooreGrid, DataCollector)
  • Runs and visualizes out of the box (requires API key — see .env.example)
  • Deterministic with fixed `rng` seed — LLM outputs are non-deterministic by nature
  • Moved to `llm/` directory

Is it clean?

  • No dead code or unused imports
  • Clear naming, logic readable
  • README explains what it does, what it demonstrates, how to run it
  • PR follows template, commits reasonably clean

abhinavKumar0206 and others added 2 commits March 6, 2026 15:57
…Schelling's (1971) classic segregation model using LLM agents instead of a fixed tolerance threshold.

Each agent reasons in natural language about its neighborhood composition and decides whether to stay ('happy') or relocate ('unhappy'). This produces richer segregation dynamics than the classical threshold rule.

Includes:
- SchellingAgent extending LLMAgent with CoT reasoning
- LLMSchellingModel on OrthogonalMooreGrid with torus=True
- Segregation index metric tracked over time
- SolaraViz with grid plot, happiness chart, and segregation index
- README with comparison table vs classical Schelling model

Reference: Schelling, T.C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143-186.

Related: mesa/mesa-llm#153

EwoutH commented Mar 15, 2026

Thanks for the PR, looks like an interesting model.

Could you:

  • Check out the test failure
  • Request a peer-review (and maybe do one or two yourself)

All LLM PRs will go in a new llm folder, but that can be done later.

@abhinavk0220 (Author) replied:
> Thanks for the PR, looks like an interesting model.
>
> Could you:
>
>   • Check out the test failure
>   • Request a peer-review (and maybe do one or two yourself)
>
> All LLM PRs will go in a new llm folder, but that can be done later.

Thanks @EwoutH! Same CI issue as the other LLM PRs:
`ModuleNotFoundError: No module named 'mesa_llm'`.
Will move to the llm/ directory. Just left a peer review on #384, looking for
a reviewer in return!

abhinavKumar0206 and others added 7 commits March 16, 2026 08:50
…elling model

- Replace separate HappinessPlot + SegregationPlot with combined SchellingStatsPlot
  to fix component overlap in SolaraViz layout
- Add dark theme styling to stats panel (matching other LLM examples)
- Change default grid size from 10x10 to 5x5 to reduce LLM API calls per step
- Add explicit dotenv loading and UTF-8 reconfiguration for Solara compatibility
- Switch default LLM provider from gemini to cerebras/llama3.1-8b

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add step0_initial and step2_integration screenshots from live run
- Document emergent finding: LLM agents chose integration over segregation
  (zero segregation index, all agents happy after 2 steps)
- Rewrite README to match LLM examples format with visualization section,
  round-by-round table, and contrast with classical Schelling (1971)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add noqa: E402 to imports after load_dotenv (required for Solara)
- Replace lambda wrapper with direct static method reference (PLW0108)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>