feat: add LLM Schelling Segregation example using mesa-llm#363
Open
abhinavk0220 wants to merge 9 commits into mesa:main from
Conversation
…Schelling's (1971) classic segregation model using LLM agents instead of a fixed tolerance threshold. Each agent reasons in natural language about its neighborhood composition and decides whether to stay ('happy') or relocate ('unhappy'). This produces richer segregation dynamics than the classical threshold rule.

Includes:
- SchellingAgent extending LLMAgent with CoT reasoning
- LLMSchellingModel on OrthogonalMooreGrid with torus=True
- Segregation index metric tracked over time
- SolaraViz with grid plot, happiness chart, and segregation index
- README with comparison table vs classical Schelling model

Reference: Schelling, T.C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143-186.
Related: mesa/mesa-llm#153
for more information, see https://pre-commit.ci
Member
Thanks for the PR, looks like an interesting model. Could you:
All LLM PRs will go in a new
Author
Thanks @EwoutH! Same CI issue as the other LLM PRs
…hinavk0220/mesa-examples into add/llm-schelling-segregation
…elling model

- Replace separate HappinessPlot + SegregationPlot with combined SchellingStatsPlot to fix component overlap in SolaraViz layout
- Add dark theme styling to stats panel (matching other LLM examples)
- Change default grid size from 10x10 to 5x5 to reduce LLM API calls per step
- Add explicit dotenv loading and UTF-8 reconfiguration for Solara compatibility
- Switch default LLM provider from gemini to cerebras/llama3.1-8b

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add step0_initial and step2_integration screenshots from live run
- Document emergent finding: LLM agents chose integration over segregation (zero segregation index, all agents happy after 2 steps)
- Rewrite README to match LLM examples format with visualization section, round-by-round table, and contrast with classical Schelling (1971)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add noqa: E402 to imports after load_dotenv (required for Solara)
- Replace lambda wrapper with direct static method reference (PLW0108)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
An LLM-powered reimplementation of Schelling's (1971) classic segregation model where agents reason in natural language about their neighborhood before deciding to stay or move — instead of following a fixed satisfaction threshold.
The Model
Agents of two groups (A and B) are placed on a 5×5 grid. Each step:
The simulation tracks happiness levels and a segregation index over time. The model stops when all agents are happy.
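The segregation index tracked by the model can be sketched in plain Python. This is an illustrative stand-in, not the example's actual implementation: it uses one common definition (the fraction of Moore-neighbor agent pairs that share a group), and the `grid` mapping is an assumption made for the sketch.

```python
def segregation_index(grid):
    """Fraction of neighboring agent pairs that share a group.

    `grid` maps (x, y) -> group label for occupied cells only.
    One common definition of the metric, assumed here -- the
    mesa-llm example may compute it differently.
    """
    same = total = 0
    for (x, y), group in grid.items():
        # Scan the 8 Moore neighbors of each occupied cell.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                neighbor = grid.get((x + dx, y + dy))
                if neighbor is not None:
                    total += 1
                    same += neighbor == group
    return same / total if total else 0.0

# Three agents on a corner: one same-group pair, two mixed pairs.
grid = {(0, 0): "A", (0, 1): "A", (1, 0): "B"}
print(round(segregation_index(grid), 3))  # -> 0.333
```

An index of 0.0 means every neighboring pair is mixed (the "stable integration" outcome reported below); 1.0 means fully segregated clusters.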
What makes this different from classical Schelling
Classical Schelling uses a fixed threshold rule — an agent moves if fewer than X% of its neighbors share its group. The outcome is mathematically determined by that threshold.
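For contrast, the classical threshold rule fits in a few lines. This is a generic sketch of Schelling's rule, not code from this PR; the 0.3 threshold is an illustrative value.

```python
def is_happy(own_group, neighbor_groups, threshold=0.3):
    """Classical Schelling rule: an agent is happy iff at least
    `threshold` of its occupied neighboring cells hold its own
    group. Threshold value here is illustrative."""
    if not neighbor_groups:
        return True  # no neighbors: conventionally happy
    same = sum(g == own_group for g in neighbor_groups)
    return same / len(neighbor_groups) >= threshold

print(is_happy("A", ["A", "B", "B", "B"]))  # 0.25 < 0.3 -> False
print(is_happy("A", ["A", "A", "B"]))       # 0.67 >= 0.3 -> True
```

Everything about the agent's decision is captured by that one ratio comparison, which is exactly what the LLM version replaces.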
Here, agents reason at each step:
This produces qualitatively different dynamics. LLM agents weigh context, not just ratios.
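The reasoning step amounts to building a natural-language prompt from the agent's local observation and parsing a stay/move decision from the reply. The sketch below shows the shape of such a prompt; the function name and wording are hypothetical, and the actual mesa-llm example constructs its prompts through LLMAgent rather than by hand.

```python
def build_prompt(own_group, neighbor_groups):
    """Illustrative stay/move prompt for an LLM Schelling agent.

    The real example delegates prompting to mesa-llm's LLMAgent;
    this only shows what information the model is given.
    """
    same = sum(g == own_group for g in neighbor_groups)
    return (
        f"You are an agent of group {own_group} on a grid. "
        f"{same} of your {len(neighbor_groups)} neighbors share "
        "your group. Think step by step about whether you feel "
        "comfortable in this neighborhood, then answer with "
        "exactly STAY or MOVE."
    )

print(build_prompt("A", ["A", "B", "B"]))
```

Because the decision comes from free-form reasoning over this context rather than a ratio test, two agents with identical neighborhoods can justify different choices.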
Visualization
Initial state (Step 0) — random placement:
After 2 LLM reasoning steps — stable integration:
Why this matters: The classical Schelling model always produces segregation from even mild preferences. The LLM version produced zero segregation — agents reasoned their way to comfort in a diverse neighborhood. No hardcoded tolerance parameter. The LLM considered context and decided the mixed state was acceptable. This is something a fixed-threshold agent cannot do.
How to Run
```shell
cp .env.example .env   # fill in your API key
pip install -r requirements.txt
solara run app.py
```

Supported LLM Providers
Gemini, OpenAI, Anthropic, Ollama (local) — configured via `.env`.
Reference
Schelling, T.C. (1971). Dynamic models of segregation.
Journal of Mathematical Sociology, 1(2), 143–186.
GSoC contributor checklist
Context & motivation
The Schelling Segregation model is one of the most studied models in ABM — it shows how individual preferences produce macro-level segregation. Adding LLM agents replaces the fixed satisfaction threshold with genuine reasoning, letting agents weigh multiple factors before deciding to move.
What I learned
LLM agents produced zero segregation in this run — all 19 agents chose to stay happy in their mixed neighborhoods after just 2 reasoning steps. The model stopped because `all(agent.is_happy for agent in model.agents)` became True. The classical Schelling model never stops this way — it keeps segregating until equilibrium. The LLM version found a stable, integrated equilibrium through reasoning, not rules.
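The stopping behavior described above can be mimicked with a toy loop. This is a minimal stand-in, not the example's model class: the `Agent` here flips to happy at random instead of via LLM reasoning, purely to show the `all(...)` termination condition in action.

```python
import random

random.seed(0)  # deterministic toy run

class Agent:
    """Toy agent: stands in for the LLM-reasoning agent."""
    def __init__(self):
        self.is_happy = False

    def step(self):
        # Placeholder for the LLM decision: become happy
        # with probability 0.5 each step.
        if random.random() < 0.5:
            self.is_happy = True

agents = [Agent() for _ in range(19)]
steps = 0
while not all(a.is_happy for a in agents):  # same stop rule
    for a in agents:
        a.step()
    steps += 1
print(f"converged after {steps} step(s)")
```

In the real run, convergence took 2 steps because every agent's reasoning ended in "stay"; the classical model lacks this condition and only halts at a segregated fixed point.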
Learning repo
🔗 My learning repo: https://github.com/abhinavk0220/GSoC-learning-space
🔗 Relevant model(s): https://github.com/abhinavk0220/GSoC-learning-space/tree/main/models/llm_schelling
Readiness checks
AI Assistance Disclosure
This PR was developed with AI assistance (Claude) for code generation and debugging. All code has been reviewed, tested, and understood by the contributor.
Mesa Examples Review Checklist (#390)
Does it belong?
Is it correct?
Is it clean?