
Add misinformation spread model with LLM and rule-based agents#384

Open
savirpatil wants to merge 21 commits into mesa:main from savirpatil:add-misinformation-spread-model

Conversation


@savirpatil savirpatil commented Mar 14, 2026

Summary

Adds a misinformation spread model comparing LLM-driven agents against a
rule-based approach on an Erdos-Renyi social network.

Motivation

Exercises Mesa-LLM agents that accumulate memory across steps, so that repeated exposure acts as social pressure. The model shows how personality types and repeated exposure shape misinformation dynamics at the population level.

Implementation

  • Builds a random social network using networkx.erdos_renyi_graph wrapped
    in Mesa's Network discrete space
  • Three LLM agent types (BelieverAgent, SkepticAgent, SpreaderAgent) call
    litellm.completion() directly, passing times_exposed and full decision
    history into each prompt to simulate social pressure
  • Rule-based equivalents (fixed probability thresholds) run on identical
    network structure as a baseline
  • Key finding: LLM skeptics resist at first but are eventually
    convinced by repeated exposure, producing an S-curve of belief
    adoption that overtakes the rule-based baseline
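
The rule-based baseline described above can be sketched as follows (a minimal stdlib stand-in; the PR's actual thresholds, graph parameters, and Mesa `Network` wrapping are not reproduced here):

```python
import random

def erdos_renyi(n, p, rng):
    """Adjacency sets for a G(n, p) random graph; a stdlib stand-in
    for networkx.erdos_renyi_graph used by the PR."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def step(adj, believes, p_adopt, rng):
    """One rule-based update: each non-believer adopts the claim with
    fixed probability p_adopt per believing neighbor (a hypothetical
    rule; the PR's actual thresholds may differ)."""
    nxt = dict(believes)
    for node, nbrs in adj.items():
        if not believes[node]:
            exposures = sum(believes[nb] for nb in nbrs)
            if any(rng.random() < p_adopt for _ in range(exposures)):
                nxt[node] = True
    return nxt

rng = random.Random(42)
adj = erdos_renyi(50, 0.1, rng)
believes = {i: i < 5 for i in range(50)}  # seed 5 initial spreaders
for _ in range(10):
    believes = step(adj, believes, 0.2, rng)
```

The LLM variant replaces the fixed `p_adopt` threshold with a prompt that carries each agent's exposure count and decision history.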

Usage

# Static 2-panel comparison chart
python run.py

# Interactive Solara dashboard
solara run app.py

Additional Notes

  • Requires Ollama running locally with llama3 pulled (ollama pull llama3)
  • Includes pytests covering initialization, stepping, bounds, and
    reproducibility
[Three screenshots attached, taken 2026-03-15 at 5:22 PM]

This PR was developed with AI assistance (Claude) for code generation
and debugging. All code was reviewed, tested, and understood before the pull request was opened.

@EwoutH
Member

EwoutH commented Mar 15, 2026

Thanks for the PR, looks like a cool model.

Could you:

  • Check out the test failure
  • Add a screenshot or gif of the visualisation to the PR description
  • Request a peer-review (and maybe do one or two yourself)

@abhinavk0220


Hi @savirpatil! Reviewed against the #390 checklist.

Does it belong?
The LLM vs rule-based comparison angle is genuinely interesting and
the S-curve finding is a meaningful result. No significant overlap
with existing examples.

Is it correct?

  • The model calls litellm.completion() directly rather than using
    mesa_llm's LLMAgent and reasoning classes. Since this is a
    mesa-examples LLM model, using the mesa-llm abstractions would be
    more consistent with the other LLM examples and better showcases
    what mesa-llm offers.
  • Requiring Ollama running locally is a barrier; could you add a
    note in the README about this requirement and consider supporting
    cloud providers (Gemini, OpenAI) as alternatives via the .env
    pattern the other LLM examples use?
  • Does the model produce deterministic results with a fixed rng
    seed passed to Model()?
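
The determinism question in the last bullet can be pinned down with a test of roughly this shape (a sketch on a toy stand-in; the PR's real model class and seed plumbing are assumptions):

```python
import random

class ToyModel:
    """Stand-in for the PR's model class, assuming every stochastic
    choice flows through one random.Random seeded in __init__."""
    def __init__(self, n=20, seed=None):
        self.rng = random.Random(seed)
        self.beliefs = [self.rng.random() < 0.3 for _ in range(n)]

    def step(self):
        i = self.rng.randrange(len(self.beliefs))
        self.beliefs[i] = self.rng.random() < 0.5

def run(seed, steps=50):
    model = ToyModel(seed=seed)
    for _ in range(steps):
        model.step()
    return model.beliefs

# Same seed, same trajectory: the reproducibility property asked about.
assert run(seed=7) == run(seed=7)
```

Note that the LLM-driven variant can only be deterministic up to the LLM backend's own sampling settings; the rule-based baseline is where this property is fully testable.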

Is it clean?

  • At 1,209 lines this is quite large. Are there separate rule-based
    and LLM model classes that could potentially be split or simplified?
  • The README explains the model well. Could you add a screenshot or
    GIF of the Solara visualization?

Overall a cool concept; the personality types (Believer, Skeptic,
Spreader) and repeated-exposure mechanics are a nice touch. The main
suggestion is aligning with mesa-llm patterns for consistency.

@ZhehaoZhao423

Hi @savirpatil, reviewing this as part of the GSoC peer-review process (following the #390 guidelines). First off, great minds think alike! I recently built an Information Cascade benchmark in my learning space, so your focus on misinformation spread and emergent social pressure really resonates with me. Excellent concept!

Here are my thoughts after running and reviewing the code:

1. Model Behavior & Emergence (Run & Play):
I ran the model locally. The S-curve emergence of the SkepticAgent is a textbook example of non-linear tipping points in network science. Seeing the skeptics initially resist and then completely cave under repeated exposure (social pressure) purely driven by LLM context accumulation is a fantastic demonstration of why LLMs elevate traditional ABMs.

2. Architecture & Latency "Ghost" Overhead (Important):
Looking closely at agents.py, I noticed a significant architectural redundancy. You initialize self.memory = STLTMemory(...) in your agents, but in your step() function, you manually construct the prompt using a custom self.history = [] list.

  • The Danger: Even though you aren't actively using self.memory in your prompt, STLTMemory is still running in the background. Once its hidden short-term buffer fills up, it will trigger synchronous, blocking LLM calls (_update_long_term_memory) that do nothing but massively spike your step latency. (I empirically benchmarked this bottleneck recently: it caused the main thread to freeze for 67 to 98 seconds during a single step with just 4 agents! See my data here: information cascade)
  • Suggestion: Since you are managing history manually, you should completely remove the STLTMemory initialization to save API costs and eliminate this ghost latency.
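
The suggested fix amounts to something like this (a sketch; the class and attribute names follow the review, and mesa_llm's STLTMemory behavior is as the reviewer describes it, not independently verified):

```python
class SkepticAgent:
    """Minimal sketch of the agent after removing the unused memory
    object: the prompt is built entirely from the manual history list.
    """
    def __init__(self, unique_id):
        self.unique_id = unique_id
        # Removed: self.memory = STLTMemory(...)
        # Per the review, its hidden short-term buffer would eventually
        # trigger blocking LLM summarization calls the prompt never uses.
        self.history = []  # decisions recorded manually each step

    def build_prompt(self, times_exposed):
        past = "; ".join(self.history) or "none so far"
        return (f"You have been exposed to the claim {times_exposed} "
                f"times. Your past decisions: {past}. "
                f"Do you now believe it?")
```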

3. API Compliance (Mesa 4.x):
In model.py (lines 126 & 185), you are using self.steps for your print statements (print(f"\n--- Model step {self.steps} ---")).

  • Heads-up: In the latest Mesa 4.x architecture, model.steps has been deprecated and completely removed in favor of model.time. This will throw an AttributeError on the latest main branch. Simply updating it to self.time will future-proof your code!

4. Cleanliness:
I agree with Abhinav regarding the file size. Splitting the rule-based baseline into a separate file might make the repository more approachable for beginners.

Overall, this is a highly valuable addition to the examples library. Great job isolating the LLM vs. Rule-based dynamics!

@Harshini2411

Cool model, the LLM vs rule-based comparison is a genuinely interesting angle and the S-curve result is a nice emergent behavior to showcase.

Abhinav and ZhehaoZhao already caught the main issues and it looks like the STLTMemory ghost overhead and self.steps fixes have been addressed. Good work on turning those around quickly.

One thing still open from Abhinav's review - the agents call litellm.completion() directly instead of using mesa-llm's LLMAgent abstractions. Since the other LLM examples in this repo use mesa-llm patterns, it would be worth aligning here for consistency. Also the .env pattern for supporting cloud providers (OpenAI, Gemini) as alternatives to local Ollama would lower the barrier for people trying this out.
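
The .env provider pattern could be wired up along these lines (a sketch; the variable names `LLM_PROVIDER`/`LLM_MODEL` and the defaults are assumptions, not the repo's actual convention):

```python
import os

def resolve_model():
    """Pick the litellm model string from the environment, so local
    Ollama stays the default but cloud providers work without code
    changes. Names and defaults here are illustrative assumptions."""
    provider = os.environ.get("LLM_PROVIDER", "ollama")
    defaults = {
        "ollama": "ollama/llama3",  # local default, matching the PR
        "openai": "gpt-4o-mini",
        "gemini": "gemini/gemini-1.5-flash",
    }
    if provider not in defaults:
        raise ValueError(f"unknown LLM_PROVIDER: {provider}")
    return os.environ.get("LLM_MODEL", defaults[provider])

# Agents would then call litellm.completion(model=resolve_model(), ...)
```

Switching providers then only requires editing the .env file, which also makes CI runs against a hosted model straightforward.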

Otherwise looking good!
