Add misinformation spread model with LLM and rule-based agents #384
savirpatil wants to merge 21 commits into mesa:main
Conversation
for more information, see https://pre-commit.ci
Thanks for the PR, looks like a cool model. Could you:
Hi @savirpatil! Reviewed against the #390 checklist: Does it belong? Is it correct? Is it clean?

Overall a cool concept; the personality types (Believer, Skeptic,
Hi @savirpatil, reviewing this as part of the GSoC peer-review process (following the #390 guidelines). First off, great minds think alike! I recently built an Information Cascade benchmark in my learning space, so your focus on misinformation spread and emergent social pressure really resonates with me. Excellent concept!

Here are my thoughts after running and reviewing the code:

1. Model Behavior & Emergence (Run & Play):
2. Architecture & Latency "Ghost" Overhead (Important):
3. API Compliance (Mesa 4.x):
4. Cleanliness:

Overall, this is a highly valuable addition to the examples library. Great job isolating the LLM vs. rule-based dynamics!
Cool model, the LLM vs rule-based comparison is a genuinely interesting angle and the S-curve result is a nice emergent behavior to showcase. Abhinav and ZhehaoZhao already caught the main issues, and it looks like the STLTMemory ghost overhead and self.steps fixes have been addressed; good work on turning those around quickly.

One thing still open from Abhinav's review: the agents call litellm.completion() directly instead of using mesa-llm's LLMAgent abstractions. Since the other LLM examples in this repo use mesa-llm patterns, it would be worth aligning here for consistency. A .env pattern supporting cloud providers (OpenAI, Gemini) as alternatives to local Ollama would also lower the barrier for people trying this out. Otherwise looking good!
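To make the .env suggestion concrete, here is a minimal sketch of provider switching via environment variable. The `MISINFO_MODEL` name and the `agent_decide` helper are assumptions for illustration; litellm infers the provider from the model string, so local Ollama and cloud models use the same call:

```python
import os

# Illustrative .env pattern: the model string comes from the environment, so
# the same code runs against local Ollama or a cloud provider without edits.
# e.g. MISINFO_MODEL="gpt-4o-mini" or MISINFO_MODEL="gemini/gemini-1.5-flash"
MODEL = os.getenv("MISINFO_MODEL", "ollama/llama3")

def agent_decide(prompt: str) -> str:
    """Hypothetical helper: one completion call, provider inferred from MODEL."""
    # Deferred import so the module loads even where litellm is not installed.
    from litellm import completion
    resp = completion(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content
```

With python-dotenv added, a `load_dotenv()` call at startup would populate `MISINFO_MODEL` from a local `.env` file.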
Summary
Adds a misinformation spread model comparing LLM-driven agents against a
rule-based approach on an Erdos-Renyi social network.
Motivation
To exercise Mesa-LLM in a setting where agents accumulate memory across steps, simulating social pressure.
This model demonstrates a scientifically meaningful phenomenon and shows
how personality types and repeated exposure shape misinformation dynamics at
the population level.
Implementation
- Social network: networkx.erdos_renyi_graph wrapped in Mesa's Network discrete space
- LLM agents call litellm.completion() directly, passing times_exposed and the full decision history into each prompt to simulate social pressure
- Rule-based agents spread belief through the network structure as a baseline
- Agents are convinced only after repeated exposure, producing an S-curve that overtakes the rule-based baseline
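The two key pieces above can be sketched as follows. This is illustrative, not the PR's actual code: `build_prompt`, the belief labels, and the stdlib edge sampler (a stand-in for networkx.erdos_renyi_graph) are all assumed names:

```python
import random

def build_prompt(belief: str, times_exposed: int, history: list[str]) -> str:
    """Illustrative prompt assembly: the exposure count and full decision
    history are injected so the LLM feels accumulated social pressure."""
    lines = [
        f"You currently believe the claim is {belief}.",
        f"You have been exposed to the claim {times_exposed} time(s) by neighbors.",
        "Your past decisions, oldest first:",
    ]
    lines += [f"- {h}" for h in history]
    lines.append("Reply with BELIEVE or DOUBT.")
    return "\n".join(lines)

def erdos_renyi_edges(n: int, p: float, seed: int = 42) -> list[tuple[int, int]]:
    """Tiny stand-in for networkx.erdos_renyi_graph(n, p): each node pair is
    linked independently with probability p; seeded for reproducibility."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]
```

The resulting prompt string is what a `litellm.completion()` call would receive each step; the edge list is what Mesa's Network space would be built from.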
Usage
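Since this section is empty in the diff, a likely run sequence, sketched under assumptions: the Ollama pull comes from the Additional Notes, while `app.py` is an assumed entry-point name, not confirmed by the PR:

```shell
# Pull the local model used by the example (one-time setup)
ollama pull llama3

# Launch the example; the entry-point filename here is an assumption
python app.py
```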
Additional Notes
- Requires a local Ollama model (ollama pull llama3)
- Seeded for reproducibility
This PR was developed with AI assistance (Claude) for code generation and debugging. All code was reviewed, tested, and understood before opening the pull request.