
docs: add llms.txt for AI-assisted development#425

Open
Vanya-kapoor wants to merge 1 commit into mesa:main from Vanya-kapoor:docs/add-llms-txt

Conversation

@Vanya-kapoor

Summary

Adds an llms.txt file at the repo root — a plain Markdown document that gives
AI coding assistants (Claude, Copilot, Cursor, ChatGPT) accurate, structured
context about this repository in a single fetch.

This directly addresses the AI discoverability question raised in #417: human
discoverability is solved by catalog pages generated from metadata.toml;
AI discoverability is solved by llms.txt. They are complementary layers of
the same system.

Changes

  • Added /llms.txt at repo root

No code changes. No tests required. No formatting checks required.

What the file contains

  • Branch/version table: main targets Mesa 4.x dev; mesa-3.x is stable.
    A large fraction of reported breakage is branch mismatch, not bugs. This is
    now explicit and machine-readable.
  • Full example index — every community example with a description of what
    Mesa feature or pattern it actually demonstrates (not just what the model
    simulates)
  • GIS / Mesa-Geo section — all 7 GIS examples, currently absent from any
    machine-readable index
  • Health status tables — working vs broken, with exact error messages, drawn
    from a systematic audit of all 21 examples on Mesa 3.3.1 / Python 3.11
  • Mesa 3.x migration error table — 5 most common breakage patterns with fixes
  • Mesa core API quick reference — AgentSet, DataCollector, batch_run, Solara,
    with the 3.x changes called out explicitly
  • Known gaps — ContinuousSpace, Mesa-Frames, dedicated batch_run example;
    surfaces contribution opportunities for new contributors
  • mesa_models installable package — documented so AI tools can suggest
    pip install + direct imports instead of manual cloning
  • Links — docs, API reference, migration guide, contributing guide, discussions
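
For illustration, one section of the file might look like the sketch below. The table rows are taken from the branch/version facts stated above; the exact headings and column names in the real file may differ:

```
## Branch / version map

| Branch   | Mesa version | Status      |
|----------|--------------|-------------|
| main     | 4.x dev      | development |
| mesa-3.x | 3.x          | stable      |
```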

Why this matters

Right now, when a user asks an AI tool "how do I build a Mesa model?" or "why
is my mesa-examples code broken?", the AI either uses training data (which
predates Mesa 3.x) or scrapes HTML and gets noise. This means users routinely
get suggestions to use AgentSet.to_list() (removed in 3.x) or model.time
instead of model.steps, along with recommendations to run broken examples as
if they worked.
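
To make the first breakage concrete: per the migration notes this file captures, a Mesa 3.x AgentSet is an ordinary iterable, so plain list() replaces the removed .to_list(). The stand-in class below is a hypothetical illustration of the idiom, not actual Mesa code:

```python
class MiniAgentSet:
    """Hypothetical stand-in for Mesa 3.x's AgentSet: a plain iterable."""

    def __init__(self, agents):
        self._agents = list(agents)

    def __iter__(self):
        return iter(self._agents)


aset = MiniAgentSet(["a1", "a2", "a3"])
# Mesa 2.x habit:  aset.to_list()  -> AttributeError on 3.x
# Mesa 3.x idiom:  list(aset)
assert list(aset) == ["a1", "a2", "a3"]
```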

llms.txt is a proposed open standard already adopted
by FastAPI, Pydantic, and others in the Python ecosystem. An AI tool that fetches
this file once gets the correct Mesa 3.x mental model, the full working example
list, and the migration error table — without scraping 40 HTML pages.
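
As a sketch of what "one fetch" buys a tool: the file's Markdown structure is trivially machine-parseable. The helper below is hypothetical (not part of this PR) and assumes the file uses `##` section headings:

```python
def split_sections(text: str) -> dict[str, str]:
    """Split an llms.txt-style Markdown document into its '## ' sections.

    Returns a mapping of section title -> section body; any text before
    the first '## ' heading is stored under '_preamble'.
    """
    sections: dict[str, str] = {}
    title, buf = "_preamble", []
    for line in text.splitlines():
        if line.startswith("## "):
            sections[title] = "\n".join(buf).strip()
            title, buf = line[3:].strip(), []
        else:
            buf.append(line)
    sections[title] = "\n".join(buf).strip()
    return sections


sample = "# Mesa Examples\nintro\n## Health status\nworking: 14\n## Links\ndocs"
parts = split_sections(sample)
assert set(parts) == {"_preamble", "Health status", "Links"}
assert parts["Health status"] == "working: 14"
```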

The broken-example health table means AI tools will stop recommending broken
examples to new users, which is one of the most common sources of friction for
people discovering Mesa for the first time.

Maintenance

The file is plain Markdown. It should be updated when a broken example is fixed,
a new example is added, or Mesa versions change. It can eventually be partially
automated from metadata.toml — the health status table is a direct projection
of CI results and example metadata, which are already machine-readable.
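
The automation idea can be sketched in a few lines. The field names below (name, working, error) are hypothetical, since the actual metadata.toml schema is not shown here:

```python
def health_table(examples: list[dict]) -> str:
    """Render a Markdown health-status table from example metadata records."""
    rows = [
        "| Example | Status | Error |",
        "| --- | --- | --- |",
    ]
    for ex in examples:
        status = "working" if ex["working"] else "broken"
        rows.append(f"| {ex['name']} | {status} | {ex.get('error', '')} |")
    return "\n".join(rows)


print(health_table([
    {"name": "boltzmann_wealth", "working": True},
    {"name": "sugarscape", "working": False, "error": "ImportError: ..."},
]))
```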

Related

GSoC contributor checklist

Context & motivation

I was auditing examples against Mesa 3.3.1 and noticed that AI tools kept giving me wrong answers as I navigated the codebase. The next day, I read the #417 discussion and noticed that every proposal addressed human discoverability but none mentioned AI discoverability. This is the gap I'm trying to fill.

What I learned

What took me the longest to understand was that the broken examples fail for three different reasons — removed APIs, Python import mechanics, and Solara viz layer issues — and you can't tell which one from the error message alone until you understand what each layer is doing. I also didn't know Mesa-Geo existed until I read the README, which is exactly the kind of thing that should be in an AI-readable index but currently isn't.

Learning repo

🔗 My learning repo: REPO

Readiness checks

  • This PR addresses an agreed-upon problem (proposed and discussed in Mesa-examples revival: vision & open questions #417)
  • I have read the contributing guide and deprecation policy
  • I have performed a self-review: reviewed the PR diff and left inline comments on anything that needs explanation
  • Another GSoC contributor has reviewed this PR: @
  • No code changes — pytest and ruff checks not applicable
  • Documentation updated: this PR is the documentation addition
