docs: add llms.txt for AI-assisted development #425
Open
Vanya-kapoor wants to merge 1 commit into mesa:main from
Summary
Adds an llms.txt file at the repo root: a plain Markdown document that gives AI coding assistants (Claude, Copilot, Cursor, ChatGPT) accurate, structured context about this repository in a single fetch.
This directly addresses the AI discoverability question raised in #417: human discoverability is solved by catalog pages generated from metadata.toml; AI discoverability is solved by llms.txt. They are complementary layers of the same system.
Changes
- /llms.txt at repo root
- No code changes. No tests required. No formatting checks required.
What the file contains
- Branch guidance: main targets Mesa 4.x dev, mesa-3.x is stable. A large fraction of reported breakage is branch mismatch, not bugs. This is now explicit and machine-readable.
- For each example, the Mesa feature or pattern it actually demonstrates (not just what the model simulates)
- A machine-readable index of examples
- A health table from a systematic audit of all 21 examples on Mesa 3.3.1 / Python 3.11
- A migration error table with the 3.x changes called out explicitly
- Surfaces contribution opportunities for new contributors
- The mesa_models installable package, documented so AI tools can suggest pip install + direct imports instead of manual cloning

Why this matters
Right now, when a user asks an AI tool "how do I build a Mesa model?" or "why is my mesa-examples code broken?", the AI either uses training data (which predates Mesa 3.x) or scrapes HTML and gets noise. This means users routinely get suggestions to use AgentSet.to_list() (removed in 3.x), model.time instead of model.steps, and recommendations to run broken examples as if they work.
llms.txt is a proposed open standard already adopted by FastAPI, Pydantic, and others in the Python ecosystem. An AI tool that fetches this file once gets the correct Mesa 3.x mental model, the full working example list, and the migration error table, without scraping 40 HTML pages.
The broken-example health table means AI tools will stop recommending broken
examples to new users, which is one of the most common sources of friction for
people discovering Mesa for the first time.
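To make the shape of the file concrete, a skeleton along these lines could work; the section names and example entries below are illustrative, not the actual contents of the proposed file:

```markdown
# Mesa Examples

> Curated example models for Mesa. `main` targets Mesa 4.x development;
> `mesa-3.x` is the stable branch. Check your branch before filing a bug.

## Working examples
- boltzmann_wealth_model: demonstrates Model basics and AgentSet activation

## Known-broken examples
- some_example: fails on Mesa 3.3.1 (removed API)

## Migration notes (2.x to 3.x)
- AgentSet.to_list() removed; use list(model.agents)
- model.time removed; use model.steps
```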
Maintenance
The file is plain Markdown. It should be updated when a broken example is fixed, a new example is added, or Mesa versions change. It can eventually be partially automated from metadata.toml; the health status table is a direct projection of CI results and example metadata, which are already machine-readable.
Related
- Schema and CI validation are companion contributions
- GSoC contributor checklist
Context & motivation
I was auditing examples against Mesa 3.3.1 and noticed that AI tools kept giving me wrong answers while navigating the codebase. The next day, I read the #417 discussion and noticed that every proposal addressed human discoverability but nobody mentioned AI discoverability. This is the gap I'm trying to fill.
What I learned
What took me the longest to understand was that the broken examples fail for three different reasons: removed APIs, Python import mechanics, and Solara viz layer issues. You can't tell which one from the error message alone until you understand what each layer is doing. I also didn't know Mesa-Geo existed until I read the README, which is the kind of thing that should be in an AI-readable index but currently isn't.
Learning repo
🔗 My learning repo: REPO
Readiness checks
- pytest and ruff checks: not applicable