The first open-source linter for GEO (Generative Engine Optimization). Validates your content for AI search visibility -- then lets your AI agent fix it automatically.
Install the geo-lint skill in one command:
```
curl -fsSL https://raw.githubusercontent.com/IJONIS/geo-lint/main/install.sh | bash
```

Then, in any Claude Code session:
```
/geo-lint audit        # Full sweep — find and fix all violations
/geo-lint fix <slug>   # Fix a single content file
/geo-lint rules        # Show all 97 rules with fix strategies
/geo-lint init         # Set up geo-lint.config.ts for a new project
/geo-lint report       # Generate a GEO/SEO health summary
```
The skill runs an autonomous lint-fix loop: scan your content, read structured violations, fix them using each violation's suggestion field, and re-lint until clean. Parallel subagents handle multiple files simultaneously.
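The loop itself is simple enough to sketch. Below is an illustrative TypeScript version, not the skill's actual code: `runLint` stands in for shelling out to `npx geo-lint --format=json`, and `applyFix` stands in for the agent's edit step.

```typescript
type Violation = {
  file: string;
  field: string;
  rule: string;
  severity: string;
  message: string;
  suggestion: string;
};

// Illustrative lint-fix loop: lint, fix every violation, re-lint until clean.
// runLint/applyFix are injected here; in practice runLint shells out to
// `npx geo-lint --format=json` and applyFix is the AI agent's edit step.
async function lintFixLoop(
  runLint: () => Promise<Violation[]>,
  applyFix: (v: Violation) => Promise<void>,
  maxPasses = 10, // safety valve so a bad fix can't loop forever
): Promise<Violation[]> {
  let violations: Violation[] = [];
  for (let pass = 0; pass < maxPasses; pass++) {
    violations = await runLint();
    if (violations.length === 0) return []; // empty array -- content is clean
    for (const v of violations) await applyFix(v);
  }
  return violations; // whatever still fails after maxPasses
}
```

The `maxPasses` cap is a deliberate guardrail: if a fix keeps re-triggering a violation, the loop terminates instead of spinning forever.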
The install also adds /content-creator -- a self-configuring content pipeline that creates SEO & GEO-optimized content matched to your project:
```
/content-creator setup     # Auto-discover your project + configure brand voice
/content-creator create    # Create content with keyword research + validation
/content-creator voice     # Analyze and adjust brand voice
/content-creator calendar  # Plan monthly content calendar
/content-creator refresh   # Update config when your project evolves
```
On first use, the skill scans your project to learn its framework, content schema, categories, and authors. It then asks a few questions about your brand voice and audience. From that point on, every piece of content is created with full project awareness and validated against geo-lint's 97 rules until clean -- zero manual intervention for mechanical quality.
Or paste this into any AI agent (Claude Code, Cursor, Windsurf, Copilot):
```
Run npx geo-lint --format=json, then fix every violation using each
violation's suggestion field. Re-run until the output is [].
```
I run multiple content-heavy sites and there was no deterministic way to validate whether my content was actually optimized -- not "probably fine," but actually checked against concrete rules. SEO linters exist, but they're either paid SaaS, not automatable, or completely ignore the structural patterns that AI search engines use when deciding what to cite.
So I built one. GEO (Generative Engine Optimization) is the practice of structuring content so it gets cited by ChatGPT, Perplexity, Google AI Overviews, and Gemini. Traditional SEO gets you into search result lists. GEO gets you cited in AI-generated answers. Both matter -- and no existing open-source tool checks for GEO.
The goal was simple: install one tool, point your AI agent at it, and walk away. The agent runs the linter, reads the JSON violations, fixes the content, re-lints until clean -- across an entire site, no manual input. One command, both SEO and GEO validated.
97 rules: 36 GEO, 34 SEO, 14 content quality, 10 technical, 3 i18n. Readability analysis inspired by Yoast SEO. We researched the current state of GEO and AEO to make sure the rules reflect what actually gets content cited -- not outdated advice.
```
npm install -D @ijonis/geo-lint
```

Create `geo-lint.config.ts`:

```ts
import { defineConfig } from '@ijonis/geo-lint';

export default defineConfig({
  siteUrl: 'https://your-site.com',
  contentPaths: [{
    dir: 'content/blog',
    type: 'blog',
    urlPrefix: '/blog/',
  }],
});
```

Run it:
```
npx geo-lint                 # Human-readable output
npx geo-lint --format=json   # Machine-readable for AI agents
```

Works out of the box with `.md`/`.mdx` files. For Astro, HTML, or other formats, see Custom Adapters.
Three examples of what GEO rules catch and how to fix them. See all 7 core GEO rules with examples.
AI systems use the first sentence after a heading as the citation snippet. Filler openings get skipped.
Before:

```md
## What is serverless computing?

In this section, we will take a closer look at serverless computing and
what it means for modern development teams.
```

After:

```md
## What is serverless computing?

Serverless computing is a cloud execution model where the provider
dynamically allocates compute resources per request, eliminating the
need to provision or manage servers.
```

AI answers prefer citable claims backed by numbers. Vague statements get passed over.
Before:

```
Adopting TypeScript significantly reduces bugs in large codebases.
```

After:

```
Adopting TypeScript reduces production bugs by 38% in codebases
exceeding 50,000 lines of code, according to a 2023 study by
Microsoft Research.
```

Tables are highly structured and unambiguous -- ideal for AI extraction. Content with comparison tables is cited significantly more often than equivalent prose.
Before:

```
React is component-based and uses a virtual DOM. Vue is also
component-based but uses a reactivity system. Svelte compiles
components at build time.
```

After:

```md
| Framework | Architecture     | Bundle Size | Learning Curve |
|-----------|------------------|-------------|----------------|
| React     | Virtual DOM      | 42 KB       | Moderate       |
| Vue       | Reactivity proxy | 33 KB       | Low            |
| Svelte    | Compile-time     | 1.6 KB      | Low            |
```

| Category | Rules | Severity Mix | Focus |
|---|---|---|---|
| SEO | 34 | 6 errors, 28 warnings | Titles, descriptions, headings, slugs, OG images, canonical URLs, keywords, links, schema, sameAs, service pages |
| Content | 14 | 2 errors, 12 warnings | Word count, readability, dates, categories, jargon density, repetition, vocabulary diversity, transition words, sentence variety |
| Technical | 10 | 3 errors, 7 warnings | Broken links, image files, trailing slashes, external URLs, performance, feeds, llms.txt |
| i18n | 3 | 0 errors, 3 warnings | Translation pairs, locale metadata |
| GEO | 36 | 0 errors, 36 warnings | AI citation readiness: E-E-A-T signals, content structure, freshness, RAG optimization, author entity type |
See the complete rule reference with descriptions and severity for every rule.
AI agents: Claude Code, Cursor, Windsurf, GitHub Copilot -- any agent that can run shell commands and edit files
Content formats: Markdown and MDX out of the box. Astro, HTML, Nuxt, any CMS via custom adapters
Build tools: Runs in any CI pipeline. JSON output for programmatic consumption
Runtime: Node.js >= 18. Zero peer dependencies
This linter is deterministic -- same content in, same violations out, every time. Your AI agent provides the creativity to fix the content; geo-lint provides the guardrails to verify it's correct. The loop runs until violations hit zero.
Install the skill and use /geo-lint audit to validate and fix your entire content directory automatically. The skill runs the full lint-fix loop with parallel subagents -- one per file. See Use with Claude Code above.
Paste this into Cursor, Windsurf, Copilot, or any AI coding agent:
```
Run npx geo-lint --format=json, then fix every violation in the reported
files using each violation's suggestion field. After fixing, re-run the
linter and repeat until the output is an empty array []. Preserve the
author's voice -- restructure, don't rewrite.
```
That's it. The agent will iterate automatically.
```
npx geo-lint --format=json
```

```json
[
  {
    "file": "blog/my-post",
    "field": "body",
    "rule": "geo-no-question-headings",
    "severity": "warning",
    "message": "Only 1/5 (20%) H2/H3 headings are question-formatted",
    "suggestion": "Rephrase some headings as questions (e.g., 'How does X work?') to improve LLM snippet extraction."
  }
]
```

Every violation includes:
- `suggestion` -- plain-language fix instruction the agent follows directly
- `fixStrategy` -- machine-readable fix pattern for the rule
- `file`, `field`, `line` -- exact location so the agent edits the right place
An empty array [] means zero violations -- the content is clean. The agent knows to stop.
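For scripted consumers, the JSON shape above is all you need. Here is a sketch of a CI gate that fails only on errors; the field names come from the sample output, but the helper itself is hypothetical, not part of geo-lint:

```typescript
type LintViolation = {
  file: string;
  field: string;
  rule: string;
  severity: 'error' | 'warning';
  message: string;
  suggestion: string;
};

// Hypothetical gate over `npx geo-lint --format=json` output:
// warnings are tolerated, only errors fail the build.
function shouldFailBuild(raw: string): boolean {
  const violations: LintViolation[] = JSON.parse(raw);
  return violations.some((v) => v.severity === 'error');
}
```

An empty array parses to zero violations, so the gate passes trivially on clean content.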
See the full Agent Integration Guide for per-agent setup, a Claude Code skill, and handling edge cases.
Override any rule's severity or disable it entirely:
```ts
import { defineConfig } from '@ijonis/geo-lint';

export default defineConfig({
  siteUrl: 'https://your-site.com',
  contentPaths: [{ dir: 'content/blog', type: 'blog', urlPrefix: '/blog/' }],
  rules: {
    'geo-missing-table': 'off',  // disable a rule
    'orphan-content': 'error',   // upgrade to error
  },
});
```

See the full Configuration Reference for all options, thresholds, and GEO-specific settings.
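Conceptually, these overrides behave like a map merge over the rules' default severities. A minimal sketch, not geo-lint's internals:

```typescript
type Severity = 'error' | 'warning' | 'off';

// Illustrative only: user overrides win over per-rule defaults.
function resolveSeverities(
  defaults: Record<string, Severity>,
  overrides: Record<string, Severity> = {},
): Record<string, Severity> {
  return { ...defaults, ...overrides };
}
```

A rule set to `'off'` is simply skipped at lint time; everything else keeps its default severity unless you say otherwise.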
Lint any content source -- Astro, HTML, a headless CMS -- by writing a small adapter:
```ts
import { lint, createAdapter } from '@ijonis/geo-lint';

const adapter = createAdapter(async (projectRoot) => {
  // Map your content into ContentItem objects
  return [{ title, slug, description, body, permalink, contentType, filePath, rawContent }];
});

await lint({ adapter });
```

See the Custom Adapters Guide for the full ContentItem interface and ready-to-use examples for Astro, HTML, and CMS sources.
```ts
import { lint, lintQuiet } from '@ijonis/geo-lint';

const exitCode = await lint({ format: 'json' });       // with console output
const results = await lintQuiet({ projectRoot: '.' }); // raw LintResult[]
```

See the API Reference for all options and types.
```
Usage:
  geo-lint [options]

Options:
  --root=<path>     Project root directory (default: cwd)
  --config=<path>   Explicit config file path
  --format=pretty   Human-readable colored output (default)
  --format=json     Machine-readable JSON output (for AI agents)
  --rules           List all registered rules with fix strategies
  -h, --help        Show this help message
  -v, --version     Show version
```
See CONTRIBUTING.md for development setup, testing, and how to add new rules. Changes are tracked in the CHANGELOG.
Questions or ideas? Open a GitHub Discussion. Bugs or feature requests? Open a GitHub Issue.
Built by IJONIS -- we help companies become visible to AI search engines. This linter is extracted from the same toolchain we use on production client content.
