Read this file completely before making ANY change. If you violate these rules, your change will be rejected.
Kōdo (コード) is a compiled, general-purpose programming language designed for AI agents to write, reason about, and maintain software — while remaining fully transparent and auditable by humans.
Core thesis: Remove ambiguity, make intent explicit, embed contracts into the grammar, make every module self-describing. AI agents produce software that is correct by construction.
This is NOT a toy language. Kōdo has: zero syntactic ambiguity (LL(1)), contracts as first-class citizens (requires/ensures), self-describing modules (mandatory meta), intent-driven programming (intent blocks), linear ownership (own/ref/mut), structured concurrency, and agent traceability annotations (@authored_by, @confidence).
See docs/DESIGN.md for the full language specification.
```
Source (.ko)
      │
      ▼
[kodo_lexer]     → Token stream (logos)
      │
      ▼
[kodo_parser]    → AST (hand-written recursive descent, LL(1))
      │
      ▼
[kodo_types]     → Typed AST (type checking, no inference across modules)
      │
      ▼
[kodo_contracts] → Verified AST (Z3 SMT for static, runtime fallback)
      │
      ▼
[kodo_resolver]  → Expanded AST (intents → concrete code)
      │
      ▼
[kodo_mir]       → Mid-level IR (CFG, optimization, borrow check)
      │
      ▼
[kodo_codegen]   → Native binary (Cranelift)
```
```
kodo_ast       ← no internal deps (shared foundation)
kodo_lexer     ← kodo_ast
kodo_parser    ← kodo_ast, kodo_lexer
kodo_types     ← kodo_ast
kodo_contracts ← kodo_ast, kodo_types
kodo_resolver  ← kodo_ast, kodo_types, kodo_contracts
kodo_mir       ← kodo_ast, kodo_types
kodo_codegen   ← kodo_mir
kodo_std       ← kodo_ast
kodoc          ← all crates + clap + tracing + ariadne
kotest         ← clap + serde_json (UI test harness, no internal deps)
```
Kōdo's design is grounded in established compiler and PL theory. When making design decisions, consult the relevant references:
| Decision Area | Consult | Key Concept |
|---|---|---|
| Lexer design | [CI] Ch.4, [EC] Ch.2 | DFA scanning, maximal munch, token design |
| Parser structure | [CI] Ch.6–8, [EC] Ch.3 | Recursive descent, LL(1), FIRST/FOLLOW |
| AST node design | [CI] Ch.5, [EC] Ch.4–5 | Spans, visitor pattern, IR taxonomy |
| Type safety | [TAPL] Ch.1–11 | Progress + preservation, no implicit conversions |
| Generics / System F | [TAPL] Ch.22–26, [PLP] Ch.7–8 | Bounded quantification, parametric polymorphism |
| Ownership (own/ref/mut) | [ATAPL] Ch.1 | Linear and affine type systems |
| Contracts (requires/ensures) | [SF] Vol.1–2, [CC] Ch.1–6 | Hoare logic, decision procedures |
| SMT verification | [CC] Ch.10–12 | Z3, satisfiability, automated proving |
| MIR / optimization | [Tiger] Ch.7–8, [EC] Ch.8–10 | CFG, SSA, basic blocks, data-flow |
| Code generation | [Tiger] Ch.9–11, [EC] Ch.11–13 | Instruction selection, register allocation |
Abbreviations:
- [CI] Crafting Interpreters (Nystrom) — [EC] Engineering a Compiler (Cooper & Torczon)
- [TAPL] Types and Programming Languages (Pierce) — [ATAPL] Advanced Topics in Types and PL (Pierce, ed.)
- [SF] Software Foundations (Pierce et al.) — [CC] The Calculus of Computation (Bradley & Manna)
- [Tiger] Modern Compiler Implementation in ML (Appel) — [PLP] Programming Language Pragmatics (Scott)
See docs/REFERENCES.md for the full bibliography with chapter-by-chapter mapping.
- `cargo fmt --all` — Must pass. See `rustfmt.toml`.
- `cargo clippy --workspace -- -D warnings` — Zero warnings.
- `cargo test --workspace` — All tests pass. No skipped tests.
- `cargo doc --workspace --no-deps` — Documentation compiles.
```rust
#![deny(missing_docs)]
#![deny(clippy::unwrap_used, clippy::expect_used)]
#![warn(clippy::pedantic)]
```
- Library crates: ZERO `unwrap()` or `expect()`. Use `thiserror` enums.
- Binary crates (kodoc, ko): `unwrap()`/`expect()` only in `main()` or test code.
- Test code: `unwrap()`/`expect()` is fine.
- Every crate defines its own `Error` enum and `type Result<T>` alias.
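The per-crate error pattern can be sketched as follows. This is a hypothetical lexer-flavored example, not the real `kodo_lexer` error type; the real crates use `thiserror` to derive the `Display` impl that is written out by hand here:

```rust
use std::fmt;

/// Hypothetical lexer error enum. Real crates derive `Display` via `thiserror`.
#[derive(Debug, PartialEq)]
pub enum Error {
    /// An unexpected character at a byte offset.
    UnexpectedChar { ch: char, offset: usize },
    /// A string literal that was never closed.
    UnterminatedString { offset: usize },
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::UnexpectedChar { ch, offset } => {
                write!(f, "lexer: unexpected character '{ch}' at byte {offset}")
            }
            Error::UnterminatedString { offset } => {
                write!(f, "lexer: unterminated string starting at byte {offset}")
            }
        }
    }
}

impl std::error::Error for Error {}

/// Crate-local Result alias, as the rules above require.
pub type Result<T> = std::result::Result<T, Error>;

fn main() {
    let err = Error::UnexpectedChar { ch: '?', offset: 7 };
    println!("{err}");
}
```

With `thiserror`, the `Display` impl collapses into `#[error("...")]` attributes on each variant; the enum shape and the `Result<T>` alias stay the same.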
- Every public item has a `///` doc comment.
- Every module has a `//!` doc comment explaining its purpose.
- Doc comments explain WHY, not just WHAT.
- Unit tests: Every crate, every significant function.
- Snapshot tests: `insta` for lexer output, parser AST, error messages.
- Property tests: `proptest` for fuzzing lexer and parser.
- Integration tests: `crates/kodoc/tests/` for the full pipeline.
- UI tests: `kotest` harness with `.ko` files in `tests/ui/` (inspired by Rust's compiletest).
- Benchmarks: `criterion` in `crates/kodo_lexer/benches/`.
- Test fixtures live in `tests/fixtures/{valid,invalid}/`.
- UI test files live in `tests/ui/`, organized by feature (basics, types, ownership, contracts, etc.).
UI tests use `//@` directives in `.ko` files:
- `//@ check-pass` — must compile without errors
- `//@ compile-fail` — must fail compilation
- `//@ run-pass` — must compile AND run successfully
- `//@ run-fail` — must compile but fail at runtime
- `//@ error-code: E0200` — expected error code
- `//@ compile-flags: --contracts=static` — extra kodoc flags
- `//~ ERROR E0200: message` — inline annotation (bidirectional verification)
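A minimal sketch of how a harness like kotest might extract these directives from a test file's header comments. This is illustrative only; the real implementation lives in `crates/kotest/` and may differ:

```rust
/// Extract `//@ key` and `//@ key: value` directives from .ko test source.
/// Hypothetical sketch of the kotest directive scan; not the real harness code.
fn parse_directives(source: &str) -> Vec<(String, Option<String>)> {
    source
        .lines()
        // Only lines beginning with the directive marker count.
        .filter_map(|line| line.trim().strip_prefix("//@"))
        .map(|rest| {
            let rest = rest.trim();
            // `//@ error-code: E0200` has a value; `//@ check-pass` does not.
            match rest.split_once(':') {
                Some((key, value)) => (key.trim().to_string(), Some(value.trim().to_string())),
                None => (rest.to_string(), None),
            }
        })
        .collect()
}

fn main() {
    let src = "//@ compile-fail\n//@ error-code: E0200\nmodule demo {}\n";
    for (key, value) in parse_directives(src) {
        println!("{key} {value:?}");
    }
}
```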
Error messages are how AI agents interact with the compiler. Every error MUST:
- Have a unique error code (e.g., `E0042`). See `docs/error_index.md`.
- Include source location (file, line, column, span).
- Provide structured JSON alongside human-readable output (`--json-errors`).
- Suggest a fix when possible.
- Reference the relevant spec section.
Error code ranges:
- `E0001`–`E0099`: Lexer
- `E0100`–`E0199`: Parser
- `E0200`–`E0299`: Types
- `E0300`–`E0399`: Contracts
- `E0400`–`E0499`: Resolver
- `E0500`–`E0599`: MIR
- `E0600`–`E0699`: Codegen
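The ranges above can be captured in a small lookup, sketched here in Rust. This is an illustration of the documented ranges, not the real kodoc implementation:

```rust
/// Map an error code like "E0042" to its compiler phase,
/// following the documented E0001–E0699 range layout.
fn phase_for(code: &str) -> Option<&'static str> {
    let n: u32 = code.strip_prefix('E')?.parse().ok()?;
    Some(match n {
        1..=99 => "Lexer",
        100..=199 => "Parser",
        200..=299 => "Types",
        300..=399 => "Contracts",
        400..=499 => "Resolver",
        500..=599 => "MIR",
        600..=699 => "Codegen",
        _ => return None, // outside all documented ranges
    })
}

fn main() {
    println!("{:?}", phase_for("E0042"));
}
```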
Format: `<phase>: <description>`
Prefixes: `lexer:`, `parser:`, `ast:`, `types:`, `contracts:`, `resolver:`, `mir:`, `codegen:`, `stdlib:`, `cli:`, `docs:`, `test:`, `bench:`, `ci:`, `chore:`
Examples:
- `parser: add support for intent blocks with route declarations`
- `types: implement generic type resolution`
- `contracts: integrate Z3 for static precondition verification`
- `cli: add --json-errors flag for agent consumption`
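The convention is mechanical enough to lint. A sketch of such a check (hypothetical; the project may or may not enforce this in CI):

```rust
/// The allowed commit-subject prefixes from the convention above.
const PREFIXES: &[&str] = &[
    "lexer", "parser", "ast", "types", "contracts", "resolver", "mir",
    "codegen", "stdlib", "cli", "docs", "test", "bench", "ci", "chore",
];

/// Check a commit subject against the `<phase>: <description>` format.
/// Illustrative sketch, not an existing kodoc tool.
fn valid_commit_subject(subject: &str) -> bool {
    match subject.split_once(": ") {
        Some((prefix, description)) => {
            PREFIXES.contains(&prefix) && !description.trim().is_empty()
        }
        None => false,
    }
}

fn main() {
    println!("{}", valid_commit_subject("parser: add support for intent blocks"));
}
```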
- Correctness — The compiler must NEVER produce wrong code.
- Error quality — Bad errors make agents produce worse code.
- Performance — Fast compilation enables tight agent feedback loops.
- Features — Only add when the foundation is solid.
- NO parser generators (no LALRPOP, no pest). Hand-written recursive descent only.
- NO implicit type conversions anywhere.
- NO `unwrap()`/`expect()` in library code.
- NO circular dependencies between crates.
- NO `unsafe` without a `// SAFETY:` comment and a damn good reason.
- NO feature without tests.
- NO dead code — if it's not used, delete it.
- NO magic numbers — use named constants.
| File | Purpose |
|---|---|
| `Cargo.toml` | Workspace root with all dependency versions |
| `docs/DESIGN.md` | Full language specification |
| `docs/grammar.ebnf` | Formal grammar (LL(1)) |
| `docs/error_index.md` | Error code catalog |
| `docs/intent_system.md` | Intent resolver documentation |
| `docs/REFERENCES.md` | Academic references mapped to compiler phases |
| `rustfmt.toml` | Formatting rules |
| `clippy.toml` | Lint configuration |
| `deny.toml` | Dependency audit rules |
| `Makefile` | Build, test, and run shortcuts |
| `crates/kotest/` | UI test harness (compiletest-inspired) |
| `tests/ui/` | UI test files organized by feature |
| `scripts/validate-doc-examples.sh` | Validates every doc example compiles, runs, and produces correct output |
| `~/dev/kodo-website` | Kōdo language website (update when user-facing changes occur) |
| `~/dev/kodo-website/public/llms.txt` | llms.txt for AI agent discoverability (update when docs pages are added/removed/renamed) |
```
module name {
    meta { key: "value" }

    fn name(param: Type) -> ReturnType
        requires { precondition }
        ensures { postcondition }
    {
        let x: Int = 42
        let s: String = "hello"
        if condition { ... } else { ... }
        return value
    }

    intent name {
        config_key: value
    }
}
```
Primitives: `Int`, `Int8`–`Int64`, `Uint`, `Uint8`–`Uint64`, `Float32`, `Float64`, `Bool`, `String`, `Byte`
No `null`. `Option<T>` only. No exceptions. `Result<T, E>` only.
Kōdo exists for AI agents. Every feature, error message, and tool decision should be evaluated from the agent's perspective. These principles guide development priorities:
The #1 value proposition of Kōdo is the closed-loop repair cycle. Every compiler error should be:
- Machine-parseable: JSON with structured fields, not prose
- Auto-fixable when possible: `FixPatch` with byte offsets, not just suggestions
- Classifiable: agents need to know instantly if they can fix it alone (`auto`), need context (`assisted`), or need a human (`manual`)
When adding new errors: ALWAYS implement `fix_patch()` alongside the diagnostic. A suggestion without a patch is a half-finished feature. Target: >80% of errors should have machine-applicable patches.
Key files: `crates/kodo_types/src/errors.rs` (fix_patch), `crates/kodoc/src/diagnostics.rs` (JSON output), `crates/kodo_parser/src/error.rs` (parser patches).
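The essence of a byte-offset patch can be sketched in a few lines. The struct shape below is hypothetical; the real `FixPatch` type lives in `crates/kodo_types/src/errors.rs` and may carry more fields (error code, message, applicability class):

```rust
/// A machine-applicable patch with byte offsets, in the spirit of FixPatch.
/// Hypothetical shape for illustration only.
struct FixPatch {
    start: usize,       // byte offset where the replacement begins
    end: usize,         // byte offset where the replacement ends (exclusive)
    replacement: String,
}

/// Apply a patch by splicing the byte range out of the source text.
/// Byte offsets make application exact: no fuzzy matching on prose suggestions.
fn apply(source: &str, patch: &FixPatch) -> String {
    let mut out = String::with_capacity(source.len());
    out.push_str(&source[..patch.start]);
    out.push_str(&patch.replacement);
    out.push_str(&source[patch.end..]);
    out
}

fn main() {
    let source = "let x: Int = \"hello\"";
    // e.g. a type-mismatch fix that rewrites the annotation to match the value.
    let patch = FixPatch { start: 7, end: 10, replacement: "String".to_string() };
    println!("{}", apply(source, &patch));
}
```

An agent receiving `{start, end, replacement}` in `--json-errors` output can apply the fix without re-parsing the human-readable message.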
Agents don't read prose — they parse structured data. Every `kodoc` subcommand that produces output should support `--json`:
- `kodoc check --json-errors` ✓ (done)
- `kodoc confidence-report --json` ✓ (done)
- `kodoc explain --json` ✓ (done)
- `kodoc repl --json` ✓ (done)
- `kodoc intent-explain --json` ✓ (done)
- `kodoc describe --json` ✓ (done)
Contracts (requires/ensures) verified by Z3 are what no other language offers agents. Prioritize:
- More errors with contract-aware fix patches (e.g., "add `requires { x > 0 }` to satisfy callee's precondition")
- Contract status in compilation certificates (`verified_static` vs `runtime_only`)
- Recoverable contract mode (`--contracts=recoverable`) ✓ (done, tested E2E)
- Contract-aware LSP completions (show `requires`/`ensures` in hover and autocomplete) ✓ (done, format_expr)
The `@confidence` → `@reviewed_by` enforcement is unique. Make it operationally useful:
- Store transitive confidence scores in `.ko.cert.json` ✓ (done)
- Enable policy-based automation ✓ (done: `kodoc audit --policy "min_confidence=0.9,contracts=all_verified"` exits 1 on violation)
- `kodoc audit` command combining confidence + contracts + annotations in one report ✓ (done, `--json --policy`)
Collections are fully wired through type checker → codegen as of v0.3.0:
- List: push, get, length, contains, pop, remove, set, slice, reverse, is_empty ✓
- Map: insert, get, contains_key, length, remove, is_empty, keys()/values()/entries() + `for-in` ✓
- JSON: parse, stringify, get_string, get_int, get_bool, get_array, get_float ✓
Next priorities for collections:
- List: sort, filter, map, fold, reduce, count, any (higher-order collection methods) -- DONE
- Map: merge, filter -- DONE
- Set: new collection type -- DONE (add, contains, remove, length, is_empty, union, intersection, difference, for-in)
For agents operating via IDE/editor integration, LSP quality directly impacts productivity:
- Hover MUST show full annotations: `@confidence(0.85)`, `@authored_by(agent: "claude")`, not just `@confidence`
- Code actions should surface `FixPatch` as one-click fixes ✓ (done)
- Goto definition should work ✓ (done)
- Completions should be contract-aware ✓ (done, format_expr for requires/ensures)
When implementing features, be honest about current limitations in error messages and docs:
- Concurrency: `spawn`/`async`/`await` compile but execute sequentially in v1
- Channels: ✅ Generic as of v1.12.0 — `channel_new()` infers `T` from context; `channel_send`/`channel_recv` work for `Channel<Int>`, `Channel<Bool>`, `Channel<String>`
- `Result<T, E>`: ✅ Custom error enums work end-to-end as of v1.12.0 — `E` can be any enum, and nested patterns like `Err(AppError::NotFound)` are fully supported
- String: `substring` is byte-based, not Unicode-aware
- Contract violation: calls `abort()` by default — use `--contracts=recoverable` for graceful handling
After completing ANY task (feature, bugfix, refactor, etc.), you MUST execute ALL of the following steps before considering the task done. This is NON-NEGOTIABLE.
- Write unit tests for every new function or changed behavior.
- Write integration tests for new features (in `crates/kodoc/tests/`).
- If adding a new language feature, create a `.ko` example in `examples/`.
- Add UI tests in `tests/ui/` for new features or error messages (with `//@` directives).
- Run `cargo test --workspace` and confirm ALL tests pass.
- Run `make ui-test` and confirm all UI tests pass.
- Run `cargo fmt --all -- --check` and fix any formatting issues.
- Run `cargo clippy --workspace -- -D warnings` and fix ALL warnings (zero tolerance).
- Never suppress warnings without a documented reason.
- Update `docs/guide/` if any user-facing feature was added or changed.
- Update `docs/index.md` if new guide pages were created.
- Update `README.md` if the "What Works Today" section is affected.
- Update `docs/error_index.md` if new error codes were added.
- Ensure all new public items have `///` doc comments.
- Update the examples table in `README.md` if new `.ko` examples were added.
- If any user-facing feature, documentation, or README content was changed, check whether `~/dev/kodo-website` needs updates and update it accordingly.
- If doc pages were added, removed, or renamed on the website, update `~/dev/kodo-website/public/llms.txt` to keep the AI-facing sitemap in sync.
Run this exact sequence and confirm all pass:
```shell
cargo fmt --all -- --check
cargo clippy --workspace -- -D warnings
cargo test --workspace
make ui-test
```
If any user-facing feature or codegen change was made, also run:
```shell
make validate-docs
```
This compiles, runs, and verifies the output of every code example from the kodo-website documentation against the real compiler. If any example fails, either fix the compiler or update the documentation before reporting the task as complete.
If any step fails, fix the issue before reporting the task as complete.
Report to the user:
- What was changed (files modified/created)
- Number of tests passing
- Any documentation updated
- Any new examples added
Do NOT skip these steps. Do NOT report a task as done without running all checks.
```shell
# Check everything compiles
cargo check --workspace

# Run all tests
cargo test --workspace

# Lint
cargo clippy --workspace -- -D warnings

# Format
cargo fmt --all

# Build
cargo build --workspace

# Run benchmarks
cargo bench -p kodo_lexer

# Generate docs
cargo doc --workspace --no-deps --open

# Run the compiler
cargo run -p kodoc -- lex examples/hello.ko
cargo run -p kodoc -- parse examples/hello.ko
cargo run -p kodoc -- check examples/hello.ko
cargo run -p kodoc -- build examples/hello.ko

# Run UI tests (kotest harness)
make ui-test

# Auto-update UI test baselines
make ui-bless
```