JMRussas/ai-engineering-conventions

AI Engineering Conventions

A curated collection of process conventions for working effectively with AI coding assistants.

These aren't theoretical — they're patterns extracted from daily AI-augmented development across multiple projects, refined through real usage. Each convention is standalone: adopt what fits your workflow, skip what doesn't.

Who this is for

  • Individual developers using agentic AI workflows (Claude Code, Cursor Composer, Aider, etc.) who want more consistent, reliable results. Many patterns also apply to autocomplete-style assistants (Copilot, etc.).
  • Team leads establishing AI workflows and wanting guardrails that don't kill velocity
  • Anyone who's noticed that AI output quality varies wildly and suspects the problem is process, not the model

These conventions assume a single-developer-single-agent workflow. Multi-agent scenarios (multiple AI sessions, team members with different AI tools) may need adaptation.

Note: Examples in this repo use Claude Code conventions (CLAUDE.md, .claude/ directory), but the patterns themselves are tool-agnostic. Adapt the filenames and mechanics to your tool.
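As a concrete sketch, a minimal instruction file in the style this repo uses might look like the fragment below. The project details, commands, and rules are hypothetical placeholders, not prescribed by any convention here; adapt them to your codebase and tool.

```markdown
# CLAUDE.md

## Project
Invoice-processing service. Python 3.12, FastAPI, Postgres.

## Commands
- `make test` runs the test suite; run it before declaring a task done
- `make lint` must pass before any commit

## Rules
- Never edit files under `migrations/` by hand
- Ask before adding a new dependency
- Record recurring mistakes in `docs/gotchas.md`
```

Because the file is version-controlled, changes to the AI's standing instructions get reviewed like any other project artifact.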

How to use this

Each convention in conventions/ follows the same format:

  • What — one paragraph summary
  • Why — what goes wrong without it
  • How — concrete implementation
  • Example — real config, code, or workflow snippet
  • When to skip — not everything applies everywhere

Start with the ones that address your biggest pain points. You don't need all of them.

New to AI-augmented development? Start with these three:

  1. Instruction Files — foundational; everything else builds on this
  2. Checkpoint Commits — immediate safety net, zero setup cost
  3. Planning Rigor — prevents the most common failure mode (AI builds the wrong thing)
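To make the checkpoint-commit idea concrete, here is a sketch of the loop in a throwaway repository (requires `git`). The file names and commit messages are illustrative; in a real session you would commit after each small working increment the AI produces:

```shell
# Demonstrate checkpoint commits in a throwaway repo.
tmpdir=$(mktemp -d) && cd "$tmpdir"
git init -q
git config user.email "demo@example.com"
git config user.name "checkpoint-demo"

# Increment 1 works: checkpoint it.
echo "step 1" > work.txt
git add -A && git commit -q -m "checkpoint: step 1"

# Increment 2 turns out to be a bad AI edit: checkpoint, then roll back.
echo "step 2" >> work.txt
git add -A && git commit -q -m "checkpoint: step 2"
git reset -q --hard HEAD~1   # undoes exactly one increment
```

After the reset, `work.txt` is back to its "step 1" state. If your team prefers a clean history, squash the micro-commits before opening a PR.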

Conventions

Planning & Design

| Convention | Summary |
| --- | --- |
| Planning Rigor | Scale planning depth to risk, not task size |
| Design Change Protocol | Stop and re-plan when implementation deviates from the plan |
| Test-First with AI | Write failing tests before asking AI to implement |

Documentation & Knowledge

| Convention | Summary |
| --- | --- |
| Documentation Layers | Lightweight entry point + deep-dive docs on demand |
| Dependency Headers | Explicit dependency maps in source files |
| Gotchas Docs | Prevent the AI from re-learning the same lessons |
| Instruction Files | AI config as version-controlled project artifacts |
| Memory Discipline | Persistent memory that's curated, not dumped |
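The Dependency Headers idea can be sketched as a hand-maintained comment block at the top of a source file. Everything below (module paths, function names) is hypothetical; the point is that the AI sees the blast radius of a change without crawling the whole repo:

```python
# DEPENDS ON:
#   app/config.py      - reads TIMEOUT_SECONDS
#   app/db/session.py  - get_session() connection factory
# DEPENDED ON BY:
#   app/api/orders.py  - calls fetch_order()

def fetch_order(order_id: int) -> dict:
    """Illustrative body; the header comments above are the point."""
    return {"id": order_id, "status": "open"}
```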

Tooling & Infrastructure

| Convention | Summary |
| --- | --- |
| Project CLI | AI builds its own inspection and validation tools |
| RAG-Augmented Dev | Project-specific search indexes for accurate API knowledge |
| Guardrail Hooks | Automated safety nets for AI-generated code |
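A guardrail hook can be as small as a script that scans a diff before each commit. The sketch below is a minimal Python secret check; the patterns are illustrative and far from exhaustive, and a real setup would lean on a dedicated scanner such as gitleaks:

```python
import re

# Sketch of a guardrail hook check: flag secret-looking strings in a diff.
# Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}"),
]

def find_secrets(text: str) -> list[str]:
    """Return the patterns that match anywhere in `text`."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

# A pre-commit wrapper would read `git diff --cached`, call find_secrets,
# and exit non-zero on any hit so the commit is blocked.
```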

Process & Trust

| Convention | Summary |
| --- | --- |
| Incremental Trust | Human stays in the approval loop for irreversible actions |
| Context Budgeting | Be deliberate about what the AI sees |
| Checkpoint Commits | Frequent micro-commits for easy rollback during AI sessions |

Reference Implementation

The examples/ directory contains reference implementations.

Convention interactions

Some conventions create productive tension with each other. This is by design — you resolve the tension based on your context.

| Convention A | Convention B | Tension | Resolution |
| --- | --- | --- | --- |
| Checkpoint Commits | Guardrail Hooks | Hooks slow down rapid checkpointing | Skip hooks for checkpoints (`--no-verify`) if CI is required before merge. Without CI, keep hooks on — they're your only safety net. Note: `--no-verify` means lint violations, type errors, and accidental secrets can land in your local history; CI catches them before merge, but if you force-push or cherry-pick from that history, you bypass the safety net entirely. |
| Context Budgeting | RAG-Augmented Dev | RAG retrieval can expand context | Set top_k low, filter by source; RAG replaces context, not adds to it |
| Test-First | Incremental Trust | Should the AI run tests freely? | Yes — running tests is local and reversible, always in the "free" trust tier |
| Design Change Protocol | Planning Rigor L1 | L1 says "just go"; protocol says "stop on deviations" | Protocol only applies to L2+ tasks. L1 deviations are expected and fine. |
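The Context Budgeting / RAG resolution above can be sketched in a few lines. The index, scores, and source names below are stand-ins (a real index would use embeddings and a vector store); the load-bearing parts are the low `top_k` and the source filter:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str   # where the snippet came from, e.g. "api-docs"
    text: str
    score: float  # stand-in for a similarity score

def retrieve(index: list[Doc], allowed_sources: set[str], top_k: int = 3) -> list[Doc]:
    """Filter by source first, then cap the result count, so retrieval
    replaces context rather than inflating it."""
    candidates = [d for d in index if d.source in allowed_sources]
    return sorted(candidates, key=lambda d: d.score, reverse=True)[:top_k]

index = [
    Doc("api-docs", "POST /orders creates an order", 0.91),
    Doc("blog", "why we rewrote the service", 0.88),
    Doc("api-docs", "GET /orders/{id} fetches one order", 0.74),
    Doc("api-docs", "deprecated v1 endpoints", 0.40),
]
hits = retrieve(index, allowed_sources={"api-docs"}, top_k=2)
```

Here `hits` keeps the two best `api-docs` snippets and drops the high-scoring blog post entirely, rather than appending everything retrieval found.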

How this was built

These conventions are maintained with the same rigor they describe. The collection itself was developed with AI assistance and structured review — demonstrating the pattern it advocates. After the initial 14-convention release, the collection went through 6 review passes catching factual errors (wrong Python exception types, incorrect Zustand async behavior), broken code examples (invalid regex, missing config), and contradictions between conventions. Each fix is a separate commit with a clear description of what was wrong and why.

See commit history for the full review trail.

Contributing

Found a convention that works for you? Open a PR. The bar is:

  1. You've used it in real work (not just theorized about it)
  2. It follows the What/Why/How/Example/When-to-skip format
  3. It's a process convention, not a tool-specific tip

License

MIT — use these however you want.
