An AI agent's capability ceiling is set by the quality of the context it can read.
Every AI IDE reads project files. But without structure, agents hallucinate, forget conventions, and produce inconsistent code. Antigravity solves this:
| Problem | Without Antigravity | With Antigravity |
|---|---|---|
| Agent forgets coding style | Repeats the same corrections | Reads `.antigravity/conventions.md` → gets it right the first time |
| Onboarding a new codebase | Agent guesses at architecture | `ag refresh` scans & documents it automatically |
| Switching between IDEs | Different rules everywhere | One `.antigravity/` folder → every IDE reads it |
| Asking "how does X work?" | Agent reads random files | `ag ask` gives grounded answers from project context |
Architecture is files, not plugins. `.cursorrules`, `CLAUDE.md`, `.antigravity/rules.md`: these files are the cognitive architecture. Portable across any IDE and any LLM, with zero vendor lock-in.
```bash
# Install the CLI (lightweight, no LLM dependencies)
pip install git+https://github.com/study8677/antigravity-workspace-template.git#subdirectory=cli

# Inject the cognitive architecture into any project
ag init my-project && cd my-project

# Open in Cursor / Claude Code / Windsurf / any AI IDE → start prompting
```

That's it. Your IDE now reads `.antigravity/rules.md`, `.cursorrules`, `CLAUDE.md`, and `AGENTS.md` automatically.
```
ag init          Inject context files into any project (--force to overwrite)
      │
      ▼
.antigravity/    Shared knowledge base (every IDE reads from here)
      │
      ├──► ag refresh       Multi-agent scan → auto-generated conventions.md
      ├──► ag ask           Grounded Q&A about your project
      └──► ag start-engine  Full Think-Act-Reflect agent runtime
```
- **Knowledge Hub**: Multi-agent pipeline that scans your codebase, understands languages, frameworks, and structure, and writes living documentation. Powered by the OpenAI Agent SDK + LiteLLM; works with Gemini, OpenAI, Ollama, or any compatible API.
- **Zero-Config Tools**: Drop a `.py` file into `tools/`, add type hints and a docstring. The agent discovers it automatically at startup.
- **Infinite Memory**: Recursive summarization compresses conversation history. Run for hours without hitting token limits.
- **Multi-Agent Swarm**: Router-Worker orchestration delegates tasks to specialist agents (Coder, Reviewer, Researcher) and synthesizes the results.
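To make the Infinite Memory idea concrete, here is a minimal, hypothetical sketch of recursive summarization; `summarize` stands in for an LLM call, and the engine's actual implementation may differ:

```python
def summarize(text: str) -> str:
    # Placeholder: a real implementation would call the configured LLM.
    return text[:200] + "..." if len(text) > 200 else text

def compress_history(messages: list[str], max_messages: int = 10) -> list[str]:
    """Keep recent messages verbatim; fold older ones into a rolling summary."""
    if len(messages) <= max_messages:
        return messages
    older, recent = messages[:-max_messages], messages[-max_messages:]
    summary = summarize("\n".join(older))
    # The summary itself gets re-summarized on the next pass, so the
    # history stays bounded no matter how long the session runs.
    return [f"[summary of earlier conversation]\n{summary}"] + recent
```

Because the rolling summary is folded back into the history it compresses, the context size stays roughly constant across arbitrarily long sessions.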
| Command | What it does | LLM needed? |
|---|---|---|
| `ag init <dir>` | Inject cognitive architecture templates | No |
| `ag init <dir> --force` | Re-inject, overwriting existing files | No |
| `ag refresh` | Scan project, generate `.antigravity/conventions.md` | Yes |
| `ag ask "question"` | Answer questions about the project | Yes |
| `ag report "message"` | Log a finding to `.antigravity/memory/` | No |
| `ag log-decision "what" "why"` | Log an architectural decision | No |
| `ag start-engine` | Launch the full Agent Engine runtime | Yes |
All commands accept --workspace <dir> to target any directory.
```
antigravity-workspace-template/
├── cli/                      # ag CLI: lightweight, pip-installable
│   └── templates/            # .cursorrules, CLAUDE.md, .antigravity/, ...
└── engine/                   # Agent Engine: full runtime + Knowledge Hub
    └── antigravity_engine/
        ├── agent.py          # Think-Act-Reflect loop (Gemini / OpenAI / Ollama)
        ├── hub/              # Knowledge Hub (scanner → agents → pipeline)
        ├── tools/            # Drop a .py file → auto-discovered as a tool
        ├── agents/           # Specialist agents (Coder, Reviewer, Researcher)
        ├── swarm.py          # Multi-agent orchestration (Router-Worker)
        └── sandbox/          # Code execution (local / microsandbox)
```
- **CLI** (`pip install .../cli`): Zero LLM dependencies. Injects templates, logs reports & decisions offline.
- **Engine** (`pip install .../engine`): Full runtime. Powers `ag ask`, `ag refresh`, and `ag start-engine`. Supports Gemini, OpenAI, Ollama, or any OpenAI-compatible API.
```bash
# Install both for the full experience
pip install "git+https://...#subdirectory=cli"
pip install "git+https://...#subdirectory=engine"
```

```bash
ag init my-project

# Already initialized? Use --force to overwrite:
ag init my-project --force
```

Creates `.antigravity/rules.md`, `.cursorrules`, `CLAUDE.md`, `AGENTS.md`, and `.windsurfrules`. Each IDE reads its native config file, all pointing to the same `.antigravity/` knowledge base.
```bash
ag refresh --workspace my-project
```

Scans your codebase (languages, frameworks, structure), feeds the scan to a multi-agent pipeline, and writes `.antigravity/conventions.md`. Next time your IDE opens, it reads richer context.
```bash
ag ask "How does auth work in this project?"
```

Reads `.antigravity/` context, feeds it to a reviewer agent, and returns a grounded answer.
```python
# engine/antigravity_engine/tools/my_tool.py
import requests

def check_api_health(url: str) -> str:
    """Check if an API endpoint is responding."""
    return "up" if requests.get(url, timeout=5).ok else "down"
```

Drop in the file and restart. The agent discovers it automatically via type hints + docstrings.
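As a rough illustration of how such discovery could work (this is an assumed sketch, not the engine's actual loader): import every module in the tools directory and keep the public functions that carry both type hints and a docstring.

```python
import importlib.util
import inspect
import pathlib

def discover_tools(tools_dir: str) -> dict:
    """Collect public, annotated, documented functions from *.py files."""
    tools = {}
    for path in pathlib.Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, fn in inspect.getmembers(module, inspect.isfunction):
            # Type hints + a docstring are what make a function discoverable.
            if not name.startswith("_") and fn.__doc__ and fn.__annotations__:
                tools[name] = fn
    return tools
```

The type hints give the agent a machine-readable signature and the docstring gives it a natural-language description, which is why both are required.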
Architecture is encoded in files, so any agent that reads project files benefits:
| IDE | Config File |
|---|---|
| Cursor | .cursorrules |
| Claude Code | CLAUDE.md |
| Windsurf | .windsurfrules |
| VS Code + Copilot | .github/copilot-instructions.md |
| Gemini CLI / Codex | AGENTS.md |
| Cline | .clinerules |
| Google Antigravity | .antigravity/rules.md |
All generated by ag init. All reference .antigravity/ for shared project context.
**Knowledge Hub**: Multi-agent project intelligence pipeline
The Hub scans your project, identifies languages/frameworks/structure, and uses a multi-agent pipeline (OpenAI Agent SDK + LiteLLM) to generate living documentation:
```bash
# Generate conventions from a codebase scan
ag refresh

# Only scan files changed since the last refresh
ag refresh --quick

# Ask questions grounded in project context
ag ask "What testing patterns does this project use?"

# Log findings and decisions (no LLM needed)
ag report "Auth module needs refactoring"
ag log-decision "Use PostgreSQL" "Team has deep expertise"
```

Works with Gemini, OpenAI, Ollama, or any OpenAI-compatible endpoint.
**MCP Integration**: Connect external tools (GitHub, databases, filesystems)
```json
{
  "servers": [
    {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "enabled": true
    }
  ]
}
```

Save as `mcp_servers.json` and set `MCP_ENABLED=true` in `.env`. See the MCP docs.
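A hedged sketch of how a loader could consume this config (this is an assumption for illustration, not the engine's actual code): respect the `MCP_ENABLED` flag, then return only the servers marked `"enabled": true`.

```python
import json
import os

def load_mcp_servers(config_path: str = "mcp_servers.json") -> list[dict]:
    """Return the enabled MCP server entries, honoring MCP_ENABLED."""
    if os.environ.get("MCP_ENABLED", "false").lower() != "true":
        return []  # MCP disabled: no servers are started
    with open(config_path) as f:
        config = json.load(f)
    return [s for s in config.get("servers", []) if s.get("enabled", False)]
```

The per-server `enabled` flag lets you keep several servers configured while activating only the ones a given session needs.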
**Multi-Agent Swarm**: Router-Worker orchestration for complex tasks
```python
from antigravity_engine.swarm import SwarmOrchestrator

swarm = SwarmOrchestrator()
result = swarm.execute("Build and review a calculator")
# Routes to Coder → Reviewer → Researcher, synthesizes results
```

See the Swarm docs.
**Sandbox**: Configurable code execution environment
| Variable | Default | Options |
|---|---|---|
| `SANDBOX_TYPE` | `local` | `local` · `microsandbox` |
| `SANDBOX_TIMEOUT_SEC` | `30` | seconds |
See Sandbox docs.
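For illustration, a minimal sketch of reading these two variables with their documented defaults (the parsing and validation logic here is assumed, not taken from the engine):

```python
import os

def sandbox_config() -> dict:
    """Read sandbox settings from the environment, applying defaults."""
    sandbox_type = os.environ.get("SANDBOX_TYPE", "local")
    if sandbox_type not in ("local", "microsandbox"):
        raise ValueError(f"Unsupported SANDBOX_TYPE: {sandbox_type}")
    return {
        "type": sandbox_type,
        "timeout_sec": int(os.environ.get("SANDBOX_TIMEOUT_SEC", "30")),
    }
```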
Tested end-to-end with Moonshot Kimi K2.5 via NVIDIA's free API tier. Any OpenAI-compatible endpoint works the same way.
1. Configure `.env`:

```bash
OPENAI_BASE_URL=https://integrate.api.nvidia.com/v1
OPENAI_API_KEY=nvapi-your-key-here
OPENAI_MODEL=moonshotai/kimi-k2.5
```

2. Scan your project:

```bash
$ ag refresh --workspace .
Updated .antigravity/conventions.md
```

Generated output (by Kimi K2.5):
```markdown
# Project Conventions

## Primary Language & Frameworks
- **Language**: Python (5,135 files, 99%+ of codebase)
- **Infrastructure**: Docker, Docker Compose
- **CI/CD**: GitHub Actions
...
```

3. Ask questions:
```bash
$ ag ask "What LLM backends does this project support?"
Based on the context, the project supports NVIDIA API with Kimi K2.5.
The architecture uses OpenAI-compatible format, supporting any endpoint
including local LLMs via LiteLLM, NVIDIA NIM models, etc.
```

4. Log decisions (no LLM needed):
```bash
$ ag report "Auth module needs refactoring"
Logged report to .antigravity/memory/reports.md

$ ag log-decision "Use PostgreSQL" "Team has deep expertise"
Logged decision to .antigravity/decisions/log.md
```

Works with any OpenAI-compatible provider: NVIDIA, OpenAI, Ollama, vLLM, LM Studio, Groq, etc.
| 🇬🇧 English | docs/en/ |
| 🇨🇳 中文 | docs/zh/ |
| 🇪🇸 Español | docs/es/ |
Ideas are contributions too! Open an issue to report bugs, suggest features, or propose architecture.
- **Lling0000**: Major Contributor · Creative suggestions · Project administrator · Project ideation & feedback
- **Alexander Daza**: Sandbox MVP · OpenSpec workflows · Technical analysis docs · PHILOSOPHY
- **Chen Yi**: First CLI prototype · 753-line refactor · DummyClient extraction · Quick-start docs
- **Subham Sangwan**: Dynamic tool & context loading (#4) · Multi-agent swarm protocol (#3)
- **shuofengzhang**: Memory context window fix · MCP shutdown graceful handling (#28)
- **goodmorning10**: Enhanced `ag ask` context loading; added CONTEXT.md, AGENTS.md, and memory/*.md as context sources (#29)
MIT License. See LICENSE for details.
Built for the AI-native development era







