# Antigravity Workspace

**AI Workspace Template**

The missing cognitive layer for AI-powered IDEs.

One command. Every AI IDE becomes an expert on your codebase.

Language: English | 中文 | Español


*(Image: Before vs After Antigravity)*

## Why Antigravity?

> An AI agent's capability ceiling = the quality of the context it can read.

Every AI IDE reads project files. But without structure, agents hallucinate, forget conventions, and produce inconsistent code. Antigravity solves this:

| Problem | Without Antigravity | With Antigravity |
| --- | --- | --- |
| Agent forgets coding style | Repeats the same corrections | Reads `.antigravity/conventions.md` and gets it right the first time |
| Onboarding a new codebase | Agent guesses at the architecture | `ag refresh` scans & documents it automatically |
| Switching between IDEs | Different rules everywhere | One `.antigravity/` folder that every IDE reads |
| Asking "how does X work?" | Agent reads random files | `ag ask` gives grounded answers from project context |

**Architecture is files, not plugins.** `.cursorrules`, `CLAUDE.md`, `.antigravity/rules.md` — these files *are* the cognitive architecture: portable across any IDE and any LLM, with zero vendor lock-in.


Quick Start

# Install CLI (lightweight, no LLM dependencies)
pip install git+https://github.com/study8677/antigravity-workspace-template.git#subdirectory=cli

# Inject cognitive architecture into any project
ag init my-project && cd my-project

# Open in Cursor / Claude Code / Windsurf / any AI IDE β†’ start prompting

That's it. Your IDE now reads .antigravity/rules.md, .cursorrules, CLAUDE.md, AGENTS.md automatically.


Features at a Glance

  ag init           Inject context files into any project (--force to overwrite)
       β”‚
       β–Ό
  .antigravity/     Shared knowledge base β€” every IDE reads from here
       β”‚
       β”œβ”€β”€β–Ί ag refresh     Multi-agent scan β†’ auto-generated conventions.md
       β”œβ”€β”€β–Ί ag ask         Grounded Q&A about your project
       └──► ag start-engine   Full Think-Act-Reflect agent runtime

Knowledge Hub β€” Multi-agent pipeline that scans your codebase, understands languages/frameworks/structure, and writes living documentation. Powered by OpenAI Agent SDK + LiteLLM, works with Gemini, OpenAI, Ollama, or any compatible API.

Zero-Config Tools β€” Drop a .py file into tools/, add type hints and a docstring. The agent discovers it automatically at startup.

Infinite Memory β€” Recursive summarization compresses conversation history. Run for hours without hitting token limits.

Multi-Agent Swarm β€” Router-Worker orchestration delegates tasks to specialist agents (Coder, Reviewer, Researcher) and synthesizes results.
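The recursive summarization behind Infinite Memory can be sketched roughly like this. Names here are illustrative, not the engine's actual API, and a real `summarize` would call the LLM:

```python
# Hypothetical sketch of recursive summarization for conversation memory.
# Function names are illustrative, not the engine's actual API.

def summarize(text: str) -> str:
    """Stand-in for an LLM call that condenses text to a short summary."""
    return text[:200]  # a real implementation would ask the model to summarize

def compress_history(messages: list[str], max_chars: int = 1000) -> list[str]:
    """Repeatedly fold the oldest messages into a rolling summary so the
    total context size never exceeds max_chars."""
    while sum(len(m) for m in messages) > max_chars and len(messages) > 1:
        # Merge the two oldest entries into one summary entry; recent
        # messages stay verbatim so the agent keeps fresh detail.
        merged = summarize(messages[0] + "\n" + messages[1])
        messages = [merged] + messages[2:]
    return messages
```

Because summaries of summaries keep folding into the front of the list, the history stays bounded no matter how long the session runs.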


## CLI Commands

| Command | What it does | LLM needed? |
| --- | --- | --- |
| `ag init <dir>` | Inject cognitive architecture templates | No |
| `ag init <dir> --force` | Re-inject, overwriting existing files | No |
| `ag refresh` | Scan the project, generate `.antigravity/conventions.md` | Yes |
| `ag ask "question"` | Answer questions about the project | Yes |
| `ag report "message"` | Log a finding to `.antigravity/memory/` | No |
| `ag log-decision "what" "why"` | Log an architectural decision | No |
| `ag start-engine` | Launch the full Agent Engine runtime | Yes |

All commands accept `--workspace <dir>` to target any directory.


## Two Packages, One Workflow

```
antigravity-workspace-template/
├── cli/                     # ag CLI — lightweight, pip-installable
│   └── templates/           # .cursorrules, CLAUDE.md, .antigravity/, ...
└── engine/                  # Agent Engine — full runtime + Knowledge Hub
    └── antigravity_engine/
        ├── agent.py         # Think-Act-Reflect loop (Gemini / OpenAI / Ollama)
        ├── hub/             # Knowledge Hub (scanner → agents → pipeline)
        ├── tools/           # Drop in a .py file → auto-discovered as a tool
        ├── agents/          # Specialist agents (Coder, Reviewer, Researcher)
        ├── swarm.py         # Multi-agent orchestration (Router-Worker)
        └── sandbox/         # Code execution (local / microsandbox)
```

**CLI** (`pip install .../cli`) — zero LLM dependencies. Injects templates and logs reports & decisions offline.

**Engine** (`pip install .../engine`) — the full runtime. Powers `ag ask`, `ag refresh`, and `ag start-engine`. Supports Gemini, OpenAI, Ollama, or any OpenAI-compatible API.

```bash
# Install both for the full experience
pip install "git+https://...#subdirectory=cli"
pip install "git+https://...#subdirectory=engine"
```

## How It Works

### 1. `ag init` — Inject context files

```bash
ag init my-project
# Already initialized? Use --force to overwrite:
ag init my-project --force
```

Creates `.antigravity/rules.md`, `.cursorrules`, `CLAUDE.md`, `AGENTS.md`, and `.windsurfrules` — each IDE reads its native config file, and all of them point to the same `.antigravity/` knowledge base.

### 2. `ag refresh` — Build project intelligence

```bash
ag refresh --workspace my-project
```

Scans your codebase (languages, frameworks, structure), feeds the scan to a multi-agent pipeline, and writes `.antigravity/conventions.md`. The next time your IDE opens, it reads richer context.

### 3. `ag ask` — Query your project

```bash
ag ask "How does auth work in this project?"
```

Reads the `.antigravity/` context, feeds it to a reviewer agent, and returns a grounded answer.

### 4. Build tools — Zero config

```python
# engine/antigravity_engine/tools/my_tool.py
import requests

def check_api_health(url: str) -> str:
    """Check if an API endpoint is responding."""
    try:
        return "up" if requests.get(url, timeout=5).ok else "down"
    except requests.RequestException:
        return "down"
```

Drop in the file and restart. The agent discovers it automatically via its type hints and docstring.


## IDE Compatibility

The architecture is encoded in files — any agent that reads project files benefits:

| IDE | Config File |
| --- | --- |
| Cursor | `.cursorrules` |
| Claude Code | `CLAUDE.md` |
| Windsurf | `.windsurfrules` |
| VS Code + Copilot | `.github/copilot-instructions.md` |
| Gemini CLI / Codex | `AGENTS.md` |
| Cline | `.clinerules` |
| Google Antigravity | `.antigravity/rules.md` |

All generated by `ag init`. All reference `.antigravity/` for shared project context.


## Advanced Features

### Knowledge Hub — Multi-agent project intelligence pipeline

The Hub scans your project, identifies its languages, frameworks, and structure, and uses a multi-agent pipeline (OpenAI Agent SDK + LiteLLM) to generate living documentation:

```bash
# Generate conventions from a codebase scan
ag refresh

# Only scan files changed since the last refresh
ag refresh --quick

# Ask questions grounded in project context
ag ask "What testing patterns does this project use?"

# Log findings and decisions (no LLM needed)
ag report "Auth module needs refactoring"
ag log-decision "Use PostgreSQL" "Team has deep expertise"
```

Works with Gemini, OpenAI, Ollama, or any OpenAI-compatible endpoint.

### MCP Integration — Connect external tools (GitHub, databases, filesystems)

`mcp_servers.json`:

```json
{
  "servers": [
    {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "enabled": true
    }
  ]
}
```

Set `MCP_ENABLED=true` in `.env`. See the MCP docs.

### Multi-Agent Swarm — Router-Worker orchestration for complex tasks

```python
from antigravity_engine.swarm import SwarmOrchestrator

swarm = SwarmOrchestrator()
result = swarm.execute("Build and review a calculator")
# Routes to Coder → Reviewer → Researcher, then synthesizes the results
```

See the Swarm docs.

### Sandbox — Configurable code execution environment

| Variable | Default | Meaning |
| --- | --- | --- |
| `SANDBOX_TYPE` | `local` | Execution backend: `local` · `microsandbox` |
| `SANDBOX_TIMEOUT_SEC` | `30` | Per-execution timeout, in seconds |

See the Sandbox docs.
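As a rough idea of what a `local` sandbox with a hard timeout could look like, here is a minimal sketch using a fresh Python subprocess. The engine's actual `sandbox/` module may behave differently:

```python
# Hypothetical sketch of the "local" sandbox type: run a snippet in a
# fresh Python subprocess with a hard timeout (cf. SANDBOX_TIMEOUT_SEC).
# The engine's actual sandbox/ module may behave differently.
import subprocess
import sys

def run_sandboxed(code: str, timeout_sec: int = 30) -> str:
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_sec,
        )
        return result.stdout if result.returncode == 0 else result.stderr
    except subprocess.TimeoutExpired:
        return f"error: execution exceeded {timeout_sec}s"
```

Running in a separate process means a crash or infinite loop in the snippet cannot take down the agent itself.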


## Real-World Demo: NVIDIA API + Kimi K2.5

Tested end-to-end with Moonshot Kimi K2.5 via NVIDIA's free API tier. Any OpenAI-compatible endpoint works the same way.

### 1. Configure `.env`

```
OPENAI_BASE_URL=https://integrate.api.nvidia.com/v1
OPENAI_API_KEY=nvapi-your-key-here
OPENAI_MODEL=moonshotai/kimi-k2.5
```

### 2. Scan your project

```console
$ ag refresh --workspace .
Updated .antigravity/conventions.md
```

Generated output (by Kimi K2.5):

```markdown
# Project Conventions
## Primary Language & Frameworks
- **Language**: Python (5,135 files, 99%+ of codebase)
- **Infrastructure**: Docker, Docker Compose
- **CI/CD**: GitHub Actions
...
```

### 3. Ask questions

```console
$ ag ask "What LLM backends does this project support?"
Based on the context, the project supports NVIDIA API with Kimi K2.5.
The architecture uses OpenAI-compatible format, supporting any endpoint
including local LLMs via LiteLLM, NVIDIA NIM models, etc.
```

### 4. Log decisions (no LLM needed)

```console
$ ag report "Auth module needs refactoring"
Logged report to .antigravity/memory/reports.md

$ ag log-decision "Use PostgreSQL" "Team has deep expertise"
Logged decision to .antigravity/decisions/log.md
```

Works with any OpenAI-compatible provider: NVIDIA, OpenAI, Ollama, vLLM, LM Studio, Groq, etc.


## Documentation

- 🇬🇧 English: docs/en/
- 🇨🇳 中文: docs/zh/
- 🇪🇸 Español: docs/es/

## Contributing

Ideas are contributions too! Open an issue to report bugs, suggest features, or propose architecture.

## Contributors

- ⭐ **Lling0000**: Major contributor · Creative suggestions · Project administrator · Project ideation & feedback
- **Alexander Daza**: Sandbox MVP · OpenSpec workflows · Technical analysis docs · PHILOSOPHY
- **Chen Yi**: First CLI prototype · 753-line refactor · DummyClient extraction · Quick-start docs
- **Subham Sangwan**: Dynamic tool & context loading (#4) · Multi-agent swarm protocol (#3)
- **shuofengzhang**: Memory context window fix · Graceful MCP shutdown handling (#28)
- **goodmorning10**: Enhanced `ag ask` context loading, adding CONTEXT.md, AGENTS.md, and memory/*.md as context sources (#29)

## Star History

*(Star History chart)*

## License

MIT License. See LICENSE for details.

**📚 Full Documentation →**

*Built for the AI-native development era*
