inclusionAI/AWorld

AWorld: The Agent Harness for Your World

"The Next Frontier for AI is Your Expertise"



General AI often hits a "wall of context"—the nuanced data, workflows, and intuition that define your world. An agent's true power lies not in the model alone, but in its Agent Harness: the framework orchestrating its tools, memory, context, and execution.

This is the AWorld Thesis: A powerful harness is not enough. True AI scaling is unlocked only when experts like you embed your invaluable knowledge, building the gate through that wall.

AWorld is the platform designed for this singular purpose. We provide a complete, battle-tested Harness as the recipe for you, the expert, to forge your knowledge into a fleet of autonomous agents. Together, we move beyond AI's generic promise to create robust, precise applications that master your specific domain.

From Expertise to Product

See what happens when expert knowledge is encoded into reusable Skills. The creations below are orchestrated by the AWorld Agent, demonstrating our core scaling law: as the community contributes more expertise, the entire ecosystem becomes more powerful.

This is what's possible today. Imagine what we'll build with your expertise.

| Capability | Expertise | See it in Action | Recipe |
| --- | --- | --- | --- |
| Create App | • Auto-creation by base model<br>• Auto-evaluation by UI Evaluation Skill | App create demo | View Recipe |
| Create Video | • Auto-creation by Remotion Skill<br>• Human evaluation | Video create demo | View Recipe |

Your Journey with AWorld-CLI

The journey from an idea to an evolved, autonomous agent begins at your fingertips.

Install and Activate

Install once, configure globally, and run anywhere.

Install AWorld-CLI

```shell
git clone https://github.com/inclusionAI/AWorld && cd AWorld
conda create -n aworld_env python=3.11 -y && conda activate aworld_env
pip install -e . && cd aworld-cli && pip install -e .
```

Config & Launch

```shell
cd <your-working-directory>
aworld-cli --config
```

Once configured, simply type aworld-cli in your terminal to start your journey.

Alternatively, you can configure by creating a .env file in your working directory with your model and API settings. See Environment configuration for details.
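As an illustration, a minimal `.env` might look like the fragment below. The variable names here are illustrative assumptions, not AWorld's confirmed keys — check the Environment configuration docs for the exact names your setup expects.

```
# Illustrative .env — key names are assumptions; see Environment configuration
LLM_PROVIDER=openai
LLM_MODEL_NAME=gpt-4o
LLM_API_KEY=sk-your-key
LLM_BASE_URL=https://api.openai.com/v1
```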

Automate Creation with AWorld-CLI

AWorld-CLI goes beyond simple scaffolding. It acts as a central brain, the AWorld Agent, which orchestrates a team of specialized sub-agents to build, evaluate, and even evolve other agents autonomously.

This multi-agent system works in concert to turn your ideas into reality:

| Agent Name | Role & Core Function |
| --- | --- |
| 👑 AWorld Agent | The Orchestrator: the central brain that interprets user goals, creates a plan, and delegates tasks to the appropriate sub-agents. It manages the entire workflow from start to finish. |
| 🧑‍💻 Developer | The Builder: the master craftsman responsible for writing, debugging, and refactoring code. |
| 🧐 Evaluator | The Judge: the quality-assurance expert. It assesses the Developer's output against objective criteria, providing the critical feedback required for the evolution loop. |

The Evolution Loop: Build -> Evaluate -> Evolve

Imagine you ask: "Help me create an English word learning mini-app with a UI quality score above 0.9."

  • The Developer Builds: The Developer analyzes requirements and writes code (e.g., HTML) using CAST.
  • The Evaluator Judges: The Evaluator inspects the output using our verified Skill.
  • The Loop Refines: If the score is below target (e.g., 0.9), AWorld instructs the Developer to fix specific issues identified by the Evaluator. This loop continues until your criteria are met.
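The loop above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in — `evolve`, the toy developer, and the toy evaluator are not AWorld's API, just the control flow the section describes.

```python
# Minimal sketch of the Build -> Evaluate -> Evolve loop.
# `developer` and `evaluator` are hypothetical stand-ins for the
# AWorld sub-agents; the real AWorld wiring differs.

def evolve(developer, evaluator, goal, target=0.9, max_rounds=5):
    """Iterate build -> evaluate until the score meets the target."""
    artifact, feedback, score = None, None, 0.0
    for _ in range(max_rounds):
        artifact = developer(goal, artifact, feedback)  # build / refine
        score, feedback = evaluator(artifact)           # judge
        if score >= target:                             # criteria met
            break
    return artifact, score

# Toy agents: the "developer" improves quality each round,
# the "evaluator" scores the artifact and names the gap.
def toy_developer(goal, prev, feedback):
    return (prev or 0.0) + 0.4

def toy_evaluator(artifact):
    return min(artifact, 1.0), f"quality at {artifact:.1f}; raise UI polish"

app, score = evolve(toy_developer, toy_evaluator, "word-learning app")
print(score)  # reaches the 0.9 target within three rounds
```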

📹 See the Self-Evolution Loop in Action

aworld_cli_intro.mp4

No Evaluation, No Evolution

For an agent to improve, it must first understand what "good" looks like. This evaluation is the core of our autonomous evolution loop, but it's a complex challenge. It ranges from objective tasks with clear metrics (e.g., solving a math problem) to subjective ones requiring human preference. Real-world evolution is further complicated by massive codebases, limited context windows, and the need for precise iteration.

AWorld provides the complete infrastructure to master both evaluation scenarios, turning your expertise into the definitive driving force that steers an agent through the entire evolution loop.

CAST: Conquering Code Complexity

Agents often fail because of overwhelming code complexity. We built CAST (Code Abstract Syntax Tree) to solve this. Instead of seeing a flat text file, CAST gives the agent an architectural blueprint of the code. This enables:

  • Hierarchical Navigation: Instantly understand code structure and purpose without getting lost in implementation details.
  • Nearly Infinite Context: Intelligently compresses code to feed the agent only relevant information, breaking the context window limitation.
  • Surgical Code Modification: Perform precise changes with full dependency awareness, avoiding the clumsy errors of "blind" text replacement.
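CAST itself is AWorld's own, richer representation and is not reproduced here. As a sketch of the underlying idea — handing the agent a compressed structural outline instead of flat source text — the same effect can be illustrated with Python's stdlib `ast` module:

```python
import ast

# Sketch of the CAST idea using Python's stdlib `ast` module: extract a
# hierarchical outline (classes/functions plus first docstring line)
# instead of feeding the agent the flat source text.

def outline(source: str):
    """Return a compressed, structural view of the code."""
    entries = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "def"
            doc = ast.get_docstring(node)
            summary = doc.splitlines()[0] if doc else "..."
            entries.append(f"{kind} {node.name}: {summary}")
    return entries

src = '''
class Quiz:
    """An English-word quiz."""
    def next_card(self):
        """Return the next flashcard."""
        return {}
'''
print(outline(src))
# -> ['class Quiz: An English-word quiz.', 'def next_card: Return the next flashcard.']
```

An agent reading this outline can navigate to the one function it needs, then request only that body — the "nearly infinite context" effect described above.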

Your Expertise as the Evaluator

CAST provides the technical capability for change, but your knowledge provides the direction. AWorld's Shared Skill System makes your expertise the ultimate measure of quality.

Automated Evaluation: The Evaluator agent judges performance and identifies flaws, setting a clear, objective target for the Developer agent. This creates a powerful synergy: the Evaluator sets the target, and the Developer uses the same knowledge to hit it.

Human Evaluation: For tasks demanding subjective judgment, your intuition is the ceiling. You are the ultimate evaluator. Provide natural language feedback at any stage, and the AWorld agent will interpret it as a high-priority instruction for the next evolutionary cycle.

Whether it's an automated score from a Skill you contributed or your direct manual guidance, in AWorld, precise feedback drives precise evolution.
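To make that feedback-merging concrete, here is a small sketch of how an automated Skill score and optional human feedback could be folded into one instruction for the next cycle. The function and its behavior are illustrative assumptions, not AWorld's actual interface.

```python
# Sketch: merge an automated Skill score with optional human feedback
# into a single instruction for the next evolution cycle.
# `next_instruction` is an illustrative helper, not AWorld's API.

def next_instruction(skill_score: float, target: float, human_note=None):
    parts = []
    if human_note:  # human feedback is treated as highest priority
        parts.append(f"[priority] {human_note}")
    if skill_score < target:
        parts.append(f"raise evaluated score from {skill_score:.2f} to >= {target:.2f}")
    return "; ".join(parts) or "criteria met"

print(next_instruction(0.82, 0.9, "make the buttons larger"))
# -> [priority] make the buttons larger; raise evaluated score from 0.82 to >= 0.90
```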

A Proven Harness: Benchmark Excellence

The following top rankings on competitive benchmarks are more than just agent achievements—they are direct validation of the AWorld **Harness**. They prove our robust, battle-tested infrastructure provides the essential foundation for building state-of-the-art AI.

Agent Benchmarking

| Category | Achievement | Performance | Key Innovation | Date |
| --- | --- | --- | --- | --- |
| 🤖 Agent<br>Try Online | GAIA Benchmark Excellence | GAIA<br>Pass@1: 67.89<br>Pass@3: 83.49<br>(109 tasks)<br>Code | Multi-agent system stability & orchestration<br>Paper | 2025/08/06 |
| 🧠 Reasoning | IMO 2025 Problem Solving | IMO<br>5/6 problems solved in 6 hours<br>Code | Multi-agent collaboration beats solo models | 2025/07/25 |
| 🖼️ Multi-Modal | OSWorld Rank 1st | OSWorld<br>58.0% Success Rate<br>Code | The more tools the better? | 2025/09/18 |
| 🖼️ Multi-Modal | VisualWebArena Rank 1st in September | VWA<br>36.5% Success Rate<br>Code | Automated tool generation<br>Paper | 2025/09/25 |
| 🔍 Deep-Search | Xbench Excellence | xbench<br>Pass@1: 51<br>Pass@3: 61<br>Code | AWorld has its own context engine: Amni | 2025/10/23 |

Data Synthesis

  1. FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling arxiv, 2025. paper, code, model, dataset

    Zengzhuang Xu, Bingguang Hao, Zechuan Wang, Yuntao Wen, Maolin Wang, etc.

  2. From Failure to Mastery: Generating Hard Samples for Tool-use Agents arxiv, 2026. paper, code, model, dataset

    Bingguang Hao, Zengzhuang Xu, Yuntao Wen, Xinyi Xu, Yang Liu, etc.

Model Training

  1. AWorld: Orchestrating the Training Recipe for Agentic AI. arxiv, 2025. paper, code, model

    Chengyue Yu, Siyuan Lu, Chenyi Zhuang, Dong Wang, Qintong Wu, etc.

  2. FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement. arxiv, 2025. paper, model

    Bingguang Hao, Maolin Wang, Zengzhuang Xu, Cunyin Peng, etc.

  3. Exploring Superior Function Calls via Reinforcement Learning. arxiv, 2025. paper, code

    Bingguang Hao, Maolin Wang, Zengzhuang Xu, Yicheng Chen, etc.

  4. RAG-R1 : Incentivize the Search and Reasoning Capabilities of LLMs through Multi-query Parallelism. arxiv, 2025. paper, code, model

    Zhiwen Tan, Jiaming Huang, Qintong Wu, Hongxuan Zhang, Chenyi Zhuang, Jinjie Gu

  5. V2P: From Background Suppression to Center Peaking for Robust GUI Grounding Task. arxiv, 2025. paper, code

    Jikai Chen, Long Chen, Dong Wang, Leilei Gan, Chenyi Zhuang, Jinjie Gu

  6. Don’t Just Fine-tune the Agent, Tune the Environment arxiv, 2025. paper

    Siyuan Lu, Zechuan Wang, Hongxuan Zhang, Qintong Wu, Leilei Gan, Chenyi Zhuang, etc.

Meta Learning

  1. Profile-Aware Maneuvering: A Dynamic Multi-Agent System for Robust GAIA Problem Solving by AWorld. arxiv, 2025. paper, code

    Zhitian Xie, Qintong Wu, Chengyue Yu, Chenyi Zhuang, Jinjie Gu

  2. Recon-Act: A Self-Evolving Multi-Agent Browser-Use System via Web Reconnaissance, Tool Generation, and Task Execution. arxiv, 2025. paper, code

    Kaiwen He, Zhiwei Wang, Chenyi Zhuang, Jinjie Gu

Contributing

Our roadmap includes expanding our AI for Science & Business initiative, deepening our self-evolution capabilities, and growing our library of community-contributed Skills.

We warmly welcome developers, researchers, and domain experts to join us. Whether you're enhancing the framework or contributing a Skill from your field of expertise, your work is valuable.

For academic citations, please use the following BibTeX entry:

@misc{yu2025aworldorchestratingtrainingrecipe,
      title={AWorld: Orchestrating the Training Recipe for Agentic AI}, 
      author={Chengyue Yu and Siyuan Lu and Chenyi Zhuang and Dong Wang and Qintong Wu and Zongyue Li and Runsheng Gan and Chunfeng Wang and Siqi Hou and Gaochi Huang and Wenlong Yan and Lifeng Hong and Aohui Xue and Yanfeng Wang and Jinjie Gu and David Tsai and Tao Lin},
      year={2025},
      eprint={2508.20404},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.20404}, 
}
