This repository contains the code for the paper "STeCa: Step-level Trajectory Calibration for LLM Agent Learning"
In this work, we propose Step-Level Trajectory Calibration (STeCa), a novel framework for improving LLM agents. STeCa identifies suboptimal actions through step-level reward comparison during exploration, then constructs calibrated trajectories via LLM-driven reflection, enabling agents to learn from improved decision-making processes. These calibrated trajectories, together with successful trajectories, are used for reinforced training.
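As a rough illustration of the calibration idea, the sketch below compares the step-level reward of an explored step against a reference step and triggers LLM reflection when the gap is large. All names here (`Step`, `calibrate_trajectory`, the `reflect` callback, the `margin` threshold) are hypothetical and do not reflect the interfaces of the released code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    observation: str
    action: str
    reward: float  # step-level reward estimate (assumed available)

def calibrate_trajectory(
    explored: List[Step],
    reference: List[Step],
    reflect: Callable[[str, str], str],  # LLM reflection: (observation, bad_action) -> revised action
    margin: float = 0.1,                 # hypothetical deviation threshold
) -> List[Step]:
    """Flag steps whose reward falls behind the reference by more than
    `margin`, and replace them with an LLM-reflected action."""
    calibrated: List[Step] = []
    for exp, ref in zip(explored, reference):
        if ref.reward - exp.reward > margin:
            # Suboptimal step: ask the LLM to reflect and propose a revised action.
            new_action = reflect(exp.observation, exp.action)
            calibrated.append(Step(exp.observation, new_action, ref.reward))
        else:
            calibrated.append(exp)
    return calibrated
```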
Coming soon...
Please refer to dataset/ for the released data for ALFWorld and VirtualHome.
If you find this repo helpful, please cite our paper: