Commit a2e8cc7

fix: token limit inversion (#923)
1 parent 2000b49 commit a2e8cc7

File tree

1 file changed: +1 -1 lines changed


src/codegen/extensions/tools/observation.py

+1 -1
@@ -56,7 +56,7 @@ def render_as_string(self, max_tokens: int = 8000) -> str:
         their string output format.
         """
         rendered = json.dumps(self.model_dump(), indent=2)
-        if 3 * len(rendered) > max_tokens:
+        if len(rendered) > (max_tokens * 3):
             logger.error(f"Observation is too long to render: {len(rendered) * 3} tokens")
             return rendered[:max_tokens] + "\n\n...truncated...\n\n"
         return rendered
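
The old check multiplied the character count by three before comparing it to the token budget, so an observation was flagged as too long once it exceeded roughly `max_tokens / 3` characters; the fixed check allows roughly three characters per token. Below is a minimal, self-contained sketch of the corrected behavior, assuming the ~3 characters-per-token heuristic implied by the diff; the `Observation` stand-in, its `content` field, and the logger setup are illustrative placeholders rather than the repo's actual definitions.

```python
# Minimal sketch of the corrected truncation check, not the repo's full Observation class.
import json
import logging

from pydantic import BaseModel

logger = logging.getLogger(__name__)


class Observation(BaseModel):
    """Illustrative stand-in for the real Observation model."""

    content: str = ""

    def render_as_string(self, max_tokens: int = 8000) -> str:
        rendered = json.dumps(self.model_dump(), indent=2)
        # Buggy version: `if 3 * len(rendered) > max_tokens` truncated once the
        # rendered JSON exceeded ~max_tokens / 3 characters. The fix budgets
        # roughly 3 characters per token instead (log message kept as in the diff).
        if len(rendered) > (max_tokens * 3):
            logger.error(f"Observation is too long to render: {len(rendered) * 3} tokens")
            return rendered[:max_tokens] + "\n\n...truncated...\n\n"
        return rendered


if __name__ == "__main__":
    short = Observation(content="ok")
    long_obs = Observation(content="x" * 30_000)
    assert "...truncated..." not in short.render_as_string()
    assert "...truncated..." in long_obs.render_as_string(max_tokens=8000)
```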
