Logging added for prompts, raw responses and parsing errors #73
Conversation
Walkthrough
Added module-level logging to the LLM generator module; debug logs now record constructed prompts and raw responses for both the Llama and OpenAI flows, and JSON decode failures are logged at error level before being re-raised.
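A minimal sketch of the pattern the walkthrough describes — `generate` and `call_model` are illustrative names, not the module's actual API:

```python
import json
import logging

# Module-level logger, as introduced by this PR.
logger = logging.getLogger(__name__)

def generate(prompt: str, call_model) -> dict:
    """Log the prompt and raw response at DEBUG; log parse failures at ERROR."""
    logger.debug("Generated prompt:\n%s", prompt)
    response = call_model(prompt)
    logger.debug("Raw response:\n%s", response)
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        # Preserve the raw payload and the traceback, then re-raise.
        logger.error("Failed to parse response as JSON:\n%s", response, exc_info=True)
        raise
```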
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 2
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/agentunit/generators/llm_generator.py (1)
120-131: Guard prompt/response logging to avoid sensitive data leakage.
These logs can include PII or secrets from prompts/responses; even at debug level, this can end up in centralized logs. Consider making payload logging opt‑in and/or truncating/redacting.

🔒 Suggested hardening (opt‑in + truncation):
```diff
 @dataclass
 class GeneratorConfig:
     """Configuration for dataset generation."""
@@
     include_edge_cases: bool = True
     edge_case_ratio: float = 0.3
+    log_payloads: bool = False
+    log_payload_max_chars: int = 2000
@@
-        logger.debug("Llama generated prompt:\n%s", prompt)
+        if self.config.log_payloads:
+            logger.debug(
+                "Llama generated prompt:\n%s",
+                prompt[: self.config.log_payload_max_chars],
+            )
@@
-        logger.debug("Llama raw response:\n%s", response)
+        if self.config.log_payloads:
+            logger.debug(
+                "Llama raw response:\n%s",
+                response[: self.config.log_payload_max_chars],
+            )
@@
-        logger.debug(
-            "OpenAI generated prompt (messages):\n%s",
-            json.dumps(messages, indent=2)
-        )
+        if self.config.log_payloads:
+            logger.debug(
+                "OpenAI generated prompt (messages):\n%s",
+                json.dumps(messages, indent=2)[: self.config.log_payload_max_chars],
+            )
@@
-        logger.debug("OpenAI raw response text:\n%s", response_text)
+        if self.config.log_payloads:
+            logger.debug(
+                "OpenAI raw response text:\n%s",
+                response_text[: self.config.log_payload_max_chars],
+            )
```

Also applies to: 265-279
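If the suggested fields were adopted, enabling payload logging would be an explicit choice at configuration time. A hypothetical usage sketch, assuming the new `GeneratorConfig` fields land as proposed:

```python
from agentunit.generators.llm_generator import GeneratorConfig

# Payload logging stays off by default; callers opt in explicitly
# and cap how much of each payload reaches the log sinks.
config = GeneratorConfig(
    log_payloads=True,
    log_payload_max_chars=500,
)
```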
🤖 Fix all issues with AI agents
In `@src/agentunit/generators/llm_generator.py`:
- Around lines 163-169: The `json.JSONDecodeError` handler logs "OpenAI", which is misleading; update the `logger.error` message in that except block to reference "Llama" instead, preserving the raw response argument and `exc_info=True` so the same contextual data is logged. Also update the commented-out msg string, if present, to match the corrected Llama wording (see the sketch after this list).
- Around lines 313-319: The except block catching `json.JSONDecodeError` places the `raise` outside the except, causing "No active exception to reraise", and the log message mislabels the source as "Llama". Move the `raise` into the except block so the original `JSONDecodeError` is re-raised, and change the `logger.error` label to "OpenAI" (keep using `response_text` and `exc_info=True` to preserve details).
Codecov Report
❌ Patch coverage is
aviralgarg05
left a comment
Also fix the lint issue; the rest is good to go.
aviralgarg05
left a comment
LGTM! Thanks for your contribution.
Thank you for the review and for merging the PR!
Description
This PR improves observability by adding logging at different levels (debug and error).
Fixes #71
Type of Change
Changes Made
Testing
Test Configuration
Test Results
```
DEBUG | agentunit.generators.llm_generator | Generated prompt logged successfully
DEBUG | agentunit.generators.llm_generator | Raw LLM response logged before parsing
ERROR | agentunit.generators.llm_generator | JSON parsing failure logged with traceback
```
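A plausible logging configuration that would produce the `LEVEL | logger | message` layout shown above — the exact format string used in testing is an assumption:

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,  # surface the debug-level prompt/response logs
    format="%(levelname)s | %(name)s | %(message)s",
)
```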
Code Quality
Documentation
Breaking Changes
Dependencies
Performance Impact
Additional Context
The added logging does not affect normal execution.
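One reason the overhead is negligible: the new logs use lazy `%`-style formatting (as in the diff above), so the message is only rendered when the DEBUG level is actually enabled. A short sketch of the distinction, with a placeholder payload:

```python
import logging

logger = logging.getLogger("agentunit.generators.llm_generator")

prompt = "..."  # placeholder payload

# Lazy: the string is interpolated only if DEBUG is enabled,
# so disabled debug logs cost little on the hot path.
logger.debug("Llama generated prompt:\n%s", prompt)

# Eager (avoided here): the f-string is built even when DEBUG is off.
logger.debug(f"Llama generated prompt:\n{prompt}")
```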
Checklist
Reviewer Notes
Please pay special attention to:
Summary by CodeRabbit