feat(core): Capture raw provider response #33922
base: master
Conversation
CodSpeed Performance Report

Merging #33922 will improve performance by 31.32%.

| | Mode | Benchmark | BASE | HEAD | Change |
|---|---|---|---|---|---|
| ⚡ | WallTime | test_async_callbacks_in_sync | 24.3 ms | 18.5 ms | +31.32% |
| ⚡ | WallTime | test_import_time[BaseChatModel] | 510.7 ms | 460.5 ms | +10.89% |
| ⚡ | WallTime | test_import_time[CallbackManager] | 446.1 ms | 403.7 ms | +10.51% |
| ⚡ | WallTime | test_import_time[ChatPromptTemplate] | 580.5 ms | 524.1 ms | +10.75% |
| ⚡ | WallTime | test_import_time[Document] | 184.8 ms | 166.1 ms | +11.28% |
| ⚡ | WallTime | test_import_time[HumanMessage] | 262.2 ms | 237.3 ms | +10.47% |
| ⚡ | WallTime | test_import_time[InMemoryVectorStore] | 592.4 ms | 537.6 ms | +10.19% |
| ⚡ | WallTime | test_import_time[PydanticOutputParser] | 519.2 ms | 458.7 ms | +13.19% |
| ⚡ | WallTime | test_import_time[tool] | 504.8 ms | 447.2 ms | +12.87% |
Footnotes
1. 15 benchmarks were skipped, so the baseline results were used instead.
I will handle the failing checks.

So far, all the tests (`make format`, `make lint`, `make test`) pass locally, but there are still some CI failures I need to check.
Status Update

✅ All core tests passing

Remaining: Pydantic Compatibility Snapshots

The following tests need snapshot updates due to the new `raw_response` field. The schema changes are expected: the new field is now included in the serialized message format. Could a maintainer help regenerate these snapshots? I attempted to update them locally but encountered environment setup issues on Windows/WSL. Thank you!
Summary
Resolves #33884
Design Decisions
1. Configuration Location: `include_raw_response` lives in the model wrapper classes, NOT the core message classes (see the usage sketch after this list).
2. Data Structure Flexibility: `raw_response` is typed as `dict[str, Any] | list[dict[str, Any]] | None`.
3. Stream Processing Strategy: chunks are aggregated via `add_ai_message_chunks()` only when `chunk_position="last"` is received.
4. Backward Compatibility: the field defaults to `None` and capture is opt-in, so existing behavior is unchanged.
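A minimal end-user sketch of decisions 1 and 4, assuming the `include_raw_response` flag and `raw_response` field land as described in this PR (they are not part of a released `langchain-openai` API yet):

```python
# Sketch only: `include_raw_response` and `raw_response` are the additions proposed
# in this PR, not released API.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", include_raw_response=True)

msg = llm.invoke("Hello!")
# With the flag enabled, the provider's untouched payload is stored on the message;
# with the flag off (the presumed default), `raw_response` stays None and existing
# behavior is unchanged.
print(msg.raw_response)
```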
Key Changes
1. Core Message System (`libs/core/langchain_core/messages/ai.py`)

AIMessage Class
- New field: `raw_response: dict[str, Any] | list[dict[str, Any]] | None = None`
- `__init__()` updated to accept and store the `raw_response` parameter
- `dict()` method updated to include non-None `raw_response` values

AIMessageChunk Class
- New field: `raw_response: dict[str, Any] | None = None`

New Function: `add_ai_message_chunks()`
- Aggregates multiple `AIMessageChunk` objects into a single final `AIMessage`
- Key features: `raw_response` merging, triggered only when the `chunk_position="last"` marker is received

Example Usage:
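A minimal sketch of the aggregation path, using the field names and the `chunk_position="last"` trigger described above (illustrative only, not the PR's exact code):

```python
# Illustrative only: `raw_response` is the field added by this PR; `chunk_position`
# is the end-of-stream marker referenced in the description above.
from langchain_core.messages import AIMessageChunk
from langchain_core.messages.ai import add_ai_message_chunks

chunks = [
    AIMessageChunk(content="Hello, "),
    AIMessageChunk(content="world!"),
    AIMessageChunk(
        content="",
        chunk_position="last",
        raw_response={"id": "chatcmpl-123", "usage": {"total_tokens": 7}},
    ),
]

merged = add_ai_message_chunks(*chunks)
print(merged.content)       # "Hello, world!"
print(merged.raw_response)  # raw provider payload carried onto the merged message
```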
2. Message Utilities (`libs/core/langchain_core/messages/utils.py`)

`message_chunk_to_message()` Function
- Copies `raw_response` from the chunk to the final message, as sketched below
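A sketch of that conversion path (`raw_response` is the new field from this PR, so the exact behavior depends on the merged implementation):

```python
# Sketch: verify the raw payload survives the chunk-to-message conversion.
from langchain_core.messages import AIMessageChunk
from langchain_core.messages.utils import message_chunk_to_message

chunk = AIMessageChunk(content="done", raw_response={"model": "gpt-4o-mini"})
message = message_chunk_to_message(chunk)
assert message.raw_response == chunk.raw_response
```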
3. Public API Exports (`libs/core/langchain_core/messages/__init__.py`)

`__all__` List
- Added `"add_ai_message_chunks"`

`_dynamic_imports` Dictionary
- Added `"add_ai_message_chunks": "ai"`
4. OpenAI Integration (`libs/partners/openai/langchain_openai/chat_models/base.py`)

BaseChatOpenAI Class

New configuration field (line ~686): `include_raw_response`

Implementation locations (see the streaming sketch after this list):
- `_convert_chunk_to_generation_chunk()` (~line 1071): `if self.include_raw_response: message_chunk.raw_response = chunk`
- `_stream()` (~line 1274): `if self.include_raw_response and chunk_position == "last": ...`
- `_create_chat_result()` (~line 1440): `if self.include_raw_response and isinstance(message, AIMessage): ...`
- `_astream()` (~line 1529)
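A streaming sketch under the same assumptions as the earlier example; raw payloads are only merged into the final message once the `chunk_position="last"` chunk arrives:

```python
# Sketch only: `include_raw_response` and `raw_response` are this PR's additions.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", include_raw_response=True)

final = None
for chunk in llm.stream("Tell me a joke"):
    # Chunks add together as usual; intermediate chunks carry no merged raw payload.
    final = chunk if final is None else final + chunk

# After the last chunk, the merged message carries the raw provider payload(s).
print(final.raw_response)
```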
Scope
Currently, only the OpenAI model class has been adapted.
If this PR is accepted, similar adaptations will be implemented for other partner integrations (Anthropic, Google, Bedrock, etc.) in follow-up PRs.
Testing
Unit tests were added under `tests/unit_tests/core/` and `tests/unit_tests/partners/openai/`.
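A hypothetical test sketch (not the PR's actual test code) of the kind of round-trip the new field should support:

```python
from langchain_core.messages import AIMessage


def test_raw_response_round_trip() -> None:
    msg = AIMessage(content="hi", raw_response={"id": "resp-1"})
    assert msg.raw_response == {"id": "resp-1"}
    # The default stays None, preserving backward compatibility.
    assert AIMessage(content="hi").raw_response is None
```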