fix(core): include llm_output in streaming LLMResult #34060
Summary
Fixes #34057 - Ensures that streaming mode includes the `llm_output` field in `LLMResult`, fixing broken callback integrations.

Description
Previously, when using streaming mode (`stream()` or `astream()`), the `LLMResult` passed to `on_llm_end` callbacks was missing the `llm_output` field. This caused issues for callback handlers like Langfuse that rely on this field to extract metadata such as model names.

This PR adds `llm_output={}` to all streaming `on_llm_end` calls in both `BaseLLM` and `BaseChatModel`, ensuring consistency with non-streaming behavior (see the sketch below).
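A minimal sketch (not the library's internal code) of the observable difference in the `LLMResult` that `on_llm_end` receives, before and after this change:

```python
# Illustrative only: contrasts the LLMResult a streaming on_llm_end callback
# received before this PR with what it receives after.
from langchain_core.outputs import Generation, LLMResult

# Before: streaming paths built LLMResult without llm_output, which defaults
# to None, so handlers calling .llm_output.get("model_name") crashed.
before = LLMResult(generations=[[Generation(text="hello")]])
assert before.llm_output is None

# After: streaming paths pass llm_output={} explicitly, matching the
# non-streaming code path; .get("model_name") now safely returns None.
after = LLMResult(generations=[[Generation(text="hello")]], llm_output={})
assert after.llm_output == {}
```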
Changes

- Updated `BaseLLM.stream()` to include `llm_output={}` in `LLMResult`
- Updated `BaseLLM.astream()` to include `llm_output={}` in `LLMResult`
- Updated `BaseChatModel.stream()` to include `llm_output={}` in `LLMResult`
- Updated `BaseChatModel.astream()` to include `llm_output={}` in `LLMResult`
- Added `test_stream_llm_result_contains_llm_output()` to verify the fix
Test Plan

- Verified the `llm_output` field is present and is a dict in streaming mode
- Tested with `GenericFakeChatModel` using a callback handler
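The added test roughly takes the following shape. This is a sketch only: the `_CaptureHandler` class and its field names are illustrative, not copied from the PR.

```python
# Sketch of the new test; the capturing handler is a hypothetical helper.
from typing import Optional

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.language_models import GenericFakeChatModel
from langchain_core.messages import AIMessage
from langchain_core.outputs import LLMResult


class _CaptureHandler(BaseCallbackHandler):
    """Records the LLMResult passed to on_llm_end."""

    result: Optional[LLMResult] = None

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        self.result = response


def test_stream_llm_result_contains_llm_output() -> None:
    handler = _CaptureHandler()
    model = GenericFakeChatModel(messages=iter([AIMessage(content="hello")]))
    # Consume the stream fully so on_llm_end fires with the final LLMResult.
    for _ in model.stream("hi", config={"callbacks": [handler]}):
        pass
    assert handler.result is not None
    assert isinstance(handler.result.llm_output, dict)
```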