The problem
If you have an assistant configured with an LLM-based conversation agent and the option "Prefer handling commands locally" enabled, then, since the 2025.3 beta, commands are always sent to the LLM-based agent, even when they could have been processed locally.
How to reproduce
1. Create an assistant with an LLM-based conversation agent.
2. Tick the "Prefer handling commands locally" box (it should be ticked by default).
3. Save it.
4. Open the Assist chat box. You do not need special voice hardware to reproduce this issue; you can simply type into it.
5. Write a very simple, locally understood command, such as "Turn on the lights in the living room" (or even just "Hello").
Result
The command is sent to the LLM instead of being processed locally.
Expected results
"Prefer handling commands locally" works as expected: commands are processed locally when possible, and only sent to the LLM otherwise.
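To illustrate, the expected routing can be sketched like this (a minimal illustration with a made-up intent table, not Home Assistant's actual implementation): the local intent matcher is tried first, and the LLM agent is only queried when no local match exists.

```python
# Sketch of the expected "prefer local" routing. LOCAL_INTENTS and
# query_llm() are hypothetical stand-ins for the built-in intent
# matcher and the configured LLM conversation agent.
LOCAL_INTENTS = {
    "turn on the lights in the living room": "light.turn_on(living_room)",
}

def query_llm(text: str) -> str:
    # Placeholder for the LLM-based conversation agent.
    return f"LLM response to {text!r}"

def handle_command(text: str) -> str:
    local = LOCAL_INTENTS.get(text.strip().lower())
    if local is not None:
        return f"local: {local}"   # handled locally, LLM never called
    return f"llm: {query_llm(text)}"   # fallback only when no local match
```

Since 2025.3.0b0, the observed behavior is as if the local lookup were skipped entirely and every command went straight to the fallback branch.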
What version of Home Assistant Core has the issue?
core-2025.3.0b0
Hey there @balloob, @synesthesiam, mind taking a look at this issue as it has been labeled with an integration (assist_pipeline) you are listed as a code owner for? Thanks!
Code owner commands
Code owners of assist_pipeline can trigger bot actions by commenting:
@home-assistant close Closes the issue.
@home-assistant rename Awesome new title Renames the issue.
@home-assistant reopen Reopen the issue.
@home-assistant unassign assist_pipeline Removes the current integration label and assignees on the issue, add the integration domain after the command.
@home-assistant add-label needs-more-information Add a label (needs-more-information, problem in dependency, problem in custom component) to the issue.
@home-assistant remove-label needs-more-information Remove a label (needs-more-information, problem in dependency, problem in custom component) on the issue.
What was the last working version of Home Assistant Core?
core-2025.2
What type of installation are you running?
Home Assistant OS
Integration causing the issue
assist_pipeline
Link to integration documentation on our website
https://www.home-assistant.io/integrations/assist_pipeline/
Diagnostics information
No particular logs related to this; it seems to fail "silently".
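To capture more detail, debug logging for the integrations involved can be enabled via the `logger` integration in `configuration.yaml` (the logger targets below are my assumption about which components are relevant):

```yaml
# Enable verbose logging for the Assist pipeline and conversation
# integrations to see where a command is routed.
logger:
  default: warning
  logs:
    homeassistant.components.assist_pipeline: debug
    homeassistant.components.conversation: debug
```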
Example YAML snippet
Anything in the logs that might be useful for us?
Additional information
No response