Replies: 1 comment
-
Asking an LLM what model it is was never a reliable way to verify it. You can turn on debug mode and inspect the API responses; the `model` field confirms your model config: [CopilotChat] data: {"choices":[{"index":0,"delta":{"content":" with your programming project today?","role":"assistant"}}],"created":1741672965,"id":"03bf4ada-a952-49e2-bdc6-adf1620011cd","model":"claude-3.7-sonnet"} Otherwise, most AI interfaces simply hardcode the model's identity into the default system prompt. For example, Claude: https://docs.anthropic.com/en/release-notes/system-prompts#feb-24th-2025 I could technically hardcode it as well, but I don't see what purpose that would serve other than wasting tokens.
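The verification step above can be sketched in a few lines: parse the streaming `data:` line from the debug log as JSON and read its `model` field. This is a minimal illustration, not CopilotChat code; the helper name is hypothetical.

```python
import json

def model_from_sse_line(line: str):
    """Extract the "model" field from a streaming API 'data:' line.

    Hypothetical helper for illustration; returns None if the line
    is not an SSE data line.
    """
    prefix = "data: "
    if not line.startswith(prefix):
        return None
    payload = json.loads(line[len(prefix):])
    return payload.get("model")

# Sample line taken from the debug output quoted above.
sample = ('data: {"choices":[{"index":0,"delta":{"content":" with your '
          'programming project today?","role":"assistant"}}],'
          '"created":1741672965,'
          '"id":"03bf4ada-a952-49e2-bdc6-adf1620011cd",'
          '"model":"claude-3.7-sonnet"}')
print(model_from_sse_line(sample))  # claude-3.7-sonnet
```

Checking this field in the raw response is trustworthy in a way the model's self-report is not, since it comes from the serving API rather than the model's training data or system prompt.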
-
Steps To Reproduce:
Add and authorize the GitHub Copilot extension "models"
Change the AI agent and model via the command line as shown below:
Expected Result:
Copilot Chat should indicate that it is using claude-3.7-sonnet