Replies: 7 comments
-
Is it only the first question, or subsequent questions as well? It also depends on the model; gpt-4o is a lot faster than Claude, for example. On the first question the plugin needs to fetch models and agents and check policies, but subsequent questions cache this data, so they should be fine.
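To illustrate what that caching amounts to conceptually, here is a minimal Python sketch; the function and the model list are hypothetical stand-ins, not the plugin's actual code:

```python
import functools

@functools.lru_cache(maxsize=1)
def fetch_models():
    # Hypothetical stand-in for the one-time network request that
    # lists available models; the real plugin does this over HTTP.
    return ["gpt-4o", "claude-3.5-sonnet"]

models = fetch_models()        # first call pays the network cost
models_again = fetch_models()  # served from cache, no extra latency
```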
-
It's all questions, first and subsequent. The default gpt-4o model is selected.
-
Well, are you opening a big file? The content of the file needs to be sent every time a question is asked (the default behaviour is to send the whole buffer), and the history also needs to be sent every time; that's how most LLM chat clients work. 10 seconds is still a lot though, so it could also be curl related. Can you output what `:checkhealth` reports for the plugin?
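To make the cost concrete, here is a rough back-of-the-envelope sketch; this is plain Python, not the plugin's actual request code, and the ~4 characters-per-token figure is only a common approximation:

```python
def estimate_prompt_size(buffer_text: str, history: list[str]) -> dict:
    # Rough estimate of how much text is shipped on every question
    # when the whole buffer plus the chat history are resent.
    total_chars = len(buffer_text) + sum(len(m) for m in history)
    return {
        "chars": total_chars,
        "approx_tokens": total_chars // 4,  # ~4 chars/token heuristic
    }

# Example: a 200 KB buffer plus a modest history is already ~50k tokens,
# which alone can explain multi-second round trips.
print(estimate_prompt_size("x" * 200_000, ["question", "long answer " * 500]))
```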
-
I merged some optimizations, but I'm still curious about your curl version and the size of the file.
-
Here is the checkhealth output:

CopilotChat.nvim ~
CopilotChat.nvim [core] ~
CopilotChat.nvim [commands] ~
CopilotChat.nvim [dependencies] ~
-
Can you try the latest canary? I added some status reporting for embedding files as well. Also, how big is the file again? A character count or line count will do. You could also try upgrading curl and see if it helps; 7.81 is very old.
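If it helps, a quick way to get those numbers is a plain Python snippet like the one below; the path is a hypothetical placeholder, so point it at the file you are actually opening:

```python
from pathlib import Path

path = Path("path/to/your/file")  # hypothetical placeholder
text = path.read_text(encoding="utf-8", errors="replace")
print(f"{len(text)} characters, {len(text.splitlines())} lines")
```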
-
Looks like upgrading to the latest curl (built from the GitHub source) doesn't change much.
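One way to separate network/curl latency from plugin overhead is to time a bare request using curl's built-in timing write-out. This is only a sketch: the host below is an assumption on my part, not something the plugin documents, so substitute whatever endpoint your client actually hits.

```python
import subprocess

# curl's -w write-out variables report where the time is spent.
fmt = "dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n"

result = subprocess.run(
    [
        "curl", "-s", "-o", "/dev/null", "-w", fmt,
        "https://api.githubcopilot.com",  # assumed host; adjust to your setup
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
# If total is already multiple seconds here, the slowness is network/TLS
# rather than the plugin; if it is fast, the overhead is elsewhere.
```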
-
Maybe there is a quick fix for it, but on the same machine, when I talk to Copilot (on a company account) via the web interface, it takes at most 2 seconds, even when it has a lot of context and searches company files.
With CopilotChat it takes ~10 seconds even for very simple questions where I don't ask it to analyze any code.
Is there a way to speed this up?