Feature add Add LlamaCppChatCompletionClient and llama-cpp #5326
base: main
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #5326 +/- ##
==========================================
- Coverage 76.09% 75.15% -0.95%
==========================================
Files 157 159 +2
Lines 9475 9595 +120
==========================================
+ Hits 7210 7211 +1
- Misses 2265 2384 +119
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Will be working on this today.
…on and error handling; add unit tests for functionality
@ekzhu I completed the tests, please have another look.
@microsoft-github-policy-service agree company="Microsoft"
In the interest of a smaller change set, let's focus on create and raise NotImplementedError in create_stream.
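A minimal sketch of that suggestion, assuming create_stream keeps the async-generator return type of ChatCompletionClient.create_stream (the parameter list shown here is illustrative, not the PR's final code):

```python
from typing import Any, AsyncGenerator, Sequence, Union

from autogen_core.models import ChatCompletionClient, CreateResult, LLMMessage


class LlamaCppChatCompletionClient(ChatCompletionClient):
    # ... create(), capabilities, and the other required methods omitted in this sketch ...

    def create_stream(
        self, messages: Sequence[LLMMessage], **kwargs: Any
    ) -> AsyncGenerator[Union[str, CreateResult], None]:
        # Streaming is deferred to a follow-up change; fail fast with a clear
        # message so callers fall back to create() instead.
        raise NotImplementedError("Streaming is not supported yet; use create().")
```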
… improve type hints in tests
Thanks! I think there is more work needed for this PR to be ready.
class LlamaCppChatCompletionClient(ChatCompletionClient):
    def __init__(
After some thought, I think mixing both Llama.__init__ and Llama.from_pretrained in our constructor is a bit of a footgun, because the argument list can now be either one set or the other, and it's confusing.

Let's mirror our constructor on Llama.__init__ so the two match. We can define a TypedDict for all of the Llama __init__ parameters, e.g., LlamaCppParams, and use Unpack[LlamaCppParams] as the type hint for the **kwargs in our constructor.

Note: we need to add model_info to our constructor.
def __init__(self, model_path: str, *, model_info: ModelInfo | None = None, **kwargs: Unpack[LlamaCppParams]) -> None:
Then, create a separate static method from_pretrained which mirrors the Llama.from_pretrained static method, with the same arguments:
@staticmethod
def from_pretrained(repo_id: str, filename: str, model_info: ModelInfo | None = None, additional_files=None, local_dir=None, local_dir_use_symlinks='auto', cache_dir=None, **kwargs: Unpack[LlamaCppParams]) -> Llama
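A rough sketch of the TypedDict approach described above (an assumption, not the PR's final code; only a handful of the keyword arguments accepted by llama_cpp.Llama.__init__ are listed, where the real LlamaCppParams would mirror that signature in full):

```python
from typing import Optional

from typing_extensions import TypedDict, Unpack

from autogen_core.models import ChatCompletionClient, ModelInfo


class LlamaCppParams(TypedDict, total=False):
    # A small subset of llama_cpp.Llama.__init__ keyword arguments, for illustration only.
    n_ctx: int
    n_gpu_layers: int
    seed: int
    verbose: bool


class LlamaCppChatCompletionClient(ChatCompletionClient):
    # Other required methods (create, create_stream, etc.) omitted in this sketch.
    def __init__(
        self,
        model_path: str,
        *,
        model_info: Optional[ModelInfo] = None,
        **kwargs: Unpack[LlamaCppParams],
    ) -> None:
        ...
```

This keeps the constructor's keyword arguments type-checked against what llama-cpp-python actually accepts, while leaving model_path and model_info as explicit, documented parameters.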
The more I think about it, I'm not sure this is the right thing to do: if llama-cpp changes the kwargs it accepts, the code will break when it otherwise would have been stable. What do you think?
return result

class LlamaCppChatCompletionClient(ChatCompletionClient):
We need proper API documentation, including 3 example code blocks. See other model clients' API docs for reference.
- 1 code block to show basic usage with a tool-calling model.
- 1 code block to show the from_pretrained method.
- 1 code block to show tool calling with Phi-4.
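As a starting point, a rough sketch of what the basic-usage example could look like (the constructor arguments and model path are assumptions here; the final documented API may differ):

```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient


async def main() -> None:
    # Hypothetical path to a local GGUF model file.
    client = LlamaCppChatCompletionClient(model_path="models/phi-4-q4.gguf")
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content)


asyncio.run(main())
```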
…pChatCompletionClient initialization and create methods
This pull request introduces the integration of the llama-cpp library into the autogen-ext package, with significant changes to the project dependencies and the implementation of a new chat completion client. The most important changes include updating the project dependencies, adding a new module for the LlamaCppChatCompletionClient, and implementing the client with various functionalities.

Project Dependencies:
python/packages/autogen-ext/pyproject.toml: Added llama-cpp-python as a new dependency under the llama-cpp section.

New Module:
python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py: Introduced the LlamaCppChatCompletionClient class and handled import errors with a descriptive message for missing dependencies (a sketch of this import guard appears at the end of this page).

Implementation of LlamaCppChatCompletionClient:
python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py: Added the LlamaCppChatCompletionClient class with methods to initialize the client, create chat completions, detect and execute tools, and handle streaming responses.

Why are these changes needed?
Related issue number
Checks
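The optional-dependency import guard mentioned under "New Module" above might look roughly like this; a sketch assuming the llama-cpp extra named in pyproject.toml, not the PR's actual __init__.py:

```python
# python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py (sketch)
try:
    from ._llama_cpp_completion_client import LlamaCppChatCompletionClient
except ImportError as e:
    # Re-raise with an actionable message pointing at the optional extra.
    raise ImportError(
        "Dependencies for llama-cpp are not installed. "
        "Install them with: pip install 'autogen-ext[llama-cpp]'"
    ) from e

__all__ = ["LlamaCppChatCompletionClient"]
```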