
Feature: Add LlamaCppChatCompletionClient and llama-cpp #5326

Open · aribornstein wants to merge 11 commits into base: main
Conversation

@aribornstein commented on Feb 2, 2025

This pull request introduces the integration of the llama-cpp library into the autogen-ext package, with significant changes to the project dependencies and the implementation of a new chat completion client. The most important changes include updating the project dependencies, adding a new module for the LlamaCppChatCompletionClient, and implementing the client with various functionalities.

Project Dependencies:

New Module:

Implementation of LlamaCppChatCompletionClient:

  • python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py:
    • Added the LlamaCppChatCompletionClient class with methods to initialize the client, create chat completions, detect and execute tools, and handle streaming responses.
    • Included detailed logging for debugging purposes and implemented methods to count tokens, track usage, and provide model information. (See the usage sketch below.)
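For orientation, here is a minimal usage sketch of the new client. It assumes the constructor shape discussed in the review below (a model_path plus llama-cpp keyword arguments such as n_ctx) and the standard autogen_core message types; the exact parameters may differ from the final implementation.

import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient


async def main() -> None:
    # Load a local GGUF model; extra keyword arguments are forwarded to llama-cpp.
    client = LlamaCppChatCompletionClient(model_path="model.gguf", n_ctx=4096)
    result = await client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result.content)


asyncio.run(main())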

Why are these changes needed?

Related issue number

Checks

codecov bot commented Feb 2, 2025

Codecov Report

Attention: Patch coverage is 0% with 120 lines in your changes missing coverage. Please review.

Project coverage is 75.15%. Comparing base (227b875) to head (8646d54).

Files with missing lines Patch % Lines
...t/models/llama_cpp/_llama_cpp_completion_client.py 0.00% 115 Missing ⚠️
...n-ext/src/autogen_ext/models/llama_cpp/__init__.py 0.00% 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5326      +/-   ##
==========================================
- Coverage   76.09%   75.15%   -0.95%     
==========================================
  Files         157      159       +2     
  Lines        9475     9595     +120     
==========================================
+ Hits         7210     7211       +1     
- Misses       2265     2384     +119     
Flag Coverage Δ
unittests 75.15% <0.00%> (-0.95%) ⬇️


@aribornstein (Author) commented:

Will be working on this today.

Commit: …on and error handling; add unit tests for functionality
@aribornstein (Author) commented:

@ekzhu I completed the tests, please have another look.

@aribornstein (Author) commented:

@microsoft-github-policy-service agree company="Microsoft"


@ekzhu (Collaborator) left a comment:

In the interest of a smaller change set, let's focus on create and raise NotImplementedError in create_stream.
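A sketch of what this suggestion could look like is below. The signature is simplified (the real ChatCompletionClient.create_stream in autogen_core takes additional keyword arguments such as tools and a cancellation token), and the surrounding class is only a stand-in; the point is simply to raise NotImplementedError until streaming is supported.

from typing import Any, AsyncGenerator, Sequence, Union

from autogen_core.models import CreateResult, LLMMessage


class _LlamaCppClientSketch:  # stand-in for LlamaCppChatCompletionClient; other methods elided
    async def create_stream(
        self, messages: Sequence[LLMMessage], **kwargs: Any
    ) -> AsyncGenerator[Union[str, CreateResult], None]:
        # Keep the change set small: fail loudly until streaming is implemented.
        raise NotImplementedError("create_stream is not supported yet; use create() instead.")
        yield  # unreachable, but makes this an async generator so the error surfaces on first iteration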

@ekzhu (Collaborator) left a comment:

Thanks! I think there is more work needed for this PR to be ready.



class LlamaCppChatCompletionClient(ChatCompletionClient):
    def __init__(
@ekzhu (Collaborator) commented on this code:

After some thought, I think mixing both Llama.__init__ and Llama.from_pretrained in our constructor is a bit of a footgun, because the argument list can then be either one set of parameters or the other, which is confusing.

Let's mirror our constructor with Llama.__init__ so the constructors match. We can define a TypedDict for all the Llama __init__ parameters, e.g., LlamaCppParams, and use Unpack[LlamaCppParams] as the type hint for the **kwargs in our constructor.

Note: we need to add model_info to our constructor.

def __init__(self, model_path: str, *, model_info: ModelInfo | None = None, **kwargs: Unpack[LlamaCppParams]) -> None:

Then, create a separate static method from_pretrained which mirrors the Llama.from_pretrained static method, with the same arguments:

@staticmethod
def from_pretrained(repo_id: str, filename: str, model_info: ModelInfo | None = None, additional_files=None, local_dir=None, local_dir_use_symlinks='auto', cache_dir=None, **kwargs: Unpack[LlamaCppParams]) -> Llama:
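Putting these two suggestions together, a rough sketch (not the final implementation) might look like the following. The LlamaCppParams fields shown are an illustrative subset of llama_cpp.Llama.__init__'s keyword arguments, and the remaining ChatCompletionClient methods are elided.

from __future__ import annotations

from typing import TypedDict

from typing_extensions import Unpack  # typing.Unpack on Python >= 3.12

from autogen_core.models import ChatCompletionClient, ModelInfo
from llama_cpp import Llama


class LlamaCppParams(TypedDict, total=False):
    # Illustrative subset of llama_cpp.Llama.__init__ keyword arguments.
    n_ctx: int
    n_gpu_layers: int
    seed: int
    verbose: bool


class LlamaCppChatCompletionClient(ChatCompletionClient):
    def __init__(self, model_path: str, *, model_info: ModelInfo | None = None, **kwargs: Unpack[LlamaCppParams]) -> None:
        self._model_info = model_info
        self._llm = Llama(model_path=model_path, **kwargs)

    @staticmethod
    def from_pretrained(
        repo_id: str, filename: str, model_info: ModelInfo | None = None, **kwargs: Unpack[LlamaCppParams]
    ) -> "LlamaCppChatCompletionClient":
        # Would mirror Llama.from_pretrained; hub-download arguments omitted here for brevity.
        ...

    # Remaining ChatCompletionClient abstract methods (create, count_tokens, ...) elided.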

@aribornstein (Author) replied:

The more I think about it, I'm not sure this is the right thing to do: if llama-cpp changes the kwargs it accepts, our code will break when it otherwise would have stayed stable. What do you think?

    return result


class LlamaCppChatCompletionClient(ChatCompletionClient):
@ekzhu (Collaborator) commented on this code:

We need proper API documentation, including three example code blocks. See the other model clients' API docs for reference:

1 code block to show basic usage with a tool-calling model.
1 code block to show the from_pretrained method (a rough sketch follows below).
1 code block to show tool calling with Phi-4.
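As a rough illustration only, the requested from_pretrained example might look something like this; the repo and file names are placeholders and the final signature may differ.

from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient

# Placeholder Hugging Face repo and GGUF filename; substitute a real tool-calling model.
client = LlamaCppChatCompletionClient.from_pretrained(
    repo_id="your-org/your-model-GGUF",
    filename="your-model-Q4_K_M.gguf",
)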
