Merge pull request #2734 from hlohaus/16Feb
Improve tools support in OpenaiTemplate and GeminiPro
hlohaus authored Feb 21, 2025
2 parents c3ed6d0 + 9ebdadd commit f989b52
Showing 39 changed files with 485 additions and 259 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -66,3 +66,4 @@ bench.py
to-reverse.txt
g4f/Provider/OpenaiChat2.py
generated_images/
projects/windows/
95 changes: 95 additions & 0 deletions docs/pydantic_ai.md
@@ -0,0 +1,95 @@
# PydanticAI Integration with G4F Client

This guide explains how to integrate PydanticAI with the G4F client to create an agent that interacts with a language model. With this setup, you can apply the patch that lets PydanticAI agents use G4F models, enable debug logging, and run simple agent-based interactions synchronously. Note, however, that tool calls within AI requests are currently **not fully supported** in this environment.

## Requirements

Before starting, make sure you have the following Python dependencies installed:

- `g4f`: A client that interfaces with various LLMs.
- `pydantic_ai`: An agent framework for building LLM applications, built on Pydantic.

### Installation

To install these dependencies, you can use `pip`:

```bash
pip install g4f pydantic_ai
```

## Step-by-Step Setup

### 1. Patch G4F to Use PydanticAI Models

To use PydanticAI models with G4F, you first need to patch the client by importing `apply_patch` from `g4f.tools.pydantic_ai` and calling it. The `api_key` parameter is optional: pass a key if you have one; otherwise the patch works without it.

```python
from g4f.tools.pydantic_ai import apply_patch

apply_patch(api_key="your_api_key_here") # Optional
```

If you don't have an API key, simply omit the `api_key` argument.
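
For instance, a minimal sketch of patching without a key (same import as above):

```python
from g4f.tools.pydantic_ai import apply_patch

apply_patch()  # no API key provided; the patch is applied all the same
```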

### 2. Enable Debug Logging

For troubleshooting and monitoring purposes, you may want to enable debug logging. This can be achieved by setting `g4f.debug.logging` to `True`.

```python
import g4f.debug

g4f.debug.logging = True
```

This will log detailed information about the internal processes and interactions.

### 3. Create a Simple Agent

Now you are ready to create a simple agent that can interact with the LLM. The agent is initialized with a model, and you can also define a system prompt. Here's an example where a basic agent is created with the model `g4f:Gemini:Gemini` and a simple system prompt:

```python
from pydantic_ai import Agent

# Define the agent
agent = Agent(
    'g4f:Gemini:Gemini',  # g4f:provider:model_name or g4f:model_name
    system_prompt='Be concise, reply with one sentence.',
)
```

### 4. Run the Agent Synchronously

Once the agent is set up, you can run it synchronously to interact with the LLM. The `run_sync` method sends a query to the LLM and returns the result.

```python
# Run the agent synchronously with a user query
result = agent.run_sync('Where does "hello world" come from?')

# Output the response
print(result.data)
```

In this example, the agent will send the system prompt along with the user query (`"Where does 'hello world' come from?"`) to the LLM. The LLM will process the request and return a concise answer.

### Example Output

```bash
The phrase "hello world" is commonly used in programming tutorials to demonstrate basic syntax and the concept of outputting text to the screen.
```

## Tool Calls and Limitations

**Important**: Tool calls (invoking external functions or APIs as part of the AI request itself) are **currently not fully supported**. If your system relies on calling specific external tools or functions during the conversation with the model, implement that functionality outside the agent and handle it before or after the agent's request.

For example, you can process your query or call external systems first and then pass the result to the agent, as sketched below.
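
Here is a minimal sketch of that workaround, assuming `apply_patch` has already been called as in step 1. The `lookup_weather` helper is a hypothetical placeholder for whatever external function or API you would otherwise expose as a tool; it runs before the agent, and its result is folded into the prompt:

```python
from pydantic_ai import Agent

def lookup_weather(city: str) -> str:
    # Hypothetical placeholder for an external API call that would
    # normally be exposed to the model as a tool.
    return f"Sunny, 22°C in {city}"

agent = Agent(
    'g4f:Gemini:Gemini',
    system_prompt='Answer using only the context provided in the message.',
)

# Call the external system first, then pass its result to the agent.
context = lookup_weather("Berlin")
result = agent.run_sync(
    f"Context: {context}\n\nQuestion: How is the weather in Berlin right now?"
)
print(result.data)
```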

## Conclusion

By following these steps, you have successfully integrated PydanticAI models into the G4F client, created an agent, and enabled debugging. This allows you to conduct conversations with the language model, pass system prompts, and retrieve responses synchronously.

### Notes
- The `api_key` parameter of `apply_patch` is optional; the client still works without an API key.
- Adjust the agent's `system_prompt` to suit the kind of conversation you want to have.
- **Tool calls within AI requests are not fully supported** at the moment. Use the agent's basic functionality for generating responses and handle external calls separately.

For further customization and advanced use cases, refer to the G4F and PydanticAI documentation.
4 changes: 4 additions & 0 deletions etc/unittest/__main__.py
@@ -1,5 +1,9 @@
import unittest

import g4f.debug

g4f.debug.version_check = False

from .asyncio import *
from .backend import *
from .main import *
6 changes: 5 additions & 1 deletion etc/unittest/main.py
@@ -6,8 +6,12 @@
DEFAULT_MESSAGES = [{'role': 'user', 'content': 'Hello'}]

class TestGetLastProvider(unittest.TestCase):

def test_get_latest_version(self):
current_version = g4f.version.utils.current_version
if current_version is not None:
self.assertIsInstance(g4f.version.utils.current_version, str)
self.assertIsInstance(g4f.version.utils.latest_version, str)
try:
self.assertIsInstance(g4f.version.utils.latest_version, str)
except VersionNotFoundError:
pass
4 changes: 2 additions & 2 deletions g4f/Provider/DDG.py
@@ -36,13 +36,13 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True

default_model = "gpt-4o-mini"
models = [default_model, "o3-mini", "claude-3-haiku-20240307", "meta-llama/Llama-3.3-70B-Instruct-Turbo", "mistralai/Mixtral-8x7B-Instruct-v0.1"]
models = [default_model, "o3-mini", "claude-3-haiku-20240307", "meta-llama/Llama-3.3-70B-Instruct-Turbo", "mistralai/Mistral-Small-24B-Instruct-2501"]

model_aliases = {
"gpt-4": "gpt-4o-mini",
"claude-3-haiku": "claude-3-haiku-20240307",
"llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"mixtral-8x7b": "mistralai/Mixtral-8x7B-Instruct-v0.1",
"mixtral-small-24b": "mistralai/Mistral-Small-24B-Instruct-2501",
}

last_request_time = 0
37 changes: 19 additions & 18 deletions g4f/Provider/PerplexityLabs.py
@@ -5,7 +5,8 @@

from ..typing import AsyncResult, Messages
from ..requests import StreamSession, raise_for_status
from ..providers.response import FinishReason
from ..errors import ResponseError
from ..providers.response import FinishReason, Sources
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin

API_URL = "https://www.perplexity.ai/socket.io/"
@@ -15,10 +16,11 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://labs.perplexity.ai"
working = True

default_model = "sonar-pro"
default_model = "r1-1776"
models = [
"sonar",
default_model,
"sonar-pro",
"sonar",
"sonar-reasoning",
"sonar-reasoning-pro",
]
@@ -32,19 +34,10 @@ async def create_async_generator(
**kwargs
) -> AsyncResult:
headers = {
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
"Accept": "*/*",
"Accept-Language": "de,en-US;q=0.7,en;q=0.3",
"Accept-Encoding": "gzip, deflate, br",
"Origin": cls.url,
"Connection": "keep-alive",
"Referer": f"{cls.url}/",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-site",
"TE": "trailers",
}
async with StreamSession(headers=headers, proxies={"all": proxy}) as session:
async with StreamSession(headers=headers, proxy=proxy, impersonate="chrome") as session:
t = format(random.getrandbits(32), "08x")
async with session.get(
f"{API_URL}?EIO=4&transport=polling&t={t}"
@@ -60,17 +53,22 @@ async def create_async_generator(
) as response:
await raise_for_status(response)
assert await response.text() == "OK"
async with session.get(
f"{API_URL}?EIO=4&transport=polling&t={t}&sid={sid}",
data=post_data
) as response:
await raise_for_status(response)
assert (await response.text()).startswith("40")
async with session.ws_connect(f"{WS_URL}?EIO=4&transport=websocket&sid={sid}", autoping=False) as ws:
await ws.send_str("2probe")
assert(await ws.receive_str() == "3probe")
await ws.send_str("5")
assert(await ws.receive_str())
assert(await ws.receive_str() == "6")
message_data = {
"version": "2.16",
"version": "2.18",
"source": "default",
"model": model,
"messages": messages
"messages": messages,
}
await ws.send_str("42" + json.dumps(["perplexity_labs", message_data]))
last_message = 0
@@ -82,12 +80,15 @@ async def create_async_generator(
await ws.send_str("3")
continue
try:
if last_message == 0 and model == cls.default_model:
yield "<think>"
data = json.loads(message[2:])[1]
yield data["output"][last_message:]
last_message = len(data["output"])
if data["final"]:
if data["citations"]:
yield Sources(data["citations"])
yield FinishReason("stop")
break
except Exception as e:
print(f"Error processing message: {message} - {e}")
raise RuntimeError(f"Message: {message}") from e
raise ResponseError(f"Message: {message}") from e
14 changes: 9 additions & 5 deletions g4f/Provider/PollinationsAI.py
@@ -122,9 +122,6 @@ async def create_async_generator(
except ModelNotFoundError:
if model not in cls.image_models:
raise

if not cache and seed is None:
seed = random.randint(0, 10000)

if model in cls.image_models:
async for chunk in cls._generate_image(
@@ -134,6 +131,7 @@ async def create_async_generator(
width=width,
height=height,
seed=seed,
cache=cache,
nologo=nologo,
private=private,
enhance=enhance,
@@ -165,11 +163,14 @@ async def _generate_image(
width: int,
height: int,
seed: Optional[int],
cache: bool,
nologo: bool,
private: bool,
enhance: bool,
safe: bool
) -> AsyncResult:
if not cache and seed is None:
seed = random.randint(9999, 99999999)
params = {
"seed": str(seed) if seed is not None else None,
"width": str(width),
@@ -182,9 +183,10 @@ async def _generate_image(
}
params = {k: v for k, v in params.items() if v is not None}
query = "&".join(f"{k}={quote_plus(v)}" for k, v in params.items())
url = f"{cls.image_api_endpoint}prompt/{quote_plus(prompt)}?{query}"
prefix = f"{model}_{seed}" if seed is not None else model
url = f"{cls.image_api_endpoint}prompt/{prefix}_{quote_plus(prompt)}?{query}"
yield ImagePreview(url, prompt)

async with ClientSession(headers=DEFAULT_HEADERS, connector=get_connector(proxy=proxy)) as session:
async with session.get(url, allow_redirects=True) as response:
await raise_for_status(response)
@@ -206,6 +208,8 @@ async def _generate_text(
seed: Optional[int],
cache: bool
) -> AsyncResult:
if not cache and seed is None:
seed = random.randint(9999, 99999999)
json_mode = False
if response_format and response_format.get("type") == "json_object":
json_mode = True
2 changes: 2 additions & 0 deletions g4f/Provider/PollinationsImage.py
@@ -28,6 +28,7 @@ async def create_async_generator(
width: int = 1024,
height: int = 1024,
seed: Optional[int] = None,
cache: bool = False,
nologo: bool = True,
private: bool = False,
enhance: bool = False,
@@ -41,6 +42,7 @@
width=width,
height=height,
seed=seed,
cache=cache,
nologo=nologo,
private=private,
enhance=enhance,
8 changes: 5 additions & 3 deletions g4f/Provider/hf/HuggingChat.py
@@ -8,7 +8,8 @@
from typing import AsyncIterator

try:
from curl_cffi.requests import Session, CurlMime
from curl_cffi.requests import Session
from curl_cffi import CurlMime
has_curl_cffi = True
except ImportError:
has_curl_cffi = False
@@ -39,14 +40,15 @@ class HuggingChat(AsyncAuthedProvider, ProviderModelMixin):
default_model = default_model
model_aliases = model_aliases
image_models = image_models
text_models = fallback_models

@classmethod
def get_models(cls):
if not cls.models:
try:
text = requests.get(cls.url).text
text = re.sub(r',parameters:{[^}]+?}', '', text)
text = re.search(r'models:(\[.+?\]),oldModels:', text).group(1)
text = re.sub(r',parameters:{[^}]+?}', '', text)
text = text.replace('void 0', 'null')
def add_quotation_mark(match):
return f'{match.group(1)}"{match.group(2)}":'
@@ -56,7 +58,7 @@ def add_quotation_mark(match):
cls.models = cls.text_models + cls.image_models
cls.vision_models = [model["id"] for model in models if model["multimodal"]]
except Exception as e:
debug.log(f"HuggingChat: Error reading models: {type(e).__name__}: {e}")
debug.error(f"{cls.__name__}: Error reading models: {type(e).__name__}: {e}")
cls.models = [*fallback_models]
return cls.models

16 changes: 9 additions & 7 deletions g4f/Provider/hf/HuggingFaceAPI.py
@@ -4,6 +4,7 @@
from ...typing import ImagesType
from ...requests import StreamSession, raise_for_status
from ...errors import ModelNotSupportedError
from ...providers.helper import get_last_user_message
from ..template.OpenaiTemplate import OpenaiTemplate
from .models import model_aliases, vision_models, default_vision_model
from .HuggingChat import HuggingChat
Expand All @@ -22,7 +23,7 @@ class HuggingFaceAPI(OpenaiTemplate):
vision_models = vision_models
model_aliases = model_aliases

pipeline_tag: dict[str, str] = {}
pipeline_tags: dict[str, str] = {}

@classmethod
def get_models(cls, **kwargs):
@@ -36,17 +37,17 @@ def get_models(cls, **kwargs):

@classmethod
async def get_pipline_tag(cls, model: str, api_key: str = None):
if model in cls.pipeline_tag:
return cls.pipeline_tag[model]
if model in cls.pipeline_tags:
return cls.pipeline_tags[model]
async with StreamSession(
timeout=30,
headers=cls.get_headers(False, api_key),
) as session:
async with session.get(f"https://huggingface.co/api/models/{model}") as response:
await raise_for_status(response)
model_data = await response.json()
cls.pipeline_tag[model] = model_data.get("pipeline_tag")
return cls.pipeline_tag[model]
cls.pipeline_tags[model] = model_data.get("pipeline_tag")
return cls.pipeline_tags[model]

@classmethod
async def create_async_generator(
@@ -73,10 +74,11 @@ async def create_async_generator(
if len(messages) > 6:
messages = messages[:3] + messages[-3:]
if calculate_lenght(messages) > max_inputs_lenght:
last_user_message = [{"role": "user", "content": get_last_user_message(messages)}]
if len(messages) > 2:
messages = [m for m in messages if m["role"] == "system"] + messages[-1:]
messages = [m for m in messages if m["role"] == "system"] + last_user_message
if len(messages) > 1 and calculate_lenght(messages) > max_inputs_lenght:
messages = [messages[-1]]
messages = last_user_message
debug.log(f"Messages trimmed from: {start} to: {calculate_lenght(messages)}")
async for chunk in super().create_async_generator(model, messages, api_base=api_base, api_key=api_key, max_tokens=max_tokens, images=images, **kwargs):
yield chunk