Merge pull request #2599 from hlohaus/25Jan
Add "Selecting a Provider" Documentation
hlohaus authored Jan 24, 2025
2 parents 7ff8264 + ef9dcfa commit dda3175
Showing 13 changed files with 176 additions and 30 deletions.
16 changes: 13 additions & 3 deletions README.md
@@ -34,9 +34,18 @@ docker pull hlohaus789/g4f
```

## 🆕 What's New
- **For comprehensive details on new features and updates, please refer to our** [Releases](https://github.com/xtekky/gpt4free/releases) **page**
- **Join our Telegram Channel:** 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
- **Join our Discord Group:** 💬🆕️ [https://discord.gg/5E39JUWUFa](https://discord.gg/5E39JUWUFa)

- **Explore the latest features and updates**
Find comprehensive details on our [Releases Page](https://github.com/xtekky/gpt4free/releases).

- **Stay updated with our Telegram Channel** 📨
Join us at [telegram.me/g4f_channel](https://telegram.me/g4f_channel).

- **Get support in our Discord Community** 🤝💻
Reach out for help in our [Support Group: discord.gg/qXA4Wf4Fsm](https://discord.gg/qXA4Wf4Fsm).

- **Subscribe to our Discord News Channel** 💬🆕️
Stay informed about updates via our [News Channel: discord.gg/5E39JUWUFa](https://discord.gg/5E39JUWUFa).

## 🔻 Site Takedown

@@ -218,6 +227,7 @@ The **Interference API** enables seamless integration with OpenAI's services thr
- **Documentation**: [Interference API Docs](docs/interference-api.md)
- **Endpoint**: `http://localhost:1337/v1`
- **Swagger UI**: Explore the OpenAPI documentation via Swagger UI at `http://localhost:1337/docs`
- **Provider Selection**: [How to Specify a Provider?](docs/selecting_a_provider.md)

This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.

8 changes: 7 additions & 1 deletion docs/interference-api.md
@@ -10,9 +10,9 @@
- [Basic Usage](#basic-usage)
- [With OpenAI Library](#with-openai-library)
- [With Requests Library](#with-requests-library)
- [Selecting a Provider](#selecting-a-provider)
- [Key Points](#key-points)
- [Conclusion](#conclusion)


## Introduction
The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively.
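As a quick illustration of that proxy behaviour, here is a minimal sketch that points the standard `openai` Python client at a locally running Interference API. It assumes the default `http://localhost:1337/v1` endpoint described in this guide; the API key value is only a placeholder.

```python
from openai import OpenAI

# Sketch: use the standard OpenAI client against the local Interference API.
# Assumes the API is running on the default endpoint; the api_key is a placeholder.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="secret")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```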
@@ -149,6 +149,12 @@ for choice in json_response:

```

## Selecting a Provider

**Provider Selection**: [How to Specify a Provider?](selecting_a_provider.md)

Selecting the right provider is a key step in configuring the G4F Interference API to suit your needs. Refer to the guide linked above for detailed instructions on choosing and specifying a provider.
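If you call the API through the `openai` Python package, the g4f-specific `provider` field can typically be passed via `extra_body`. The sketch below assumes a default local setup, a placeholder API key, and an illustrative provider ID:

```python
from openai import OpenAI

# Sketch: forward the g4f-specific "provider" field through the OpenAI client.
# The endpoint, placeholder api_key, and provider ID are assumptions for illustration.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="secret")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={"provider": "HuggingChat"},  # space-separate IDs to allow several providers
)
print(response.choices[0].message.content)
```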

## Key Points
- The Interference API translates OpenAI API requests into G4F provider requests.
- It can be run from either the PyPI package or the cloned repository.
132 changes: 132 additions & 0 deletions docs/selecting_a_provider.md
@@ -0,0 +1,132 @@

### Selecting a Provider

**The Interference API lets you choose which provider(s) handle a request. This is done with the `provider` parameter, which can be included alongside the `model` parameter in your API requests. Multiple providers can be specified as a single space-separated string of provider IDs.**

#### How to Specify a Provider

To select one or more providers, include the `provider` parameter in your request body. This parameter accepts a string of space-separated provider IDs. Each ID represents a specific provider available in the system.
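For orientation, the request body only gains this one extra field. The snippet below sketches the payload shape (the provider names are illustrative); complete, runnable request examples follow in the sections below.

```python
# Payload shape only (provider names are illustrative); full requests are shown below.
single_provider_payload = {
    "model": "gpt-4o-mini",
    "provider": "HuggingChat",                  # one provider ID
    "messages": [{"role": "user", "content": "Hello"}],
}

multiple_provider_payload = {
    "model": "gpt-4o-mini",
    "provider": "HuggingChat AnotherProvider",  # several IDs, space-separated
    "messages": [{"role": "user", "content": "Hello"}],
}
```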

#### Example: Getting a List of Available Providers

Use the following Python code to fetch the list of available providers:

```python
import requests

url = "http://localhost:1337/v1/providers"

response = requests.get(url, headers={"accept": "application/json"})
providers = response.json()

for provider in providers:
print(f"ID: {provider['id']}, URL: {provider['url']}")
```

#### Example: Getting Detailed Information About a Specific Provider

Retrieve details about a specific provider, including supported models and parameters:

```python
provider_id = "HuggingChat"
url = f"http://localhost:1337/v1/providers/{provider_id}"

response = requests.get(url, headers={"accept": "application/json"})
provider_details = response.json()

print(f"Provider ID: {provider_details['id']}")
print(f"Supported Models: {provider_details['models']}")
print(f"Parameters: {provider_details['params']}")
```

#### Example: Using a Single Provider in Text Generation

Specify a single provider (`HuggingChat`) in the request body:

```python
import requests

url = "http://localhost:1337/v1/chat/completions"

payload = {
"model": "gpt-4o-mini",
"provider": "HuggingChat",
"messages": [
{"role": "user", "content": "Write a short story about a robot"}
]
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()

if "choices" in data:
for choice in data["choices"]:
print(choice["message"]["content"])
else:
print("No response received")
```

#### Example: Using Multiple Providers in Text Generation

Specify multiple providers by separating their IDs with a space:

```python
import requests

url = "http://localhost:1337/v1/chat/completions"

payload = {
"model": "gpt-4o-mini",
"provider": "HuggingChat AnotherProvider",
"messages": [
{"role": "user", "content": "What are the benefits of AI in education?"}
]
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()

if "choices" in data:
for choice in data["choices"]:
print(choice["message"]["content"])
else:
print("No response received")
```

#### Example: Using a Provider for Image Generation

You can also use the `provider` parameter for image generation:

```python
import requests

url = "http://localhost:1337/v1/images/generate"

payload = {
"prompt": "a futuristic cityscape at sunset",
"model": "flux",
"provider": "HuggingSpace",
"response_format": "url"
}

response = requests.post(url, json=payload, headers={"Content-Type": "application/json"})
data = response.json()

if "data" in data:
for item in data["data"]:
print(f"Image URL: {item['url']}")
else:
print("No response received")
```

### Key Points About Providers
- **Flexibility:** Use the `provider` parameter to select one or more providers for your requests.
- **Discoverability:** Fetch available providers using the `/providers` endpoint.
- **Compatibility:** Check provider details to ensure support for the desired models and parameters.

By specifying providers in a space-separated string, you can efficiently target specific providers or combine multiple providers in a single request. This approach gives you fine-grained control over how your requests are processed.


---

[Go to Interference API Docs](interference-api.md)
1 change: 1 addition & 0 deletions g4f/Provider/CablyAI.py
@@ -4,6 +4,7 @@
from .needs_auth import OpenaiAPI

class CablyAI(OpenaiAPI):
label = __name__
url = "https://cablyai.com"
login_url = None
needs_auth = False
1 change: 1 addition & 0 deletions g4f/Provider/DeepInfraChat.py
@@ -4,6 +4,7 @@
from .needs_auth import OpenaiAPI

class DeepInfraChat(OpenaiAPI):
label = __name__
url = "https://deepinfra.com/chat"
login_url = None
needs_auth = False
4 changes: 2 additions & 2 deletions g4f/Provider/hf_space/BlackForestLabsFlux1Dev.py
@@ -16,9 +16,9 @@ class BlackForestLabsFlux1Dev(AsyncGeneratorProvider, ProviderModelMixin):

default_model = 'black-forest-labs-flux-1-dev'
default_image_model = default_model
image_models = [default_image_model]
model_aliases = {"flux-dev": default_model, "flux": default_model}
image_models = [default_image_model, *model_aliases.keys()]
models = image_models
model_aliases = {"flux-dev": default_model}

@classmethod
async def create_async_generator(
4 changes: 2 additions & 2 deletions g4f/Provider/hf_space/BlackForestLabsFlux1Schnell.py
@@ -17,9 +17,9 @@ class BlackForestLabsFlux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):

default_model = "black-forest-labs-flux-1-schnell"
default_image_model = default_model
image_models = [default_image_model]
model_aliases = {"flux-schnell": default_model, "flux": default_model}
image_models = [default_image_model, *model_aliases.keys()]
models = image_models
model_aliases = {"flux-schnell": default_model}

@classmethod
async def create_async_generator(
5 changes: 1 addition & 4 deletions g4f/Provider/hf_space/CohereForAI.py
@@ -1,7 +1,6 @@
from __future__ import annotations

import json
import uuid
from aiohttp import ClientSession, FormData

from ...typing import AsyncResult, Messages
@@ -24,12 +23,10 @@ class CohereForAI(AsyncGeneratorProvider, ProviderModelMixin):
"command-r",
"command-r7b-12-2024",
]

model_aliases = {
"command-r-plus": "command-r-plus-08-2024",
"command-r": "command-r-08-2024",
"command-r7b": "command-r7b-12-2024",

}

@classmethod
@@ -99,4 +96,4 @@ async def create_async_generator(
elif data["type"] == "title":
yield TitleGeneration(data["title"])
elif data["type"] == "finalAnswer":
break
break
6 changes: 3 additions & 3 deletions g4f/Provider/hf_space/VoodoohopFlux1Schnell.py
@@ -12,14 +12,14 @@
class VoodoohopFlux1Schnell(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://voodoohop-flux-1-schnell.hf.space"
api_endpoint = "https://voodoohop-flux-1-schnell.hf.space/call/infer"

working = True

default_model = "voodoohop-flux-1-schnell"
default_image_model = default_model
image_models = [default_image_model]
model_aliases = {"flux-schnell": default_model, "flux": default_model}
image_models = [default_image_model, *model_aliases.keys()]
models = image_models
model_aliases = {"flux-schnell": default_model}

@classmethod
async def create_async_generator(
9 changes: 5 additions & 4 deletions g4f/Provider/hf_space/__init__.py
@@ -1,5 +1,7 @@
from __future__ import annotations

import random

from ...typing import AsyncResult, Messages, ImagesType
from ...errors import ResponseError
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
@@ -15,15 +17,13 @@
class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://huggingface.co/spaces"
parent = "HuggingFace"

working = True

default_model = Qwen_Qwen_2_72B_Instruct.default_model
default_image_model = BlackForestLabsFlux1Dev.default_model
default_vision_model = Qwen_QVQ_72B.default_model
providers = [BlackForestLabsFlux1Dev, BlackForestLabsFlux1Schnell, VoodoohopFlux1Schnell, CohereForAI, Qwen_QVQ_72B, Qwen_Qwen_2_72B_Instruct, StableDiffusion35Large]



@classmethod
def get_parameters(cls, **kwargs) -> dict:
@@ -57,6 +57,7 @@ async def create_async_generator(
if not model and images is not None:
model = cls.default_vision_model
is_started = False
random.shuffle(cls.providers)
for provider in cls.providers:
if model in provider.model_aliases:
async for chunk in provider.create_async_generator(provider.model_aliases[model], messages, **kwargs):
8 changes: 4 additions & 4 deletions g4f/Provider/needs_auth/OpenaiChat.py
@@ -264,7 +264,7 @@ def create_messages(cls, messages: Messages, image_requests: ImageRequest = None
return messages

@classmethod
async def get_generated_image(cls, auth_result: AuthResult, session: StreamSession, element: dict, prompt: str = None) -> ImageResponse:
async def get_generated_image(cls, session: StreamSession, auth_result: AuthResult, element: dict, prompt: str = None) -> ImageResponse:
try:
prompt = element["metadata"]["dalle"]["prompt"]
file_id = element["asset_pointer"].split("file-service://", 1)[1]
@@ -452,7 +452,7 @@ async def create_authed(
await raise_for_status(response)
buffer = u""
async for line in response.iter_lines():
async for chunk in cls.iter_messages_line(session, line, conversation, sources):
async for chunk in cls.iter_messages_line(session, auth_result, line, conversation, sources):
if isinstance(chunk, str):
chunk = chunk.replace("\ue203", "").replace("\ue204", "").replace("\ue206", "")
buffer += chunk
@@ -500,7 +500,7 @@ def replacer(match):
yield FinishReason(conversation.finish_reason)

@classmethod
async def iter_messages_line(cls, session: StreamSession, line: bytes, fields: Conversation, sources: Sources) -> AsyncIterator:
async def iter_messages_line(cls, session: StreamSession, auth_result: AuthResult, line: bytes, fields: Conversation, sources: Sources) -> AsyncIterator:
if not line.startswith(b"data: "):
return
elif line.startswith(b"data: [DONE]"):
@@ -546,7 +546,7 @@ async def iter_messages_line(cls, session: StreamSession, line: bytes, fields: C
generated_images = []
for element in c.get("parts"):
if isinstance(element, dict) and element.get("content_type") == "image_asset_pointer":
image = cls.get_generated_image(session, cls._headers, element)
image = cls.get_generated_image(session, auth_result, element)
generated_images.append(image)
for image_response in await asyncio.gather(*generated_images):
if image_response is not None:
3 changes: 0 additions & 3 deletions requirements.txt
@@ -14,9 +14,6 @@ uvicorn
flask
brotli
beautifulsoup4
aiohttp_socks
pywebview
plyer
setuptools
cryptography
nodriver
9 changes: 5 additions & 4 deletions setup.py
@@ -48,11 +48,11 @@
'slim': [
"curl_cffi>=0.6.2",
"certifi",
"browser_cookie3",
"duckduckgo-search>=5.0" ,# internet.search
"beautifulsoup4", # internet.search and bing.create_images
"aiohttp_socks", # proxy
"pillow", # image
"cairosvg", # svg image
"werkzeug", "flask", # gui
"fastapi", # api
"uvicorn", # api
@@ -68,7 +68,8 @@
"webview": [
"pywebview",
"platformdirs",
"cryptography"
"plyer",
"cryptography",
],
"api": [
"loguru", "fastapi",
@@ -79,10 +80,10 @@
"werkzeug", "flask",
"beautifulsoup4", "pillow",
"duckduckgo-search>=5.0",
"browser_cookie3",
],
"search": [
"beautifulsoup4", "pillow",
"beautifulsoup4",
"pillow",
"duckduckgo-search>=5.0",
],
"local": [
