@pholz I've been testing other models in Azure AI Foundry. I realised that while DeepSeek-R1 works fine, other models will not work, because they use a different API endpoint format.

Phi-4 and other models are served from: `https://[deployment_name].services.ai.azure.com/models/`
GraphRAG auto-completes the `base_url` into the following:

- DeepSeek-R1: `https://[DeepSeek-R1-deploymentname].eastus2.models.ai.azure.com/v1/chat/completions`, which is the correct endpoint for accessing DeepSeek-R1 models (no API version needed).
- Phi-4 and other models: `https://[deployment_name].services.ai.azure.com/models/chat/completions`, which lacks the API version query string these models require. The full endpoint should be: `https://[deployment_name].services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview`
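For anyone reproducing this, a direct call against the Phi-4-style endpoint shows the difference. This is only a minimal sketch, assuming the `requests` package, a hypothetical `AZURE_AI_API_KEY` environment variable, and bearer-token auth with an explicit `model` field (the auth scheme and request body are my assumptions about the Azure AI Model Inference API, not taken from GraphRAG):

```python
import os
import requests

# Hypothetical placeholders: substitute your real deployment name and key.
base = "https://[deployment_name].services.ai.azure.com/models"
key = os.environ["AZURE_AI_API_KEY"]

resp = requests.post(
    f"{base}/chat/completions",
    # The query string GraphRAG currently omits for these models:
    params={"api-version": "2024-05-01-preview"},
    headers={
        "Authorization": f"Bearer {key}",  # assumed auth scheme
        "Content-Type": "application/json",
    },
    json={
        "model": "Phi-4",  # assumed: selects the deployed model
        "messages": [{"role": "user", "content": "ping"}],
    },
)
print(resp.status_code)
print(resp.json())
```

Dropping the `params` argument should reproduce the failure GraphRAG hits with its auto-completed URL.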
Seeking help from the GraphRAG team to add support for these Azure AI Model Inference models. Appreciate any workaround!
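Until there is native support, one possible stopgap (untested, just a sketch) is a tiny local pass-through proxy that re-appends the missing query string, with GraphRAG's `base_url` pointed at the proxy instead of Azure. The port, env var name, and auth scheme below are all my own choices, not GraphRAG or Azure conventions:

```python
# Sketch of a local pass-through proxy that adds the missing api-version
# query string. Untested; all names here are assumptions.
import os
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "https://[deployment_name].services.ai.azure.com"  # placeholder
API_VERSION = "2024-05-01-preview"
API_KEY = os.environ["AZURE_AI_API_KEY"]  # hypothetical env var

class AddApiVersionProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Forward to Azure with the query string GraphRAG omits.
        # (Assumes the incoming path has no query string of its own.)
        url = f"{TARGET}{self.path}?api-version={API_VERSION}"
        req = urllib.request.Request(
            url,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                status, payload = resp.status, resp.read()
        except urllib.error.HTTPError as err:
            status, payload = err.code, err.read()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Point GraphRAG's base_url at http://localhost:8787 instead of Azure.
    HTTPServer(("localhost", 8787), AddApiVersionProxy).serve_forever()
```

You would then set the model's `base_url` to `http://localhost:8787` in settings.yaml (this assumes GraphRAG sends plain POSTs to `/chat/completions`, which matches the auto-completed URLs above).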
yueqianh changed the title from "Azure AI Model Inference (not Azure OpenAI) models do not work in GraphRAG" to "[Bug]: Azure AI Model Inference (not Azure OpenAI) models do not work in GraphRAG" on Feb 10, 2025.
Originally posted by @yueqianh in #1678