
[Bug]: Azure AI Model Inference (not Azure OpenAI) models do not work in GraphRAG #1688

Closed · yueqianh opened this issue Feb 10, 2025 · 0 comments


@pholz I've been testing other models in Azure AI Foundry. I realised that while DeepSeek-R1 works fine, other models fail because their API endpoints use a different format.

The base_url for each model in OpenAI Chat mode:

  • DeepSeek-R1: https://[DeepSeek-R1-deploymentname].eastus2.models.ai.azure.com/v1/
  • Phi-4 and other models: https://[deployment_name].services.ai.azure.com/models/

GraphRAG auto-completes the base_url into the following:

  • DeepSeek-R1: https://[DeepSeek-R1-deploymentname].eastus2.models.ai.azure.com/v1/chat/completions
    which is the correct endpoint for accessing DeepSeek-R1 models (no API version)
  • Phi-4 and other models: https://[deployment_name].services.ai.azure.com/models/chat/completions
    which lacks the api-version query string these models require. The full endpoint should be: https://[deployment_name].services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview (see the sketch below)
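For reference, a raw HTTP call illustrates the query string these endpoints expect. This is a minimal sketch with placeholder resource name, model id, and key handling (none of these values come from the original report), and the auth header may be `Authorization: Bearer <key>` rather than `api-key` depending on the deployment:

```python
import os
import requests

# Placeholder resource name; adjust to your deployment.
url = "https://my-resource.services.ai.azure.com/models/chat/completions"

resp = requests.post(
    url,
    params={"api-version": "2024-05-01-preview"},  # the piece GraphRAG's auto-completed URL omits
    headers={"api-key": os.environ["AZURE_AI_API_KEY"]},
    json={
        "model": "Phi-4",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
print(resp.status_code, resp.text)
```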

Seeking help from the GraphRAG team to add support for these Azure AI Model Inference models. Appreciate any workaround!
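One possible workaround, if you can control how the OpenAI client is constructed (a sketch only; the report does not show GraphRAG's config exposing this hook): the OpenAI Python client accepts a `default_query` mapping that it appends to every request URL, which can supply the missing api-version. Endpoint, model id, and key handling below are placeholders:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://my-resource.services.ai.azure.com/models/",
    api_key=os.environ["AZURE_AI_API_KEY"],
    # default_query is appended to every request, supplying the
    # api-version that the auto-completed endpoint otherwise lacks.
    default_query={"api-version": "2024-05-01-preview"},
)

resp = client.chat.completions.create(
    model="Phi-4",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```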

Originally posted by @yueqianh in #1678

yueqianh changed the title from "Azure AI Model Inference (not Azure OpenAI) models do not work in GraphRAG" to "[Bug]: Azure AI Model Inference (not Azure OpenAI) models do not work in GraphRAG" on Feb 10, 2025