  - AI
  - LLM
description: The ai-rag Plugin enhances LLM outputs with Retrieval-Augmented Generation (RAG), efficiently retrieving relevant documents to improve accuracy and contextual relevance in responses.
---
The `ai-rag` Plugin provides Retrieval-Augmented Generation (RAG) capabilities with LLMs. It facilitates the efficient retrieval of relevant documents or information from external data sources, which are used to enhance the LLM responses, thereby improving the accuracy and contextual relevance of the generated outputs.
The Plugin supports using [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search) services for generating embeddings and performing vector search.

## Attributes

| Name | Required | Type | Description |
|------|----------|------|-------------|
| embeddings_provider | True | object | Configurations of the embedding models provider. |
| embeddings_provider.azure_openai | True | object | Configurations of [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) as the embedding models provider. |
| embeddings_provider.azure_openai.endpoint | True | string | Azure OpenAI embedding model endpoint. |
| embeddings_provider.azure_openai.api_key | True | string | Azure OpenAI API key. |
| vector_search_provider | True | object | Configuration for the vector search provider. |
| vector_search_provider.azure_ai_search | True | object | Configuration for Azure AI Search. |
| vector_search_provider.azure_ai_search.endpoint | True | string | Azure AI Search endpoint. |
| vector_search_provider.azure_ai_search.api_key | True | string | Azure AI Search API key. |
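
Put together, a minimal `ai-rag` plugin configuration covering the attributes above might look like the following sketch; the endpoint URLs and API keys are placeholders, not working values:

```json
{
  "ai-rag": {
    "embeddings_provider": {
      "azure_openai": {
        "endpoint": "<azure-openai-embeddings-endpoint>",
        "api_key": "<azure-openai-api-key>"
      }
    },
    "vector_search_provider": {
      "azure_ai_search": {
        "endpoint": "<azure-ai-search-endpoint>",
        "api_key": "<azure-ai-search-api-key>"
      }
    }
  }
}
```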
## Request Body Format
The following fields must be present in the request body.
| Field | Type | Description |
|-------|------|-------------|
| ai_rag | object | Request body RAG specifications. |
| ai_rag.embeddings | object | Request parameters required to generate embeddings. Contents will depend on the API specification of the configured provider. |
| ai_rag.vector_search | object | Request parameters required to perform vector search. Contents will depend on the API specification of the configured provider. |

The following fields can be configured in `ai_rag.embeddings`:

| Name | Required | Type | Description |
|------|----------|------|-------------|
| input | True | string | Input text used to compute embeddings, encoded as a string. |
| user | False | string | A unique identifier representing your end-user, which can help in monitoring and detecting abuse. |
| encoding_format | False | string | The format to return the embeddings in. Can be either `float` or `base64`. Defaults to `float`. |
| dimensions | False | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
For other parameters please refer to the [Azure OpenAI embeddings documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#embeddings).
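
For instance, an `ai_rag.embeddings` object using the optional `encoding_format` field might look like this (values are illustrative):

```json
{
  "ai_rag": {
    "embeddings": {
      "input": "which service is good for devops",
      "encoding_format": "float",
      "dimensions": 1024
    }
  }
}
```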

The following fields can be configured in `ai_rag.vector_search`:

| Name | Required | Type | Description |
|------|----------|------|-------------|
| fields | True | string | Fields for the vector search. |
For other parameters please refer to the [Azure AI Search documentation](https://learn.microsoft.com/en-us/rest/api/searchservice/documents/search-post).
Example request body:

```json
{
  "ai_rag": {
    "embeddings": {
      "input": "which service is good for devops",
      "dimensions": 1024
    },
    "vector_search": {
      "fields": "contentVector"
    }
  }
}
```
## Example
To follow along with the example, create an [Azure account](https://portal.azure.com) and complete the following steps:
* In [Azure AI Foundry](https://oai.azure.com/portal), deploy a generative chat model, such as `gpt-4o`, and an embedding model, such as `text-embedding-3-large`. Obtain the API key and model endpoints.
* Follow [Azure's example](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/basic-vector-workflow/azure-search-vector-python-sample.ipynb) to prepare for a vector search in [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search) using Python. The example creates a search index called `vectest` with the desired schema and uploads the [sample data](https://github.com/Azure/azure-search-vector-samples/blob/main/data/text-sample.json), which contains 108 descriptions of various Azure services, so that the embeddings `titleVector` and `contentVector` can be generated from `title` and `content`. Complete all the setup before performing vector searches in Python.
* In [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search), [obtain the Azure vector search API key and the search service endpoint](https://learn.microsoft.com/en-us/azure/search/search-get-started-vector?tabs=api-key#retrieve-resource-information).
Save the API keys and endpoints to environment variables:
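
For example (the values shown are placeholders; substitute your own keys and endpoints — the variable names match those used in the route configuration below):

```shell
# Placeholder values — replace with your own Azure credentials and endpoints.
export AZ_OPENAI_API_KEY="<azure-openai-api-key>"
export AZ_EMBEDDINGS_ENDPOINT="<azure-openai-embeddings-endpoint>"
export AZ_CHAT_ENDPOINT="<azure-openai-chat-endpoint>"
export AZ_AI_SEARCH_KEY="<azure-ai-search-api-key>"
export AZ_AI_SEARCH_ENDPOINT="<azure-ai-search-endpoint>"
```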
:::note

You can fetch the `admin_key` from `config.yaml` and save it to an environment variable with the following command:
```bash
admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
```
:::
### Integrate with Azure for RAG-Enhanced Responses
The following example demonstrates how you can use the [`ai-proxy`](./ai-proxy.md) Plugin to proxy requests to Azure OpenAI LLM and use the `ai-rag` Plugin to generate embeddings and perform vector search to enhance LLM responses.
Create a Route as follows:
```shell
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "id": "ai-rag-route",
    "uri": "/rag",
    "plugins": {
      "ai-rag": {
        "embeddings_provider": {
          "azure_openai": {
            "endpoint": "'"$AZ_EMBEDDINGS_ENDPOINT"'",
            "api_key": "'"$AZ_OPENAI_API_KEY"'"
          }
        },
        "vector_search_provider": {
          "azure_ai_search": {
            "endpoint": "'"$AZ_AI_SEARCH_ENDPOINT"'",
            "api_key": "'"$AZ_AI_SEARCH_KEY"'"
          }
        }
      },
      "ai-proxy": {
        "provider": "openai",
        "auth": {
          "header": {
            "api-key": "'"$AZ_OPENAI_API_KEY"'"
          }
        },
        "model": "gpt-4o",
        "override": {
          "endpoint": "'"$AZ_CHAT_ENDPOINT"'"
        }
      }
    }
  }'
```
Send a POST request to the Route with the vector field name, embedding model dimensions, and an input prompt in the request body:
```shell
curl "http://127.0.0.1:9080/rag" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "ai_rag": {
      "vector_search": {
        "fields": "contentVector"
      },
      "embeddings": {
        "input": "Which Azure services are good for DevOps?",
        "dimensions": 1024
      }
    }
  }'
```
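
Equivalently, the request can be issued from Python. The sketch below only builds and prints the JSON payload; actually sending it (commented out) assumes a running APISIX instance listening on `127.0.0.1:9080`:

```python
import json

# Same payload as the curl example above.
payload = {
    "ai_rag": {
        "vector_search": {"fields": "contentVector"},
        "embeddings": {
            "input": "Which Azure services are good for DevOps?",
            "dimensions": 1024,
        },
    }
}
body = json.dumps(payload)
print(body)

# To send it against a running gateway (assumption: APISIX on 127.0.0.1:9080):
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:9080/rag",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```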
You should receive an `HTTP/1.1 200 OK` response similar to the following:
```json
{
  "choices": [
    {
      "content_filter_results": {
        ...
      },
      "finish_reason": "length",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "Here is a list of Azure services categorized along with a brief description of each based on the provided JSON data:\n\n### Developer Tools\n- **Azure DevOps**: A suite of services that help you plan, build, and deploy applications, including Azure Boards, Azure Repos, Azure Pipelines, Azure Test Plans, and Azure Artifacts.\n- **Azure DevTest Labs**: A fully managed service to create, manage, and share development and test environments in Azure, supporting custom templates, cost management, and integration with Azure DevOps.\n\n### Containers\n- **Azure Kubernetes Service (AKS)**: A managed container orchestration service based on Kubernetes, simplifying deployment and management of containerized applications with features like automatic upgrades and scaling.\n- **Azure Container Instances**: A serverless container runtime to run and scale containerized applications without managing the underlying infrastructure.\n- **Azure Container Registry**: A fully managed Docker registry service to store and manage container images and artifacts.\n\n### Web\n- **Azure App Service**: A fully managed platform for building, deploying, and scaling web apps, mobile app backends, and RESTful APIs with support for multiple programming languages.\n- **Azure SignalR Service**: A fully managed real-time messaging service to build and scale real-time web applications.\n- **Azure Static Web Apps**: A serverless hosting service for modern web applications using static front-end technologies and serverless APIs.\n\n### Compute\n- **Azure Virtual Machines**: Infrastructure-as-a-Service (IaaS) offering for deploying and managing virtual machines in the cloud.\n- **Azure Functions**: A serverless compute service to run event-driven code without managing infrastructure.\n- **Azure Batch**: A job scheduling service to run large-scale parallel and high-performance computing (HPC) applications.\n- **Azure Service Fabric**: A platform to build, deploy, and manage scalable and reliable microservices and container-based applications.\n- **Azure Quantum**: A quantum computing service to build and run quantum applications.\n- **Azure Stack Edge**: A managed edge computing appliance to run Azure services and AI workloads on-premises or at the edge.\n\n### Security\n- **Azure Bastion**: A fully managed service providing secure and scalable remote access to virtual machines.\n- **Azure Security Center**: A unified security management service to protect workloads across Azure and on-premises infrastructure.\n- **Azure DDoS Protection**: A cloud-based service to protect applications and resources from distributed denial-of-service (DDoS) attacks.\n\n### Databases\n",
        ...
      }
    }
  ],
  ...
}
```