diff --git a/en/community/docs-contribution.md b/en/community/docs-contribution.md
index 207542293..a7325f69f 100644
--- a/en/community/docs-contribution.md
+++ b/en/community/docs-contribution.md
@@ -13,7 +13,7 @@ We categorize documentation issues into two main types:
If you encounter errors while reading a document or wish to suggest modifications, please use the **"Edit on GitHub"** button located in the table of contents on the right side of the document page. Utilize GitHub's built-in online editor to make your changes, then submit a pull request with a concise description of your edits. Please format your pull request title as `Fix: Update xxx`. We'll review your submission and merge the changes if everything looks good.
-
+
Alternatively, you can post the document link on our [Issues page](https://github.com/langgenius/dify-docs/issues) with a brief description of the necessary modifications. We'll address these promptly upon receipt.
diff --git a/en/development/models-integration/gpustack.md b/en/development/models-integration/gpustack.md
index 7749a3d55..80ce7881e 100644
--- a/en/development/models-integration/gpustack.md
+++ b/en/development/models-integration/gpustack.md
@@ -36,7 +36,7 @@ Using an LLM hosted on GPUStack as an example:
3. Click `Save` to deploy the model.
-
+
## Create an API Key
@@ -60,6 +60,6 @@ Using an LLM hosted on GPUStack as an example:
Click "Save" to use the model in the application.
-
+
For more information about GPUStack, please refer to the [GitHub repo](https://github.com/gpustack/gpustack).
diff --git a/en/development/models-integration/hugging-face.md b/en/development/models-integration/hugging-face.md
index c27524e8b..d527f3941 100644
--- a/en/development/models-integration/hugging-face.md
+++ b/en/development/models-integration/hugging-face.md
@@ -11,7 +11,7 @@ The specific steps are as follows:
2. Set the API key of Hugging Face ([obtain address](https://huggingface.co/settings/tokens)).
3. Select a model to enter the [Hugging Face model list page](https://huggingface.co/models?pipeline\_tag=text-generation\&sort=trending).
-
+
Dify supports accessing models on Hugging Face in two ways:
@@ -24,17 +24,17 @@ Dify supports accessing models on Hugging Face in two ways:
The hosted inference API is supported only when the model details page shows a "Hosted inference API" section on the right side. As shown in the figure below:
-
+
On the model details page, you can get the name of the model.
-
+
#### 2 Accessing the model in Dify
Select Hosted Inference API for Endpoint Type in `Settings > Model Provider > Hugging Face > Model Type`. As shown below:
-
+
The API Token is the API Key set at the beginning of the article, and the model name is the one obtained in the previous step.
@@ -44,26 +44,26 @@ API Token is the API Key set at the beginning of the article. The model name is
Inference Endpoint is only supported for models with the Inference Endpoints option under the Deploy button on the right side of the model details page. As shown below:
-
+
#### 2 Deploying the model
Click the model's Deploy button and select the Inference Endpoint option. If you have not yet bound a payment card, you will be prompted to bind one; just follow the process. After the card is bound, the following interface will appear: modify the configuration as required, then click Create Endpoint in the lower left corner to create an Inference Endpoint.
-
+
After the model is deployed, you can see the Endpoint URL.
-
+
#### 3 Accessing the model in Dify
Select Inference Endpoints for Endpoint Type in `Settings > Model Provider > Hugging Face > Model Type`. As shown below:
-
+
The API Token is the API Key set at the beginning of the article. `The name of the Text-Generation model can be arbitrary, but the name of the Embeddings model needs to be consistent with Hugging Face.` The Endpoint URL is the Endpoint URL obtained after the successful deployment of the model in the previous step.
-
+
> Note: The "User name / Organization Name" for Embeddings needs to be filled in according to your deployment method on Hugging Face's [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/guides/access), with either the "[User name](https://huggingface.co/settings/account)" or the "[Organization Name](https://ui.endpoints.huggingface.co/)".
diff --git a/en/development/models-integration/litellm.md b/en/development/models-integration/litellm.md
index b5d2dc450..e4be3d1f2 100644
--- a/en/development/models-integration/litellm.md
+++ b/en/development/models-integration/litellm.md
@@ -51,7 +51,7 @@ On success, the proxy will start running on `http://localhost:4000`
In `Settings > Model Providers > OpenAI-API-compatible`, fill in:
-
+
* Model Name: `gpt-4`
* Base URL: `http://localhost:4000`
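Before wiring the proxy into Dify, you can sanity-check it from the command line. The sketch below assumes the proxy from the previous step is listening on `http://localhost:4000`; the payload validation runs offline, while the actual request (commented out) requires the proxy to be up:

```shell
# Sanity check for the LiteLLM proxy configured above; assumes it is
# listening on http://localhost:4000 as described in the previous step.
BASE_URL="http://localhost:4000"
PAYLOAD='{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'

# Validate the JSON payload locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# Query the proxy through its OpenAI-compatible endpoint (proxy must be running):
# curl -s "$BASE_URL/v1/chat/completions" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

If the request succeeds, the same model name and base URL can be entered in the Dify form above.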
diff --git a/en/development/models-integration/replicate.md b/en/development/models-integration/replicate.md
index fb08b4243..8f7b59ed0 100644
--- a/en/development/models-integration/replicate.md
+++ b/en/development/models-integration/replicate.md
@@ -9,8 +9,8 @@ Specific steps are as follows:
3. Pick a model. Select the model under [Language models](https://replicate.com/collections/language-models) and [Embedding models](https://replicate.com/collections/embedding-models).
4. Add models in Dify's `Settings > Model Provider > Replicate`.
-
+
The API key is the API Key set in step 2. Model Name and Model Version can be found on the model details page:
-
+
diff --git a/en/development/models-integration/xinference.md b/en/development/models-integration/xinference.md
index 2511e7168..5832c63f2 100644
--- a/en/development/models-integration/xinference.md
+++ b/en/development/models-integration/xinference.md
@@ -33,7 +33,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu
Visit `http://127.0.0.1:9997`, select the model and specification you need to deploy, as shown below:
-
+
As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.
4. Obtain the model UID
diff --git a/en/guides/annotation/annotation-reply.md b/en/guides/annotation/annotation-reply.md
index a9d63b055..5cdee78f7 100644
--- a/en/guides/annotation/annotation-reply.md
+++ b/en/guides/annotation/annotation-reply.md
@@ -17,13 +17,13 @@ The annotated replies feature essentially provides another set of retrieval-enha
4. If no match is found, the question will continue through the regular process (passing to LLM or RAG).
5. Once the annotated replies feature is disabled, the system will no longer match responses from annotations.
-
Annotated Replies Workflow
+
Annotated Replies Workflow
### Enabling Annotated Replies in Prompt Orchestration
Enable the annotated replies switch by navigating to **“Orchestrate -> Add Features”**:
-
Enabling Annotated Replies in Prompt Orchestration
+
Enabling Annotated Replies in Prompt Orchestration
When enabling, you need to set the parameters for annotated replies, which include: Score Threshold and Embedding Model.
@@ -33,27 +33,27 @@ When enabling, you need to set the parameters for annotated replies, which inclu
Click save and enable, and the settings will take effect immediately. The system will generate embeddings for all saved annotations using the embedding model.
-
Setting Parameters for Annotated Replies
+
Setting Parameters for Annotated Replies
### Adding Annotations in the Conversation Debug Page
You can directly add or edit annotations on the model response information in the debug and preview pages.
-
Adding Annotated Replies
+
Adding Annotated Replies
Edit the response to the high-quality reply you need and save it.
-
Editing Annotated Replies
+
Editing Annotated Replies
Re-enter the same user question, and the system will use the saved annotation to reply to the user's question directly.
-
Replying to User Questions with Saved Annotations
+
Replying to User Questions with Saved Annotations
### Enabling Annotated Replies in Logs and Annotations
Enable the annotated replies switch by navigating to “Logs & Ann. -> Annotations”:
-
Enabling Annotated Replies in Logs and Annotations
+
Enabling Annotated Replies in Logs and Annotations
### Setting Parameters for Annotated Replies in the Annotation Backend
@@ -63,22 +63,22 @@ The parameters that can be set for annotated replies include: Score Threshold an
**Embedding Model:** This is used to vectorize the annotated text. Changing the model will regenerate the embeddings.
-
Setting Parameters for Annotated Replies
+
Setting Parameters for Annotated Replies
### Bulk Import of Annotated Q\&A Pairs
In the bulk import feature, you can download the annotation import template, edit the annotated Q\&A pairs according to the template format, and then import them in bulk.
-
Bulk Import of Annotated Q&A Pairs
+
Bulk Import of Annotated Q&A Pairs
### Bulk Export of Annotated Q\&A Pairs
Through the bulk export feature, you can export all saved annotated Q\&A pairs in the system at once.
-
Bulk Export of Annotated Q&A Pairs
+
Bulk Export of Annotated Q&A Pairs
### Viewing Annotation Hit History
In the annotation hit history feature, you can view the edit history of all hits on the annotation, the user's hit questions, the response answers, the source of the hits, the matching similarity scores, the hit time, and other information. You can use this information to continuously improve your annotated content.
-
Viewing Annotation Hit History
+
Viewing Annotation Hit History
diff --git a/en/guides/annotation/logs.md b/en/guides/annotation/logs.md
index 2adb428e4..f4829db6e 100644
--- a/en/guides/annotation/logs.md
+++ b/en/guides/annotation/logs.md
@@ -24,7 +24,7 @@ The logs currently do not include interaction records from the Prompt debugging
These annotations will be used for model fine-tuning in future versions of Dify to improve model accuracy and response style. The current preview version only supports annotations.
{% endhint %}
-
Mark logs to improve your app
+
Mark logs to improve your app
Clicking on a log entry will open the log details panel on the right side of the interface. In this panel, operators can annotate an interaction:
diff --git a/en/guides/application-orchestrate/app-toolkits/README.md b/en/guides/application-orchestrate/app-toolkits/README.md
index a825fc0cb..be03b1fd2 100644
--- a/en/guides/application-orchestrate/app-toolkits/README.md
+++ b/en/guides/application-orchestrate/app-toolkits/README.md
@@ -4,19 +4,19 @@ In **Application Orchestration**, click **Add Feature** to open the application
The application toolbox provides various additional features for Dify's [applications](../#application_type):
-
+
### Conversation Opening
In conversational applications, the AI will proactively say the first sentence or ask a question. You can edit the content of the opening, including the initial question. Using conversation openings can guide users to ask questions, explain the application background, and lower the barrier for initiating a conversation.
-
Conversation Opening
+
Conversation Opening
### Next Step Question Suggestions
Setting next step question suggestions allows the AI to generate 3 follow-up questions based on the previous conversation, guiding the next round of interaction.
-
+
### Citation and Attribution
diff --git a/en/guides/application-orchestrate/app-toolkits/moderation-tool.md b/en/guides/application-orchestrate/app-toolkits/moderation-tool.md
index 73c387212..5243597a3 100644
--- a/en/guides/application-orchestrate/app-toolkits/moderation-tool.md
+++ b/en/guides/application-orchestrate/app-toolkits/moderation-tool.md
@@ -2,7 +2,7 @@
In our interactions with AI applications, we often have stringent requirements in terms of content security, user experience, and legal regulations. At this point, we need the "Sensitive Word Review" feature to create a better interactive environment for end-users. On the orchestration page, click "Add Feature" and locate the "Content Review" toolbox at the bottom:
-
Content moderation
+
Content moderation
## Call the OpenAI Moderation API
@@ -10,16 +10,16 @@ OpenAI, along with most companies providing LLMs, includes content moderation fe
Now you can also directly call the OpenAI Moderation API on Dify; you can review either input or output content simply by entering the corresponding "preset reply."
-
Calling the OpenAI Moderation API
+
Calling the OpenAI Moderation API
## Keywords
Developers can customize the sensitive words they need to review, such as using "kill" as a keyword to perform an audit action when users input it. Set the preset reply content to "The content is violating usage policies." Then, when a user enters a text chunk containing "kill", the sensitive word review tool is triggered and the preset reply content is returned.
-
Configuring keyword review
+
Configuring keyword review
## Moderation Extension
Different enterprises often have their own mechanisms for sensitive word moderation. When developing their own AI applications, such as an internal knowledge base ChatBot, enterprises need to moderate the query content input by employees for sensitive words. For this purpose, developers can write an API extension based on their enterprise's internal sensitive word moderation mechanisms, which can then be called on Dify to achieve a high degree of customization and privacy protection for sensitive word review.
-
Moderation Extension
+
Moderation Extension
diff --git a/en/guides/application-orchestrate/chatbot-application.md b/en/guides/application-orchestrate/chatbot-application.md
index 37e2f3ee9..382ab9faa 100644
--- a/en/guides/application-orchestrate/chatbot-application.md
+++ b/en/guides/application-orchestrate/chatbot-application.md
@@ -22,7 +22,7 @@ Click the "Create Application" button on the homepage to create an application.
After the application is successfully created, it will automatically redirect to the application overview page. Click on the button on the left menu: **"Orchestrate"** to compose the application.
-
+
**2.1 Fill in Prompts**
@@ -38,17 +38,17 @@ For a better experience, we will add an opening dialogue: `"Hello, {{name}}. I'm
To add the opening dialogue, click the "Add Feature" button in the upper left corner and enable the "Conversation Opener" feature:
-
+
And then edit the opening remarks:
-
+
**2.2 Adding Context**
If an application wants to generate content based on private contextual conversations, it can use our [knowledge](../knowledge-base/) feature. Click the "Add" button in the context to add a knowledge base.
-
+
**2.3 Uploading Documentation File**
@@ -62,11 +62,11 @@ Select an LLM that supports file reading and enable the "Documentation" feature.
Enter user inputs on the right side and check the response content.
-
+
If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:
-
+
**Multiple Model Debugging:**
diff --git a/en/guides/application-orchestrate/creating-an-application.md b/en/guides/application-orchestrate/creating-an-application.md
index e026807f4..c39942576 100644
--- a/en/guides/application-orchestrate/creating-an-application.md
+++ b/en/guides/application-orchestrate/creating-an-application.md
@@ -12,17 +12,17 @@ When using Dify for the first time, you might be unfamiliar with creating applic
You can select "Studio" from the navigation menu, then choose "Create from Template" in the application list.
-
Create an application from a template
+
Create an application from a template
Select any template and click **Use this template.**
-
Dify application templates
+
Dify application templates
### Creating a New Application
If you need to create a blank application on Dify, you can select "Studio" from the navigation and then choose "Create from Blank" in the application list.
-
Create a blank application
+
Create a blank application
When creating an application for the first time, you might need to first understand the [basic concepts](./#application_type) of the different application types on Dify: Chatbot, Text Generator, Agent, Chatflow, and Workflow.
@@ -42,7 +42,7 @@ Dify DSL is an AI application engineering file standard defined by Dify.AI. The
If you have obtained a template (DSL file) from the community or others, you can choose "Import DSL File" from the studio. After importing, all configuration information of the original application will be loaded directly.
-
Create an application by importing a DSL file
+
Create an application by importing a DSL file
#### Import DSL file from URL
@@ -52,6 +52,6 @@ You can also import DSL files via a URL, using the following link format:
https://example.com/your_dsl.yml
```
-
Create an application by importing a DSL file
+
Create an application by importing a DSL file
> When importing a DSL file, the version will be checked. Significant discrepancies between DSL versions may lead to compatibility issues. For more details, please refer to [Application Management: Import](https://docs.dify.ai/guides/management/app-management#importing-application).
diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/README.md b/en/guides/application-publishing/launch-your-webapp-quickly/README.md
index 57b7be8ff..52028a30f 100644
--- a/en/guides/application-publishing/launch-your-webapp-quickly/README.md
+++ b/en/guides/application-publishing/launch-your-webapp-quickly/README.md
@@ -9,7 +9,7 @@ One of the benefits of creating AI applications with Dify is that you can publis
Toggle the **"In service / Disabled"** switch, and your Web App URL will take effect immediately and be publicly shared on the internet.
-
+
We have pre-set Web App UI for the following two types of applications:
@@ -29,7 +29,7 @@ We have pre-set Web App UI for the following two types of applications:
You can modify the language, color theme, copyright ownership, privacy policy link, and disclaimer by clicking the "setting" button.
-
+
Currently, Web App supports multiple languages: English, Simplified Chinese, Traditional Chinese, Portuguese, German, Japanese, Korean, Ukrainian, and Vietnamese. If you want more languages to be supported, you can submit an Issue on GitHub to seek support or submit a PR to contribute code.
diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md b/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md
index f4d730bc8..1016306b6 100644
--- a/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md
+++ b/en/guides/application-publishing/launch-your-webapp-quickly/conversation-application.md
@@ -13,29 +13,29 @@ Conversational applications engage in continuous dialogue with users in a questi
If you have set variable filling requirements during application orchestration, you will need to fill out the prompted information before entering the conversation window:
-
+
Fill in the necessary details and click the "Start Conversation" button to begin chatting. Hover over the AI's response to copy the conversation content, and provide "like" or "dislike" feedback.
-
+
### Creation, Pinning, and Deletion of Conversations
Click the "New Conversation" button to start a new conversation. Hover over a conversation to pin or delete it.
-
+
### Conversation Opener
If the "Conversation Opener" feature is enabled on the application orchestration page, the AI application will automatically initiate the first line of dialogue when a new conversation is created.
-
+
### Follow Up
If the "Follow-up" feature is enabled on the application orchestration page, the system will automatically generate 3 relevant question suggestions after the conversation:
-
+
### Speech-to-Text
@@ -43,7 +43,7 @@ If the "Speech-to-Text" feature is enabled during application orchestration, you
_Please ensure that your device environment is authorized to use the microphone._
-
+
### References and Attributions
diff --git a/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.md b/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.md
index 2cdcfe56d..39e4f500f 100644
--- a/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.md
+++ b/en/guides/application-publishing/launch-your-webapp-quickly/text-generator.md
@@ -15,7 +15,7 @@ Let's introduce them separately.
Enter the query content, click the run button, and the result will be generated on the right, as shown in the following figure:
-
+
In the generated results section, click the "Copy" button to copy the content to the clipboard. Click the "Save" button to save the content. You can see the saved content in the "Saved" tab. You can also "like" and "dislike" the generated content.
@@ -29,17 +29,17 @@ In the above scenario, the batch operation function is used, which is convenient
Click the "Run Batch" tab to enter the batch run page.
-
+
#### Step 2 Download the template and fill in the content
Click the **"Download the template here"** button to obtain the template file. Edit the file and fill in the required content, then save it as a `.csv` file. Finally, upload the completed file back to Dify.
-
+
#### Step 3 Upload the file and run
-
+
If you need to export the generated content, click the download button in the upper right corner to export it as a `.csv` file.
@@ -49,10 +49,10 @@ If you need to export the generated content, you can click the download "button"
Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.
-
+
### Generate more similar results
If the "More like this" feature is turned on in the App's Orchestrate page, clicking the "More like this" button in the web application generates content similar to the current result. As shown below:
-
+
diff --git a/en/guides/extension/api-based-extension/README.md b/en/guides/extension/api-based-extension/README.md
index fd9f62c62..6cac12884 100644
--- a/en/guides/extension/api-based-extension/README.md
+++ b/en/guides/extension/api-based-extension/README.md
@@ -7,7 +7,7 @@ Developers can extend module capabilities through the API extension module. Curr
Before extending module capabilities, prepare an API and an API Key for authentication, which can also be automatically generated by Dify. In addition to developing the corresponding module capabilities, follow the specifications below so that Dify can invoke the API correctly.
-
Add API Extension
+
Add API Extension
## API Specifications
@@ -194,11 +194,11 @@ The default port is 8000. The complete address of the API is: `http://127.0.0.1:
#### Configure this API in Dify.
-
+
#### Select this API extension in the App.
-
+
When debugging the App, Dify will request the configured API and send the following content (example):
@@ -230,7 +230,7 @@ Since Dify's cloud version can't access internal network API services, you can u
1. Visit the Ngrok official website at [https://ngrok.com](https://ngrok.com/), register, and download the Ngrok file.
-
+
2. After downloading, go to the download directory. Unzip the package and run the initialization script as instructed:
@@ -241,7 +241,7 @@ $ ./ngrok config add-authtoken <your-token>
3. Check the port of your local API service.
-
+
Run the following command to start:
@@ -251,7 +251,7 @@ $ ./ngrok http [port number]
Upon successful startup, you'll see something like the following:
-
+
4. Find the 'Forwarding' address, like the sample domain `https://177e-159-223-41-52.ngrok-free.app`, and use it as your public domain.
diff --git a/en/guides/extension/api-based-extension/cloudflare-workers.md b/en/guides/extension/api-based-extension/cloudflare-workers.md
index 7607dea09..d832e5a15 100644
--- a/en/guides/extension/api-based-extension/cloudflare-workers.md
+++ b/en/guides/extension/api-based-extension/cloudflare-workers.md
@@ -43,9 +43,9 @@ npm run deploy
After successful deployment, you will get a public internet address, which you can add in Dify as an API Endpoint. Please be careful not to omit the `endpoint` path.
-
Adding API Endpoint in Dify
+
Adding API Endpoint in Dify
-
Adding API Tool in the App edit page
+
Adding API Tool in the App edit page
## Other Logic TL;DR
diff --git a/en/guides/knowledge-base/connect-external-knowledge.md b/en/guides/knowledge-base/connect-external-knowledge.md
index d871fe5f2..caa1f8e35 100644
--- a/en/guides/knowledge-base/connect-external-knowledge.md
+++ b/en/guides/knowledge-base/connect-external-knowledge.md
@@ -15,7 +15,7 @@ The **Connect to External Knowledge Base** feature enables integration between t
* The Dify platform can directly obtain the text content hosted in the cloud service provider's knowledge base, so that developers do not need to repeatedly move the content to the knowledge base in Dify;
* The Dify platform can directly obtain the text content processed by algorithms in the self-built knowledge base. Developers only need to focus on the information retrieval mechanism of the self-built knowledge base and continuously optimize and improve the accuracy of information retrieval.
-
Principle of external knowledge base connection
+
Principle of external knowledge base connection
Here are the detailed steps for connecting to external knowledge:
@@ -33,13 +33,13 @@ Navigate to the **"Knowledge"** page, click **"External Knowledge API"** in the
* API Endpoint The URL of the external knowledge base API endpoint, e.g., api-endpoint/retrieval; refer to the [External Knowledge API](external-knowledge-api-documentation.md) for detailed instructions;
* API Key Connection key for the external knowledge, refer to the [External Knowledge API](external-knowledge-api-documentation.md) for detailed instructions.
-
Associate External Knowledge API
+
Associate External Knowledge API
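The expected request shape can be sketched with a quick command-line check. The endpoint URL, key, and knowledge ID below are placeholders, and the field names are assumptions modeled on the External Knowledge API document; adjust them to your own service:

```shell
# Hypothetical retrieval request against an external knowledge base endpoint;
# the URL, key, and knowledge_id are placeholders -- see the External
# Knowledge API document for the authoritative field definitions.
ENDPOINT="https://your-domain/api-endpoint"
API_KEY="your-external-knowledge-api-key"
PAYLOAD='{"knowledge_id": "your-kb-id", "query": "What is Dify?", "retrieval_setting": {"top_k": 2, "score_threshold": 0.5}}'

# Validate the JSON payload locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send the retrieval request (requires a reachable endpoint):
# curl -s -X POST "$ENDPOINT/retrieval" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```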
## 3. Connect to the External Knowledge Base
Go to the **"Knowledge"** page, click **"Connect to an External Knowledge Base"** under the Add Knowledge Base card to direct to the parameter configuration page.
-
Connect to the external knowledge base
+
Connect to the external knowledge base
Fill in the following parameters:
@@ -56,13 +56,13 @@ Fill in the following parameters:
**Score Threshold:** The similarity threshold for text chunk filtering; only text chunks that exceed the set score are retrieved. The default value is 0.5. A higher value requires greater similarity between the text and the question, so fewer text chunks will be retrieved and the results will be relatively more precise.
-
+
## 4. Test External Knowledge Base and Retrieval Results
After connecting to the external knowledge base, developers can simulate possible question keywords in **"Retrieval Testing"** to preview the text chunks that might be retrieved. If you are unsatisfied with the retrieval results, try modifying the **External Knowledge Base Settings** or adjusting the retrieval strategy of the external knowledge base.
-
Test external knowledge base connection and retrieval
+
Test external knowledge base connection and retrieval
## 5. Integrating External Knowledge Base in Applications
@@ -70,13 +70,13 @@ After connected with the external knowledge base, developers can simulate possib
Associate the external knowledge base in the orchestration page within Chatbot / Agent type applications.
-
Chatbot / Agent
+
Chatbot / Agent
* **Chatflow / Workflow** type application
Add a **"Knowledge Retrieval"** node and select the external knowledge base.
-
Chatflow / Workflow
+
Chatflow / Workflow
## 6. Manage External Knowledge
@@ -94,7 +94,7 @@ Navigate to the **"Knowledge"** page, external knowledge base cards will list an
The **"External Knowledge API"** and **"External Knowledge ID"** associated with the external knowledge base do not support modification. If modification is needed, please associate a new **"External Knowledge API"** and reset it.
-
+
### Connection Example
diff --git a/en/guides/knowledge-base/integrate-knowledge-within-application.md b/en/guides/knowledge-base/integrate-knowledge-within-application.md
index 94565df8e..bbbbce38e 100644
--- a/en/guides/knowledge-base/integrate-knowledge-within-application.md
+++ b/en/guides/knowledge-base/integrate-knowledge-within-application.md
@@ -24,7 +24,7 @@ In applications that utilize multiple knowledge bases, it is essential to config
The retriever scans all knowledge bases linked to the application for text content relevant to the user's question. The results are then consolidated. Below is the technical flowchart for the Multi-path Retrieval mode:
-
+
This method simultaneously queries all knowledge bases connected in **"Context"**, seeking relevant text chunks across multiple knowledge bases, collecting all content that aligns with the user's question, and ultimately applying the Rerank strategy to identify the most appropriate content to respond to the user. This retrieval approach offers more comprehensive and accurate results by leveraging multiple knowledge bases simultaneously.
@@ -60,7 +60,7 @@ While this method incurs some additional costs, it is more adept at handling com
Dify currently supports multiple Rerank models. To use external Rerank models, you'll need to provide an API Key. Enter the API Key for the Rerank model (such as Cohere, Jina AI, etc.) on the "Model Provider" page.
-
Configuring the Rerank model in the Model Provider
+
Configuring the Rerank model in the Model Provider
**Adjustable Parameters**
@@ -89,7 +89,7 @@ If the knowledge base is complex, making simple semantic or keyword matches insu
Here's how the knowledge base retrieval method affects Multi-path Retrieval:
-
+
3. **What should I do if I cannot adjust the “Weight Score” when referencing multiple knowledge bases and an error message appears?**
diff --git a/en/guides/knowledge-base/knowledge-and-documents-maintenance.md b/en/guides/knowledge-base/knowledge-and-documents-maintenance.md
index 609b1d06a..f041d0747 100644
--- a/en/guides/knowledge-base/knowledge-and-documents-maintenance.md
+++ b/en/guides/knowledge-base/knowledge-and-documents-maintenance.md
@@ -46,4 +46,4 @@ Dify Knowledge Base provides a comprehensive set of standard APIs. Developers ca
[maintain-dataset-via-api.md](knowledge-and-documents-maintenance/maintain-dataset-via-api.md)
{% endcontent-ref %}
-
Knowledge base API management
+
Knowledge base API management
diff --git a/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md b/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md
index f3de63c56..fa565957a 100644
--- a/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md
+++ b/en/guides/knowledge-base/knowledge-and-documents-maintenance/maintain-dataset-via-api.md
@@ -17,7 +17,7 @@ Key advantages include:
Navigate to the knowledge base page, and you can switch to the **API ACCESS** page from the left navigation. On this page, you can view the dataset API documentation provided by Dify and manage the credentials for accessing the dataset API in **API Keys**.
-
Knowledge API Document
+
Knowledge API Document
### API Requesting Examples
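As a hedged illustration of the request shape (the authoritative routes, fields, and base URL are documented on the API ACCESS page; the values below are placeholders):

```shell
# Hypothetical dataset API call; the base URL, key, and dataset name are
# placeholders -- consult the API documentation shown on the API ACCESS
# page for the authoritative routes and fields.
API_BASE="https://api.dify.ai/v1"
API_KEY="dataset-xxxxxxxx"
PAYLOAD='{"name": "customer-support-faq"}'

# Validate the JSON payload locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# Create an empty knowledge base (requires a valid dataset API key):
# curl -s -X POST "$API_BASE/datasets" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```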
diff --git a/en/guides/knowledge-base/retrieval-test-and-citation.md b/en/guides/knowledge-base/retrieval-test-and-citation.md
index 3f9cf4be9..c8417ac99 100644
--- a/en/guides/knowledge-base/retrieval-test-and-citation.md
+++ b/en/guides/knowledge-base/retrieval-test-and-citation.md
@@ -63,11 +63,11 @@ If you want to permanently modify the retrieval method for the knowledge base, g
When testing the knowledge base effect within the application, you can go to **Workspace -- Add Feature -- Citation and Attribution** to enable the citation attribution feature.
-
Enable citation and attribution feature
+
Enable citation and attribution feature
After enabling the feature, when the large language model responds to a question by citing content from the knowledge base, you can view specific citation paragraph information below the response content, including **original segment text, segment number, matching degree**, etc. Clicking **Link to Knowledge** above the cited segment allows quick access to the segment list in the knowledge base, facilitating developers in debugging and editing.
-
View citation information in response content
+
View citation information in response content
#### View Linked Applications in the Knowledge Base
diff --git a/en/guides/management/app-management.md b/en/guides/management/app-management.md
index fc73b5b0d..5d5bc6b6d 100644
--- a/en/guides/management/app-management.md
+++ b/en/guides/management/app-management.md
@@ -4,7 +4,7 @@
After creating an application, if you want to modify the application name or description, you can click "Edit info" in the upper left corner of the application to revise the application's icon, name, or description.
-
Edit App Info
+
Edit App Info
### Duplicating Application
@@ -17,13 +17,13 @@ Applications created in Dify support export in DSL format files, allowing you to
* Click "Export DSL" in the application menu button on the "Studio" page
* After entering the application's orchestration page, click "Export DSL" in the upper left corner
-
+
The DSL file does not include authorization information already filled in [Tool](../workflow/node/tools.md) nodes, such as API keys for third-party services.
If the environment variables contain variables of the `Secret` type, a prompt will appear during file export asking whether to allow the export of this sensitive information.
-
+
{% hint style="info" %}
Dify DSL is an AI application engineering file standard defined by Dify.AI in v0.6 and later. The file format is YML. This standard covers the basic description of the application, model parameters, orchestration configuration, and other information.
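As a rough sketch only — the field names below are illustrative assumptions, not the exact schema, so export an application to see the authoritative structure — a DSL file is a YAML document along these lines:

```yaml
# Hypothetical sketch of a Dify DSL file; field names are illustrative only.
app:
  name: translation-assistant
  description: Translates user input into English
  mode: workflow
model:
  provider: openai
  name: gpt-4o
  parameters:
    temperature: 0.7
workflow:
  graph:
    nodes: []        # orchestration configuration lives here
environment_variables: []  # Secret-type values are exported only if you allow it
```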
diff --git a/en/guides/management/personal-account-management.md b/en/guides/management/personal-account-management.md
index 4860382c5..3733d4455 100644
--- a/en/guides/management/personal-account-management.md
+++ b/en/guides/management/personal-account-management.md
@@ -8,7 +8,7 @@ The login methods supported by different versions of Dify are as follows:
> Note: For Dify Cloud Service, if the email associated with a GitHub or Google account is the same as the email used to log in with a verification code, the system will automatically link them as the same account, avoiding the need for manual binding and preventing duplicate registrations.
-
+
## Modifying Personal Information
@@ -27,7 +27,7 @@ You can modify the following details:
> Note: The password reset feature is only available in the Community Version.
-
+
### Login Methods
diff --git a/en/guides/management/subscription-management.md b/en/guides/management/subscription-management.md
index 8de5bf478..2b9e78875 100644
--- a/en/guides/management/subscription-management.md
+++ b/en/guides/management/subscription-management.md
@@ -10,7 +10,7 @@ After subscribing to Dify's paid services (Professional or Team plan), team owne
On the billing page, you can view the usage statistics for various team resources.
-
Team billing management
+
Team billing management
### Frequently Asked Questions
@@ -21,7 +21,7 @@ Team owners and administrators can navigate to **Settings** → **Billing**, the
* Upgrading from Professional to Team plan requires paying the difference for the current month and takes effect immediately.
* Downgrading from Team to Professional plan takes effect immediately.
-
Changing the paid plan
+
Changing the paid plan
Upon cancellation of the subscription plan, **the team will automatically transition to the Sandbox/Free plan at the end of the current billing cycle**. Subsequently, any team members and resources exceeding the Sandbox/Free plan limitations will become inaccessible.
diff --git a/en/guides/management/team-members-management.md b/en/guides/management/team-members-management.md
index c503e51c3..03bb2c9b9 100644
--- a/en/guides/management/team-members-management.md
+++ b/en/guides/management/team-members-management.md
@@ -14,14 +14,14 @@ Only team owners have permission to invite team members.
To add a member, the team owner can click on the avatar in the upper right corner, then select **"Members"** → **"Add"**. Enter the email address and assign member permissions to complete the process.
-
Assigning permissions to team members
+
Assigning permissions to team members
> For Community Edition, enabling email functionality requires the team owner to configure and activate the email service via system [environment variables](https://docs.dify.ai/getting-started/install-self-hosted/environments).
- If the invited member has not registered with Dify, they will receive an invitation email. They can complete registration by clicking the link in the email.
- If the invited member is already registered with Dify, permissions will be automatically assigned and **no invitation email will be sent**. The invited member can switch to the new workspace via the menu in the top right corner.
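For Community Edition, the mail-related settings referenced in the note above typically live in the deployment's `.env` file. The variable names below follow Dify's environment-variable documentation but should be verified against the current list before use:

```bash
# Illustrative .env fragment for enabling the email service (Community Edition).
# Verify names against the environment-variables documentation linked above.
MAIL_TYPE=smtp                      # or `resend`
MAIL_DEFAULT_SEND_FROM="Dify <no-reply@example.com>"
SMTP_SERVER=smtp.example.com
SMTP_PORT=465
SMTP_USERNAME=no-reply@example.com
SMTP_PASSWORD=your-password
SMTP_USE_TLS=true
```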
-
+
### Member Permissions
@@ -48,7 +48,7 @@ Only team owners have permission to remove team members.
To remove a member, click on the avatar in the upper right corner of the Dify team homepage, navigate to **"Settings"** → **"Members"**, select the member to be removed, and click **"Remove from team"**.
-
Removing a member
+
Removing a member
### Frequently Asked Questions
diff --git a/en/guides/model-configuration/README.md b/en/guides/model-configuration/README.md
index e24ca394f..22400fe80 100644
--- a/en/guides/model-configuration/README.md
+++ b/en/guides/model-configuration/README.md
@@ -6,7 +6,7 @@ description: Learn about the Different Models Supported by Dify.
Dify is a development platform for LLM-based AI applications. When using Dify for the first time, you need to go to **Settings --> Model Providers** to add and configure the LLM you are going to use.
-
Settings - Model Provider
+
Settings - Model Provider
Dify supports major model providers like OpenAI's GPT series and Anthropic's Claude series. Each model's capabilities and parameters differ, so select a model provider that suits your application's needs. **Obtain the API key from the model provider's official website before using it in Dify.**
@@ -39,13 +39,13 @@ Dify offers trial quotas for cloud service users to experiment with different mo
Dify automatically selects the default model based on usage. Configure this in `Settings > Model Provider`.
-
+
## Model Integration Settings
Choose your model in Dify's `Settings > Model Provider`.
-
+
Model providers fall into two categories:
@@ -73,4 +73,4 @@ Specific integration methods are not detailed here.
Once configured, these models are ready for application use.
-
+
diff --git a/en/guides/model-configuration/customizable-model.md b/en/guides/model-configuration/customizable-model.md
index 427097261..974f7862a 100644
--- a/en/guides/model-configuration/customizable-model.md
+++ b/en/guides/model-configuration/customizable-model.md
@@ -8,7 +8,7 @@ It is important to note that for custom models, each model integration requires
Unlike predefined models, custom vendor integration will always have the following two parameters, which do not need to be defined in the vendor YAML file.
-
+
In the previous section, we have learned that vendors do not need to implement `validate_provider_credential`. The Runtime will automatically call the corresponding model layer's `validate_credentials` based on the model type and model name selected by the user for validation.
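As a loose sketch of that model-layer hook — the exact base classes and signature come from Dify's plugin SDK, so treat the names below as assumptions — a `validate_credentials` implementation typically just probes the user-supplied credentials and raises on failure:

```python
# Illustrative only: we assume a validate_credentials(model, credentials) hook
# that the Runtime calls with the user's selected model name and credentials.
class MyCustomLargeLanguageModel:
    def validate_credentials(self, model: str, credentials: dict) -> None:
        """Raise an exception if the supplied credentials are unusable."""
        if not credentials.get("api_key"):
            raise ValueError("api_key is required")
        endpoint = credentials.get("endpoint_url", "")
        if not endpoint.startswith(("http://", "https://")):
            raise ValueError("endpoint_url must be an http(s) URL")
        # A real implementation would also issue a cheap test request here.
```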
diff --git a/en/guides/model-configuration/load-balancing.md b/en/guides/model-configuration/load-balancing.md
index da6b5c769..7647d5939 100644
--- a/en/guides/model-configuration/load-balancing.md
+++ b/en/guides/model-configuration/load-balancing.md
@@ -6,7 +6,7 @@ In enterprise-level large-scale model API calls, high concurrent requests can ex
You can enable this feature by navigating to **Model Provider -- Model List -- Configure Model Load Balancing** and adding multiple credentials (API keys) for the same model.
-
Model Load Balancing
+
Model Load Balancing
{% hint style="info" %}
Model load balancing is a paid feature. You can enable it by [subscribing to SaaS paid services](../../getting-started/cloud.md#subscription-plan) or purchasing the enterprise edition.
@@ -14,17 +14,17 @@ Model load balancing is a paid feature. You can enable it by [subscribing to Saa
The default API key is the credential added when initially configuring the model provider. You need to click **Add Configuration** to add different API keys for the same model to use the load balancing feature properly.
-
Configuring Load Balancing
+
Configuring Load Balancing
**At least one additional model credential** must be added to save and enable load balancing.
You can also **temporarily disable** or **delete** configured credentials.
-
+
Once configured, all models with load balancing enabled will be displayed in the model list.
-
Enabling Load Balancing
+
Enabling Load Balancing
{% hint style="info" %}
By default, load balancing uses the Round-robin strategy. If the rate limit is triggered, a 1-minute cooldown period will be applied.
@@ -32,4 +32,4 @@ By default, load balancing uses the Round-robin strategy. If the rate limit is t
You can also configure load balancing from **Add Model**, following the same process as above.
-
Configuring Load Balancing from Add Model
+
Configuring Load Balancing from Add Model
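The default behavior noted above — Round-robin rotation with a one-minute cooldown for rate-limited credentials — can be sketched as follows. This is a toy illustration of the strategy, not Dify's implementation:

```python
import time
from collections import deque

class RoundRobinBalancer:
    """Rotate through API keys; sideline a key for a cooldown period
    after it hits a rate limit."""
    def __init__(self, api_keys, cooldown_seconds=60):
        self.keys = deque(api_keys)
        self.cooldown_until = {k: 0.0 for k in api_keys}
        self.cooldown_seconds = cooldown_seconds

    def next_key(self, now=None):
        now = time.monotonic() if now is None else now
        for _ in range(len(self.keys)):
            key = self.keys[0]
            self.keys.rotate(-1)          # move to the back for round-robin order
            if self.cooldown_until[key] <= now:
                return key
        raise RuntimeError("all credentials are cooling down")

    def report_rate_limited(self, key, now=None):
        now = time.monotonic() if now is None else now
        self.cooldown_until[key] = now + self.cooldown_seconds
```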
diff --git a/en/guides/monitoring/README.md b/en/guides/monitoring/README.md
index 54ad0ca3f..84901d931 100644
--- a/en/guides/monitoring/README.md
+++ b/en/guides/monitoring/README.md
@@ -2,4 +2,4 @@
You can monitor and track the performance of your application in a production environment within the **Overview** section. In the data analytics dashboard, you can analyze various metrics such as usage costs, latency, user feedback, and performance in the production environment. By continuously debugging and iterating, you can continually improve your application.
-
Overview
+
Overview
diff --git a/en/guides/monitoring/analysis.md b/en/guides/monitoring/analysis.md
index c7776666f..0d72a52ff 100644
--- a/en/guides/monitoring/analysis.md
+++ b/en/guides/monitoring/analysis.md
@@ -2,7 +2,7 @@
The **Overview -- Data Analysis** section displays metrics such as usage, active users, and LLM (Large Language Model) invocation costs. This allows you to continuously improve the effectiveness, engagement, and cost-efficiency of your application operations. We will gradually provide more useful visualization capabilities, so please let us know what you need.
-
Overview—Data Analysis
+
Overview—Data Analysis
***
diff --git a/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md b/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md
index 4b40510a6..29db6948b 100644
--- a/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md
+++ b/en/guides/monitoring/integrate-external-ops-tools/integrate-langfuse.md
@@ -15,35 +15,35 @@ Introduction to Langfuse: [https://langfuse.com/](https://langfuse.com/)
1. Register and log in to Langfuse on the [official website](https://langfuse.com/)
2. Create a project in Langfuse. After logging in, click **New** on the homepage to create your own project. The **project** will be used to associate with **applications** in Dify for data monitoring.
-
Create a project in Langfuse
+
Create a project in Langfuse
Edit a name for the project.
-
Create a project in Langfuse
+
Create a project in Langfuse
3. Create project API credentials. In the left sidebar of the project, click **Settings** to open the settings.
-
Create project API credentials
+
Create project API credentials
In Settings, click **Create API Keys** to create project API credentials.
-
Create project API credentials
+
Create project API credentials
Copy and save the **Secret Key**, **Public Key**, and **Host**.
-
Get API Key configuration
+
Get API Key configuration
4. Configure Langfuse in Dify. Open the application you need to monitor, open **Monitoring** in the side menu, and select **Tracing app performance** on the page.
-
Configure Langfuse
+
Configure Langfuse
After clicking configure, paste the **Secret Key, Public Key, Host** created in Langfuse into the configuration and save.
-
Configure Langfuse
+
Configure Langfuse
Once successfully saved, you can view the status on the current page. If it shows as started, it is being monitored.
-
View configuration status
+
View configuration status
***
@@ -51,9 +51,9 @@ Once successfully saved, you can view the status on the current page. If it show
After configuration, debugging or production data of the application in Dify can be viewed in Langfuse.
-
Debugging Applications in Dify
+
Debugging Applications in Dify
-
Viewing application data in Langfuse
+
Viewing application data in Langfuse
***
diff --git a/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md b/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md
index 822b3aee3..2becea10f 100644
--- a/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md
+++ b/en/guides/monitoring/integrate-external-ops-tools/integrate-langsmith.md
@@ -18,39 +18,39 @@ For more details, please refer to [LangSmith](https://www.langchain.com/langsmit
Create a project in LangSmith. After logging in, click **New Project** on the homepage to create your own project. The **project** will be used to associate with **applications** in Dify for data monitoring.
-
Create a project in LangSmith
+
Create a project in LangSmith
Once created, you can view all created projects in the Projects section.
-
View created projects in LangSmith
+
View created projects in LangSmith
#### 3. Create Project Credentials
Find the project settings **Settings** in the left sidebar.
-
Project settings
+
Project settings
Click **Create API Key** to create project credentials.
-
Create a project API Key
+
Create a project API Key
Select **Personal Access Token** for subsequent API authentication.
-
Create an API Key
+
Create an API Key
Copy and save the created API key.
-
Copy API Key
+
Copy API Key
#### 4. Integrating LangSmith with Dify
Configure LangSmith in the Dify application. Open the application you need to monitor, open **Monitoring** in the side menu, and select **Tracing app performance** on the page.
-
Tracing app performance
+
Tracing app performance
After clicking configure, paste the **API Key** and **project name** created in LangSmith into the configuration and save.
-
Configure LangSmith
+
Configure LangSmith
{% hint style="info" %}
The configured project name needs to match the project set in LangSmith. If the project names do not match, LangSmith will automatically create a new project during data synchronization.
@@ -58,21 +58,21 @@ The configured project name needs to match the project set in LangSmith. If the
Once successfully saved, you can view the monitoring status on the current page.
-
View configuration status
+
View configuration status
### Viewing Monitoring Data in LangSmith
Once configured, the debug or production data from applications within Dify can be monitored in LangSmith.
-
Debugging Applications in Dify
+
Debugging Applications in Dify
When you switch to LangSmith, you can view detailed operation logs of Dify applications in the dashboard.
-
Viewing application data in LangSmith
+
Viewing application data in LangSmith
Detailed LLM operation logs through LangSmith will help you optimize the performance of your Dify application.
-
Viewing application data in LangSmith
+
Viewing application data in LangSmith
### Monitoring Data List
diff --git a/en/guides/tools/quick-tool-integration.md b/en/guides/tools/quick-tool-integration.md
index d0d7aef20..03d6d7edd 100644
--- a/en/guides/tools/quick-tool-integration.md
+++ b/en/guides/tools/quick-tool-integration.md
@@ -275,4 +275,4 @@ After the above steps are completed, we can see this tool on the frontend, and i
Because google\_search requires a credential, you also need to enter your credentials on the frontend before using it.
-
+
diff --git a/en/guides/tools/tool-configuration/bing.md b/en/guides/tools/tool-configuration/bing.md
index 2f95fd2fa..aad3f5b9f 100644
--- a/en/guides/tools/tool-configuration/bing.md
+++ b/en/guides/tools/tool-configuration/bing.md
@@ -12,7 +12,7 @@ Please apply for an API Key on the [Azure platform](https://www.microsoft.com/en
In the Dify navigation page, click `Tools > Azure > Authorize` to fill in the API Key.
-
+
## 3. Use the tool
diff --git a/en/guides/tools/tool-configuration/dall-e.md b/en/guides/tools/tool-configuration/dall-e.md
index 9cf1b0571..54cd9569b 100644
--- a/en/guides/tools/tool-configuration/dall-e.md
+++ b/en/guides/tools/tool-configuration/dall-e.md
@@ -12,7 +12,7 @@ Please apply for an API Key at [OpenAI Platform](https://platform.openai.com/),
In the Dify navigation page, click `Tools > DALL-E > Authorize` and fill in the API Key.
-
+
## 3. Use the tool
diff --git a/en/guides/tools/tool-configuration/google.md b/en/guides/tools/tool-configuration/google.md
index 66cc96e01..9b542e610 100644
--- a/en/guides/tools/tool-configuration/google.md
+++ b/en/guides/tools/tool-configuration/google.md
@@ -12,7 +12,7 @@ Please apply for an API Key on the [Serp](https://serpapi.com/dashboard).
In the Dify navigation page, click `Tools > Google > Go to authorize` to fill in the API Key.
-
+
## 3. Using the tool
diff --git a/en/guides/tools/tool-configuration/perplexity.md b/en/guides/tools/tool-configuration/perplexity.md
index 6f94656d8..e0c211099 100644
--- a/en/guides/tools/tool-configuration/perplexity.md
+++ b/en/guides/tools/tool-configuration/perplexity.md
@@ -12,7 +12,7 @@ Please apply for an API Key at [Perplexity](https://www.perplexity.ai/settings/a
In the Dify navigation page, click on `Tools > Perplexity > Go to authorize` to fill in the API Key.
-
+
## 3. Using the tool
@@ -22,10 +22,10 @@ You can use the Perplexity Search tool in the following application types.
Both Chatflow and Workflow applications support adding Perplexity tool nodes. Pass the user's input content through variables to the "Query" box in the Perplexity tool node, adjust the built-in parameters of the Perplexity tool as needed, and finally select the output content of the Perplexity tool node in the response box of the "End" node.
-
+
* **Agent applications**
Add the `Perplexity Search` tool in the Agent application, then enter relevant commands to invoke this tool.
-
+
diff --git a/en/guides/tools/tool-configuration/searchapi.md b/en/guides/tools/tool-configuration/searchapi.md
index 00dc2b123..b1d282a77 100644
--- a/en/guides/tools/tool-configuration/searchapi.md
+++ b/en/guides/tools/tool-configuration/searchapi.md
@@ -12,7 +12,7 @@ Please apply for an API Key at [SearchApi](https://www.searchapi.io/).
In the Dify navigation page, click on `Tools > SearchApi > Go to authorize` to fill in the API Key.
-
+
## 3. Using the tool
@@ -22,7 +22,7 @@ You can use the SearchApi tool in the following application types.
Both Chatflow and Workflow applications support adding `SearchApi` series tool nodes, providing four tools: Google Jobs API, Google News API, Google Search API, and YouTube Scraper API.
-
+
* **Agent applications**
diff --git a/en/guides/tools/tool-configuration/serper.md b/en/guides/tools/tool-configuration/serper.md
index 718828c70..a558835e0 100644
--- a/en/guides/tools/tool-configuration/serper.md
+++ b/en/guides/tools/tool-configuration/serper.md
@@ -12,7 +12,7 @@ Please apply for an API Key on the [Serper platform](https://serper.dev/signup).
In the Dify navigation page, click `Tools > Serper > Authorize` to fill in the API Key.
-
+
## 3. Using the tool
diff --git a/en/guides/tools/tool-configuration/siliconflow.md b/en/guides/tools/tool-configuration/siliconflow.md
index d43da3367..24737eb46 100644
--- a/en/guides/tools/tool-configuration/siliconflow.md
+++ b/en/guides/tools/tool-configuration/siliconflow.md
@@ -12,7 +12,7 @@ Create a new API Key on the [SiliconCloud API management page](https://cloud.sil
In the Dify tool page, click on `SiliconCloud > To Authorize` and fill in the API Key.
-
+
## 3. Using the Tool
@@ -20,12 +20,12 @@ In the Dify tool page, click on `SiliconCloud > To Authorize` and fill in the AP
Chatflow and Workflow applications both support adding `SiliconFlow` tool nodes. You can pass user input content to the SiliconFlow tool node's "prompt" and "negative prompt" boxes through [variables](https://docs.dify.ai/v/zh-hans/guides/workflow/variables), adjust the built-in parameters as needed, and finally select the output content (text, images, etc.) of the SiliconFlow tool node in the "end" node's reply box.
-
+
* **Agent Application**
In the Agent application, add the `Stable Diffusion` or `Flux` tool, and then send a picture description in the conversation box to call the tool to generate images.
-
+
-
+
diff --git a/en/guides/workflow/README.md b/en/guides/workflow/README.md
index b5ea838d3..0b4920f37 100644
--- a/en/guides/workflow/README.md
+++ b/en/guides/workflow/README.md
@@ -9,7 +9,7 @@ Dify workflows are divided into two types:
* **Chatflow**: Designed for conversational scenarios, including customer service, semantic search, and other conversational applications that require multi-step logic in response construction.
* **Workflow**: Geared towards automation and batch processing scenarios, suitable for high-quality translation, data analysis, content generation, email automation, and more.
-
+
To address the complexity of user intent recognition in natural language input, Chatflow provides question understanding nodes. Compared to Workflow, it adds support for Chatbot features such as conversation history (Memory), annotated replies, and Answer nodes.
diff --git a/en/guides/workflow/additional-features.md b/en/guides/workflow/additional-features.md
index bd0657fac..8c01e60d6 100644
--- a/en/guides/workflow/additional-features.md
+++ b/en/guides/workflow/additional-features.md
@@ -55,7 +55,7 @@ This section will mainly introduce the specific usage of the **File Upload** fea
**For application users:** Chatflow applications with file upload enabled will display a "paperclip" icon on the right side of the dialogue box. Click it to upload files and interact with the LLM.
-
Upload file
+
Upload file
**For application developers:**
@@ -84,7 +84,7 @@ The orchestration steps are as follows:
2. Add an LLM node, enable the VISION feature, and select the `sys.files` variable.
3. Add an "Answer" node at the end, filling in the output variable of the LLM node.
-
Enable vision
+
Enable vision
* **Mixed File Types**
@@ -97,7 +97,7 @@ If you want the application to have the ability to process both document files a
After the application user uploads both document files and images, document files are automatically diverted to the document extractor node, and image files are automatically diverted to the LLM node to achieve joint processing of files.
-
Mixed File Types
+
Mixed File Types
* **Audio and Video Files**
diff --git a/en/guides/workflow/bulletin.md b/en/guides/workflow/bulletin.md
index 2ae823dbe..f52932a7c 100644
--- a/en/guides/workflow/bulletin.md
+++ b/en/guides/workflow/bulletin.md
@@ -4,11 +4,11 @@ The image upload feature has been integrated into the more comprehensive [File U
* The image upload option in Chatflow’s “Features” has been removed and replaced by the new “File Upload” feature. Within the “File Upload” feature, you can select the image file type. Additionally, the image upload icon in the application dialog has been replaced with a file upload icon.
-
+
* The image upload option in Workflow’s “Features” and the `sys.files` [variable](variables.md) will be deprecated in the future. Both have been marked as `LEGACY`, and developers are encouraged to use custom file variables to add file upload functionality to Workflow applications.
-
+
### Why Replace the “Image Upload” Feature?
@@ -25,7 +25,7 @@ To enhance the information processing capabilities of your applications, we have
* The file upload feature allows files to be uploaded, parsed, referenced, and downloaded as file variables within Workflow applications.
* Developers can now easily build applications capable of understanding and processing complex tasks involving images, audio, and video.
-
+
We no longer recommend using the standalone “Image Upload” feature and instead suggest transitioning to the more comprehensive “File Upload” feature to improve the application experience.
@@ -38,11 +38,11 @@ We no longer recommend using the standalone “Image Upload” feature and inste
If you have already created Chatflow applications with the “Image Upload” feature enabled and activated the Vision feature in the LLM node, the system will automatically switch the feature, and it will not affect the application’s image upload capability. If you need to update and republish the application, select the file variable in the Vision variable selection box of the LLM node, clear the item from the checklist, and republish the application.\
-
+
If you wish to add the “Image Upload” feature to a Chatflow application, enable “File Upload” in the features and select only the “image” file type. Then enable the Vision feature in the LLM node and specify the sys.files variable. The upload entry will appear as a “paperclip” icon. For detailed instructions, refer to [Additional Features](additional-features.md).
-
+
* **Workflow Applications**
diff --git a/en/guides/workflow/debug-and-preview/checklist.md b/en/guides/workflow/debug-and-preview/checklist.md
index c5039ffe3..2a757ed2b 100644
--- a/en/guides/workflow/debug-and-preview/checklist.md
+++ b/en/guides/workflow/debug-and-preview/checklist.md
@@ -2,4 +2,4 @@
Before publishing the App, you can check the checklist to see if there are any nodes with incomplete configurations or that have not been connected.
-
+
diff --git a/en/guides/workflow/debug-and-preview/history.md b/en/guides/workflow/debug-and-preview/history.md
index 7bd9a8273..a06a4c602 100644
--- a/en/guides/workflow/debug-and-preview/history.md
+++ b/en/guides/workflow/debug-and-preview/history.md
@@ -2,4 +2,4 @@
In the "Run History," you can view the run results and log information from the historical debugging of the current workflow.
-
+
diff --git a/en/guides/workflow/debug-and-preview/log.md b/en/guides/workflow/debug-and-preview/log.md
index e98aa849a..202a98fb8 100644
--- a/en/guides/workflow/debug-and-preview/log.md
+++ b/en/guides/workflow/debug-and-preview/log.md
@@ -4,4 +4,4 @@ Clicking **"Run History - View Log — Details"** allows you to see a comprehens
This detailed information enables you to review various aspects of each node throughout the complete execution process of the workflow. You can examine inputs and outputs, analyze token consumption, evaluate runtime duration, and assess other pertinent metrics.
-
+
diff --git a/en/guides/workflow/debug-and-preview/step-run.md b/en/guides/workflow/debug-and-preview/step-run.md
index a6fc85079..b9e9e2d5e 100644
--- a/en/guides/workflow/debug-and-preview/step-run.md
+++ b/en/guides/workflow/debug-and-preview/step-run.md
@@ -2,8 +2,8 @@
Workflow supports step-by-step debugging of nodes, allowing you to repeatedly test whether the execution of the current node meets expectations.
-
+
After running a step test, you can review the execution status, input/output, and metadata information.
-
+
diff --git a/en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md b/en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md
index 0a210aded..479c8481e 100644
--- a/en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md
+++ b/en/guides/workflow/debug-and-preview/yu-lan-yu-yun-hang.md
@@ -2,10 +2,10 @@
Dify Workflow offers a comprehensive set of execution and debugging features. In conversational applications, clicking "Preview" enters debugging mode.
-
+
In workflow applications, clicking "Run" enters debugging mode.
-
+
Once in debugging mode, you can debug the configured workflow using the interface on the right side of the screen.
diff --git a/en/guides/workflow/file-upload.md b/en/guides/workflow/file-upload.md
index 649282382..9b261922a 100644
--- a/en/guides/workflow/file-upload.md
+++ b/en/guides/workflow/file-upload.md
@@ -68,7 +68,7 @@ Some LLMs, such as [Claude 3.5 Sonnet](https://docs.anthropic.com/en/docs/build-
1. Click the **"Features"** button in the upper right corner of the Chatflow application to add more functionality to the application. After enabling this feature, application users can upload and update files at any time during the application dialogue. A maximum of 10 files can be uploaded simultaneously, with a size limit of 15MB per file.
-
file upload
+
file upload
Enabling this feature does not grant LLMs the ability to directly read files. A **Document Extractor** is still needed to parse documents into text for LLM comprehension.
@@ -79,11 +79,11 @@ Enabling this feature does not grant LLMs the ability to directly read files. A
3. Add an LLM node and select the output variable of the Document Extractor node in the system prompt.
4. Add an "Answer" node at the end, filling in the output variable of the LLM node.
-
+
Once enabled, users can upload files and engage in conversations in the dialogue box. However, with this method, the LLM application does not have the ability to remember file contents, and files need to be uploaded for each conversation.
-
+
If you want the LLM to remember file contents during conversations, please refer to Method 3.
@@ -125,11 +125,11 @@ After uploading, files are stored in single file variables, which LLMs cannot di
Use the file variable from the "Start" node as the input variable for the **"Document Extractor"** node.
-
Document Extractor
+
Document Extractor
Fill in the output variable of the "Document Extractor" node in the system prompt of the LLM node.
-
+
After completing these settings, application users can paste file URLs or upload local files in the WebApp, then interact with the LLM about the document content. Users can replace files at any time during the conversation, and the LLM will obtain the latest file content.
@@ -143,7 +143,7 @@ For certain file types (such as images), file variables can be directly used wit
Below is an example configuration:
-
Using file variables directly in LLM node
+
Using file variables directly in LLM node
Note that when using file variables directly in the LLM node, developers need to ensure the file variable contains only image files; otherwise, errors may occur. If users might upload different types of files, use a List Operator node to filter them first.
@@ -151,7 +151,7 @@ It's important to note that when directly using file variables in LLM node, the
Placing file variables in answer nodes or end nodes will provide a file download card in the conversation box when the application reaches that node. Clicking the card allows for file download.
-
file download
+
file download
## Advanced Usage
diff --git a/en/guides/workflow/node/agent.md b/en/guides/workflow/node/agent.md
index f0a91ddd1..74067114c 100644
--- a/en/guides/workflow/node/agent.md
+++ b/en/guides/workflow/node/agent.md
@@ -10,17 +10,17 @@ An Agent Node is a component in Dify Chatflow/Workflow that enables autonomous t
In the Dify Chatflow/Workflow editor, drag the Agent node from the components panel onto the canvas.
-
+
### Select an Agent Strategy
In the node configuration panel, click Agent Strategy.
-
+
From the dropdown menu, select the desired Agent reasoning strategy. Dify provides two built-in strategies, **Function Calling and ReAct**, which can be installed from the **Marketplace → Agent Strategies category**.
-
+
#### 1. Function Calling
@@ -34,7 +34,7 @@ Pros:
**• Structured output:** The model outputs structured information about function calls, facilitating processing by downstream nodes.
-
+
#### 2. ReAct (Reason + Act)
@@ -48,7 +48,7 @@ Pros:
**• Wide applicability:** Suitable for scenarios that require external knowledge or need to perform specific actions, such as Q\&A, information retrieval, and task execution.
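The ReAct cycle described above can be sketched in a few lines of Python. This is an illustrative skeleton only, not Dify's implementation: the `llm` and `tools` callables and the dict-based step format are assumptions made for demonstration.

```python
def react_loop(llm, tools: dict, question: str, max_iterations: int = 5) -> str:
    # Alternate between model reasoning and tool calls until a final answer appears
    scratchpad = f"Question: {question}"
    for _ in range(max_iterations):
        step = llm(scratchpad)  # assumed to return {"action": ..., "input": ...} or {"answer": ...}
        if "answer" in step:
            return step["answer"]
        observation = tools[step["action"]](step["input"])
        scratchpad += f"\nAction: {step['action']}\nObservation: {observation}"
    return "Max iterations reached"
```

Each round appends the tool's observation to the scratchpad, which is why the Maximum Iterations setting matters: it bounds how many reason/act rounds the Agent may take.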
-
+
Developers can contribute Agent strategy plugins to the public [repository](https://github.com/langgenius/dify-plugins). After review, these plugins will be listed in the Marketplace for others to install.
@@ -66,10 +66,10 @@ After choosing the Agent strategy, the configuration panel will display the rele
5. **Maximum Iterations:** Set the maximum number of execution steps for the Agent.
6. **Output Variables:** Indicates the data structure output by the node.
-
+
## Logs
During execution, the Agent node generates detailed logs. You can see overall node execution information—including inputs and outputs, token usage, time spent, and status. Click Details to view the output from each round of Agent strategy execution.
-
+
diff --git a/en/guides/workflow/node/answer.md b/en/guides/workflow/node/answer.md
index d0c545e74..0c3f5b573 100644
--- a/en/guides/workflow/node/answer.md
+++ b/en/guides/workflow/node/answer.md
@@ -10,10 +10,10 @@ Answer node can be seamlessly integrated at any point to dynamically deliver con
Example 1: Output plain text.
-
+
Example 2: Output image and LLM reply.
-
+
-
+
diff --git a/en/guides/workflow/node/code.md b/en/guides/workflow/node/code.md
index dd48fc01e..868f03d57 100644
--- a/en/guides/workflow/node/code.md
+++ b/en/guides/workflow/node/code.md
@@ -13,7 +13,7 @@ The code node supports running Python/NodeJS code to perform data transformation
This node significantly enhances the flexibility for developers, allowing them to embed custom Python or JavaScript scripts within the workflow and manipulate variables in ways that preset nodes cannot achieve. Through configuration options, you can specify the required input and output variables and write the corresponding execution code:
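For illustration, a Code node body might look like the following Python sketch, assuming the convention (visible elsewhere in these docs) that the node defines a `main` function whose returned dict maps to the declared output variables. The variable names here are examples, not required names.

```python
def main(scores: list, threshold: float) -> dict:
    # Custom filtering and aggregation: the kind of logic preset nodes cannot express
    passing = [s for s in scores if s >= threshold]
    return {
        "passing_count": len(passing),
        "average": sum(passing) / len(passing) if passing else 0,
    }
```

The keys of the returned dict must match the output variables configured on the node so that downstream nodes can reference them.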
-
+
## Configuration
diff --git a/en/guides/workflow/node/doc-extractor.md b/en/guides/workflow/node/doc-extractor.md
index f934365d3..40971c3e8 100644
--- a/en/guides/workflow/node/doc-extractor.md
+++ b/en/guides/workflow/node/doc-extractor.md
@@ -13,7 +13,7 @@ LLMs cannot directly read or interpret document contents. Therefore, it's necess
The document extractor node can be understood as an information processing center. It recognizes and reads files in the input variables, extracts information, and converts it into string-type output variables for downstream nodes to call.
-
doc extractor
+
doc extractor
The document extractor node structure is divided into input variables and output variables.
@@ -41,7 +41,7 @@ In a typical file interaction Q\&A scenario, the document extractor can serve as
This section will introduce the usage of the document extractor node through a typical ChatPDF example workflow template.
-
Chatpdf workflow
+
Chatpdf workflow
**Configuration Process:**
@@ -49,11 +49,11 @@ This section will introduce the usage of the document extractor node through a t
2. Add a document extractor node and select the `pdf` variable in the input variables.
3. Add an LLM node and select the output variable of the document extractor node in the system prompt. The LLM can read the contents of the file through this output variable.
-
+
Configure the end node by selecting the output variable of the LLM node in the end node.
-
chat with pdf
+
chat with pdf
After configuration, the application will have file upload functionality, allowing users to upload PDF files and engage in conversation.
diff --git a/en/guides/workflow/node/end.md b/en/guides/workflow/node/end.md
index ff2750853..d027b329c 100644
--- a/en/guides/workflow/node/end.md
+++ b/en/guides/workflow/node/end.md
@@ -18,12 +18,12 @@ End nodes are not supported within Chatflow.
In the following [long story generation workflow](iteration.md#example-2-long-article-iterative-generation-another-scheduling-method), the variable `Output` declared by the end node is the output of the upstream code node. This means the workflow will end after the Code node completes execution and will output the execution result of Code.
-
End Node - Long Story Generation Example
+
End Node - Long Story Generation Example
**Single Path Execution Example:**
-
+
**Multi-Path Execution Example:**
-
+
diff --git a/en/guides/workflow/node/http-request.md b/en/guides/workflow/node/http-request.md
index 16f07f053..0078edc7c 100644
--- a/en/guides/workflow/node/http-request.md
+++ b/en/guides/workflow/node/http-request.md
@@ -15,7 +15,7 @@ This node supports common HTTP request methods:
You can configure various aspects of the HTTP request, including URL, request headers, query parameters, request body content, and authentication information.
-
HTTP Request Configuration
+
HTTP Request Configuration
***
@@ -25,7 +25,7 @@ You can configure various aspects of the HTTP request, including URL, request he
One practical feature of this node is the ability to dynamically insert variables into different parts of the request based on the scenario. For example, when handling customer feedback requests, you can embed variables such as username or customer ID, feedback content, etc., into the request to customize automated reply messages or fetch specific customer information and send related resources to a designated server.
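As a rough analogy in Python (the endpoint URL and field names below are made up for illustration, not part of the node's API), embedding variables into a request amounts to templating them into the body before sending:

```python
import json
from urllib import request

def build_feedback_request(api_url: str, username: str, feedback: str) -> request.Request:
    # Embed workflow variables (username, feedback) into the JSON request body
    body = json.dumps({"user": username, "feedback": feedback}).encode("utf-8")
    return request.Request(
        api_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_feedback_request("https://example.com/api/feedback", "alice", "Love the new editor")
```

In the node's configuration panel, the same effect is achieved by inserting variable references directly into the URL, headers, or body fields.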
-
Customer Feedback Classification
+
Customer Feedback Classification
The return values of an HTTP request include the response body, status code, response headers, and files. Notably, if the response contains a file, this node can automatically save the file for use in subsequent steps of the workflow. This design not only improves processing efficiency but also makes handling responses with files straightforward and direct.
@@ -37,7 +37,7 @@ Example: Suppose you are developing a document management application and need t
Here is a configuration example:
-
http-node-send-file
+
http-node-send-file
### Advanced Features
diff --git a/en/guides/workflow/node/ifelse.md b/en/guides/workflow/node/ifelse.md
index bca4a2dd3..3b9e6f206 100644
--- a/en/guides/workflow/node/ifelse.md
+++ b/en/guides/workflow/node/ifelse.md
@@ -27,7 +27,7 @@ A conditional branching node has three parts:
### Scenario
-
+
Taking the above **Text Summary Workflow** as an example:
@@ -41,4 +41,4 @@ Taking the above **Text Summary Workflow** as an example:
For complex condition judgments, you can set multiple condition judgments and configure **AND** or **OR** between conditions to take the **intersection** or **union** of the conditions, respectively.
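The AND/OR semantics can be summarized with a tiny Python sketch (purely illustrative):

```python
def evaluate(conditions: list, operator: str = "AND") -> bool:
    # AND requires every condition to hold (intersection);
    # OR requires at least one condition to hold (union)
    return all(conditions) if operator == "AND" else any(conditions)
```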
-
Multiple Condition Judgments
+
Multiple Condition Judgments
diff --git a/en/guides/workflow/node/iteration.md b/en/guides/workflow/node/iteration.md
index 522985cb1..7ace31051 100644
--- a/en/guides/workflow/node/iteration.md
+++ b/en/guides/workflow/node/iteration.md
@@ -26,7 +26,7 @@ An iteration node consists of three core components: **Input Variables**, **Iter
#### **Example 1: Long Article Iteration Generator**
-
Long Story Generator
+
Long Story Generator
1. Enter the story title and outline in the **Start Node**.
2. Use a **Generate Subtitles and Outlines Node** to have the LLM generate the complete content from the user input.
@@ -38,15 +38,15 @@ An iteration node consists of three core components: **Input Variables**, **Iter
1. Configure the story title (title) and outline (outline) in the **Start Node**.
-
Start Node Configuration
+
Start Node Configuration
2. Use a **Generate Subtitles and Outlines Node** to convert the story title and outline into complete text.
-
Template Node
+
Template Node
3. Use an **Extract Subtitles and Outlines Node** to convert the story text into an array (Array) structure. The parameter to extract is `sections`, and the parameter type is `Array[Object]`.
-
Parameter Extraction
+
Parameter Extraction
{% hint style="info" %}
The effectiveness of parameter extraction is influenced by the model's inference capability and the instructions given. Using a model with stronger inference capabilities and adding examples in the **instructions** can improve the parameter extraction results.
@@ -54,11 +54,11 @@ The effectiveness of parameter extraction is influenced by the model's inference
4. Use the array-formatted story outline as the input for the iteration node and process it within the iteration node using an **LLM Node**.
-
Configure Iteration Node
+
Configure Iteration Node
Configure the input variables `GenerateOverallOutline/output` and `Iteration/item` in the LLM Node.
-
Configure LLM Node
+
Configure LLM Node
{% hint style="info" %}
Built-in variables for iteration: `items[object]` and `index[number]`.
@@ -70,15 +70,15 @@ Built-in variables for iteration: `items[object]` and `index[number]`.
5. Configure a **Direct Reply Node** inside the iteration node to achieve streaming output after each iteration.
-
Configure Answer Node
+
Configure Answer Node
6. Complete debugging and preview.
-
Generate by Iterating Through Story Chapters
+
Generate by Iterating Through Story Chapters
#### **Example 2: Long Article Iteration Generator (Another Arrangement)**
-
+
* Enter the story title and outline in the **Start Node**.
* Use an **LLM Node** to generate subheadings and corresponding content for the article.
@@ -135,11 +135,11 @@ Array variables can be generated via the following nodes as iteration node input
* [Code Node](code.md)
-
* [Knowledge Base Retrieval](knowledge-retrieval.md)
* [Iteration](iteration.md)
@@ -154,7 +154,7 @@ The output variable of the iteration node is in array format and cannot be direc
**Convert Using a Code Node**
-
Code Node Conversion
+
Code Node Conversion
CODE Example:
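A minimal sketch of such a conversion, consistent with the `main(articleSections: list)` signature used in this template (the joining format here is an assumption for illustration):

```python
def main(articleSections: list) -> dict:
    # Join the iteration node's array output into a single string variable
    return {"result": "\n\n".join(str(section) for section in articleSections)}
```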
@@ -168,7 +168,7 @@ def main(articleSections: list):
**Convert Using a Template Node**
-
Template Node Conversion
+
Template Node Conversion
CODE Example:
diff --git a/en/guides/workflow/node/knowledge-retrieval.md b/en/guides/workflow/node/knowledge-retrieval.md
index 81f895770..66ac76fcc 100644
--- a/en/guides/workflow/node/knowledge-retrieval.md
+++ b/en/guides/workflow/node/knowledge-retrieval.md
@@ -2,7 +2,7 @@
The Knowledge Base Retrieval Node is designed to query text content related to user questions from the Dify Knowledge Base, which can then be used as context for subsequent answers by the Large Language Model (LLM).
-
+
Configuring the Knowledge Base Retrieval Node involves three main steps:
@@ -22,8 +22,8 @@ Within the knowledge base retrieval node, you can add an existing knowledge base
It's possible to modify the indexing strategy and retrieval mode for an individual knowledge base within the node. For a detailed explanation of these settings, refer to the knowledge base [help documentation](https://docs.dify.ai/guides/knowledge-base/retrieval-test-and-citation).
-
+
Dify offers two recall strategies for different knowledge base retrieval scenarios: "N-to-1 Recall" and "Multi-way Recall". In the N-to-1 mode, knowledge base queries are executed through function calling, requiring the selection of a system reasoning model. In the multi-way recall mode, a Rerank model needs to be configured for result re-ranking. For a detailed explanation of these two recall strategies, refer to the retrieval mode explanation in the [help documentation](https://docs.dify.ai/guides/knowledge-base/create-knowledge-and-upload-documents#id-5-indexing-methods).
-
+
diff --git a/en/guides/workflow/node/list-operator.md b/en/guides/workflow/node/list-operator.md
index 6784d0ad6..eef8c9bc5 100644
--- a/en/guides/workflow/node/list-operator.md
+++ b/en/guides/workflow/node/list-operator.md
@@ -10,11 +10,11 @@ The list operator can filter and extract attributes such as file format type, fi
For example, in an application that allows users to upload both document files and image files at the same time, the files need to be sorted by the **List Operator node** so that each file type can be handled by a different process.
-
+
The List Operator node is generally used to extract information from array variables and, based on the conditions you set, convert it into variable types that downstream nodes can accept. Its structure consists of input variables, filter conditions, sorting, taking the first N items, and output variables.
-
+
**Input Variables**
@@ -72,6 +72,6 @@ In file interaction Q\&A scenarios, application users may upload document files
3. Extract document file variables and pass them to the "Doc Extractor" node; extract image file variables and pass them to the "LLM" node.
4. Add an "Answer" node at the end, filling in the output variable of the LLM node.
-
+
After the application user uploads both document files and images, document files are automatically routed to the Doc Extractor node and image files to the LLM node, enabling joint processing of mixed files.
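The routing this configuration performs can be sketched in Python. The extension lists below are illustrative; the actual node filters on whatever file attributes you configure in its conditions.

```python
def split_files(file_names: list) -> dict:
    # Route files by extension, mimicking the List Operator's type filter
    doc_exts = (".pdf", ".docx", ".txt", ".md")
    img_exts = (".png", ".jpg", ".jpeg", ".webp")
    return {
        "documents": [f for f in file_names if f.lower().endswith(doc_exts)],
        "images": [f for f in file_names if f.lower().endswith(img_exts)],
    }
```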
diff --git a/en/guides/workflow/node/llm.md b/en/guides/workflow/node/llm.md
index c5b6c69d8..ad9fb3bf4 100644
--- a/en/guides/workflow/node/llm.md
+++ b/en/guides/workflow/node/llm.md
@@ -4,7 +4,7 @@
Invokes the capabilities of large language models to process the information users provide in the "Start" node (natural language, uploaded files, or images) and return an effective response.
-
LLM Node
+
LLM Node
***
@@ -27,7 +27,7 @@ By selecting the appropriate model and writing prompts, you can build powerful a
### How to Configure
-
LLM Node Configuration - Model Selection
+
LLM Node Configuration - Model Selection
**Configuration Steps:**
@@ -48,11 +48,11 @@ In the LLM node, you can customize the model input prompts. If you select a chat
If you're struggling to come up with effective system prompts (System), you can use the Prompt Generator to quickly create prompts suitable for your specific business scenarios, leveraging AI capabilities.
-
+
In the prompt editor, you can call out the **variable insertion menu** by typing `/` or `{` to insert **special variable blocks** or **upstream node variables** into the prompt as context content.
-
Calling Out the Variable Insertion Menu
+
Calling Out the Variable Insertion Menu
***
@@ -62,7 +62,7 @@ In the prompt editor, you can call out the **variable insertion menu** by typing
Context variables are a special type of variable defined within the LLM node, used to insert externally retrieved text content into the prompt.
-
Context Variables
+
Context Variables
In common knowledge base Q\&A applications, the downstream node of knowledge retrieval is typically the LLM node. The **output variable** `result` of knowledge retrieval needs to be configured in the **context variable** within the LLM node for association and assignment. After association, inserting the **context variable** at the appropriate position in the prompt can incorporate the externally retrieved knowledge into the prompt.
@@ -88,13 +88,13 @@ To achieve conversational memory in text completion models (e.g., gpt-3.5-turbo-
The conversation history variable is not widely used and can only be inserted when selecting text completion models in Chatflow.
{% endhint %}
-
Inserting Conversation History Variable
+
Inserting Conversation History Variable
**Model Parameters**
The parameters of the model affect the output of the model. Different models have different parameters. The following figure shows the parameter list for `gpt-4`.
-
+
The main parameter terms are explained as follows:
@@ -108,7 +108,7 @@ The main parameter terms are explained as follows:
If you do not understand what these parameters are, you can choose to load presets and select from the three presets: Creative, Balanced, and Precise.
-
+
***
@@ -143,7 +143,7 @@ To enable workflow applications to read "[Knowledge Base](../../knowledge-base/)
2. Fill in the **output variable** `result` of the knowledge retrieval node into the **context variable** of the LLM node;
3. Insert the **context variable** into the application prompt to give the LLM the ability to read text within the knowledge base.
-
+
The `result` variable output by the Knowledge Retrieval Node also includes segmented reference information. You can view the source of information through the **Citation and Attribution** feature.
@@ -161,7 +161,7 @@ To enable workflow applications to read document contents, such as building a Ch
For more information, please refer to [File Upload](../file-upload.md).
-
input system prompts
+
input system prompts
* **Error Handling**
diff --git a/en/guides/workflow/node/parameter-extractor.md b/en/guides/workflow/node/parameter-extractor.md
index 3d7030b49..4f9e753d0 100644
--- a/en/guides/workflow/node/parameter-extractor.md
+++ b/en/guides/workflow/node/parameter-extractor.md
@@ -16,11 +16,11 @@ Some nodes within the workflow require specific data formats as inputs, such as
In this example: The Arxiv paper retrieval tool requires **paper author** or **paper ID** as input parameters. The parameter extractor extracts the paper ID **2405.10739** from the query "What is the content of this paper: 2405.10739" and uses it as the tool parameter for precise querying.
-
Arxiv Paper Retrieval Tool
+
Arxiv Paper Retrieval Tool
2. **Converting text to structured data**, such as in the long story iteration generation application, where it serves as a pre-step for the [iteration node](iteration.md), converting chapter content in text format to an array format, facilitating multi-round generation processing by the iteration node.
-
+
3. **Extracting structured data for the** [**HTTP Request**](https://docs.dify.ai/guides/workflow/node/http-request) node, which can request any accessible URL, making it suitable for obtaining external retrieval results, webhooks, generating images, and other scenarios.
diff --git a/en/guides/workflow/node/question-classifier.md b/en/guides/workflow/node/question-classifier.md
index 67168e2e8..614fae1f3 100644
--- a/en/guides/workflow/node/question-classifier.md
+++ b/en/guides/workflow/node/question-classifier.md
@@ -14,7 +14,7 @@ In a typical product customer service Q\&A scenario, the issue classifier can se
The following diagram is an example workflow template for a product customer service scenario:
-
+
In this scenario, we set up three classification labels/descriptions:
@@ -32,7 +32,7 @@ When users input different questions, the issue classifier will automatically cl
### 3. How to Configure
-
+
**Configuration Steps:**
diff --git a/en/guides/workflow/node/start.md b/en/guides/workflow/node/start.md
index 775c8db81..924771164 100644
--- a/en/guides/workflow/node/start.md
+++ b/en/guides/workflow/node/start.md
@@ -8,7 +8,7 @@ The **“Start”** node is a critical preset node in the Chatflow / Workflow ap
On the Start node's settings page, you'll find two sections: **"Input Fields"** and preset **System Variables**.
-
Chatflow and Workflow
+
Chatflow and Workflow
#### Input Field
diff --git a/en/guides/workflow/node/template.md b/en/guides/workflow/node/template.md
index 1f1d1bd4a..233e50873 100644
--- a/en/guides/workflow/node/template.md
+++ b/en/guides/workflow/node/template.md
@@ -2,7 +2,7 @@
Template lets you dynamically format and combine variables from previous nodes into a single text-based output using Jinja2, a powerful templating syntax for Python. It's useful for combining data from multiple sources into a specific structure required by subsequent nodes. The simple example below shows how to assemble an article by piecing together various previous outputs:
-
+
Beyond naive use cases, you can create more complex templates as per Jinja's [documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/) for a variety of tasks. Here's one template that structures retrieved chunks and their relevant metadata from a knowledge retrieval node into a formatted markdown:
@@ -22,7 +22,7 @@ Beyond naive use cases, you can create more complex templates as per Jinja's [do
{% endraw %}
```
-
+
This template node can then be used within a Chatflow to return intermediate outputs to the end user before an LLM response is initiated.
diff --git a/en/guides/workflow/node/tools.md b/en/guides/workflow/node/tools.md
index 720511337..5bea4a345 100644
--- a/en/guides/workflow/node/tools.md
+++ b/en/guides/workflow/node/tools.md
@@ -14,9 +14,9 @@ If built-in tools do not meet your needs, you can create custom tools in the **D
You can also orchestrate a more complex workflow and publish it as a tool.
-
Tool Selection
+
Tool Selection
-
Configuring Google Search Tool to Retrieve External Knowledge
+
Configuring Google Search Tool to Retrieve External Knowledge
Configuring a tool node generally involves two steps:
diff --git a/en/guides/workflow/node/variable-aggregator.md b/en/guides/workflow/node/variable-aggregator.md
index f5538ee3b..f2400b1c3 100644
--- a/en/guides/workflow/node/variable-aggregator.md
+++ b/en/guides/workflow/node/variable-aggregator.md
@@ -16,15 +16,15 @@ Through variable aggregation, you can aggregate multiple outputs, such as from i
Without variable aggregation, the Classification 1 and Classification 2 branches would each require their own downstream LLM and Direct Reply nodes to be defined after their respective knowledge base retrievals.
-
By adding variable aggregation, the outputs of the two knowledge retrieval nodes can be aggregated into a single variable.
-
Multi-Branch Aggregation after Issue Classification
+
Multi-Branch Aggregation after Issue Classification
**Multi-Branch Aggregation after IF/ELSE Conditional Branching**
-
Multi-Branch Aggregation after Conditional Branching
+
Multi-Branch Aggregation after Conditional Branching
### 3 Format Requirements
diff --git a/en/guides/workflow/node/variable-assigner.md b/en/guides/workflow/node/variable-assigner.md
index a1ab23795..40e5544c8 100644
--- a/en/guides/workflow/node/variable-assigner.md
+++ b/en/guides/workflow/node/variable-assigner.md
@@ -22,7 +22,7 @@ Using the variable assigner node, you can write context from the conversation pr
Example: After the conversation starts, the LLM automatically determines whether the user's input contains facts, preferences, or chat history that need to be remembered. If it does, the LLM first extracts and stores that information, then uses it as context to respond. If there is no new information to remember, the LLM directly uses the previously stored memories to answer questions.
-
+
**Configuration process:**
@@ -108,7 +108,7 @@ def main(arg1: list) -> str:
Example: Before the chat begins, the user specifies "English" in the `language` input box. This language is written to the conversation variable, and the LLM references it when responding, continuing to use English in subsequent conversations.
-
+
**Configuration Guide:**
@@ -124,7 +124,7 @@ Example: Before the chatting, the user specifies "English" in the `language` inp
Example: After the conversation starts, the LLM asks the user to enter items related to the checklist in the chat box. Once the user mentions content from the checklist, it is updated and stored in the conversation variable. The LLM reminds the user to continue supplementing missing items after each round of dialogue.
-
+
**Configuration Process:**
diff --git a/en/guides/workflow/publish.md b/en/guides/workflow/publish.md
index 575905946..bbdaa47f7 100644
--- a/en/guides/workflow/publish.md
+++ b/en/guides/workflow/publish.md
@@ -2,7 +2,7 @@
After completing debugging, clicking "Publish" in the upper right corner allows you to save and quickly release the workflow as different types of applications.
-
+
Conversational applications can be published as:
diff --git a/en/guides/workflow/variables.md b/en/guides/workflow/variables.md
index 1b06937e3..6768eb832 100644
--- a/en/guides/workflow/variables.md
+++ b/en/guides/workflow/variables.md
@@ -18,7 +18,7 @@ Workflow type application provides the system variables below:
| Variable Name | Data Type | Description | Remark |
| --- | --- | --- | --- |
| `sys.files` [LEGACY] | Array[File] | File parameter: stores images uploaded by users | The image upload feature must be enabled in the "Features" section in the upper right corner of the application orchestration page |
| `sys.user_id` | String | User ID: a unique identifier automatically assigned by the system to each user of a workflow application, used to distinguish different users | |
| `sys.app_id` | String | App ID: a unique identifier automatically assigned by the system to each app, recording the basic information of the current application | Used by users with development capabilities to differentiate and locate distinct workflow applications |
| `sys.workflow_id` | String | Workflow ID: records information about all nodes in the current workflow application | Can be used by users with development capabilities to track and record information about the nodes contained within a workflow |
| `sys.workflow_run_id` | String | Workflow Run ID: records the runtime status and execution logs of a workflow application | Can be used by users with development capabilities to track the application's historical execution records |
-
Workflow App System Variables
+
Workflow App System Variables
#### Chatflow
@@ -26,13 +26,13 @@ Chatflow type application provides the following system variables:
| Variable Name | Data Type | Description | Remark |
| --- | --- | --- | --- |
| `sys.query` | String | Content entered by the user in the chat box | |
| `sys.files` | Array[File] | File parameter: stores images uploaded by users | The image upload feature must be enabled in the "Features" section in the upper right corner of the application orchestration page |
| `sys.dialogue_count` | Number | The number of conversation turns during the user's interaction with a Chatflow application; the count increases by one after each chat round and can be combined with if/else nodes to create rich branching logic | For example, at the Xth conversation turn, the LLM can review the conversation history and automatically provide an analysis |
| `sys.conversation_id` | String | A unique ID for the chat session, grouping all related messages into the same conversation so that the LLM continues on the same topic and context | |
| `sys.user_id` | String | A unique ID assigned to each application user, used to distinguish different conversation users | |
| `sys.workflow_id` | String | Workflow ID: records information about all nodes in the current workflow application | Can be used by users with development capabilities to track and record information about the nodes contained within a workflow |
| `sys.workflow_run_id` | String | Workflow Run ID: records the runtime status and execution logs of a workflow application | Can be used by users with development capabilities to track the application's historical execution records |
-
Chatflow App System Variables
+
Chatflow App System Variables
### Environment Variables
**Environment variables are used to protect sensitive information involved in workflows**, such as API keys and database passwords used when running workflows. They are stored in the workflow rather than in the code, allowing them to be shared across different environments.
-
Environment Variables
+
Environment Variables
Supports the following 3 data types:
@@ -56,7 +56,7 @@ Environmental variables have the following characteristics:
For example, you can store the language preference the user enters in the first round of the conversation in a conversation variable. The LLM will refer to this information when answering and use the specified language to reply in subsequent chats.
-
Conversation Variable
+
Conversation Variable
**Conversation variables** support the following six data types:
diff --git a/en/guides/workspace/app.md b/en/guides/workspace/app.md
index aa216bf42..d9a785d96 100644
--- a/en/guides/workspace/app.md
+++ b/en/guides/workspace/app.md
@@ -4,11 +4,11 @@
In the **Discover** section, several commonly used template applications are provided. These applications cover areas such as human resources, assistants, translation, programming, and writing.
-
+
To use a template application, click the "Add to Workspace" button on the template. You can then use the application in the workspace on the left side.
-
+
To modify a template and create a new application, click the "Customize" button on the template.
diff --git a/en/guides/workspace/app/README.md b/en/guides/workspace/app/README.md
index 778d8e6ad..08f2adc99 100644
--- a/en/guides/workspace/app/README.md
+++ b/en/guides/workspace/app/README.md
@@ -4,11 +4,11 @@
In the **Discover** section, several commonly used template applications are provided. These applications cover areas such as human resources, assistants, translation, programming, and writing.
-
+
To use a template application, click the "Add to Workspace" button on the template. You can then use the application in the workspace on the left side.
-
+
To modify a template and create a new application, click the "Customize" button on the template.
diff --git a/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.md b/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.md
index 06bbf8bbc..0067fb0df 100644
--- a/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.md
+++ b/en/learn-more/extended-reading/how-to-use-json-schema-in-dify.md
@@ -18,7 +18,7 @@ JSON Schema is a specification for describing JSON data structures. Developers c
Switch the LLM in your application to one of the models supporting JSON Schema output mentioned above. Then, in the settings form, enable `JSON Schema` and fill in the JSON Schema template. At the same time, enable the `response_format` field and set it to `json_schema`.
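A minimal illustrative template might look like the following sketch. The field names and structure here are examples for demonstration, not a schema prescribed by Dify; consult the model provider's JSON Schema documentation for the exact wrapper it expects.

```python
import json

# An illustrative JSON Schema for structured LLM output (names are examples)
ui_schema = {
    "name": "ui_component",
    "schema": {
        "type": "object",
        "properties": {
            "type": {"type": "string", "enum": ["div", "button", "input"]},
            "label": {"type": "string"},
        },
        "required": ["type", "label"],
        "additionalProperties": False,
    },
}

# The template pasted into the settings form is this object serialized as JSON
schema_text = json.dumps(ui_schema, indent=2)
```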
-
+
The content generated by the LLM supports output in the following format:
@@ -205,7 +205,7 @@ You are a UI generator AI. Convert the user input into a UI.
**Example Output:**
-
+
## Tips
diff --git a/en/learn-more/extended-reading/retrieval-augment/README.md b/en/learn-more/extended-reading/retrieval-augment/README.md
index cf62dd19f..4468b4078 100644
--- a/en/learn-more/extended-reading/retrieval-augment/README.md
+++ b/en/learn-more/extended-reading/retrieval-augment/README.md
@@ -8,7 +8,7 @@ Developers can use this technology to build AI-powered customer service, enterpr
In the diagram below, when a user asks, "Who is the President of the United States?", the system does not directly pass the question to the large model for an answer. Instead, it first performs a vector search in a knowledge base (such as Wikipedia shown in the diagram) to find relevant content through semantic similarity matching (e.g., "Joe Biden is the 46th and current president of the United States..."). Then, the system provides the user's question along with the retrieved relevant knowledge to the large model, allowing it to obtain sufficient information to answer the question reliably.
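The retrieve-then-generate flow just described can be sketched as follows; the `retrieve` and `llm` callables are stand-ins for a real vector store and model, not any particular library's API.

```python
def answer_with_rag(query: str, retrieve, llm) -> str:
    # Retrieve relevant knowledge first, then give the model both question and context
    context = retrieve(query)
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```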
-
Basic RAG Architecture
+
Basic RAG Architecture
**Why is this necessary?**
diff --git a/en/learn-more/extended-reading/retrieval-augment/hybrid-search.md b/en/learn-more/extended-reading/retrieval-augment/hybrid-search.md
index 6e8122a13..5c398e425 100644
--- a/en/learn-more/extended-reading/retrieval-augment/hybrid-search.md
+++ b/en/learn-more/extended-reading/retrieval-augment/hybrid-search.md
@@ -29,7 +29,7 @@ For most text search scenarios, the primary goal is to ensure that the most rele
In hybrid search, you need to establish vector indexes and keyword indexes in the database in advance. When a user query is input, the most relevant texts are retrieved from the documents using both retrieval methods.
-
Hybrid Search
+
Hybrid Search
"Hybrid search" does not have a precise definition. This article uses the combination of vector search and keyword search as an example. If we use other combinations of search algorithms, it can also be called "hybrid search." For instance, we can combine knowledge graph techniques for retrieving entity relationships with vector search techniques.
@@ -39,7 +39,7 @@ Different retrieval systems excel at finding various subtle relationships betwee
Definition: Generating query embeddings and querying the text segments most similar to their vector representations.
-
Vector Search Settings
+
Vector Search Settings
**TopK:** Used to filter the text fragments most similar to the user's query. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 3.
@@ -51,7 +51,7 @@ Definition: Generating query embeddings and querying the text segments most simi
Definition: Indexing all words in the document, allowing users to query any word and return text fragments containing those words.
-
Full-Text Search Settings
+
Full-Text Search Settings
**TopK:** Used to filter the text fragments most similar to the user's query. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 3.
@@ -61,7 +61,7 @@ Definition: Indexing all words in the document, allowing users to query any word
Simultaneously performs full-text search and vector search, applying a re-ranking step to select the best results matching the user's query from both types of query results. Requires configuring the Rerank model API.
-
Hybrid Search Settings
+
Hybrid Search Settings
**TopK:** Used to filter the text fragments most similar to the user's query. The system will dynamically adjust the number of fragments based on the context window size of the selected model. The default value is 3.
@@ -71,16 +71,16 @@ Simultaneously performs full-text search and vector search, applying a re-rankin
Set different retrieval modes by entering the "Dataset -> Create Dataset" page and configuring the retrieval settings.
-
Setting Retrieval Mode When Creating a Dataset
+
Setting Retrieval Mode When Creating a Dataset
### Modifying Retrieval Mode in Dataset Settings
Modify the retrieval mode of an existing dataset by entering the "Dataset -> Select Dataset -> Settings" page.
-
Modifying Retrieval Mode in Dataset Settings
+
Modifying Retrieval Mode in Dataset Settings
### Modifying Retrieval Mode in Prompt Arrangement
Modify the retrieval mode when creating an application by entering the "Prompt Arrangement -> Context -> Select Dataset -> Settings" page.
-
Modifying Retrieval Mode in Prompt Arrangement
+
Modifying Retrieval Mode in Prompt Arrangement
diff --git a/en/learn-more/extended-reading/retrieval-augment/rerank.md b/en/learn-more/extended-reading/retrieval-augment/rerank.md
index 8e4081c06..87b120065 100644
--- a/en/learn-more/extended-reading/retrieval-augment/rerank.md
+++ b/en/learn-more/extended-reading/retrieval-augment/rerank.md
@@ -6,7 +6,7 @@ Hybrid search can leverage the strengths of different retrieval technologies to
**The re-rank model calculates the semantic match between the list of candidate documents and the user query, reordering them based on semantic match to improve the results of semantic sorting.** The principle is to compute a relevance score between the user query and each candidate document and return a list of documents sorted by relevance from high to low. Common re-rank models include Cohere rerank, bge-reranker, etc.
-
Hybrid Search + Re-ranking
+
Hybrid Search + Re-ranking
In most cases, there is a preliminary retrieval before re-ranking because calculating the relevance score between a query and millions of documents would be highly inefficient. Therefore, **re-ranking is typically placed at the final stage of the search process and is ideal for merging and sorting results from different retrieval systems.**
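The re-ranking step described above can be sketched as follows; the scoring function is a toy stand-in for a real cross-encoder model such as Cohere rerank or bge-reranker:

```python
def rerank(query: str, candidates: list[str], top_n: int = 2) -> list[str]:
    # Score each candidate against the query and sort high to low.
    def relevance(doc: str) -> float:
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / (len(d) or 1)
    return sorted(candidates, key=relevance, reverse=True)[:top_n]

candidates = [
    "re-ranking improves semantic sorting of search results",
    "bananas are rich in potassium",
    "hybrid search merges results from different retrieval systems",
]
ranked = rerank("how does re-ranking improve search results", candidates)
print(ranked)
```

Because only a small candidate list is scored, this stays cheap even when the underlying corpus holds millions of documents.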
@@ -24,7 +24,7 @@ For example, with Cohere Rerank, you only need to register an account and apply
Dify currently supports the Cohere Rerank model. You can enter the "Model Providers -> Cohere" page and fill in the API key for the Re-rank model:
-
Configure Cohere Rerank Model in Model Providers
+
Configure Cohere Rerank Model in Model Providers
### How to Obtain the Cohere Rerank Model?
@@ -34,7 +34,7 @@ Visit: [https://cohere.com/rerank](https://cohere.com/rerank), register on the p
Enter the "Dataset -> Create Dataset -> Retrieval Settings" page to add the Re-rank settings. Besides setting the Re-rank model when creating a dataset, you can also change the Re-rank configuration in the settings of an existing dataset and in the dataset recall mode settings in application orchestration.
-
Setting the Re-rank Model in Dataset Retrieval Mode
+
Setting the Re-rank Model in Dataset Retrieval Mode
**TopK:** Used to set the number of relevant documents returned after re-ranking.
@@ -46,4 +46,4 @@ Enter the "Prompt Arrangement -> Context -> Settings" page to enable the Re-rank
Explanation about multi-path recall mode: 🔗Please check the section [Multi-path Retrieval](https://docs.dify.ai/guides/knowledge-base/integrate-knowledge-within-application#multi-path-retrieval-recommended)
-
Setting the Re-rank Model in Multi-Path Recall Mode for Datasets
+
Setting the Re-rank Model in Multi-Path Recall Mode for Datasets
diff --git a/en/learn-more/extended-reading/retrieval-augment/retrieval.md b/en/learn-more/extended-reading/retrieval-augment/retrieval.md
index d823f021e..a3024cf8c 100644
--- a/en/learn-more/extended-reading/retrieval-augment/retrieval.md
+++ b/en/learn-more/extended-reading/retrieval-augment/retrieval.md
@@ -2,7 +2,7 @@
When users build AI applications with multiple knowledge bases, Dify's retrieval strategy will determine which content will be retrieved.
-
retrieval Mode Settings
+
Retrieval Mode Settings
### Retrieval Setting
@@ -12,6 +12,6 @@ In multi-path retrieval mode, it's recommended that the Rerank model be configur
Below is the technical flowchart for the multi-path retrieval mode:
-
Multi-Path retrieval
+
Multi-Path Retrieval
Since multi-path retrieval mode does not rely on the model's inference capability or dataset descriptions, it can achieve higher-quality retrieval results when retrieving across multiple datasets. Additionally, incorporating a re-ranking step can effectively improve document retrieval effectiveness. Therefore, when creating knowledge base Q\&A applications associated with multiple datasets, we recommend configuring the retrieval mode as multi-path retrieval.
diff --git a/en/learn-more/use-cases/build-an-notion-ai-assistant.md b/en/learn-more/use-cases/build-an-notion-ai-assistant.md
index acd23ccde..2c74e7fb6 100644
--- a/en/learn-more/use-cases/build-an-notion-ai-assistant.md
+++ b/en/learn-more/use-cases/build-an-notion-ai-assistant.md
@@ -36,39 +36,39 @@ Click [here](https://dify.ai/) to login to Dify. You can conveniently log in usi
Click the `Knowledge` button on the top side bar, followed by the `Create Knowledge` button.
-
+
#### 3. Connect with Notion and Your Knowledge[](https://wsyfin.com/notion-dify#3-connect-with-notion-and-datasets)
Select "Sync from Notion" and then click the "Connect" button..
-
+
Afterward, you'll be redirected to the Notion login page. Log in with your Notion account.
-
+
Check the permissions needed by Dify, and then click the "Select pages" button.
-
+
Select the pages you want to synchronize with Dify, and press the "Allow access" button.
-
+
#### 4. Start training[](https://wsyfin.com/notion-dify#4-start-training)
Specify the pages the AI needs to study so that it can comprehend the content within this section of Notion. Then click the "Next" button.
-
+
We suggest selecting the "Automatic" and "High Quality" options to train your AI assistant. Then click the "Save & Process" button.
-
+
Enjoy your coffee while waiting for the training process to complete.
-
+
#### 5. Create Your AI application[](https://wsyfin.com/notion-dify#5-create-your-ai-application)
@@ -76,11 +76,11 @@ You must create an AI application and link it with the knowledge you've recently
Return to the dashboard, and click the "Create new APP" button. It's recommended to use the Chat App directly.
-
+
Select the "Prompt Eng." and link your notion datasets in the "context".
-
+
I recommend adding a 'Pre Prompt' to your AI application. Just as spells are essential to Harry Potter, certain tools or features can greatly enhance the abilities of an AI application.
@@ -88,15 +88,15 @@ For example, if your Notion notes focus on problem-solving in software developme
_I want you to act as an IT Expert in my Notion workspace, using your knowledge of computer science, network infrastructure, Notion notes, and IT security to solve the problems_.
-
+
It's recommended to enable the AI to greet users with an opening sentence, giving them a clue about what they can ask. Furthermore, activating the 'Speech to Text' feature allows users to interact with your AI assistant using their voice.
-
+
Finally, click the "Publish" button at the top right of the page. Now you can click the public URL in the "Monitoring" section to converse with your personalized AI assistant!
-
+
### Utilizing API to Integrate With Your Project
@@ -106,19 +106,19 @@ With effortless API integration, you can conveniently invoke your Notion AI appl
Click the "API Reference" button on the page of Overview page. You can refer to it as your App's API document.
-
+
#### 1. Generate API Secret Key[](https://wsyfin.com/notion-dify#1-generate-api-secret-key)
For security reasons, it's recommended to create a new API secret key to access your AI application.
-
+
#### 2. Retrieve Conversation ID[](https://wsyfin.com/notion-dify#2-retrieve-conversation-id)
After chatting with your AI application, you can retrieve the conversation ID from the "Logs & Ann." page.
-
+
#### 3. Invoke API[](https://wsyfin.com/notion-dify#3-invoke-api)
@@ -143,19 +143,19 @@ curl --location --request POST 'https://api.dify.ai/v1/chat-messages' \
Send the request in your terminal and you will get a successful response.
-
+
If you want to continue the chat, replace the `conversation_id` in the request with the `conversation_id` you received in the response.
You can check the full conversation history on the "Logs & Ann." page.
-
+
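A minimal sketch of the follow-up request body, assuming the `/v1/chat-messages` payload shape shown earlier; the `conversation_id` value below is a made-up example, and no network call is made here:

```python
import json

# Made-up example id; in practice, use the one returned by your first request.
previous_response = {"conversation_id": "45701982-8118-4bc5-8e9b-64562b4555f2"}

payload = {
    "inputs": {},
    "query": "Tell me more about that.",
    "response_mode": "blocking",
    "user": "abc-123",
    # Reuse the id from the previous response to stay in the same conversation.
    "conversation_id": previous_response["conversation_id"],
}
print(json.dumps(payload))
```

Sending this payload with the same API key continues the earlier conversation instead of starting a new one.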
### Sync with Notion periodically[](https://wsyfin.com/notion-dify#sync-with-notion-periodically)
If your Notion pages have been updated, you can sync them with Dify periodically to keep your AI assistant up to date; it will learn from the new content.
-
+
### Summary[](https://wsyfin.com/notion-dify#summary)
diff --git a/en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md b/en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md
index 4a3d7ec71..442437be3 100644
--- a/en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md
+++ b/en/learn-more/use-cases/create-a-midjourney-prompt-bot-with-dify.md
@@ -10,48 +10,48 @@ Dify offers two types of applications: conversational applications similar to Ch
You can access Dify here: https://dify.ai/
-
+
Once you've created your application, the dashboard page will display some data monitoring and application settings. Click on "Prompt Engineering" on the left, which is the main working page.
-
+
On this page, the left side is for prompt settings and other functions, while the right side provides a real-time preview and trial of your created content. The prefix prompt is attached to each user input and instructs the GPT model how to process the user's input.
-
+
Take a look at my prefix prompt structure: the first part instructs GPT to output a description of a photo in the following structure. The second structure serves as the template for generating the prompt, mainly consisting of elements like 'Color photo of the theme,' 'Intricate patterns,' 'Stark contrasts,' 'Environmental description,' 'Camera model,' 'Lens focal length description related to the input content,' 'Composition description relative to the input content,' and 'The names of four master photographers.' This constitutes the main content of the prompt. In theory, you can now save this to the preview area on the right, input the theme you want to generate, and the corresponding prompt will be generated.
-
+
You may have noticed the "\{{proportion\}}" and "\{{version\}}" at the end. These are variables used to pass user-selected information. On the right side, users are required to choose image proportions and model versions, and these two variables help carry that information to the end of the prompt. Let's see how to set them up.
-
+
Our goal is to fill in the user's selected information at the end of the prompt, making it easy for users to copy without having to rewrite or memorize these commands. For this, we use the variable function.
Variables allow us to dynamically incorporate the user's form-filled or selected content into the prompt. For example, I've created two variables: one represents the image proportion, and the other represents the model version. Click the "Add" button to create the variables.
-
+
After creation, you'll need to fill in the variable key and field name. The variable key should be in English. Marking a variable as optional means users are not required to fill it in. Next, click "Settings" in the action bar to set the variable content.
-
+
Variables can be of two types: text variables, where users manually input content, and select variables, where users choose from given options. Since we want to avoid manual commands, we'll use the dropdown option and add the required choices.
-
+
Now, let's use the variables. We need to enclose the variable key within double curly brackets `{{ }}` and add it to the prefix prompt. Since we want GPT to output the user-selected content as is, we'll include the phrase "Producing the following English photo description based on user input" in the prompt.
-
+
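How the `{{variable}}` placeholders get filled can be sketched like this; the template text and variable values are illustrative, not what Dify generates internally:

```python
# Illustrative prefix prompt ending with the two variables from the article.
prefix_prompt = "Color photo of {{theme}} ... --ar {{proportion}} --v {{version}}"

def fill(template: str, variables: dict) -> str:
    # Substitute each user-selected value into its placeholder.
    for key, value in variables.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = fill(prefix_prompt, {"theme": "a lighthouse", "proportion": "16:9", "version": "5.2"})
print(prompt)
```

The user's dropdown selections simply land at the end of the prompt, so nothing needs to be typed or memorized.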
However, there's still a chance that GPT might modify our variable content. To address this, we can lower the diversity in the model selection on the right, reducing the temperature and making it less likely to alter our variable content. You can check the tooltips for other parameters' meanings.
-
+
With these steps, your application is now complete. After testing and ensuring there are no issues with the output, click the "Publish" button in the upper right corner to release your application. You and users can access your application through the publicly available URL. You can also customize the application name, introduction, icon, and other details in the settings.
-
+
That's how you create a simple AI application using Dify. You can also deploy your application on other platforms or modify its UI using the generated API. Additionally, Dify supports uploading your own data, such as building a customer service bot to assist with product-related queries. This concludes the tutorial, and a special thanks to @goocarlos for creating such a fantastic product.
diff --git a/en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md b/en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
index 05c7792ad..2cd345189 100644
--- a/en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
+++ b/en/learn-more/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
@@ -21,7 +21,7 @@ If you want to build an AI Chatbot based on the company's existing knowledge bas
3. Select the cleaning method
4. Click \[Save and Process], and it will take only a few seconds to complete the processing.
-
+
### Create an AI application and give it instructions
@@ -39,19 +39,19 @@ In this case, we assign a role to the AI:
> Opening remarks: Hey \{{User\_name\}}, I'm Bob☀️, the first AI member of Dify. You can discuss with me any questions related to Dify products, team, and even LLMOps.
-
+
### Debug the performance of AI Chatbot and publish.
After completing the setup, you can send messages on the right side of the current page to check whether its performance meets expectations. Then click "Publish", and you have an AI chatbot.
-
+
### Embed AI Chatbot application into your front-end page.
This step embeds the prepared AI chatbot into your official website. Click \[Overview] -> \[Embedded], select the script tag method, and copy the script code into the `<head>` or `<body>` tag of your website. If you are not a technical person, you can ask the developer responsible for the website to paste it in and update the page.
-
+
1. Paste the copied code into the target location on your website.
2. Update your official website and you will have an AI customer service bot powered by your business data. Try it out to see the effect.
diff --git a/en/learn-more/use-cases/how-to-connect-aws-bedrock.md b/en/learn-more/use-cases/how-to-connect-aws-bedrock.md
index 1cd691517..26485608b 100644
--- a/en/learn-more/use-cases/how-to-connect-aws-bedrock.md
+++ b/en/learn-more/use-cases/how-to-connect-aws-bedrock.md
@@ -12,13 +12,13 @@ This article will briefly introduce how to connect the Dify platform with the AW
Visit [AWS Bedrock](https://aws.amazon.com/bedrock/) and create the Knowledge Base service.
-
Create AWS Bedrock Knowledge Base
+
Create AWS Bedrock Knowledge Base
### 2. Build the Backend API Service
The Dify platform cannot directly connect to AWS Bedrock Knowledge Base. The developer needs to refer to Dify's [API definition](../../guides/knowledge-base/external-knowledge-api-documentation.md) on external knowledge base connection, manually create the backend API service, and establish a connection with AWS Bedrock. Please refer to the specific architecture diagram:
-
Build the backend API service
+
Build the backend API service
You can refer to the following two demo code samples.
@@ -117,7 +117,7 @@ During the process, you can construct the API endpoint address and the API Key f
After logging in to the AWS Bedrock Knowledge Base backend and getting the ID of the created Knowledge Base, you can use this parameter to connect to the Dify platform in the subsequent steps.
-
Get the AWS Bedrock Knowledge Base ID
+
Get the AWS Bedrock Knowledge Base ID
### 4. Associate the External Knowledge API
@@ -129,13 +129,13 @@ Follow the prompts on the page and fill in the following information:
* API endpoint address, the connection address of the external knowledge base, which can be customized in [Step 2](how-to-connect-aws-bedrock.md#id-2.build-the-backend-api-service). Example: `api-endpoint/retrieval`;
* API Key, the external knowledge base connection key, which can be customized in [Step 2](how-to-connect-aws-bedrock.md#id-2.build-the-backend-api-service).
-
+
### 5. Connect to External Knowledge Base
Go to the **Knowledge** page, click **"Connect to an External Knowledge Base"** below the add knowledge base card to jump to the parameter configuration page.
-
+
Fill in the following parameters:
@@ -152,7 +152,7 @@ Fill in the AWS Bedrock knowledge base ID obtained in Step 3.
**Score threshold:** The similarity threshold for filtering text chunks. Only chunks whose score exceeds the set value are recalled. The default value is 0.5. The higher the value, the greater the similarity required between the text and the question, so fewer chunks are recalled and the results are more accurate.
-
+
After the settings are completed, you can establish a connection with the external knowledge base API.
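The threshold behavior can be sketched as a simple filter; the chunks and scores below are made-up examples:

```python
def recall(chunks: list[dict], score_threshold: float = 0.5) -> list[dict]:
    # Only chunks scoring above the threshold are recalled.
    return [c for c in chunks if c["score"] > score_threshold]

chunks = [
    {"text": "closely related chunk", "score": 0.82},
    {"text": "loosely related chunk", "score": 0.41},
]
print(recall(chunks))        # only the 0.82 chunk passes the 0.5 default
print(recall(chunks, 0.9))   # a stricter threshold recalls nothing
```

Raising the threshold trades recall volume for precision, as described above.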
@@ -160,8 +160,8 @@ After the settings are completed, you can establish a connection with the extern
After establishing a connection with an external knowledge base, developers can simulate likely user query keywords in **"Retrieval Test"** and preview the text chunks retrieved from the AWS Bedrock Knowledge Base.
-
Test the connection and retrieval of the external knowledge base
+
Test the connection and retrieval of the external knowledge base
If you are not satisfied with the retrieval results, you can try to modify the retrieval parameters or adjust the retrieval settings of AWS Bedrock Knowledge Base.
-
Adjust the text chunking parameters of AWS Bedrock Knowledge Base
+
Adjust the text chunking parameters of AWS Bedrock Knowledge Base
diff --git a/en/learn-more/use-cases/how-to-creat-dify-schedule.md b/en/learn-more/use-cases/how-to-creat-dify-schedule.md
index 01b5164e7..5f6959f01 100644
--- a/en/learn-more/use-cases/how-to-creat-dify-schedule.md
+++ b/en/learn-more/use-cases/how-to-creat-dify-schedule.md
@@ -59,7 +59,7 @@ Let's get your automated workflows up and running in just a few steps!
| WeChat Notification | Email Notification |
|:------------------:|:------------------:|
-|  |  |
+|  |  |
## ❓ FAQ
@@ -72,8 +72,8 @@ Let's get your automated workflows up and running in just a few steps!
> 💡 Note: Only workflow applications are supported!
-
-
+
+
### 🚫 Connection Issues?
diff --git a/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md b/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md
index acd7fbe7f..c608e0a13 100644
--- a/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md
+++ b/en/learn-more/use-cases/how-to-integrate-dify-chatbot-to-your-wix-website.md
@@ -15,10 +15,10 @@ Assuming you've already created a [Dify AI application](https://docs.dify.ai/gui
3. Click the "Publish" button in the upper right corner
4. On the publish page, choose the "Embed Into Site" option
- 
+ 
5. Select an appropriate style and copy the displayed iFrame code. For example:
- 
+ 
## 2. Embedding the iFrame Code Snippet in Your Wix Site
@@ -26,7 +26,7 @@ Assuming you've already created a [Dify AI application](https://docs.dify.ai/gui
2. Click the blue `+` (Add Elements) button on the left side of the page
3. Select **Embed Code**, then click **Embed HTML** to add an HTML iFrame element to the page
- 
+ 
4. In the `HTML Settings` box, select the `Code` option
5. Paste the iFrame code snippet you obtained from your Dify application
6. Click the **Update** button to save and preview your changes
@@ -37,7 +37,7 @@ Here's an example of an iFrame code snippet for embedding a Dify Chatbot:
```
-
+
> ⚠️ Ensure the address in the iFrame code begins with HTTPS. HTTP addresses will not display correctly.
diff --git a/en/workshop/basic/build-ai-image-generation-app.md b/en/workshop/basic/build-ai-image-generation-app.md
index 788f0589e..c6d373322 100644
--- a/en/workshop/basic/build-ai-image-generation-app.md
+++ b/en/workshop/basic/build-ai-image-generation-app.md
@@ -6,7 +6,7 @@ With the rise of image generation, many excellent image generation products have
In this article, you will learn how to develop an AI image generation app using Dify.
-
+
## You Will Learn
@@ -24,7 +24,7 @@ If you haven't registered yet, you will be asked to register before entering the
After entering the management page, click `copy` to copy the key.
-
+
Next, you need to fill in the key in [Dify - Tools - Stability](https://cloud.dify.ai/tools) by following these steps:
@@ -33,7 +33,7 @@ Next, you need to fill in the key in [Dify - Tools - Stability](https://cloud.di
* Select Stability
* Click `Authorize`
-
+
* Fill in the key and save
@@ -47,7 +47,7 @@ If the message credits are insufficient, you can customize other model providers
Click **Your Avatar - Settings - Model Provider**
-
+
If you haven't found a suitable model provider, the groq platform provides free call credits for LLMs like Llama.
@@ -57,17 +57,17 @@ Click **Create API Key**, set a desired name, and copy the API Key.
Back to **Dify - Model Providers**, select **groqcloud**, and click **Setup**.
-
+
Paste the API Key and save.
-
+
## 3. Build an Agent
Back to **Dify - Studio**, select **Create from Blank**.
-
+
In this experiment, we only need to understand the basic usage of Agent.
@@ -79,21 +79,21 @@ An Agent is an AI system that simulates human behavior and capabilities. It inte
Select **Agent**, fill in the name.
-
+
Next, you will enter the Agent orchestration interface as shown below.
-
+
Select the LLM. Here we use Llama-3.1-70B provided by groq as an example:
-
+
Select Stability in **Tools**:
-
+
-
+
### Write Prompts
@@ -113,17 +113,17 @@ Each time the user inputs a command, the Agent will know this system-level instr
For example: Draw a girl holding an open book.
-
+
### Don't want to write prompts? Of course you can!
Click **Generate** in the upper right corner of Instructions.
-
+
Enter your requirements in **Instructions** and click **Generate**. The panel on the right will show the AI-generated prompts.
-
+
However, to develop a good understanding of prompts, we should not rely on this feature in the early stages.
@@ -131,7 +131,7 @@ However, to develop a good understanding of prompts, we should not rely on this
Click the publish button in the upper right corner, and after publishing, select **Run App** to get a web page for an online running Agent.
-
+
Copy the URL of this web page to share with other friends.
@@ -139,7 +139,7 @@ Copy the URL of this web page to share with other friends.
We can add style instructions in the user's input command, for example: Anime style, draw a girl holding an open book.
-
+
But if we want to set the default style to anime, we can add it to the system prompt, because, as we learned earlier, the system prompt is known each time a user command is executed and has a higher priority.
@@ -165,10 +165,10 @@ If the user requests content unrelated to drawing, reply: "Sorry, I don't unders
For example, let's ask: What's for dinner tonight?
-
+
In some more formal business scenarios, we can call a sensitive word library to refuse user requests.
Add the keyword "dinner" in **Add Feature - Content Moderation**. When the user inputs the keyword, the Agent app outputs "Sorry, I don't understand what you're saying."
-
+
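The moderation behavior can be sketched like this; the keyword set and refusal text mirror the example above:

```python
SENSITIVE_WORDS = {"dinner"}
REFUSAL = "Sorry, I don't understand what you're saying."

def moderate(user_input: str):
    # Return the canned refusal if a flagged keyword appears, else None
    # so the input passes through to the Agent.
    if any(word in user_input.lower() for word in SENSITIVE_WORDS):
        return REFUSAL
    return None

print(moderate("What's for dinner tonight?"))
```

In production, the keyword set would come from a maintained sensitive-word library rather than a hard-coded constant.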
diff --git a/en/workshop/intermediate/article-reader.md b/en/workshop/intermediate/article-reader.md
index fa561da56..e1faa1334 100644
--- a/en/workshop/intermediate/article-reader.md
+++ b/en/workshop/intermediate/article-reader.md
@@ -54,7 +54,7 @@ Please choose the appropriate file upload method according to your business scen
Thus, Dify introduced the **doc extractor**, which can extract text from the file variable and output it as a text variable.
-
+
### **LLM**
@@ -123,11 +123,11 @@ To handle multiple uploaded files, an iterative node is needed.
The iterative node is similar to the while loop in many programming languages, except that Dify imposes no exit condition and the **input variable can only be of type `array` (list)**: Dify executes the sub-workflow for every item in the list until the list is exhausted.
-
+
Therefore, you need to adjust the file variable in the start node to an `array` type, i.e., a file list.
-
+
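The iterative node's semantics can be sketched as a plain Python loop over the input array; the extraction step is a stand-in for the nodes run inside the iteration:

```python
def extract_text(file_name: str) -> str:
    # Stand-in for the doc-extractor + LLM steps run on each item.
    return f"summary of {file_name}"

file_list = ["report.docx", "notes.txt"]
# One pass per item, no exit condition: every element is processed.
results = [extract_text(f) for f in file_list]
print(results)
```

Each item produces one output, and the node collects all of them into a result list.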
## **Question 2: Handling Specific Files from a File List**
@@ -137,4 +137,4 @@ For example, limit the analysis to only document-type files and sort the files t
Before the iterative node, add a list operation node, adjust the **filter condition** and **order by**, then change the input of the iterative node to the output of the list operation node.
-
+
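A sketch of what the list operation does before iteration, assuming an illustrative set of document extensions and name-based ordering:

```python
# Illustrative filter condition: keep only document-type files.
DOC_EXTENSIONS = {".docx", ".pdf", ".txt"}

def list_operation(files: list[str]) -> list[str]:
    docs = [f for f in files if any(f.endswith(ext) for ext in DOC_EXTENSIONS)]
    return sorted(docs)  # "order by" file name, ascending

files = ["b.pdf", "picture.png", "a.docx"]
print(list_operation(files))
```

The filtered, ordered list is then what the iterative node receives as its input.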
diff --git a/en/workshop/intermediate/customer-service-bot.md b/en/workshop/intermediate/customer-service-bot.md
index e4da8f4c4..99315efe1 100644
--- a/en/workshop/intermediate/customer-service-bot.md
+++ b/en/workshop/intermediate/customer-service-bot.md
@@ -26,13 +26,13 @@ Before starting the experiment, remember that the core of the knowledge base is
In Dify, select **Create from Blank - Chatflow.**
-
+
#### Add a Model Provider
This experiment involves using embedding models. Currently, supported embedding model providers include OpenAI and Cohere. In Dify's model providers, those with the `TEXT EMBEDDING` label are supported. Ensure you have added at least one and have sufficient balance.
-
+
> **What is embedding?**
>
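As a rough illustration of what an embedding model provides, the toy "embedding" below is just a letter-frequency vector, not a real model, but it shows why similar texts end up with similar vectors:

```python
import math
from collections import Counter

def embed(text: str) -> list[float]:
    # Toy embedding: letter frequencies over the alphabet.
    counts = Counter(text.lower())
    return [counts.get(ch, 0) for ch in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

sim_close = cosine(embed("refund policy"), embed("policy on refunds"))
sim_far = cosine(embed("refund policy"), embed("zzz qqq"))
print(sim_close, sim_far)
```

A real embedding model captures semantics rather than spelling, but the retrieval principle is the same: nearby vectors mean related texts.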
We will start with uploading a local document as an example.
After uploading the document, you will enter the following page:
-
+
You can see a segmentation preview on the right. The default selection is automatic segmentation and cleaning. Dify will automatically divide the article into many paragraphs based on the content. You can also set other segmentation rules in the custom settings.
@@ -80,11 +80,11 @@ Dify provides three retrieval functions: vector retrieval, full-text retrieval,
In hybrid retrieval, you can set weights or use a reranking model. When setting weights, you can set whether the retrieval should focus more on semantics or keywords. For example, in the image below, semantics account for 70% of the weight, and keywords account for 30%.
-
+
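The weighted blend can be sketched in one line, using the 70% / 30% split from the example above:

```python
SEMANTIC_WEIGHT, KEYWORD_WEIGHT = 0.7, 0.3

def hybrid_score(semantic: float, keyword: float) -> float:
    # A chunk's final score is a weighted blend of both retrieval scores.
    return SEMANTIC_WEIGHT * semantic + KEYWORD_WEIGHT * keyword

score = hybrid_score(semantic=0.9, keyword=0.2)
print(round(score, 2))
```

Shifting the weights toward semantics or keywords changes which chunks rank highest without re-running either retrieval pass.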
Clicking **Save and Process** will process the document. After processing, the document can be used in the application.
-
+
#### Syncing from a Website
@@ -92,7 +92,7 @@ In many cases, we need to build a smart customer service bot based on help docum
Currently, Dify supports processing up to 50 pages. Please pay attention to the quantity limit. If exceeded, you can create a new knowledge base.
-
+
#### Adjusting Knowledge Base Content
@@ -127,7 +127,7 @@ Here is a reference classification:
* User requests explanation of technical terms
* User asks about joining the community
-
+
#### Direct Reply Node
@@ -161,7 +161,7 @@ Context: You need to use the output of the knowledge retrieval node as the conte
System prompt: Based on \{{context\}}, answer \{{user question\}}
-
+
You can use `/` or `{` to reference variables in the prompt-writing area. Variables starting with `sys.` are system variables; please refer to the help documentation for details.
@@ -171,7 +171,7 @@ In addition, you can enable LLM memory to make the user's conversation experienc
In the knowledge base function, you can connect external knowledge bases through external knowledge base APIs, such as the AWS Bedrock knowledge base.
-
+
For best practices on AWS Bedrock knowledge bases, please read: [how-to-connect-aws-bedrock.md](../../learn-more/use-cases/how-to-connect-aws-bedrock.md "mention")
@@ -179,7 +179,7 @@ For best practices on AWS Bedrock knowledge bases, please read: [how-to-connect-
In both the community edition and SaaS version of Dify, you can add, delete, and query the status of knowledge bases through the knowledge base API.
-
+
In the instance with the knowledge base deployed, go to **Knowledge Base -> API** and create an API key. Please keep the API key safe.
diff --git a/en/workshop/intermediate/twitter-chatflow.md b/en/workshop/intermediate/twitter-chatflow.md
index f9c4b0664..8e26d23a6 100644
--- a/en/workshop/intermediate/twitter-chatflow.md
+++ b/en/workshop/intermediate/twitter-chatflow.md
@@ -59,7 +59,7 @@ docker compose up -d
Configure Model Provider in account setting:
-
+
## Create a chatflow
@@ -67,15 +67,15 @@ Now, let's get started on the chatflow.
Click on `Create from Blank` to start:
-
+
The initialized chatflow should look like this:
-
+
## Add nodes to chatflow
-
The final chatflow looks like this
+
The final chatflow looks like this
### Start node
@@ -83,7 +83,7 @@ In start node, we can add some system variables at the beginning of a chat. In t
Click on Start node and add a new variable:
-
+
### Code node
@@ -100,27 +100,27 @@ def main(id: str) -> dict:
Add a code node, select Python, and set the input and output variable names:
-
+
### HTTP request node
Based on the [Crawlbase docs](https://crawlbase.com/docs/crawling-api/scrapers/#twitter-profile), to scrape a Twitter user’s profile in http format, we need to complete HTTP request node in the following format:
-
+
For security reasons, it is best not to enter the token value directly as plain text, as this is not good practice. In the latest version of Dify, we can instead set token values in **`Environment Variables`**. Click `env` - `Add Variable` to set the token value, so the plain text will not appear in the node.
Check [https://crawlbase.com/dashboard/account/docs](https://crawlbase.com/dashboard/account/docs) for your Crawlbase API Key.
-
+
By typing `/`, you can easily insert the API Key as a variable.
-
+
Tap the start button of this node to check whether it works correctly:
-
+
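Outside Dify, the same request can be sketched in Python to sanity-check your token. Per the linked Crawlbase docs, the Crawling API takes `token`, `scraper=twitter-profile`, and the target `url` as query parameters; the token value below is a placeholder:

```python
from urllib.parse import urlencode

def crawlbase_url(profile_url: str, token: str = "YOUR_TOKEN") -> str:
    """Build the Crawling API request URL for the twitter-profile scraper."""
    query = urlencode({
        "token": token,                 # placeholder: keep the real value in Dify env vars
        "scraper": "twitter-profile",
        "url": profile_url,
    })
    return f"https://api.crawlbase.com/?{query}"

# Fetch it with any HTTP client, e.g.:
# import urllib.request, json
# data = json.load(urllib.request.urlopen(
#     crawlbase_url("https://twitter.com/elonmusk", token="...")))
```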
### LLM node
@@ -130,17 +130,17 @@ The value `context` should be `body` from HTTP Request node.
The following is a sample system prompt.
-
+
## Test run
Click `Preview` to start a test run and enter a Twitter user ID in `id`.
-
+
For example, I want to analyze Elon Musk's tweets and write a tweet about global warming in his tone.
-
+
Does this sound like Elon? lol
diff --git a/jp/community/docs-contribution.md b/jp/community/docs-contribution.md
index 0b06506e1..2e01eb44f 100644
--- a/jp/community/docs-contribution.md
+++ b/jp/community/docs-contribution.md
@@ -13,7 +13,7 @@ Dify のヘルプドキュメントは、[オープンソースプロジェク
If you find errors while reading a document or wish to modify part of it, click the **"Edit on GitHub"** button in the table of contents on the right side of the document page. This lets you edit the file using GitHub's online editor. Then create a pull request with a brief description of your changes, using the title format `Fix: Update xxx`. After receiving your request, we will review it and merge the changes if there are no issues.
-
+
Alternatively, you can paste the document link on the [Issues page](https://github.com/langgenius/dify-docs/issues) with a brief description of what needs to be modified. We will respond promptly after receiving your feedback.
diff --git a/jp/development/models-integration/gpustack.md b/jp/development/models-integration/gpustack.md
index 78c079f15..78b0c827b 100644
--- a/jp/development/models-integration/gpustack.md
+++ b/jp/development/models-integration/gpustack.md
@@ -36,7 +36,7 @@ GPUStackにホストされたLLMを使用する方法の例です:
3. Click `Save` to deploy the model.
-
+
## Create an API Key
@@ -60,6 +60,6 @@ GPUStackにホストされたLLMを使用する方法の例です:
Click `Save` to use the model in the application.
-
+
For more information about GPUStack, please refer to the [Github Repo](https://github.com/gpustack/gpustack).
\ No newline at end of file
diff --git a/jp/development/models-integration/hugging-face.md b/jp/development/models-integration/hugging-face.md
index b3e778547..396e0faec 100644
--- a/jp/development/models-integration/hugging-face.md
+++ b/jp/development/models-integration/hugging-face.md
@@ -11,7 +11,7 @@ Difyはテキスト生成(Text-Generation)と埋め込み(Embeddings)を
2. Set your Hugging Face API key ([obtain it here](https://huggingface.co/settings/tokens)).
3. Visit the [Hugging Face model list page](https://huggingface.co/models) and select the corresponding model type.
-
+
Dify can connect to models on Hugging Face in the following two ways:
@@ -24,17 +24,17 @@ DifyはHugging Face上のモデルを次の2つの方法で接続できます:
Only models that have a Hosted Inference API section on the right side of the model details page support the Hosted Inference API. It appears as shown below:
-
+
You can get the model name from the model details page.
-
+
#### 2 Use the connected model in Dify
In `Settings > Model Provider > Hugging Face > Model Type`, select Hosted Inference API as the endpoint type and configure it as shown below:
-
+
The API token is the API key you set at the beginning of this article. For the model name, enter the name obtained in the previous step.
@@ -44,26 +44,26 @@ APIトークンは記事の冒頭で設定したAPIキーです。モデル名
Only models with an Inference Endpoints option under the `Deploy` button on the right side of the model details page support Inference Endpoints. It appears as shown below:
-
+
#### 2 Deploy the model
Click the model's Deploy button and select the Inference Endpoint option. If you have not registered a credit card before, you will need to do so; follow the prompts. After registering your card, the following screen appears. Adjust the configuration as needed, then click the Create Endpoint button at the bottom left to create the Inference Endpoint.
-
+
Once the model is deployed, its endpoint URL is displayed.
-
+
#### 3 Use the connected model in Dify
In `Settings > Model Provider > Hugging Face > Model Type`, select Inference Endpoints as the endpoint type and configure it as shown below:
-
+
The API token is the API key you set at the beginning of this article. `The name of a text-generation model can be chosen freely, but the name of an embedding model must match the name on Hugging Face.` For the endpoint URL, enter the endpoint URL of the model deployed in the previous step.
-
+
> Note: For embeddings, fill in the "username / organization name" with your "[username](https://huggingface.co/settings/account)" or "[organization name](https://ui.endpoints.huggingface.co/)", depending on how the [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/guides/access) were deployed on Hugging Face.
diff --git a/jp/development/models-integration/litellm.md b/jp/development/models-integration/litellm.md
index fe434ae6a..41177448e 100644
--- a/jp/development/models-integration/litellm.md
+++ b/jp/development/models-integration/litellm.md
@@ -51,7 +51,7 @@ docker run \
In `Settings > Model Provider > OpenAI-API-compatible`, enter the following:
-
+
- Model Name: `gpt-4`
- Base URL: `http://localhost:4000`
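To verify the proxy outside Dify, you can POST an OpenAI-format chat request to the same base URL. This is a minimal stdlib sketch under the assumptions above (model name `gpt-4`, port 4000); the API key is a placeholder:

```python
import json
from urllib.request import Request, urlopen

def chat_request(prompt: str, base_url: str = "http://localhost:4000") -> Request:
    """Build an OpenAI-compatible chat-completions request for the LiteLLM proxy."""
    body = {"model": "gpt-4", "messages": [{"role": "user", "content": prompt}]}
    return Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer sk-anything"},  # placeholder key
        method="POST",
    )

# With the proxy running:
# print(json.load(urlopen(chat_request("Hello"))))
```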
diff --git a/jp/development/models-integration/replicate.md b/jp/development/models-integration/replicate.md
index fcaf186d3..a07615e0a 100644
--- a/jp/development/models-integration/replicate.md
+++ b/jp/development/models-integration/replicate.md
@@ -9,8 +9,8 @@ DifyはReplicate上の[言語モデル](https://replicate.com/collections/langua
3. Select a model from the [language models](https://replicate.com/collections/language-models) or [embedding models](https://replicate.com/collections/embedding-models).
4. Add the model in Dify under `Settings > Model Provider > Replicate`.
-
+
The API key is the one you set in step 2. The model name and model version can be found on the model's detail page:
-
+
diff --git a/jp/development/models-integration/xinference.md b/jp/development/models-integration/xinference.md
index 932f03da0..4b73f976c 100644
--- a/jp/development/models-integration/xinference.md
+++ b/jp/development/models-integration/xinference.md
@@ -29,7 +29,7 @@ Xinferenceのデプロイ方法は、[ローカルデプロイ](https://github.c
Visit `http://127.0.0.1:9997`, then select the model to deploy and its specifications, as shown below:
-
+
Model compatibility varies across hardware platforms, so check the [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to confirm that the model you want to create is supported on your current hardware platform.
4. Obtain the model UID
diff --git a/jp/features/datasets/maintain-dataset-via-api.md b/jp/features/datasets/maintain-dataset-via-api.md
index 5d190dd3a..abf9fdebf 100644
--- a/jp/features/datasets/maintain-dataset-via-api.md
+++ b/jp/features/datasets/maintain-dataset-via-api.md
@@ -12,7 +12,7 @@
Go to the Knowledge page and switch to the API page in the left navigation. On this page, you can view the API documentation provided by Dify and manage the credentials used to access the Knowledge API.
-
{% hint style="warning" %}
Content blocks other than "`Variables`" cannot be inserted more than once. The content blocks that can be inserted vary with the prompt template structure across applications and models; `Conversation History` and `Query Content` are only available in text-completion models of conversational applications.
@@ -149,7 +149,7 @@ And answer according to the language English.
In the debug preview interface, once the user has exchanged messages with the AI, hover the mouse pointer over any user message to reveal the "Log" button in its upper-left corner; click it to view the prompt log.
-
Debug log entry point
+
Debug log entry point
In the log, we can clearly see:
@@ -157,7 +157,7 @@ And answer according to the language English.
* The text segments referenced by the current conversation
* The conversation history
-
Viewing the prompt log from the debug preview interface
+
Viewing the prompt log from the debug preview interface
From the log, we can see the complete prompt that the system assembles and finally sends to the LLM, and continuously improve the prompt input based on the debugging results.
@@ -165,4 +165,4 @@ And answer according to the language English.
On the main application-building screen, click "Logs & Annotations" in the left navigation bar to view the full logs. On the Logs & Annotations screen, click any conversation log entry; in the dialog that opens on the right, hover over a message in the same way to open the "Log" button and view the prompt log.
-