
Commit a56df29 (parent 21f68ff): Update n8n tutorial

1 file changed: tutorials/integrations/n8n-integration.mdx (56 additions, 40 deletions)
@@ -11,21 +11,20 @@ Learn how to integrate Runpod Serverless with n8n, a workflow automation tool. B
 
 In this tutorial, you'll learn how to:
 
-* Deploy a vLLM worker on Runpod Serverless.
-* Configure your vLLM endpoint for OpenAI compatibility.
-* Connect n8n to your Runpod endpoint.
-* Test your integration with a simple workflow.
+* Deploy a vLLM worker serving the `Qwen/qwen3-32b-awq` model.
+* Configure your environment variables for n8n compatibility.
+* Create a simple n8n workflow to test your integration.
+* Connect your workflow to your Runpod endpoint.
 
 ## Requirements
 
 * You've [created a Runpod account](/get-started/manage-accounts).
 * You've created a [Runpod API key](/get-started/api-keys).
 * You have [n8n](https://n8n.io/) installed and running.
-* (Optional) For gated models, you've created a [Hugging Face access token](https://huggingface.co/docs/hub/en/security-tokens).
 
 ## Step 1: Deploy a vLLM worker on Runpod
 
-First, you'll deploy a vLLM worker to serve your language model.
+First, you'll deploy a vLLM worker to serve the `Qwen/qwen3-32b-awq` model.
 
 <Steps>
 <Step title="Create a new vLLM endpoint">
@@ -42,13 +41,19 @@ First, you'll deploy a vLLM worker to serve your language model.
 
 In the deployment modal:
 
-* Enter the model name or Hugging Face model URL (e.g., `openchat/openchat-3.5-0106`).
-* Expand the **Advanced** section:
+* In the **Model** field, enter `Qwen/qwen3-32b-awq`.
+* Expand the **Advanced** section to configure your vLLM environment variables:
   * Set **Max Model Length** to `8192` (or an appropriate context length for your model).
-  * You may need to enable tool calling and set an appropriate reasoning parser depending on your model.
+  * Near the bottom of the page, check **Enable Auto Tool Choice**.
+  * Set **Reasoning Parser** to `Qwen3`.
+  * Set **Tool Call Parser** to `Hermes`.
 * Click **Next**.
 * Click **Create Endpoint**.
 
+<Warning>
+When using a different model, you may need to adjust your vLLM environment variables to ensure your model returns responses in the format that n8n expects.
+</Warning>
+
 Your endpoint will now begin initializing. This may take several minutes while Runpod provisions resources and downloads your model. Wait until the status shows as **Running**.
 </Step>
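For reference, the Advanced settings in this hunk map onto vLLM's own serving flags. The following is a rough stand-alone equivalent, not part of the tutorial's console flow; it assumes a recent vLLM release with Qwen3 reasoning-parser support, and mirrors the values chosen above:

```shell
# Approximate vLLM launch matching the endpoint's Advanced settings (assumption,
# not the exact command the Runpod worker runs internally).
vllm serve Qwen/qwen3-32b-awq \
  --max-model-len 8192 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --reasoning-parser qwen3
```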
@@ -57,19 +62,38 @@ First, you'll deploy a vLLM worker to serve your language model.
 </Step>
 </Steps>
 
-## Step 2: Connect n8n to your Runpod endpoint
+## Step 2: Create an n8n workflow
 
-Now you'll configure n8n to use your Runpod endpoint as an OpenAI-compatible API.
+Next, you'll create a simple n8n workflow to test your integration.
 
 <Steps>
-<Step title="Add an OpenAI Chat Model node">
-In your n8n workflow, add a new **OpenAI Chat Model** node to your canvas. Double-click the node to configure it.
+<Step title="Create a new workflow">
+Open n8n and navigate to your workspace, then click **Create Workflow**.
+</Step>
+
+<Step title="Add a chat message trigger">
+Click **Add first step** and select **On chat message**. Click **Test chat** to confirm.
+</Step>
+
+<Step title="Add an AI Agent node">
+Click the **+** button, search for **AI Agent**, and select it. Click **Execute step** to confirm.
+</Step>
+
+<Step title="Add a Chat Model node">
+Click the **+** button labeled **Chat Model**, then search for **OpenAI Chat Model** and select it.
 </Step>
 
 <Step title="Create a new credential">
 Click the dropdown under **Credential to connect with** and select **Create new credential**.
 </Step>
 
+</Steps>
+
+## Step 3: Configure the OpenAI Chat Model node
+
+Now you'll configure the n8n OpenAI Chat Model node to use the model running on your Runpod endpoint.
+
+<Steps>
 <Step title="Add your Runpod API key">
 Under **API Key**, add your Runpod API key. You can create an API key in the [Runpod console](/get-started/api-keys).
 </Step>
@@ -81,47 +105,39 @@ Now you'll configure n8n to use your Runpod endpoint as an OpenAI-compatible API
 https://api.runpod.ai/v2/ENDPOINT_ID/openai/v1
 ```
 
-Replace `ENDPOINT_ID` with your endpoint ID from Step 1.
+Replace `ENDPOINT_ID` with your vLLM endpoint ID from Step 1.
 </Step>
 
-<Step title="Save the credential">
-Click **Save**. n8n will automatically test your endpoint connection. If successful, you can start using the node in your workflow.
-</Step>
-</Steps>
+<Step title="Test the connection">
+Click **Save**. n8n will automatically test your endpoint connection. It may take a few minutes for your endpoint to scale up a worker to process the request. You can monitor the request using the **Workers** and **Requests** tabs for your vLLM endpoint in the Runpod console.
 
-## Step 3: Test your integration
-
-Create a simple workflow to test your integration.
-
-{/* TODO ... */}
-
-<Steps>
-<Step title="Create a test workflow">
-Add a **Manual Trigger** node and connect it to your **OpenAI Chat Model** node.
+If you see the message "Connection tested successfully," your endpoint is reachable, but that doesn't guarantee it's fully compatible with n8n; you'll verify that in the next step.
 </Step>
 
-<Step title="Configure the chat model">
-In the **OpenAI Chat Model** node, add a test message like "Hello, what can you help me with?"
-</Step>
+<Step title="Select the Qwen3 model">
+Press Escape to return to the OpenAI Chat Model configuration modal.
+
+Under **Model**, select `qwen/qwen3-32b-awq`, then press Escape to return to the workflow canvas.
 
-<Step title="Execute the workflow">
-Click **Execute Workflow** in n8n. You should see a response from your model running on Runpod.
 </Step>
 
-<Step title="Monitor requests">
-Monitor requests from your n8n workflow in the endpoint details page of the Runpod console.
+<Step title="Type a test message">
+Type a test message into the chat box, like "Hello, how are you?", and press Enter.
+
+If everything is working correctly, you should see each node in your workflow turn green to indicate successful execution, and a response from the model in the chat box.
+
+<Tip>
+Make sure to **Save** your workflow before closing it, as n8n may not save changes to your model node configuration automatically.
+</Tip>
 </Step>
 </Steps>
 
-<Note>
-
-The n8n chat feature may have trouble parsing output from vLLM depending on your model. If you experience issues, try adjusting your model's output format or testing with a different model.
-
-</Note>
 
 ## Next steps
 
-Now that you've integrated Runpod with n8n, you can:
+Congratulations! You've successfully used Runpod to power an AI agent on n8n.
+
+Now that you've integrated with n8n, you can:
 
 * Build complex AI-powered workflows using your Runpod endpoints.
 * Explore other [integration options](/integrations/overview) with Runpod.
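The base URL and credential configured in this diff can also be exercised outside n8n, which is useful for isolating endpoint problems from workflow problems. Below is a minimal sketch using only the Python standard library; `ENDPOINT_ID` and `YOUR_API_KEY` are placeholders you must replace, and `qwen/qwen3-32b-awq` is the model the tutorial deploys:

```python
import json
import urllib.request

ENDPOINT_ID = "ENDPOINT_ID"      # placeholder: your vLLM endpoint ID
RUNPOD_API_KEY = "YOUR_API_KEY"  # placeholder: your Runpod API key


def build_chat_request(endpoint_id, api_key, message):
    """Build an OpenAI-compatible chat completion request for a Runpod vLLM endpoint."""
    url = f"https://api.runpod.ai/v2/{endpoint_id}/openai/v1/chat/completions"
    body = json.dumps({
        "model": "qwen/qwen3-32b-awq",  # model served by the endpoint in this tutorial
        "messages": [{"role": "user", "content": message}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


req = build_chat_request(ENDPOINT_ID, RUNPOD_API_KEY, "Hello, how are you?")
print(req.full_url)
# To actually send the request (requires a live endpoint and a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the direct request succeeds but the n8n connection test fails, the problem is on the n8n side (credential or base URL) rather than the endpoint itself.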
