1 parent a56df29 commit 562a8f3
tutorials/integrations/n8n-integration.mdx
@@ -43,7 +43,7 @@ First, you'll deploy a vLLM worker to serve the `Qwen/qwen3-32b-awq` model.

* In the **Model** field, enter `Qwen/qwen3-32b-awq`.
* Expand the **Advanced** section to configure your vLLM environment variables:
- * Set **Max Model Length** to `8192` (or an appropriate context length for your model).
+ * Set **Max Model Length** to `8192`.
* Near the bottom of the page: Check **Enable Auto Tool Choice**.
* Set **Reasoning Parser** to `Qwen3`.
* Set **Tool Call Parser** to `Hermes`.
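
The **Enable Auto Tool Choice** and **Tool Call Parser** settings are what let the deployed endpoint return OpenAI-style tool calls, which n8n's agent nodes rely on. As a quick post-deployment check, here is a minimal sketch (not part of the tutorial) that sends a tool-calling request through RunPod's OpenAI-compatible route; the endpoint ID, API key, and `get_weather` tool are placeholders, and the `/v2/<ENDPOINT_ID>/openai/v1` URL format is an assumption about the RunPod vLLM worker.

```python
# Minimal sanity check for the deployed worker (a sketch, not from the tutorial).
# Assumes RunPod exposes an OpenAI-compatible route at /v2/<ENDPOINT_ID>/openai/v1.
# The get_weather tool is hypothetical; replace the placeholders with your own values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/<ENDPOINT_ID>/openai/v1",
    api_key="<RUNPOD_API_KEY>",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, used only for this check
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="Qwen/qwen3-32b-awq",
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
    tool_choice="auto",  # handled by the worker because Enable Auto Tool Choice is set
)

# With the Hermes tool call parser configured, tool calls come back as structured
# objects rather than raw text in the message content.
print(response.choices[0].message.tool_calls)
```

If the response contains a populated `tool_calls` list, the parser settings above are working and the endpoint is ready to be wired into n8n.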