From 8f5af7db6591744fb0094892182f2924e811eb75 Mon Sep 17 00:00:00 2001
From: linmoumou
Date: Thu, 14 May 2026 09:56:31 +0800
Subject: [PATCH] fix: correct spelling and grammar across docs

- registry/claim-my-node.mdx: 'choosed' -> 'chosen'
- tutorials/video/wan/wan2-2-animate.mdx: 'Bleow' -> 'Below', 'less then' -> 'less than', grammar fixes
- zh/tutorials/image/qwen/qwen-image.mdx: 'proccessing' -> 'processing', 'detph' -> 'depth'
---
 registry/claim-my-node.mdx             | 2 +-
 tutorials/video/wan/wan2-2-animate.mdx | 4 ++--
 zh/tutorials/image/qwen/qwen-image.mdx | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/registry/claim-my-node.mdx b/registry/claim-my-node.mdx
index 96f9e422e..594f23f57 100644
--- a/registry/claim-my-node.mdx
+++ b/registry/claim-my-node.mdx
@@ -38,7 +38,7 @@ To claim your nodes:
     alt="Choose a Publisher - Dark Mode"
   />
 
-To claim the node under the choosed publisher, follow these steps:
+To claim the node under the chosen publisher, follow these steps:
 
 1. **Review Node Information:**
    - Check the node details, including the name, repository link, and publisher status as shown on the screen.
diff --git a/tutorials/video/wan/wan2-2-animate.mdx b/tutorials/video/wan/wan2-2-animate.mdx
index ac2abcddf..ca94cc4a2 100644
--- a/tutorials/video/wan/wan2-2-animate.mdx
+++ b/tutorials/video/wan/wan2-2-animate.mdx
@@ -126,10 +126,10 @@ The Wan2.2 animate has two modes: Mix and move
 3. Upload the reference image, the character is this image will be the target character
 4. You can use the videos we provided as input videos for the first time, the **DWPose Estimator** node in [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux) will preprocess the input video to pose and face control videos
 5. The `Points Editor` is from [KJNodes](https://github.com/kijai/ComfyUI-KJNodes/), by default this node will not load the first frame from the input video, you need to run the workflow once or manually upload the first frame
-   - Bleow the `Points Editor` node, we have added note about how this node work, and how to edit it please refer to it
+   - Below the `Points Editor` node, we have added a note about how this node works, and how to edit it. Please refer to it
 6. For the "Video Extend" group, it's in order to extend to the output video length
    - Each `Video Extend` will extend another 77 frames(Around 4.8125 seconds)
-   - If your input video is less then 5s, you might not need it
+   - If your input video is less than 5s, you might not need it
    - If you want to extend longer, you need to copy and paste multiple times, you need to link the `batch_images` from last Video Extend to next one, and also the `video_frame_offset` from last Video Extend to next one
 7. Click the `Run` button or use the shortcut `Ctrl(cmd) + Enter` to execute video generation
 
diff --git a/zh/tutorials/image/qwen/qwen-image.mdx b/zh/tutorials/image/qwen/qwen-image.mdx
index fc0ecc275..aae9d0e9e 100644
--- a/zh/tutorials/image/qwen/qwen-image.mdx
+++ b/zh/tutorials/image/qwen/qwen-image.mdx
@@ -270,7 +270,7 @@ Comfy Org rehost address: [Qwen-Image-DiffSynth-ControlNets/model_patches](http
 4. If needed, you can adjust the `strength` of the `QwenImageDiffsynthControlnet` node to control the strength of the line-art control
 5. Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow
 
-> To use qwen_image_depth_diffsynth_controlnet.safetensors, you need to preprocess the image into a detph map and replace the `image proccessing` image; for this part, please refer to the InstantX approach in this document. The rest is similar to using the Canny model
+> To use qwen_image_depth_diffsynth_controlnet.safetensors, you need to preprocess the image into a depth map and replace the `image processing` image; for this part, please refer to the InstantX approach in this document. The rest is similar to using the Canny model
 
 **Inpaint Model ControlNet Usage**
 ![Inpaint workflow](/images/tutorial/image/qwen/image_qwen_image_controlnet_patch-inpaint.jpg)