Merged
2 changes: 1 addition & 1 deletion registry/claim-my-node.mdx
@@ -38,7 +38,7 @@ To claim your nodes:
alt="Choose a Publisher - Dark Mode"
/>

To claim the node under the choosed publisher, follow these steps:
To claim the node under the chosen publisher, follow these steps:

1. **Review Node Information:**
- Check the node details, including the name, repository link, and publisher status as shown on the screen.
4 changes: 2 additions & 2 deletions tutorials/video/wan/wan2-2-animate.mdx
@@ -126,10 +126,10 @@ The Wan2.2 animate has two modes: Mix and move
3. Upload the reference image; the character in this image will be the target character
4. You can use the videos we provided as input videos for the first time; the **DWPose Estimator** node in [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux) will preprocess the input video into pose and face control videos
5. The `Points Editor` is from [KJNodes](https://github.com/kijai/ComfyUI-KJNodes/); by default this node will not load the first frame from the input video, so you need to run the workflow once or manually upload the first frame
- Bleow the `Points Editor` node, we have added note about how this node work, and how to edit it please refer to it
- Below the `Points Editor` node, we have added a note about how this node works, and how to edit it. Please refer to it
6. The "Video Extend" group is used to extend the output video length
- Each `Video Extend` will extend the video by another 77 frames (around 4.8125 seconds)
- If your input video is less then 5s, you might not need it
- If your input video is less than 5s, you might not need it
- To extend further, copy and paste the group multiple times, linking the `batch_images` and the `video_frame_offset` from each Video Extend to the next one
7. Click the `Run` button or use the shortcut `Ctrl(cmd) + Enter` to execute video generation
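The frame arithmetic behind chaining `Video Extend` groups in step 6 can be sketched in a few lines of Python. This is an illustration, not ComfyUI code: the 16 fps rate is inferred from the stated 77 frames ≈ 4.8125 seconds, and the offset-chaining model is an assumption based on the linking described above.

```python
FPS = 16                 # assumed: 77 frames / 4.8125 s = 16 fps
FRAMES_PER_EXTEND = 77   # frames added by one Video Extend group


def extension_seconds(num_extends: int) -> float:
    """Extra seconds of footage produced by chaining Video Extend groups."""
    return num_extends * FRAMES_PER_EXTEND / FPS


def chained_offsets(num_extends: int) -> list[int]:
    """video_frame_offset fed into each successive Video Extend group."""
    offsets, offset = [], 0
    for _ in range(num_extends):
        offsets.append(offset)
        offset += FRAMES_PER_EXTEND  # next group starts where this one ended
    return offsets


print(extension_seconds(1))  # 4.8125
print(chained_offsets(3))    # [0, 77, 154]
```

For example, an input clip of about 10 seconds would need two chained `Video Extend` groups to cover the gap beyond the base 5-second output.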

2 changes: 1 addition & 1 deletion zh/tutorials/image/qwen/qwen-image.mdx
@@ -270,7 +270,7 @@ Comfy Org rehost address: [Qwen-Image-DiffSynth-ControlNets/model_patches](http
4. If needed, you can adjust the `strength` value of the `QwenImageDiffsynthControlnet` node to control how strongly the line art constrains the output
5. Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow

> To use qwen_image_depth_diffsynth_controlnet.safetensors, you need to preprocess the image into a detph map, replacing the `image proccessing` image; for this part, refer to the InstantX method in this document. The rest is similar to using the Canny model
> To use qwen_image_depth_diffsynth_controlnet.safetensors, you need to preprocess the image into a depth map, replacing the `image processing` image; for this part, refer to the InstantX method in this document. The rest is similar to using the Canny model

**Inpaint Model ControlNet Usage Notes**
![Inpaint workflow](/images/tutorial/image/qwen/image_qwen_image_controlnet_patch-inpaint.jpg)