Run local and API LLMs. Features Gemini 2 image generation, DeepSeek R1, Qwen2.5-VL, QwQ-32B; supports Ollama, LlamaCPP, LM Studio, Koboldcpp, TextGen, and Transformers, or APIs from Anthropic, Groq, OpenAI, Google Gemini, Mistral, and xAI. Create your own character assistants (SystemPrompts) with custom presets.

if-ai/ComfyUI-IF_LLM



ComfyUI-IF_AI_LLM

################# ATTENTION ####################

It might conflict with IF_AI_tools, so if you have IF_AI_tools installed, please remove it before installing IF_LLM. I am working on adding these tools to IF_AI_tools so you will only need one or the other.

###############################################

Video

Video Thumbnail

A lighter version of ComfyUI-IF_AI_tools: a set of custom nodes to run local and API LLMs and LMMs. It supports Ollama, LlamaCPP, LM Studio, Koboldcpp, TextGen, and Transformers, or APIs from Anthropic, Groq, OpenAI, Google Gemini, Mistral, and xAI, and lets you create your own profiles (SystemPrompts) with custom presets and much more.


Install Ollama

You can technically use any LLM API you want, but for the best experience, install Ollama and set it up.

To install Ollama models, just open CMD or any terminal and type the run command followed by the model name, such as:

ollama run llama3.2-vision

If you want to use Omost:

ollama run impactframes/dolphin_llama3_omost

If you need a good small model:

ollama run llama3.2
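Once a model is pulled, nodes like these talk to the local Ollama server over its HTTP API (by default at `http://localhost:11434`). A minimal sketch of such a request, assuming a local Ollama install with llama3.2 pulled; `build_request` is a hypothetical helper for illustration, not part of the node pack:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a non-streaming generate request for a local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    # Only works when the Ollama server is running and the model is pulled.
    with urllib.request.urlopen(build_request("llama3.2", "Describe a cat.")) as resp:
        print(json.loads(resp.read())["response"])
```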

Optionally, set environment variables for any of your favourite LLM API keys: "XAI_API_KEY", "GOOGLE_API_KEY", "ANTHROPIC_API_KEY", "MISTRAL_API_KEY", "OPENAI_API_KEY", or "GROQ_API_KEY". Use those exact names, or the keys won't be picked up. You can also use a .env file to store your keys.
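For illustration, a minimal sketch of reading such keys from the environment in Python. The variable names are the ones listed above; the `get_api_key` helper is hypothetical, not part of the node pack:

```python
import os

# Environment variable names the README says the nodes look for;
# a missing key simply leaves that provider unavailable.
API_KEY_VARS = {
    "xai": "XAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "openai": "OPENAI_API_KEY",
    "groq": "GROQ_API_KEY",
}

def get_api_key(provider: str):
    """Return the API key for a provider, or None if it is not set."""
    var = API_KEY_VARS.get(provider.lower())
    return os.environ.get(var) if var else None
```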

Features

  • [NEW] xAI Grok Vision, Mistral, Google Gemini exp 114, Anthropic 3.5 Haiku, OpenAI o1 preview
  • [NEW] Wildcard System
  • [NEW] Local models: Koboldcpp, TextGen, LlamaCPP, LM Studio, Ollama
  • [NEW] Automatic prompt generation for Image Prompt Maker; runs jobs in batches automatically
  • [NEW] Image generation with IF_PROMPTImaGEN via DALL-E 3
  • [NEW] Endpoints: xAI, Transformers
  • [NEW] IF_profiles System Prompts with Reasoning/Reflection/Reward templates and custom presets
  • [NEW] Workflows such as GGUF and FluxRedux

  • Gemini, Groq, Mistral, OpenAI, Anthropic, Google, xAI, Transformers, Koboldcpp, TextGen, LlamaCPP, LMstudio, Ollama
  • Omost_tool the first tool
  • Vision models: Haiku / GPT-4o Mini / Gemini Flash / Qwen2-VL
  • [Ollama-Omost](https://ollama.com/impactframes/dolphin_llama3_omost) can be 2x to 3x faster than other Omost models. LLama3 and Phi3 IF_AI Prompt MKR models released:

ollama run impactframes/llama3_ifai_sd_prompt_mkr_q4km:latest

ollama run impactframes/ifai_promptmkr_dolphin_phi3:latest

https://huggingface.co/impactframes/llama3_if_ai_sdpromptmkr_q4km

https://huggingface.co/impactframes/ifai_promptmkr_dolphin_phi3_gguf

Installation

  1. Open the ComfyUI Manager, search for IF_LLM, and install.

Install ComfyUI-IF_LLM manually (hardest way)

  1. Navigate to your ComfyUI custom_nodes folder, type CMD in the address bar to open a command prompt, and run the following command to clone the repository:
       git clone https://github.com/if-ai/ComfyUI-IF_LLM.git

OR

  1. In the ComfyUI portable version, just double-click embedded_install.bat, or type CMD in the address bar of the newly created custom_nodes\ComfyUI-IF_LLM folder and run:

       H:\ComfyUI_windows_portable\python_embeded\python.exe -m pip install -r requirements.txt

    Replace H:\ with the drive letter where you have the ComfyUI_windows_portable directory.

  2. In a custom environment, activate the environment and move into the newly created ComfyUI-IF_LLM folder:

       cd ComfyUI-IF_LLM
       python -m pip install -r requirements.txt

    If you want to use AWQ to save VRAM and get up to 3x faster inference, you need to install triton and autoawq:

pip install triton
pip install --no-deps --no-build-isolation autoawq
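Since these are optional dependencies, it can help to check that they are importable before enabling AWQ models. Note that the pip package `autoawq` is imported as `awq`; `missing_packages` is a hypothetical helper for illustration:

```python
import importlib.util

# Optional acceleration dependencies; AWQ-quantized models need both.
# The pip package "autoawq" installs a module named "awq".
OPTIONAL_MODULES = ("triton", "awq")

def missing_packages(names=OPTIONAL_MODULES):
    """Return the modules from `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("missing optional packages:", ", ".join(missing))
    else:
        print("triton and autoawq are available")
```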

I also have precompiled wheels of FlashAttention 2, SageAttention, and triton for Windows 10, built for cu126, PyTorch 2.6.3, and Python 3.12+: https://huggingface.co/impactframes/ComfyUI_desktop_wheels_win_cp12_cu126/tree/main


Related Tools

  • IF_prompt_MKR: a similar tool available for Stable Diffusion WebUI

Videos

None yet

Example using normal Model

ancient Megastructure, small lone figure

Workflow Examples

You can try out these workflow examples directly in ComfyDeploy!

| Workflow | Try It |
| --- | --- |
| CD_FLUX_LoRA | Try CD_FLUX_LoRA |
| CD_HYVid_I2V_&_T2V_Native_IFLLM | Try CD_HYVid_I2V_&_T2V_Native_IFLLM |
| CD_HYVid_I2V_&_T2V_i2VLora_Native | Try CD_HYVid_I2V_&_T2V_i2VLora_Native |
| CD_HYVid_I2V_Lora_KjWrapper | Try CD_HYVid_I2V_Lora_KjWrapper |

TODO

  • IMPROVED PROFILES
  • OMNIGEN
  • QWENFLUX
  • VIDEOGEN
  • AUDIOGEN

Support

If you find this tool useful, please consider supporting my work.
