################# ATTENTION ####################
It might conflict with IF_AI_tools, so if you have it installed, please remove it before installing IF_LLM. I am working on adding these tools to IF_AI_tools so you only need one or the other.
###############################################
A lighter version of ComfyUI-IF_AI_tools: a set of custom nodes to run local and API LLMs and LMMs. It supports Ollama, LlamaCPP, LMstudio, Koboldcpp, TextGen, and Transformers locally, or Anthropic, Groq, OpenAI, Google Gemini, Mistral, and xAI via APIs, and lets you create your own profiles (system prompts) with custom presets, and much more.
You can technically use any LLM API you want, but for the best experience, install Ollama and set it up.
- Visit ollama.com for more information.
To install Ollama models, open CMD or any terminal and type the run command followed by the model name, for example:

`ollama run llama3.2-vision`

If you want to use Omost:

`ollama run impactframes/dolphin_llama3_omost`

If you need a good small model:

`ollama run llama3.2`
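Once a model is pulled, Ollama serves it over a local HTTP API (default port 11434), which is how front ends like this node pack typically talk to it. A minimal sketch, assuming a running `ollama serve` — `build_generate_request` and `generate` are illustrative helpers, not this extension's actual API:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Payload for Ollama's /api/generate; stream=False returns one JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Requires `ollama serve` running locally with the model already pulled.
    data = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs a running Ollama server):
# print(generate("llama3.2", "Describe an ancient megastructure in one sentence."))
```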
Optionally, set environment variables for any of your favourite LLM API keys: "XAI_API_KEY", "GOOGLE_API_KEY", "ANTHROPIC_API_KEY", "MISTRAL_API_KEY", "OPENAI_API_KEY" or "GROQ_API_KEY". Use those exact names, otherwise the keys won't be picked up. You can also use a .env file to store your keys.
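The lookup the nodes presumably perform can be sketched like this — environment variables first, then a .env fallback. `load_dotenv_file` and `resolve_api_key` are hypothetical helpers for illustration, not the extension's real functions:

```python
import os

# The exact variable names the README says are recognized.
API_KEY_NAMES = [
    "XAI_API_KEY", "GOOGLE_API_KEY", "ANTHROPIC_API_KEY",
    "MISTRAL_API_KEY", "OPENAI_API_KEY", "GROQ_API_KEY",
]

def load_dotenv_file(path: str = ".env") -> dict:
    # Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments skipped.
    values = {}
    if not os.path.exists(path):
        return values
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"')
    return values

def resolve_api_key(name: str, dotenv: dict = None) -> str:
    # A key set in the environment takes precedence over one in the .env file.
    if name not in API_KEY_NAMES:
        raise ValueError(f"Unknown key name: {name}")
    return os.environ.get(name) or (dotenv or {}).get(name)
```

So `export GROQ_API_KEY=...` (or the same line in a .env file) is all the setup a key needs.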
- [NEW] xAI Grok Vision, Mistral, Google Gemini exp 114, Anthropic 3.5 Haiku, OpenAI o1 preview
- [NEW] Wildcard System
- [NEW] Local models: Koboldcpp, TextGen, LlamaCPP, LMstudio, Ollama
- [NEW] Auto prompt generation for Image Prompt Maker; runs jobs on batches automatically
- [NEW] Image generation with IF_PROMPTImaGEN via Dalle3
- [NEW] Endpoints: xAI, Transformers
- [NEW] IF_profiles: system prompts with Reasoning/Reflection/Reward templates and custom presets
- [NEW] Workflows such as GGUF and FluxRedux
- Gemini, Groq, Mistral, OpenAI, Anthropic, Google, xAI, Transformers, Koboldcpp, TextGen, LlamaCPP, LMstudio, Ollama
- Omost_tool, the first tool
- Vision models: Haiku / GPT-4o Mini / Gemini Flash / Qwen2-VL
- [Ollama-Omost](https://ollama.com/impactframes/dolphin_llama3_omost) can be 2x to 3x faster than other Omost models
Llama3 and Phi3 IF_AI prompt maker models released:

`ollama run impactframes/llama3_ifai_sd_prompt_mkr_q4km:latest`

`ollama run impactframes/ifai_promptmkr_dolphin_phi3:latest`

- https://huggingface.co/impactframes/llama3_if_ai_sdpromptmkr_q4km
- https://huggingface.co/impactframes/ifai_promptmkr_dolphin_phi3_gguf
- Open the Manager, search for IF_LLM, and install.
- Navigate to your ComfyUI `custom_nodes` folder, type `CMD` in the address bar to open a command prompt, and run the following command to clone the repository: `git clone https://github.com/if-ai/ComfyUI-IF_LLM.git`
OR
- In the ComfyUI portable version, just double-click `embedded_install.bat`, or type `CMD` in the address bar of the newly created `custom_nodes\ComfyUI-IF_LLM` folder and type `H:\ComfyUI_windows_portable\python_embeded\python.exe -m pip install -r requirements.txt`, replacing `H:\` with the drive letter where you have the ComfyUI_windows_portable directory.
- In a custom environment, activate the environment, move to the newly created ComfyUI-IF_LLM folder, and run `cd ComfyUI-IF_LLM` then `python -m pip install -r requirements.txt`.
If you want to use AWQ to save VRAM and get up to 3x faster inference, you need to install triton and autoawq:

`pip install triton`

`pip install --no-deps --no-build-isolation autoawq`
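Since triton and autoawq are optional extras, a loader can probe for them before enabling the AWQ path instead of hard-failing on import. A minimal sketch — `awq_available` is a hypothetical helper, and the assumption that autoawq installs as the `awq` package is worth verifying against your environment:

```python
import importlib.util

def awq_available() -> bool:
    # AWQ inference needs both triton and autoawq importable; fall back to a
    # standard (non-AWQ) load when either is missing.
    return all(
        importlib.util.find_spec(mod) is not None
        for mod in ("triton", "awq")  # assumption: autoawq's import name is `awq`
    )

# Example: pick a load path without crashing on missing optional deps.
use_awq = awq_available()
print("AWQ fast path" if use_awq else "Standard load path")
```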
I also have precompiled wheels for FA2, sageattention, and triton for Windows 10 (cu126, PyTorch 2.6.3, Python 3.12+): https://huggingface.co/impactframes/ComfyUI_desktop_wheels_win_cp12_cu126/tree/main
- IF_prompt_MKR: a similar tool available for Stable Diffusion WebUI
None yet
ancient Megastructure, small lone figure
You can try out these workflow examples directly in ComfyDeploy!
| Workflow | Try It |
|---|---|
| CD_FLUX_LoRA | |
| CD_HYVid_I2V_&_T2V_Native_IFLLM | |
| CD_HYVid_I2V_&_T2V_i2VLora_Native | |
| CD_HYVid_I2V_Lora_KjWrapper | |
- IMPROVED PROFILES
- OMNIGEN
- QWENFLUX
- VIDEOGEN
- AUDIOGEN
If you find this tool useful, please consider supporting my work by:
- Starring the repository on GitHub: ComfyUI-IF_AI_tools
- Subscribing to my YouTube channel: Impact Frames
- Following me on X: Impact Frames X

Thank You!