diff --git a/README.md b/README.md
index d54071a..0cb4ff6 100644
--- a/README.md
+++ b/README.md
@@ -3,14 +3,14 @@
 [![Discord Server](https://img.shields.io/discord/1014774730907209781?label=Discord)](https://discord.com/invite/u9yhsFmEkB)
 
-*New: Stable Diffusion 2.1 is now supported!*
+*New: Stable Diffusion XL, ControlNets, LoRAs and Embeddings are now supported!*
 
 This is a community project, so please feel free to contribute (and use in your project)!
 
 ![t2i](https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/assets/stable-samples/txt2img/768/merged-0006.png)
 
 # Why?
-The goal is to let you be productive quickly (at your AI art project), so it bundles Stable Diffusion along with commonly-used features (like GFPGAN and CodeFormer for face restoration, RealESRGAN for upscaling, k-samplers, support for loading custom VAEs and hypernetworks, NSFW filter etc).
+The goal is to let you be productive quickly (at your AI art project), so it bundles Stable Diffusion along with commonly-used features (like ControlNets, LoRAs, Textual Inversion Embeddings, GFPGAN and CodeFormer for face restoration, RealESRGAN for upscaling, k-samplers, support for loading custom VAEs, NSFW filter etc).
 
 Advanced features include a model-downloader (with a database of commonly used models), support for running in parallel on multiple GPUs, auto-scanning for malicious models etc. [Full list of features](https://github.com/easydiffusion/sdkit/wiki/Features)
@@ -67,7 +67,7 @@ images = generate_images(context, prompt='Photograph of an astronaut riding a ho
 save_images(images, dir_path='D:\\path\\to\\images\\directory')
 ```
 
-Please see the list of [examples](https://github.com/easydiffusion/sdkit/tree/main/examples), to learn how to use the other features (like filters, VAE, Hypernetworks, memory optimizations, running on multiple GPUs etc).
+Please see the list of [examples](https://github.com/easydiffusion/sdkit/tree/main/examples), to learn how to use the other features (like filters, VAE, ControlNet, Embeddings, LoRA, memory optimizations, running on multiple GPUs etc).
 
 # API
 Please see the [API Reference](https://github.com/easydiffusion/sdkit/wiki/API) page for a detailed summary.
@@ -96,9 +96,9 @@ For models that don't match a known hash (e.g. custom models), or if you want to
 
 ## Does it have all the cool features?
 It has a lot of features! It was born out of a popular Stable Diffusion UI, splitting out the battle-tested core engine into `sdkit`.
 
-**Features include:** SD 2.1, txt2img, img2img, inpainting, NSFW filter, multiple GPU support, Mac Support, GFPGAN and CodeFormer (fix faces), RealESRGAN (upscale), 19 samplers (including k-samplers and UniPC), custom VAE, custom hypernetworks, low-memory optimizations, model merging, safetensor support, picklescan, etc. [Click here to see the full list of features](https://github.com/easydiffusion/sdkit/wiki/Features).
+**Features include:** SD 2.1, SDXL, ControlNet, LoRAs, Embeddings, txt2img, img2img, inpainting, NSFW filter, multiple GPU support, Mac Support, GFPGAN and CodeFormer (fix faces), RealESRGAN (upscale), 16 samplers (including k-samplers and UniPC), custom VAE, low-memory optimizations, model merging, safetensor support, picklescan, etc. [Click here to see the full list of features](https://github.com/easydiffusion/sdkit/wiki/Features).
 
-📢 We're looking to add support for *textual inversion embeddings*, *AMD support*, *ControlNet*, *Pix2Pix*, and *outpainting*. We'd love code contributions for these!
+📢 We're looking to add support for *Lycoris*, *AMD*, *Pix2Pix*, and *outpainting*. We'd love code contributions for these!
 
 ## Is it fast?
 It is pretty fast, and close to the fastest. For the same image, `sdkit` took 5.5 seconds, while `automatic1111` webui took 4.95 seconds.
 
 📢 We're looking for code contributions to make `sdkit` even faster!
@@ -119,10 +119,10 @@ No xformers. No VRAM optimizations for low-memory usage.
 
 ## Does it work on lower-end GPUs, or without GPUs?
 Yes. It works on NVIDIA/Mac GPUs with atleast 2GB of VRAM. For PCs without a compatible GPU, it can run entirely on the CPU. Running on the CPU will be *very* slow, but atleast you'll be able to try it out!
 
-📢 We don't support AMD yet (it'll run in CPU-mode), but we're looking for code contributions for AMD support!
+📢 We don't support AMD on Windows yet (it'll run in CPU-mode on Windows; AMD works on Linux), but we're looking for code contributions for AMD support!
 
 ## Why not just use diffusers?
-You can certainly use diffusers. `sdkit` is infact using `diffusers` internally (currently in beta), so you can think of `sdkit` as a convenient API and a collection of tools, focused on Stable Diffusion projects.
+You can certainly use diffusers. `sdkit` is in fact using `diffusers` internally, so you can think of `sdkit` as a convenient API and a collection of tools, focused on Stable Diffusion projects.
 
 `sdkit`:
 1. is a simple, lightweight toolkit for Stable Diffusion projects.
@@ -132,11 +132,11 @@
 5. built-in support for running on multiple GPUs.
 6. can download models from any server.
 7. auto-scans for malicious models.
-8. includes 19 samplers (including k-samplers).
+8. includes 16 samplers (including k-samplers).
 9. born out of the needs of the new Stable Diffusion AI Art scene, starting Aug 2022.
 
 # Who is using sdkit?
-* [Easy Diffusion (cmdr2 UI)](https://github.com/cmdr2/stable-diffusion-ui) for Stable Diffusion.
+* [Easy Diffusion (cmdr2 UI)](https://github.com/easydiffusion/easydiffusion) for Stable Diffusion.
 * [Arthemy AI](https://arthemy.ai/)
 
 If your project is using sdkit, you can add it to this list. Please feel free to open a pull request (or let us know at our [Discord community](https://discord.com/invite/u9yhsFmEkB)).
@@ -145,9 +145,8 @@ If your project is using sdkit, you can add it to this list. Please feel free to
 
 We'd love to accept code contributions. Please feel free to drop by our [Discord community](https://discord.com/invite/u9yhsFmEkB)!
 
 📢 We're looking for code contributions for these features (or anything else you'd like to work on):
-- Using custom Textual Inversion embeddings.
+- Lycoris.
 - Outpainting.
-- ControlNet.
 - Pix2Pix.
 - AMD support.
@@ -161,7 +160,7 @@ Instructions for running automated tests: [Running Tests](tests/README.md).
 * GFPGAN: https://github.com/TencentARC/GFPGAN
 * RealESRGAN: https://github.com/xinntao/Real-ESRGAN
 * k-diffusion: https://github.com/crowsonkb/k-diffusion
-* Code contributors and artists on the cmdr2 UI: https://github.com/cmdr2/stable-diffusion-ui and Discord (https://discord.com/invite/u9yhsFmEkB)
+* Code contributors and artists on Easy Diffusion (cmdr2 UI): https://github.com/easydiffusion/easydiffusion and Discord (https://discord.com/invite/u9yhsFmEkB)
 * Lots of contributors on the internet
 
 # Disclaimer
diff --git a/examples/005-generate-custom_hypernetwork.py b/examples/005-generate-custom_hypernetwork.py
deleted file mode 100644
index 13fba5f..0000000
--- a/examples/005-generate-custom_hypernetwork.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import sdkit
-from sdkit.generate import generate_images
-from sdkit.models import load_model
-from sdkit.utils import log, save_images
-
-context = sdkit.Context()
-
-# set the path to the model and hypernetwork file on the disk
-context.model_paths["stable-diffusion"] = "D:\\path\\to\\model.ckpt"
-context.model_paths["hypernetwork"] = "D:\\path\\to\\hypernetwork.pt"
-load_model(context, "stable-diffusion")
-load_model(context, "hypernetwork")
-
-# generate the image, hypernetwork_strength at 0.3
-images = generate_images(
-    context,
-    prompt="Photograph of an astronaut riding a horse",
-    seed=42,
-    width=512,
-    height=512,
-    hypernetwork_strength=0.3,
-)
-
-# save the image
-save_images(images, dir_path="D:\\path\\to\\images\\directory")
-
-log.info("Generated images with a custom VAE!")
diff --git a/examples/008-generate-custom_lora.py b/examples/008-generate-custom_lora.py
index ec02527..fca3770 100644
--- a/examples/008-generate-custom_lora.py
+++ b/examples/008-generate-custom_lora.py
@@ -4,7 +4,6 @@
 from sdkit.utils import log, save_images
 
 context = sdkit.Context()
-context.test_diffusers = True
 
 # set the path to the model and LoRA file on the disk
 context.model_paths["lora"] = "D:\\path\\to\\lora.safetensors"
diff --git a/examples/009-generate-controlnet.py b/examples/009-generate-controlnet.py
index 43217ea..8923e2a 100644
--- a/examples/009-generate-controlnet.py
+++ b/examples/009-generate-controlnet.py
@@ -6,7 +6,6 @@
 from PIL import Image
 
 context = sdkit.Context()
-context.test_diffusers = True
 
 # convert an existing image into an openpose image (or skip these lines if you have a custom openpose image)
diff --git a/pyproject.toml b/pyproject.toml
index 572962a..dd019ad 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,11 +4,11 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "sdkit"
-version = "1.0.185"
+version = "2.0.0"
 authors = [
     {name="cmdr2", email="secondary.cmdr2@gmail.com"},
 ]
-description = "sdkit (stable diffusion kit) is an easy-to-use library for using Stable Diffusion in your AI Art projects. It is fast, feature-packed, and memory-efficient. It bundles Stable Diffusion along with commonly-used features (like GFPGAN, RealESRGAN, k-samplers, custom VAE, hypernetworks etc). It also includes a model-downloader with a database of commonly used models, and advanced features like running in parallel on multiple GPUs, auto-scanning for malicious models etc. Supports Stable Diffusion 2.1!"
+description = "sdkit (stable diffusion kit) is an easy-to-use library for using Stable Diffusion in your AI Art projects. It is fast, feature-packed, and memory-efficient. It bundles Stable Diffusion along with commonly-used features (like ControlNet, LoRA, Textual Inversion Embeddings, GFPGAN, RealESRGAN, k-samplers, custom VAE etc). It also includes a model-downloader with a database of commonly used models, and advanced features like running in parallel on multiple GPUs, auto-scanning for malicious models etc. Supports Stable Diffusion XL, ControlNet, LoRA, Embeddings!"
 readme = "README.md"
 requires-python = ">=3.8.5"
 classifiers = [
diff --git a/sdkit/__init__.py b/sdkit/__init__.py
index 3b8a013..26e6c96 100644
--- a/sdkit/__init__.py
+++ b/sdkit/__init__.py
@@ -36,7 +36,7 @@ def __init__(self) -> None:
         """
         self.vram_usage_level = "balanced"
-        self.test_diffusers = False
+        self.test_diffusers = True
 
         self.enable_codeformer = False
         """
         Enable this to use CodeFormer.
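With `context.test_diffusers` now defaulting to `True`, the examples no longer need a backend flag. For reference, loading a LoRA follows the same pattern as the removed hypernetwork example above. This is a minimal sketch, assuming the parameter names used by `examples/008-generate-custom_lora.py`; the `lora_alpha` name is an assumption here and should be verified against that file:

```python
import sdkit
from sdkit.generate import generate_images
from sdkit.models import load_model
from sdkit.utils import save_images

context = sdkit.Context()  # the diffusers backend is now the default (test_diffusers=True)

# set the paths to the Stable Diffusion model and the LoRA file on disk
context.model_paths["stable-diffusion"] = "D:\\path\\to\\model.ckpt"
context.model_paths["lora"] = "D:\\path\\to\\lora.safetensors"
load_model(context, "stable-diffusion")
load_model(context, "lora")

# generate with the LoRA applied
images = generate_images(
    context,
    prompt="Photograph of an astronaut riding a horse",
    seed=42,
    width=512,
    height=512,
    lora_alpha=0.5,  # assumed name for the LoRA strength parameter; check examples/008
)

# save the images
save_images(images, dir_path="D:\\path\\to\\images\\directory")
```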