Stable Diffusion is an AI model for text-to-image generation. There are various GUIs available to tweak settings, add specific styles such as Disney or photorealistic looks (LoRA models), and adjust existing images via image-to-image or inpainting (selection infill).
The guides below focus on the Automatic1111 Stable Diffusion WebUI.
This setup requires Docker as well as the GPU drivers to be installed on the host system.
https://github.com/AbdBarho/stable-diffusion-webui-docker
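A quick sanity check that the prerequisites are in place (a minimal sketch; the GPU tool depends on the vendor):
docker --version
docker compose version
nvidia-smi   # NVIDIA driver check
# rocm-smi   # AMD driver check (ROCm)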
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker
cd stable-diffusion-webui-docker
vi docker-compose.yaml
# adjust the name to stable-diffusion if desired
# adjust the data and output dir locations to /opt/stable-diffusion/data and /opt/stable-diffusion/output respectively if desired
mkdir -p /opt/stable-diffusion/{data,output}
# build the AI and download the models
# note: the first start will take 15-60 minutes to download the models into the data folder as cache, depending on the internet connection
# size approximately 10 GB
docker compose --profile download up --build
# if download errors occur, just repeat the command
# build the desired interface:
# docker compose --profile [ui] up --build
# where [ui] is one of: invoke | auto | auto-cpu | comfy | comfy-cpu
# use auto-cpu or comfy-cpu for a CPU-only interface
#docker compose --profile auto-cpu up --build
docker compose --profile auto up --build
# later:
docker compose --profile auto up -d
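Since the first start can take a while, progress can be followed with standard Docker commands (a sketch, assuming the profile names from the repository's compose file):
# follow the logs while the models download or the UI starts
docker compose --profile download logs -f
docker compose --profile auto logs -f
# confirm the container is running
docker ps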
Access the app at http://localhost:7860 (the default port).
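A quick reachability check from the host (a sketch, assuming the default port 7860):
curl -sSf http://localhost:7860/ > /dev/null && echo "WebUI is reachable"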
Windows with an AMD GPU: AUTOMATIC1111 Stable Diffusion WebUI using Microsoft DirectML for GPU acceleration:
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
cd stable-diffusion-webui-directml
git submodule init
git submodule update
venv\Scripts\python.exe -m pip install --upgrade pip
venv\Scripts\pip.exe install -r requirements.txt
venv\Scripts\pip.exe install torch-directml
Set the following launch options (COMMANDLINE_ARGS in webui-user.bat):
COMMANDLINE_ARGS=--use-directml --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check
Check whether the GPU is used in Task Manager's GPU tab: Dedicated GPU Memory should show heavy usage for the loaded model, and GPU utilization should ramp up while generating a test image in the web interface.
Approximate times to generate a 512×512 image with default settings:
CPU/GPU | Hardware | Time |
---|---|---|
CPU | Intel 8th gen i5 (Intel NUC) | 5-15 minutes |
GPU | AMD Radeon 780M | 1.5 minutes |
GPU | AMD Radeon 7900XTX | 21.7 seconds |
GPU | Nvidia RTX 4080 | 1.2 seconds |
As an extension for SD-webui:
Open the Extensions tab in SD-webui.
LoRA models/styles/tools as well as base models can be searched for and downloaded from CivitAI.
Setup for photorealistic images:
copy the base model into the models/Stable-diffusion directory
copy the LoRA files into the models/Lora directory
install the following two extensions (via the Extensions tab, or by cloning them as sketched below):
https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
https://github.com/opparco/stable-diffusion-webui-composable-lora
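A rough sketch for installing both extensions from the command line instead of the Extensions tab (the extensions path is an assumption for the docker setup above; for a native install it is stable-diffusion-webui/extensions):
# assumed extensions location for the webui-docker 'auto' profile -- verify against your volume mapping
cd /opt/stable-diffusion/data/config/auto/extensions
git clone https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
git clone https://github.com/opparco/stable-diffusion-webui-composable-lora
# restart the WebUI so the new extensions are loaded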
Base settings:
set sampling steps to 20
set sampling method to DPM++ SDE Karras
set width to 768 and height to 512
set CFG scale to 6
set seed to -1
enable Dynamic Thresholding (CFG Scale Fix)
enable all Composable Lora options
cd /opt/stable-diffusion/data/models/Stable-diffusion
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors
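To confirm the full model was downloaded (and not just a small Git LFS pointer file), a quick size check helps; the file should be several GB:
ls -lh sd_xl_base_1.0_0.9vae.safetensors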
This is for general objects/scenes/animals/etc. and is faster than SDXL.
Download SDXL Turbo from https://huggingface.co/stabilityai/sdxl-turbo and copy it into the models/Stable-diffusion directory.
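Alternatively, by analogy with the SDXL download above, the file can be fetched directly on the docker host (a sketch; the exact filename is an assumption, check the repository's file list first):
cd /opt/stable-diffusion/data/models/Stable-diffusion
# filename assumed -- verify on https://huggingface.co/stabilityai/sdxl-turbo
wget https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors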
https://civitai.com/models/176555?modelVersionId=214296
Other Text LoRAs:
https://civitai.com/tag/text
Prompt: 3d, 8k, masterpiece, (best quality), Japanese garden, waterfall, cherry blossoms in full bloom, conifer, koi pond, (pagoda:1.1), stone lantern, water basin, water reflections, winding path, (night sky:1.1), hokusai inspiration, ultra-realistic, pastel color scheme, soft lighting, golden hour, tranquil atmosphere, landscape orientation
Negative prompt: EasyNegative, (worst quality:1.2), (low quality:1.2), (lowres:1.1), (monochrome:1.1), (greyscale), multiple views, comic, sketch, watermark
Settings: 25 steps, DPM++ 2M Karras, 1024x768, seed 1192013237
Prompt: greg rutkowski, highly detailed, dark, surreal scary swamp, terrifying, horror, poorly lit, trending on artstation, incredible composition, masterpiece
Settings: LMS Karras, 50 steps, CFG scale 8, random seed
Test: use the following prompt in txt2img:
RAW photo, close up photo of face, woman with dark hair in rain at night, blue eyes, detailed eyes, dark skin, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <lora:lora_contrast_fix:1> <lora:lora-style-hyperrealism-art:1>
and the following as the negative prompt:
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck
Then click Generate as often as you like.
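The same test can also be scripted against the WebUI API (a sketch, assuming the API is enabled via the --api launch flag and the UI listens on localhost:7860; paste the full prompts from above in place of the abbreviated ones):
# send the test prompt to the txt2img endpoint and save the first returned image
curl -s -X POST http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "RAW photo, close up photo of face, woman with dark hair in rain at night, ...",
        "negative_prompt": "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, ...), text, close up, ...",
        "steps": 20,
        "width": 768,
        "height": 512,
        "cfg_scale": 6,
        "seed": -1
      }' | python3 -c "import sys, json, base64; open('test.png','wb').write(base64.b64decode(json.load(sys.stdin)['images'][0]))"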
https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/683
If you want a more up-to-date version of A1111, here are the steps:
docker exec -it webui-docker-auto-1 /bin/bash
git pull
pip install -r requirements.txt  # if needed
cd repositories
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git
exit
docker stop webui-docker-auto-1
docker start webui-docker-auto-1
If it fails for any reason, just rebuild the container using the standard process of docker compose --profile auto up --build