From 3fcff4ca1a8ee63a6a835be4d19871f78cd15bd0 Mon Sep 17 00:00:00 2001
From: ClashSAN <98228077+ClashSAN@users.noreply.github.com>
Date: Thu, 24 Aug 2023 09:46:33 -0400
Subject: [PATCH] Add sdxl optimizations (+feel free to add more)

---
 Optimum-SDXL-Usage.md | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 Optimum-SDXL-Usage.md

diff --git a/Optimum-SDXL-Usage.md b/Optimum-SDXL-Usage.md
new file mode 100644
index 0000000..f0ce09e
--- /dev/null
+++ b/Optimum-SDXL-Usage.md
@@ -0,0 +1,20 @@
+Here's a quick list of things to tune for your setup:
+
+## Commandline arguments:
+
+- (Nvidia) 8 GB: `--medvram-sdxl --xformers`
+- (Nvidia) 12 GB+: `--xformers`
+- (Nvidia) 4 GB: `--lowvram --xformers`
+
+
+## System:
+- downgrade Nvidia drivers to 531 or lower to prevent extreme slowdowns when generating the largest images
+- add a pagefile to prevent failures to load weights due to low system RAM
+- (Linux) install tcmalloc to greatly reduce RAM usage: `sudo apt install --no-install-recommends google-perftools` [#10117](https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10117)
+- use an SSD for faster load times, especially if a pagefile is required
+- convert `.safetensors` to `.ckpt` for reduced RAM usage [#12086](https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12086#issuecomment-1691154698) (see the sketch below)
+
+## Model weights:
+
+- use a VAE that does not need to run in fp32, for increased speed and lower VRAM usage: [sdxl_vae.safetensors](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors)
+- use fp16 (~7 GB) weights for lower system RAM usage
\ No newline at end of file
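
Note on the commandline arguments section: the flags listed there typically go into `COMMANDLINE_ARGS` in `webui-user.bat` (Windows) or `webui-user.sh` (Linux) before launching the webui.

As a rough illustration of the `.safetensors` → `.ckpt` conversion mentioned under "System": below is a minimal sketch, not the exact script from the linked issue. It assumes `torch` and `safetensors` are installed, and the file names are placeholders for your own checkpoint paths.

```python
# Minimal sketch: convert an SDXL checkpoint from .safetensors to .ckpt.
# Assumes torch and safetensors are installed; paths are placeholders.
import torch
from safetensors.torch import load_file

src = "sd_xl_base_1.0_fp16.safetensors"  # placeholder: your .safetensors checkpoint
dst = "sd_xl_base_1.0_fp16.ckpt"         # placeholder: output .ckpt path

state_dict = load_file(src)                  # load all tensors into a plain dict
torch.save({"state_dict": state_dict}, dst)  # write them out in the legacy .ckpt layout
```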