Mirror of https://github.com/AUTOMATIC1111/stable-diffusion-webui.git, synced 2025-04-04 03:29:00 +08:00
basic --opt-sdp-attention info derived from discussion post https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8691
parent: 5ffbfa4cf4, commit: c06be1b1e7
A number of optimizations can be enabled by [commandline arguments](Run-with-Custom-Parameters):

| commandline argument | explanation |
|--------------------------------|-------------|
| `--xformers` | Use the [xformers](https://github.com/facebookresearch/xformers) library. Great improvement to memory consumption and speed. The Windows version installs binaries maintained by [C43H66N12O12S2](https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases). Will only be enabled on a small subset of configurations because that's what we have binaries for. [Documentation](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers) |
| `--opt-sdp-attention` | Faster speeds than xformers; only available for users who manually install torch 2.0 into their venv. (non-deterministic) |
| `--opt-sdp-no-mem-attention` | Faster speeds than xformers; only available for users who manually install torch 2.0 into their venv. (deterministic, slightly slower than `--opt-sdp-attention`) |
| `--force-enable-xformers` | Enables xformers above regardless of whether the program thinks you can run it or not. Do not report bugs you get running this. |
| `--opt-split-attention` | Cross attention layer optimization significantly reducing memory use for almost no cost (some report improved performance with it). Black magic. <br/>On by default for `torch.cuda`, which includes both NVidia and AMD cards. |
| `--disable-opt-split-attention` | Disables the optimization above. |
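As a minimal sketch of how the flags above are actually used: on Linux/macOS they are typically passed through the `COMMANDLINE_ARGS` variable in `webui-user.sh` (on Windows, the equivalent line in `webui-user.bat` is `set COMMANDLINE_ARGS=...`). The specific flag chosen here is just an example.

```shell
# webui-user.sh: pass optimization flags to the webui launcher.
# Flags are space-separated; choose either --opt-sdp-attention or
# --opt-sdp-no-mem-attention (not both), depending on whether you
# need deterministic output.
export COMMANDLINE_ARGS="--opt-sdp-attention"
```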