diff --git a/Optimizations.md b/Optimizations.md
index c1454fe..fba7de2 100644
--- a/Optimizations.md
+++ b/Optimizations.md
@@ -2,7 +2,7 @@ A number of optimization can be enabled by [commandline arguments](Run-with-Cust
 | commandline argument            | explanation |
 |---------------------------------|-------------|
-| `--xformers`                    | Use the [xformers](https://github.com/facebookresearch/xformers) library. Great improvement to memory consumption and speed. The Windows version installs binaries maintained by [C43H66N12O12S2](https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases). Will only be enabled on a small subset of configurations because that's what we have binaries for. |
+| `--xformers`                    | Use the [xformers](https://github.com/facebookresearch/xformers) library. Great improvement to memory consumption and speed. The Windows version installs binaries maintained by [C43H66N12O12S2](https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases). Will only be enabled on a small subset of configurations because that's what we have binaries for. [Documentation](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers) |
 | `--force-enable-xformers`       | Enables xformers above regardless of whether the program thinks you can run it or not. Do not report bugs you get running this. |
 | `--opt-split-attention`         | Cross-attention layer optimization that significantly reduces memory use for almost no cost (some report improved performance with it). Black magic.
 On by default for `torch.cuda`, which includes both NVidia and AMD cards. |
 | `--disable-opt-split-attention` | Disables the optimization above. |
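
For context, a minimal sketch of how flags like the ones in this table are typically supplied to the webui. The `webui-user.sh`, `webui.sh`, and `launch.py` file names and the `COMMANDLINE_ARGS` variable follow the standard AUTOMATIC1111 install layout; a customized setup may differ.

```bash
# A minimal sketch, assuming the standard AUTOMATIC1111 launcher layout
# (webui-user.sh on Linux/macOS, webui-user.bat on Windows).
# The launcher reads COMMANDLINE_ARGS and appends it to the launch arguments.
export COMMANDLINE_ARGS="--xformers --opt-split-attention"
./webui.sh

# Equivalently, pass the flags to the launcher directly:
python launch.py --xformers --opt-split-attention
```

On Windows, the same effect is usually achieved by editing the `set COMMANDLINE_ARGS=` line in `webui-user.bat`.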