brackets

catboxanon 2023-08-16 17:54:05 -04:00
parent 79e3c23c5a
commit f27b096be8

@@ -4,7 +4,7 @@ A number of optimizations can be enabled by [commandline arguments](Command-Line-
|--------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--opt-sdp-attention` | May result in faster speeds than using xFormers on some systems, but requires more VRAM. (non-deterministic) |
| `--opt-sdp-no-mem-attention` | May result in faster speeds than using xFormers on some systems, but requires more VRAM. (deterministic; slightly slower than `--opt-sdp-attention` and uses more VRAM) |
-| `--xformers` | Use [xFormers](https://github.com/facebookresearch/xformers) library. Great improvement to memory consumption and speed. Nvidia GPUs only. ([deterministic as of 0.0.19](https://github.com/facebookresearch/xformers/releases/tag/v0.0.19) (webui uses 0.0.20 as of 1.4.0) |
+| `--xformers` | Use [xFormers](https://github.com/facebookresearch/xformers) library. Great improvement to memory consumption and speed. Nvidia GPUs only. ([deterministic as of 0.0.19](https://github.com/facebookresearch/xformers/releases/tag/v0.0.19) [webui uses 0.0.20 as of 1.4.0]) |
| `--force-enable-xformers` | Enables xFormers regardless of whether the program thinks you can run it or not. Do not report bugs you get running this. |
| `--opt-split-attention` | Cross-attention layer optimization that significantly reduces memory use at almost no cost (some report improved performance with it). Black magic. <br/>On by default for `torch.cuda`, which includes both Nvidia and AMD cards. |
| `--disable-opt-split-attention` | Disables the optimization above. |
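
For context (not part of this commit): these flags are passed to the webui at launch. A minimal sketch, assuming the stock `webui-user.sh` launcher script, which reads the `COMMANDLINE_ARGS` environment variable:

```sh
# Minimal sketch: enable xFormers through the launcher's COMMANDLINE_ARGS.
# Assumes the stock webui-user.sh; swap in another flag from the table,
# e.g. --opt-sdp-attention, on systems where SDP attention is faster.
export COMMANDLINE_ARGS="--xformers"

# Windows equivalent in webui-user.bat:
#   set COMMANDLINE_ARGS=--xformers
```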