--precision half

Commit f87fde526a by w-e-w, 2024-11-02 19:58:24 +09:00 (parent 8d34abe419)

@@ -147,7 +147,7 @@ https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10516
 --use-cpu | {all, sd, interrogate, gfpgan, bsrgan, esrgan, scunet, codeformer} | None | Use CPU as torch device for specified modules. |
 --use-ipex | None | False | Use Intel XPU as torch device |
 --no-half | None | False | Do not switch the model to 16-bit floats. |
---precision | {full,autocast} | autocast | Evaluate at this precision. |
+--precision | {full, half, autocast} | autocast | Evaluate at this precision. |
 --no-half-vae | None | False | Do not switch the VAE model to 16-bit floats. |
 --upcast-sampling | None | False | Upcast sampling. No effect with `--no-half`. Usually produces similar results to `--no-half` with better performance while using less memory. |
 --medvram | None | False | Enable Stable Diffusion model optimizations, sacrificing some performance for low VRAM usage. |
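
For context (not part of the diff itself): this change documents `half` as an accepted value for `--precision`, presumably forcing evaluation in 16-bit floats rather than relying on autocast. A minimal sketch of how such flags are typically passed via `COMMANDLINE_ARGS` in `webui-user.sh` (or `set COMMANDLINE_ARGS=...` in `webui-user.bat` on Windows); the particular flag combination shown is only an illustration, not a recommended configuration:

```sh
# webui-user.sh -- illustrative only; choose flags to match your hardware.
# --precision half : evaluate the model at half precision (value added by this commit)
# --medvram        : trade some performance for lower VRAM usage
export COMMANDLINE_ARGS="--precision half --medvram"
```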