note: use these models specifically
parent d6d13a3f14
commit 70863262e4

Features.md (12 changes)
@@ -6,17 +6,17 @@ All examples are non-cherrypicked unless specified otherwise.
[[PR]](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11757) | [[Stability-AI Github]](https://github.com/Stability-AI/generative-models)
-## Downloading:
+## Downloads

-Two models are available. The first is the primary model.
+**Two** models are available. The **first** is the **primary** model.
- [sd_xl_base_1.0_0.9vae](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors)
- [sd_xl_refiner_1.0_0.9vae](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0_0.9vae.safetensors)
-These models are recommended for generating, merging and training.
+These models **specifically** are recommended for generating, merging and training.

->They have a built-in trained vae by https://github.com/madebyollin which fixes NaN infinity calculations running in fp16. ([VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors)) for reference
+>They have a built-in trained VAE by [madebyollin](https://github.com/madebyollin), which fixes NaN/infinity issues when running in fp16 ([VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors) for reference).
## SD-XL BASE
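As a sketch of what the download step above looks like in practice: both checkpoints just need to end up where the webui looks for models. The snippet below assumes the `huggingface_hub` library and the webui's default `models/Stable-diffusion` folder; the folder path and the use of `hf_hub_download` are assumptions, not something the wiki prescribes (downloading the two links in a browser works just as well).

```python
# Illustrative only: fetch the two recommended SD-XL checkpoints with huggingface_hub.
from huggingface_hub import hf_hub_download

# Assumed location of a local webui install; adjust to your own setup.
MODELS_DIR = "stable-diffusion-webui/models/Stable-diffusion"

CHECKPOINTS = [
    ("stabilityai/stable-diffusion-xl-base-1.0", "sd_xl_base_1.0_0.9vae.safetensors"),
    ("stabilityai/stable-diffusion-xl-refiner-1.0", "sd_xl_refiner_1.0_0.9vae.safetensors"),
]

for repo_id, filename in CHECKPOINTS:
    # local_dir drops the file directly into the models folder.
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=MODELS_DIR)
    print("saved", path)
```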
@@ -47,9 +47,9 @@ It's tested to produce same (or very close) images as Stability-AI's repo (need
This secondary model is **designed** to process the `1024×1024` SD-XL image **near completion***, to further enhance and refine details in your final output picture. You could use it to refine finished pictures in the img2img tab as well.
-*To try this kind of generation, you can use this extension - [wcde/sd-webui-refiner](https://github.com/wcde/sd-webui-refiner)
+- [wcde/sd-webui-refiner](https://github.com/wcde/sd-webui-refiner) *To try this kind of generation, you can use this extension.

-(You will want 6gb minimum as `--lowvram` currently isn't supported.)
+>You will want at least 6 GB of VRAM, as `--lowvram` currently isn't supported.
# SD2 Variation Models
[PR](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/8958), ([more info.](https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD))
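To illustrate the refiner workflow described in the hunk above (re-running a nearly finished image through the refiner at low denoising strength), here is a rough sketch against the webui's `/sdapi/v1/img2img` API. It assumes the webui was launched with `--api` on the default `127.0.0.1:7860`, and the checkpoint name `sd_xl_refiner_1.0_0.9vae` is only a guess at how the file shows up in your checkpoint list.

```python
# Illustrative only: refine an already-generated image through the webui img2img API.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # assumes --api on the default port

with open("finished.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "same prompt as the base render",
    "denoising_strength": 0.25,  # keep low so the refiner only polishes details
    "override_settings": {
        # Checkpoint name is an assumption; use the name from your checkpoint dropdown.
        "sd_model_checkpoint": "sd_xl_refiner_1.0_0.9vae",
    },
}

resp = requests.post(URL, json=payload, timeout=600)
resp.raise_for_status()

# The API returns base64-encoded images; save the first one.
refined = base64.b64decode(resp.json()["images"][0])
with open("refined.png", "wb") as f:
    f.write(refined)
```

The [wcde/sd-webui-refiner](https://github.com/wcde/sd-webui-refiner) extension linked above performs the same hand-off inside the UI; the API call is only meant to show the shape of the workflow.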