Making your own inpainting model is very simple:
1. Go to Checkpoint Merger
2. Select "Add Difference"
3. Set "Multiplier" to 1.0
4. Set "A" to the official inpainting model (e.g. sd-v1-5-inpainting)
5. Set "B" to your model
6. Set "C" to the base model the inpainting model was built from (e.g. sd-v1-5)
7. Click "Merge"

The way this works is that it takes the inpainting model and copies your model's unique data onto it.

Notice that the formula is A + (B - C), which you can interpret as equivalent to (A - C) + B. Because 'A' is 1.5-inpaint and 'C' is 1.5, A - C is the inpainting logic and nothing more, so the formula is (Inpainting logic) + (Your Model).
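The formula above can be sketched in a few lines. This is a hypothetical illustration rather than webui's actual merge code: checkpoints are modeled as plain dicts of parameter lists, and `add_difference` is an invented name.

```python
# Hypothetical sketch of "Add Difference" merging: A + multiplier * (B - C).
# Checkpoints are modeled as dicts mapping parameter names to lists of floats;
# real checkpoints hold tensors, but the arithmetic is the same.

def add_difference(a, b, c, multiplier=1.0):
    """Return A + multiplier * (B - C), computed per parameter."""
    merged = {}
    for key, a_vals in a.items():
        if key in b and key in c:
            merged[key] = [av + multiplier * (bv - cv)
                           for av, bv, cv in zip(a_vals, b[key], c[key])]
        else:
            # Parameters that only exist in A (e.g. the inpainting model's
            # extra input channels) are carried over unchanged.
            merged[key] = list(a_vals)
    return merged

# A = the inpainting model, B = your model, C = the base (1.5) model.
inpaint = {"shared.weight": [1.0, 2.0], "inpaint_only.weight": [5.0]}
yours   = {"shared.weight": [1.5, 2.5]}
base    = {"shared.weight": [1.0, 2.0]}

result = add_difference(inpaint, yours, base)
```

Note that wherever A and C agree (everything except the inpainting logic), the result is exactly B — the (A - C) + B reading in action.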
### Wider Application
This "add-the-difference" merging can be applied to almost all the **mechanically unique** models webui can load. \
Check them out on the [Features](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features) page!

#1 Your existing **finetuned model** must match the **unique model's** architecture: either Stable Diffusion 1 or 2.

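One way to picture that requirement is a strict, hypothetical compatibility check (`same_architecture` is an invented name, not webui code): two checkpoints share an architecture when they declare the same parameter names with the same sizes. Real unique models may still add a few parameters of their own, as the inpainting model does.

```python
# Checkpoints modeled as dicts mapping parameter names to value lists.
def same_architecture(model_a, model_b):
    if set(model_a) != set(model_b):
        return False  # different parameter sets -> different architecture
    # Same names: sizes must match too (SD1 vs SD2 layers differ in shape).
    return all(len(model_a[k]) == len(model_b[k]) for k in model_a)

# Made-up miniature "checkpoints" for illustration.
sd1_finetune = {"conv.weight": [0.1, 0.2], "attn.weight": [0.3]}
sd1_base     = {"conv.weight": [1.0, 2.0], "attn.weight": [3.0]}
sd2_model    = {"conv.weight": [1.0, 2.0, 3.0], "attn.weight": [3.0]}

compatible = same_architecture(sd1_finetune, sd1_base)   # same family
mismatched = same_architecture(sd1_finetune, sd2_model)  # different family
```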
#2 You also need to set the unique model against its base model (the base model takes the "C" slot in the merge).
Find out what the base model was from the project's GitHub page.

Q: What was altdiffusion-m9 using as its base model? \
A: The Stable Diffusion 1.4 model

Q: What was instructpix2pix using as its base model? \
A: The Stable Diffusion 1.5 model

The networks/properties of these models can then be used with any finetune, much like how the famous ControlNet networks apply, except these are not separate from the model.
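A toy numeric check of that reading, with made-up one-number "models":

```python
# A = the unique model, B = your finetune, C = the unique model's base.
# Illustrative numbers only; real checkpoints hold millions of parameters.
unique, finetune, base = 3.0, 7.0, 1.0

unique_logic = unique - base          # A - C: what makes the model special
merged = unique + (finetune - base)   # Add Difference: A + (B - C)

# Read the other way around: your finetune plus the unique logic.
assert merged == finetune + unique_logic
```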
<details><summary> Notes: </summary>
_You might realize Controlnet networks can already do many of these things._

So, here are some things that may be worth trying:

- darker/brighter lighting with the noise offset model \
- pictures similar to 512x512 at smaller 256 or 320 dimensions with the miniSD model \
- more deterministic prompting across input languages with the altdiffusion-m9 model (changes the CLIP model)

</details>