add wider application section, "add difference" merging applies to most mechanically unique models

ClashSAN 2023-05-29 05:41:30 -04:00
parent 360776acb2
commit 377c1e856d

@ -1,4 +1,8 @@
Making your own inpainting model is very simple:
![screenshot](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/40751091/4bdbab38-9237-48ea-9698-a036a5c96585)
1. Go to Checkpoint Merger
2. Select "Add Difference"
3. Set "Multiplier" to 1.0
@ -13,4 +17,33 @@ Making your own inpainting model is very simple:
The way this works is that it literally just takes the inpainting model and copies your model's unique data over to it.
Notice that the formula is A + (B - C), which you can interpret as equivalent to (A - C) + B. Because 'A' is 1.5-inpaint and 'C' is 1.5, A - C is the inpainting logic and nothing more. So the formula is (Inpainting logic) + (Your Model).
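The formula above can be sketched in a few lines. This is a minimal illustration, not webui's actual implementation: plain Python lists stand in for tensors (a real checkpoint is a state dict of torch tensors, but the arithmetic is the same), and all names are hypothetical.

```python
# "Add Difference" sketch: merged = A + (B - C) * multiplier, per weight.
def add_difference(a, b, c, multiplier=1.0):
    """Return A + (B - C) * multiplier for every key shared by all three."""
    merged = {}
    for key, wa in a.items():
        if key in b and key in c:
            merged[key] = [x + (y - z) * multiplier
                           for x, y, z in zip(wa, b[key], c[key])]
        else:
            # Keys unique to A (e.g. the inpainting model's extra input
            # channels) are copied over unchanged.
            merged[key] = list(wa)
    return merged

# Toy example: A = 1.5-inpaint, B = your model, C = 1.5 base.
a = {"w": [1.0, 2.0], "inpaint_only": [9.0]}
b = {"w": [1.5, 2.5]}
c = {"w": [1.0, 2.0]}
merged = add_difference(a, b, c)  # (inpainting logic) + (your model)
```

Since C cancels A's base weights, the merged "w" comes out equal to B's, while A's inpainting-only key survives untouched.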
### Wider Application
This "add-the-difference" merging can be applied to almost all of the **mechanically unique** models the webui can load. \
Check them out on the [Features](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features) page!
1. Your existing **finetuned model** must match the **unique model's** architecture: either Stable Diffusion 2 or 1.
2. You also need to pair the **unique model** against its original base model in the merge.
Find out what the base model was from the unique model's GitHub page.
Q: What was altdiffusion-m9 using as its base model? \
A: The Stable Diffusion 1.4 model.
Q: What was instructpix2pix using as its base model? \
A: The Stable Diffusion 1.5 model.
The networks/properties of these models can be used with any finetune, much like the famous ControlNet networks apply to any model; the difference is that these are not separated from the model weights.
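The architecture requirement above can be checked mechanically before merging. A rough sketch, again treating checkpoints as dicts of weight lists; `compatible` is a hypothetical helper, not part of webui:

```python
# Hypothetical architecture check before an add-difference merge: the
# finetune should contain every parameter of the unique model's base,
# with matching sizes (SD1 and SD2 checkpoints differ here, e.g. in
# their text-encoder weights).
def compatible(finetune, base):
    return all(key in finetune and len(finetune[key]) == len(base[key])
               for key in base)

# Toy checkpoints with made-up keys:
sd15_base = {"unet.w": [0.0, 0.0], "clip.w": [0.0]}
sd2_base = {"unet.w": [0.0, 0.0], "open_clip.w": [0.0, 0.0]}
my_finetune = {"unet.w": [0.1, 0.2], "clip.w": [0.3]}  # an SD1-style model
```

Here `my_finetune` passes against `sd15_base` but fails against `sd2_base`, which is exactly the mismatch rule 1 warns about.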
<details><summary> Notes: </summary>
_You might realize that ControlNet networks can already do many of these things._
So, here are some things maybe worth trying:
- darker/brighter lighting with the noise offset model \
- pictures similar to 512x512 at smaller 256 or 320 dimensions with the miniSD model \
- more deterministic prompting across input languages with the altdiffusion-m9 model (changes the CLIP model)
</details>