From 377c1e856d8bf9f3009b72fc554cae39f84b3860 Mon Sep 17 00:00:00 2001
From: ClashSAN <98228077+ClashSAN@users.noreply.github.com>
Date: Mon, 29 May 2023 05:41:30 -0400
Subject: [PATCH] add wider application section, "add difference" merging
 applies to most mechanically unique models

---
 How-to-make-your-own-Inpainting-model.md | 69 ++++++++++++++++++++++++++++++-
 1 file changed, 68 insertions(+), 1 deletion(-)

diff --git a/How-to-make-your-own-Inpainting-model.md b/How-to-make-your-own-Inpainting-model.md
index 1b651f3..8ed7ff1 100644
--- a/How-to-make-your-own-Inpainting-model.md
+++ b/How-to-make-your-own-Inpainting-model.md
@@ -1,4 +1,8 @@
 Making your own inpainting model is very simple:
+
+
+![screenshot](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/40751091/4bdbab38-9237-48ea-9698-a036a5c96585)
+
 1. Go to Checkpoint Merger
 2. Select "Add Difference"
 3. Set "Multiplier" to 1.0
@@ -13,4 +17,67 @@ Making your own inpainting model is very simple:
 The way this works is it literally just takes the inpainting model, and copies over your model's unique data to it.
 Notice that the formula is A + (B - C), which you can interpret as equivalent to (A - C) + B.
 Because 'A' is 1.5-inpaint and 'C' is 1.5, A - C is inpainting logic and nothing more. So the formula is (Inpainting logic) + (Your Model).
-![screenshot](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/40751091/4bdbab38-9237-48ea-9698-a036a5c96585)
+### Wider Application
+
+This "Add Difference" merging can be applied to almost all of the **mechanically unique** models the webui can load. \
+Check them out on the [Features](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features) page! Two requirements:
+
+1. Your existing **finetuned model** needs to match the **unique model's** architecture: either Stable Diffusion 1 or Stable Diffusion 2.
+2. The model you subtract (C) needs to be the unique model's own base model. Find out which base model was used from the project's GitHub page.
+
+Q: What base model does altdiffusion-m9 use? \
+A: The Stable Diffusion 1.4 model.
+
+Q: What base model does instructpix2pix use? \
+A: The Stable Diffusion 1.5 model.
+
+The networks/properties of these models can be used with any finetune, much like the famous ControlNet networks are applied; the difference is that these are baked into the model rather than loaded separately.
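+
+Below is a minimal sketch of what "Add Difference" computes, assuming standard `.ckpt` checkpoints that store their weights under a `state_dict` key. The file names are placeholders, and this is an illustration of the formula above, not the webui's exact implementation:
+
+```python
+import torch
+
+def load_state_dict(path):
+    return torch.load(path, map_location="cpu")["state_dict"]
+
+def add_difference(a_path, b_path, c_path, multiplier=1.0):
+    """Checkpoint Merger's 'Add Difference': result = A + (B - C) * multiplier."""
+    a, b, c = load_state_dict(a_path), load_state_dict(b_path), load_state_dict(c_path)
+    merged = {}
+    for key, theta_a in a.items():
+        if key in b and key in c and b[key].shape == theta_a.shape == c[key].shape:
+            # Keep A's unique logic and add in your model's learned difference.
+            merged[key] = theta_a + multiplier * (b[key] - c[key])
+        else:
+            # Keys unique to A, or with mismatched shapes (e.g. the inpainting
+            # model's 9-channel first conv layer), are copied from A unchanged.
+            merged[key] = theta_a
+    return merged
+
+# A = 1.5-inpaint, B = your model, C = 1.5 base:
+merged = add_difference("sd-v1-5-inpainting.ckpt", "your-model.ckpt", "v1-5-pruned.ckpt")
+torch.save({"state_dict": merged}, "your-model-inpainting.ckpt")
+```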
+
+<br>
+
+Notes:
+
+_You might realize ControlNet networks can already do many of these things._
+
+Still, here are some things worth trying:
+
+- darker/brighter lighting with the noise offset model
+- pictures similar to 512x512 output at smaller 256 or 320 sizes with the miniSD model
+- more consistent prompting across input languages with the altdiffusion-m9 model (it changes the CLIP model)
+
+<br>
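+
+As a worked example for the first suggestion, this reuses `add_difference()` from the sketch above. File names are again placeholders, and it assumes the noise offset model was finetuned from Stable Diffusion 1.5, so confirm the base on its project page:
+
+```python
+# A = noise offset model, B = your finetune, C = the base A was trained from.
+merged = add_difference("noise-offset.ckpt", "your-model.ckpt", "v1-5-pruned.ckpt")
+torch.save({"state_dict": merged}, "your-model-noise-offset.ckpt")
+```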
\ No newline at end of file