Merge branch 'dev' into gradio4

missionfloyd 2024-04-21 18:15:55 -06:00 committed by GitHub
commit 32281b272e
33 changed files with 475 additions and 114 deletions


@@ -1,4 +1,126 @@
-## 1.8.0-RC
+## 1.9.0
### Features:
* Make refiner switchover based on model timesteps instead of sampling steps ([#14978](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14978))
* add an option to have old-style directory view instead of tree view; stylistic changes for extra network sorting/search controls
* add UI for reordering callbacks, support for specifying callback order in extension metadata ([#15205](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15205))
* Sgm uniform scheduler for SDXL-Lightning models ([#15325](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15325))
* Scheduler selection in main UI ([#15333](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15333), [#15361](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15361), [#15394](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15394))
### Minor:
* "open images directory" button now opens the actual dir ([#14947](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14947))
* Support inference with LyCORIS BOFT networks ([#14871](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14871), [#14973](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14973))
* make extra network card description plaintext by default, with an option to re-enable HTML as it was
* resize handle for extra networks ([#15041](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15041))
* cmd args: `--unix-filenames-sanitization` and `--filenames-max-length` ([#15031](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15031))
* show extra networks parameters in HTML table rather than raw JSON ([#15131](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15131))
* Add DoRA (weight-decompose) support for LoRA/LoHa/LoKr ([#15160](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15160), [#15283](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15283))
* Add `--no-prompt-history` cmd arg to disable last generation prompt history ([#15189](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15189))
* update preview on Replace Preview ([#15201](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15201))
* only fetch updates for extensions' active git branches ([#15233](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15233))
* put upscale postprocessing UI into an accordion ([#15223](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15223))
* Support dragdrop for URLs to read infotext ([#15262](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15262))
* use diskcache library for caching ([#15287](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15287), [#15299](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15299))
* Allow PNG-RGBA for Extras Tab ([#15334](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15334))
* Support cover images embedded in safetensors metadata ([#15319](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15319))
* faster interrupt when using NN upscale ([#15380](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15380))
* Extras upscaler: an input field to limit the maximum side length of the output image ([#15293](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15293), [#15415](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15415), [#15417](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15417), [#15425](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15425))
* add an option to hide postprocessing options in Extras tab
### Extensions and API:
* ResizeHandleRow - allow overridden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
* call script_callbacks.ui_settings_callback earlier; fix extra-options-section built-in extension killing the ui if using a setting that doesn't exist
* make it possible to use zoom.js outside webui context ([#15286](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15286), [#15288](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15288))
* allow variants for extension name in metadata.ini ([#15290](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15290))
* make reloading UI scripts optional when doing Reload UI, and off by default
* put request: gr.Request at start of img2img function similar to txt2img
* open_folder as util ([#15442](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15442))
* make it possible to import extensions' script files as `import scripts.<filename>` ([#15423](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15423))
### Performance:
* performance optimization for extra networks HTML pages
* optimization for extra networks filtering
* optimization for extra networks sorting
### Bug Fixes:
* prevent escape button causing an interrupt when no generation has been made yet
* [bug] avoid double upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
* possible fix for reload button not appearing in some cases for extra networks.
* fix: the `split_threshold` parameter does not work when running Split oversized images ([#15006](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15006))
* Fix resize-handle visibility for vertical layout (mobile) ([#15010](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15010))
* register_tmp_file also for mtime ([#15012](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15012))
* Protect alphas_cumprod during refiner switchover ([#14979](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14979))
* Fix EXIF orientation in API image loading ([#15062](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15062))
* Only override emphasis if actually used in prompt ([#15141](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15141))
* Fix emphasis infotext missing from `params.txt` ([#15142](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15142))
* fix extract_style_text_from_prompt #15132 ([#15135](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15135))
* Fix Soft Inpaint for AnimateDiff ([#15148](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15148))
* edit-attention: deselect surrounding whitespace ([#15178](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15178))
* chore: fix font not loaded ([#15183](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15183))
* use natural sort in extra networks when ordering by path
* Fix built-in lora system bugs caused by torch.nn.MultiheadAttention ([#15190](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15190))
* Avoid error from None in get_learned_conditioning ([#15191](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15191))
* Add entry to MassFileLister after writing metadata ([#15199](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15199))
* fix issue with Styles when Hires prompt is used ([#15269](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15269), [#15276](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15276))
* Strip comments from hires fix prompt ([#15263](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15263))
* Make imageviewer event listeners browser consistent ([#15261](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15261))
* Fix AttributeError in OFT when trying to get MultiheadAttention weight ([#15260](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15260))
* Add missing .mean() back ([#15239](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15239))
* fix "Restore progress" button ([#15221](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15221))
* fix ui-config for InputAccordion [custom_script_source] ([#15231](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15231))
* handle 0 wheel deltaY ([#15268](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15268))
* prevent alt menu for firefox ([#15267](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15267))
* fix: fix syntax errors ([#15179](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15179))
* restore outputs path ([#15307](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15307))
* Escape btn_copy_path filename ([#15316](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15316))
* Fix extra networks buttons when filename contains an apostrophe ([#15331](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15331))
* escape brackets in lora random prompt generator ([#15343](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15343))
* fix: Python version check for PyTorch installation compatibility ([#15390](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15390))
* fix typo in call_queue.py ([#15386](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15386))
* fix: when finding an already-loaded model, remove it from the loaded list by array index ([#15382](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15382))
* minor bug fix of sd model memory management ([#15350](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15350))
* Fix CodeFormer weight ([#15414](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15414))
* Fix: Remove script callbacks in ordered_callbacks_map ([#15428](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15428))
* fix limited file write (thanks, Sylwia)
* Fix extra-single-image API failing to upscale ([#15465](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15465))
* error handling paste_field callables ([#15470](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15470))
### Hardware:
* Add training support and change lspci for Ascend NPU ([#14981](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14981))
* Update to ROCm5.7 and PyTorch ([#14820](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14820))
* Better workaround for Navi1, removing --pre for Navi3 ([#15224](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15224))
* Ascend NPU wiki page ([#15228](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15228))
### Other:
* Update comment for Pad prompt/negative prompt v0 to add a warning about truncation, make it override the v1 implementation
* support resizable columns for touch (tablets) ([#15002](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15002))
* Fix #14591 using translated content to do categories mapping ([#14995](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14995))
* Use `absolute` path for normalized filepath ([#15035](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15035))
* resizeHandle handle double tap ([#15065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15065))
* --dat-models-path cmd flag ([#15039](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15039))
* Add a direct link to the binary release ([#15059](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15059))
* upscaler_utils: Reduce logging ([#15084](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15084))
* Fix various typos with crate-ci/typos ([#15116](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15116))
* fix_jpeg_live_preview ([#15102](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15102))
* [alternative fix] can't load webui if a wrong extra option is selected in the UI ([#15121](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15121))
* Error handling for unsupported transparency ([#14958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14958))
* Add model description to searched terms ([#15198](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15198))
* bump action version ([#15272](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15272))
* PEP 604 annotations ([#15259](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15259))
* Automatically Set the Scale by value when user selects an Upscale Model ([#15244](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15244))
* move postprocessing-for-training into builtin extensions ([#15222](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15222))
* type hinting in shared.py ([#15211](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15211))
* update ruff to 0.3.3
* Update pytorch lightning utilities ([#15310](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15310))
* Add Size as an XYZ Grid option ([#15354](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15354))
* Use HF_ENDPOINT variable for HuggingFace domain with default ([#15443](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15443))
* re-add update_file_entry ([#15446](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15446))
* create_infotext allow index and callable, re-work Hires prompt infotext ([#15460](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15460))
* update restricted_opts to include more options for --hide-ui-dir-config ([#15492](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15492))
## 1.8.0
### Features:
* Update torch to version 2.1.2
@@ -61,7 +183,7 @@
* add before_token_counter callback and use it for prompt comments
* ResizeHandleRow - allow overridden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
-### Performance
+### Performance:
* Massive performance improvement for extra networks directories with a huge number of files in them in an attempt to tackle #14507 ([#14528](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14528))
* Reduce unnecessary re-indexing extra networks directory ([#14512](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14512))
* Avoid unnecessary `isfile`/`exists` calls ([#14527](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14527))


@@ -1,7 +1,7 @@
from modules import scripts
from modules.shared import opts

-xyz_grid = [x for x in scripts.scripts_data if x.script_class.__module__ == "xyz_grid.py"][0].module
+xyz_grid = [x for x in scripts.scripts_data if x.script_class.__module__ == "scripts.xyz_grid"][0].module

def int_applier(value_name:str, min_range:int = -1, max_range:int = -1):
    """


@@ -568,7 +568,7 @@ function extraNetworksShowMetadata(text) {
            return;
        }
    } catch (error) {
-        console.eror(error);
+        console.error(error);
    }

    var elem = document.createElement('pre');


@@ -17,7 +17,7 @@ from fastapi.encoders import jsonable_encoder
from secrets import compare_digest

import modules.shared as shared
-from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items, script_callbacks, infotext_utils, sd_models
+from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items, script_callbacks, infotext_utils, sd_models, sd_schedulers
from modules.api import models
from modules.shared import opts
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
@@ -221,6 +221,7 @@ class Api:
        self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"])
        self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=models.FlagsModel)
        self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=list[models.SamplerItem])
+        self.add_api_route("/sdapi/v1/schedulers", self.get_schedulers, methods=["GET"], response_model=list[models.SchedulerItem])
        self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=list[models.UpscalerItem])
        self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=list[models.LatentUpscalerModeItem])
        self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=list[models.SDModelItem])
@@ -683,6 +684,17 @@ class Api:
    def get_samplers(self):
        return [{"name": sampler[0], "aliases":sampler[2], "options":sampler[3]} for sampler in sd_samplers.all_samplers]

+    def get_schedulers(self):
+        return [
+            {
+                "name": scheduler.name,
+                "label": scheduler.label,
+                "aliases": scheduler.aliases,
+                "default_rho": scheduler.default_rho,
+                "need_inner_model": scheduler.need_inner_model,
+            }
+            for scheduler in sd_schedulers.schedulers]
+
    def get_upscalers(self):
        return [
            {
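For extension or client authors, the new endpoint can be exercised with any HTTP client. A minimal sketch, assuming a local instance started with `--api` on the default `127.0.0.1:7860`; the `scheduler` field in the txt2img payload is an assumption based on the scheduler plumbing added in this release, not something shown in this hunk.

```python
import requests

BASE_URL = "http://127.0.0.1:7860"  # assumed default --api address

# List the schedulers the running instance knows about (SchedulerItem fields, see models.py below).
schedulers = requests.get(f"{BASE_URL}/sdapi/v1/schedulers", timeout=30).json()
for item in schedulers:
    print(item["name"], item["label"], item.get("default_rho"))

# A scheduler name from the listing can then be used in a generation request.
payload = {
    "prompt": "a photo of a corgi",
    "steps": 20,
    "sampler_name": "DPM++ 2M",
    "scheduler": "karras",  # one of the "name" values returned above
}
response = requests.post(f"{BASE_URL}/sdapi/v1/txt2img", json=payload, timeout=300)
print(response.status_code)
```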


@@ -145,7 +145,7 @@ class ExtrasBaseRequest(BaseModel):
    gfpgan_visibility: float = Field(default=0, title="GFPGAN Visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of GFPGAN, values should be between 0 and 1.")
    codeformer_visibility: float = Field(default=0, title="CodeFormer Visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of CodeFormer, values should be between 0 and 1.")
    codeformer_weight: float = Field(default=0, title="CodeFormer Weight", ge=0, le=1, allow_inf_nan=False, description="Sets the weight of CodeFormer, values should be between 0 and 1.")
-    upscaling_resize: float = Field(default=2, title="Upscaling Factor", ge=1, le=8, description="By how much to upscale the image, only used when resize_mode=0.")
+    upscaling_resize: float = Field(default=2, title="Upscaling Factor", gt=0, description="By how much to upscale the image, only used when resize_mode=0.")
    upscaling_resize_w: int = Field(default=512, title="Target Width", ge=1, description="Target width for the upscaler to hit. Only used when resize_mode=1.")
    upscaling_resize_h: int = Field(default=512, title="Target Height", ge=1, description="Target height for the upscaler to hit. Only used when resize_mode=1.")
    upscaling_crop: bool = Field(default=True, title="Crop to fit", description="Should the upscaler crop the image to fit in the chosen size?")
@@ -233,6 +233,13 @@ class SamplerItem(BaseModel):
    aliases: list[str] = Field(title="Aliases")
    options: dict[str, str] = Field(title="Options")

+class SchedulerItem(BaseModel):
+    name: str = Field(title="Name")
+    label: str = Field(title="Label")
+    aliases: Optional[list[str]] = Field(title="Aliases")
+    default_rho: Optional[float] = Field(title="Default Rho")
+    need_inner_model: Optional[bool] = Field(title="Needs Inner Model")
+
class UpscalerItem(BaseModel):
    class Config:
        protected_namespaces = ()


@@ -50,7 +50,7 @@ class FaceRestorerCodeFormer(face_restoration_utils.CommonFaceRestoration):
        def restore_face(cropped_face_t):
            assert self.net is not None
-            return self.net(cropped_face_t, w=w, adain=True)[0]
+            return self.net(cropped_face_t, weight=w, adain=True)[0]

        return self.restore_with_helper(np_image, restore_face)


@@ -11,7 +11,7 @@ import tqdm
from einops import rearrange, repeat
from ldm.util import default
from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
-from modules.textual_inversion import textual_inversion, logging
+from modules.textual_inversion import textual_inversion, saving_settings
from modules.textual_inversion.learn_schedule import LearnRateScheduler
from torch import einsum
from torch.nn.init import normal_, xavier_normal_, xavier_uniform_, kaiming_normal_, kaiming_uniform_, zeros_
@@ -533,7 +533,7 @@ def train_hypernetwork(id_task, hypernetwork_name: str, learn_rate: float, batch
        model_name=checkpoint.model_name, model_hash=checkpoint.shorthash, num_of_dataset_images=len(ds),
        **{field: getattr(hypernetwork, field) for field in ['layer_structure', 'activation_func', 'weight_init', 'add_layer_norm', 'use_dropout', ]}
    )
-    logging.save_settings_to_file(log_directory, {**saved_params, **locals()})
+    saving_settings.save_settings_to_file(log_directory, {**saved_params, **locals()})

    latent_sampling_method = ds.latent_sampling_method


@@ -1,7 +1,7 @@
from __future__ import annotations

import datetime
+import functools
import pytz
import io
import math
@@ -13,6 +13,8 @@ import numpy as np
import piexif
import piexif.helper
from PIL import Image, ImageFont, ImageDraw, ImageColor, PngImagePlugin, ImageOps
+# pillow_avif needs to be imported somewhere in code for it to work
+import pillow_avif  # noqa: F401
import string
import json
import hashlib
@@ -347,6 +349,32 @@ def sanitize_filename_part(text, replace_spaces=True):
    return text

+@functools.cache
+def get_scheduler_str(sampler_name, scheduler_name):
+    """Returns {Scheduler} if the scheduler is applicable to the sampler"""
+    if scheduler_name == 'Automatic':
+        config = sd_samplers.find_sampler_config(sampler_name)
+        scheduler_name = config.options.get('scheduler', 'Automatic')
+    return scheduler_name.capitalize()
+
+@functools.cache
+def get_sampler_scheduler_str(sampler_name, scheduler_name):
+    """Returns the '{Sampler} {Scheduler}' if the scheduler is applicable to the sampler"""
+    return f'{sampler_name} {get_scheduler_str(sampler_name, scheduler_name)}'
+
+def get_sampler_scheduler(p, sampler):
+    """Returns '{Sampler} {Scheduler}' / '{Scheduler}' / 'NOTHING_AND_SKIP_PREVIOUS_TEXT'"""
+    if hasattr(p, 'scheduler') and hasattr(p, 'sampler_name'):
+        if sampler:
+            sampler_scheduler = get_sampler_scheduler_str(p.sampler_name, p.scheduler)
+        else:
+            sampler_scheduler = get_scheduler_str(p.sampler_name, p.scheduler)
+        return sanitize_filename_part(sampler_scheduler, replace_spaces=False)
+    return NOTHING_AND_SKIP_PREVIOUS_TEXT
+
class FilenameGenerator:
    replacements = {
        'seed': lambda self: self.seed if self.seed is not None else '',
@@ -358,6 +386,8 @@ class FilenameGenerator:
        'height': lambda self: self.image.height,
        'styles': lambda self: self.p and sanitize_filename_part(", ".join([style for style in self.p.styles if not style == "None"]) or "None", replace_spaces=False),
        'sampler': lambda self: self.p and sanitize_filename_part(self.p.sampler_name, replace_spaces=False),
+        'sampler_scheduler': lambda self: self.p and get_sampler_scheduler(self.p, True),
+        'scheduler': lambda self: self.p and get_sampler_scheduler(self.p, False),
        'model_hash': lambda self: getattr(self.p, "sd_model_hash", shared.sd_model.sd_model_hash),
        'model_name': lambda self: sanitize_filename_part(shared.sd_model.sd_checkpoint_info.name_for_extra, replace_spaces=False),
        'date': lambda self: datetime.datetime.now().strftime('%Y-%m-%d'),
@@ -569,6 +599,16 @@ def save_image_with_geninfo(image, geninfo, filename, extension=None, existing_p
        })

        piexif.insert(exif_bytes, filename)
+    elif extension.lower() == '.avif':
+        if opts.enable_pnginfo and geninfo is not None:
+            exif_bytes = piexif.dump({
+                "Exif": {
+                    piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(geninfo or "", encoding="unicode")
+                },
+            })
+
+        image.save(filename, format=image_format, exif=exif_bytes)
    elif extension.lower() == ".gif":
        image.save(filename, format=image_format, comment=geninfo)
    else:
@@ -747,7 +787,6 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
            exif_comment = exif_comment.decode('utf8', errors="ignore")

        if exif_comment:
-            items['exif comment'] = exif_comment
            geninfo = exif_comment

    elif "comment" in items: # for gif
        geninfo = items["comment"].decode('utf8', errors="ignore")
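Alongside the new `[sampler_scheduler]` and `[scheduler]` filename-pattern tokens, images can now carry generation info in AVIF output. A minimal sketch of the same EXIF payload the `.avif` branch above writes, assuming the `pillow-avif-plugin` package is installed (which the `import pillow_avif` above relies on):

```python
import piexif
import piexif.helper
import pillow_avif  # noqa: F401  # registers AVIF support with Pillow
from PIL import Image

geninfo = "a photo of a corgi\nSteps: 20, Sampler: DPM++ 2M, Schedule type: Karras"
image = Image.new("RGB", (64, 64), "white")

# Same structure save_image_with_geninfo builds for '.avif' files.
exif_bytes = piexif.dump({
    "Exif": {
        piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(geninfo, encoding="unicode"),
    },
})
image.save("example.avif", exif=exif_bytes)
```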


@@ -145,7 +145,7 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
    return batch_results

-def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, init_img_inpaint, init_mask_inpaint, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
+def img2img(id_task: str, request: gr.Request, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, *args):
    override_settings = create_override_settings_dict(override_settings_texts)

    is_batch = mode == 5


@@ -8,7 +8,7 @@ import sys
import gradio as gr

from modules.paths import data_path
-from modules import shared, ui_tempdir, script_callbacks, processing, infotext_versions, images, prompt_parser
+from modules import shared, ui_tempdir, script_callbacks, processing, infotext_versions, images, prompt_parser, errors
from PIL import Image

sys.modules['modules.generation_parameters_copypaste'] = sys.modules[__name__]  # alias for old name
@@ -500,7 +500,11 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
        for output, key in paste_fields:
            if callable(key):
-                v = key(params)
+                try:
+                    v = key(params)
+                except Exception:
+                    errors.report(f"Error executing {key}", exc_info=True)
+                    v = None
            else:
                v = params.get(key, None)
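For extension authors: an infotext paste field may use a callable key that derives the component value from the parsed parameters dict, and with the change above an exception inside that callable is reported via `errors.report` instead of breaking the whole paste. A hedged sketch of such a registration from a script's `ui()` method (the component and parameter names are hypothetical, not part of this commit):

```python
import gradio as gr

from modules import scripts


class PasteFieldExample(scripts.Script):
    def title(self):
        return "Paste field example"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def ui(self, is_img2img):
        strength = gr.Slider(minimum=0.0, maximum=2.0, value=1.0, label="Example strength")

        def read_strength(params: dict):
            # params is the parsed infotext; a failure here no longer aborts connect_paste.
            return float(params.get("Example strength", 1.0))

        self.infotext_fields = [(strength, read_strength)]
        return [strength]
```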


@@ -1,17 +1,39 @@
from PIL import Image, ImageFilter, ImageOps

-def get_crop_region(mask, pad=0):
-    """finds a rectangular region that contains all masked areas in an image. Returns (x1, y1, x2, y2) coordinates of the rectangle.
-    For example, if a user has painted the top-right part of a 512x512 image, the result may be (256, 0, 512, 256)"""
-    mask_img = mask if isinstance(mask, Image.Image) else Image.fromarray(mask)
-    box = mask_img.getbbox()
-    if box:
-        x1, y1, x2, y2 = box
-    else:  # when no box is found
-        x1, y1 = mask_img.size
-        x2 = y2 = 0
-    return max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask_img.size[0]), min(y2 + pad, mask_img.size[1])
+def get_crop_region_v2(mask, pad=0):
+    """
+    Finds a rectangular region that contains all masked areas in a mask.
+    Returns None if the mask is completely black (all 0).
+
+    Parameters:
+    mask: PIL.Image.Image L mode or numpy 1d array
+    pad: int number of pixels that the region will be extended on all sides
+    Returns: (x1, y1, x2, y2) | None
+
+    Introduced post 1.9.0
+    """
+    mask = mask if isinstance(mask, Image.Image) else Image.fromarray(mask)
+    if box := mask.getbbox():
+        x1, y1, x2, y2 = box
+        return (max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1])) if pad else box
+
+def get_crop_region(mask, pad=0):
+    """
+    Same function as get_crop_region_v2 but handles a completely black mask (all 0) differently:
+    when the mask is all black it still returns coordinates, but they may be invalid, i.e. x2>x1 or y2>y1.
+    Note: it is possible for the coordinates to be "valid" again if the pad size is sufficiently large
+    (mask_size.x-pad, mask_size.y-pad, pad, pad).
+    Extension developers should use get_crop_region_v2 instead unless for compatibility considerations.
+    """
+    mask = mask if isinstance(mask, Image.Image) else Image.fromarray(mask)
+    if box := get_crop_region_v2(mask, pad):
+        return box
+    x1, y1 = mask.size
+    x2 = y2 = 0
+    return max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1])

def expand_crop_region(crop_region, processing_width, processing_height, image_width, image_height):
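A small usage sketch of the two helpers, showing the behavioural difference on an all-black mask (assumes the webui's `modules` package is importable; padding and mask sizes are arbitrary):

```python
from PIL import Image

from modules import masking

blank = Image.new("L", (512, 512), 0)      # nothing masked
painted = Image.new("L", (512, 512), 0)
painted.paste(255, (300, 10, 500, 200))    # user painted part of the top-right area

print(masking.get_crop_region_v2(blank, pad=32))    # None -> caller can fall back to whole-image img2img
print(masking.get_crop_region_v2(painted, pad=32))  # (268, 0, 512, 232)
print(masking.get_crop_region(blank, pad=32))       # (480, 480, 32, 32) -> legacy result with x2 < x1, y2 < y1
```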


@@ -134,13 +134,15 @@ def run_postprocessing_webui(id_task, *args, **kwargs):
    return run_postprocessing(*args, **kwargs)

-def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True):
+def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True, max_side_length: int = 0):
    """old handler for API"""

    args = scripts.scripts_postproc.create_args_for_run({
        "Upscale": {
+            "upscale_enabled": True,
            "upscale_mode": resize_mode,
            "upscale_by": upscaling_resize,
+            "max_side_length": max_side_length,
            "upscale_to_width": upscaling_resize_w,
            "upscale_to_height": upscaling_resize_h,
            "upscale_crop": upscaling_crop,


@@ -608,7 +608,7 @@ class Processed:
            "version": self.version,
        }

-        return json.dumps(obj)
+        return json.dumps(obj, default=lambda o: None)

    def infotext(self, p: StableDiffusionProcessing, index):
        return create_infotext(p, self.all_prompts, self.all_seeds, self.all_subseeds, comments=[], position_in_batch=index % self.batch_size, iteration=index // self.batch_size)
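The `default=lambda o: None` argument makes the dump tolerant of values that are not JSON-encodable, which can end up in `Processed` fields via extensions. A self-contained illustration:

```python
import json


class NotJsonSerializable:
    pass


obj = {"seed": 1234, "extra": NotJsonSerializable()}

# json.dumps(obj) would raise TypeError; with the fallback the offending value degrades to null,
# so Processed.js() keeps working instead of failing the whole result serialization.
print(json.dumps(obj, default=lambda o: None))  # {"seed": 1234, "extra": null}
```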
@@ -703,8 +703,54 @@ def program_version():
    return res

-def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0, use_main_prompt=False, index=None, all_negative_prompts=None, all_hr_prompts=None, all_hr_negative_prompts=None):
-    if index is None:
+def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0, use_main_prompt=False, index=None, all_negative_prompts=None):
+    """
+    This function is used to generate the infotext that is stored in generated images; it contains the parameters required to generate the image.
+    Args:
+        p: StableDiffusionProcessing
+        all_prompts: list[str]
+        all_seeds: list[int]
+        all_subseeds: list[int]
+        comments: list[str]
+        iteration: int
+        position_in_batch: int
+        use_main_prompt: bool
+        index: int
+        all_negative_prompts: list[str]
+    Returns: str
+
+    Extra generation params:
+    The p.extra_generation_params dictionary allows additional parameters to be added to the infotext;
+    this can be used by the base webui or extensions.
+    To add a new entry, add a new key-value pair; the dictionary key will be used as the key of the parameter in the infotext.
+    The value can be defined as:
+        - str | None
+        - List[str|None]
+        - callable func(**kwargs) -> str | None
+    When defined as a string, it is used without extra processing; this is the most common use case.
+    Defining it as a list allows for a parameter that changes across images in the job, for example the 'Seed' parameter.
+    The list should have the same length as the total number of images in the entire job.
+    Defining it as a callable function allows for parameters that cannot be generated earlier or that require extra logic.
+    For example 'Hires prompt': the hr_prompt might be changed by processes in the pipeline or by extensions
+    and may vary across different images, so defining it as a static string or list would not work.
+    The function takes locals() as **kwargs and as such has access to variables like 'p' and 'index';
+    the base signature of the function should be:
+        func(**kwargs) -> str | None
+    Optionally it can have additional arguments that will be filled in from those locals:
+        func(p, index, **kwargs) -> str | None
+    Note: for better future compatibility, even though the function has access to all variables in locals(),
+    it is recommended to only use the arguments present in the function signature of create_infotext.
+    For actual implementation examples, see StableDiffusionProcessingTxt2Img.init > get_hr_prompt.
+    """
+    if use_main_prompt:
+        index = 0
+    elif index is None:
        index = position_in_batch + iteration * p.batch_size

    if all_negative_prompts is None:
@@ -715,6 +761,9 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
    token_merging_ratio = p.get_token_merging_ratio()
    token_merging_ratio_hr = p.get_token_merging_ratio(for_hr=True)

+    prompt_text = p.main_prompt if use_main_prompt else all_prompts[index]
+    negative_prompt = p.main_negative_prompt if use_main_prompt else all_negative_prompts[index]

    uses_ensd = opts.eta_noise_seed_delta != 0
    if uses_ensd:
        uses_ensd = sd_samplers_common.is_sampler_using_eta_noise_seed_delta(p)
@@ -747,22 +796,24 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
        "RNG": opts.randn_source if opts.randn_source != "GPU" else None,
        "NGMS": None if p.s_min_uncond == 0 else p.s_min_uncond,
        "Tiling": "True" if p.tiling else None,
+        "Hires prompt": None,  # This is set later, insert here to keep order
+        "Hires negative prompt": None,  # This is set later, insert here to keep order
        **p.extra_generation_params,
        "Version": program_version() if opts.add_version_to_infotext else None,
        "User": p.user if opts.add_user_name_to_info else None,
    }

-    if all_hr_prompts := all_hr_prompts or getattr(p, 'all_hr_prompts', None):
-        generation_params['Hires prompt'] = all_hr_prompts[index] if all_hr_prompts[index] != all_prompts[index] else None
-    if all_hr_negative_prompts := all_hr_negative_prompts or getattr(p, 'all_hr_negative_prompts', None):
-        generation_params['Hires negative prompt'] = all_hr_negative_prompts[index] if all_hr_negative_prompts[index] != all_negative_prompts[index] else None
+    for key, value in generation_params.items():
+        try:
+            if isinstance(value, list):
+                generation_params[key] = value[index]
+            elif callable(value):
+                generation_params[key] = value(**locals())
+        except Exception:
+            errors.report(f'Error creating infotext for key "{key}"', exc_info=True)
+            generation_params[key] = None

    generation_params_text = ", ".join([k if k == v else f'{k}: {infotext_utils.quote(v)}' for k, v in generation_params.items() if v is not None])

-    prompt_text = p.main_prompt if use_main_prompt else all_prompts[index]
-    negative_prompt_text = f"\nNegative prompt: {p.main_negative_prompt if use_main_prompt else all_negative_prompts[index]}" if all_negative_prompts[index] else ""
+    negative_prompt_text = f"\nNegative prompt: {negative_prompt}" if negative_prompt else ""

    return f"{prompt_text}{negative_prompt_text}\n{generation_params_text}".strip()
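A hedged sketch of what the docstring above describes, as an extension might use it from a script's `process()` hook (the script and entry names are illustrative, not part of this commit): a static string, a per-image list, and a callable resolved per image by the loop above.

```python
from modules import scripts


class InfotextParamsExample(scripts.Script):
    def title(self):
        return "Infotext params example"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def process(self, p, *args):
        # Plain string: written to the infotext as "Example mode: fast".
        p.extra_generation_params["Example mode"] = "fast"

        # List: one value per image in the whole job (batch_size * n_iter entries).
        p.extra_generation_params["Example image id"] = list(range(p.batch_size * p.n_iter))

        # Callable: called by create_infotext with locals() as **kwargs,
        # so it can read 'p' and 'index'; returning None omits the entry.
        def example_note(p, index, **kwargs):
            return f"image {index}" if index > 0 else None

        p.extra_generation_params["Example note"] = example_note
```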
@@ -1204,6 +1255,17 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
        if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
            self.extra_generation_params["Hires sampler"] = self.hr_sampler_name

+        def get_hr_prompt(p, index, prompt_text, **kwargs):
+            hr_prompt = p.all_hr_prompts[index]
+            return hr_prompt if hr_prompt != prompt_text else None
+
+        def get_hr_negative_prompt(p, index, negative_prompt, **kwargs):
+            hr_negative_prompt = p.all_hr_negative_prompts[index]
+            return hr_negative_prompt if hr_negative_prompt != negative_prompt else None
+
+        self.extra_generation_params["Hires prompt"] = get_hr_prompt
+        self.extra_generation_params["Hires negative prompt"] = get_hr_negative_prompt
+
        self.extra_generation_params["Hires schedule type"] = None  # to be set in sd_samplers_kdiffusion.py

        if self.hr_scheduler is None:
@@ -1549,16 +1611,23 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
            if self.inpaint_full_res:
                self.mask_for_overlay = image_mask
                mask = image_mask.convert('L')
-                crop_region = masking.get_crop_region(mask, self.inpaint_full_res_padding)
-                crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
-                x1, y1, x2, y2 = crop_region
-                mask = mask.crop(crop_region)
-                image_mask = images.resize_image(2, mask, self.width, self.height)
-                self.paste_to = (x1, y1, x2-x1, y2-y1)
-                self.extra_generation_params["Inpaint area"] = "Only masked"
-                self.extra_generation_params["Masked area padding"] = self.inpaint_full_res_padding
+                crop_region = masking.get_crop_region_v2(mask, self.inpaint_full_res_padding)
+                if crop_region:
+                    crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
+                    x1, y1, x2, y2 = crop_region
+                    mask = mask.crop(crop_region)
+                    image_mask = images.resize_image(2, mask, self.width, self.height)
+                    self.inpaint_full_res = False
+                    self.paste_to = (x1, y1, x2-x1, y2-y1)
+                    self.extra_generation_params["Inpaint area"] = "Only masked"
+                    self.extra_generation_params["Masked area padding"] = self.inpaint_full_res_padding
+                else:
+                    crop_region = None
+                    image_mask = None
+                    self.mask_for_overlay = None
+                    massage = 'Unable to perform "Inpaint Only mask" because mask is blank, switch to img2img mode.'
+                    model_hijack.comments.append(massage)
+                    logging.info(massage)
            else:
                image_mask = images.resize_image(self.resize_mode, image_mask, self.width, self.height)
                np_mask = np.array(image_mask)
@@ -1586,6 +1655,8 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
        image = images.resize_image(self.resize_mode, image, self.width, self.height)

        if image_mask is not None:
+            if self.mask_for_overlay.size != (image.width, image.height):
+                self.mask_for_overlay = images.resize_image(self.resize_mode, self.mask_for_overlay, image.width, image.height)
            image_masked = Image.new('RGBa', (image.width, image.height))
            image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert('L')))


@@ -439,12 +439,18 @@ def remove_current_script_callbacks():
    for callback_list in callback_map.values():
        for callback_to_remove in [cb for cb in callback_list if cb.script == filename]:
            callback_list.remove(callback_to_remove)
+    for ordered_callbacks_list in ordered_callbacks_map.values():
+        for callback_to_remove in [cb for cb in ordered_callbacks_list if cb.script == filename]:
+            ordered_callbacks_list.remove(callback_to_remove)

def remove_callbacks_for_function(callback_func):
    for callback_list in callback_map.values():
        for callback_to_remove in [cb for cb in callback_list if cb.callback == callback_func]:
            callback_list.remove(callback_to_remove)
+    for ordered_callback_list in ordered_callbacks_map.values():
+        for callback_to_remove in [cb for cb in ordered_callback_list if cb.callback == callback_func]:
+            ordered_callback_list.remove(callback_to_remove)

def on_app_started(callback, *, name=None):
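Callbacks registered through `script_callbacks` are now also purged from the ordered-callbacks registry when an extension cleans up, which matters for the Reload UI flow. A brief sketch of the register/clean-up pair (the callback body is illustrative):

```python
from modules import script_callbacks


def app_started(demo, app):
    # demo is the gradio Blocks instance, app the FastAPI application.
    print("extension ready")


script_callbacks.on_app_started(app_started)

# On unload / Reload UI an extension can drop everything registered from this file;
# with the change above this now also clears ordered_callbacks_map, not just callback_map.
script_callbacks.remove_current_script_callbacks()
```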


@@ -2,13 +2,20 @@ import os
import importlib.util
from modules import errors
+import sys
+
+loaded_scripts = {}

def load_module(path):
-    module_spec = importlib.util.spec_from_file_location(os.path.basename(path), path)
+    module_name, _ = os.path.splitext(os.path.basename(path))
+    full_module_name = "scripts." + module_name
+    module_spec = importlib.util.spec_from_file_location(full_module_name, path)
    module = importlib.util.module_from_spec(module_spec)
    module_spec.loader.exec_module(module)
+    loaded_scripts[path] = module
+    sys.modules[full_module_name] = module
    return module
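Because each loaded script is now registered in `sys.modules` under `scripts.<filename>`, other extensions can reach it by module name instead of scanning `scripts_data` — this is what the updated xyz_grid lookup near the top of this commit relies on. A small sketch, assuming the built-in `xyz_grid.py` script has already been loaded:

```python
import sys

# Registered by load_module() above as "scripts." + filename without extension.
xyz_grid = sys.modules.get("scripts.xyz_grid")

if xyz_grid is not None:
    # For example, list the axis labels the X/Y/Z plot script exposes.
    print([opt.label for opt in xyz_grid.axis_options])
```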


@@ -739,12 +739,17 @@ class ScriptRunner:
        def onload_script_visibility(params):
            title = params.get('Script', None)
            if title:
-                title_index = self.titles.index(title)
-                visibility = title_index == self.script_load_ctr
-                self.script_load_ctr = (self.script_load_ctr + 1) % len(self.titles)
-                return gr.update(visible=visibility)
-            else:
-                return gr.update(visible=False)
+                try:
+                    title_index = self.titles.index(title)
+                    visibility = title_index == self.script_load_ctr
+                    self.script_load_ctr = (self.script_load_ctr + 1) % len(self.titles)
+                    return gr.update(visible=visibility)
+                except ValueError:
+                    params['Script'] = None
+                    massage = f'Cannot find Script: "{title}"'
+                    print(massage)
+                    gr.Warning(massage)
+            return gr.update(visible=False)

        self.infotext_fields.append((dropdown, lambda x: gr.update(value=x.get('Script', 'None'))))
        self.infotext_fields.extend([(script.group, onload_script_visibility) for script in self.selectable_scripts])


@@ -143,6 +143,7 @@ class ScriptPostprocessingRunner:
        self.initialize_scripts(modules.scripts.postprocessing_scripts_data)

        scripts_order = shared.opts.postprocessing_operation_order
+        scripts_filter_out = set(shared.opts.postprocessing_disable_in_extras)

        def script_score(name):
            for i, possible_match in enumerate(scripts_order):
@@ -151,9 +152,10 @@
            return len(self.scripts)

-        script_scores = {script.name: (script_score(script.name), script.order, script.name, original_index) for original_index, script in enumerate(self.scripts)}
-        return sorted(self.scripts, key=lambda x: script_scores[x.name])
+        filtered_scripts = [script for script in self.scripts if script.name not in scripts_filter_out]
+        script_scores = {script.name: (script_score(script.name), script.order, script.name, original_index) for original_index, script in enumerate(filtered_scripts)}
+        return sorted(filtered_scripts, key=lambda x: script_scores[x.name])

    def setup_ui(self):
        inputs = []


@@ -1,5 +1,5 @@
import collections
-import os.path
+import os
import sys
import threading
@@ -7,7 +7,6 @@ import torch
import re
import safetensors.torch
from omegaconf import OmegaConf, ListConfig
-from os import mkdir
from urllib import request

import ldm.modules.midas as midas
@@ -151,7 +150,7 @@ def list_models():
    if shared.cmd_opts.no_download_sd_model or cmd_ckpt != shared.sd_model_file or os.path.exists(cmd_ckpt):
        model_url = None
    else:
-        model_url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
+        model_url = f"{shared.hf_endpoint}/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"

    model_list = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
@@ -508,7 +507,7 @@ def enable_midas_autodownload():
        path = midas.api.ISL_PATHS[model_type]
        if not os.path.exists(path):
            if not os.path.exists(midas_path):
-                mkdir(midas_path)
+                os.mkdir(midas_path)

            print(f"Downloading midas model weights for {model_type} to {path}")
            request.urlretrieve(midas_urls[model_type], path)
@@ -787,6 +786,13 @@ def reuse_model_from_already_loaded(sd_model, checkpoint_info, timer):
    Additionally deletes loaded models that are over the limit set in settings (sd_checkpoints_limit).
    """

+    if sd_model is not None and sd_model.sd_checkpoint_info.filename == checkpoint_info.filename:
+        return sd_model
+
+    if shared.opts.sd_checkpoints_keep_in_cpu:
+        send_model_to_cpu(sd_model)
+        timer.record("send model to cpu")
+
    already_loaded = None
    for i in reversed(range(len(model_data.loaded_sd_models))):
        loaded_model = model_data.loaded_sd_models[i]
@@ -796,14 +802,10 @@ def reuse_model_from_already_loaded(sd_model, checkpoint_info, timer):
        if len(model_data.loaded_sd_models) > shared.opts.sd_checkpoints_limit > 0:
            print(f"Unloading model {len(model_data.loaded_sd_models)} over the limit of {shared.opts.sd_checkpoints_limit}: {loaded_model.sd_checkpoint_info.title}")
-            model_data.loaded_sd_models.pop()
+            del model_data.loaded_sd_models[i]
            send_model_to_trash(loaded_model)
            timer.record("send model to trash")

-    if shared.opts.sd_checkpoints_keep_in_cpu:
-        send_model_to_cpu(sd_model)
-        timer.record("send model to cpu")

    if already_loaded is not None:
        send_model_to_device(already_loaded)
        timer.record("send model to device")


@@ -90,3 +90,5 @@ list_checkpoint_tiles = shared_items.list_checkpoint_tiles
refresh_checkpoints = shared_items.refresh_checkpoints
list_samplers = shared_items.list_samplers
reload_hypernetworks = shared_items.reload_hypernetworks
+
+hf_endpoint = os.getenv('HF_ENDPOINT', 'https://huggingface.co')
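The override is read once at import time, so it has to be set in the environment before the webui starts (for example `HF_ENDPOINT=https://hf-mirror.example.com ./webui.sh`, with a placeholder mirror URL). A minimal sketch of the resolution logic and how `sd_models.list_models()` uses it:

```python
import os

# Same default chain as modules/shared.py.
hf_endpoint = os.getenv("HF_ENDPOINT", "https://huggingface.co")

# modules/sd_models.py builds the fallback checkpoint URL from it:
model_url = f"{hf_endpoint}/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
print(model_url)
```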


@@ -19,7 +19,9 @@ restricted_opts = {
    "outdir_grids",
    "outdir_txt2img_grids",
    "outdir_save",
-    "outdir_init_images"
+    "outdir_init_images",
+    "temp_dir",
+    "clean_temp_dir_at_start",
}

categories.register_category("saving", "Saving images")
@@ -382,6 +384,7 @@ options_templates.update(options_section(('sampler-params', "Sampler parameters"
options_templates.update(options_section(('postprocessing', "Postprocessing", "postprocessing"), {
    'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
+    'postprocessing_disable_in_extras': OptionInfo([], "Disable postprocessing operations in extras tab", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
    'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
    'upscaling_max_images_in_cache': OptionInfo(5, "Maximum number of images in upscaling cache", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
    'postprocessing_existing_caption_action': OptionInfo("Ignore", "Action for existing captions", gr.Radio, {"choices": ["Ignore", "Keep", "Prepend", "Append"]}).info("when generating captions using postprocessing; Ignore = use generated; Keep = use original; Prepend/Append = combine both"),

View File

@@ -1,12 +1,16 @@
 import base64
 import json
+import os.path
 import warnings
+import logging

 import numpy as np
 import zlib
 from PIL import Image, ImageDraw
 import torch

+logger = logging.getLogger(__name__)
+

 class EmbeddingEncoder(json.JSONEncoder):
     def default(self, obj):
@@ -43,7 +47,7 @@ def lcg(m=2**32, a=1664525, c=1013904223, seed=0):

 def xor_block(block):
     g = lcg()
-    randblock = np.array([next(g) for _ in range(np.product(block.shape))]).astype(np.uint8).reshape(block.shape)
+    randblock = np.array([next(g) for _ in range(np.prod(block.shape))]).astype(np.uint8).reshape(block.shape)
     return np.bitwise_xor(block.astype(np.uint8), randblock & 0x0F)
@@ -114,7 +118,7 @@ def extract_image_data_embed(image):
     outarr = crop_black(np.array(image.convert('RGB').getdata()).reshape(image.size[1], image.size[0], d).astype(np.uint8)) & 0x0F
     black_cols = np.where(np.sum(outarr, axis=(0, 2)) == 0)
     if black_cols[0].shape[0] < 2:
-        print('No Image data blocks found.')
+        logger.debug(f'{os.path.basename(getattr(image, "filename", "unknown image file"))}: no embedded information found.')
         return None

     data_block_lower = outarr[:, :black_cols[0].min(), :].astype(np.uint8)
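Aside on the `np.product` to `np.prod` rename above: `np.product` is deprecated in recent NumPy releases and dropped in NumPy 2.0, while `np.prod` behaves identically for this use. A tiny sanity check (array shape chosen arbitrarily):

```python
import numpy as np

block = np.zeros((4, 4, 3), dtype=np.uint8)  # arbitrary example shape
print(np.prod(block.shape))  # 48 -- same element count np.product used to return
```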

View File

@@ -17,7 +17,7 @@ import modules.textual_inversion.dataset
 from modules.textual_inversion.learn_schedule import LearnRateScheduler
 from modules.textual_inversion.image_embedding import embedding_to_b64, embedding_from_b64, insert_image_data_embed, extract_image_data_embed, caption_image_overlay
-from modules.textual_inversion.logging import save_settings_to_file
+from modules.textual_inversion.saving_settings import save_settings_to_file

 TextualInversionTemplate = namedtuple("TextualInversionTemplate", ["name", "path"])

View File

@@ -327,8 +327,8 @@ def create_ui():
                         hr_checkpoint_name = gr.Dropdown(label='Checkpoint', elem_id="hr_checkpoint", choices=["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=True), value="Use same checkpoint")
                         create_refresh_button(hr_checkpoint_name, modules.sd_models.list_models, lambda: {"choices": ["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=True)}, "hr_checkpoint_refresh")

-                        hr_sampler_name = gr.Dropdown(label='Sampling method', elem_id="hr_sampler", choices=["Use same sampler"] + sd_samplers.visible_sampler_names(), value="Use same sampler")
-                        hr_scheduler = gr.Dropdown(label='Schedule type', elem_id="hr_scheduler", choices=["Use same scheduler"] + [x.label for x in sd_schedulers.schedulers], value="Use same scheduler")
+                        hr_sampler_name = gr.Dropdown(label='Hires sampling method', elem_id="hr_sampler", choices=["Use same sampler"] + sd_samplers.visible_sampler_names(), value="Use same sampler")
+                        hr_scheduler = gr.Dropdown(label='Hires schedule type', elem_id="hr_scheduler", choices=["Use same scheduler"] + [x.label for x in sd_schedulers.schedulers], value="Use same scheduler")

                     with FormRow(elem_id="txt2img_hires_fix_row4", variant="compact", visible=opts.hires_fix_show_prompts) as hr_prompts_container:
                         with gr.Column(scale=80):

View File

@@ -3,14 +3,13 @@ import dataclasses
 import json
 import html
 import os
-import platform
-import sys

 import gradio as gr
 import subprocess as sp

 from PIL import Image
-from modules import call_queue, shared, ui_tempdir
+from modules import call_queue, shared, ui_tempdir, util
+from modules.infotext_utils import image_from_url_text
 import modules.images
 from modules.ui_components import ToolButton
 import modules.infotext_utils as parameters_copypaste
@@ -179,31 +178,7 @@ def create_output_panel(tabname, outdir, toprow=None):
         except Exception:
             pass

-        if not os.path.exists(f):
-            msg = f'Folder "{f}" does not exist. After you create an image, the folder will be created.'
-            print(msg)
-            gr.Info(msg)
-            return
-        elif not os.path.isdir(f):
-            msg = f"""
-WARNING
-An open_folder request was made with an argument that is not a folder.
-This could be an error or a malicious attempt to run code on your computer.
-Requested path was: {f}
-"""
-            print(msg, file=sys.stderr)
-            gr.Warning(msg)
-            return
-
-        path = os.path.normpath(f)
-        if platform.system() == "Windows":
-            os.startfile(path)
-        elif platform.system() == "Darwin":
-            sp.Popen(["open", path])
-        elif "microsoft-standard-WSL2" in platform.uname().release:
-            sp.Popen(["wsl-open", path])
-        else:
-            sp.Popen(["xdg-open", path])
+        util.open_folder(f)

     with gr.Column(elem_id=f"{tabname}_results"):
         if toprow:

View File

@@ -58,8 +58,9 @@ def apply_and_restart(disable_list, update_list, disable_all):

 def save_config_state(name):
     current_config_state = config_states.get_config()
-    if not name:
-        name = "Config"
+
+    name = os.path.basename(name or "Config")
+
     current_config_state["name"] = name
     timestamp = datetime.now().strftime('%Y_%m_%d-%H_%M_%S')
     filename = os.path.join(config_states_dir, f"{timestamp}_{name}.json")
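To illustrate what the `os.path.basename` call buys here (the inputs are made up): it strips any directory components from a user-supplied config-state name, so the resulting file always lands inside `config_states_dir`.

```python
import os

for name in ["my-setup", "../../../etc/cron.d/evil", ""]:
    # mirrors the expression used above
    print(repr(os.path.basename(name or "Config")))
# 'my-setup'
# 'evil'
# 'Config'
```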

View File

@@ -57,7 +57,10 @@ class Upscaler:
         dest_h = int((img.height * scale) // 8 * 8)

         for _ in range(3):
-            if img.width >= dest_w and img.height >= dest_h:
+            if img.width >= dest_w and img.height >= dest_h and scale != 1:
+                break
+
+            if shared.state.interrupted:
                 break

             shape = (img.width, img.height)

View File

@@ -69,6 +69,8 @@ def upscale_with_model(
         for y, h, row in grid.tiles:
             newrow = []
             for x, w, tile in row:
+                if shared.state.interrupted:
+                    return img
                 output = upscale_pil_patch(model, tile)
                 scale_factor = output.width // tile.width
                 newrow.append([x * scale_factor, w * scale_factor, output])

View File

@@ -148,6 +148,11 @@ class MassFileLister:
         """Clear the cache of all directories."""
         self.cached_dirs.clear()

+    def update_file_entry(self, path):
+        """Update the cache for a specific directory."""
+        dirname, filename = os.path.split(path)
+        if cached_dir := self.cached_dirs.get(dirname):
+            cached_dir.update_entry(filename)

 def topological_sort(dependencies):
     """Accepts a dictionary mapping name to its dependencies, returns a list of names ordered according to dependencies.
@@ -171,3 +176,38 @@ def topological_sort(dependencies):
             inner(depname)

     return result
+
+
+def open_folder(path):
+    """Open a folder in the file manager of the respect OS."""
+    # import at function level to avoid potential issues
+    import gradio as gr
+    import platform
+    import sys
+    import subprocess
+
+    if not os.path.exists(path):
+        msg = f'Folder "{path}" does not exist. after you save an image, the folder will be created.'
+        print(msg)
+        gr.Info(msg)
+        return
+    elif not os.path.isdir(path):
+        msg = f"""
+WARNING
+An open_folder request was made with an path that is not a folder.
+This could be an error or a malicious attempt to run code on your computer.
+Requested path was: {path}
+"""
+        print(msg, file=sys.stderr)
+        gr.Warning(msg)
+        return
+
+    path = os.path.normpath(path)
+    if platform.system() == "Windows":
+        os.startfile(path)
+    elif platform.system() == "Darwin":
+        subprocess.Popen(["open", path])
+    elif "microsoft-standard-WSL2" in platform.uname().release:
+        subprocess.Popen(["wsl-open", path])
+    else:
+        subprocess.Popen(["xdg-open", path])

View File

@@ -30,3 +30,4 @@ torch
 torchdiffeq
 torchsde
 transformers==4.30.2
+pillow-avif-plugin==1.4.3

View File

@@ -29,3 +29,4 @@ torchdiffeq==0.2.3
 torchsde==0.2.6
 transformers==4.30.2
 httpx==0.24.1
+pillow-avif-plugin==1.4.3
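For context, a minimal sketch of what the new `pillow-avif-plugin` dependency enables (the file name is hypothetical): importing `pillow_avif` registers the AVIF codec with Pillow, after which `Image.open` can decode `.avif` files.

```python
from PIL import Image
import pillow_avif  # noqa: F401 -- the import itself registers the AVIF plugin

img = Image.open("example.avif")  # hypothetical file
print(img.size, img.mode)
```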

View File

@@ -12,6 +12,17 @@ from modules.ui import switch_values_symbol
 upscale_cache = {}


+def limit_size_by_one_dimention(w, h, limit):
+    if h > w and h > limit:
+        w = limit * w // h
+        h = limit
+    elif w > limit:
+        h = limit * h // w
+        w = limit
+
+    return int(w), int(h)
+
+
 class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
     name = "Upscale"
     order = 1000
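A quick worked example of the new helper (numbers chosen arbitrarily; the import path assumes the helper lives in scripts/postprocessing_upscale.py as in this repository): the longer side is clamped to the limit and the shorter side is scaled proportionally with integer math.

```python
from scripts.postprocessing_upscale import limit_size_by_one_dimention

print(limit_size_by_one_dimention(3000, 2000, 1024))  # (1024, 682) -- longer side capped
print(limit_size_by_one_dimention(800, 600, 1024))    # (800, 600) -- already within the limit
```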
@@ -30,7 +41,11 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
         with FormRow():
             with gr.Tabs(elem_id="extras_resize_mode"):
                 with gr.TabItem('Scale by', elem_id="extras_scale_by_tab") as tab_scale_by:
-                    upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label="Resize", value=4, elem_id="extras_upscaling_resize")
+                    with gr.Row():
+                        with gr.Column(scale=4):
+                            upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label="Resize", value=4, elem_id="extras_upscaling_resize")
+                        with gr.Column(scale=1, min_width=160):
+                            max_side_length = gr.Number(label="Max side length", value=0, elem_id="extras_upscale_max_side_length", tooltip="If any of two sides of the image ends up larger than specified, will downscale it to fit. 0 = no limit.", min_width=160, step=8, minimum=0)

                 with gr.TabItem('Scale to', elem_id="extras_scale_to_tab") as tab_scale_to:
                     with FormRow():
@@ -61,6 +76,7 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
             "upscale_enabled": upscale_enabled,
             "upscale_mode": selected_tab,
             "upscale_by": upscaling_resize,
+            "max_side_length": max_side_length,
             "upscale_to_width": upscaling_resize_w,
             "upscale_to_height": upscaling_resize_h,
             "upscale_crop": upscaling_crop,
@@ -69,12 +85,18 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
             "upscaler_2_visibility": extras_upscaler_2_visibility,
         }

-    def upscale(self, image, info, upscaler, upscale_mode, upscale_by, upscale_to_width, upscale_to_height, upscale_crop):
+    def upscale(self, image, info, upscaler, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop):
         if upscale_mode == 1:
             upscale_by = max(upscale_to_width/image.width, upscale_to_height/image.height)
             info["Postprocess upscale to"] = f"{upscale_to_width}x{upscale_to_height}"
         else:
             info["Postprocess upscale by"] = upscale_by

+        if max_side_length != 0 and max(*image.size)*upscale_by > max_side_length:
+            upscale_mode = 1
+            upscale_crop = False
+            upscale_to_width, upscale_to_height = limit_size_by_one_dimention(image.width*upscale_by, image.height*upscale_by, max_side_length)
+            upscale_by = max(upscale_to_width/image.width, upscale_to_height/image.height)
+            info["Max side length"] = max_side_length
+
         cache_key = (hash(np.array(image.getdata()).tobytes()), upscaler.name, upscale_mode, upscale_by, upscale_to_width, upscale_to_height, upscale_crop)
         cached_image = upscale_cache.pop(cache_key, None)
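Building on the helper above, a worked example of the new branch with made-up numbers: a 1000x500 source with `upscale_by=4` and `max_side_length=2048` is redirected to a capped target size, and the effective factor is recomputed from it.

```python
from scripts.postprocessing_upscale import limit_size_by_one_dimention  # path as in this repository

w, h, upscale_by, max_side_length = 1000, 500, 4.0, 2048
target = limit_size_by_one_dimention(w * upscale_by, h * upscale_by, max_side_length)
print(target)                             # (2048, 1024) instead of (4000, 2000)
print(max(target[0] / w, target[1] / h))  # effective upscale_by drops to 2.048
```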
@@ -96,7 +118,7 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):

         return image

-    def process_firstpass(self, pp: scripts_postprocessing.PostprocessedImage, upscale_enabled=True, upscale_mode=1, upscale_by=2.0, upscale_to_width=None, upscale_to_height=None, upscale_crop=False, upscaler_1_name=None, upscaler_2_name=None, upscaler_2_visibility=0.0):
+    def process_firstpass(self, pp: scripts_postprocessing.PostprocessedImage, upscale_enabled=True, upscale_mode=1, upscale_by=2.0, max_side_length=0, upscale_to_width=None, upscale_to_height=None, upscale_crop=False, upscaler_1_name=None, upscaler_2_name=None, upscaler_2_visibility=0.0):
         if upscale_mode == 1:
             pp.shared.target_width = upscale_to_width
             pp.shared.target_height = upscale_to_height
@@ -104,10 +126,13 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
             pp.shared.target_width = int(pp.image.width * upscale_by)
             pp.shared.target_height = int(pp.image.height * upscale_by)

-    def process(self, pp: scripts_postprocessing.PostprocessedImage, upscale_enabled=True, upscale_mode=1, upscale_by=2.0, upscale_to_width=None, upscale_to_height=None, upscale_crop=False, upscaler_1_name=None, upscaler_2_name=None, upscaler_2_visibility=0.0):
+        pp.shared.target_width, pp.shared.target_height = limit_size_by_one_dimention(pp.shared.target_width, pp.shared.target_height, max_side_length)
+
+    def process(self, pp: scripts_postprocessing.PostprocessedImage, upscale_enabled=True, upscale_mode=1, upscale_by=2.0, max_side_length=0, upscale_to_width=None, upscale_to_height=None, upscale_crop=False, upscaler_1_name=None, upscaler_2_name=None, upscaler_2_visibility=0.0):
         if not upscale_enabled:
             return

-        upscaler_1_name = upscaler_1_name
         if upscaler_1_name == "None":
             upscaler_1_name = None
@@ -117,17 +142,20 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
         if not upscaler1:
             return

-        upscaler_2_name = upscaler_2_name
         if upscaler_2_name == "None":
             upscaler_2_name = None

         upscaler2 = next(iter([x for x in shared.sd_upscalers if x.name == upscaler_2_name and x.name != "None"]), None)
         assert upscaler2 or (upscaler_2_name is None), f'could not find upscaler named {upscaler_2_name}'

-        upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, upscale_to_width, upscale_to_height, upscale_crop)
+        upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop)
         pp.info["Postprocess upscaler"] = upscaler1.name

         if upscaler2 and upscaler_2_visibility > 0:
-            second_upscale = self.upscale(pp.image, pp.info, upscaler2, upscale_mode, upscale_by, upscale_to_width, upscale_to_height, upscale_crop)
+            second_upscale = self.upscale(pp.image, pp.info, upscaler2, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop)
+            if upscaled_image.mode != second_upscale.mode:
+                second_upscale = second_upscale.convert(upscaled_image.mode)
             upscaled_image = Image.blend(upscaled_image, second_upscale, upscaler_2_visibility)
             pp.info["Postprocess upscaler 2"] = upscaler2.name
@@ -163,5 +191,5 @@ class ScriptPostprocessingUpscaleSimple(ScriptPostprocessingUpscale):
         upscaler1 = next(iter([x for x in shared.sd_upscalers if x.name == upscaler_name]), None)
         assert upscaler1, f'could not find upscaler named {upscaler_name}'

-        pp.image = self.upscale(pp.image, pp.info, upscaler1, 0, upscale_by, 0, 0, False)
+        pp.image = self.upscale(pp.image, pp.info, upscaler1, 0, upscale_by, 0, 0, 0, False)
         pp.info["Postprocess upscaler"] = upscaler1.name

View File

@@ -113,13 +113,13 @@ then
    exit 1
 fi

-if [[ -d .git ]]
+if [[ -d "$SCRIPT_DIR/.git" ]]
 then
     printf "\n%s\n" "${delimiter}"
     printf "Repo already cloned, using it as install directory"
     printf "\n%s\n" "${delimiter}"
-    install_dir="${PWD}/../"
-    clone_dir="${PWD##*/}"
+    install_dir="${SCRIPT_DIR}/../"
+    clone_dir="${SCRIPT_DIR##*/}"
 fi

 # Check prerequisites
@@ -129,7 +129,7 @@ case "$gpu_info" in
         export HSA_OVERRIDE_GFX_VERSION=10.3.0
         if [[ -z "${TORCH_COMMAND}" ]]
         then
-            pyv="$(${python_cmd} -c 'import sys; print(".".join(map(str, sys.version_info[0:2])))')"
+            pyv="$(${python_cmd} -c 'import sys; print(f"{sys.version_info[0]}.{sys.version_info[1]:02d}")')"
            # Using an old nightly compiled against rocm 5.2 for Navi1, see https://github.com/pytorch/pytorch/issues/106728#issuecomment-1749511711
            if [[ $pyv == "3.8" ]]
            then
@@ -243,7 +243,7 @@ prepare_tcmalloc() {
    for lib in "${TCMALLOC_LIBS[@]}"
    do
        # Determine which type of tcmalloc library the library supports
-        TCMALLOC="$(PATH=/usr/sbin:$PATH ldconfig -p | grep -P $lib | head -n 1)"
+        TCMALLOC="$(PATH=/sbin:/usr/sbin:$PATH ldconfig -p | grep -P $lib | head -n 1)"
        TC_INFO=(${TCMALLOC//=>/})
        if [[ ! -z "${TC_INFO}" ]]; then
            echo "Check TCMalloc: ${TC_INFO}"