Mirror of https://github.com/AUTOMATIC1111/stable-diffusion-webui.git, synced 2025-04-03 11:09:01 +08:00
Fix issue with cloning CLIP repository
Related to #9774. Adds a network connectivity check and a retry mechanism for the `git_clone` function.

* **launch.py**
  - Add a check for network connectivity before attempting to clone the repository.
  - Print a message and exit if the network connectivity check fails.
* **modules/launch_utils.py**
  - Update the `git_clone` function to include a retry mechanism with a default of 3 retries.
  - Print a message and retry the clone operation if it fails, up to the specified number of retries.
  - Clean up the directory before retrying the clone operation.
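A minimal sketch of the retry behavior described above, assuming the commit's stated defaults (3 retries, directory cleanup between attempts). The function name, the `run_git` injection point, and the error handling are illustrative, not the actual `modules/launch_utils.py` implementation:

```python
import shutil
import subprocess


def git_clone_with_retry(url, target_dir, retries=3, run_git=None):
    """Clone `url` into `target_dir`, retrying on failure.

    `run_git` defaults to a plain `git clone`; it is injectable so the
    retry logic can be exercised without touching the network.
    """
    if run_git is None:
        run_git = lambda: subprocess.check_call(["git", "clone", url, target_dir])

    for attempt in range(1, retries + 1):
        try:
            run_git()
            return
        except Exception as exc:
            print(f"Clone failed (attempt {attempt}/{retries}): {exc}")
            # Clean up the partially cloned directory before retrying.
            shutil.rmtree(target_dir, ignore_errors=True)
    raise RuntimeError(f"Failed to clone {url} after {retries} attempts")
```

The cleanup step matters because a failed `git clone` can leave a partial checkout behind, which would make the next attempt fail for an unrelated reason ("destination path already exists").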
This commit is contained in: parent 82a973c043, commit ecb039644a

README.md (432 lines changed)
# Stable Diffusion web UI

A web interface for Stable Diffusion, implemented using the Gradio library.



## Features

[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
  - a man in a `((tuxedo))` - will pay more attention to tuxedo
  - a man in a `(tuxedo:1.21)` - alternative syntax
  - select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
  - have as many embeddings as you want and use any names you like for them
  - use multiple embeddings with different numbers of vectors per token
  - works with half precision floating point numbers
  - train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
  - GFPGAN, neural network that fixes faces
  - CodeFormer, face restoration tool as an alternative to GFPGAN
  - RealESRGAN, neural network upscaler
  - ESRGAN, neural network upscaler with a lot of third party models
  - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
  - LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
  - Adjust sampler eta values (noise multiplier)
  - More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
  - parameters you used to generate images are saved with that image
  - in PNG chunks for PNG, in EXIF for JPEG
  - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
  - can be disabled in settings
  - drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
  - Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of a prompt and easily apply them via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess the prompt from an image
- Prompt Editing, a way to change the prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from the community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
  - separate prompts using uppercase `AND`
  - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
  - hypernetworks and embeddings options
  - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
- [Segmind Stable Diffusion](https://huggingface.co/segmind/SSD-1B) support
## Installation and Running

Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
- [Ascend NPUs](https://github.com/wangshuai09/stable-diffusion-webui/wiki/Install-and-run-on-Ascend-NPUs) (external wiki page)

Alternatively, use online services (like Google Colab):

- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)

### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)

### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3
```
If your system is very new, you need to install python3.11 or python3.10:
```bash
# Ubuntu 24.04
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.11

# Manjaro/Arch
sudo pacman -S yay
yay -S python311 # do not confuse with python3.11 package

# Only for 3.11
# Then set up env variable in launch script
export python_cmd="python3.11"
# or in webui-user.sh
python_cmd="python3.11"
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
Or just clone the repo wherever you want:
```bash
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
```

3. Run `webui.sh`.
4. Check `webui-user.sh` for options.

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
### Troubleshooting Network Issues

If you encounter network-related issues while cloning repositories or installing packages, follow these steps:

1. **Check Network Connectivity**: Ensure that your internet connection is stable and working. You can use the `ping` command to check connectivity to `github.com`:

   ```bash
   ping github.com
   ```

2. **Set Up Proxy**: If you are behind a proxy, configure your proxy settings for `git` and `pip`. Replace `ip:port` with your proxy's IP address and port:

   ```bash
   git config --global http.proxy http://ip:port
   git config --global https.proxy http://ip:port
   pip config set global.proxy http://ip:port
   ```
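Once the network issue is resolved, the proxy settings can be inspected and removed again with the standard `git config` and `pip config` commands. A short sketch, where `http://127.0.0.1:3128` is a placeholder proxy address, not a recommendation:

```shell
# Set a (placeholder) proxy, inspect it, then remove it again
git config --global http.proxy http://127.0.0.1:3128
git config --global --get http.proxy
git config --global --unset http.proxy
```

Leaving a stale proxy configured is a common cause of `git clone` failures that reappear after the original network problem is gone.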
3. **Retry Mechanism**: If the issue persists, you can add a retry mechanism to the `git_clone` function in `modules/launch_utils.py` to handle transient network issues. This will automatically retry the cloning process a few times before failing.

4. **Check Firewall and Security Software**: Ensure that your firewall or security software is not blocking the connection to `github.com`.

5. **Use VPN**: If you are in a region with restricted access to certain websites, consider using a VPN to bypass these restrictions.

## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)

## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.

- Stable Diffusion - https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers, https://github.com/mcmonkey4eva/sd3-ref
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- Spandrel - https://github.com/chaiNNer-org/spandrel implementing
  - GFPGAN - https://github.com/TencentARC/GFPGAN.git
  - CodeFormer - https://github.com/sczhou/CodeFormer
  - ESRGAN - https://github.com/xinntao/ESRGAN
  - SwinIR - https://github.com/JingyunLiang/SwinIR
  - Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Hypertile - tfernd - https://github.com/tfernd/HyperTile
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
launch.py (109 lines changed)
The updated `launch.py`:

```python
import socket

from modules import launch_utils

args = launch_utils.args
python = launch_utils.python
git = launch_utils.git
index_url = launch_utils.index_url
dir_repos = launch_utils.dir_repos

commit_hash = launch_utils.commit_hash
git_tag = launch_utils.git_tag

run = launch_utils.run
is_installed = launch_utils.is_installed
repo_dir = launch_utils.repo_dir

run_pip = launch_utils.run_pip
check_run_python = launch_utils.check_run_python
git_clone = launch_utils.git_clone
git_pull_recursive = launch_utils.git_pull_recursive
list_extensions = launch_utils.list_extensions
run_extension_installer = launch_utils.run_extension_installer
prepare_environment = launch_utils.prepare_environment
configure_for_tests = launch_utils.configure_for_tests
start = launch_utils.start


def check_network_connectivity():
    try:
        socket.create_connection(("www.github.com", 80))
        return True
    except OSError:
        return False


def main():
    if args.dump_sysinfo:
        filename = launch_utils.dump_sysinfo()

        print(f"Sysinfo saved as {filename}. Exiting...")

        exit(0)

    launch_utils.startup_timer.record("initial startup")

    with launch_utils.startup_timer.subcategory("prepare environment"):
        if not args.skip_prepare_environment:
            if check_network_connectivity():
                prepare_environment()
            else:
                print("Network connectivity check failed. Please check your network settings and try again.")
                exit(1)

    if args.test_server:
        configure_for_tests()

    start()


if __name__ == "__main__":
    main()
```
@@ -1,482 +1,488 @@
# this scripts installs necessary requirements and launches main program in webui.py
|
# this scripts installs necessary requirements and launches main program in webui.py
|
||||||
import logging
|
import logging
|
||||||
import re
|
import re
|
||||||
import subprocess
|
import subprocess
|
||||||
import os
|
import os
|
||||||
import shutil
|
import shutil
|
||||||
import sys
|
import sys
|
||||||
import importlib.util
|
import importlib.util
|
||||||
import importlib.metadata
|
import importlib.metadata
|
||||||
import platform
|
import platform
|
||||||
import json
|
import json
|
||||||
import shlex
|
import shlex
|
||||||
from functools import lru_cache
|
from functools import lru_cache
|
||||||
|
|
||||||
from modules import cmd_args, errors
|
from modules import cmd_args, errors
|
||||||
from modules.paths_internal import script_path, extensions_dir
|
from modules.paths_internal import script_path, extensions_dir
|
||||||
from modules.timer import startup_timer
|
from modules.timer import startup_timer
|
||||||
from modules import logging_config
|
from modules import logging_config
|
||||||
|
|
||||||
args, _ = cmd_args.parser.parse_known_args()
|
args, _ = cmd_args.parser.parse_known_args()
|
||||||
logging_config.setup_logging(args.loglevel)
|
logging_config.setup_logging(args.loglevel)
|
||||||
|
|
||||||
python = sys.executable
|
python = sys.executable
|
||||||
git = os.environ.get('GIT', "git")
|
git = os.environ.get('GIT', "git")
|
||||||
index_url = os.environ.get('INDEX_URL', "")
|
index_url = os.environ.get('INDEX_URL', "")
|
||||||
dir_repos = "repositories"
|
dir_repos = "repositories"
|
||||||
|
|
||||||
# Whether to default to printing command output
|
# Whether to default to printing command output
|
||||||
default_command_live = (os.environ.get('WEBUI_LAUNCH_LIVE_OUTPUT') == "1")
|
default_command_live = (os.environ.get('WEBUI_LAUNCH_LIVE_OUTPUT') == "1")
|
||||||
|
|
||||||
os.environ.setdefault('GRADIO_ANALYTICS_ENABLED', 'False')
|
os.environ.setdefault('GRADIO_ANALYTICS_ENABLED', 'False')
|
||||||
|
|
||||||
|
|
||||||
def check_python_version():
|
def check_python_version():
|
||||||
is_windows = platform.system() == "Windows"
|
is_windows = platform.system() == "Windows"
|
||||||
major = sys.version_info.major
|
major = sys.version_info.major
|
||||||
minor = sys.version_info.minor
|
minor = sys.version_info.minor
|
||||||
micro = sys.version_info.micro
|
micro = sys.version_info.micro
|
||||||
|
|
||||||
if is_windows:
|
if is_windows:
|
||||||
supported_minors = [10]
|
supported_minors = [10]
|
||||||
else:
|
else:
|
||||||
supported_minors = [7, 8, 9, 10, 11]
|
supported_minors = [7, 8, 9, 10, 11]
|
||||||
|
|
||||||
if not (major == 3 and minor in supported_minors):
|
if not (major == 3 and minor in supported_minors):
|
||||||
import modules.errors
|
import modules.errors
|
||||||
|
|
||||||
modules.errors.print_error_explanation(f"""
|
modules.errors.print_error_explanation(f"""
|
||||||
INCOMPATIBLE PYTHON VERSION
|
INCOMPATIBLE PYTHON VERSION
|
||||||
|
|
||||||
This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}.
|
This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}.
|
||||||
If you encounter an error with "RuntimeError: Couldn't install torch." message,
|
If you encounter an error with "RuntimeError: Couldn't install torch." message,
|
||||||
or any other error regarding unsuccessful package (library) installation,
|
or any other error regarding unsuccessful package (library) installation,
|
||||||
please downgrade (or upgrade) to the latest version of 3.10 Python
|
please downgrade (or upgrade) to the latest version of 3.10 Python
|
||||||
and delete current Python and "venv" folder in WebUI's directory.
|
and delete current Python and "venv" folder in WebUI's directory.
|
||||||
|
|
||||||
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
|
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
|
||||||
|
|
||||||
{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre" if is_windows else ""}
|
{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre" if is_windows else ""}
|
||||||
|
|
||||||
Use --skip-python-version-check to suppress this warning.
|
Use --skip-python-version-check to suppress this warning.
|
||||||
""")
|
""")
|
||||||
|
|
||||||
|
|
||||||
@lru_cache()
|
@lru_cache()
|
||||||
def commit_hash():
|
def commit_hash():
|
||||||
try:
|
try:
|
||||||
return subprocess.check_output([git, "-C", script_path, "rev-parse", "HEAD"], shell=False, encoding='utf8').strip()
|
return subprocess.check_output([git, "-C", script_path, "rev-parse", "HEAD"], shell=False, encoding='utf8').strip()
|
||||||
except Exception:
|
except Exception:
|
||||||
return "<none>"
|
return "<none>"
|
||||||
|
|
||||||
|
|
||||||
@lru_cache()
|
@lru_cache()
|
||||||
def git_tag():
|
def git_tag():
|
||||||
try:
|
try:
|
||||||
return subprocess.check_output([git, "-C", script_path, "describe", "--tags"], shell=False, encoding='utf8').strip()
|
return subprocess.check_output([git, "-C", script_path, "describe", "--tags"], shell=False, encoding='utf8').strip()
|
||||||
except Exception:
|
except Exception:
|
||||||
try:
|
try:
|
||||||
|
|
||||||
changelog_md = os.path.join(script_path, "CHANGELOG.md")
|
changelog_md = os.path.join(script_path, "CHANGELOG.md")
|
||||||
with open(changelog_md, "r", encoding="utf-8") as file:
|
with open(changelog_md, "r", encoding="utf-8") as file:
|
||||||
line = next((line.strip() for line in file if line.strip()), "<none>")
|
line = next((line.strip() for line in file if line.strip()), "<none>")
|
||||||
line = line.replace("## ", "")
|
line = line.replace("## ", "")
|
||||||
return line
|
return line
|
||||||
except Exception:
|
except Exception:
|
||||||
return "<none>"
|
return "<none>"
|
||||||
|
|
||||||
|
|
||||||
def run(command, desc=None, errdesc=None, custom_env=None, live: bool = default_command_live) -> str:
|
def run(command, desc=None, errdesc=None, custom_env=None, live: bool = default_command_live) -> str:
|
||||||
if desc is not None:
|
if desc is not None:
|
||||||
print(desc)
|
print(desc)
|
||||||
|
|
||||||
run_kwargs = {
|
run_kwargs = {
|
||||||
"args": command,
|
"args": command,
|
||||||
"shell": True,
|
"shell": True,
|
||||||
"env": os.environ if custom_env is None else custom_env,
|
"env": os.environ if custom_env is None else custom_env,
|
||||||
"encoding": 'utf8',
|
"encoding": 'utf8',
|
||||||
"errors": 'ignore',
|
"errors": 'ignore',
|
||||||
}
|
}
|
||||||
|
|
||||||
if not live:
|
if not live:
|
||||||
run_kwargs["stdout"] = run_kwargs["stderr"] = subprocess.PIPE
|
run_kwargs["stdout"] = run_kwargs["stderr"] = subprocess.PIPE
|
||||||
|
|
||||||
result = subprocess.run(**run_kwargs)
|
result = subprocess.run(**run_kwargs)
|
||||||
|
|
||||||
if result.returncode != 0:
|
if result.returncode != 0:
|
||||||
error_bits = [
|
error_bits = [
|
||||||
f"{errdesc or 'Error running command'}.",
|
f"{errdesc or 'Error running command'}.",
|
||||||
f"Command: {command}",
|
f"Command: {command}",
|
||||||
f"Error code: {result.returncode}",
|
f"Error code: {result.returncode}",
|
||||||
]
|
]
|
||||||
if result.stdout:
|
if result.stdout:
|
||||||
error_bits.append(f"stdout: {result.stdout}")
|
error_bits.append(f"stdout: {result.stdout}")
|
||||||
if result.stderr:
|
if result.stderr:
|
||||||
error_bits.append(f"stderr: {result.stderr}")
|
error_bits.append(f"stderr: {result.stderr}")
|
||||||
raise RuntimeError("\n".join(error_bits))
|
raise RuntimeError("\n".join(error_bits))
|
||||||
|
|
||||||
return (result.stdout or "")
|
return (result.stdout or "")
|
||||||
|
|
||||||
|
|
||||||
def is_installed(package):
|
def is_installed(package):
|
||||||
try:
|
try:
|
||||||
dist = importlib.metadata.distribution(package)
|
dist = importlib.metadata.distribution(package)
|
||||||
except importlib.metadata.PackageNotFoundError:
|
except importlib.metadata.PackageNotFoundError:
|
||||||
try:
|
try:
|
||||||
spec = importlib.util.find_spec(package)
|
spec = importlib.util.find_spec(package)
|
||||||
except ModuleNotFoundError:
|
except ModuleNotFoundError:
|
||||||
return False
|
return False
|
||||||
|
|
||||||
return spec is not None
|
return spec is not None
|
||||||
|
|
||||||
return dist is not None
|
return dist is not None
|
||||||
|
|
||||||
|
|
||||||
def repo_dir(name):
|
def repo_dir(name):
|
||||||
return os.path.join(script_path, dir_repos, name)
|
return os.path.join(script_path, dir_repos, name)
|
||||||
|
|
||||||
|
|
||||||
def run_pip(command, desc=None, live=default_command_live):
|
def run_pip(command, desc=None, live=default_command_live):
|
||||||
if args.skip_install:
|
if args.skip_install:
|
||||||
return
|
return
|
||||||
|
|
||||||
index_url_line = f' --index-url {index_url}' if index_url != '' else ''
|
index_url_line = f' --index-url {index_url}' if index_url != '' else ''
|
||||||
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
|
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
|
||||||
|
|
||||||
|
|
||||||
def check_run_python(code: str) -> bool:
|
def check_run_python(code: str) -> bool:
|
||||||
result = subprocess.run([python, "-c", code], capture_output=True, shell=False)
|
result = subprocess.run([python, "-c", code], capture_output=True, shell=False)
|
||||||
return result.returncode == 0
|
return result.returncode == 0
|
||||||
|
|
||||||
|
|
||||||
def git_fix_workspace(dir, name):
|
def git_fix_workspace(dir, name):
|
||||||
run(f'"{git}" -C "{dir}" fetch --refetch --no-auto-gc', f"Fetching all contents for {name}", f"Couldn't fetch {name}", live=True)
|
run(f'"{git}" -C "{dir}" fetch --refetch --no-auto-gc', f"Fetching all contents for {name}", f"Couldn't fetch {name}", live=True)
|
||||||
run(f'"{git}" -C "{dir}" gc --aggressive --prune=now', f"Pruning {name}", f"Couldn't prune {name}", live=True)
|
run(f'"{git}" -C "{dir}" gc --aggressive --prune=now', f"Pruning {name}", f"Couldn't prune {name}", live=True)
|
||||||
return
|
return
|
||||||
|
|
||||||
|
|
||||||
def run_git(dir, name, command, desc=None, errdesc=None, custom_env=None, live: bool = default_command_live, autofix=True):
|
def run_git(dir, name, command, desc=None, errdesc=None, custom_env=None, live: bool = default_command_live, autofix=True):
|
||||||
try:
|
try:
|
||||||
return run(f'"{git}" -C "{dir}" {command}', desc=desc, errdesc=errdesc, custom_env=custom_env, live=live)
|
return run(f'"{git}" -C "{dir}" {command}', desc=desc, errdesc=errdesc, custom_env=custom_env, live=live)
|
||||||
except RuntimeError:
|
except RuntimeError:
|
||||||
if not autofix:
|
if not autofix:
|
||||||
raise
|
raise
|
||||||
|
|
||||||
print(f"{errdesc}, attempting autofix...")
|
print(f"{errdesc}, attempting autofix...")
|
||||||
git_fix_workspace(dir, name)
|
git_fix_workspace(dir, name)
|
||||||
|
|
||||||
return run(f'"{git}" -C "{dir}" {command}', desc=desc, errdesc=errdesc, custom_env=custom_env, live=live)
|
return run(f'"{git}" -C "{dir}" {command}', desc=desc, errdesc=errdesc, custom_env=custom_env, live=live)
|
||||||
|
|
||||||
|
|
||||||
def git_clone(url, dir, name, commithash=None):
|
def git_clone(url, dir, name, commithash=None, retries=3):
|
||||||
# TODO clone into temporary dir and move if successful
|
# TODO clone into temporary dir and move if successful
|
||||||
|
|
||||||
if os.path.exists(dir):
|
if os.path.exists(dir):
|
||||||
if commithash is None:
|
if commithash is None:
|
||||||
return
|
return
|
||||||
|
|
||||||
current_hash = run_git(dir, name, 'rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}", live=False).strip()
|
current_hash = run_git(dir, name, 'rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}", live=False).strip()
|
||||||
if current_hash == commithash:
|
if current_hash == commithash:
|
||||||
return
|
return
|
||||||
|
|
||||||
if run_git(dir, name, 'config --get remote.origin.url', None, f"Couldn't determine {name}'s origin URL", live=False).strip() != url:
|
if run_git(dir, name, 'config --get remote.origin.url', None, f"Couldn't determine {name}'s origin URL", live=False).strip() != url:
|
||||||
run_git(dir, name, f'remote set-url origin "{url}"', None, f"Failed to set {name}'s origin URL", live=False)
|
run_git(dir, name, f'remote set-url origin "{url}"', None, f"Failed to set {name}'s origin URL", live=False)
|
||||||
|
|
||||||
run_git(dir, name, 'fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}", autofix=False)
|
run_git(dir, name, 'fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}", autofix=False)
|
||||||
|
|
||||||
run_git(dir, name, f'checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}", live=True)
|
run_git(dir, name, f'checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}", live=True)
|
||||||
|
|
||||||
return
|
return
|
||||||
|
|
||||||
try:
|
for attempt in range(retries):
|
||||||
run(f'"{git}" clone --config core.filemode=false "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
|
try:
|
||||||
except RuntimeError:
|
run(f'"{git}" clone --config core.filemode=false "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
|
||||||
shutil.rmtree(dir, ignore_errors=True)
|
break
|
||||||
raise
|
except RuntimeError:
|
||||||
|
if attempt < retries - 1:
|
||||||
if commithash is not None:
|
print(f"Retrying clone for {name} (attempt {attempt + 1}/{retries})...")
|
||||||
run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
|
shutil.rmtree(dir, ignore_errors=True)
|
||||||
|
else:
|
||||||
|
shutil.rmtree(dir, ignore_errors=True)
|
||||||
def git_pull_recursive(dir):
|
raise
|
||||||
for subdir, _, _ in os.walk(dir):
|
|
||||||
if os.path.exists(os.path.join(subdir, '.git')):
|
if commithash is not None:
|
||||||
try:
|
run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
|
||||||
output = subprocess.check_output([git, '-C', subdir, 'pull', '--autostash'])
|
|
||||||
print(f"Pulled changes for repository in '{subdir}':\n{output.decode('utf-8').strip()}\n")
|
|
||||||
except subprocess.CalledProcessError as e:
|
def git_pull_recursive(dir):
|
||||||
print(f"Couldn't perform 'git pull' on repository in '{subdir}':\n{e.output.decode('utf-8').strip()}\n")
|
for subdir, _, _ in os.walk(dir):
|
||||||
|
if os.path.exists(os.path.join(subdir, '.git')):
|
||||||
|
try:
|
||||||
def version_check(commit):
|
output = subprocess.check_output([git, '-C', subdir, 'pull', '--autostash'])
|
||||||
try:
|
print(f"Pulled changes for repository in '{subdir}':\n{output.decode('utf-8').strip()}\n")
|
||||||
import requests
|
except subprocess.CalledProcessError as e:
|
||||||
commits = requests.get('https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/branches/master').json()
|
print(f"Couldn't perform 'git pull' on repository in '{subdir}':\n{e.output.decode('utf-8').strip()}\n")
|
||||||
if commit != "<none>" and commits['commit']['sha'] != commit:
|
|
||||||
print("--------------------------------------------------------")
|
|
||||||
print("| You are not up to date with the most recent release. |")
|
def version_check(commit):
|
||||||
print("| Consider running `git pull` to update. |")
|
try:
|
||||||
print("--------------------------------------------------------")
|
import requests
|
||||||
elif commits['commit']['sha'] == commit:
|
commits = requests.get('https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/branches/master').json()
|
||||||
print("You are up to date with the most recent release.")
|
if commit != "<none>" and commits['commit']['sha'] != commit:
|
||||||
else:
|
print("--------------------------------------------------------")
|
||||||
print("Not a git clone, can't perform version check.")
|
print("| You are not up to date with the most recent release. |")
|
||||||
except Exception as e:
|
print("| Consider running `git pull` to update. |")
|
||||||
print("version check failed", e)
|
print("--------------------------------------------------------")
|
||||||
|
elif commits['commit']['sha'] == commit:
|
||||||
|
print("You are up to date with the most recent release.")
|
||||||
def run_extension_installer(extension_dir):
|
else:
|
||||||
path_installer = os.path.join(extension_dir, "install.py")
|
print("Not a git clone, can't perform version check.")
|
||||||
if not os.path.isfile(path_installer):
|
except Exception as e:
|
||||||
return
|
print("version check failed", e)
|
||||||
|
|
||||||
try:
|
|
||||||
env = os.environ.copy()
|
def run_extension_installer(extension_dir):
|
||||||
env['PYTHONPATH'] = f"{script_path}{os.pathsep}{env.get('PYTHONPATH', '')}"
|
path_installer = os.path.join(extension_dir, "install.py")
|
||||||
|
if not os.path.isfile(path_installer):
|
||||||
stdout = run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env).strip()
|
return
|
||||||
if stdout:
|
|
||||||
print(stdout)
|
try:
|
||||||
except Exception as e:
|
env = os.environ.copy()
|
||||||
errors.report(str(e))
|
env['PYTHONPATH'] = f"{script_path}{os.pathsep}{env.get('PYTHONPATH', '')}"
|
||||||
|
|
||||||
|
stdout = run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env).strip()
|
||||||
def list_extensions(settings_file):
|
if stdout:
|
||||||
settings = {}
|
print(stdout)
|
||||||
|
except Exception as e:
|
||||||
try:
|
errors.report(str(e))
|
||||||
with open(settings_file, "r", encoding="utf8") as file:
|
|
||||||
settings = json.load(file)
|
|
||||||
except FileNotFoundError:
|
def list_extensions(settings_file):
|
||||||
pass
|
settings = {}
|
||||||
except Exception:
|
|
||||||
errors.report(f'\nCould not load settings\nThe config file "{settings_file}" is likely corrupted\nIt has been moved to the "tmp/config.json"\nReverting config to default\n\n''', exc_info=True)
|
try:
|
||||||
os.replace(settings_file, os.path.join(script_path, "tmp", "config.json"))
|
with open(settings_file, "r", encoding="utf8") as file:
|
||||||
|
settings = json.load(file)
|
||||||
disabled_extensions = set(settings.get('disabled_extensions', []))
|
except FileNotFoundError:
|
||||||
disable_all_extensions = settings.get('disable_all_extensions', 'none')
|
pass
|
||||||
|
except Exception:
|
||||||
if disable_all_extensions != 'none' or args.disable_extra_extensions or args.disable_all_extensions or not os.path.isdir(extensions_dir):
|
errors.report(f'\nCould not load settings\nThe config file "{settings_file}" is likely corrupted\nIt has been moved to the "tmp/config.json"\nReverting config to default\n\n''', exc_info=True)
|
||||||
return []
|
os.replace(settings_file, os.path.join(script_path, "tmp", "config.json"))
|
||||||
|
|
||||||
return [x for x in os.listdir(extensions_dir) if x not in disabled_extensions]
|
disabled_extensions = set(settings.get('disabled_extensions', []))
|
||||||
|
disable_all_extensions = settings.get('disable_all_extensions', 'none')
|
||||||
|
|
||||||
def run_extensions_installers(settings_file):
|
if disable_all_extensions != 'none' or args.disable_extra_extensions or args.disable_all_extensions or not os.path.isdir(extensions_dir):
|
||||||
if not os.path.isdir(extensions_dir):
|
return []
|
||||||
return
|
|
||||||
|
return [x for x in os.listdir(extensions_dir) if x not in disabled_extensions]
|
||||||
with startup_timer.subcategory("run extensions installers"):
|
|
||||||
for dirname_extension in list_extensions(settings_file):
|
|
||||||
logging.debug(f"Installing {dirname_extension}")
|
def run_extensions_installers(settings_file):
|
||||||
|
if not os.path.isdir(extensions_dir):
|
||||||
path = os.path.join(extensions_dir, dirname_extension)
|
return
|
||||||
|
|
||||||
if os.path.isdir(path):
|
with startup_timer.subcategory("run extensions installers"):
|
||||||
run_extension_installer(path)
|
for dirname_extension in list_extensions(settings_file):
|
||||||
startup_timer.record(dirname_extension)
|
logging.debug(f"Installing {dirname_extension}")
|
||||||
|
|
||||||
|
path = os.path.join(extensions_dir, dirname_extension)
|
||||||
re_requirement = re.compile(r"\s*([-_a-zA-Z0-9]+)\s*(?:==\s*([-+_.a-zA-Z0-9]+))?\s*")
|
|
||||||
|
if os.path.isdir(path):
|
||||||
|
run_extension_installer(path)
|
||||||
def requirements_met(requirements_file):
|
startup_timer.record(dirname_extension)
|
||||||
"""
|
|
||||||
Does a simple parse of a requirements.txt file to determine if all rerqirements in it
|
|
||||||
are already installed. Returns True if so, False if not installed or parsing fails.
|
re_requirement = re.compile(r"\s*([-_a-zA-Z0-9]+)\s*(?:==\s*([-+_.a-zA-Z0-9]+))?\s*")
|
||||||
"""
|
|
||||||
|
|
||||||
import importlib.metadata
|
def requirements_met(requirements_file):
|
||||||
import packaging.version
|
"""
|
||||||
|
Does a simple parse of a requirements.txt file to determine if all rerqirements in it
|
||||||
with open(requirements_file, "r", encoding="utf8") as file:
|
are already installed. Returns True if so, False if not installed or parsing fails.
|
||||||
for line in file:
|
"""
|
||||||
if line.strip() == "":
|
|
||||||
continue
|
import importlib.metadata
|
||||||
|
import packaging.version
|
||||||
m = re.match(re_requirement, line)
|
|
||||||
if m is None:
|
with open(requirements_file, "r", encoding="utf8") as file:
|
||||||
return False
|
for line in file:
|
||||||
|
if line.strip() == "":
|
||||||
package = m.group(1).strip()
|
continue
|
||||||
version_required = (m.group(2) or "").strip()
|
|
||||||
|
m = re.match(re_requirement, line)
|
||||||
if version_required == "":
|
if m is None:
|
||||||
continue
|
return False
|
||||||
|
|
||||||
try:
|
package = m.group(1).strip()
|
||||||
version_installed = importlib.metadata.version(package)
|
version_required = (m.group(2) or "").strip()
|
||||||
except Exception:
|
|
||||||
return False
|
if version_required == "":
|
||||||
|
continue
|
||||||
if packaging.version.parse(version_required) != packaging.version.parse(version_installed):
|
|
||||||
return False
|
try:
|
||||||
|
version_installed = importlib.metadata.version(package)
|
||||||
return True
|
except Exception:
|
||||||
|
return False
|
||||||
|
|
||||||
def prepare_environment():
|
if packaging.version.parse(version_required) != packaging.version.parse(version_installed):
|
||||||
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu121")
|
return False
|
||||||
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url {torch_index_url}")
|
|
||||||
if args.use_ipex:
|
return True
|
||||||
if platform.system() == "Windows":
|
|
||||||
# The "Nuullll/intel-extension-for-pytorch" wheels were built from IPEX source for Intel Arc GPU: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main
|
|
||||||
# This is NOT an Intel official release so please use it at your own risk!!
|
def prepare_environment():
|
||||||
# See https://github.com/Nuullll/intel-extension-for-pytorch/releases/tag/v2.0.110%2Bxpu-master%2Bdll-bundle for details.
|
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu121")
|
||||||
#
|
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url {torch_index_url}")
|
||||||
# Strengths (over official IPEX 2.0.110 windows release):
|
if args.use_ipex:
|
||||||
# - AOT build (for Arc GPU only) to eliminate JIT compilation overhead: https://github.com/intel/intel-extension-for-pytorch/issues/399
|
if platform.system() == "Windows":
|
||||||
# - Bundles minimal oneAPI 2023.2 dependencies into the python wheels, so users don't need to install oneAPI for the whole system.
|
# The "Nuullll/intel-extension-for-pytorch" wheels were built from IPEX source for Intel Arc GPU: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main
|
||||||
# - Provides a compatible torchvision wheel: https://github.com/intel/intel-extension-for-pytorch/issues/465
|
# This is NOT an Intel official release so please use it at your own risk!!
|
||||||
# Limitation:
|
# See https://github.com/Nuullll/intel-extension-for-pytorch/releases/tag/v2.0.110%2Bxpu-master%2Bdll-bundle for details.
|
||||||
# - Only works for python 3.10
|
#
|
||||||
url_prefix = "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.0.110%2Bxpu-master%2Bdll-bundle"
|
# Strengths (over official IPEX 2.0.110 windows release):
|
||||||
torch_command = os.environ.get('TORCH_COMMAND', f"pip install {url_prefix}/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl {url_prefix}/torchvision-0.15.2a0+fa99a53-cp310-cp310-win_amd64.whl {url_prefix}/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl")
|
# - AOT build (for Arc GPU only) to eliminate JIT compilation overhead: https://github.com/intel/intel-extension-for-pytorch/issues/399
|
||||||
else:
|
# - Bundles minimal oneAPI 2023.2 dependencies into the python wheels, so users don't need to install oneAPI for the whole system.
|
||||||
# Using official IPEX release for linux since it's already an AOT build.
|
# - Provides a compatible torchvision wheel: https://github.com/intel/intel-extension-for-pytorch/issues/465
|
||||||
# However, users still have to install oneAPI toolkit and activate oneAPI environment manually.
|
# Limitation:
|
||||||
# See https://intel.github.io/intel-extension-for-pytorch/index.html#installation for details.
|
# - Only works for python 3.10
|
||||||
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/")
|
url_prefix = "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.0.110%2Bxpu-master%2Bdll-bundle"
|
||||||
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.0a0 intel-extension-for-pytorch==2.0.110+gitba7f6c1 --extra-index-url {torch_index_url}")
|
torch_command = os.environ.get('TORCH_COMMAND', f"pip install {url_prefix}/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl {url_prefix}/torchvision-0.15.2a0+fa99a53-cp310-cp310-win_amd64.whl {url_prefix}/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl")
|
||||||
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
|
else:
|
||||||
requirements_file_for_npu = os.environ.get('REQS_FILE_FOR_NPU', "requirements_npu.txt")
|
# Using official IPEX release for linux since it's already an AOT build.
|
||||||
|
# However, users still have to install oneAPI toolkit and activate oneAPI environment manually.
|
||||||
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23.post1')
|
# See https://intel.github.io/intel-extension-for-pytorch/index.html#installation for details.
|
||||||
clip_package = os.environ.get('CLIP_PACKAGE', "https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip")
|
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/")
|
||||||
openclip_package = os.environ.get('OPENCLIP_PACKAGE', "https://github.com/mlfoundations/open_clip/archive/bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b.zip")
|
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.0a0 intel-extension-for-pytorch==2.0.110+gitba7f6c1 --extra-index-url {torch_index_url}")
|
||||||
|
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
|
||||||
assets_repo = os.environ.get('ASSETS_REPO', "https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git")
|
requirements_file_for_npu = os.environ.get('REQS_FILE_FOR_NPU', "requirements_npu.txt")
|
||||||
stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git")
|
|
||||||
stable_diffusion_xl_repo = os.environ.get('STABLE_DIFFUSION_XL_REPO', "https://github.com/Stability-AI/generative-models.git")
|
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23.post1')
|
||||||
k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
|
clip_package = os.environ.get('CLIP_PACKAGE', "https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip")
|
||||||
blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
|
    openclip_package = os.environ.get('OPENCLIP_PACKAGE', "https://github.com/mlfoundations/open_clip/archive/bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b.zip")

    assets_repo = os.environ.get('ASSETS_REPO', "https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git")
    stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git")
    stable_diffusion_xl_repo = os.environ.get('STABLE_DIFFUSION_XL_REPO', "https://github.com/Stability-AI/generative-models.git")
    k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
    blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')

    assets_commit_hash = os.environ.get('ASSETS_COMMIT_HASH', "6f7db241d2f8ba7457bac5ca9753331f0c266917")
    stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf")
    stable_diffusion_xl_commit_hash = os.environ.get('STABLE_DIFFUSION_XL_COMMIT_HASH', "45c443b316737a4ab6e40413d7794a7f5657c19f")
    k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "ab527a9a6d347f364e3d185ba6d714e22d80cb3c")
    blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")

    try:
        # the existence of this file is a signal to webui.sh/bat that webui needs to be restarted when it stops execution
        os.remove(os.path.join(script_path, "tmp", "restart"))
        os.environ.setdefault('SD_WEBUI_RESTARTING', '1')
    except OSError:
        pass
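Every setting above follows the same override pattern: `os.environ.get(name, default)` returns the pinned default unless the environment supplies a replacement, so a fork or CI job can point at a different repo or commit without editing the file. A minimal illustration of that behavior (the variable names here are made up for the example, not part of webui):

```python
import os

def setting(name: str, default: str) -> str:
    # Same shape as the lines above: the environment wins when set,
    # otherwise the pinned default applies.
    return os.environ.get(name, default)

# No override set: the default comes back.
assert setting('DEMO_REPO_URL_UNSET', 'https://example.com/a.git') == 'https://example.com/a.git'

# Override set: the environment value comes back.
os.environ['DEMO_REPO_URL_SET'] = 'https://example.com/fork.git'
assert setting('DEMO_REPO_URL_SET', 'https://example.com/a.git') == 'https://example.com/fork.git'
```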
    if not args.skip_python_version_check:
        check_python_version()

    startup_timer.record("checks")

    commit = commit_hash()
    tag = git_tag()
    startup_timer.record("git version info")
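Each `startup_timer.record(...)` call tags the phase that just finished with the time elapsed since the previous record. The real `Timer` lives in `modules/timer.py` and does more; a minimal stand-in showing the recording idea might look like:

```python
import time

class PhaseTimer:
    """Record named phases with the time elapsed since the previous record."""

    def __init__(self):
        self.start = time.monotonic()
        self.records = {}

    def record(self, name):
        # Store the delta since the last record, then reset the baseline
        # so the next phase is measured from here.
        now = time.monotonic()
        self.records[name] = now - self.start
        self.start = now

timer = PhaseTimer()
time.sleep(0.01)
timer.record("checks")
```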
    print(f"Python {sys.version}")
    print(f"Version: {tag}")
    print(f"Commit hash: {commit}")

    if args.reinstall_torch or not is_installed("torch") or not is_installed("torchvision"):
        run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
        startup_timer.record("install torch")

    if args.use_ipex:
        args.skip_torch_cuda_test = True
    if not args.skip_torch_cuda_test and not check_run_python("import torch; assert torch.cuda.is_available()"):
        raise RuntimeError(
            'Torch is not able to use GPU; '
            'add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'
        )
    startup_timer.record("torch GPU test")
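The GPU test runs its snippet in a fresh interpreter rather than importing torch into the launcher process, so a broken install cannot crash the launcher itself: success is judged by the child's exit code. A stand-in for that pattern using only the standard library (the real helper is defined elsewhere in `launch_utils.py`; this only shows the shape):

```python
import subprocess
import sys

def check_run_python(code: str) -> bool:
    # Run a short check in a separate Python process; exit code 0 means
    # the snippet ran without raising.
    result = subprocess.run([sys.executable, "-c", code], capture_output=True)
    return result.returncode == 0

assert check_run_python("import sys")              # clean run, exit code 0
assert not check_run_python("raise SystemExit(1)")  # failing check, nonzero exit
```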
    if not is_installed("clip"):
        run_pip(f"install {clip_package}", "clip")
        startup_timer.record("install clip")

    if not is_installed("open_clip"):
        run_pip(f"install {openclip_package}", "open_clip")
        startup_timer.record("install open_clip")
    if (not is_installed("xformers") or args.reinstall_xformers) and args.xformers:
        run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
        startup_timer.record("install xformers")

    if not is_installed("ngrok") and args.ngrok:
        run_pip("install ngrok", "ngrok")
        startup_timer.record("install ngrok")
    os.makedirs(os.path.join(script_path, dir_repos), exist_ok=True)

    git_clone(assets_repo, repo_dir('stable-diffusion-webui-assets'), "assets", assets_commit_hash)
    git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash)
    git_clone(stable_diffusion_xl_repo, repo_dir('generative-models'), "Stable Diffusion XL", stable_diffusion_xl_commit_hash)
    git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
    git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)

    startup_timer.record("clone repositories")
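These clone calls are the step that #9774 reports failing on flaky networks; the commit wraps `git_clone` in a bounded retry (3 attempts by default) and cleans up the partial checkout before trying again. A simplified sketch of that retry shape — `clone_fn` stands in for the real git invocation, and the demonstration at the bottom uses a fake clone rather than the network:

```python
import os
import shutil

def clone_with_retry(clone_fn, target_dir, retries=3):
    # Attempt the clone up to `retries` times; on failure, remove any
    # partial checkout so the next attempt starts from a clean slate.
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return clone_fn(target_dir)
        except RuntimeError as err:
            last_error = err
            print(f"Clone failed (attempt {attempt}/{retries}): {err}")
            if os.path.isdir(target_dir):
                shutil.rmtree(target_dir)
    raise last_error

# A fake clone that fails twice, then succeeds on the third attempt.
attempts = []
def flaky_clone(target_dir):
    attempts.append(target_dir)
    if len(attempts) < 3:
        raise RuntimeError("network unreachable")
    return "ok"

assert clone_with_retry(flaky_clone, "/tmp/does-not-exist-demo") == "ok"
assert len(attempts) == 3
```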
    if not os.path.isfile(requirements_file):
        requirements_file = os.path.join(script_path, requirements_file)

    if not requirements_met(requirements_file):
        run_pip(f"install -r \"{requirements_file}\"", "requirements")
        startup_timer.record("install requirements")

    if not os.path.isfile(requirements_file_for_npu):
        requirements_file_for_npu = os.path.join(script_path, requirements_file_for_npu)

    if "torch_npu" in torch_command and not requirements_met(requirements_file_for_npu):
        run_pip(f"install -r \"{requirements_file_for_npu}\"", "requirements_for_npu")
        startup_timer.record("install requirements_for_npu")
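`requirements_met` exists to skip the cost of a pip run when the installed versions already satisfy the requirements file. The real check handles full requirement specifiers; a rough stand-in for the core comparison, using only the standard library (function names here are illustrative, not webui's):

```python
from importlib import metadata

def version_tuple(v: str) -> tuple:
    # "1.10.2.post1" -> (1, 10, 2); non-numeric fragments are ignored,
    # so comparisons are numeric rather than lexicographic.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def requirement_satisfied(name: str, minimum: str) -> bool:
    # An uninstalled package can never satisfy the requirement.
    try:
        installed = metadata.version(name)
    except metadata.PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)

assert version_tuple("1.10.2") > version_tuple("1.9.9")  # 10 > 9 numerically
assert not requirement_satisfied("no-such-package-xyz", "1.0")
```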
    if not args.skip_install:
        run_extensions_installers(settings_file=args.ui_settings_file)

    if args.update_check:
        version_check(commit)
        startup_timer.record("check version")

    if args.update_all_extensions:
        git_pull_recursive(extensions_dir)
        startup_timer.record("update extensions")

    if "--exit" in sys.argv:
        print("Exiting because of --exit argument")
        exit(0)
def configure_for_tests():
    if "--api" not in sys.argv:
        sys.argv.append("--api")
    if "--ckpt" not in sys.argv:
        sys.argv.append("--ckpt")
        sys.argv.append(os.path.join(script_path, "test/test_files/empty.pt"))
    if "--skip-torch-cuda-test" not in sys.argv:
        sys.argv.append("--skip-torch-cuda-test")
    if "--disable-nan-check" not in sys.argv:
        sys.argv.append("--disable-nan-check")

    os.environ['COMMANDLINE_ARGS'] = ""
def start():
    print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {shlex.join(sys.argv[1:])}")
    import webui
    if '--nowebui' in sys.argv:
        webui.api_only()
    else:
        webui.webui()
def dump_sysinfo():
    from modules import sysinfo
    import datetime

    text = sysinfo.get()
    filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.json"

    with open(filename, "w", encoding="utf8") as file:
        file.write(text)

    return filename
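One caveat in `dump_sysinfo`: `datetime.datetime.utcnow()` is deprecated as of Python 3.12 in favor of timezone-aware datetimes. An equivalent formulation that produces the same filename shape would be:

```python
import datetime
import re

# Timezone-aware replacement for utcnow() when building the sysinfo filename.
stamp = datetime.datetime.now(datetime.timezone.utc).strftime('%Y-%m-%d-%H-%M')
filename = f"sysinfo-{stamp}.json"

# Same shape as the original: sysinfo-YYYY-MM-DD-HH-MM.json
assert re.fullmatch(r"sysinfo-\d{4}-\d{2}-\d{2}-\d{2}-\d{2}\.json", filename)
```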