From f2dd776770c733b25d7aabe71c2b5f80669cda0e Mon Sep 17 00:00:00 2001
From: ClashSAN <98228077+ClashSAN@users.noreply.github.com>
Date: Sun, 26 Feb 2023 12:52:08 +0000
Subject: [PATCH] update windows+amd install guide
---
Install-and-Run-on-AMD-GPUs.md | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/Install-and-Run-on-AMD-GPUs.md b/Install-and-Run-on-AMD-GPUs.md
index 92c24fc..bbda091 100644
--- a/Install-and-Run-on-AMD-GPUs.md
+++ b/Install-and-Run-on-AMD-GPUs.md
@@ -1,25 +1,25 @@
# Windows
-(**In Testing**)
-For Windows users, try this fork using **Direct-ml**. If you get this working, it would be nice to share here: [Discussion](https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions)
+Windows+AMD support has **not** been officially added to webui, \
+but you can install lshqqytiger's fork of webui, which uses **DirectML**.
-* https://github.com/lshqqytiger/stable-diffusion-webui-directml
+- Training currently doesn't work, but a variety of features/extensions do, such as LoRAs and ControlNet. [Discussion](https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions)
-make sure you have the modified repositories in `stable-diffusion-webui-directml/repositories/`:
+1. Install [Python 3.10.6](https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe) (ticking **Add to PATH**) and [Git](https://github.com/git-for-windows/git/releases/download/v2.39.2.windows.1/Git-2.39.2-64-bit.exe)
+2. Paste this command into cmd/terminal: `git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml && cd stable-diffusion-webui-directml && git submodule init && git submodule update` \
+(you can move the cloned folder somewhere else afterwards; the same sequence is sketched step by step after this list.)
+3. Double-click `webui-user.bat`
+4. If it looks like it is stuck while installing or running, press Enter in the terminal and it should continue.
+
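+If the chained one-liner in step 2 is unclear, here is the same sequence as separate commands (a sketch, assuming a plain cmd prompt; the one-liner above does exactly this):
+
+```bat
+rem Clone lshqqytiger's DirectML fork and fetch its submodules
+git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
+cd stable-diffusion-webui-directml
+git submodule init
+git submodule update
+rem Launch the webui (same as double-clicking webui-user.bat)
+webui-user.bat
+```
+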
+If you have 4-6 GB of VRAM, try adding these flags to `webui-user.bat` like so:
-* https://github.com/lshqqytiger/k-diffusion-directml/tree/master
-* https://github.com/lshqqytiger/stablediffusion-directml/tree/main
+- `COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check`
-(rename them to `k-diffusion` and `stable-diffusion-stability-ai`)
-Place any stable diffusion checkpoint (ckpt or safetensor) in the models/Stable-diffusion directory, and double-click `webui-user.bat`. If you have 4-6gb vram, try adding these flags to `webui-user.bat` like so:
-
-`COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check`
-
-You can add --autolaunch to auto open the url for you.
-
-If it looks like it is stuck when installing gfpgan or gfgan, press enter and it should continue.
+- You can add `--autolaunch` to automatically open the URL in your browser (see the `webui-user.bat` sketch below).
+
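+As a rough sketch (assuming the fork ships the same default `webui-user.bat` as upstream webui), the edited file would then look something like this:
+
+```bat
+@echo off
+
+set PYTHON=
+set GIT=
+set VENV_DIR=
+rem Low-VRAM flags plus --autolaunch to open the browser automatically
+set COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch
+
+call webui.bat
+```
+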
(The rest **below are installation guides for linux** with rocm.)
+
# Automatic Installation
(As of [1/15/23](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/6709) you can just run webui-user.sh and pytorch+rocm should be automatically installed for you.)