feat: unblock cpu training (#889)

* Update train_nsf_sim_cache_sid_load_pretrain.py

Patch to unblock CPU training. CPU training took ~12 hours for me.

* Update train_nsf_sim_cache_sid_load_pretrain.py

Co-authored-by: Nato Boram <NatoBoram@users.noreply.github.com>

---------

Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
Co-authored-by: Nato Boram <NatoBoram@users.noreply.github.com>
GratefulTony committed 2023-07-27 20:44:16 -06:00 (committed by GitHub)
parent 8d8eb8e3e4, commit 0b15d48f20
1 changed file with 5 additions and 0 deletions

@@ -67,8 +67,13 @@ class EpochRecorder:
 def main():
     n_gpus = torch.cuda.device_count()
     if torch.cuda.is_available() == False and torch.backends.mps.is_available() == True:
         n_gpus = 1
+    if n_gpus < 1:
+        # patch to unblock people without gpus. there is probably a better way.
+        print("NO GPU DETECTED: falling back to CPU - this may take a while")
+        n_gpus = 1
     os.environ["MASTER_ADDR"] = "localhost"
     os.environ["MASTER_PORT"] = str(randint(20000, 55555))
     children = []
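
For context, here is a minimal, self-contained sketch of the same fallback logic for a single-node setup. The names pick_device_count and init_single_node_process_group are illustrative only and do not exist in the repository; the actual training entry point wires this into its own spawn logic.

    import os
    from random import randint

    import torch
    import torch.distributed as dist


    def pick_device_count() -> int:
        # Mirror the patch: use CUDA devices if present, otherwise run a
        # single worker on MPS or CPU instead of aborting.
        n_gpus = torch.cuda.device_count()
        if not torch.cuda.is_available() and torch.backends.mps.is_available():
            n_gpus = 1  # Apple Silicon: treat MPS as one device
        if n_gpus < 1:
            print("NO GPU DETECTED: falling back to CPU - this may take a while")
            n_gpus = 1  # spawn one CPU worker
        return n_gpus


    def init_single_node_process_group(rank: int, world_size: int) -> None:
        # Same env-var rendezvous as the training script: localhost plus a
        # random port, then init torch.distributed for this process.
        os.environ.setdefault("MASTER_ADDR", "localhost")
        os.environ.setdefault("MASTER_PORT", str(randint(20000, 55555)))
        backend = "nccl" if torch.cuda.is_available() else "gloo"
        dist.init_process_group(backend=backend, rank=rank, world_size=world_size)

The backend choice is the part that matters for the CPU path: "nccl" requires CUDA, while "gloo" works on CPU, so forcing n_gpus to 1 only unblocks training if the process group is created with a CPU-capable backend.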