
Retrieval-based-Voice-Conversion-WebUI

An easy-to-use SVC framework based on VITS.



Changelog

English | 中文简体 | 日本語 | 한국어

Check our Demo Video here!

Realtime voice conversion software using RVC: w-okada/voice-changer

The pre-trained model is trained on nearly 50 hours of the high-quality open-source VCTK dataset.

High-quality licensed song datasets will be added to the training set successively, so you can use them without worrying about copyright infringement.

Summary

This repository has the following features:

  • Reduces tone leakage by replacing the source feature with the training-set feature via top-1 retrieval;
  • Easy and fast training, even on relatively weak graphics cards;
  • Good results even with small amounts of training data (at least 10 minutes of low-noise speech is recommended);
  • Model fusion to change timbres (via the ckpt processing tab -> ckpt merge);
  • Easy-to-use WebUI;
  • UVR5 model integration to quickly separate vocals and instruments.
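The top-1 retrieval mentioned in the first point can be sketched with plain NumPy (a simplified illustration only; the repository's actual implementation uses a faiss index over HuBERT features and blends the retrieved features with the source rather than replacing them outright):

```python
import numpy as np

def top1_retrieve(source_feats, index_feats):
    """Replace each source feature vector with its nearest
    neighbor from the training-set feature index (L2 distance)."""
    # Pairwise squared L2 distances, shape (n_source, n_index)
    d = ((source_feats[:, None, :] - index_feats[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)      # top-1 index per source frame
    return index_feats[nearest]     # swap in training-set features

# Toy example: 3 source frames, 4 indexed frames, 2-dim features
src = np.array([[0.0, 0.1], [1.0, 1.0], [0.5, 0.4]])
idx = np.array([[0.0, 0.0], [1.0, 1.1], [0.4, 0.5], [2.0, 2.0]])
print(top1_retrieve(src, idx))
```

Because the output frames come from the training set, the converted voice carries the target speaker's timbre with less leakage from the source.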

Preparing the environment

We recommend installing the dependencies through Poetry.

The following commands must be executed in a Python 3.8 or higher environment:

# Install PyTorch-related core dependencies, skip if installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio

# For Windows + Nvidia Ampere architecture (RTX 30xx), you need to specify the CUDA version
# corresponding to PyTorch, per https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21
# pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

# Install the Poetry dependency management tool, skip if installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -

# Install the project dependencies
poetry install

You can also use pip to install the dependencies:

Notice: faiss 1.7.2 raises Segmentation Fault: 11 under macOS, so if you install manually with pip, use pip install faiss-cpu==1.7.0 instead.

pip install -r requirements.txt

Preparation of other Pre-models

RVC requires other pre-trained models for inference and training.

You need to download them from our Huggingface space.

Here's a list of the pre-models and other files that RVC needs:

hubert_base.pt

./pretrained 

./uvr5_weights

# If you are using Windows, you may also need this file; skip if FFmpeg is installed
ffmpeg.exe
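Before launching, you can sanity-check that the assets are in place with a small script like the following (a minimal sketch; the exact model files expected inside ./pretrained and ./uvr5_weights depend on what you downloaded from the Hugging Face space):

```python
from pathlib import Path

def check_assets(root="."):
    """Return the list of required RVC assets missing under `root`."""
    required = ["hubert_base.pt", "pretrained", "uvr5_weights"]
    base = Path(root)
    return [name for name in required if not (base / name).exists()]

missing = check_assets(".")
if missing:
    print("Missing assets:", ", ".join(missing))
else:
    print("All pre-model assets found.")
```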

Then use this command to start the WebUI:

python infer-web.py

If you are using Windows, you can download and extract RVC-beta.7z to use RVC directly, and use go-web.bat to start the WebUI.

There's also a tutorial on RVC in Chinese; check it out if needed.

Credits

Thanks to all contributors for their efforts