
Retrieval-based-Voice-Conversion-WebUI


Realtime voice conversion software using RVC: w-okada/voice-changer


An easy-to-use SVC framework based on VITS.

English | 中文简体

Check our Demo Video here!

Summary

This repository has the following features:

  • Reducing tone leakage by replacing source features with training-set features via top-1 retrieval;
  • Easy and fast training, even on relatively weak graphics cards;
  • Good results even with small amounts of training data;
  • Model fusion to change timbres;
  • An easy-to-use WebUI;
  • UVR5 integration to quickly separate vocals and instruments.

Preparing the environment

We recommend installing the dependencies with Poetry.

The following commands must be executed in a Python 3.8 (or higher) environment:

# Install PyTorch-related core dependencies, skip if installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio

# Install the Poetry dependency management tool, skip if installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -

# Install the project dependencies
poetry install

You can also use pip to install the dependencies:

pip install -r requirements.txt
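Either way, the commands above assume Python 3.8 or newer. A quick sanity check (a hypothetical helper, not part of the project) to confirm the interpreter meets that minimum before installing:

```python
# Hypothetical helper: verify the running interpreter satisfies the
# documented minimum Python version (3.8) before installing dependencies.
import sys

MIN_VERSION = (3, 8)

def version_ok(info=None, minimum=MIN_VERSION):
    """Return True if the (major, minor) version meets the minimum."""
    if info is None:
        info = sys.version_info
    return tuple(info[:2]) >= minimum

if __name__ == "__main__":
    if version_ok():
        print("Python version OK:", sys.version.split()[0])
    else:
        sys.exit("Python %d.%d or higher is required" % MIN_VERSION)
```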

Preparing other pre-models

RVC requires additional pre-trained models for inference and training.

You need to download them from our Huggingface space.

Here's a list of pre-models and other files that RVC needs:

  • hubert_base.pt
  • ./pretrained
  • ./uvr5_weights
  • ffmpeg.exe (Windows only; skip if FFmpeg is already installed)
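Once downloaded, these assets must sit at the listed paths in the repository root. A minimal sketch (a hypothetical helper, not shipped with RVC) to verify the layout before launching the WebUI:

```python
# Hypothetical helper: check that the downloaded pre-model assets are in
# place before starting the WebUI. Paths come from the list above.
from pathlib import Path

REQUIRED = [
    Path("hubert_base.pt"),   # HuBERT feature-extractor checkpoint
    Path("pretrained"),       # base pre-trained model weights
    Path("uvr5_weights"),     # UVR5 vocal/instrument separation models
]

def missing_assets(root=Path(".")):
    """Return the required files/directories not present under `root`."""
    return [str(p) for p in REQUIRED if not (root / p).exists()]

if __name__ == "__main__":
    gaps = missing_assets()
    if gaps:
        print("Missing:", ", ".join(gaps))
    else:
        print("All pre-model assets found.")
```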

Then use this command to start the WebUI:

python infer-web.py

If you are using Windows, you can download and extract RVC-beta.7z to use RVC directly, and run go-web.bat to start the WebUI.

We will develop an English version of the WebUI within two weeks.

There is also a Chinese tutorial on RVC; check it out if needed.

Credits

Thanks to all contributors for their efforts.