Update en_US.json and faq_en.md. Proposal for an i18n standard. (#318)

* Update en_US.json

1. Fixed a severe mistake: a translation was previously left incomplete.

* Update faq_en.md

1. Modified 1 entry for consistency with the recently merged en_US translation.

* Update en_US.json

1. Appended colons to all input prompts, as proposed (a convention-check sketch follows below).
2. Made minor wording changes to the translations.

* Update en_US.json

1. Removed trailing periods from button texts.
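The convention proposed here (input prompts end with a colon, button texts carry no trailing period) can be spot-checked mechanically. Below is a minimal sketch, assuming en_US.json sits in the working directory; the PROMPT_KEYS and BUTTON_KEYS sets are illustrative samples drawn from entries touched in this commit, not a complete classification of the UI.

```python
# Minimal convention check for en_US.json (a sketch; the key sets below are
# illustrative examples from entries touched in this commit, not a complete
# classification of the UI).
import json

PROMPT_KEYS = {"推理音色", "请选择说话人id", "目标采样率"}  # rendered as input prompts
BUTTON_KEYS = {"转换", "一键训练", "融合"}                   # rendered as button texts

with open("en_US.json", encoding="utf-8") as f:
    translations = json.load(f)

for key, text in translations.items():
    if key in PROMPT_KEYS and not text.rstrip().endswith(":"):
        print(f"prompt missing trailing colon: {key!r} -> {text!r}")
    if key in BUTTON_KEYS and text.rstrip().endswith("."):
        print(f"button text has trailing period: {key!r} -> {text!r}")
```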
tzshao 2023-05-20 20:14:23 +08:00 committed by GitHub
parent 3f17356c11
commit 50a121fc74
2 changed files with 55 additions and 55 deletions

faq_en.md

@@ -87,7 +87,7 @@ Save via model extraction at the bottom of the ckpt processing tab.
## Q14:File/memory error(when training)?
Too many processes and your memory is not enough. You may fix it by:
1、decrease "Number of CPU threads".
1、decrease the input in field "Threads of CPU".
2、pre-cut trainset to shorter audio files.
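Suggestion 2 in the hunk above (pre-cutting the trainset into shorter files) can be done with a few lines of Python. Below is a minimal sketch using only the standard-library wave module; the folder names and the 10-second chunk length are assumptions for illustration, not project defaults.

```python
# Sketch: pre-cut long .wav files in a trainset folder into ~10-second chunks.
# Folder names and chunk length are hypothetical, not project defaults.
import wave
from pathlib import Path

SRC = Path("trainset")        # hypothetical folder with the original audio
DST = Path("trainset_cut")    # hypothetical output folder for the chunks
CHUNK_SECONDS = 10

DST.mkdir(exist_ok=True)
for wav_path in SRC.glob("*.wav"):
    with wave.open(str(wav_path), "rb") as src:
        params = src.getparams()
        frames_per_chunk = params.framerate * CHUNK_SECONDS
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = DST / f"{wav_path.stem}_{index:03d}.wav"
            with wave.open(str(out_path), "wb") as dst:
                dst.setparams(params)   # frame count is fixed up on close
                dst.writeframes(frames)
            index += 1
```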

en_US.json

@@ -1,5 +1,5 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "No supported GPU is found. Training may be ",
"很遗憾您这没有能用的显卡来支持您训练": "No supported GPU is found. Training may be slow or unavailable.",
"是": "yes",
"step1:正在处理数据": "step 1: processing data",
"step2a:无需提取音高": "step 2a: skipping pitch extraction",
@@ -8,89 +8,89 @@
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Training complete. Logs are available in the console, or the 'train.log' under experiment folder",
"全流程结束!": "all processes have been completed!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "This software is open source under the MIT license, the author does not have any control over the software, and those who use the software and spread the sounds exported by the software are solely responsible. <br>If you do not agree with this clause, you cannot use or quote any codes and files in the software package. See root directory <b>Agreement-LICENSE.txt</b> for details.",
"模型推理": "Model inference",
"推理音色": "Inferencing voice",
"模型推理": "Model Inference",
"推理音色": "Inferencing voice:",
"刷新音色列表和索引路径": "Refresh voice list and index path",
"卸载音色省显存": "Unload voice to save GPU memory",
"请选择说话人id": "Select Singer/Speaker ID",
"卸载音色省显存": "Unload voice to save GPU memory:",
"请选择说话人id": "Select Singer/Speaker ID:",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "It is recommended +12key for male to female conversion, and -12key for female to male conversion. If the sound range goes too far and the voice is distorted, you can also adjust it to the appropriate range by yourself. ",
"变调(整数, 半音数量, 升八度12降八度-12)": "transpose(integer, number of semitones, octave sharp 12 octave flat -12)",
"输入待处理音频文件路径(默认是正确格式示例)": "Enter the path of the audio file to be processed (the default is the correct format example)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "Select the algorithm for pitch extraction. 'pm': fast conversions; 'harvest': better pitch accuracy, but conversion might be extremely slow.",
"变调(整数, 半音数量, 升八度12降八度-12)": "transpose(Input must be integer, represents number of semitones. Example: octave sharp: 12;octave flat: -12):",
"输入待处理音频文件路径(默认是正确格式示例)": "Enter the path of the audio file to be processed (the default is example of the correct format(Windows)):",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "Select the algorithm for pitch extraction.('pm': fast conversions; 'harvest': better pitch accuracy, but conversion might be extremely slow):",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "If >=3: using median filter for f0. The number is median filter radius.",
"特征检索库文件路径,为空则使用下拉的选择结果": "Feature index file path. If null, use dropdown result.",
"自动检测index路径,下拉式选择(dropdown)": "Path to the '.index' file in 'logs' directory is auto detected. Pick the matching file from the dropdown.",
"特征文件路径": "Feature file path",
"检索特征占比": "Search feature ratio",
"后处理重采样至最终采样率0为不进行重采样": "Resample the audio in post to a different sample rate. Default: don't use post resample.",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Use volume envelope of input to mix or replace the volume envelope of output, the closer the rate is to 1, the more output envelope is used. Default 1 (don't mix input envelope)",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file, optional, one pitch per line, instead of the default F0 and ups and downs",
"特征检索库文件路径,为空则使用下拉的选择结果": "Path to Feature index file(If null, use dropdown result):",
"自动检测index路径,下拉式选择(dropdown)": "Path to the '.index' file in 'logs' directory is auto detected. Pick the matching file from the dropdown:",
"特征文件路径": "Path to Feature file:",
"检索特征占比": "Search feature ratio:",
"后处理重采样至最终采样率0为不进行重采样": "Resample the audio in post-processing to a different sample rate.(Default(0): No post-resampling):",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Use volume envelope of input to mix or replace the volume envelope of output, the closer the rate is to 1, the more output envelope is used.(Default(1): don't mix input envelope):",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file(optional),one pitch per line. Overrides the default F0 and ups and downs :",
"转换": "Convert",
"输出信息": "Output message",
"输出音频(右下角三个点,点了可以下载)": "Export audio (Click on the three dots in the bottom right corner to download)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "For batch conversion, input the audio folder to be converted, or upload multiple audio files, and output the converted audio in the specified folder ('opt' by default). ",
"指定输出文件夹": "Specify output folder",
"指定输出文件夹": "Path to output folder:",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Enter the path of the audio folder to be processed (just go to the address bar of the file manager and copy it)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "You can also input audio files in batches, choose one of the two, and read the folder first",
"伴奏人声分离": "Accompaniment and vocal separation",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "Batch processing of vocal accompaniment separation, using UVR5 model. <br>Without harmony, use HP2, with harmony and extracted vocals do not need harmony, use HP5<br>Example of qualified folder path format: E:\\ codes\\py39\\vits_vc_gpu\\Egret Shuanghua test sample (just go to the address bar of the file manager and copy it)",
"输入待处理音频文件夹路径": "Input audio folder path",
"伴奏人声分离": "Seperation of Accompaniment and Vocal",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "Batch processing of vocal accompaniment separation using UVR5 Model. <br>If input is without harmony, use HP2; If with harmony and the extracted vocals do not need harmony, use HP5<br>Example of legal folder path format: E:\\ codes\\py39\\vits_vc_gpu\\Egret Shuanghua test sample (just go to the address bar of the file manager and copy it)",
"输入待处理音频文件夹路径": "Path to Input audio folder:",
"模型": "Model",
"指定输出人声文件夹": "Specify vocals output folder",
"指定输出乐器文件夹": "Specify instrumentals output folder",
"指定输出人声文件夹": "Path to vocals output folder:",
"指定输出乐器文件夹": "Path to instrumentals output folder:",
"训练": "Train",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: Fill in the experimental configuration. The experimental data is placed under 'logs', and each experiment has a folder. You need to manually enter the experimental name path, which contains the experimental configuration, logs, and model files obtained from training. ",
"输入实验名": "Input experiment name",
"目标采样率": "Target sample rate",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Does the model have pitch guidance (required for singing; optional for speech, but recommended)",
"版本(目前仅40k支持了v2)": "Model architecture version (v2 version only supports 40k sample rate for testing purposes)",
"提取音高和处理数据使用的CPU进程数": "Threads of CPU to use, for pitch extraction and dataset processing",
"输入实验名": "Experiment name:",
"目标采样率": "Target sample rate:",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "If the model have pitch guidance (Required for singing as Input; Optional for speech as Input, but recommended):",
"版本(目前仅40k支持了v2)": "Model architecture version (v2 version only supports 40k sample rate for testing purposes):",
"提取音高和处理数据使用的CPU进程数": "Threads of CPU, for pitch extraction and dataset processing:",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: Automatically traverse all files that can be decoded into audio in the training folder and perform slice normalization. Generates 2 wav folders in the experiment directory; Only single-singer/speaker training is supported for the time being. ",
"输入训练文件夹路径": "Input training folder path",
"请指定说话人id": "Please specify speaker ID",
"输入训练文件夹路径": "Path to training folder:",
"请指定说话人id": "Specify Singer/Speaker ID:",
"处理数据": "Process data",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: Use CPU to extract pitch (if the model has pitch), use GPU to extract features (must specify GPU)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Enter GPU Index(es),separated by '-'. Example: 0-1-2 to select card 1, 2 and 3 ",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Enter GPU Index(es),separated by '-'.(Example: 0-1-2 to select card 1, 2 and 3):",
"显卡信息": "GPU Information",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Select pitch extraction algorithm. 'pm': fastest extraction but lower-quality speech; 'dio': improved speech but slower extraction; 'harvest': best quality but slowest extraction.",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Select pitch extraction algorithm.('pm': fastest extraction but lower-quality speech; 'dio': improved speech but slower extraction; 'harvest': best quality but slowest extraction):",
"特征提取": "Feature extraction",
"step3: 填写训练设置, 开始训练模型和索引": "step3: Fill in the training settings, start training the model and index",
"保存频率save_every_epoch": "Saving frequency (save_every_epoch)",
"总训练轮数total_epoch": "Total training epochs (total_epoch)",
"每张显卡的batch_size": "batch_size for every GPU",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Save only the latest ckpt file to reduce disk usage",
"保存频率save_every_epoch": "Saving frequency (save_every_epoch):",
"总训练轮数total_epoch": "Total training epochs (total_epoch):",
"每张显卡的batch_size": "batch_size for every GPU:",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Save only the latest ckpt file to reduce disk usage:",
"否": "no",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Cache all training sets to GPU Memory. Small data(~under 10 minutes) can be cached to speed up training, but large data caching will eats up the GPU Memory and may not increase the speed",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "Save a small finished model to the 'weights' directory for every epoch matching the specified 'save frequency'",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Cache all training sets to GPU Memory. Small data(~under 10 minutes) can be cached to speed up training, but large data caching will eats up the GPU Memory and may not increase the speed :",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "Save a small finished model to the 'weights' directory for every epoch matching the specified 'save frequency' :",
"加载预训练底模G路径": "Load pre-trained base model G path.",
"加载预训练底模D路径": "Load pre-trained base model D path.",
"训练模型": "Train model.",
"训练特征索引": "Train feature index.",
"一键训练": "One-click training.",
"ckpt处理": "ckpt Processing.",
"训练特征索引": "Train feature index",
"一键训练": "One-click training",
"ckpt处理": "ckpt Processing",
"模型融合, 可用于测试音色融合": "Model Fusion, which can be used to test timbre fusion",
"A模型路径": "A model path.",
"B模型路径": "B model path.",
"A模型权重": "Weight(w) for model A.",
"模型是否带音高指导": "Whether the model has pitch guidance.",
"要置入的模型信息": "Model information to be placed.",
"保存的模型名不带后缀": "Saved modelname without extension.",
"模型版本型号": "model architecture version",
"融合": "Fusion.",
"A模型路径": "Path to Model A:",
"B模型路径": "Path to Model B:",
"A模型权重": "Weight(w) for model A:",
"模型是否带音高指导": "Whether the model has pitch guidance:",
"要置入的模型信息": "Model information to be placed:",
"保存的模型名不带后缀": "Saved modelname(without extension):",
"模型版本型号": "Model architecture version:",
"融合": "Fusion",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modify model information (only small model files extracted from the 'weights' folder are supported)",
"模型路径": "Model path",
"要改的模型信息": "Model information to be modified",
"保存的文件名, 默认空为和源文件同名": "Savefile Name. If empty, name is the same as the source file; Default: empty",
"模型路径": "Path to Model:",
"要改的模型信息": "Model information to be modified:",
"保存的文件名, 默认空为和源文件同名": "Savefile Name. Default(empty): Name is the same as the source file :",
"修改": "Modify",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "View model information (only small model files extracted from the 'weights' folder are supported)",
"查看": "View",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Model extraction (enter the path of the large file model under the logs folder), which is suitable for half of the training and does not want to train the model without automatically extracting and saving the small file model, or if you want to test the intermediate model",
"保存名": "Savefile Name",
"模型是否带音高指导,1是0否": "Whether the model has pitch guidance, 1: yes, 0: no",
"保存名": "Savefile Name:",
"模型是否带音高指导,1是0否": "Whether the model has pitch guidance(1: yes, 0: no):",
"提取": "Extract",
"Onnx导出": "Export Onnx",
"RVC模型路径": "RVC Model Path",
"Onnx输出路径": "Onnx Export Path",
"RVC模型路径": "RVC Model Path:",
"Onnx输出路径": "Onnx Export Path:",
"MoeVS模型": "MoeVS Model",
"导出Onnx模型": "Export Onnx Model",
"常见问题解答": "FAQ (Frequently Asked Questions)",