# Real-Time Voice Cloning
This repository is an implementation of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis](https://arxiv.org/pdf/1806.04558.pdf) (SV2TTS) with a vocoder that works in real-time. Feel free to check [my thesis](https://matheo.uliege.be/handle/2268.2/6801) if you're curious or if you're looking for info I haven't documented. In particular, I recommend taking a quick look at the figures beyond the introduction.

SV2TTS is a three-stage deep learning framework that lets you create a numerical representation of a voice from a few seconds of audio and use it to condition a text-to-speech model trained to generalize to new voices.
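
To make the three stages concrete, here is a minimal sketch of the full pipeline, modeled on what `demo_cli.py` does. The model paths and the reference/output file names are placeholders; adjust them to your checkout and to wherever you put the pretrained models.

```python
# Minimal sketch of the SV2TTS pipeline, modeled on demo_cli.py.
# Model paths and file names are placeholders; adjust them to your setup.
from pathlib import Path

import numpy as np
import soundfile as sf

from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

# Stage 1: the speaker encoder turns a few seconds of reference audio into an embedding.
encoder.load_model(Path("encoder/saved_models/pretrained.pt"))
wav = encoder.preprocess_wav(Path("reference.wav"))
embed = encoder.embed_utterance(wav)

# Stage 2: the synthesizer generates a mel spectrogram conditioned on that embedding.
synthesizer = Synthesizer(Path("synthesizer/saved_models/pretrained/pretrained.pt"))
specs = synthesizer.synthesize_spectrograms(["Hello, this is a cloned voice."], [embed])

# Stage 3: the vocoder turns the spectrogram back into a waveform.
vocoder.load_model(Path("vocoder/saved_models/pretrained/pretrained.pt"))
generated_wav = vocoder.infer_waveform(specs[0])

sf.write("cloned.wav", generated_wav.astype(np.float32), synthesizer.sample_rate)
```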

**Video demonstration** (click the picture):

[![Toolbox demo](https://i.imgur.com/8lFUlgz.png)](https://www.youtube.com/watch?v=-O_hYhToKoA)



### Papers implemented  
| URL | Designation | Title | Implementation source |
| --- | ----------- | ----- | --------------------- |
|[**1806.04558**](https://arxiv.org/pdf/1806.04558.pdf) | **SV2TTS** | **Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis** | This repo |
|[1802.08435](https://arxiv.org/pdf/1802.08435.pdf) | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | [fatchord/WaveRNN](https://github.com/fatchord/WaveRNN) |
|[1703.10135](https://arxiv.org/pdf/1703.10135.pdf) | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis | [fatchord/WaveRNN](https://github.com/fatchord/WaveRNN) |
|[1710.10467](https://arxiv.org/pdf/1710.10467.pdf) | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo |

## News
**14/02/21**: This repo now runs on PyTorch instead of TensorFlow, thanks to the help of @bluefish. If you wish to run the TensorFlow version instead, check out commit `5425557`.

**13/11/19**: I'm now working full time and I will not maintain this repo anymore. To anyone who reads this:
- **If you just want to clone your voice (and not someone else's):** I recommend our free plan on [Resemble.AI](https://www.resemble.ai/). You will get better voice quality and fewer prosody errors.
- **If that does not apply to you:** proceed with this repository, but you might end up being disappointed by the results. If you're planning to work on a serious project, my strong advice: find another TTS repo. Go [here](https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/364) for more info.

**20/08/19:** I'm working on [resemblyzer](https://github.com/resemble-ai/Resemblyzer), an independent package for the voice encoder. You can use your trained encoder models from this repo with it.
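
For a rough idea of what the package looks like in use, here is a minimal sketch based on Resemblyzer's documented API; the audio file name is a placeholder.

```python
# Minimal sketch: compute a speaker embedding with Resemblyzer.
# "utterance.wav" is a placeholder for any speech recording.
from resemblyzer import VoiceEncoder, preprocess_wav

wav = preprocess_wav("utterance.wav")  # load, resample and normalize the audio
encoder = VoiceEncoder()               # loads the bundled pretrained speaker encoder
embed = encoder.embed_utterance(wav)   # 256-dimensional speaker embedding
print(embed.shape)
```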

**06/07/19:** Need to run within a Docker container on a remote server? See [here](https://sean.lane.sh/posts/2019/07/Running-the-Real-Time-Voice-Cloning-project-in-Docker/).

**25/06/19:** Experimental support for low-memory GPUs (~2 GB) added for the synthesizer. Pass `--low_mem` to `demo_cli.py` or `demo_toolbox.py` to enable it. It adds a significant overhead, so it's not recommended if you have enough VRAM.
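For example: `python demo_cli.py --low_mem`.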


## Setup

### 1. Install Requirements

**Python 3.6 or 3.7** is needed to run the toolbox.

* Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.1.0).
* Install [ffmpeg](https://ffmpeg.org/download.html#get-packages).
* Run `pip install -r requirements.txt` to install the remaining necessary packages; a quick sanity check for the install is sketched below.
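
Optionally, you can confirm the install with a quick check (standard PyTorch calls, nothing specific to this repo):

```python
# Quick sanity check that PyTorch is installed and, ideally, sees a GPU.
import torch

print(torch.__version__)          # should be >= 1.1.0
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible to PyTorch
```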

### 2. Download Pretrained Models
Download the latest [here](https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models).

### 3. (Optional) Test Configuration
Before you download any dataset, you can begin by testing your configuration with:

`python demo_cli.py`

If all tests pass, you're good to go.

### 4. (Optional) Download Datasets
For playing with the toolbox alone, I only recommend downloading [`LibriSpeech/train-clean-100`](https://www.openslr.org/resources/12/train-clean-100.tar.gz). Extract the contents as `<datasets_root>/LibriSpeech/train-clean-100` where `<datasets_root>` is a directory of your choosing. Other datasets are supported in the toolbox, see [here](https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Training#datasets). You're free not to download any dataset, but then you will need your own data as audio files or you will have to record it with the toolbox.
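
To double-check the layout before launching the toolbox, you can run a small script like the one below; it uses only the standard library, and the root path is a placeholder for your `<datasets_root>`.

```python
# Quick check that LibriSpeech was extracted where the toolbox expects it.
# "path/to/datasets_root" is a placeholder for your chosen <datasets_root>.
from pathlib import Path

datasets_root = Path("path/to/datasets_root")
train_clean = datasets_root / "LibriSpeech" / "train-clean-100"
flac_files = list(train_clean.glob("*/*/*.flac"))  # speaker/chapter/utterance layout
print(f"Found {len(flac_files)} .flac files under {train_clean}")
```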

### 5. Launch the Toolbox
You can then try the toolbox:

`python demo_toolbox.py -d <datasets_root>`  
or  
`python demo_toolbox.py`  

depending on whether you downloaded any datasets. If you are running an X-server or if you have the error `Aborted (core dumped)`, see [this issue](https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/11#issuecomment-504733590).