Neural building blocks for speaker diarization:
- speech activity detection
- speaker change detection
- overlapped speech detection
- speaker embedding
- speaker diarization pipeline
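As a conceptual illustration of how these blocks fit together (this is NOT the pyannote.audio API), the sketch below composes toy stand-ins for four of the blocks into a minimal pipeline: detect speech, cut at speaker changes, embed each segment, and group segments by embedding. All function names, thresholds, and the "clustering" step are hypothetical; overlapped speech detection is omitted for brevity.

```python
# Toy sketch of a diarization pipeline; NOT the pyannote.audio API.
# Frames are plain floats standing in for per-frame acoustic features.

def speech_activity_detection(frames, threshold=0.5):
    """Mark each frame as speech (True) or non-speech (False)."""
    return [f > threshold for f in frames]

def speaker_change_detection(frames, min_jump=0.3):
    """Return indices where the signal jumps, i.e. likely speaker changes."""
    return [i for i in range(1, len(frames))
            if abs(frames[i] - frames[i - 1]) > min_jump]

def speaker_embedding(segment):
    """Map a segment of frames to a fixed-size representation (here, its mean)."""
    return sum(segment) / len(segment)

def diarization_pipeline(frames):
    """Cut at change points, then label segments by (quantized) embedding."""
    changes = [0] + speaker_change_detection(frames) + [len(frames)]
    segments = [frames[a:b] for a, b in zip(changes, changes[1:]) if b > a]
    # naive stand-in for clustering: segments with the same quantized
    # embedding are treated as the same speaker
    return [round(speaker_embedding(s), 1) for s in segments]

frames = [0.9, 0.9, 0.8, 0.2, 0.2, 0.9, 0.9]
labels = diarization_pipeline(frames)  # first and last segments share a label
```

In the real library each block is a trained neural model and the clustering step is considerably more sophisticated (see the citations below); the point here is only the composition of the five stages.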
```bash
# create a conda environment with Python 3.6 or later
$ conda create --name pyannote python=3.6
$ source activate pyannote

# install from source, on the "develop" branch
$ git clone https://github.com/pyannote/pyannote-audio.git
$ cd pyannote-audio
$ git checkout develop
$ pip install .
```
If you use pyannote.audio in your research, please cite the relevant papers below.
- Speech activity and speaker change detection
```bibtex
@inproceedings{Yin2017,
  author = {Ruiqing Yin and Herv\'e Bredin and Claude Barras},
  title = {{Speaker Change Detection in Broadcast TV using Bidirectional Long Short-Term Memory Networks}},
  booktitle = {{18th Annual Conference of the International Speech Communication Association, Interspeech 2017}},
  year = {2017},
  month = {August},
  address = {Stockholm, Sweden},
  url = {https://github.com/yinruiqing/change_detection},
}
```
- Speaker embedding
```bibtex
@inproceedings{Bredin2017,
  author = {Herv\'{e} Bredin},
  title = {{TristouNet: Triplet Loss for Speaker Turn Embedding}},
  booktitle = {{42nd IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2017}},
  year = {2017},
  url = {http://arxiv.org/abs/1609.04301},
}
```
- Speaker diarization pipeline
```bibtex
@inproceedings{Yin2018,
  author = {Ruiqing Yin and Herv\'e Bredin and Claude Barras},
  title = {{Neural Speech Turn Segmentation and Affinity Propagation for Speaker Diarization}},
  booktitle = {{19th Annual Conference of the International Speech Communication Association, Interspeech 2018}},
  year = {2018},
  month = {September},
  address = {Hyderabad, India},
}
```
These tutorials rely on the `develop` branch of pyannote.audio (i.e. pyannote.audio 1.x):
- Models
- Pipelines
- In-house datasets
Part of the API is described in this tutorial.
Other than that, there is still a lot to do documentation-wise (contributions welcome!).