This is a fork of Resemblyzer, the speech embedding network trained by Resemble.AI, which we used in our work studying the Diarization of Legal Proceedings. Resemblyzer lets you derive a high-level representation of a voice through a deep learning model (referred to as the voice encoder): given an audio file of speech, it produces a summary vector of 256 values (an embedding, often shortened to "embed" in this repo) that captures the characteristics of the voice.

This is a stripped-down repo containing only the primary functions for using the voice encoder, plus our small enhancements: we added the ability to retain speaker labels with d-vectors and to process, on the GPU, audio files too large to fit into memory at once. Aside from these changes, the model follows Resemble.AI's almost exactly, and we recommend the original repo for additional detail and demos. If you are looking for details on how we applied this model, please see our diarization repo, where the code calling this fork is located; this repo is a submodule of that repo.
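As a rough illustration of the chunked processing mentioned above, the sketch below splits a long waveform into fixed-length chunks, embeds each one, and averages the per-chunk d-vectors into a single 256-dimensional embedding. This is only a minimal sketch of the idea: `embed_chunk` is a stand-in for the real voice encoder's forward pass (not this repo's actual API), and the chunk length and averaging scheme are assumptions for illustration.

```python
import numpy as np

EMBED_SIZE = 256      # the voice encoder produces 256-value embeddings
SAMPLE_RATE = 16000   # assumed preprocessing sample rate

def embed_chunk(chunk: np.ndarray) -> np.ndarray:
    """Placeholder for the voice encoder: returns a deterministic
    256-dim unit vector so the pipeline can be run without a model."""
    seed = abs(int(chunk.sum() * 1e6)) % (2**32)
    v = np.random.default_rng(seed).standard_normal(EMBED_SIZE)
    return v / np.linalg.norm(v)

def embed_long_audio(wav: np.ndarray, chunk_seconds: float = 60.0) -> np.ndarray:
    """Embed audio too large for one pass: embed fixed-length chunks,
    average the partial embeddings, and renormalize to unit length."""
    step = int(chunk_seconds * SAMPLE_RATE)
    partials = [embed_chunk(wav[i:i + step]) for i in range(0, len(wav), step)]
    mean = np.mean(partials, axis=0)
    return mean / np.linalg.norm(mean)

# Five minutes of (silent) audio, processed one minute at a time.
wav = np.zeros(5 * 60 * SAMPLE_RATE, dtype=np.float32)
embed = embed_long_audio(wav)
```

In practice each chunk would be moved to the GPU, embedded, and freed before the next chunk is loaded, so peak memory is bounded by the chunk size rather than the file size.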
Resemblyzer emerged as a side project of the Real-Time Voice Cloning repository. The pretrained model that comes with Resemblyzer is interchangeable with models trained in that repository, so feel free to finetune a model on new data and possibly new languages! The paper from which the voice encoder was implemented is Generalized End-To-End Loss for Speaker Verification (in which it is called the speaker encoder).