Language-agnostic automatic synchronization of subtitles with video, so that subtitles are aligned to the correct starting point within the video.
(Demo GIFs: "Turn this:" shows subtitles out of sync with the video; "Into this:" shows the same subtitles after synchronization.)
At the request of some, you can now help cover my coffee expenses using the GitHub Sponsors button at the top, or using the below PayPal Donate button:
First, make sure ffmpeg is installed. On MacOS, this looks like:
brew install ffmpeg
Next, grab the script. It should work with both Python 2 and Python 3:
pip install ffsubsync
If you want to live dangerously, you can grab the latest version as follows:
pip install git+https://github.com/smacke/ffsubsync@latest
ffs, subsync and ffsubsync all work as entrypoints:
ffs video.mp4 -i unsynchronized.srt -o synchronized.srt
There may be occasions where you have a correctly synchronized srt file in a language you are unfamiliar with, as well as an unsynchronized srt file in your native language. In this case, you can use the correctly synchronized srt file directly as a reference for synchronization, instead of using the video as the reference:
ffsubsync reference.srt -i unsynchronized.srt -o synchronized.srt
ffsubsync uses the file extension to decide whether to perform voice activity detection on the audio or to directly extract speech from an srt file.
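As a purely illustrative sketch (the helper name below is hypothetical and not part of ffsubsync's API), that dispatch boils down to a check on the reference file's extension:

```python
# Hypothetical illustration of the extension-based dispatch described above;
# this is not ffsubsync's internal API.
import os

def reference_mode(reference_path: str) -> str:
    """Decide how speech should be extracted from the reference file."""
    ext = os.path.splitext(reference_path)[1].lower()
    if ext == ".srt":
        return "parse subtitles directly"    # captions already mark when speech is "on"
    return "run voice activity detection"    # decode the audio first, then run a VAD

print(reference_mode("reference.srt"))  # parse subtitles directly
print(reference_mode("video.mp4"))      # run voice activity detection
```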
If the sync fails, there are a few recourses available. The best one to try first is to specify --vad=auditok as a command line option, since auditok sometimes works well with ffsubsync in the case of muffled or otherwise low-quality audio. Auditok does not specifically detect voice, but instead detects all audio; this property can yield suboptimal syncing behavior when a proper VAD would work well, but it can be effective in some cases.
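For example, reusing the file names from the earlier invocation:

ffs video.mp4 -i unsynchronized.srt -o synchronized.srt --vad=auditok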
The next step is to try different values for --max-offset-seconds. By default, ffsubsync runs with --max-offset-seconds=600, since subtitles are unlikely to be offset by more than 10 minutes in practice, and enforcing this constraint typically leads to a better outcome. There may be some rare cases in which subtitles are more egregiously out of sync and where increasing this value can help.
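For example, to allow offsets of up to 30 minutes:

ffs video.mp4 -i unsynchronized.srt -o synchronized.srt --max-offset-seconds=1800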
If the sync still fails, consider trying one of the following similar tools:
- sc0ty/subsync: does speech-to-text and looks for matching word morphemes
- kaegi/alass: rust-based subtitle synchronizer with a fancy dynamic programming algorithm
- tympanix/subsync: neural net based approach that optimizes directly for alignment when performing speech detection
- oseiskar/autosubsync: performs speech detection with bespoke spectrogram + logistic regression
- pums974/srtsync: similar approach to ffsubsync (WebRTC's VAD + FFT to maximize signal cross correlation)
ffsubsync usually finishes in 20 to 30 seconds, depending on the length of the video. The most expensive step is actually extraction of raw audio. If you already have a correctly synchronized "reference" srt file (in which case audio extraction can be skipped), ffsubsync typically runs in less than a second.
The synchronization algorithm operates in 3 steps:
1. Discretize video and subtitles by time into 10ms windows.
2. For each 10ms window, determine whether that window contains speech. This is trivial to do for subtitles (we just determine whether any subtitle is "on" during each time window); for video, use an off-the-shelf voice activity detector (VAD) like the one built into webrtc.
3. Now we have two binary strings: one for the subtitles, and one for the video. Try to align these strings by matching 0's with 0's and 1's with 1's. We score these alignments as (# video 1's matched w/ subtitle 1's) - (# video 1's matched with subtitle 0's).
The best-scoring alignment from step 3 determines how to offset the subtitles in time so that they are properly synced with the video. Because the binary strings are fairly long (millions of digits for video longer than an hour), the naive O(n^2) strategy for scoring all alignments is unacceptable. Instead, we use the fact that "scoring all alignments" is a convolution operation and can be implemented with the Fast Fourier Transform (FFT), bringing the complexity down to O(n log n).
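For the curious, here is a rough numpy sketch of that scoring step. It is not ffsubsync's actual implementation: the function and variable names are made up, and it assumes the two binary speech strings have already been computed as described in steps 1 and 2.

```python
# Rough sketch of FFT-based alignment scoring (not ffsubsync's actual code).
import numpy as np

def best_offset(video_speech: np.ndarray, subtitle_speech: np.ndarray) -> int:
    """Return the subtitle shift (in 10ms windows) that maximizes
    (# video 1's matched with subtitle 1's) - (# video 1's matched with subtitle 0's)."""
    # Map subtitle 0's to -1 so that matching a video 1 against a subtitle 0
    # is penalized; the score at every offset is then a cross-correlation.
    subs = 2.0 * subtitle_speech - 1.0
    vid = video_speech.astype(float)
    # Zero-pad and compute the cross-correlation via the FFT in O(n log n):
    # corr[k] = sum_i vid[i] * subs[i - k].
    n = 1
    while n < len(vid) + len(subs):
        n *= 2
    corr = np.fft.irfft(np.fft.rfft(vid, n) * np.conj(np.fft.rfft(subs, n)), n)
    k = int(np.argmax(corr))
    # Indices past the midpoint wrap around and correspond to negative shifts.
    return k if k < n // 2 else k - n

# Toy example: subtitle speech starts 3 windows (30ms) too early, so the
# best alignment delays the subtitles by 3 windows.
video = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0])
subs = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0])
print(best_offset(video, subs))  # 3
```

In the real tool the same idea is applied to strings with millions of entries, which is why the O(n log n) FFT-based scoring matters.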
In most cases, inconsistencies between video and subtitles occur when starting or ending segments present in video are not present in subtitles, or vice versa. This can occur, for example, when a TV episode recap in the subtitles was pruned from video. FFsubsync typically works well in these cases, and in my experience this covers >95% of use cases. Handling breaks and splits outside of the beginning and ending segments is left to future work (see below).
Besides general stability and usability improvements, one line of work aims to extend the synchronization algorithm to handle splits / breaks in the middle of video not present in subtitles (or vice versa). Developing a robust solution will take some time (assuming one is possible). See #10 for more details.
The implementation for this project was started during HackIllinois 2019, for which it received an Honorable Mention (ranked in the top 5 projects, excluding projects that won company-specific prizes).
This project would not be possible without the following libraries:
- ffmpeg and the ffmpeg-python wrapper, for extracting raw audio from video
- VAD from webrtc and the py-webrtcvad wrapper, for speech detection
- srt for operating on SRT files
- numpy and, indirectly, FFTPACK, which powers the FFT-based algorithm for fast scoring of alignments between subtitles (or subtitles and video)
- Other excellent Python libraries like argparse and tqdm, not related to the core functionality, but which enable much better experiences for developers and users.
Code in this project is MIT licensed.