Crash if --processors > 8
Closed this issue · 5 comments
I am writing an application that transcribes multiple audio files in parallel using the same model. For that I use one common whisper_context shared by multiple whisper_state objects, each used by a worker thread that performs the transcription with whisper_full_with_state().
It works perfectly with up to 8 parallel transcriptions, but crashes inside whisper_full_with_state() when running more.
Because this implementation is based on whisper_full_parallel() as used by the main sample application, the issue can be reproduced by running it with more than 8 --processors:
./build/bin/main --model ggml-tiny.bin --processors 9 10min_audio_french.wav
It does not matter whether it runs on CPU, OpenVINO, CUDA, etc.: it always crashes.
Results:
$ ./build/bin/main --model ggml-tiny.bin --processors 9 10min_audio_french.wav
whisper_init_from_file_with_params_no_state: loading model from 'ggml-tiny.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_model_load: loading model
whisper_model_load: n_vocab = 51865
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 384
whisper_model_load: n_audio_head = 6
whisper_model_load: n_audio_layer = 4
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 384
whisper_model_load: n_text_head = 6
whisper_model_load: n_text_layer = 4
whisper_model_load: n_mels = 80
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 1 (tiny)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs = 99
whisper_model_load: CPU total size = 77.11 MB
whisper_model_load: model size = 77.11 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
system_info: n_threads = 36 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0 | CANN = 0
main: processing '10min_audio_french.wav' (9600000 samples, 600.0 sec), 4 threads, 9 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
whisper_init_state: kv self size = 3.15 MB
whisper_init_state: kv cross size = 9.44 MB
whisper_init_state: kv pad size = 2.36 MB
whisper_init_state: compute buffer (conv) = 13.19 MB
whisper_init_state: compute buffer (encode) = 64.79 MB
whisper_init_state: compute buffer (cross) = 3.88 MB
whisper_init_state: compute buffer (decode) = 95.89 MB
Segmentation fault (core dumped)
Questions:
- Are 8 parallel transcriptions (i.e. 8 running whisper_state for 1 common context) a known limitation?
- If yes, is there a way to raise that limit?
- Or is it a bug?
When I increase GGML_MAX_CONTEXTS
in ggml/include/ggml.h
, main
doesn't crash. 8 looks like a magic number, but there must be some limit behind it.
refs: #2520
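For context on why the cutoff lands at 8: ggml historically managed its contexts through a fixed-size static pool capped by this compile-time constant. The exact value depends on the ggml version, but it was commonly:

```c
/* ggml/include/ggml.h -- value in ggml versions of that era; check your copy.
 * Every ggml_init() call claims a slot from a static pool of this size. */
#define GGML_MAX_CONTEXTS 64
```

A plausible (unverified) reading of the log above: each whisper_init_state prints 7 buffers, so if each buffer consumes a ggml context, 9 states plus the model's own contexts would overflow a 64-slot pool, while 8 states would just fit. That would explain why 8 appears to be a magic number.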
Indeed, it works well after increasing this value (tested with 256, running 32 parallel transcriptions). It would be great to make it configurable through whisper_context_params
or somewhere else.
With #2525 merged there is no longer a limit on the number of contexts, so it should work with any number of processors.
Yes, no limitation anymore. Many thanks!