Issues
Using a partitioned A100 GPU via MIG with device_index?
#1018 opened by johnrisby (0 comments)
Could not load library libcublasLt.so.11.
#1017 opened by nguyenhoanganh2002 (0 comments)
Cache path?
#1016 opened by ROBERT-MCDOWELL (0 comments)
Could not locate cublasLt64_11.dll
#1012 opened by TechInterMezzo (1 comment)
[bug] distil-small conversion results in junk!
#1011 opened by SinanAkkoyun (0 comments)
FYI: FUTO did an ACFT finetune of whisper that works with <30s of audio
#1006 opened by thiswillbeyourgithub (0 comments)
Very large segment (chunk) size (almost 30 seconds each) with BatchedInferencePipeline
#985 opened by alamnasim (2 comments)
Fair Benchmarking of Faster-Whisper - Parameter equivalents to Hugging Face
#993 opened by asusdisciple (1 comment)
Bug - "No active speech found in audio results"
#997 opened by asusdisciple (2 comments)
How to use on cuDNN 9.1.0
#971 opened by Garyguhaifeng (0 comments)
Deploy faster-whisper as a native web-app.
#999 opened by animikhaich (0 comments)
Is docker image `nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04` more suitable than `nvidia/cuda:12.0.0-runtime-ubuntu22.04` for faster-whisper's cuDNN requirements?
#998 opened by yasu-kondo (8 comments)
Cannot start faster-whisper
#951 opened by Denis-Kazakov (0 comments)
Memory on GPU not cleared after transcription
#992 opened by DinnoKoluh (0 comments)
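On the GPU-memory issue above: faster-whisper models are backed by CTranslate2, and the commonly suggested workaround is to drop every reference to the model object and force a garbage-collection pass so its device buffers can be released. A minimal sketch of that pattern, using a stand-in class (`FakeModel` is hypothetical; with the real library you would `del` the `WhisperModel` instance the same way):

```python
# Sketch of the "free the model between jobs" pattern. FakeModel stands in
# for a WhisperModel instance holding (pretend) GPU buffers.
import gc
import weakref

class FakeModel:
    """Stand-in for WhisperModel; imagine it owns device memory."""
    pass

model = FakeModel()
ref = weakref.ref(model)   # lets us observe when the object is destroyed

del model                  # drop the last strong reference...
gc.collect()               # ...and collect any lingering reference cycles

print(ref() is None)       # True: the object (and its buffers) is gone
```

With the real library, make sure no segment generator, closure, or class attribute still holds the model, or the deletion will not actually free anything.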
Cyrillic letters in Polish transcription
#991 opened by Venzon (0 comments)
A simple web UI for whisper
#990 opened by pika-online (2 comments)
Possible to abort transcription?
#984 opened by mariano54 (0 comments)
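On the abort question above: `WhisperModel.transcribe` returns its segments lazily as a generator, so decoding only advances as you iterate; breaking out of the loop (or calling `close()` on the generator) stops the remaining work. A sketch of the pattern with a stand-in generator, so no real model is needed (`fake_segments` is hypothetical):

```python
# Abort pattern: stop consuming the segment generator and close it.
# fake_segments stands in for the generator returned by
# WhisperModel.transcribe(); each yield represents one decoded segment.

def fake_segments():
    for i in range(1000):
        yield {"id": i, "text": f"segment {i}"}

segments = fake_segments()
collected = []
for seg in segments:
    collected.append(seg)
    if seg["id"] >= 2:    # abort condition, e.g. a user pressed "cancel"
        segments.close()  # raises GeneratorExit inside; no further decoding
        break

print(len(collected))  # 3
```

Because the work is lazy, nothing past the last consumed segment is ever computed, which makes this effectively a cancellation mechanism.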
ValueError: Requested int8 compute type, but the target device or backend do not support efficient int8 computation.
#955 opened by facundobatista (9 comments)
Medium model output is nonsense for batched pipeline (for short 15s audio clips)
#977 opened by tjongsma (0 comments)
Error 'Unable to open file model.bin in model' when loading a model folder containing 'model.safetensors'
#982 opened by koharubiyori (11 comments)
Updated benchmarks, please!
#974 opened by BBC-Esq (0 comments)
Different transcription results with the same whisper model and the same audio in the same process
#975 opened by JH90iOS (1 comment)
Better chunking/loading
#968 opened by KTibow (1 comment)
Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized.
#967 opened by SiriusArtLtd (1 comment)
How to create a Gradio UI with faster-whisper
#957 opened by kustcl (0 comments)
Thai language error
#964 opened by lukeewin (0 comments)
Is it possible to add LoRA adapter support and switching between adapters per request?
#962 opened by Jeevi10 (4 comments)
Why does the transcription speed significantly decrease when the WhisperModel instance is wrapped inside a class attribute?
#960 opened by wildwind0 (0 comments)
RuntimeError: Unsupported model binary version. This executable supports models with binary version v6 or below, but the model has binary version v1936876918.
#959 opened by DearTan (2 comments)
Suddenly no start, end, or text in generator?
#952 opened by michaelcuneo (4 comments)
Funny and revealing hallucinations
#949 opened by AgatheBauer (0 comments)
Jupyter Lab crashing with faster-whisper
#950 opened by Denis-Kazakov (8 comments)
IMPORTANT: 1.0.3 VAD v5 is much worse than 1.0.2 or 1.0.1 VAD v4 for certain audio data. WHY?
#944 opened by ckgithub2019 (1 comment)
Unable to open file 'model.bin' in model 'models\base'
#945 opened by cup113 (0 comments)
Separate GPU assignment per model
#946 opened by ibrahimdevs (1 comment)
LM (language model) + whisper
#943 opened by cod3r0k