davabase/whisper_real_time

Whisper real-time on Jetson Nano


Hello.
Thanks to your whisper_real_time project, I was able to try STT on my computer.
I want to use this package on my Jetson Nano, but when I run it there, the CPU and memory usage is very high and the screen freezes.
Someone then suggested using the OpenAI API: just as GPT can be called from Python code, Whisper can also be used through the API.

So I'm wondering whether I can use the STT functionality in this code by entering an API key, without downloading the model or running the heavy inference locally.

Take a look at this branch that implements this idea: #13
I haven't tested it, but my guess is that it will be too slow for real-time transcription.
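
For anyone who wants to experiment with that path, here is a rough, untested sketch of what an API-based transcribe step could look like. It assumes the official `openai` Python package (v1 client), an `OPENAI_API_KEY` environment variable, and that the recording loop already produces a float32 mono numpy chunk; the helper name and chunk handling are made up for illustration and are not part of this repository.

```python
# Untested sketch: send recorded audio to the hosted Whisper API instead of
# running a local model. Assumes `pip install openai` (v1 SDK) and an
# OPENAI_API_KEY environment variable; transcribe_chunk is a made-up helper.
import os
import tempfile
import wave

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe_chunk(audio_np: np.ndarray, sample_rate: int = 16000) -> str:
    """Write a float32 mono chunk to a temporary WAV file and send it to the API."""
    pcm16 = (np.clip(audio_np, -1.0, 1.0) * 32767).astype(np.int16)

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        with wave.open(tmp, "wb") as wav_file:
            wav_file.setnchannels(1)
            wav_file.setsampwidth(2)            # 16-bit PCM
            wav_file.setframerate(sample_rate)
            wav_file.writeframes(pcm16.tobytes())
        tmp_path = tmp.name

    try:
        with open(tmp_path, "rb") as audio_file:
            result = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    finally:
        os.remove(tmp_path)

    return result.text
```

Each chunk incurs a full network round trip, which is why this will probably feel too slow for responsive real-time output, but it does keep CPU and memory usage on the Nano minimal.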

You may want to try an alternative Whisper implementation that has significantly better performance than the original, such as:

- WhisperX
- Faster Whisper

I have not tried either of these, but they are reputable projects. WhisperX and Faster Whisper have interfaces similar to OpenAI's original Whisper package, so they should integrate easily.

@davabase How can I use Faster Whisper in this project?
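
Not tested on this repo, but here is a minimal sketch of how the swap could look, assuming the script currently loads a model with `whisper.load_model()` and transcribes a float32 numpy array sampled at 16 kHz (as the demo script does). The main difference is that faster-whisper returns a generator of segments rather than a dict, so the text has to be joined; the names below are illustrative.

```python
# Untested sketch of replacing openai-whisper with faster-whisper.
# Assumes the recording loop already produces a float32 numpy array
# (audio_np) at 16 kHz; variable names are illustrative.
import numpy as np
from faster_whisper import WhisperModel  # pip install faster-whisper

# "int8" keeps memory usage low, which matters on a Jetson Nano; use
# device="cuda" with compute_type="float16" if a GPU build of CTranslate2
# is available on your platform.
audio_model = WhisperModel("base.en", device="cpu", compute_type="int8")


def transcribe(audio_np: np.ndarray) -> str:
    # faster-whisper yields segments instead of returning a dict, so the
    # replacement for result['text'] is a join over segment.text.
    segments, _info = audio_model.transcribe(audio_np, beam_size=5)
    return "".join(segment.text for segment in segments).strip()
```

The rest of the recording and queueing logic should not need to change; only the model loading and the line that pulls the transcribed text out of the result would be replaced.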