Opened this issue a year ago · 1 comment
See if there is a way to use the built-in microphone to recognize speech for prompting the LLM (possibly from a very limited vocabulary). This could even make use of the TPU.
Relevant examples:
- coralmicro/examples/classify_speech
- coralmicro/examples/classify_audio
- coralmicro/examples/tflm_micro_speech
Potentially useful references:
- model_maker_speech_recognition.ipynb
- kaggle/conformer
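One way the limited-vocabulary idea could hang together: the micro-speech-style examples above emit per-label scores, and a small post-processing step could map the top-scoring keyword to a canned LLM prompt. Below is a minimal sketch of that mapping step only; the label names, threshold, and prompt strings are all hypothetical, not taken from coralmicro.

```cpp
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical post-processing for a limited-vocabulary speech classifier:
// take per-label scores, pick the top label, reject low-confidence results,
// and map the keyword to a prompt string for the LLM.
std::optional<std::string> KeywordToPrompt(
    const std::vector<std::pair<std::string, float>>& scores,
    float threshold = 0.7f) {
  const std::pair<std::string, float>* best = nullptr;
  for (const auto& s : scores) {
    if (best == nullptr || s.second > best->second) best = &s;
  }
  // No detection, or confidence too low: don't prompt the LLM at all.
  if (best == nullptr || best->second < threshold) return std::nullopt;
  // Illustrative keyword-to-prompt table (not coralmicro's vocabulary).
  if (best->first == "yes") return "Answer affirmatively.";
  if (best->first == "no") return "Answer negatively.";
  if (best->first == "weather") return "Describe today's weather.";
  return std::nullopt;  // recognized keyword has no prompt mapping
}
```

The confidence threshold matters on-device: with a tiny vocabulary the classifier will score *something* highest for any audio, so rejecting low-confidence frames is what keeps background noise from triggering the LLM.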