Not using my voice input to create audio response
Opened this issue · 1 comment
Davidgomezrob commented
The assistant should use my microphone input (audio) to generate responses dynamically. However, even though I am sending my audio data to the API, the model doesn't seem to recognize or respond to my actual audio input. Instead, it produces generic responses that ignore the content of my audio.
Technical Setup:
- Using Node.js with the realtime-api-beta package
- Audio input is captured from the microphone, converted to PCM16 format, and streamed to the API.
- The appendInputAudio() method is used to send audio chunks, followed by createResponse() to initiate a response when silence is detected.
Any insights on debugging this setup or ensuring the model correctly interprets the live audio input would be greatly appreciated!
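Another thing worth checking is the session configuration: if the session isn't told the input is PCM16, or if server-side turn detection is fighting your manual `createResponse()` calls, the model can appear to ignore the audio. A sketch of the relevant `session.update` fields (field names from the Realtime API beta docs; verify against the current reference, and note that with `server_vad` enabled the server can trigger responses itself):

```json
{
  "type": "session.update",
  "session": {
    "input_audio_format": "pcm16",
    "turn_detection": { "type": "server_vad" }
  }
}
```

If you are detecting silence client-side and calling `createResponse()` yourself, setting `turn_detection` to `null` instead may avoid duplicate or empty turns.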
aarizirf commented
Having the same issue ... any luck?