Where does `TDAudioFileStream` get the audio format?
Opened this issue · 1 comment
Hi, great piece of work!
I'm working on a (temporary) replacement for GKVoiceChatService, so I feed the output stream manually with samples collected from the microphone whenever it has space available.
But I can't see how to set [TDAudioFileStream basicDescription]; in the current code I just can't figure out how it gets initialized to a given format.
I have the AudioStreamBasicDescription of every CMSampleBuffer I get from AVCaptureSession, but I can't see how to inject this information into the processing (otherwise the buffers traverse just fine, as far as I can tell from NSLog output).
If I'm right, the underlying audio queue also gets its format from TDAudioFileStream.
Can you give me a hint on this?
Ahh, I suppose I have to mimic an audio file header and send it as the first packet. It hurts a bit that you never answered, though. Feels like anti-social coding. :)
As the AudioFileStreamParseBytes documentation says:

> Streamed audio file data is expected to be passed to the parser in the same sequence in which it appears in the audio file, from the beginning of the audio file stream, without gaps.
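To make the "mimic an audio file header" idea concrete: one minimal sketch (not part of TDAudioStreamer itself, and assuming you are sending uncompressed LPCM) is to prepend a hand-built 44-byte RIFF/WAVE header as the very first bytes on the stream, so that a parser like `AudioFileStreamParseBytes` can recover the format — sample rate, channel count, bits per sample — before any audio data arrives. The helper name `wav_header_write` is hypothetical; field layout follows the canonical WAV header. The `memcpy` of integer fields assumes a little-endian host, which holds on iOS/macOS devices.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: build a minimal 44-byte RIFF/WAVE header for
 * uncompressed LPCM. For a live stream whose total length is unknown,
 * dataSize can be a large placeholder value. Integer fields are copied
 * byte-for-byte, which yields little-endian on iOS/macOS hosts as the
 * WAV format requires. */
static void wav_header_write(uint8_t out[44],
                             uint32_t sampleRate,
                             uint16_t channels,
                             uint16_t bitsPerSample,
                             uint32_t dataSize)
{
    uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = (uint16_t)(channels * bitsPerSample / 8);
    uint32_t riffSize   = 36 + dataSize;   /* file size minus 8 bytes   */
    uint32_t fmtSize    = 16;              /* size of the "fmt " chunk  */
    uint16_t formatTag  = 1;               /* 1 = uncompressed LPCM     */

    memcpy(out,      "RIFF", 4);
    memcpy(out + 4,  &riffSize, 4);
    memcpy(out + 8,  "WAVE", 4);
    memcpy(out + 12, "fmt ", 4);
    memcpy(out + 16, &fmtSize, 4);
    memcpy(out + 20, &formatTag, 2);
    memcpy(out + 22, &channels, 2);
    memcpy(out + 24, &sampleRate, 4);
    memcpy(out + 28, &byteRate, 4);
    memcpy(out + 32, &blockAlign, 2);
    memcpy(out + 34, &bitsPerSample, 2);
    memcpy(out + 36, "data", 4);
    memcpy(out + 40, &dataSize, 4);
}
```

Sending these 44 bytes first, followed by raw LPCM frames in order and without gaps, matches the contract quoted above: the parser sees data exactly as it would appear in a `.wav` file, fires its property-listener callback with the derived `AudioStreamBasicDescription`, and the receiving side never needs the format out of band. Values from your `CMSampleBuffer`'s own ASBD (sample rate, channels, bit depth) are what you would feed into the header.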