wiseman/py-webrtcvad

Trouble converting pyAudio Mic input to VAD frames

Deamon12 opened this issue · 4 comments

I'm having the hardest time figuring out how to convert standard PyAudio frames into your VAD format. I feel like there should be an example for something this basic.
I saw your post about using PyAudio mic input, and how you confirmed it works: #29

I'm trying to adapt the frame_generator, but am coming up short. My logic is to use PyAudio to determine if the input is "Silent" or "Not Silent". After a noise is detected and the logic drops back into "Silent" mode, then we run the cached audio frames through VAD.

Something like this...

    FORMAT = pyaudio.paInt16
    CHANNELS = 1  # 2
    RATE = 48000
    NUM_SAMPLES = 1024

    self.stream = self.p.open(
        format=self.settings.FORMAT,
        channels=self.settings.CHANNELS,
        rate=self.settings.RATE,
        frames_per_buffer=self.settings.NUM_SAMPLES,
        input=True,
        input_device_index=self.settings.INPUT_DEVICE_INDEX
    )
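For reference (and as a likely source of the trouble): webrtcvad only accepts 16-bit mono PCM at 8000, 16000, 32000, or 48000 Hz, in frames of exactly 10, 20, or 30 ms. A `NUM_SAMPLES` of 1024 at 48 kHz is about 21.3 ms, which matches none of those. A quick sketch of the arithmetic:

```python
# webrtcvad accepts only 16-bit mono PCM at 8/16/32/48 kHz, in frames
# of exactly 10, 20, or 30 ms. Compute the valid buffer sizes at 48 kHz.
RATE = 48000
BYTES_PER_SAMPLE = 2  # paInt16

for frame_ms in (10, 20, 30):
    samples = RATE * frame_ms // 1000
    nbytes = samples * BYTES_PER_SAMPLE
    print("%d ms -> %d samples, %d bytes" % (frame_ms, samples, nbytes))

# NUM_SAMPLES = 1024 gives 1024 / 48000 ~ 21.3 ms per buffer, which is
# not a valid VAD frame size; 480, 960, or 1440 samples would be.
```

Setting `frames_per_buffer` to one of those sample counts (e.g. 1440 for 30 ms) would sidestep the re-chunking entirely.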

The RMS detection logic:

    if mean > silence and in_silence:
        print("Sound!")
        in_silence = False

    if mean > silence and not in_silence:
        frames.append(frame)  # the list to eventually send to VAD
    elif mean < silence and not in_silence:
        print("Silence!")
        in_silence = True
        self.doVadDetect(frames, self.settings.RATE)
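For anyone reproducing this, the thread never shows where `mean` comes from; here is one stdlib-only way to compute a comparable level from a raw 16-bit PCM buffer (`rms_level` is a name I made up for illustration):

```python
import array

def rms_level(pcm_bytes):
    """Root-mean-square level of a raw 16-bit mono PCM buffer."""
    samples = array.array("h")
    samples.frombytes(pcm_bytes)
    if not samples:
        return 0.0
    return (sum(s * s for s in samples) / len(samples)) ** 0.5
```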

The doVadDetect function is pretty much copy-pasted from your example. What am I missing in the conversion?

    def doVadDetect(self, audio_frames, sample_rate):
        frames = self.frame_generator(10, audio_frames, sample_rate)
        frames = list(frames)
        print("frames: " + str(len(frames)))
        segments = self.vad_collector(sample_rate, 30, 300, self.vad, frames)
        for i, segment in enumerate(segments):
            path = 'chunk-%002d.wav' % (i,)
            print(' Writing %s' % (path,))
            self.write_wave(path, segment, sample_rate)
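One thing worth checking here: the repo's example operates on a single contiguous bytes object, but `audio_frames` is the Python list collected in the RMS loop, so `frame_generator` would be handed separate buffers rather than one PCM stream. A minimal sketch of flattening first (`flatten_buffers` is a name I made up):

```python
# Join the per-callback PyAudio buffers into one contiguous PCM byte
# string, the shape that a bytes-slicing frame generator expects.
def flatten_buffers(audio_frames):
    """audio_frames: list of bytes objects returned by stream.read()."""
    return b"".join(audio_frames)

chunks = [b"\x00\x01\x02\x03", b"\x04\x05\x06\x07"]
audio = flatten_buffers(chunks)
print(len(audio))  # one 8-byte stream instead of two 4-byte buffers
```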

Thanks for the help!

Damn, I think I got it working. I had to reset the frame_generator offset to keep it chugging along for each frame.
Still not sure it's in a great place, but your lib is detecting speech and chunking off files. I am hearing some distortion in the recordings, however.

    def frame_generator(self, frame_duration_ms, audio, sample_rate):
        """Generates audio frames from PCM audio data.

        Takes the desired frame duration in milliseconds, the PCM data, and
        the sample rate.

        Yields Frames of the requested duration.
        """
        print("len(audio): " + str(len(audio)))
        channels = 2
        n = int(sample_rate * (frame_duration_ms / 1000.0) * channels)
        print("n : " + str(n))
        offset = 0
        timestamp = 0.0
        duration = (float(n) / sample_rate) / channels

        for audioFrame in audio:
            offset = 0  # reset for every frame
            if audioFrame is not None:
                while offset + n < len(audioFrame):
                    yield Frame(audioFrame[offset:offset + n], timestamp, duration)
                    # print("Frame created")
                    timestamp += duration
                    offset += n
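A possible source of the distortion: this generator hardcodes `channels = 2`, while the stream above was opened with `CHANNELS = 1`, so each yielded frame spans twice the bytes of a real 10 ms mono frame. If the capture genuinely is stereo, webrtcvad still wants mono 16-bit PCM; a stdlib-only downmix sketch (`stereo_to_mono` is my name, not part of the library):

```python
import array

def stereo_to_mono(pcm_stereo):
    """Average interleaved 16-bit L/R pairs down to mono PCM bytes."""
    samples = array.array("h")
    samples.frombytes(pcm_stereo)
    mono = array.array(
        "h",
        ((samples[i] + samples[i + 1]) // 2
         for i in range(0, len(samples), 2)),
    )
    return mono.tobytes()
```

With the data downmixed (or captured as mono in the first place), `channels` in frame_generator can stay 1 and the frame math lines up.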

Eh, nah. I need help with this.
I now have gstreamer pulling audio frames in from an alsasrc. This works until I try to pass the audio frames to the VAD detection; it blows up after frame_generator.

Errors when attempting to evaluate the frames:

    File "SoundDetector.py", line 351, in vad_collector
      is_speech = vad.is_speech(frame.bytes, sample_rate)
    File "python3.8/site-packages/webrtcvad.py", line 27, in is_speech
      return _webrtcvad.process(self._vad, sample_rate, buf, length)

I am struggling with creating the 10, 20, 30 ms segments...
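Since capture buffer sizes generally won't line up with VAD frame sizes, one approach is to keep a small byte accumulator and emit exact-length frames as enough data arrives. A sketch, assuming 16-bit mono PCM (`FrameChunker` is my name for illustration):

```python
class FrameChunker:
    """Re-chunk arbitrary-sized capture buffers into exact VAD frames."""

    def __init__(self, sample_rate, frame_ms=30):
        # 2 bytes per sample for 16-bit mono PCM
        self.frame_bytes = sample_rate * frame_ms // 1000 * 2
        self.pending = b""

    def push(self, buf):
        """Feed one capture buffer; yield every complete frame."""
        self.pending += buf
        while len(self.pending) >= self.frame_bytes:
            frame = self.pending[:self.frame_bytes]
            self.pending = self.pending[self.frame_bytes:]
            yield frame  # sized for vad.is_speech(frame, sample_rate)
```

This decouples the VAD from whatever blocksize the source happens to deliver.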

I ended up getting it by setting the gstreamer alsasrc 'blocksize' to match the PyAudio buffer size. The buffer data was not adequate for the VAD parsing.

Works now, though.


Hi,

Can I have a sample of your code that reads the mic input in real time and outputs the result into chunks of files?

Thanks in advance.
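Not the thread author's code, but here is a minimal sketch of the loop being described: read fixed 30 ms frames from the mic with PyAudio, classify each with webrtcvad, and flush every voiced run to a numbered .wav. The rate, aggressiveness, and helper names are my own choices; treat it as a starting point.

```python
import wave

RATE = 16000                             # webrtcvad also accepts 8000/32000/48000
FRAME_MS = 30
FRAME_SAMPLES = RATE * FRAME_MS // 1000  # 480 samples per frame
FRAME_BYTES = FRAME_SAMPLES * 2          # 16-bit mono

def write_wave(path, pcm, sample_rate=RATE):
    """Write raw 16-bit mono PCM bytes out as a .wav file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sample_rate)
        wf.writeframes(pcm)

def run():
    # Imported here so the helpers above work without audio hardware.
    import pyaudio
    import webrtcvad

    vad = webrtcvad.Vad(2)  # aggressiveness 0 (lenient) to 3 (strict)
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=FRAME_SAMPLES)
    voiced, chunk_idx = [], 0
    try:
        while True:
            frame = stream.read(FRAME_SAMPLES)  # exactly one 30 ms frame
            if vad.is_speech(frame, RATE):
                voiced.append(frame)
            elif voiced:
                # Silence after speech: flush the buffered run to disk.
                write_wave("chunk-%002d.wav" % chunk_idx, b"".join(voiced))
                chunk_idx += 1
                voiced = []
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()

# run()  # uncomment to capture from the default input device
```

Because every `stream.read` is already a valid VAD frame size, no re-chunking or blocksize matching is needed.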