xiph/rnnoise

Segmentation fault while training on custom data

AsimFayyazRaja opened this issue · 1 comment

Environment Specs

  • Python 3.6.9
  • Virtual Environment using pip
  • Ubuntu 16.04

Problem

I have a set of noisy and clean wav files and would like to train rnnoise on them. They are converted to raw files like this:

from pydub import AudioSegment

sound = AudioSegment.from_wav(base_path + '93.wav')

# sound._data is the raw PCM bytestring (a private pydub attribute)
raw_data = sound._data   # saving this as a .raw file now via pickle
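(Side note: as I understand it, denoise_training reads headerless 16-bit PCM, so serializing the bytestring with pickle would prepend pickle framing bytes to the .raw file. A minimal sketch of dumping the PCM payload directly, using only the stdlib wave module and assuming the input wav is already 48 kHz, 16-bit mono; the helper name is illustrative:)

```python
import wave

def wav_to_raw(wav_path, raw_path):
    # Copy only the PCM payload out of the wav container --
    # neither the wav header nor any pickle framing ends up in the .raw file.
    with wave.open(wav_path, 'rb') as w:
        frames = w.readframes(w.getnframes())
    with open(raw_path, 'wb') as f:
        f.write(frames)
```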

Then I execute ./denoise_training like this:

./denoise_training train_clean.raw train_noise.raw count > training.f32


My raw file looks like this when printed:

b'r\xffo\xff}\xff\x88\xff\x7f\xff\x93\xff\x8a\xff\x84\xff\xa0\xff\x8a\xff\x96\xff\x9e\xff\x8e\xff\x99\xff\xa0\xff\x84\xff\x8e\....

but it gives this error:

Segmentation fault (core dumped)

Inference works fine

Inference with Python works fine:

import numpy as np
from pydub import AudioSegment
from scipy.io import wavfile

wav_path = '93.wav'

TARGET_SR = 48000
TEMP_FILE = 'test.wav'

# Resample to 48 kHz mono before denoising
sound = AudioSegment.from_wav(wav_path)
sound = sound.set_frame_rate(TARGET_SR)
sound = sound.set_channels(1)

sound.export(TEMP_FILE,
             format="wav")

# read_wave, frame_generator, and denoiser come from my own helper code
audio, sample_rate = read_wave(TEMP_FILE)
assert sample_rate == TARGET_SR
frames = frame_generator(10, audio, TARGET_SR)  # 10 ms frames
frames = list(frames)
tups = [denoiser.process_frame(frame) for frame in frames]
denoised_frames = [tup[1] for tup in tups]

# Reassemble the denoised 16-bit PCM frames into one array
denoised_wav = np.concatenate([np.frombuffer(frame,
                                             dtype=np.int16)
                               for frame in denoised_frames])

wavfile.write('denoised1.wav',
              TARGET_SR,
              denoised_wav)

Can someone help me train this model on my own wav files? Any help would be highly appreciated!

Replace count with a positive integer, e.g. 10000:

./denoise_training train_clean.raw train_noise.raw 10000 > training.f32