smeetrs/deep_avsr

tuple attribute error when running pretrain.py

arunm95 opened this issue · 9 comments

Hi,

I'm currently trying to train the model on the LRS2 dataset. Preprocessing executes successfully, but when I run pretrain.py I hit the following error:

Traceback (most recent call last):
  File "audio_visual/pretrain.py", line 106, in <module>
    trainingLoss, trainingCER, trainingWER = train(model, pretrainLoader, optimizer, loss_function, device, trainParams)
  File "audio_visual\utils\general.py", line 39, in train
    for batch, (inputBatch, targetBatch, inputLenBatch, targetLenBatch) in enumerate(tqdm(trainLoader, leave=False, desc="Train", ncols=75)):
  File "anaconda3\lib\site-packages\tqdm\std.py", line 1107, in __iter__
    for obj in iterable:
  File "anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "audio_visual\data\utils.py", line 242, in collate_fn
    inputBatch = pad_sequence([data[0] for data in dataBatch])
  File "anaconda3\lib\site-packages\torch\nn\utils\rnn.py", line 369, in pad_sequence
    max_size = sequences[0].size()
AttributeError: 'tuple' object has no attribute 'size'

I've traced it to the use of a tuple to pass both the audio and video inputs from prepare_pretrain_input in utils.py, but I'm unsure how to resolve the problem. A minimal standalone sketch of the failure is below.
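This is not code from the repo, just an illustration with made-up shapes:

import torch
from torch.nn.utils.rnn import pad_sequence

# illustrative shapes only: (timesteps, features) for each modality
audioInp = torch.randn(400, 321)
videoInp = torch.randn(100, 512)

# pad_sequence calls .size() on each element of the list, so a list of
# (audio, video) tuples fails just like in the traceback above
try:
    pad_sequence([(audioInp, videoInp)])
except Exception as err:  # AttributeError on the torch version above
    print(type(err).__name__, err)

# padding each modality separately works fine
inputBatch = (pad_sequence([audioInp]), pad_sequence([videoInp]))
print(inputBatch[0].shape, inputBatch[1].shape)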

Hi,
Thanks for pointing this out.

Can you replace

inputBatch = pad_sequence([data[0] for data in dataBatch])

with

inputBatch = (pad_sequence([data[0][0] for data in dataBatch]),
              pad_sequence([data[0][1] for data in dataBatch]))

in the collate_fn function defined in audio_visual/data/utils.py and then try again?

Please let me know if that resolves the issue. I will make this change in the next commit.

Hi, I replaced the line in question, but now I'm getting an invalid comparison error inside collate_fn:

Traceback (most recent call last):
  File "D:/Users/arunm/PycharmProjects/AV_Speech_Recognition/audio_visual/pretrain.py", line 106, in <module>
    trainingLoss, trainingCER, trainingWER = train(model, pretrainLoader, optimizer, loss_function, device, trainParams)
  File "D:\Users\arunm\PycharmProjects\AV_Speech_Recognition\audio_visual\utils\general.py", line 39, in train
    for batch, (inputBatch, targetBatch, inputLenBatch, targetLenBatch) in enumerate(tqdm(trainLoader, leave=False, desc="Train", ncols=75)):
  File "D:\Users\arunm\anaconda3\lib\site-packages\tqdm\std.py", line 1107, in __iter__
    for obj in iterable:
  File "D:\Users\arunm\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "D:\Users\arunm\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "D:\Users\arunm\PycharmProjects\AV_Speech_Recognition\audio_visual\data\utils.py", line 243, in collate_fn
    if None not in [data[1] for data in dataBatch]:
TypeError: eq() received an invalid combination of arguments - got (NoneType), but expected one of:
 * (Tensor other)
      didn't match because some of the arguments have invalid types: (!NoneType!)
 * (Number other)
      didn't match because some of the arguments have invalid types: (!NoneType!)

I've pasted the current code of collate_fn below:

def collate_fn(dataBatch):
    """
    Collate function definition used in Dataloaders.
    """
    inputBatch = (pad_sequence([data[0][0] for data in dataBatch]),
                  pad_sequence([data[0][1] for data in dataBatch]))
    if None not in [data[1] for data in dataBatch]:
        targetBatch = torch.cat([data[1] for data in dataBatch])
    else:
        targetBatch = None

    inputLenBatch = torch.stack([data[2] for data in dataBatch])
    if None not in [data[3] for data in dataBatch]:
        targetLenBatch = torch.stack([data[3] for data in dataBatch])
    else:
        targetLenBatch = None

    return inputBatch, targetBatch, inputLenBatch, targetLenBatch

@arunm95, my latest pull request fixes both of these issues:

For the invalid comparison, replace

if None not in [data[1] for data in dataBatch]:

with

if not any(data[1] is None for data in dataBatch):

and do the same for the data[3] check a few lines below.
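The reason the original line blows up: Python's "in" operator compares with ==, and comparing a Tensor with None dispatches to Tensor.eq(), which rejects NoneType on the torch version in the traceback (newer versions simply return False). Identity checks sidestep that dispatch entirely. A quick standalone illustration:

import torch

t = torch.tensor([1.0, 2.0])

# None in [t] is evaluated as t == None, which dispatches to Tensor.eq()
# and raises the TypeError above on the torch version from the traceback
try:
    print(None not in [t])
except Exception as err:
    print(type(err).__name__, err)

# identity checks avoid Tensor.eq() entirely
print(not any(x is None for x in [t]))        # True  -> every target present
print(not any(x is None for x in [t, None]))  # False -> a target is missing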

@lordmartian I am trying to reproduce the results you post in this (very nice) repo. Could you provide more details on the hyperparameters (learning rate, number of words, number of epochs, etc.) for the different pre-training and training steps you followed for the AV model?

@mlomnitz I have used the same hyperparameter values as given in the paper. I plan to add a document to the repo in the near future that lists the important implementation details, so that it is easy for everyone to train from scratch. For now, you can go through the closed issues of the repo; I have mentioned many of the implementation details in the comments there. To my knowledge, two people have used the code and achieved results close to mine. If something you would like to know is missing, kindly open an issue here and I'll be happy to answer.

@arunm95 you can use @mlomnitz's solution. Please let me know if you face any other issues. It seems my last commit introduced some bugs; I'll go over the changes and rectify them in the next commit.

I have resolved both of these bugs in the latest commit.
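For reference, putting the two fixes together gives a collate_fn roughly like the following (assembled from the snippets in this thread; the committed version may differ in details):

import torch
from torch.nn.utils.rnn import pad_sequence

def collate_fn(dataBatch):
    """
    Collate function definition used in Dataloaders.
    """
    # pad the audio and video streams separately and keep them paired as a tuple
    inputBatch = (pad_sequence([data[0][0] for data in dataBatch]),
                  pad_sequence([data[0][1] for data in dataBatch]))

    # identity checks instead of `None in [...]`, which would call Tensor.eq()
    if not any(data[1] is None for data in dataBatch):
        targetBatch = torch.cat([data[1] for data in dataBatch])
    else:
        targetBatch = None

    inputLenBatch = torch.stack([data[2] for data in dataBatch])
    if not any(data[3] is None for data in dataBatch):
        targetLenBatch = torch.stack([data[3] for data in dataBatch])
    else:
        targetLenBatch = None

    return inputBatch, targetBatch, inputLenBatch, targetLenBatch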

@lordmartian That change resolved the error. However, I had set the number of workers in the config file to 0 to avoid a multiprocessing error that was masking the original one. Setting it to anything other than 0 still causes the error below (the output of the worker process and the main process was interleaved; the two tracebacks are separated here):

File "", line 1, in
File "D:\Users\arunm\anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\Users\arunm\anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "D:\Users\arunm\anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\Users\arunm\anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\Users\arunm\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Users\arunm\PycharmProjects\AV_Speech_Recognition\audio_visual\pretrain.py", line 106, in
trainingLoss, trainingCER, trainingWER = train(model, pretrainLoader, optimizer, loss_function, device, trainParams)
File "D:\Users\arunm\PycharmProjects\AV_Speech_Recognition\audio_visual\utils\general.py", line 39, in train
for batch, (inputBatch, targetBatch, inputLenBatch, targetLenBatch) in enumerate(tqdm(trainLoader, leave=False, desc="Train", ncols=75)):
File "D:\Users\arunm\anaconda3\lib\site-packages\tqdm\std.py", line 1107, in iter
for obj in iterable:
File "D:\Users\arunm\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 278, in iter
return _MultiProcessingDataLoaderIter(self)
File "D:\Users\arunm\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 682, in init
w.start()
File "D:\Users\arunm\anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
Traceback (most recent call last):
File "D:/Users/arunm/PycharmProjects/AV_Speech_Recognition/audio_visual/pretrain.py", line 106, in
_check_not_importing_main()
File "D:\Users\arunm\anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
trainingLoss, trainingCER, trainingWER = train(model, pretrainLoader, optimizer, loss_function, device, trainParams)
File "D:\Users\arunm\PycharmProjects\AV_Speech_Recognition\audio_visual\utils\general.py", line 39, in train
for batch, (inputBatch, targetBatch, inputLenBatch, targetLenBatch) in enumerate(tqdm(trainLoader, leave=False, desc="Train", ncols=75)):
File "D:\Users\arunm\anaconda3\lib\site-packages\tqdm\std.py", line 1107, in iter
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
for obj in iterable:

File "D:\Users\arunm\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 278, in iter
return _MultiProcessingDataLoaderIter(self)
File "D:\Users\arunm\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 682, in init
w.start()
File "D:\Users\arunm\anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "D:\Users\arunm\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

I have created a new issue #11 for this error.
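For anyone who lands here with the same RuntimeError: on Windows, DataLoader workers are started with spawn, which re-imports the main module, so everything that creates the workers must sit behind the standard main guard, exactly as the error message says. A minimal self-contained illustration of the idiom (not the repo's code):

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # anything that creates DataLoader workers must run inside the guard
    dataset = TensorDataset(torch.randn(8, 4))
    loader = DataLoader(dataset, batch_size=2, num_workers=2)
    for (batch,) in loader:
        print(batch.shape)

# on Windows, spawn re-imports this module in each worker; without the
# guard the re-import re-runs the loop above and raises the RuntimeError
if __name__ == "__main__":
    main()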