speechbrain/speechbrain

AMP at inference time

lucadellalib opened this issue · 3 comments

Describe the bug

I think we should have a separate flag to enable AMP at test time, since the change in #2406 is not backward compatible. Typically you want to train in fp16 but test in fp32; lowering precision at inference can hurt accuracy and cause unexpected behavior.
For example, I'm saving some samples at test time, and with the new code and --precision=fp16 my pipeline stops working:

>>> write_audio("sig_pred.wav", sig_pred, 16000)
RuntimeError: Input tensor has to be one of float32, int32, int16 or uint8 type.
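For illustration, the failure mode can be sketched without torch: the audio writer accepts only a handful of dtypes, so a half-precision tensor produced under AMP is rejected unless it is cast back to float32 first. The names below (`SAVEABLE_DTYPES`, `check_saveable`) are illustrative stand-ins, not the actual SpeechBrain API.

```python
# Sketch of the dtype check behind the RuntimeError above (assumption:
# dtypes are modeled as strings; in practice they are torch dtypes).
SAVEABLE_DTYPES = {"float32", "int32", "int16", "uint8"}

def check_saveable(dtype: str) -> str:
    """Return a dtype the writer accepts, casting fp16 back to float32."""
    if dtype in SAVEABLE_DTYPES:
        return dtype
    if dtype == "float16":
        # The workaround: an explicit .float() cast before write_audio.
        return "float32"
    raise RuntimeError(
        "Input tensor has to be one of float32, int32, int16 or uint8 type."
    )

print(check_saveable("float16"))  # float32
```

In other words, the pipeline can be patched with an explicit cast, but that pushes the burden onto every user who saves tensors at test time, which is why a dedicated flag is preferable.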

Expected behaviour

An option to set the AMP configuration at inference time.

To Reproduce

No response

Environment Details

No response

Relevant Log Output

No response

Additional Context

No response

@Adel-Moumen, what do you think about that?

I think we can add a new flag that controls whether we apply it at test time or not. We can improve the data class https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L90 to take new args such as enable_testing, so that by default enable_testing=False and we can turn it to True using a new flag like --amp_testing=True or something like that. What do you think?
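A minimal sketch of that proposal, assuming a simplified stand-in for the data class (the field names `precision` and `enable_testing` here are assumptions, not the current SpeechBrain API):

```python
from dataclasses import dataclass

@dataclass
class AMPConfig:
    """Hypothetical extension of the AMP config with a test-time switch."""
    precision: str = "fp32"
    enable_testing: bool = False  # apply AMP during evaluation too?

    def dtype_for(self, stage: str) -> str:
        """Effective precision for a stage ('train' or 'test')."""
        if stage == "test" and not self.enable_testing:
            return "fp32"  # default: always evaluate in full precision
        return self.precision

cfg = AMPConfig(precision="fp16")
print(cfg.dtype_for("train"))  # fp16
print(cfg.dtype_for("test"))   # fp32
```

With this shape, existing recipes keep their current fp32 evaluation behavior unless they opt in with the new flag, which restores backward compatibility.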

CC: @asumagic you might want to be part of this discussion.

Maybe just an --eval-precision flag that defaults to fp32?
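That alternative could look roughly like this with argparse (a hedged sketch; the flag names and choices are illustrative, not SpeechBrain's actual CLI):

```python
import argparse

# Two independent precision flags: one for training, one for evaluation,
# with evaluation defaulting to fp32 so old behavior is preserved.
parser = argparse.ArgumentParser()
parser.add_argument("--precision", default="fp32",
                    choices=["fp32", "fp16", "bf16"])
parser.add_argument("--eval-precision", default="fp32",
                    choices=["fp32", "fp16", "bf16"])

args = parser.parse_args(["--precision", "fp16"])
print(args.precision)       # fp16 (training)
print(args.eval_precision)  # fp32 (evaluation stays full precision)
```

Compared with a boolean --amp_testing switch, a separate --eval-precision flag also lets users pick a different low precision for evaluation than for training.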