[IDEA] Phase/Polarity adjust
Hi there,
dunno if it falls within the scope of the project, but often, after alignment, some phase/polarity "errors" can degrade the recording.
Here are a few interesting resources about those issues:
- https://www.uaudio.com/blog/understanding-audio-phase
- https://www.izotope.com/en/learn/5-ways-to-adjust-phase-after-recording.html
- https://www.harrisonconsoles.com/mixbus/mixbus32c-5-live-manual/1/en/topic/polarity-maximizer
Dunno if these tools might help...
- @csteinmetz1's (JUCE) PhaseAnalyzer;
- @x42's phaserotate.lv2;
- @nullstar's (VST) KickFace;
- @conundrumer's A4PC;
- @victormassatieze's phase_reconstruction;
- @zied-mnasri's phase_retrieval;
- @hgroenenboom's Phase Rotation Experiment;
Last but not least, here's some very interesting research about phase recovery by @magronp:
Phase recovery with Bregman divergences for audio source separation
Hope that inspires!
Thank you for the references!
In what context do the phase/polarity errors occur?
The spectrogram-based recognizers (everything except the correlation recognizer) are much too inaccurate time-wise to account for phase; they are only accurate to about 0.04 seconds with the default settings.
I would think that the correlation recognizer would properly handle phase? Especially if you were to use the correlation recognizer during the fine_align step?
Could you send audio files and describe the code methods so I can reproduce the issue?
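For reference, here's a rough standalone sketch (plain numpy/scipy, not audalign's actual implementation) of why waveform cross-correlation is phase-aware: it estimates the offset at single-sample resolution, far finer than the ~0.04 second resolution of the spectrogram-based recognizers.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_offset_seconds(reference, target, sample_rate):
    """Lag (in seconds) by which `target` is delayed relative to `reference`."""
    corr = correlate(target, reference, mode="full")
    lags = correlation_lags(len(target), len(reference), mode="full")
    return lags[np.argmax(corr)] / sample_rate

# Toy check: one second of noise delayed by 123 samples is recovered exactly.
sr = 48_000
rng = np.random.default_rng(0)
ref = rng.standard_normal(sr)
tgt = np.concatenate([np.zeros(123), ref])
print(estimate_offset_seconds(ref, tgt, sr))  # 123 / 48000 = 0.0025625 s
```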
Well, as explained very well in the resource articles above, phase/polarity issues occur when a live audio performance is multitracked from two or more different source points.
Check out this photo:
As you can easily understand, after syncing all the audio tracks (especially the cameras' ones), it is quite likely that phase cancellations will be generated.
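Just to make the effect concrete, here is a tiny toy example (my own sketch, not code from any of the tools above) of the comb filtering you get when a track is summed with a slightly delayed copy of itself:

```python
import numpy as np

sr = 48_000
delay = int(0.001 * sr)                # 1 ms path difference between two "mics"
t = np.arange(sr) / sr

for freq in (500, 1000):
    tone = np.sin(2 * np.pi * freq * t)
    mixed = tone + np.roll(tone, delay)
    print(freq, "Hz summed RMS:", round(float(np.sqrt(np.mean(mixed ** 2))), 3))

# 500 Hz (half-period delay) cancels almost completely,
# while 1000 Hz (full-period delay) is reinforced to roughly double.
```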
Here's an interesting explanation video about phase/polarity:
Note that some commercial A/V sync software - PluralEyes, for example - automatically performs audio drift correction (a kind of phase/polarity fix?) when needed.
Again, dunno if it falls within the scope of the project, but it would certainly be very useful to have.
Last but not least, I hope that some of the authors of the phase correction software mentioned above can share their expertise on phase-fixing techniques/details.
Thanks in advance.
Have you observed this problem with specific audio files or using specific recognizers?
Again, correlation should accurately account for phase. The other recognizers are way too inaccurate time-wise, because of the spectrograms, for phase issues to be an addressable concern.
I don't think addressing polarity in audalign would yield meaningful results and that it would be better addressed in a DAW afterward.
Are you suggesting that phase alignment be applied after every alignment?
With multi-mic recordings, it is rather common to flip the polarity of some channels. A common example is snare drum top/bottom mic'ing. Phase cancellation (comb filtering) is rather obvious in that case. In other cases it can be more subtle, e.g. when using a figure-8 mic.
One can aid detection of which polarities to invert by correlating channels. Harrison Mixbus for example has a built-in tool for this: https://youtu.be/f_f8G5tnkfk?t=272
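For illustration, here is a minimal sketch of that correlation idea (not Mixbus's actual code), assuming the tracks have already been time-aligned:

```python
import numpy as np

def suggest_polarity_flips(reference, channels):
    """True for each channel whose correlation with the reference is negative."""
    flips = []
    for ch in channels:
        n = min(len(reference), len(ch))
        r = np.corrcoef(reference[:n], ch[:n])[0, 1]
        flips.append(bool(r < 0))     # strongly negative -> suggest inverting
    return flips

# A channel flagged True would simply be multiplied by -1 (or flipped in the DAW).
```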
Phase rotation or sub-sample alignment is of no real concern for alignment.
I don't know audalign, so I cannot judge if such a feature would be better addressed there or in a DAW.
However, IIRC Sonic Lineup (https://www.sonicvisualiser.org/sonic-lineup/index.html) takes phase into account, but I'm pretty sure that it does not match polarities either.
I have recorded dozens of live music shows, but I have only been able to hear phase issues with my own ears at big-stage or classical music performances.
Of course, phase and/or polarity correction/optimization must be performed AFTER alignment - that's why I was wondering if it could fit within the project's scope - but they should certainly be user-selectable additional options and not applied by default.
@x42 Thanks for your interesting contributions, which allowed me to discover Music Alignment Tool CHest!
Thanks again for the resources! I'll definitely look into a post-processing phase alignment function
Bump.
Is "AI" your friend ?
This interesting @karisigurd4's deep learning project that aims to solve this problem by leveraging deep learning techniques to automatically correct phase discrepancies:
StereoPhaseNet: Phase Correction for Stereo Audio Using Deep Learning
Hope to test it soon!
Looks neat! I'll see if I can incorporate it
Well, you may be able to incorporate the inferencer but not the trainer (I believe), so it might not be that good a move.
For non-AI software like audalign, I would go with the "classic" approach...
...for example @andmholt's Phase Align could be a good starting point.
Anyway, I've added some more resources in HyMPS project \ AUDIO \ Treatments \ Phasing if you need them.
EDIT
About AI phase aligners, I would ask @harveyf2801, who seems to be one of the most expert users (here at GH) in this field.
Thanks for the mention! My university dissertation focused on comparing DSP versus AI-based auto phase alignment techniques. I explored a variety of methods, including:
- Phase difference analysis
- Cross-correlation and cross-spectrum analysis (see the sketch after this list)
- Reinforcement learning and black-box modeling to fine-tune all-pass filters
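As a toy illustration of the cross-spectrum approach from the list above (just a sketch, not my dissertation code; the function name is made up for the example):

```python
import numpy as np

def phase_difference(x, y, sample_rate, n_fft=4096):
    """Per-frequency phase offset (radians) between two already-aligned channels."""
    X = np.fft.rfft(x[:n_fft])
    Y = np.fft.rfft(y[:n_fft])
    cross = X * np.conj(Y)                         # cross-spectrum
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    return freqs, np.angle(cross)                  # angle(X) - angle(Y) per bin
```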
If you're interested in AI approaches, I highly recommend checking out this repository: https://github.com/abargum/diff-apf - their work is a great resource.
My own repo still needs a bit of cleanup (I've not touched it since leaving university), but it contains all the tools and information necessary for building a DNN that leverages phase features as an input, to then output the all-pass filter parameters: https://github.com/harveyf2801/DNNAutoAlign. Please check out any of my 'AutoAlign' repos for examples of alignment techniques.
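And here is a rough DSP-side sketch of the all-pass idea (again just a toy, not code from either repository): a first-order all-pass filter shifts phase without changing magnitude, so tuning its coefficient lets you rotate one channel's phase toward a reference. Here the coefficient comes from a simple grid search instead of a DNN, and both signals are assumed to be the same length and already time-aligned:

```python
import numpy as np
from scipy.signal import lfilter

def first_order_allpass(x, a):
    """First-order all-pass: H(z) = (a + z^-1) / (1 + a*z^-1), with |a| < 1."""
    return lfilter([a, 1.0], [1.0, a], x)

def best_allpass_coeff(reference, target, grid=np.linspace(-0.95, 0.95, 191)):
    """Grid-search the coefficient whose filtered output best matches the reference."""
    scores = [np.dot(reference, first_order_allpass(target, a)) for a in grid]
    return float(grid[int(np.argmax(scores))])
```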
Feel free to reach out if you have any questions or need further guidance!