Provide a simple Audio Visualizer
oseiasmribeiro opened this issue · 97 comments
Provide a simple Audio Visualizer (showing low, medium and high frequencies) that offers the possibility of increasing or decreasing the number of bars. This is useful for letting the user confirm that audio is actually being played or recorded in the application, since sometimes the volume or mic level can be at a minimum.
I also think this feature would be a good idea, although it is not necessarily the highest on my personal priority list. I will of course welcome pull requests.
The underlying API on the Android side to implement this is:
https://developer.android.com/reference/android/media/audiofx/Visualizer
This API is relatively straightforward to use.
However, on iOS, it does not appear to be as straightforward.
Thanks for your answer! I look forward to this feature.
- Someone made a plugin (Android only): FlutterVisualizers
- And there is this iOS implementation: DisPlayers-Audio-Visualizers, written in Objective-C
@sachaarbonel thanks!
Any chance you might add this? It's not present in any sound libraries for Flutter currently, and would be really nice to be able to build visualizers.
Edit: looking into it a bit further, I think we just need this value on iOS. Is that a big lift?
https://developer.apple.com/documentation/avfoundation/avaudioplayer/1390838-averagepowerforchannel?language=objc
That would allow for a rudimentary visualiser, although probably what we want is something equivalent to Android's API, so we'd want to do a FFT on the audio signal.
After accidentally stumbling upon it, it seems there is a way to do this. First, we create an AVMutableAudioMix and set it in AVPlayerItem.audioMix. To this audio mix's inputParameters array we add an instance of AVMutableAudioMixInputParameters, and on this instance we can access the audioTapProcessor, through which it should be possible to analyse the audio signal and do the FFT.
@ryanheise I think the best solution for audio visualization is providing a clean way to subscribe or retrieve the samples/buffers of audio data. Providing a way to pull in the raw PCM data would allow for more than just FFT analysis but would open the door to nearly any other type of audio analysis.
The audioTapProcessor could be the way to access the raw data on iOS, and then the Renderer in ExoPlayer could be used to access the raw data on Android. I imagine the API would be as simple as a stream on the audio player called rawDataStream or sampleStream or something akin to that. What are your thoughts on that?
@pstromberg98 that's possibly a good idea. Looking at the Android Visualizer API, it actually provides both the FFT data and the waveform data, so we could do the same.
Although you could argue we only need the latter, both Android and iOS provide us accelerated implementations of FFT which we should take advantage of.
@ryanheise Totally! It would be super nice for the library to provide the fft and I don’t see any harm in providing that. I was mainly saying in the case of having either one or the other it would probably be better to provide the raw waveform just in case users wanted to do other analysis and transforms on the data. But having both would be slick!
I've just implemented the Android side on the visualizer branch. Call:
samplingRate = player.startVisualizer(
    enableWaveform: true,
    enableFft: true,
    captureRate: 10000,
    captureSize: 1024);
Then listen to data on visualizerWaveformStream and visualizerFftStream. The returned sampling rate can be used to interpret the FFT data. Stop the capturing with player.stopVisualizer().
The waveform data is in 8 bit unsigned PCM (i.e. subtract 128 from each byte to get it zero-centred). The FFT is in the Android format, and I'm not sure yet whether the iOS native format will be different, so that particular part of the API may be subject to change.
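As a rough illustration only (not part of the plugin, and assuming each event exposes the raw bytes via an event.data field as in the example), the capture could be zero-centred like this:

// Sketch only: zero-centre the 8-bit unsigned PCM waveform capture and
// normalise it to the range -1.0..1.0 before drawing.
player.visualizerWaveformStream.listen((event) {
  final samples = event.data.map((b) => (b - 128) / 128.0).toList();
  // `samples` can now be drawn, e.g. by a CustomPainter.
});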
It may take a bit longer for me to get the iOS side working, but in the meantime would anyone consider contributing a pull request that adds a very minimalistic visualiser widget to the example, demonstrating how to make use of the data in visualizerWaveformStream?
Information on how to interpret the Android FFT data:
https://developer.android.com/reference/android/media/audiofx/Visualizer#getFft(byte[])
https://stackoverflow.com/questions/4720512/android-2-3-visualizer-trouble-understanding-getfft
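To make that format concrete, here is a small Dart sketch of how per-bin magnitudes could be derived from the returned byte array. This just follows the layout in the Android docs linked above; it is not an API provided by this branch.

import 'dart:math';

// Sketch only: the Android FFT byte layout is fft[0] = Re(0), fft[1] = Re(n/2),
// then alternating Re(k), Im(k) pairs. Bytes are signed, so re-interpret
// values above 127 as negative before computing each bin's magnitude.
List<double> fftMagnitudes(List<int> fft) {
  int signed(int b) => b > 127 ? b - 256 : b;
  final magnitudes = <double>[
    signed(fft[0]).abs().toDouble(), // DC component
    signed(fft[1]).abs().toDouble(), // Nyquist component
  ];
  for (var k = 2; k + 1 < fft.length; k += 2) {
    final re = signed(fft[k]).toDouble();
    final im = signed(fft[k + 1]).toDouble();
    magnitudes.add(sqrt(re * re + im * im));
  }
  return magnitudes;
}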
As for iOS, some ideas for implementation:
https://stackoverflow.com/questions/22751685/using-mtaudioprocessingtap-for-fft-on-ios
https://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/ (not exactly relevant)
Some official Apple stuff:
https://developer.apple.com/documentation/avfoundation/avplayeritem/1388037-audiomix?language=objc
https://developer.apple.com/documentation/avfoundation/avaudiomix/1388791-inputparameters?language=objc
https://developer.apple.com/documentation/avfoundation/avmutableaudiomixinputparameters?language=objc
https://developer.apple.com/documentation/avfoundation/avmutableaudiomixinputparameters/1389296-audiotapprocessor?language=objc
https://developer.apple.com/forums/thread/11294 (forum post)
https://codedump.io/share/KtEfM7VG0wrL/1/how-do-you-release-an-mtaudioprocessingtap (releasing tap)
I've changed the API slightly to include the sampling rate in the captured data, and included a very simple visualiser widget in the example. The iOS side will be more difficult and I have some higher priority things to switch to for the moment, but help is welcome. In particular, you could give feedback on the API, contribute a better visualiser widget example, or even help get started on the iOS implementation using the documentation linked above.
To those interested in this feature, would you like a separate method to start the request permission flow to record audio for the visualizer? Currently startVisualizer() will start this flow when it detects that permission hasn't already been granted, but perhaps some apps would like more control over when permission is to be requested.
@ryanheise Personally I like when libraries provide more fine-grained control, but I can see the argument for both. Perhaps it would be best to have startVisualizer() request permissions (if needed) by default but also have a way to request the permission separately.
I second the comment from @pstromberg98
I've made an initial implementation of the visualizer for iOS. Note that this is definitely not production ready. Some problems to investigate:
- Against Apple recommendations, I allocate an NSData buffer for each capture inside the TAP. You might want to keep an eye on memory usage in case this causes a leak.
- I've only crudely converted the samples into 8-bit unsigned PCM. To make it look closer to Android, I fudged the scaling by 3x'ing every sample. I'm not really sure what Android is doing under the hood, so I can't emulate it exactly.
- There's a chance this might not work with different audio formats, and I may need to add special cases for the different formats.
- FFT is not implemented yet, only the raw waveform data.
Thanks, @pstromberg98 for the suggestion. I agree, and I'll try to implement that.
As before, I unfortunately need to work on some other issues for a while, particularly null safety. But hopefully this commit provides a good starting foundation to build on.
Contributions are also welcome, so here is the "help wanted":
- FFT code
- A better visualizer widget
- Testing on iOS for memory leaks
- Testing on iOS for support of various audio formats
Has anyone been able to give this a whirl yet? I think this would be a really useful feature to include in the next release, so I'd like to make it a priority, although for that to happen, it would definitely help to get some feedback on the iOS side in terms of memory efficiency and stability. I will of course eventually add the option to start the permission flow on Android on demand, but I think the iOS stability will be the most critical thing to be confident about before I include this in a release, along with the iOS FFT implementation.
Of course, I could just document it as being experimental and unstable, and release it that way, which might actually not be a bad idea to get more eyes on it.
@ryanheise If I can find time I will take a look at the iOS side and give my thoughts and feedback. I appreciate your efforts on it so far and am eager to jump in and help when I find time 👍.
I think marking the feature as experimental would make a lot of sense.
I pulled it over and merged the changes to check out the Android version, but can't seem to get it working.
Mic permission is granted (although the app crashes after the prompt is accepted), but it prevents my player from playing anything. I even tried wrapping it in a future to ensure the player has data before calling. Any ideas?
Future.delayed(Duration(seconds: 2), () {
  var samplingRate = activeState.player.startVisualizer(
      enableWaveform: true,
      enableFft: true,
      captureRate: 48000,
      captureSize: 1024);
  activeState.player.visualizerWaveformStream.listen((event) {
    print(event);
    this.add(AudioManagerVizUpdateEvent(vizData: event.data));
  });
});
Thanks for testing that. It turns out there is another change to the behaviour of ExoPlayer in that onAudioSessionIdChanged is not called initially for the first value. I've done the merge and fixed this issue in the latest commit.
Perhaps. With the null safety release of Flutter soon to reach stable, I'm not sure if I'd like to do this before or after that. Currently I'm maintaining two branches which is a bit inconvenient to keep in sync.
We'll see how things pan out but first I may need to focus on getting the null safety releases ready.
Thanks for jumping on this. Things are working great so far, but the fft buffer visual is a bit different than I expected based on using my custom visualizer with other fft sources. I'll try to take a deeper look and report back the bug if I find it.
That's basically just the raw output from the Android API which is documented here:
https://developer.android.com/reference/android/media/audiofx/Visualizer#getFft(byte[])
So it's possible you might get weird output unless you interpret that byte array as per the above documentation.
Is this supported on iOS currently? I tried using the visualizer branch, but I keep getting an error: flutter: setPitch not supported on this platform
Is this supported on iOS currently?
The waveform visualizer is implemented on iOS but not pitch. You can track the pitch feature here: #329
There is a big question at this point whether to continue with the current AVQueuePlayer-based implementation or switch to an AVAudioEngine-based implementation. For pitch scaling, I really want to take advantage of AVAudioEngine's built-in features, but that requires a rewrite of the iOS side - see #334 and this is a MUCH bigger project.
I would really like to see an AVAudioEngine-based solution see the light of day, but it will probably not happen if I work on it alone. If anyone would like to help, maybe we can pull it off with some solid open source teamwork. One of the attractive solutions is to use AudioKit which is a library built on top of AVAudioEngine which also provides access to pitch adjustment AND provides a ready-made API for a visualizer and equalizer. That is, it provides us with everything we need - BUT it is written in Swift and so that involves a language change and it means we may need to deal with complaints that old projects don't compile (we'd need to provide extra instructions on how to update their projects to be Swift-compatible).
Would anyone like to help me with this? (Please reply on #334)
- I will copy this comment to related issues.
@ryanheise
How do I use this experimental wave visualizer?
Hi @hemanthkb97 . Clone this repo and check out the visualizer branch. Inside, the example/ directory has been modified to demonstrate how to use the API. There are also some comments above on things to be tested on iOS, though Android should be reliable. I plan to rewrite the iOS implementation on top of AudioKit.
@ryanheise
Thank you so much for the super fast reply.
I tried it out but got an error like this:
The following MissingPluginException was thrown while de-activating platform stream on channel com.ryanheise.just_audio.waveform_events.188f771f-2570-46fe-be90-7b07198f3587:
full error:
I/ExoPlayerImpl( 5453): Init 8b35ed1 [ExoPlayerLib/2.13.1] [generic_x86_arm, sdk_gphone_x86, Google, 30]
════════ Exception caught by services library ══════════════════════════════════
MissingPluginException(No implementation found for method listen on channel com.ryanheise.just_audio.waveform_events.2b0b1792-cd7f-4b16-88bd-ab92c7de5799)
════════════════════════════════════════════════════════════════════════════════
════════ Exception caught by services library ══════════════════════════════════
MissingPluginException(No implementation found for method listen on channel com.ryanheise.just_audio.fft_events.2b0b1792-cd7f-4b16-88bd-ab92c7de5799)
════════════════════════════════════════════════════════════════════════════════
E/flutter ( 5453): [ERROR:flutter/lib/ui/ui_dart_state.cc(199)] Unhandled Exception: MissingPluginException(No implementation found for method startVisualizer on channel com.ryanheise.just_audio.methods.2b0b1792-cd7f-4b16-88bd-ab92c7de5799)
E/flutter ( 5453): #0 MethodChannel._invokeMethod
E/flutter ( 5453):
E/flutter ( 5453): #1 MethodChannelAudioPlayer.startVisualizer
E/flutter ( 5453):
E/flutter ( 5453): #2 AudioPlayer._setPlatformActive.
E/flutter ( 5453):
E/flutter ( 5453):
What is the reproducible project? Is it the provided example?
@ryanheise
Sorry, I didn't copy the files properly. Now it's working fine. 👍🏼
One last question: does this work on iOS?
thank you so much.
As per the above comment, I am asking people to test it on iOS and let me know whether or not it's working for them.
@ryanheise
I can confirm that the 1e03327 example from the visualizer branch works on iOS (simulator).
Is it possible to visualize the amplitude instead of the "frequency"? Something more like the SoundCloud visualizer.
Great project by the way, keep it up!
This is very nice! I was just about to create some audio visualization in a production app using this plugin, and I couldn't help but wonder: is the sending of samples to Dart working?
This would help me a lot, as it would save me from having to decode my input file twice (once in just_audio and once in native code to retrieve the raw samples) to show some FFT-derived visualizations.
I can work with the samples already, as I also ported a Rust based FFT implementation with good enough performance for my POC.
Hi @Piero512 , yes sample data is working on iOS and Android, while FFT is currently on Android only (but you could do your own FFT in Dart I suppose). Have you tried the example as mentioned above?
No, sorry, I haven't. Will check once I find free time. Mind linking it directly (or do you have a FAQ/summary) in the top post?
Other Flutter FFT implementations, just for reference:
https://pub.dev/packages/audio_visualizer (Seems abandoned)
https://github.com/Eittipat/audio_visualizer
https://pub.dev/packages/flutter_visualizers (Seems abandoned)
https://github.com/iamSahdeep/FlutterVisualizers
Thanks, @dfdgsdfg . FYI, just_audio already captures the samples on Android and iOS, and does the FFT on Android but not on iOS. So of these two packages above, flutter_visualizers is now obsolete since just_audio does all that it does plus iOS, while audio_visualizer could still be useful to add the FFT layer on top of just_audio's iOS sample capture.
But probably it would be better to run the FFT in the native code. Not only will it be faster, but it will also be more accurate since the sample data that is used for visualization is typically at a much lower resolution.
Although in any case the plan is still to leave this visualizer branch as an unofficial branch (still usable, but you must use it as a git dependency) until I can redo it as an AVAudioEngine-based implementation.
Is there a visualization option for viewing the sound in its entirety? I don't know the exact term, but this is often seen in audio editing apps like Ableton.
If not, is there an output in the API I can use to generate that?
@cedvdb no, the visualizer is intentionally a low res view of the samples specifically for the use case of realtime visualizers used in audio player apps (i.e. to visualise what you are "currently" hearing). If we transport the samples in full resolution over the platform channels in realtime, I suspect we'll hit a performance limit of Flutter's platform channels. There are some developments with FFI which may allow this down the track but for now that is not a goal. If you are building an app like ableton, then that is also not a current goal since just_audio focuses on playing audio and not editing (but who knows what the future may hold).
Now, your use case of viewing the entire soundwave in its entirety would best be handled by its own plugin. You would basically have to parse and decode the entire file, which may take quite a while depending on the length of the audio. For example, on some Android phones without optimised decoders, it might take several minutes to decode an entire file. This is generally not a problem for playback because normally a decoder only needs to decode "just in time".
I could create such a plugin if people want this, but I don't know how many people would want it so I don't know whether it is worth the effort. As you can expect, I am already quite stretched developing my current plugins.
Can I ask, do you just want to be able to visualise the sound wave of the entire file, and be able to view different parts of that sound wave depending on where the user seeks to? For that use case, you don't actually need full resolution either. The actual sample rate of most audio is much higher than what the human eye needs to visualise a span of audio that fits within the width of a mobile screen. So such a plugin would probably have an option for how many samples per pixel, or something like that.
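To illustrate the samples-per-pixel idea, a hypothetical helper for such a (not yet existing) plugin could reduce the decoded samples to one min/max pair per pixel column, roughly like this (purely illustrative names and structure):

import 'dart:math';

// Sketch only: collapse full-resolution samples (-1.0..1.0) into one
// min/max pair per horizontal pixel, which is all a waveform display needs.
List<List<double>> downsampleForDisplay(List<double> samples, int pixels) {
  final bucketSize = max(1, (samples.length / pixels).ceil());
  final columns = <List<double>>[];
  for (var i = 0; i < samples.length; i += bucketSize) {
    final bucket = samples.sublist(i, min(i + bucketSize, samples.length));
    columns.add([bucket.reduce(min), bucket.reduce(max)]);
  }
  return columns;
}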
The use case I had was to let the user record his voice and then clip the recording where he wanted to (the visualizer would help with that). However, after some more research on what I wanted to create, that might not be necessary, for reasons I won't explain.
You often see such visualizers, in a low-resolution version, in players such as SoundCloud and the like, so I'm not sure it should take a minute to process? Maybe there is a way to "skip" parts of the file depending on how big it is and the precision required. You can only fit so much data on a screen, so there must be a precision aspect, or a zoom aspect, to it. Excuse my lack of vocabulary in this domain; I don't know it very well.
I agree that this could be done by another package, but I would not do it currently. There are so many things that could be done for sound, but it remains to be seen how successful Dart is going to be. For example, on the web, if I recall correctly, there is a good API for creating synthesized sounds as well (sine waves etc.) and adding effects. This makes it theoretically possible to create an Ableton clone. However, this is a playback library, not a synthesizer library. All those things related to sound are more niche, and I'm not sure there is demand for them, except from a few.
To clarify:
- A visualizer continuously captures a split "moment" (e.g. 100 milliseconds) and supports the creation of visual animations that respond in real time to the audio signal. This can be done in real time because we tap into the audio decoder WHILE it's playing audio and allow the app to convert those samples into some realtime animation.
- A waveform display captures a much longer range segment of audio, typically spread along an x-axis. Since you typically want to be able to see the wave form AHEAD of the current position that you're hearing, it is not enough to just tap into the audio decoder. You need to actually scan ahead and decode parts that you're not playing.
These two things are fundamentally different, and it does not make much sense to shoehorn the second feature into just_audio, but it does make sense to make it a separate plugin since after all it will need to use its own decoder. It won't be able to share the decoder with the one just_audio currently uses for real-time decoding. just_audio's decoder essentially happens at a pace in line with the current playback speed, and for a real-time visualizer, that's exactly the same thing. But for a waveform display, you typically want a decoder to operate at a much faster rate. Well, basically, as fast as possible, so that you can see ahead, not just see the current 100ms window.
Anyway, let's not debate the technical side of things, as I will make the implementation decisions. In this case, I would put it in a separate plugin because that would be easier. The question I put to you in my previous comment was, do you want a waveform display? If so, I can create that. But first, how many other people would want it?
Note that the way apps like SoundCloud actually work is that, because the decoding process is very expensive, they do it once when the audio file is imported and then cache the resulting waveform image. Once cached, you then essentially have random access to any point in the waveform, allowing you to jump to and zoom into any segment instantly to display it.
Yes I understood you the first time
The question I put to you in my previous comment was, do you want a waveform display? If so, I can create that. But first, how many other people would want it?
Yes but I can live without.
I have created a separate issue for the waveform display: #507
If you are interested in that feature, please vote on it over there.
Awesome work, @Eittipat ! I've left some comments over on the PR, and we can discuss it there. I felt the same way about the fragility of the whole thing when I wrote the waveform visualizer, and it definitely needs a lot of testing before I can feel confident about merging it into stable, or at least ensuring that when the visualizer is not running, the rest of the just_audio functionality will be unaffected (one complication is that the tap may be useful to activate for other features besides the visualizer, such as audio panning). But as long as this branch exists, people who absolutely need the visualizer can still use it (and test it :-) ).
I like that you've modified the iOS FFT data to match the Android format. Something similar is necessary also for the waveform visualizer data, but unfortunately the Android documentation doesn't actually tell us exactly how this waveform data has been scaled and I've just done a simple linear scale approximation.
Hi @ryanheise, first of all thanks for the great plugin; I'm looking forward to having this audio visualizer.
I've got another question for you. I have some live stream audio and I'm using HlsAudioSource, and it is working perfectly, but I need to create a wave effect like in Google Meet, which shows the decibel level as a wave effect when you talk (you can see it in the video). Is there any way to achieve this? I only need to know the decibel level of the currently playing audio (converted to a double, from 0.0 to 1.0 for example; it would be great to have a Stream<double> decibelLevelStream for showing basic sound changes).
I thought I could achieve this by getting the live stream audio data with the HTTP package as a byte array, somehow getting the decibel level of the current data, and then passing that data to your just_audio package for playback.
I have seen your StreamAudioSource and LockCachingAudioSource classes and implementations. Is it possible to get the decibel level of the currently playing audio with your package? If there are any workarounds, please let me know.
Screen.Recording.2021-10-13.at.11.44.27.mov
This visualizer should allow you to do that if you listen to either the waveform or the fft data and look at the amplitude or magnitude.
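For example, a crude 0.0 to 1.0 level could be derived from each waveform capture along these lines. This is a sketch only: it is an RMS approximation rather than a true decibel conversion, and it assumes the capture exposes its raw bytes via a data field.

import 'dart:math';

// Sketch only: derive a rough 0.0..1.0 level from one 8-bit unsigned PCM
// waveform capture, e.g. to drive a pulsing "talking" animation.
double roughLevel(List<int> waveform) {
  if (waveform.isEmpty) return 0.0;
  var sumOfSquares = 0.0;
  for (final b in waveform) {
    final sample = (b - 128) / 128.0; // zero-centre and normalise
    sumOfSquares += sample * sample;
  }
  return sqrt(sumOfSquares / waveform.length);
}

// Hypothetical usage:
// player.visualizerWaveformStream
//     .map((capture) => roughLevel(capture.data))
//     .listen((level) => print(level));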
onPlay() async {
  _justPlayer.startVisualizer(
      captureRate: 10000,
      captureSize: 1024
  ).then((value) {
    _justPlayer.visualizerWaveformStream.listen((event) {
      print("VISUALISER $event ${event.samplingRate} ${event.data}");
    });
    _justPlayer.visualizerFftStream.listen((event) {
      print('FFTVISUALISER $event ${event.samplingRate} ${event.data}');
    });
  });
  await _justPlayer.play();
}

onPause() async {
  _justPlayer.stopVisualizer();
  await _justPlayer.stop();
}
I am trying to visualize audio information, but the visualizer streams don't share any data. The streams above don't print any values, even though audio is already playing.
The API for starting the visualizer is not currently ideal, but I would suggest looking at the example (in this repo) to see the order of initialisation that works.
This visualizer should allow you to do that if you listen to either the waveform or the fft data and look at the amplitude or magnitude.
Hey @ryanheise, can you give some more details on how to look at the amplitude or magnitude of the data? I have looked at the VisualizerWaveformCapture and VisualizerFftCapture models but can't see these fields. How can I reach these values?
@onatcipli take your time with the example since it shows how to reach this data (and also displays the data).
When I try to play a full mp3 file from the network, the visualizer works well, but when I try to stream fft or waveform data from an audio stream, the event is always null. Do you know how to fix it?
_justPlayer.visualizerWaveformStream.listen((event) {
  print("VISUALISER $event ");
});
_justPlayer.visualizerFftStream.listen((event) {
  print('FFTVISUALISER $event ');
});
Would this issue be reproducible if I modified the official example with your URL? If so, what URL can I plug in?
Would this issue be reproducible if I modified the official example with your URL? If so, what URL can I plug in?
Thanks for the reply. http://radiogi.sabr.com.tr:8001/voice_stream_128
Would this issue be reproducible if I modified the official example with your URL? If so, what URL can I plug in?
Sorry, I forgot to say that the problem occurs only on iOS. On Android, audio-stream visualization is working.
So just to clarify, that's a yes to my first question? Regarding iOS, are you trying @Eittipat 's pull request mentioned 12 comments up?
So just to clarify, that's a yes to my first question? Regarding iOS, are you trying @Eittipat 's pull request mentioned 12 comments up?
1st question: I think it should reproduce if you use this URL.
Yes, I am using @Eittipat 's package:
just_audio:
  git:
    url: https://github.com/Eittipat/just_audio.git
    ref: visualizer
    path: just_audio
@ryanheise @Eittipat Do you know how to fix the problem with the visualizer on iOS devices? Audio stream visualization is not working with this link: http://radiogi.sabr.com.tr:8001/voice_stream_128
I also added NSMicrophoneUsageDescription to Info.plist and request microphone permission when the app opens, but it didn't help; the FFT and waveform streams don't send any data.
Hello @zatovagul,
I will look into this issue this weekend.
@ryanheise It seems "processTap" does not execute when using http://radiogi.sabr.com.tr:8001/voice_stream_128
I don't know much about MTAudioProcessingTapCallbacks, so I leave it to you.
@ryanheise Sorry, how can I use this package with both the visualizer and the equalizer? When I use the latest version, there is no visualizer functionality, but when I try to use Eittipat's visualizer branch, there is no AndroidEqualizer functionality.
I'm sorry, the visualizer branch is a bit behind master. I had intended to merge @Eittipat 's FFT implementation first and then bring it in line with master, however there are still some copyright issues to sort out and that needs to be resolved first.
@Eittipat , would you be willing to merge any conflicts if I brought the visualizer branch up to date?
@ryanheise Yes I would
I've just merged the latest code into the visualizer branch, including the equalizer.
@ryanheise I've updated my pull request (#546). I also solved the copyright issue. ^ ^
@ryanheise I've updated my pull request (#546). I also solved the copyright issue. ^ ^
But the visualizer is still not working with this link: http://radiogi.sabr.com.tr:8001/voice_stream_128
And with this link also: https://broadcast.golos-istini.ru/voice_live_64
Hopefully this weekend I can do some testing.
Sorry, can you help me? I got this issue:
Invalid description in the "just_audio" pubspec on the "just_audio_platform_interface" dependency: "../just_audio_platform_interface" is a relative path, but this isn't a local pubspec.
╷
13 │     path: ../just_audio_platform_interface
   │           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
╵
I solved it locally, but now I am trying to use CI/CD and get this error repeatedly.
My pubspec.yaml:
dependencies:
  just_audio:
    git:
      # url: https://github.com/Eittipat/just_audio.git
      url: https://github.com/ryanheise/just_audio.git
      ref: visualizer
      path: just_audio

dependency_overrides:
  just_audio_platform_interface:
    git:
      # url: https://github.com/Eittipat/just_audio.git
      url: https://github.com/ryanheise/just_audio.git
      ref: visualizer
      path: just_audio_platform_interface
Unfortunately for CI at the moment I think you'll have to fork the repo, make the necessary changes so that dependency overrides aren't necessary, and then use your fork as a dependency. Of course that's not ideal. This branch will still exist as its own branch for quite a while before being considered stable enough to merge into the master branch and publish on pub.dev, but perhaps I should do something similar to what I did with audio_service when working on the year long development of an experimental branch: basically, I'd make it so the pubspec.yaml files in git refer to just_audio_platform_interface via git references rather than relative paths within the repository, and have the alternative path-based dependencies still there and commented out (because the plugin developers still need to work based on those). Anyway, for now though I suggest the fork approach.
I found this https://developer.apple.com/forums/thread/45966
It says "The MTAudioProcessingTap is not available with HTTP live streaming".
That's why @zatovagul got nothing when playing from http://radiogi.sabr.com.tr:8001/voice_stream_128
However, I found some good news but I have not looked into it yet.
https://stackoverflow.com/questions/16833796/avfoundation-audio-processing-using-avplayers-mtaudioprocessingtap-with-remote
The trick is to KVObserve the status of the AVPlayerItem; when it's ready to play
That sounds familiar... I thought I was already doing something like that, where ensureTap is called within observeValueForKeyPath.
Hi all, @Eittipat 's PR is now merged into the visualizer branch, which adds an iOS implementation of FFT. Thanks to @Eittipat 's work this now reaches feature parity between Android and iOS. This has now also been symlinked to macOS, which also appears to work correctly.
For anyone who was already using the FFT visualizer on Android, note that I also just changed the plugin to convert the platform data from Uint8List to Int8List, which is more appropriate for FFT, and added some convenience methods to extract the magnitude and phase out of the raw data. The example has been updated to do this with a new FFT widget demo. If anyone can write a better FFT visualizer, please feel welcome to (e.g. I haven't done any smoothing of the data).
This is still not ready to be merged into the public release. I think some improvements should be made on when the Android implementation prompts the user for permissions, and on the iOS side the TAP code should be reviewed and possibly refactored to allow for future uses of the TAP.
Any idea on when the visualizer will be done?
Any idea on when the visualizer will be done?
Did you try to use it? It works in a lot of situations.
@ryanheise I updated your library and changed my code to use Int8List, but it's still not working. For example, with this link: https://broadcast.golos-istini.ru/voice_64
I will try to fix it in the native code.
I just don't understand how to use it in the iOS native code.
I just don't understand how to use it in the iOS native code.
I've already looked at it. It doesn't work. I think you have to wait for the AVAudioEngine version (which is still in the early stage - #334)
Curious what the current plans are for the visualizer branch. Is it still planned to be merged in or is it now waiting for #784 before further updates?
Hi @spakanati
No, it is not waiting for #784 . Probably going forward there will be both the current AVQueuePlayer-based and the AVAudioEngine-based implementations available, since they may end up supporting different feature sets.
What this branch is waiting on is a finalisation of the API (particularly for requesting permissions and also for starting/stopping the visualizer), and also a code review and perhaps code refactoring on the iOS side to handle the TAP code more cleanly.
I would be a bit nervous about just merging this TAP code until it has been well tested, so I think this branch would remain here as the way for people to experiment with the visualizer until the final code has been tested and I am confident that it will not break anything.
Of course to help speed this up, people are welcome to help on any of the above points, either through code or by contributing thoughts/ideas through discussion.
Thanks for the clarification! I've been able to use the visualizer branch successfully on both iOS and Android with mp3 network files, but I just ran into the HLS issue mentioned above, so that's why I was wondering about the AVAudioEngine-based implementation. It sounds like only the AVAudioEngine implementation will be able to support the visualizer when using HlsAudioSource?
As far as the permissions, I agree that it might be common to want more control over the timing of the request, especially because a microphone record request is a little confusing/jarring for users. This was pretty easy for me to get around, though: I just did my own permission request before ever calling player.startVisualizer, so I was able to show my own messaging. I'd guess a lot of people are already handling permission requests for other parts of their app, so one option could be to remove the permission handling entirely from the visualizer and just list the necessary permissions in the docs.
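For anyone wanting to do the same, a sketch of that flow using the third-party permission_handler package (an assumption on my part; this branch does not depend on it) might look like:

import 'package:just_audio/just_audio.dart';
import 'package:permission_handler/permission_handler.dart';

// Sketch only: request the microphone (RECORD_AUDIO) permission up front with
// your own messaging, and only then start the visualizer.
Future<void> startVisualizerWithOwnPrompt(AudioPlayer player) async {
  final status = await Permission.microphone.request();
  if (!status.isGranted) {
    // Show your own explanation or fallback UI here instead.
    return;
  }
  await player.startVisualizer(
    enableWaveform: true,
    enableFft: true,
    captureRate: 10000,
    captureSize: 1024,
  );
}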
First and foremost, I would like to express my sincere gratitude for the considerable effort and dedication you have devoted to this project alongside the rest of the contributors.
Since this issue is still open, I made a small test to see how it behaves using the /example associated with the branch, and I have identified several issues that I would like to bring to your attention.
- Bug: audio sound disappears
Add the following code (a stop button for testing)
IconButton(
  icon: const Icon(Icons.stop),
  onPressed: () {
    player.stopVisualizer();
  },
),
at line 288 in example_visualizer.dart.
Click stop while audio is playing, then click pause, then click play: it will play without sound.
If that doesn't happen, try a different approach, as sometimes it will not happen; for example, while paused, click stop (not pause) a few times and then try to play again. It will play without sound. Also, strangely, if it plays without sound and you call player.stopVisualizer() by clicking stop, it will play the sound.
- Crash
Changing to the following code:
IconButton(
  icon: const Icon(Icons.stop),
  onPressed: () {
    player.stop();
  },
),
It will crash the app when the song is playing, the visualizer is running, and stop is called.
Changing to the following code:
IconButton(
  icon: const Icon(Icons.stop),
  onPressed: () {
    player.stopVisualizer();
    player.stop();
  },
),
It will crash the app, as stopVisualizer has not finished before stop is called.
Changing to the following code:
IconButton(
  icon: const Icon(Icons.stop),
  onPressed: () async {
    player.stopVisualizer();
    await Future.delayed(const Duration(seconds: 2));
    player.stop();
  },
),
It will work, as stopVisualizer has had time to finish before stop is called.
Crash log (terminal):
"Restarted application in 490ms.
ensureTap tracks:1
3
get visualizerCaptureSize -> 1024
Lost connection to device.
Exited"
Thanks @karrarkazuya , this is exactly the sort of feedback I was hoping for, since this branch is quite experimental and can't be merged until it is sufficiently tested and becomes stable.
Since you didn't mention which platform you were testing on, could you confirm which one that is? I would guess iOS or macOS since the Tap is an Apple concept.
The test was actually made on the iOS simulator, as shown in the terminal log; however, since you mentioned it, I have now also tested on the iOS simulator, an iOS device (iPhone XR), and a real Android device (SD 8 Gen 1).
The results were as follows:
On the Android device:
Audio bug does not exist
Crash does not exist
On the iOS device:
Audio bug does not exist
Crash exists with the same behaviour as mentioned
log
"3
get visualizerCaptureSize -> 1024
- thread #41, name = 'ClientProcessingTapManager', stop reason = EXC_BAD_ACCESS (code=1, address=0xe488a3720)
frame #0: 0x000000019c1b6e5c libobjc.A.dylibobjc_retain + 16 libobjc.A.dylib
objc_retain:
-> 0x19c1b6e5c <+16>: ldr x17, [x17, #0x20]
0x19c1b6e60 <+20>: tbz w17, #0x2, 0x19c1b6e18 ; ___lldb_unnamed_symbol1362
0x19c1b6e64 <+24>: tbz w16, #0x0, 0x19c1b6e40 ; ___lldb_unnamed_symbol1362 + 40
0x19c1b6e68 <+28>: lsr x17, x16, #55
Target 0: (Runner) stopped.
Lost connection to device.
Exited"
On the iOS simulator:
Audio bug exists with the same behaviour as mentioned (even with a different build)
Crash exists with the same behaviour
log
"
Thread 47 Crashed:: AUDeferredRenderer-0x15c4671d0
0 libobjc.A.dylib 0x105ca5454 objc_retain + 16
1 just_audio 0x1059ced04 processTap + 528 (AudioPlayer.m:476)
2 MediaToolbox 0x113cdacd4 aptap_AudioQueueProcessingTapCallback + 216
3 AudioToolbox 0x115c19c54 AQProcessingTap::DoCallout(unsigned int&, AudioTimeStamp&, unsigned int&, AudioBufferList&, std::__1::optionalcaulk::mach::os_workgroup_managed&) + 252
4 AudioToolbox 0x115c19a18 AudioQueueObject::PerformTapInternal(AudioTimeStamp&, unsigned int&, unsigned int&, std::__1::optionalcaulk::mach::os_workgroup_managed&) + 140
5 AudioToolbox 0x115c1a0fc AudioQueueObject::PerformProcessingTap(int ()(void, unsigned int&, AudioTimeStamp const&, unsigned int, AudioBufferList&, double&, unsigned int&), void*, AudioTimeStamp&, unsigned int&, AudioBufferList&, unsigned int&, std::__1::optionalcaulk::mach::os_workgroup_managed&) + 176
6 AudioToolbox 0x115ba6da8 MEMixerChannel::TapDownstream(void*, unsigned int*, AudioTimeStamp const*, unsigned int, unsigned int, AudioBufferList*) + 96
7 libEmbeddedSystemAUs.dylib 0x153714674 ausdk::AUInputElement::PullInput(unsigned int&, AudioTimeStamp const&, unsigned int, unsigned int) + 172
8 libEmbeddedSystemAUs.dylib 0x15367a668 std::__1::__function::__func<AUDeferredRenderer::Producer::Producer(AUDeferredRenderer&, caulk::thread::attributes const&)::$_1, std::__1::allocator<AUDeferredRenderer::Producer::Producer(AUDeferredRenderer&, caulk::thread::attributes const&)::$_1>, void ()>::operator()() + 516
9 caulk 0x1155d0364 caulk::concurrent::details::messenger_servicer::check_dequeue() + 96
10 caulk 0x1155cfe68 caulk::concurrent::details::worker_thread::run() + 48
11 caulk 0x1155cfecc void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::)(), std::__1::tuplecaulk::concurrent::details::worker_thread* > >(void) + 48
12 libsystem_pthread.dylib 0x1b18384e4 _pthread_start + 116
13 libsystem_pthread.dylib 0x1b18336cc thread_start + 8
"
This comment doesn't seem to work for me. I'm still getting an error on pub get:
Resolving dependencies...
Error on line 19, column 11: Invalid description in the "just_audio" pubspec on the "just_audio_platform_interface" dependency: "../just_audio_platform_interface" is a relative path, but this isn't a local pubspec.
╷
19 │     path: ../just_audio_platform_interface
   │           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
╵
pub get failed
This is in my pubspec.yaml:
name: cgr_player
description: A new Flutter project.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
version: 1.0.1+1

environment:
  sdk: '>=2.12.0 <3.0.0'

dependencies:
  flutter:
    sdk: flutter
  audio_session: ^0.1.14
  # just_audio: ^0.9.36
  just_audio:
    git:
      url: https://github.com/ryanheise/just_audio.git
      ref: visualizer
      path: just_audio
  # just_audio_background: ^0.0.1-beta.11
  just_audio_background:
    git:
      url: https://github.com/ryanheise/just_audio.git
      ref: visualizer
      path: just_audio_background
  cupertino_icons: ^1.0.2

dependency_overrides:
  just_audio_platform_interface:
    git:
      url: https://github.com/ryanheise/just_audio.git
      ref: visualizer
      path: just_audio_platform_interface

dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_lints: ^2.0.0

flutter:
  uses-material-design: true
Can somebody help me set up this branch? Thanks!
I think this is because on the visualizer branch, the pubspec of just_audio is using a local reference to the aforementioned package:
https://github.com/ryanheise/just_audio/blob/visualizer/just_audio/pubspec.yaml#L18
It should probably be using a dependency_override instead.
I am not sure what the best way to proceed is until this gets addressed in some way: either clone this repo locally instead of using a git URL, or fork this repo and fix the pubspec files, I think.
That's correct, there is a chicken and egg problem with developing plugins within the federated plugin architecture that is quite inconvenient to deal with. As long as this branch is in development and hasn't been published, it will continue to be inconvenient. Running a local dependency definitely works, that's obviously what I do, as a plugin developer.
I should probably bump up the priority of this branch so that it gets released. In order to do that, I need to look at two things:
- A good way to manage requesting permissions. (Suggestions welcome)
- Need to review the TAP code on iOS so that it will play nicely with other potential features that use the TAP.
I am just getting started in Dart, but couldn't you specify the local path inside a dependency_override instead, so that it doesn't affect projects using this package as a dependency?
You could try it and if you find something that would be lower maintenance, you would be welcome to make a pull request.
Makes sense!
Btw one thing I came across, I'm not sure how relevant it is or if it should be mentioned in the docs anywhere perhaps:
You need the RECORD_AUDIO permission on Android even if you're analyzing audio files (i.e. not using the microphone at all). Otherwise, you will not get any analysis data.
That is true, and the example shows this, but I haven't written the documentation yet, pending finalisation of how the permission API will actually work. I think rather than it being initiated by startVisualizer, there should be a separate API, and perhaps even a separate plugin would be more appropriate. I welcome feedback on which of these options is preferred.
In the latest commit, permission handling is separated from the plugin, so your app can start the permission request flow at a suitable time before starting the visualizer. I've updated the example and the docs.
The remaining issue before merging is to review the TAP code mentioned earlier. Extra eyes on it are welcome. E.g.
- Is the code that enables/disables the TAP processor correct?
- Is the code as written suitable to allow for future uses of the TAP such as audio panning?
Unfortunately I have no idea about the TAP processor, but we just ran into an issue when trying to use this branch with background audio on Android. We're getting an error:
E/flutter (21878): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: UnimplementedError: visualizerWaveformStream has not been implemented.
E/flutter (21878): #0 AudioPlayerPlatform.visualizerWaveformStream (package:just_audio_platform_interface/just_audio_platform_interface.dart:82:5)
E/flutter (21878): #1 AudioPlayer._setPlatformActive.subscribeToEvents (package:just_audio/just_audio.dart:1406:20)
E/flutter (21878): #2 AudioPlayer._setPlatformActive.setPlatform (package:just_audio/just_audio.dart:1526:7)
I don't quite understand where those are even supposed to be implemented, so any tips on how to approach this would be welcome. I will also try simply not subscribing to those events in subscribeToEvents.
I tried to implement the missing methods. This is as far as I got:
I'm not sure if this is correct or if there's anything missing. I haven't actually tested this properly, as we are actually moving away from using the visualizer and doing offline pre-processing to generate spectral analysis of our audio files instead.
I don't quite understand where those are even supposed to be implemented, so any tips on how to approach this would be welcome. I will also try simply not subscribing to those events in subscribeToEvents.
Are you using just_audio_background? If so, you're getting the error because just_audio_background hasn't implemented that part of the platform interface. If you look inside that plugin's code, you'll see it already implements two of the other event streams, so the implementation of this new event stream would be like that:
class _JustAudioPlayer extends AudioPlayerPlatform {
  final eventController = StreamController<PlaybackEventMessage>.broadcast();
  final playerDataController = StreamController<PlayerDataMessage>.broadcast();
  ...

  @override
  Stream<PlaybackEventMessage> get playbackEventMessageStream =>
      eventController.stream;

  @override
  Stream<PlayerDataMessage> get playerDataMessageStream =>
      playerDataController.stream;
  ...
}
The implementation should provide all the visualizer events to the main plugin via this third stream, which should be overridden.
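A hedged sketch of what that third override might look like, mirroring the pattern above (the message type name is a guess on my part, and the events would still need to be piped in from the underlying platform implementation):

// Sketch only: mirror the pattern above for the visualizer stream.
final visualizerWaveformController =
    StreamController<VisualizerWaveformCaptureMessage>.broadcast();

@override
Stream<VisualizerWaveformCaptureMessage> get visualizerWaveformStream =>
    visualizerWaveformController.stream;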
The permission handling change has been working well for me. Is there a recommended way to use this branch in a project that also targets web (or a path for web in general if the branch is hopefully close to merging)? I understand the visualizer isn't implemented yet for web, but all playback breaks on web because of calls to unimplemented visualizerWaveformStream even if startVisualizer is never used.