Can it run in webasm?
mcclure opened this issue · 12 comments
I would like to write Rust programs that can compile to either native desktop or to be embedded in a web page.
CPAL has a WebAssembly backend, but the web-audio-api crate is appealing because it would potentially allow my Rust code to use the full feature set of WebAudio. I can imagine a world where, when compiled for the wasm platform, web-audio-api devolves into a set of bindings for WebAudio. That would be very useful (especially for apps that have both desktop and web builds). I assume this is not currently the case, since I find no mention of it in the documentation and the README mentions spec divergences (although not major ones*). Has this ever been considered?
If it already works, it should be more clearly documented.
If it doesn't work, I think it should be considered.
(I see the README already documents a set of bindings in the other direction, to allow use of WebAudio from Node.js. This implies the implementation is close enough that it could map directly onto WebAudio via wasm-bindgen, or with a small amount of glue code.)
Hey,
Thanks for the feedback. I'm personally not really sure how to answer that complex question; maybe @orottier will have a different insight...
On my side, I didn't try and I'm not really sure this is a good idea. The facts are (IMO): 1. audio processing requires a dedicated (high priority) thread, i.e. the kind of thread you will never be allowed to access with wasm-bindgen as I understand it; 2. from what I know, the only way to access such a thread within browsers is to use an AudioWorklet node. Therefore it seems that you would end up doing something like: build some Rust code to WASM, run it inside an AudioWorkletNode, and add some glue code, to finally make it work and look like the "native" Web Audio API... which looks quite convoluted, since you could just use the native API of the browser...
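To make that route concrete, the Rust side could look roughly like the sketch below: a DSP struct exported with wasm-bindgen, which a hand-written AudioWorkletProcessor on the JS side would call once per 128-frame render quantum. The `Dsp` type and its methods are purely illustrative, not an existing API.

```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct Dsp {
    phase: f32,
    sample_rate: f32,
}

#[wasm_bindgen]
impl Dsp {
    #[wasm_bindgen(constructor)]
    pub fn new(sample_rate: f32) -> Dsp {
        Dsp { phase: 0.0, sample_rate }
    }

    /// Fill `out` (one 128-frame render quantum) with a 440 Hz sine;
    /// the JS AudioWorkletProcessor would copy this into its output channel.
    pub fn render(&mut self, out: &mut [f32]) {
        for sample in out.iter_mut() {
            *sample = (self.phase * 2.0 * std::f32::consts::PI).sin();
            self.phase = (self.phase + 440.0 / self.sample_rate).fract();
        }
    }
}
```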
However, as I understand it, what you would like to achieve is to write some Rust code that could run in the browser given "some transformation" (let me know if I'm mistaken here). I think the best approach right now would be to "transpile" - rather than "compile" - the Rust code into JS and let the JS runtime (and Web Audio implementation) do their job.
I personally did that kind of transpiling manually to port the Rust examples to JS for the Node.js wrapper, and I'm quite confident it could be automated to a great extent with simple regexes. I could try to make a POC gist if needed.
Not really sure it answers your question, but in any case that's an interesting discussion, let us know!
Hi @mcclure,
This library is currently not suitable for a WebAssembly target. As @b-ma outlined, we use dedicated threads for various purposes, which won't play nicely in the browser's WebAssembly runtime. And indeed you would have to glue the produced samples back to the browser's true AudioContext, which probably involves a lot of overhead.
Something that may interest you is this example https://github.com/jakkosdev/sonars/blob/master/sonars/src/sound.rs where the code is slightly duplicated in order to run against either the web bindings or the native target.
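The gist of that file is compile-time branching between the two backends behind a single facade. Here is a minimal sketch of the pattern (the `beep` facade is hypothetical, and it assumes the relevant web_sys cargo features such as "AudioContext", "OscillatorNode", "AudioDestinationNode", and "AudioNode" are enabled for the wasm build):

```rust
#[cfg(not(target_arch = "wasm32"))]
pub fn beep() {
    use web_audio_api::context::{AudioContext, BaseAudioContext};
    use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

    // native: infallible API, runs its own audio thread
    let context = AudioContext::default();
    let mut osc = context.create_oscillator();
    osc.connect(&context.destination());
    osc.start();
    // note: in real code the context must be kept alive to keep playing
}

#[cfg(target_arch = "wasm32")]
pub fn beep() {
    // web: every web_sys call is fallible and returns a Result
    let context = web_sys::AudioContext::new().unwrap();
    let osc = context.create_oscillator().unwrap();
    osc.connect_with_audio_node(&context.destination()).unwrap();
    osc.start().unwrap();
}
```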
I could even envision a crate that bridges the gap between the subtle API differences of web_sys versus our library. But this is not on my short term roadmap!
Thanks for the explanations.
I think sound.rs probably gives me what I want. It's not so much that I need this library to support webasm, as I want to target one interface and get the same behavior from that interface on web and desktop.
> I could even envision a crate that bridges the gap between the subtle API differences of web_sys versus our library. But this is not on my short term roadmap!
Say I'm interested in creating this (I have a project in the next few months for which it may be useful). If I make an attempt, would this issue be an appropriate place to ask followup questions about the API differences?
> Say I'm interested in creating this (I have a project in the next few months for which it may be useful). If I make an attempt, would this issue be an appropriate place to ask followup questions about the API differences?
Definitely! I will leave this issue open, feel free to ping us anytime
I am so confused. I created a project whose main purpose is to work on the web, but it's misleading to call this crate web-audio; I was very convinced it had web-sys bindings underneath?
What's the recommended approach? Should the user create web-sys binding clones for all of the method calls?
Also, when you said wasm-bindgen would not work well, I don't understand why. Do you refer to the web-sys bindings?
(OK, I just saw the sound.rs file.) But man, that's sad; that complexity should live inside the methods rather than making the user branch on top of each method.
I was super enthusiastic, and you did a great job on the desktop side, so I got my hopes up. The web-sys API is also slightly different (it returns Result and requires Options as arguments, while this crate doesn't), which makes single-branch code impossible.
> I am so confused. I created a project whose main purpose is to work on the web, but it's misleading to call this crate web-audio; I was very convinced it had web-sys bindings underneath?
I guess it is indeed misleading when you fail to read the basic description of this crate.
> What's the recommended approach? Should the user create web-sys binding clones for all of the method calls?
If you only intend to run on the web, you won't need this crate. You can use https://rustwasm.github.io/wasm-bindgen/examples/web-audio.html
> Also, when you said wasm-bindgen would not work well, I don't understand why. Do you refer to the web-sys bindings?

> (OK, I just saw the sound.rs file.) But man, that's sad; that complexity should live inside the methods rather than making the user branch on top of each method.

> I was super enthusiastic, and you did a great job on the desktop side, so I got my hopes up. The web-sys API is also slightly different (it returns Result and requires Options as arguments, while this crate doesn't), which makes single-branch code impossible.
As mentioned earlier, we would welcome someone working on a crate that bridges these differences.
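For anyone who picks this up, such a bridge would presumably be a thin facade per node type, hiding the Result-vs-infallible and Options differences behind one signature. A hypothetical sketch (UnifiedGain is not an existing crate, just an illustration of the idea):

```rust
// One facade type per node, with the backend chosen at compile time.
pub struct UnifiedGain {
    #[cfg(not(target_arch = "wasm32"))]
    inner: web_audio_api::node::GainNode,
    #[cfg(target_arch = "wasm32")]
    inner: web_sys::GainNode,
}

impl UnifiedGain {
    #[cfg(not(target_arch = "wasm32"))]
    pub fn new(ctx: &web_audio_api::context::AudioContext) -> Self {
        use web_audio_api::context::BaseAudioContext;
        // infallible on the native side
        Self { inner: ctx.create_gain() }
    }

    #[cfg(target_arch = "wasm32")]
    pub fn new(ctx: &web_sys::AudioContext) -> Self {
        // fallible on the web side: web_sys returns a Result
        Self { inner: ctx.create_gain().expect("create_gain failed") }
    }

    /// Identical surface on both targets.
    pub fn set_gain(&self, value: f32) {
        self.inner.gain().set_value(value);
    }
}
```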
I intend to have cross platform audio.
I have been building my own wrappers over PannerNode/GainNode (for basic spatial audio), and I have some more feedback:
I would like to see a crate merge the differences, because my wrapper over this narrow functionality is 600 lines (after adding web-sys support) when it should be 350; so it is possible, but not scalable, for the user to do more interesting things with nodes this way.
I might not be the best person to implement the wrappers over web-sys and web-audio-api, since I am currently just using and testing a subset of the features, but I was pondering how to do it, and while the API names are equivalent (nice!), I hit some challenges in practice:
- The nodes in web-audio-api are not cloneable (though it looks like they could be?)
- There are mutable methods in web-audio-api, such as set_loop(&mut self, value), while the web-sys node methods are not mutable.
(I am currently wrapping every node in an Arc<Mutex<...>>, which is kind of clunky; see the sketch below.)
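A minimal sketch of that workaround against the native crate (`SharedSource` is illustrative, not a real type):

```rust
use std::sync::{Arc, Mutex};
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::AudioBufferSourceNode;

// The shared handle is Clone, so several owners can reach the &mut self methods.
#[derive(Clone)]
pub struct SharedSource(Arc<Mutex<AudioBufferSourceNode>>);

impl SharedSource {
    pub fn new(ctx: &AudioContext) -> Self {
        SharedSource(Arc::new(Mutex::new(ctx.create_buffer_source())))
    }

    // set_loop takes &mut self on the node, so we go through the Mutex
    pub fn set_loop(&self, value: bool) {
        self.0.lock().unwrap().set_loop(value);
    }
}
```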
Those differences make it more restrictive to merge the two from the outside (and also create a doubled layer that needs maintenance).
It would be really nice to have this crate work from the inside toward becoming a cross platform solution, up to the standard of wgpu/winit. Good luck!
I have implemented an engine in TypeScript and WebAudio, and I'm thinking of re-implementing it with your library so I could also use it as native code.
As I'm reading this issue, I'm not sure if the problem is exposing all the nodes one by one back to WebAudio, or with WebAudio in general.
To ask a more specific question: if I build a lib with web-audio-api-rs and want to expose the whole lib as a single AudioWorklet, is that doable in a sane and easy way?
Thanks for bringing this up again. I took a bit of time and managed to get it to work in WASM - here be dragons though!
Clone this repo for the demo: https://github.com/orottier/wasm-web-audio-rs
It should work out of the box, and uses the branch https://github.com/orottier/web-audio-api-rs/tree/feature/wasm where I butchered some of the features (timings, event loop, deallocator thread) to make it work in WASM.
It needs a lot of work, mainly on performance, binary size and bringing back the killed features. But I'm interested to hear how it works for you.
> To ask a more specific question: if I build a lib with web-audio-api-rs and want to expose the whole lib as a single AudioWorklet, is that doable in a sane and easy way?
This won't be possible right away. The online AudioContext presumes it must emit its own audio constantly, and an OfflineAudioContext cannot be used to deliver blocks of audio while maintaining the full audio graph state. You'll have to Frankenstein the inner workings of our Rust lib and convert that into a wasm web audio module; it won't be simple.
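To make the offline limitation concrete: rendering an OfflineAudioContext is one-shot, producing a single buffer, so there is no hook for handing render quanta back to an AudioWorklet while keeping the graph alive. A minimal sketch against this crate (method signatures may differ between versions):

```rust
use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

fn main() {
    // 2 channels, 1 second at 44.1 kHz
    let mut ctx = OfflineAudioContext::new(2, 44_100, 44_100.);
    let mut osc = ctx.create_oscillator();
    osc.connect(&ctx.destination());
    osc.start();
    // renders the whole graph at once; afterwards the context is done
    let buffer = ctx.start_rendering_sync();
    assert_eq!(buffer.length(), 44_100);
}
```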
@orottier Thanks for your fast response, I'll try a small prototype to see if this could work for me.
Thanks. Be sure to track the progress of https://github.com/orottier/wasm-web-audio-rs and https://github.com/orottier/web-audio-api-rs/tree/feature/wasm because I have just applied some patches to make the binary size smaller. Run cargo update to fetch the changes.
> a crate that bridges the gap between the subtle API differences of web_sys versus our library
I have been using a small subset of the Web Audio API for making games.
I ended up making a crate that bridges the gap this way, but it only implements the features I needed.
It's not really documented or anything, but in case anyone is interested: https://github.com/geng-engine/web-audio-api