Building out the Accelerate / BNNS backend
pgarz opened this issue · 1 comments
Hi, thank you so much for making this project! I've also been loving your papers on it. This is all great progress on things I was hoping would happen. I'm also a Stanford grad, and I focused on AI during my time there.
I've been using the GuitarML project's plug-in setup to train my own models, which I can then load into the plug-in and use as a full-on tool in Logic. So far I've had great results with distortion effects! However, I'm hoping to experiment more architecturally to improve on complex, time-dependent effects. It would be super cool to build out some sort of real-time Transformer layer, or to experiment with other ideas like VQ-VAEs, WaveNet-like architectures, diffusion models, Fourier-transform tricks, etc.
From some detective work, it seems Apple is trying to extend Pytorch to compile into Accelerate/BNNS for the Apple platforms: https://jobs.apple.com/en-us/details/200265506/accelerating-pytorch-on-macs-with-bnns
With that in mind, it seems like a good idea to build out the Accelerate backend to reap the benefits of building models in PyTorch. For now, I also think Accelerate / BNNS might be the easiest for me to extend, since BNNS already provides useful functional layers.
However, from my digging, the docs seem to be in either Swift or C++: https://developer.apple.com/documentation/accelerate/bnns. How did you get the project set up to use Accelerate from a C++ interface? Are all of Apple's Swift libraries available through a C++ interface in Xcode? Or could I maybe get away with using the easier-to-read Swift interface? Any other tips on getting started with building out the Accelerate backend would be much appreciated!
Thanks, glad you're enjoying the project! Extending the library to handle more complex networks like the ones you mentioned is definitely one of my goals for the project.
I haven't used BNNS directly, though it definitely seems pretty powerful. Instead, the Accelerate backend directly calls methods from the vDSP library. Even though the documentation is technically in Swift/Objective-C, it can usually be translated to C++ without too much trouble. For example, looking at the page on vDSP_vadd, `UnsafePointer<Float>` would be translated to `const float *` in C++, and so on from there.
As far as "setting up" the project to link with Accelerate, it's all handled in the CMake configuration, with the key line being `target_link_libraries(RTNeural PUBLIC "-framework Accelerate")`.
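For context, a minimal CMake setup along those lines might look like the following (the `my_dsp_lib` target and source file names are hypothetical, not RTNeural's actual build files):

```cmake
# Minimal sketch: linking a C++ target against Apple's Accelerate framework.
cmake_minimum_required(VERSION 3.15)
project(MyDSPProject CXX)

# "my_dsp_lib" / "dsp.cpp" are placeholder names for illustration
add_library(my_dsp_lib STATIC dsp.cpp)

if(APPLE)
    # The key line: pulls in vDSP, BNNS, and the rest of Accelerate
    target_link_libraries(my_dsp_lib PUBLIC "-framework Accelerate")
endif()
```

Guarding the link with `if(APPLE)` keeps the same CMakeLists.txt usable on non-Apple platforms, where the framework doesn't exist.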
At the moment, the Accelerate backend is not supported in the RTNeural compile-time API, so maybe trying to get that working could be a cool place to start. When I was putting together the compile-time API, I found that supporting the other 3 backends was quite a lot of work as it was, plus I wasn't using the Accelerate backend as much since most of my projects have pretty strict cross-platform constraints.
Anyway, definitely curious about any specific things you'd like to work on, and I'd be happy to help out wherever I can!