What is the capability scope of this driver?
Joel-De opened this issue · 8 comments
Question not an Issue
Got the installation working on Linux, which is great, but I was wondering: what is the scope of what the driver will allow you to do? It seems that most of the software in the Ryzen AI SW repository is strictly for Windows (hard dependencies on .dll files, presence of .bat files, etc.). Is there any plan to port some of those tutorials to work with the Linux driver, or any way to get them working at the moment? Ideally it'd be nice to have a quick way to test executing a model via the ONNX runtime from a Python script, similar to what is available for Windows at the moment. Any pointers to places where I can read more about the capabilities of this driver would be greatly appreciated too!
The current Ryzen AI SW release supports Windows only. Linux support is under development. The examples and tutorials with ONNX Runtime will be ported to Linux. Stay tuned.
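In the meantime, for anyone looking for the quick Python test mentioned above, here is a minimal sketch of the ONNX Runtime pattern the Windows Ryzen AI examples use, with a CPU fallback. The provider name `VitisAIExecutionProvider`, the `vaip_config.json` provider option, and the `model.onnx` path are taken from (or assumed to match) the Windows flow; whether the Linux port keeps the same names is an assumption.

```python
# Minimal sketch: run an ONNX model through ONNX Runtime, preferring the
# Vitis AI execution provider and falling back to the CPU provider.
# "VitisAIExecutionProvider" and the vaip_config.json option mirror the
# Windows Ryzen AI examples; treat them as assumptions for Linux.
import numpy as np
import onnxruntime as ort

model_path = "model.onnx"  # placeholder model path

providers = ort.get_available_providers()
print("Available providers:", providers)

if "VitisAIExecutionProvider" in providers:
    session = ort.InferenceSession(
        model_path,
        providers=["VitisAIExecutionProvider"],
        provider_options=[{"config_file": "vaip_config.json"}],
    )
else:
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Feed a dummy float32 input matching the model's first input shape
# (dynamic dimensions are resolved to 1 for this smoke test).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: x})
print([o.shape for o in outputs])
```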
Thanks. I currently have some kernels written for AIEs (made using the Vitis toolchain for the VCK190). Is there a tutorial for getting that code/kernel running on the NPU on Linux? Is that possible at the moment, or will we need to wait for a future release?
Please wait for a future release with Linux support.
@Joel-De you could look at https://github.com/Xilinx/mlir-air and https://github.com/Xilinx/mlir-aie
Yeah, I found this a couple of days ago; it's pretty much exactly what I was looking for. It seems to be relatively stable with the Linux driver as well, which is nice 😄
There is also https://discord.gg/URq8HjVq, which you might be interested in.
Does Ryzen AI use the same AI Engine tiles as the Versal FPGAs do? It doesn't have to be perfect 1:1 compatibility. llama.cpp requires custom kernels to dequantize parameters right before the matrix-matrix and matrix-vector multiplications. Xilinx has extensive documentation on how to write custom AI Engine kernels for their FPGAs, and I am hoping that at least some of that documentation will be relevant for Ryzen AI.
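For context on the dequantization point, here is a rough numpy sketch of what such a custom kernel has to do per weight block before the matmul. The block layout (32 weights per block, one fp16 scale, 4-bit values offset by 8) follows llama.cpp's Q4_0 format as I understand it; treat the exact layout as an assumption.

```python
# Rough sketch of the dequantize-then-matmul step a custom kernel would
# perform. Block layout assumed: 32 weights per block, one fp16 scale,
# 4-bit quantized values offset by 8 (Q4_0-style; an assumption here).
import numpy as np

BLOCK = 32

def dequantize_q4(quants: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """quants: (n_blocks, BLOCK) uint8 nibbles in [0, 15]; scales: (n_blocks,) float16."""
    return (quants.astype(np.float32) - 8.0) * scales.astype(np.float32)[:, None]

# Fake quantized weight matrix of shape (rows, cols), cols divisible by BLOCK.
rows, cols = 4, 64
rng = np.random.default_rng(0)
n_blocks = rows * cols // BLOCK
quants = rng.integers(0, 16, size=(n_blocks, BLOCK), dtype=np.uint8)
scales = rng.random(n_blocks).astype(np.float16)

W = dequantize_q4(quants, scales).reshape(rows, cols)  # dequantized weights
x = rng.random(cols).astype(np.float32)                # activation vector
y = W @ x                                              # matrix-vector product
print(y)
```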
> Does Ryzen AI use the same AI Engine tiles as the Versal FPGAs do? It doesn't have to be perfect 1:1 compatibility.
@Vorlent Yes. The current Ryzen AI NPU uses a 5x4 array of AIE-ML (aka AIE2) tiles:
- AIE2/AIE-ML architecture https://docs.xilinx.com/r/en-US/am020-versal-aie-ml/Overview
- AIE2/AIE-ML C++ API https://www.xilinx.com/htmldocs/xilinx2023_2/aiengine_ml_intrinsics/intrinsics/
I do not think the usual Vitis tools can be used publicly to program Ryzen AI yet,
but there are some open-source projects related to AIE that are useful for programming Ryzen AI, such as: