ONNC/onnc

NVDLA compatible operators

Opened this issue · 2 comments

Hello,

Looking at the code for the NVDLA backend, I noticed that "only" the following operators are registered in the `RegisterLowers` function:
Relu, Sum, MaxPool, AveragePool, LRN, Reshape, Softmax, Concat
Why are others, such as Add or BatchNormalization, which are in the Vanilla backend, not there for NVDLA?
Is the code emitter for them just not ready, or is it a compatibility problem with the NVDLA architecture itself?
If I manually add the missing operators, the compiler actually runs fine with a model using a previously unsupported operator. Is it just generating an unusable loadable?

If you add a missing operator by yourself, you need to extend the codeEmit pass to support the new operator and test it on the NVDLA virtual platform. Each backend may have its own runtime; in the NVDLA case, its runtime implementation is in the UMD/KMD library. Even if you generate a loadable, you still need to test it in the VP to make sure it's a valid loadable.

Thanks for your reply.
I don't know if you can answer this or not, but I may as well ask:
how did you write the existing CodeEmitVisitor::visit functions for NVDLA, since the NVIDIA compiler and the loadable format aren't open source? Did you reverse engineer the compiler's behavior and test on the virtual platform? Or did I miss some NVIDIA docs about the loadable format and the operator implementations in the NVDLA hardware?