Are custom operators supported?
Closed this issue · 4 comments
Are operators that are not built into TVM supported? If so, which function does the work of generating the computes and schedules? How about a custom operator?
If you want to support a custom operator that is not built into TVM, you can refer to
3rdparty/blazerml-tvm/python/tvm/relay/frontend/onnx.py
TPAT calls the from_onnx interface of TVM, and the CUDA source code is generated by relay.build in TVM.
If you are interested in this, you can start reading from this function:
python/cuda_kernel.py -> function: CudaKernel::run -> "relay.build"
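The flow described above (from_onnx followed by relay.build with a CUDA target) can be sketched roughly as below. This is a minimal illustration, not TPAT's actual code: the model path and input shape are hypothetical, and the exact way to retrieve the generated source varies between TVM versions.

```python
# Sketch: ONNX -> Relay -> CUDA source, the pipeline TPAT drives internally.
# Assumes TVM built with CUDA support and the onnx package are installed.
import onnx
import tvm
from tvm import relay

# Load an ONNX model containing the operator of interest (path is an example).
onnx_model = onnx.load("my_op.onnx")

# Convert the ONNX graph to a Relay module; shape_dict maps input names to shapes.
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)}
)

# relay.build compiles the module; with target="cuda" this is where the
# CUDA source code gets generated.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda", params=params)

# In many TVM versions the generated CUDA kernel source can be inspected
# from the imported device module of the compiled library.
cuda_source = lib.get_lib().imported_modules[0].get_source()
print(cuda_source)
```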
That is to say, TPAT can't automatically generate the computes and schedules for operators that are not built into TVM. Is that in the future plan?
Yes, we plan to support all ONNX operators in the future, but not soon.
Thanks, I see.