# Triton Adsbrain Backend
The Triton backend for Adsbrain users to run their C++ based models on the
Triton server. It assumes that:

1. The triton-adsbrain server is used with `AB_IN_OUT_RAW=1` and
   `AB_REQUEST_TYPE=ADSBRAIN_BOND`.
2. The model takes one string input; the input name in `config.pbtxt` is
   assumed to be `raw_input`, with shape `[1]`.
3. The model returns only one string output, with shape `[1]`; the output
   name in `config.pbtxt` is not used.
4. The input and output data are in `raw_format`, which means no metadata is
   required for the input or output.

This backend targets the Triton server `r22.05_ab`.
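To make this contract concrete, here is a minimal `config.pbtxt` sketch. Only
`raw_input` and the `[1]` shapes are dictated by the assumptions above; the
model name, the output name, the backend name (taken from this README's
"adsbrain backend" wording), and the `model_lib` parameter key are
illustrative assumptions, so check the backend source for the keys it
actually reads.

```
name: "my_adsbrain_model"   # illustrative name
backend: "adsbrain"         # per this README's "adsbrain backend" wording
max_batch_size: 0

input [
  {
    name: "raw_input"       # the input name this backend expects
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

output [
  {
    name: "raw_output"      # name is not used by this backend; any name works
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

# Hypothetical parameter: the key "model_lib" is assumed, not confirmed by
# this README; use whatever key the backend reads for the shared library name.
parameters {
  key: "model_lib"
  value: { string_value: "libmy_model.so" }
}
```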
## How to build
- Create an ABO docker container.
- Run `./install_deps.sh` to install the dependent libraries and tools.
- Run `./build.sh` to build the project.
## How to implement the model for the adsbrain backend
- Compile the adsbrain backend and copy `adsbrain_backend.h` and
  `libtriton_backend.so` to your project.
- Derive from `class AdsbrainInferenceModel` to implement the model-specific
  logic.
- Implement the C API `CreateInferenceModel(...)` to create the model
  instance (a hedged sketch of this and the previous step follows this list).
- Compile the C++ model inference code into a shared library and put it,
  together with all its dependent shared libraries, into the model serving
  directory.
- Update `config.pbtxt` to use the adsbrain backend and specify the shared
  library name and the required parameters (see the `config.pbtxt` sketch in
  the overview above).
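The following C++ sketch shows the shape of the derive-and-export steps. Only
the class name `AdsbrainInferenceModel` and the factory name
`CreateInferenceModel(...)` come from this README; the method names,
signatures, and the parameter map below are assumptions for illustration, so
match them against the actual declarations in `adsbrain_backend.h`.

```cpp
// Sketch only: the real virtual methods and factory signature are declared in
// adsbrain_backend.h; the hooks used here (Initialize, Inference) are assumed.
#include <string>
#include <unordered_map>

#include "adsbrain_backend.h"

class EchoModel : public AdsbrainInferenceModel {
 public:
  // Assumed hook for consuming the parameters from config.pbtxt.
  void Initialize(
      const std::unordered_map<std::string, std::string>& parameters) override {
    // Read model-specific settings (paths, thresholds, ...) here.
  }

  // Assumed hook implementing the raw contract described above: one raw
  // string in, one raw string out, no metadata (AB_IN_OUT_RAW=1).
  std::string Inference(const std::string& raw_input) override {
    return "echo: " + raw_input;  // replace with the real model logic
  }
};

// Exported factory the backend resolves from the shared library named in
// config.pbtxt; the exact signature is an assumption.
extern "C" AdsbrainInferenceModel* CreateInferenceModel() {
  return new EchoModel();
}
```

Compiled into, e.g., `libmy_model.so`, this is the library the `config.pbtxt`
parameter above would point at.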