sdcb/PaddleSharp

Onnx configuration.

Opened this issue · 1 comment

grinay commented

Hello @sdcb. I wanted to inquire about something. I'm in the process of transitioning from GPU to CPU, and for this, I've built a custom version of Paddle tailored for CPU usage, incorporating ONNX runtime. Here are the cmake configuration flags I used:

cmake \
    -DWITH_GPU=OFF \
    -DWITH_MKL=ON \
    -DWITH_MKLDNN=ON \
    -DWITH_ONNXRUNTIME=ON \
    -DWITH_AVX=ON \
    -DWITH_PYTHON=OFF \
    -DWITH_TESTING=OFF \
    -DWITH_ARM=OFF \
    -DWITH_NCCL=OFF \
    -DWITH_RCCL=OFF \
    -DON_INFER=ON \
    ..

Following this, I set things up as follows:

var recognitionModel = LocalRecognizationModel.EnglishV3; // bundled English v3 recognition model
var config = PaddleDevice.Onnx();                         // device delegate enabling the ONNX Runtime backend
var predictor = recognitionModel.CreateConfig().Apply(config).CreatePredictor();

I'm not sure whether this is the correct way to configure ONNX. It appears to work, but I can't tell whether inference is actually running through ONNX Runtime. I didn't perform any explicit model conversion, and I couldn't find any ONNX files on the machine where this code runs. Could you offer some guidance? Should I convert the model explicitly and then instantiate the recognition model from that ONNX file, or is the code above sufficient, with Paddle handling the rest?
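(For reference, here is a minimal end-to-end sketch of how such a predictor is typically consumed through PaddleOcrRecognizer rather than driven by hand. This is an illustration, not code from this thread: it assumes the Sdcb.PaddleOCR and OpenCvSharp packages, and constructor overloads and result types may differ between PaddleSharp versions; the image path is made up.)

using System;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Local;

// The recognizer builds and applies the config internally, roughly
// equivalent to CreateConfig().Apply(PaddleDevice.Onnx()).CreatePredictor().
using PaddleOcrRecognizer recognizer = new(
    LocalRecognizationModel.EnglishV3, PaddleDevice.Onnx());

// Run recognition on a pre-cropped text-line image.
using Mat src = Cv2.ImRead("text-line.png");
PaddleOcrRecognizerResult result = recognizer.Run(src);
Console.WriteLine($"{result.Text} (score: {result.Score})");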

Additional info:
I found this in the logs:
[log screenshot showing the model being converted]
It seems the model is being converted automatically.
But where can I find this model? Is there any way to set a caching folder for it? I'm running everything in Lambda and don't want to spend time on the conversion on every Lambda invocation; I'd rather store the converted model in a cache. Could you advise?

sdcb commented

The conversion happens in memory, so there is no model file on disk for you to find.
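(Not from this thread, but one common mitigation for the Lambda cold-start concern: since the conversion happens once per process rather than once per call, creating the predictor in a static initializer means only cold starts pay the conversion cost, and warm invocations reuse it. A sketch, assuming the Amazon.Lambda.Core, Sdcb.PaddleOCR, and OpenCvSharp packages; the Function class, FunctionHandler signature, and stream input are hypothetical.)

using System.IO;
using Amazon.Lambda.Core;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models.Local;

public class Function
{
    // Created once per Lambda execution environment: the in-memory
    // Paddle-to-ONNX conversion runs only on cold start.
    private static readonly PaddleOcrRecognizer Recognizer =
        new(LocalRecognizationModel.EnglishV3, PaddleDevice.Onnx());

    // Warm invocations reuse the already-initialized recognizer.
    public string FunctionHandler(Stream imageStream, ILambdaContext context)
    {
        using Mat src = Mat.FromStream(imageStream, ImreadModes.Color);
        return Recognizer.Run(src).Text;
    }
}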