Model Not Found Issue With Custom Model
hariravi opened this issue · 3 comments
Hi All,
When using a custom model, I get the "Can't find the model file!" error. I am using LibTorch 1.4.0, and I can see that my file model.pt has the appropriate target membership. Based on my initial googling/research, the issue might be with my actual PyTorch model (a ResNet18). Thank you very much for your help.
private lazy var module: TorchModule = {
    if let filePath = Bundle.main.path(forResource: "model", ofType: "pt"),
       let module = TorchModule(fileAtPath: filePath) {
        return module
    } else {
        fatalError("Can't find the model file!")
    }
}()
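For what it's worth, a quick way to rule out a corrupted model.pt is to load the traced archive back in Python before debugging the iOS side. A minimal sketch (the path "model.pt" is assumed to be the file you traced):

import torch

# Sanity check: confirm the traced archive is a valid TorchScript file.
module = torch.jit.load("model.pt")
print(module)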
I think I am figuring it out. I ran trace_model.py, but it appears my output has changed: initially my output was 2 values, from which I would compute a softmax to obtain probabilities, but now it is an array of length 1000.
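A length of 1000 matches a 1000-class ImageNet classifier, so one possibility is that trace_model.py is still tracing the stock torchvision model rather than the custom 2-class one. A minimal sketch of what the tracing step could look like, assuming the custom model is a ResNet18 with its final fully connected layer replaced for 2 classes (the checkpoint name my_weights.pth is a placeholder):

import torch
import torchvision

# Rebuild the custom ResNet18 with a 2-class head and load its weights.
model = torchvision.models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("my_weights.pth", map_location="cpu"))
model.eval()

# Trace with an input matching the iOS preprocessing (1 x 3 x 224 x 224).
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# The traced output should now have 2 values, not 1000.
print(traced(example).shape)  # expected: torch.Size([1, 2])

traced.save("model.pt")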
One final comment: maybe it has something to do with this snippet from the TorchModule.mm file?
try {
    // Wrap the preprocessed RGB float buffer as a 1 x 3 x 224 x 224 tensor.
    at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat);
    torch::autograd::AutoGradMode guard(false);
    at::AutoNonVariableTypeMode non_var_type_mode(true);
    auto outputTensor = _impl.forward({tensor}).toTensor();
    // Use the typed data_ptr<float>() overload to get a float* into the output.
    float* floatBuffer = outputTensor.data_ptr<float>();
    if (!floatBuffer) {
      return nil;
    }
    NSMutableArray* results = [[NSMutableArray alloc] init];
    // 1000 matches the 1000-class HelloWorld model; a custom model should
    // loop over its own output size (e.g. 2) instead.
    for (int i = 0; i < 1000; i++) {
      [results addObject:@(floatBuffer[i])];
    }
    return [results copy];
  } catch (const std::exception& exception) {
    NSLog(@"%s", exception.what());
  }
  return nil;
}
A sample output in Python, for instance, would be [0.7, -0.1]. Do I have to modify the .mm file? I am using the same one as the HelloWorld example.
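For reference, turning that kind of 2-value output into probabilities is just a softmax over the two logits. A minimal sketch using the sample values above:

import torch
import torch.nn.functional as F

# Softmax over the two raw outputs (logits) to get class probabilities.
logits = torch.tensor([0.7, -0.1])
probs = F.softmax(logits, dim=0)
print(probs)  # roughly tensor([0.69, 0.31])

The same computation would need to happen on the iOS side if you want probabilities rather than raw logits in the app.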
Guys, I think I got it, sorry about this! It was the .mm file: I see that floatBuffer is the model's output tensor, so the loop has to match my model's output size.