pytorch/ios-demo-app

Swift UI framework for object detection D2Go

Opened this issue · 1 comment

May I check whether the object detection code in D2Go is runnable with the SwiftUI framework? I modified the existing D2Go project by removing the app delegate and storyboard files and creating new ones for SwiftUI. The backend code in the Inference and Utils folders remains unchanged. When I load a picture and run model inference, the model does not give me the correct outputs.
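
For context, a minimal sketch of how the delegate and storyboard were replaced with the SwiftUI life cycle (the file and struct names here are just for illustration, not from the original demo):

// D2GoApp.swift - hypothetical SwiftUI entry point replacing AppDelegate/SceneDelegate
// and Main.storyboard; uses the SwiftUI App life cycle (iOS 14+).
import SwiftUI

@main
struct D2GoApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView() // the view shown below
        }
    }
}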

Okay, it seems that I had to define the pixel buffer as a @State property on the view before handing it over to the C++ side. I think a locally declared variable gets deallocated off the stack when the inference code is called, leading to memory problems when torch::from_blob() is called.

import SwiftUI
import UIKit // for UIImage

struct ContentView: View {
    var inferencer = ObjectDetector()
    // The pixel buffer has to live as a @State property on the view; a buffer declared
    // locally inside runInference() can be freed before the C++ side reads it via
    // torch::from_blob(), which is why the outputs came out wrong.
    @State var pixelBuffer: [Float32] = []
    
    private func runInference() {
        // Load the test image, resize it to the model's input size, and normalize it into a
        // Float32 buffer using the UIImage extensions from the demo's Utils folder.
        let image = UIImage(named: "test1.png")!
        let resizedImage = image.resized(to: CGSize(width: CGFloat(PrePostProcessor.inputWidth), height: CGFloat(PrePostProcessor.inputHeight)))
        self.pixelBuffer = resizedImage.normalized()!
        // Run inference off the main thread; the pointer passed to detect(image:) points into
        // the @State-backed buffer, so the memory stays valid while torch::from_blob() uses it.
        DispatchQueue.global().async {
            guard let outputs = self.inferencer.module.detect(image: &self.pixelBuffer) else {
                return
            }
            print(outputs)
        }
    }
   
    var body: some View {
        Button(action: {
            runInference()
        }) {
            Text("Submit Drawing").bold()
        }
    }
}
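
If detect(image:) only reads the buffer for the duration of the call, another option would be to keep the buffer local but scope the call with withUnsafeMutableBytes, which guarantees the array's storage stays valid while detect runs. This is only a sketch under that assumption, and it also assumes the Objective-C (void *) parameter is bridged to Swift as UnsafeMutableRawPointer:

// Hypothetical alternative (would live inside ContentView): keep the pixel buffer local and
// make the detect(image:) call inside withUnsafeMutableBytes so the pointer handed to
// torch::from_blob() stays valid for the whole call. This only helps if detect(image:)
// does not keep the pointer after it returns.
private func runInferenceScoped() {
    let image = UIImage(named: "test1.png")!
    let resizedImage = image.resized(to: CGSize(width: CGFloat(PrePostProcessor.inputWidth), height: CGFloat(PrePostProcessor.inputHeight)))
    var localBuffer: [Float32] = resizedImage.normalized()!
    DispatchQueue.global().async {
        let outputs = localBuffer.withUnsafeMutableBytes { raw in
            self.inferencer.module.detect(image: raw.baseAddress!)
        }
        if let outputs = outputs {
            print(outputs)
        }
    }
}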