DBNet

The PyTorch implementation is DBNet.

How to Run

    1. generate .wts

    Download the code and model from DBNet and configure your environment.

    Go to tools/predict.py, set --save_wts to True, then run it; DBNet.wts will be generated.

    An ONNX model can also be exported; just set --onnx to True.

    2. cmake and make
    mkdir build
    cd build
    cmake ..
    make
    cp /your_wts_path/DBNet.wts .
    sudo ./dbnet -s              // serialize model to plan file, i.e. 'DBNet.engine'
    sudo ./dbnet -d ./test_imgs  // deserialize plan file and run inference; all images in the test_imgs folder will be processed
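
    The -s run builds the network from DBNet.wts and serializes it to a plan file; the -d run deserializes that plan and runs inference. Below is a minimal sketch of the save/load steps with the TensorRT C++ API; it is not this repo's exact code (the engine and runtime are assumed to be created elsewhere, and API details vary slightly across TensorRT versions):

    #include "NvInfer.h"
    #include <fstream>
    #include <string>
    #include <vector>

    // Write a built engine to a plan file such as 'DBNet.engine'.
    void saveEngine(nvinfer1::ICudaEngine& engine, const std::string& path) {
        nvinfer1::IHostMemory* plan = engine.serialize();
        std::ofstream f(path, std::ios::binary);
        f.write(reinterpret_cast<const char*>(plan->data()), plan->size());
        plan->destroy();  // pre-TensorRT-8 idiom; newer versions use `delete plan`
    }

    // Read a plan file back and deserialize it into an engine.
    nvinfer1::ICudaEngine* loadEngine(nvinfer1::IRuntime& runtime,
                                      const std::string& path) {
        std::ifstream f(path, std::ios::binary | std::ios::ate);
        size_t size = f.tellg();           // plan file size
        std::vector<char> blob(size);
        f.seekg(0);
        f.read(blob.data(), blob.size());
        return runtime.deserializeCudaEngine(blob.data(), blob.size());
    }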
    

For Windows

https://github.com/BaofengZan/DBNet-TensorRT

Todo

  • 1. In common.hpp, the following two functions can be merged (a merged sketch follows this list).

    ILayer* convBnLeaky(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true) 
    ILayer* convBnLeaky2(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)
  • 2. The post-processing here should be optimized; it is slightly different from the PyTorch side.

  • 3. The input image here is resized to 640 x 640 directly, while the PyTorch side uses a letterbox resize (a letterbox sketch follows this list).
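
For item 1, one possible merged version is sketched below. It assumes the two helpers differ only in a small detail (here, the weight-key layout) that can be selected with an extra parameter; the actual difference should be checked against common.hpp. addBatchNorm2d is the helper already defined there, and the weight-key names are assumptions about the .wts layout:

    #include "NvInfer.h"
    #include <map>
    #include <string>
    using namespace nvinfer1;

    // Merged conv + batchnorm + leaky-ReLU helper. `variant2` selects the
    // behavior of the old convBnLeaky2; the hypothetical ".conv2.weight"
    // key stands in for whatever actually differs between the two.
    ILayer* convBnLeaky(INetworkDefinition* network,
                        std::map<std::string, Weights>& weightMap,
                        ITensor& input, int outch, int ksize, int s, int g,
                        std::string lname, bool bias = true,
                        bool variant2 = false) {
        std::string wkey = lname + (variant2 ? ".conv2.weight" : ".conv.weight");
        Weights emptyBias{DataType::kFLOAT, nullptr, 0};
        IConvolutionLayer* conv = network->addConvolutionNd(
            input, outch, DimsHW{ksize, ksize}, weightMap[wkey],
            bias ? weightMap[lname + ".conv.bias"] : emptyBias);
        conv->setStrideNd(DimsHW{s, s});
        conv->setPaddingNd(DimsHW{ksize / 2, ksize / 2});
        conv->setNbGroups(g);
        // addBatchNorm2d comes from common.hpp in this repo
        IScaleLayer* bn = addBatchNorm2d(network, weightMap, *conv->getOutput(0),
                                         lname + ".bn", 1e-5);
        IActivationLayer* leaky = network->addActivation(
            *bn->getOutput(0), ActivationType::kLEAKY_RELU);
        leaky->setAlpha(0.1);
        return leaky;
    }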
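
For item 3, a minimal letterbox sketch with OpenCV is shown below: scale the image to fit inside 640 x 640 while keeping the aspect ratio, then pad the borders. The pad value and the exact parameters used on the PyTorch side are assumptions; detected boxes would need to be mapped back to the original image using the same scale and offsets:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>

    // Scale into a dstW x dstH canvas without distortion, pad the rest.
    cv::Mat letterbox(const cv::Mat& img, int dstW = 640, int dstH = 640) {
        float r = std::min(dstW / (float)img.cols, dstH / (float)img.rows);
        int newW = (int)std::round(img.cols * r);
        int newH = (int)std::round(img.rows * r);
        cv::Mat resized;
        cv::resize(img, resized, cv::Size(newW, newH));
        cv::Mat out(dstH, dstW, img.type(), cv::Scalar(128, 128, 128)); // pad value assumed
        int dx = (dstW - newW) / 2;  // left/top offsets; keep them (with r)
        int dy = (dstH - newH) / 2;  // to map detections back to the original
        resized.copyTo(out(cv::Rect(dx, dy, newW, newH)));
        return out;
    }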