facebookresearch/d2go

Training on a custom dataset

mathrb opened this issue · 2 comments

Hello,

I've got a custom dataset consisting of annotated document pages that follow the COCO format.
I've been able to train a model using detectron2, and d2go.
For d2go, I've used this configuration file as starting point: faster_rcnn_fbnetv3g_fpn.
The model works great, similar to detectron2.
I've tried to quantize it using both of these approaches:

  1. export to int8
  2. QAT, using the d2go-trained model weights and qat_faster_rcnn_fbnetv3a_C4.yaml; I changed the `_BASE_` to the model configuration I trained (based on faster_rcnn_fbnetv3g_fpn), then trained for 5k iterations with a low LR (0.0001) compared to the original d2go training (0.16)
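For reference, the `_BASE_` change in step 2 looked roughly like this (the file names and paths are illustrative, not my exact config):

```yaml
# QAT config sketch, hypothetical paths: _BASE_ now points at my trained
# fbnetv3g_fpn config instead of the stock fbnetv3a one
_BASE_: "my_faster_rcnn_fbnetv3g_fpn.yaml"
SOLVER:
  BASE_LR: 0.0001   # low LR for QAT fine-tuning, vs 0.16 for the full training
  MAX_ITER: 5000
```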

Export to int8

The export creates a new model that is much faster, but gives poor results. I tried both the torchscript and torchscript_int8 predictor types and got poor results with both.
Here are some logs from the export process that may help identify the root cause:
Using un-unified arch_def for ARCH "FBNetV3_G_fpn" (without scaling)
Further down in the logs, I see a lot of these TracerWarnings:

detectron2/layers/nms.py:15: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert boxes.shape[-1] == 4
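For context, this kind of warning comes from torch.jit.trace itself, not from d2go. A minimal sketch (plain PyTorch, my own toy function) of what the tracer is warning about, namely that a data-dependent Python branch gets frozen into the traced graph:

```python
import torch

def f(x):
    # Data-dependent Python branch: torch.jit.trace cannot record it,
    # so the branch taken for the example input is baked into the graph.
    if x.sum() > 0:
        return x * 2
    return x + 10

# Traced with a positive input, so the `x * 2` branch is frozen in.
# This call emits a TracerWarning like the ones in the export logs.
traced = torch.jit.trace(f, torch.ones(3))

print(f(-torch.ones(3)))       # eager:  tensor([9., 9., 9.])
print(traced(-torch.ones(3)))  # traced: tensor([-2., -2., -2.]), frozen branch
```

Whether this actually hurts accuracy depends on whether the frozen condition (here, `boxes.shape[-1] == 4` in nms.py, which is a shape assert) can vary at inference time; a constant shape assert is usually harmless.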

QAT

Results in a new model that seems identical to the pre-trained model: the inference time hasn't changed at all, and neither has the model size. I'm wondering what's wrong in this case; maybe my understanding of QAT is wrong.
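For what it's worth, unchanged size and speed after QAT would be consistent with how QAT works in plain PyTorch: during QAT the model still runs in fp32 with fake-quant observers attached, and only an explicit convert step produces real int8 kernels. A minimal sketch with the core torch.ao.quantization API (not the d2go wrapper; presumably the d2go exporter performs the convert step when producing torchscript_int8):

```python
import torch
import torch.ao.quantization as tq

# Toy model standing in for the detector backbone.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")

# prepare_qat inserts fake-quant/observer modules: the weights stay fp32,
# so checkpoint size and inference speed are essentially unchanged.
prepared = tq.prepare_qat(model)

# ... QAT fine-tuning loop would go here ...

# Only convert() swaps the fp32 modules for real int8 kernels.
prepared.eval()
quantized = tq.convert(prepared)
# quantized[0].weight() is now a qint8 tensor, not fp32.
```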

Any help would be appreciated.

Hello, please tell me how to quantize a model trained on custom data into int8. I haven't been able to get it working.

@yuzhuhua everything is available in d2go.tools.exporter.
Once you've trained your model on your data, you can use the exporter to export it to torchscript and torchscript_int8.
Here's the link to the documentation: https://github.com/facebookresearch/d2go/tree/main/demo#export-to-torchscript--int8-model
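Roughly, the invocation looks like this (flag names per the linked demo; the config and weight paths are placeholders, so double-check against the docs for your d2go version):

```
# Sketch of the export step; replace the paths with your own.
python -m d2go.tools.exporter \
    --config-file my_faster_rcnn_fbnetv3g_fpn.yaml \
    --predictor-types torchscript torchscript_int8 \
    --output-dir ./export \
    MODEL.WEIGHTS path/to/model_final.pth
```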