Xilinx/finn

TopK is not converted to LabelSelect_hls

pbk20191 opened this issue · 3 comments

dev branch: e188b4c

Quick summary

I have a simple MNIST model and I want to add TopK post-processing to it.
However, the TopK node is not converted to LabelSelect_hls during estimation, even though the advanced_example notebook converts it as expected.

Details

Steps to Reproduce

I followed the steps for adding pre- and post-processing explained in the 4_advanced_builder_settings notebook:
I add ToTensor division pre-processing and TopK post-processing, then I run the estimation flow with the two custom steps, as shown below.

import os
import shutil

import torch
from brevitas.export import export_qonnx
from qonnx.core.datatype import DataType
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.insert_topk import InsertTopK
from qonnx.transformation.merge_onnx_models import MergeONNXModels

import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg
from finn.util.pytorch import ToTensor

def custom_step_add_pre_proc(model: ModelWrapper, cfg: build.DataflowBuildConfig):
    ishape = model.get_tensor_shape(model.graph.input[0].name)
    # preprocessing: torchvision's ToTensor divides uint8 inputs by 255
    preproc = ToTensor()
    export_qonnx(preproc, torch.randn(ishape), "preproc.onnx", opset_version=12)
    preproc_model = ModelWrapper("preproc.onnx")
    # set input finn datatype to UINT8
    preproc_model.set_tensor_datatype(preproc_model.graph.input[0].name, DataType["UINT8"])
    # merge pre-processing onnx model with cnv model (passed as input argument)
    model = model.transform(MergeONNXModels(preproc_model))
    return model

def custom_step_add_post_proc(model: ModelWrapper, cfg: build.DataflowBuildConfig):
    model = model.transform(InsertTopK(k=1))
    return model
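
# Optional sanity check (a sketch): applying the post-processing step on its
# own should append a TopK node to the graph. "mnist_model.onnx" is a
# placeholder for the attached model's filename.
check_model = ModelWrapper("mnist_model.onnx")
check_model = custom_step_add_post_proc(check_model, None)
assert "TopK" in [n.op_type for n in check_model.graph.node]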
estimates_output_dir = "output_estimates_only"

# Delete previous run results if they exist
if os.path.exists(estimates_output_dir):
    shutil.rmtree(estimates_output_dir)
    print("Previous run results deleted!")


cfg_estimates = build.DataflowBuildConfig(
    output_dir          = estimates_output_dir,
    mvau_wwidth_max     = 80,
    target_fps          = 10000,
    synth_clk_period_ns = 10.0,
    # fpga_part        = "xc7z020clg400-1",
    board               = "Pynq-Z1",
    shell_flow_type     = build_cfg.ShellFlowType.VIVADO_ZYNQ,
    steps               = [custom_step_add_pre_proc, custom_step_add_post_proc] + build_cfg.estimate_only_dataflow_steps,
    generate_outputs    = [
        build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
    ],
)

%%time
# model_file is the path to the model attached below
build.build_dataflow_cfg(model_file, cfg_estimates)

Then I run the estimation build with the model I attached below.
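Once the build finishes, the estimate reports can be read back from the output directory. A minimal sketch, assuming the report layout the estimate-only flow wrote in my run (report/estimate_network_performance.json):

import json

report_path = os.path.join(estimates_output_dir, "report",
                           "estimate_network_performance.json")
with open(report_path) as f:
    print(json.load(f))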

Expected behavior

The TopK node should be converted to a LabelSelect_hls node.

Actual behavior

The TopK node is not converted to LabelSelect_hls and is discarded in a later step.

step_convert_to_hw onnx
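
For anyone seeing the same thing, the FINN datatype feeding the TopK node can be inspected on the intermediate models the builder writes out. A minimal sketch, assuming the intermediate_models layout of the build output (the step filename below is a placeholder):

from qonnx.core.modelwrapper import ModelWrapper

model = ModelWrapper(
    "output_estimates_only/intermediate_models/step_tidy_up.onnx"  # placeholder
)
topk = next(n for n in model.graph.node if n.op_type == "TopK")
print(model.get_tensor_datatype(topk.input[0]))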

Additional context

model without pre & post processing: mnist_model.zip
model with pre & post processing: processed_model.zip
estimation result: output_estimates_only.zip

My original model's output was not an integer type, and that caused this issue.
After changing the model output to an integer type, everything works as expected.
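
This is consistent with FINN's InferLabelSelectLayer transform (applied during step_convert_to_hw): as far as I can tell from finn.transformation.fpgadataflow.convert_to_hw_layers, it skips any TopK whose input datatype is not an integer type. Roughly:

for node in model.graph.node:
    if node.op_type == "TopK":
        idt = model.get_tensor_datatype(node.input[0])
        if not idt.is_integer():
            # float input: the TopK is left as-is and dropped later
            continue
        # ...otherwise the TopK is replaced by a LabelSelect HW node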

I have the same issue and I solved it by quantizing the output node, but then verification fails. How exactly did you convert the model output to integer?

Well, my solution is the same as yours.
I specify output quantization for my model's last layer, whose output becomes the input of the TopK node.

forward["conv1"] = ```
forward["relu1"] = ```
forward["pool1"] = ```
forward["conv2"] =```
forward["relu2"] = ```
forward["pool2"] = ```
forward["flatten"] = ````
forward["fc1"] =  ```
forward["relu3"]  = ````
# output_quant is None by default in brevitas
forward["fc2"] = qnn.QuantLinear(4 * channel_multiplier, 10, bias=True, weight_bit_width=weight_bit_width, output_quant=Int8ActPerTensorFloat)