onnx/optimizer

Multiple ONNX optimizer tests fail on big-endian architectures

JonTriebenbach opened this issue · 4 comments

These tests were run on s390x, which is a big-endian architecture.

Example failure log from pytest:

_________________________________________________ TestOptimizer.test_fuse_pad_into_conv_1d _________________________________________________

self = <optimizer_test.TestOptimizer testMethod=test_fuse_pad_into_conv_1d>

    def test_fuse_pad_into_conv_1d(self):  # type: () -> None
        pad = helper.make_node(
            "Pad",
            ["X", "Pads"],
            ["P"],
            mode="constant"
        )
        conv = helper.make_node("Conv", ["P", "Y"], ["Z"])
        graph = helper.make_graph(
            [pad, conv],
            "test",
            [helper.make_tensor_value_info("X", TensorProto.FLOAT, (1, 5, 30)),
             helper.make_tensor_value_info("Pads", TensorProto.INT64, (6,)),
             helper.make_tensor_value_info("Y", TensorProto.FLOAT, (16, 5, 32))],
            [helper.make_tensor_value_info("Z", TensorProto.FLOAT, (1, 16, 1))],
            [helper.make_tensor("Pads", TensorProto.INT64,
             dims=(6,),
             vals=np.array([0, 0, 1, 0, 0, 1]).astype(np.int64).tobytes(),
             raw=True)])
        optimized_model = self._optimized(graph, ["fuse_pad_into_conv"])
    
        assert len(list(optimized_model.graph.node)) == 1
        assert optimized_model.graph.node[0].op_type == "Conv"
        assert optimized_model.graph.node[0].attribute[0].name == "pads"
>       assert list(optimized_model.graph.node[0].attribute[0].ints) == [1, 1]
E       AssertionError: assert [720575940379...7594037927936] == [1, 1]
E         At index 0 diff: 72057594037927936 != 1
E         Use -v to get the full diff

optimizer_test.py:1038: AssertionError

Notice the big integers: 72057594037927936 is 0x0100000000000000 in hex, i.e. the value 1 with its eight bytes in reversed order, which points to an endian-swapping issue.
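For illustration, the mismatch can be reproduced in a Python shell: the little-endian byte encoding of 1, reinterpreted as big-endian, yields exactly the value seen in the failure.

    >>> hex(72057594037927936)
    '0x100000000000000'
    >>> int.from_bytes((1).to_bytes(8, "little"), "big")
    72057594037927936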

Attached is a patch file fixing the issue in the test-case helper.py code. helper.py needs to account for generating the raw data on a big-endian architecture, since the ONNX internals always expect little-endian values.
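A minimal sketch of the kind of fix involved (the helper name to_raw_little_endian is hypothetical, not the actual patch): convert the array to an explicitly little-endian dtype before serializing, so tobytes() produces the same byte sequence on any host.

    import numpy as np

    def to_raw_little_endian(vals, dtype=np.int64):
        # ONNX raw_data is defined to be little-endian; an explicitly
        # little-endian dtype makes tobytes() host-independent.
        arr = np.asarray(vals, dtype=dtype)
        return arr.astype(arr.dtype.newbyteorder("<")).tobytes()

    # In the test above, the Pads initializer could then be built with:
    #   vals=to_raw_little_endian([0, 0, 1, 0, 0, 1]), raw=True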

A pull request can be submitted if needed.

Attaching corrected patch below.

There is an issue with this helper.py change. Additional investigation underway.

Corrected patch attached.

onnx-optimizer-test.patch.zip

@JonTriebenbach : Can you send a PR instead?
ONNX optimizers are being moved to a standalone repo under the onnx tree: https://github.com/onnx/optimizer
If the change is specific to the optimizers, please open a PR there; otherwise open a PR in this repo.

Transferring this issue to the onnx/optimizer repo.