fabio-sim/LightGlue-ONNX

Error when building the end2end onnx file.

Closed this issue · 1 comment

I fixed the height and width to a static shape, keeping only num_keypoints0 and num_matches0 dynamic.
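
For context, a minimal sketch of this kind of export (the stub module, axis names, and output signature below are my assumptions for illustration, not the actual export.py code):

import torch

class End2EndStub(torch.nn.Module):
    # Stand-in for the real SuperPoint+LightGlue pipeline; only the
    # input/output signature matters for illustrating the export call.
    def forward(self, image0, image1):
        kpts0 = torch.zeros(1, 128, 2)
        kpts1 = torch.zeros(1, 128, 2)
        matches0 = torch.zeros(100, 2, dtype=torch.int64)
        mscores0 = torch.zeros(100)
        return kpts0, kpts1, matches0, mscores0

h = w = 512  # height and width baked in as static
torch.onnx.export(
    End2EndStub().eval(),
    (torch.randn(1, 1, h, w), torch.randn(1, 1, h, w)),
    "superpoint_lightglue_end2end.onnx",
    opset_version=17,
    input_names=["image0", "image1"],
    output_names=["kpts0", "kpts1", "matches0", "mscores0"],
    # Only the keypoint and match counts remain dynamic; batch,
    # channel, height, and width are all fixed.
    dynamic_axes={
        "kpts0": {1: "num_keypoints0"},
        "kpts1": {1: "num_keypoints1"},
        "matches0": {0: "num_matches0"},
        "mscores0": {0: "num_matches0"},
    },
)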
1. Run:
python export.py --img_size 512 --extractor_type superpoint --end2end --dynamic
This produces superpoint_lightglue_end2end.onnx.

2. Then:
trtexec --onnx=superpoint_lightglue_end2end-sim.onnx

&&&& RUNNING TensorRT.trtexec [TensorRT v8601] #
[03/05/2024-12:26:26] [I] === Model Options ===
[03/05/2024-12:26:26] [I] Format: ONNX
[03/05/2024-12:26:26] [I] Model: superpoint_lightglue_end2end-sim.onnx
[03/05/2024-12:26:26] [I] Output:
[03/05/2024-12:26:26] [I] === Build Options ===
[03/05/2024-12:26:26] [I] Max batch: explicit batch
[03/05/2024-12:26:26] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[03/05/2024-12:26:26] [I] minTiming: 1
[03/05/2024-12:26:26] [I] avgTiming: 8
[03/05/2024-12:26:26] [I] Precision: FP32
[03/05/2024-12:26:26] [I] LayerPrecisions:
[03/05/2024-12:26:26] [I] Layer Device Types:
[03/05/2024-12:26:26] [I] Calibration:
[03/05/2024-12:26:26] [I] Refit: Disabled
[03/05/2024-12:26:26] [I] Version Compatible: Disabled
[03/05/2024-12:26:26] [I] TensorRT runtime: full
[03/05/2024-12:26:26] [I] Lean DLL Path:
[03/05/2024-12:26:26] [I] Tempfile Controls: { in_memory: allow, temporary: allow }
[03/05/2024-12:26:26] [I] Exclude Lean Runtime: Disabled
[03/05/2024-12:26:26] [I] Sparsity: Disabled
[03/05/2024-12:26:26] [I] Safe mode: Disabled
[03/05/2024-12:26:26] [I] Build DLA standalone loadable: Disabled
[03/05/2024-12:26:26] [I] Allow GPU fallback for DLA: Disabled
[03/05/2024-12:26:26] [I] DirectIO mode: Disabled
[03/05/2024-12:26:26] [I] Restricted mode: Disabled
[03/05/2024-12:26:26] [I] Skip inference: Disabled
[03/05/2024-12:26:26] [I] Save engine:
[03/05/2024-12:26:26] [I] Load engine:
[03/05/2024-12:26:26] [I] Profiling verbosity: 0
[03/05/2024-12:26:26] [I] Tactic sources: Using default tactic sources
[03/05/2024-12:26:26] [I] timingCacheMode: local
[03/05/2024-12:26:26] [I] timingCacheFile:
[03/05/2024-12:26:26] [I] Heuristic: Disabled
[03/05/2024-12:26:26] [I] Preview Features: Use default preview flags.
[03/05/2024-12:26:26] [I] MaxAuxStreams: -1
[03/05/2024-12:26:26] [I] BuilderOptimizationLevel: -1
[03/05/2024-12:26:26] [I] Input(s)s format: fp32:CHW
[03/05/2024-12:26:26] [I] Output(s)s format: fp32:CHW
[03/05/2024-12:26:26] [I] Input build shapes: model
[03/05/2024-12:26:26] [I] Input calibration shapes: model
[03/05/2024-12:26:26] [I] === System Options ===
[03/05/2024-12:26:26] [I] Device: 0
[03/05/2024-12:26:26] [I] DLACore:
[03/05/2024-12:26:26] [I] Plugins:
[03/05/2024-12:26:26] [I] setPluginsToSerialize:
[03/05/2024-12:26:26] [I] dynamicPlugins:
[03/05/2024-12:26:26] [I] ignoreParsedPluginLibs: 0
[03/05/2024-12:26:26] [I]
[03/05/2024-12:26:26] [I] === Inference Options ===
[03/05/2024-12:26:26] [I] Batch: Explicit
[03/05/2024-12:26:26] [I] Input inference shapes: model
[03/05/2024-12:26:26] [I] Iterations: 10
[03/05/2024-12:26:26] [I] Duration: 3s (+ 200ms warm up)
[03/05/2024-12:26:26] [I] Sleep time: 0ms
[03/05/2024-12:26:26] [I] Idle time: 0ms
[03/05/2024-12:26:26] [I] Inference Streams: 1
[03/05/2024-12:26:26] [I] ExposeDMA: Disabled
[03/05/2024-12:26:26] [I] Data transfers: Enabled
[03/05/2024-12:26:26] [I] Spin-wait: Disabled
[03/05/2024-12:26:26] [I] Multithreading: Disabled
[03/05/2024-12:26:26] [I] CUDA Graph: Disabled
[03/05/2024-12:26:26] [I] Separate profiling: Disabled
[03/05/2024-12:26:26] [I] Time Deserialize: Disabled
[03/05/2024-12:26:26] [I] Time Refit: Disabled
[03/05/2024-12:26:26] [I] NVTX verbosity: 0
[03/05/2024-12:26:26] [I] Persistent Cache Ratio: 0
[03/05/2024-12:26:26] [I] Inputs:
[03/05/2024-12:26:26] [I] === Reporting Options ===
[03/05/2024-12:26:26] [I] Verbose: Disabled
[03/05/2024-12:26:26] [I] Averages: 10 inferences
[03/05/2024-12:26:26] [I] Percentiles: 90,95,99
[03/05/2024-12:26:26] [I] Dump refittable layers:Disabled
[03/05/2024-12:26:26] [I] Dump output: Disabled
[03/05/2024-12:26:26] [I] Profile: Disabled
[03/05/2024-12:26:26] [I] Export timing to JSON file:
[03/05/2024-12:26:26] [I] Export output to JSON file:
[03/05/2024-12:26:26] [I] Export profile to JSON file:
[03/05/2024-12:26:26] [I]
[03/05/2024-12:26:26] [I] === Device Information ===
[03/05/2024-12:26:26] [I] Selected Device: NVIDIA GeForce RTX 3060
[03/05/2024-12:26:26] [I] Compute Capability: 8.6
[03/05/2024-12:26:26] [I] SMs: 28
[03/05/2024-12:26:26] [I] Device Global Memory: 12050 MiB
[03/05/2024-12:26:26] [I] Shared Memory per SM: 100 KiB
[03/05/2024-12:26:26] [I] Memory Bus Width: 192 bits (ECC disabled)
[03/05/2024-12:26:26] [I] Application Compute Clock Rate: 1.777 GHz
[03/05/2024-12:26:26] [I] Application Memory Clock Rate: 7.501 GHz
[03/05/2024-12:26:26] [I]
[03/05/2024-12:26:26] [I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.
[03/05/2024-12:26:26] [I]
[03/05/2024-12:26:26] [I] TensorRT version: 8.6.1
[03/05/2024-12:26:26] [I] Loading standard plugins
[03/05/2024-12:26:26] [I] [TRT] [MemUsageChange] Init CUDA: CPU +213, GPU +0, now: CPU 217, GPU 3632 (MiB)
[03/05/2024-12:26:30] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1219, GPU +266, now: CPU 1511, GPU 3898 (MiB)
[03/05/2024-12:26:30] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[03/05/2024-12:26:30] [I] Start parsing network model.
[03/05/2024-12:26:30] [I] [TRT] ----------------------------------------------------------------
[03/05/2024-12:26:30] [I] [TRT] Input filename: superpoint_lightglue_end2end-sim.onnx
[03/05/2024-12:26:30] [I] [TRT] ONNX IR version: 0.0.8
[03/05/2024-12:26:30] [I] [TRT] Opset version: 17
[03/05/2024-12:26:30] [I] [TRT] Producer name: pytorch
[03/05/2024-12:26:30] [I] [TRT] Producer version: 2.1.2
[03/05/2024-12:26:30] [I] [TRT] Domain:
[03/05/2024-12:26:30] [I] [TRT] Model version: 0
[03/05/2024-12:26:30] [I] [TRT] Doc string:
[03/05/2024-12:26:30] [I] [TRT] ----------------------------------------------------------------
[03/05/2024-12:26:30] [W] [TRT] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/05/2024-12:26:30] [W] [TRT] onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
[03/05/2024-12:26:30] [I] Finished parsing network model. Parse time: 0.137893
[03/05/2024-12:26:31] [I] [TRT] Graph optimization time: 0.206916 seconds.
[03/05/2024-12:26:31] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[03/05/2024-12:26:45] [I] [TRT] Detected 2 inputs and 4 output network tensors.
[03/05/2024-12:28:05] [E] Error[1]: autotuning: CUDA error 2 allocating 549757912573-byte buffer: out of memory
[03/05/2024-12:28:05] [E] Error[1]: [codeGenerator.cpp::compileGraph::894] Error Code 1: Myelin (autotuning: CUDA error 2 allocating 549757912573-byte buffer: out of memory)
[03/05/2024-12:28:05] [E] Engine could not be created from network
[03/05/2024-12:28:05] [E] Building engine failed
[03/05/2024-12:28:05] [E] Failed to create engine from model or file.
[03/05/2024-12:28:05] [E] Engine set up failed

The build fails in Myelin autotuning: CUDA error 2 (out of memory) while trying to allocate a 549757912573-byte buffer.
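
That is just over 512 GiB, far beyond the 12 GiB on the RTX 3060, so the autotuner seems to be sizing a buffer against an effectively unbounded dynamic extent. For reference, here is a rough Python equivalent of the trtexec build step with an explicit workspace cap (mirroring trtexec's --memPoolSize=workspace:4096 option); I'm not sure a cap avoids this particular Myelin allocation:

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("superpoint_lightglue_end2end-sim.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# Cap the workspace pool at 4 GiB so tactic selection cannot request
# more device memory than the GPU actually has.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 4 << 30)

engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise SystemExit("engine build failed")
with open("superpoint_lightglue_end2end.engine", "wb") as f:
    f.write(engine_bytes)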

Released improved export script