I successfully converted a YOLOv7 ONNX model to a TRT engine and achieved correct detection using DeepStream 6.3 on an RTX 3060. However, when I used the same ONNX file on an RTX 4090 with DeepStream 7.0, the TRT engine was successfully generated, but the detection results were abnormal (bounding boxes were misplaced). Could you please explain why this happens?
Update: with DeepStream 7.0 and CUDA 12.2 on the RTX 3060, the TRT engine converted from the same ONNX file detects perfectly. The environment and program are identical to those used on the RTX 4090. Why is this happening?
Are you exporting the model to ONNX with the updated exporter? I made some updates to the code a few days ago. Please try exporting it again with the new files.
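After re-exporting, it may help to sanity-check the new ONNX file before letting DeepStream rebuild the TensorRT engine on the RTX 4090. The sketch below is not the updated exporter itself, just a quick verification with onnx/onnxruntime; the file name "yolov7.onnx" and the 1x3x640x640 input shape are assumptions and depend on your export settings.

```python
# Minimal sanity check of a re-exported YOLOv7 ONNX model.
# Assumptions: the exported file is "yolov7.onnx" with a 640x640 input;
# adjust the name and shape to match your exporter settings.
import numpy as np
import onnx
import onnxruntime as ort

# Structural check of the re-exported graph.
model = onnx.load("yolov7.onnx")
onnx.checker.check_model(model)

# Run one dummy inference to confirm the graph executes and to inspect
# the output shapes before building the TensorRT engine in DeepStream.
session = ort.InferenceSession("yolov7.onnx", providers=["CPUExecutionProvider"])
input_meta = session.get_inputs()[0]
print("input:", input_meta.name, input_meta.shape)

dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.run(None, {input_meta.name: dummy})
for out_meta, out in zip(session.get_outputs(), outputs):
    print("output:", out_meta.name, out.shape)
```

If the outputs look sane, also delete the previously generated .engine file on the RTX 4090 so DeepStream rebuilds the engine from the new ONNX instead of reusing the old one.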