TensorRT not working for custom trained models #8878
Replies: 5 comments 8 replies
-
Hello! Thanks for reaching out and providing detailed information about the issue you're encountering with TensorRT and your custom trained model. 🌟 It seems like you've already tried a few troubleshooting steps, which is great. Given the details, it might indeed be related to the compatibility between your TensorRT, CUDA, and cuDNN versions, so it's worth checking them against the TensorRT support matrix.

If you continue to face issues, it would be helpful to share the specific error messages you're encountering during export or inference. Here's a quick example of how you might enable verbose logging during export:

```python
from ultralytics import YOLO

# Load your custom trained model
model = YOLO('path/to/your/custom_model.pt')

# Export with verbose logging
model.export(format='engine', verbose=True)
```

We're here to help, so feel free to share any additional details or error logs! 🛠
-
Hey, I checked the compatibility matrix, and the thing is that for pre-trained models I am getting the speedup using TensorRT, but it is not working on custom-trained models.

```python
import time

model = YOLO('models/tennis_racket_detect.pt')
tensorrt_engine_path = "models/tennis_racket_detect.engine"
start_time = time.time()
```

Attaching the verbose output and the error message I get when trying the model.
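Since the comparison here is raw wall-clock timing, a small helper can make the .pt vs .engine numbers fairer. This is a pure-Python sketch (the function name and warm-up counts are my own, not from the thread); TensorRT engines in particular are slow on their first few calls, so discarding warm-up runs avoids skewed averages:

```python
import time

def average_latency(infer, n_warmup=3, n_runs=10):
    """Time a zero-argument inference callable, ignoring warm-up runs.

    TensorRT engines are typically slow on the first few calls
    (context/buffer setup), so excluding them avoids skewed numbers.
    """
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) / n_runs

# Usage sketch (model names as in the thread; requires ultralytics):
# pt_latency = average_latency(lambda: model('frame.jpg'))
# trt_latency = average_latency(lambda: model_trt('frame.jpg'))
```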
-
Hi, I encountered the same issue. The solution I found is to specify task='segment' when defining the YOLO model. I suspect the cause of this error is that different tasks have distinct parsing methods for model results, so specifying the wrong task may lead to incorrect parsing of class indices.
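A minimal sketch of that fix, assuming ultralytics is installed and an exported .engine file exists (the helper name and paths are illustrative, not from the thread):

```python
try:
    from ultralytics import YOLO
except ImportError:
    YOLO = None  # ultralytics not installed in this environment; sketch only

def load_engine(path, task='detect'):
    """Load an exported TensorRT engine with an explicit task.

    A raw .engine file may not carry the task metadata that a .pt
    checkpoint does, so spelling the task out ('detect', 'segment',
    'classify', 'pose') avoids wrong parsing of the model outputs.
    """
    if YOLO is None:
        raise RuntimeError("ultralytics is required to load the engine")
    return YOLO(path, task=task)

# Usage sketch (path as in the thread):
# model = load_engine('models/tennis_racket_detect.engine', task='segment')
# results = model.predict('video.mp4')
```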
-
I had set up TensorRT 8.6.1 on my Ubuntu 22.04 machine with CUDA 12.2 and cuDNN 9.0.0.
I used this to speed up my YOLO models. As given in the documentation, I was able to achieve a significant boost for the provided bus image, and when I tried the same with a pre-trained object detection model it worked just as well. But when I attempt it on a custom trained model for the same object detection purpose, it creates the .engine file but fails to execute it and throws errors.
My code is quite straightforward: I simply read the path of the video file, generate the .engine file, and then attempt to run model.predict(video_path). I've tried various combinations of parameters, including running model() with stream=True or using model.predict(), with and without saving the video, but the problem persists. I did check the issues, and in one similar issue someone mentioned that specifying the task helps, but it did not in my case.
Based on observation, I feel it is an issue with .export() for custom trained models.
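The export-then-predict flow described above can be sketched end to end, assuming ultralytics is installed (the helper name is my own; `export()` in Ultralytics returns the path of the exported file):

```python
try:
    from ultralytics import YOLO
except ImportError:
    YOLO = None  # ultralytics not installed in this environment; sketch only

def export_and_run(pt_path, video_path, task='detect'):
    """Export a .pt checkpoint to a TensorRT engine, then run prediction.

    Reloading the exported engine with an explicit task is the main
    thing to try when the engine works for pretrained models but
    fails for custom trained ones.
    """
    if YOLO is None:
        raise RuntimeError("ultralytics is required")
    # Export with verbose logging to capture any TensorRT build errors
    engine_path = YOLO(pt_path).export(format='engine', verbose=True)
    # Reload the engine with the task spelled out explicitly
    model = YOLO(engine_path, task=task)
    return model.predict(video_path)
```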