Inference Pipeline adding latency #468
Unanswered
MubashirWaheed asked this question in Q&A
Replies: 2 comments · 12 replies
-
Hi @MubashirWaheed, I'm transferring this to […]
1 reply
-
Hi @MubashirWaheed, could you please first clarify what we see in the attached screenshot? It claims:
Answering the question about alternatives: yes, there are plenty of options you may use:
I am happy to help, but I am struggling to understand the whole context. […]
11 replies
-
I trained a model on a custom dataset on top of yolov8l-seg. Here is the code that I am running:
I am passing the arguments as command-line arguments. I was only getting 6-7 fps, so I converted best.pt to TensorRT weights. This should have improved the fps but didn't. I get 18-20 fps when not using the inference pipeline.
What is causing this issue?
In this thread I discussed the problem: https://discuss.roboflow.com/t/tensorrt-converted-weights-not-working-with-supervision/6135/20
supervision version: 0.21.0
inference-gpu version: 0.11.0
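Since the same TensorRT weights reach 18-20 fps outside the pipeline, one way to narrow down where the latency comes from is to time the raw model call by itself, with no pipeline around it. A minimal sketch of such a timing harness (all names here are hypothetical; `dummy_infer` stands in for the actual TensorRT inference call, and `FPSMonitor` is a hand-rolled helper, not an API from supervision or inference):

```python
import time
from collections import deque


class FPSMonitor:
    """Rolling-window FPS counter (hypothetical helper, shown only to
    illustrate measuring throughput around a single inference call)."""

    def __init__(self, window: int = 30):
        # Keep only the most recent `window` timestamps.
        self.timestamps = deque(maxlen=window)

    def tick(self) -> None:
        self.timestamps.append(time.monotonic())

    @property
    def fps(self) -> float:
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        # N timestamps bound N-1 frame intervals.
        return (len(self.timestamps) - 1) / span if span > 0 else 0.0


def dummy_infer(frame):
    # Stand-in for the real model call; swap in your TensorRT engine's
    # inference here to time the model alone, without the pipeline.
    time.sleep(0.01)
    return frame


monitor = FPSMonitor()
for frame in range(60):
    monitor.tick()
    dummy_infer(frame)

print(f"measured ~{monitor.fps:.0f} fps")
```

If the bare model call measured this way is much faster than what the pipeline reports, the extra time is being spent in pipeline-side work (frame decoding, queuing, annotation) rather than in the TensorRT engine itself.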