How to get detection and tracking data from each frame of the stream (like box location, id, class, etc.)? #1653
Replies: 2 comments
-
@SenchaBrest see https://docs.ultralytics.com/modes/track for details. Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to that detection in video streams. The output of the tracker is the same as detection, with an added object ID.

**Available Trackers**

The following tracking algorithms have been implemented and can be enabled by passing `tracker=tracker_type.yaml`:

- BoT-SORT - `botsort.yaml`
- ByteTrack - `bytetrack.yaml`

The default tracker is BoT-SORT.

**Tracking**

Use a trained YOLOv8n/YOLOv8n-seg model to run the tracker on video streams.
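For example, a minimal sketch of the documented usage, assuming the standard `ultralytics` Python API (the video path is a placeholder):

```python
from ultralytics import YOLO

# Load an official detection or segmentation model
model = YOLO("yolov8n.pt")        # detection model
# model = YOLO("yolov8n-seg.pt")  # segmentation model

# Run tracking; BoT-SORT is the default, or pass tracker="bytetrack.yaml"
results = model.track(source="video.mp4", show=True)
```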
As in the above usage, we support both detection and segmentation models for tracking; the only thing you need to do is load the corresponding (detection or segmentation) model.

**Configuration**

*Tracking*: Tracking shares its configuration with predict, i.e. arguments such as `conf`, `iou`, and `show`.

*Tracker*: We also support using a modified tracker config file: just copy a config file, i.e. `custom_tracker.yaml`, from ultralytics/tracker/cfg and modify any configurations (except the `tracker_type`) you need. Please refer to the ultralytics/tracker/cfg page.
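As an illustration, a hedged sketch combining predict-style arguments with a custom tracker config (`custom_tracker.yaml` here is an assumed local copy of `botsort.yaml` with edited values):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Predict-style arguments (conf, iou, show) apply to tracking as well;
# tracker= points at a modified copy of a stock tracker config file
results = model.track(
    source="video.mp4",             # placeholder source
    tracker="custom_tracker.yaml",  # assumed custom config
    conf=0.3,
    iou=0.5,
    show=True,
)
```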
-
I don't know if this is the right way, but you can get information like this:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# src is your video/stream source; stream=True yields results frame by frame
results = model.track(source=src, tracker="botsort.yaml", verbose=False, stream=True)

for r in results:
    for tracker in model.predictor.trackers:
        print(tracker.__dict__)  # tracker information
        print()
        for strack in tracker.tracked_stracks:
            print(strack.__dict__)  # each tracked object's information
```

Output: the tracker's internal state, followed by the state of each tracked object (box location, track ID, class, etc.).
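Alternatively, assuming a recent `ultralytics` release where `Results.boxes` carries an `id` tensor during tracking, you can read the per-frame data straight off each result instead of reaching into the tracker internals:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

for r in model.track(source=src, stream=True, verbose=False):
    if r.boxes is None or r.boxes.id is None:
        continue  # no confirmed tracks in this frame
    boxes = r.boxes.xyxy.cpu().numpy()   # (N, 4) box corners
    ids = r.boxes.id.cpu().numpy()       # (N,) track IDs
    classes = r.boxes.cls.cpu().numpy()  # (N,) class indices
    confs = r.boxes.conf.cpu().numpy()   # (N,) confidences
    for box, tid, cls_, conf in zip(boxes, ids, classes, confs):
        print(tid, cls_, conf, box)      # e.g. write a row to your database here
```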
-
Hello. I have been working on a project based on YOLOv5: I detected objects on the stream with YOLOv5 and tracked them with a separate tracking algorithm. I recently decided to upgrade to the newer YOLOv8, which already ships with BoT-SORT and ByteTrack, so with tracking built in I won't need to embed external algorithms.
However, I can't figure out how to do it. I need the detection and tracking information from each frame so I can write it to a database, but I can't find how, because the function call scheme inside the repository looks very confusing to me.
Could you tell me how to get detection and tracking data from each frame of the stream (like box location, ID, class, etc.)?