Hi,
When I project the dataset's 3D annotations into the corresponding 2D camera images using the parameters provided in the YAML files, the projected bounding boxes do not align correctly with the objects in the image.
Could you please provide guidance, documentation, or ideally a code snippet showing the intended coordinate-transformation and projection workflow for the camera data?
Specifically, how should the camera parameters (e.g., the camera `cords` and `intrinsic` entries) be combined with each object's location, extent, and angle to map 3D world-frame boxes to 2D image coordinates?
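For reference, below is the workflow I have pieced together so far. The function names are my own, and the conventions (angles in degrees, rotation order Rz·Ry·Rx, and an x-forward/y-right/z-up sensor frame that needs a fixed permutation into the optical frame) are assumptions on my part, so please point out wherever they diverge from the intended pipeline:

```python
import numpy as np

def pose_to_transform(x, y, z, roll, yaw, pitch):
    """4x4 sensor/object -> world transform.
    Assumption: angles are in degrees and compose as R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    r, p, yw = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(yw), -np.sin(yw), 0],
                   [np.sin(yw),  np.cos(yw), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def box_corners_world(location, extent, yaw_deg):
    """8 corners of a 3D box from its center, half-extents, and yaw angle.
    Assumption: `extent` holds half-sizes along the box's local x/y/z axes."""
    ex, ey, ez = extent
    corners = np.array([[ ex,  ey,  ez], [ ex,  ey, -ez],
                        [ ex, -ey,  ez], [ ex, -ey, -ez],
                        [-ex,  ey,  ez], [-ex,  ey, -ez],
                        [-ex, -ey,  ez], [-ex, -ey, -ez]])
    T = pose_to_transform(*location, 0.0, yaw_deg, 0.0)
    return (T[:3, :3] @ corners.T).T + T[:3, 3]

# Assumption: the sensor frame is x-forward / y-right / z-up, while the
# pinhole model expects z-forward / x-right / y-down, so a fixed axis
# permutation is needed before applying the intrinsics.
CAM_AXES = np.array([[0, 1, 0],    # image x (right)   =  sensor y
                     [0, 0, -1],   # image y (down)    = -sensor z
                     [1, 0, 0]])   # image z (forward) =  sensor x

def project_to_image(points_world, cam_pose, K):
    """World-frame points (N, 3) -> pixel coordinates (N, 2),
    given the camera pose from the YAML and the 3x3 intrinsic matrix K."""
    T_cam_to_world = pose_to_transform(*cam_pose)
    T_world_to_cam = np.linalg.inv(T_cam_to_world)
    pts = np.c_[points_world, np.ones(len(points_world))]
    pts_cam = (T_world_to_cam @ pts.T)[:3]   # world -> camera frame
    pts_cam = CAM_AXES @ pts_cam             # camera -> optical frame
    uvw = K @ pts_cam                        # apply intrinsics
    return (uvw[:2] / uvw[2]).T              # perspective divide
```

With this, I compute `project_to_image(box_corners_world(loc, ext, yaw), cam_pose, K)` per object and draw the resulting 2D hull, but the boxes land offset from the vehicles, which makes me suspect the rotation order or the axis permutation above is wrong.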
Additionally, any details regarding the camera-based collaboration baselines would be very helpful.
Thanks,