Hello,
I'm attempting to project the 3D points of the mesh generated by Meshroom's Texturing node onto each of the original images I captured. However, the projected points do not align correctly with the images. Here is the script I use:
```python
import os
import json
import numpy as np
import cv2
import matplotlib.pyplot as plt
import open3d as o3d

# Paths
sfm_file_path = '/media/harddrive/Meshroom-2023.3.0//MeshroomCache_second/StructureFromMotion/8720d431bab1584ae97a70208601c784e5013a97/cameras.sfm'  # Replace with your actual path
point_cloud_path = '/media/harddrive/Meshroom-2023.3.0/MeshroomCache_second/Texturing/355a5b90b3d75ed169f9400aa0c30852407d3d9e/texturedMesh.obj'  # Point cloud file path
images_dir = '/media/harddrive/new_dataset/new_photos'  # Directory containing your images
output_dir = '/media/harddrive/new_dataset/calib'  # Directory to save calibration files
os.makedirs(output_dir, exist_ok=True)

# Load cameras.sfm
with open(sfm_file_path, 'r') as f:
    data = json.load(f)

# Load point cloud
mesh = o3d.io.read_triangle_mesh(point_cloud_path)
points = np.asarray(mesh.vertices)  # Shape: (N, 3)

# Extract intrinsics
intrinsic_data = data['intrinsics'][0]  # Assuming one set of intrinsics for all images
focal_length = float(intrinsic_data['focalLength'])  # In mm
principal_point = [
    float(intrinsic_data['principalPoint'][0]),  # In mm
    float(intrinsic_data['principalPoint'][1])   # In mm
]
width, height = float(intrinsic_data['width']), float(intrinsic_data['height'])  # In pixels
sensor_width = float(intrinsic_data['sensorWidth'])    # In mm
sensor_height = float(intrinsic_data['sensorHeight'])  # In mm

# Compute fx and fy in pixels
fx = (focal_length / sensor_width) * width
fy = (focal_length / sensor_height) * height

# Convert principal point offsets from mm to pixels
cx = principal_point[0] + width / 2
cy = principal_point[1] + height / 2

# Construct intrinsic matrix (K)
K = np.array([
    [fx, 0, cx],
    [0, fy, cy],
    [0, 0, 1]
])

# Build a mapping from pose IDs to image filenames
pose_to_image_map = {}
for view in data.get('views', []):
    pose_id = view.get('poseId') or view.get('value', {}).get('poseId')
    if pose_id is None:
        continue
    path = view.get('path') or view.get('value', {}).get('path')
    if path is None:
        continue
    image_file_name = os.path.basename(path)
    pose_to_image_map[pose_id] = image_file_name

# Iterate over each pose
for pose in data['poses']:
    pose_id = pose['poseId']
    # Get corresponding image filename
    image_filename = pose_to_image_map.get(pose_id)
    image_path = os.path.join(images_dir, image_filename)
    image = cv2.imread(image_path)

    # Extract rotation matrix and camera center from pose
    rotation_values = [float(x) for x in pose['pose']['transform']['rotation']]
    R_c2w = np.array(rotation_values).reshape(3, 3)  # Rotation from camera to world
    C = np.array([float(x) for x in pose['pose']['transform']['center']]).reshape(3, 1)  # Camera center in world coordinates

    # Compute rotation from world to camera coordinates
    R_w2c = R_c2w.T
    # Compute translation vector t = -R_w2c * C
    t = -np.dot(R_w2c, C).reshape(1, 3)
    extrinsic_matrix = np.hstack((R_w2c, t.T))  # Shape: (3, 4)

    # Compute projection matrix
    P = K @ extrinsic_matrix  # Shape: (3, 4)

    # Project points onto image plane
    points_homogeneous = np.hstack((points, np.ones((points.shape[0], 1))))  # Shape: (N, 4)
    projected_points = (P @ points_homogeneous.T).T  # Shape: (N, 3)

    # Normalize to get pixel coordinates
    projected_points[:, 0] /= projected_points[:, 2]
    projected_points[:, 1] /= projected_points[:, 2]

    # Extract pixel coordinates
    u = projected_points[:, 0]
    v = projected_points[:, 1]

    # Visualize projections
    plt.figure(figsize=(10, 8))
    plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    plt.scatter(u, v, s=0.5, c='red', alpha=0.5)
    plt.title(f'Projection on Image {pose_id}')
    plt.axis('off')
    plt.show()
```
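To rule out a mistake in the projection math itself, I verified the same K / extrinsic construction on a synthetic camera (all numbers below are made up for illustration, not from my dataset): a world point straight ahead of the camera at a known depth should project exactly onto the principal point.

```python
import numpy as np

# Synthetic intrinsics (hypothetical values, for illustration only)
width, height = 640.0, 480.0
fx = fy = 800.0
cx, cy = width / 2, height / 2
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]])

# Synthetic pose: identity rotation, camera center 5 units behind the world origin
R_c2w = np.eye(3)
C = np.array([[0.0], [0.0], [-5.0]])  # camera center in world coordinates

# Same construction as in the script above: R_w2c = R_c2w.T, t = -R_w2c @ C
R_w2c = R_c2w.T
t = -R_w2c @ C
P = K @ np.hstack((R_w2c, t))  # 3x4 projection matrix

# World point at the origin, i.e. 5 units in front of the camera
X = np.array([0.0, 0.0, 0.0, 1.0])
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]
print(u, v)  # lands on the principal point (cx, cy)
```

This checks out, which makes me suspect the issue is in how I interpret the values stored in cameras.sfm rather than in the projection formula itself.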
Unfortunately, the results are incorrect. I’m unsure whether the issue lies with the extrinsic and intrinsic parameters or the point cloud from the mesh. I’ve tried various transformations on the point cloud, but the projected points remain inaccurate. This is one of the closest results I’ve managed to achieve:
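Among the transformations I tested is a camera-axis convention flip (the kind of Y/Z sign flip that distinguishes OpenGL-style from OpenCV-style camera frames). A minimal sketch of that flip, with stand-in synthetic values rather than my real poses:

```python
import numpy as np

# Flip the camera Y and Z axes (common when converting between
# OpenGL-style and OpenCV-style camera conventions); X is kept.
flip = np.diag([1.0, -1.0, -1.0])

# Stand-in world-to-camera rotation and translation (synthetic values)
R_w2c = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

# Applying the flip on the left re-expresses camera-frame coordinates
R_flipped = flip @ R_w2c
t_flipped = flip @ t

X_world = np.zeros((3, 1))                      # world origin as a test point
X_cam = R_w2c @ X_world + t                     # original camera coords
X_cam_flipped = R_flipped @ X_world + t_flipped # Y/Z-negated camera coords
print(X_cam.ravel(), X_cam_flipped.ravel())
```

Even with flips like this applied before projecting, the overlay stays misaligned.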
Meshroom Version: 2023.3.0
Thank you in advance for your time!
Janudis changed the title to "[question] Projecting Meshroom 3D Mesh Points onto Images" on Nov 7, 2024.