Hi all,
I'd like to get 3D object coordinates (Metashape coordinates are fine) from a set of image coordinates (2D). As far as I can see, there are three options:
1) Use the 'pickPoint' function of the 'Model' (or 'DenseCloud') class. With 'cam' being one of the cameras in the chunk and the image x, y coordinates as 'img_x', 'img_y':
pt_2d = Metashape.Vector([img_x, img_y])
# intersect the viewing ray through the image point with the mesh surface
pt_3d = chunk.model.pickPoint(cam.center, cam.unproject(pt_2d))
2) We can use the depth map of the camera:
depth = chunk.depth_maps[cam].image()  # depth map of this camera
depth_val = depth[img_y, img_x][0]     # depth value at the image point
pt_2d = Metashape.Vector([img_x, img_y])
img_3d = cam.unproject(pt_2d)          # a point on the viewing ray
ray = img_3d - cam.center
ray /= ray.norm()                      # unit ray from the camera center
pt_3d = cam.center + depth_val * ray
3) Or we can render a depth image from the model and use it as in 2).
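For 3), a rough sketch of what I have in mind (I am assuming 'Model.renderDepth(transform, calibration)' is the right call here; please correct me if the signature is different):
# render a depth image from the mesh for this camera
rendered_depth = chunk.model.renderDepth(cam.transform, cam.sensor.calibration)
# ...then read the depth value and build the ray exactly as in 2),
# just with 'rendered_depth' instead of the depth map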
I prefer to use either method 2) or 3), as they are considerably faster for many points (the depth map has to be extracted only once per camera).
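Roughly, the per-camera loop for method 2) would look like this ('points_2d' is just a hypothetical list of (x, y) pixel coordinates):
depth = chunk.depth_maps[cam].image()  # extracted once per camera
points_3d = []
for x, y in points_2d:
    ray = cam.unproject(Metashape.Vector([x, y])) - cam.center
    ray /= ray.norm()
    points_3d.append(cam.center + depth[y, x][0] * ray)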
The results from 1) and 3) should be identical, but they are not. Multiple points picked on a flat surface are also flat in 3D with method 1); with method 2) or 3), however, they are not.
Questions:
#1: Are the rendered depth images / maps corrected for image distortions or do I have to undistort them?
#2: What is missing in method 2)/3) to get the correct results?
Thank you.