It would be nice if Metashape could accept per-frame depth information from cameras that can provide it. Devices such as the Intel RealSense, Microsoft Kinect, and iPhone/iPad Pro can log a depth map alongside each RGB frame. If Metashape accepted this data as input, it could provide two major benefits:
1. Recover the correct metric scale of the reconstruction, which photogrammetry alone cannot determine
2. Improve reconstruction accuracy in general, by constraining depth estimation with measured values
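To illustrate benefit 1: even without deep integration, metric sensor depths could be compared against depths rendered from the (arbitrary-scale) reconstruction to solve for a single global scale factor. The sketch below is a hypothetical illustration using NumPy, not anything Metashape currently exposes; the function name and robust median-ratio approach are my own assumptions.

```python
import numpy as np

def estimate_scale(recon_depth, sensor_depth, min_depth=0.1):
    """Estimate the global factor mapping an arbitrary-scale
    reconstruction onto metric sensor depths (hypothetical sketch).

    recon_depth  -- depths rendered from the reconstruction (arbitrary units)
    sensor_depth -- per-frame metric depths from the camera (meters)

    Invalid pixels (zeros, NaNs, too-near values) are masked out, and a
    median of per-pixel ratios is used so outliers do not dominate.
    """
    recon = np.asarray(recon_depth, dtype=float).ravel()
    sensor = np.asarray(sensor_depth, dtype=float).ravel()
    valid = (
        (recon > min_depth) & (sensor > min_depth)
        & np.isfinite(recon) & np.isfinite(sensor)
    )
    return float(np.median(sensor[valid] / recon[valid]))

# Example: a reconstruction that is metrically 2.5x too small
rng = np.random.default_rng(0)
metric = rng.uniform(0.5, 5.0, size=1000)            # "sensor" depths, meters
recon = metric / 2.5 + rng.normal(0, 0.001, 1000)    # arbitrary-scale depths
print(round(estimate_scale(recon, metric), 2))
```

Applying the recovered factor to the model would pin down the metric scale that a monocular photo set cannot provide on its own.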