Hi
I was wondering if anyone could help - I'm a bit confused about the different methods for exporting depth maps. I need the depth value of the model rendered from the perspective of each camera in the dataset (i.e. z as a "real distance", rather than as a 0-255 greyscale image). I have tried directly exporting the depth maps as .exr files with this code (for example, for the first camera):
import Metashape

chunk = Metashape.app.document.chunk
camera = chunk.cameras[0]
depth = chunk.depth_maps[camera].image()  # depth map for this camera as a Metashape.Image
depth.save(camera.label + ".exr")
However, when I open this in Python the resulting array is not the same size as the camera image (it's (1500, 2000) rather than (3000, 4000)), so I'm not exactly sure what these depth maps actually show.
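For reference, this is roughly how I'm reading the exported file back in Python (OpenCV with OpenEXR support is just the reader I happen to use, nothing to do with Metashape, and the filename is a placeholder for whatever camera.label was):

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # some OpenCV builds need this set before import to read .exr
import cv2

# load the float depth values unchanged (no conversion to 8-bit)
depth = cv2.imread("camera_label.exr", cv2.IMREAD_UNCHANGED)
print(depth.shape)  # gives (1500, 2000), not (3000, 4000)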
I have also tried using the renderDepth function, like so:
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)
from the 3D model, and also
depth = chunk.dense_cloud.renderDepth(camera.transform, camera.sensor.calibration)
from the dense cloud (which would be preferable, as it means I don't have to generate the mesh).
When I then do depth.save(camera.label + '.tif'), though, the images seem to be blank, and I'm not able to open them in Python.
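To make it easier to reproduce, here's the full renderDepth attempt in one piece (the pixel check at the end is only my guess at how to inspect a Metashape.Image, based on the image[x, y] access I've seen in other scripts, so please correct me if that part is wrong):

import Metashape

chunk = Metashape.app.document.chunk
camera = chunk.cameras[0]

# depth rendered from the mesh...
depth_model = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)
# ...and from the dense cloud
depth_cloud = chunk.dense_cloud.renderDepth(camera.transform, camera.sensor.calibration)

depth_model.save(camera.label + '.tif')  # this is the file that looks blank to me

# assumed pixel access: print the value at the image centre to see if anything is there
print(depth_model[depth_model.width // 2, depth_model.height // 2])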
Just wondering if anyone could help clarify what I need to do to get the real-world depth values for every camera? Ideally straight from the existing depth maps or the dense cloud, to avoid the extra processing of building the mesh. Sorry, I think there are several posts already answering this, but I wasn't able to work out in the end which method I needed!
Thanks in advance!