Hey Alexey,
I have looked at the sources you linked. However, those two descriptions only explain how this can be accomplished through the GUI, and I am trying to do all of this in Python (maybe I should have posted this question in the other forum to begin with).
The manual says I should set a slave in the calibration tab. The closest thing I have found in the Python API is the option to set sensor.master, but I have not been able to find an "Adjust location" method in Python. Is setting sensor.master the correct way to go about this? After setting this master band, I tried to perform an alignment:
chunk.matchPhotos(accuracy=Metashape.HighAccuracy,
                  generic_preselection=True,
                  reference_preselection=True,
                  keypoint_limit=40000,
                  tiepoint_limit=10000)
chunk.alignCameras()
But it didn't seem to give me the roto-translation I was looking for. When I check:

for sensor in chunk.sensors:
    print(sensor.rotation)

I just get the identity matrix for all of the sensors.
Essentially, what I am missing is a roto-translation matrix for each of my three cameras/sensors, where the master camera/sensor would have the identity matrix and the slave cameras/sensors would each have a roto-translation matrix that describes the offset from the master.
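To make concrete what I mean, here is a small numpy sketch of the math I'm after (this is not the Metashape API, just an illustration): given 4x4 homogeneous world transforms for a master and a slave camera of the same capture, the slave's offset in the master's frame should be inv(T_master) @ T_slave.

```python
import numpy as np

def relative_transform(T_master, T_slave):
    """Roto-translation of the slave expressed in the master's frame,
    so that T_slave = T_master @ T_rel."""
    return np.linalg.inv(T_master) @ T_slave

# Example: master at the origin, slave shifted 0.1 units along x
# and rotated 90 degrees about z.
T_master = np.eye(4)
T_slave = np.array([[0.0, -1.0, 0.0, 0.1],
                    [1.0,  0.0, 0.0, 0.0],
                    [0.0,  0.0, 1.0, 0.0],
                    [0.0,  0.0, 0.0, 1.0]])

T_rel = relative_transform(T_master, T_slave)
R = T_rel[:3, :3]  # 3x3 rotation block of the relative transform
t = T_rel[:3, 3]   # translation offset of the slave from the master
```

In my case the master's own relative transform would be the identity, and each slave's would be the fixed rig offset I'm trying to extract.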
Perhaps you could outline the steps I would need to take (or point me towards a useful resource that would guide me) to get the relative orientations (rotation and translation)? I have already found the per-image orientation in chunk.cameras[0].transform.rotation.