Dear Alexey,
I am trying to use quick_layout.py from
https://github.com/agisoft-llc/metashape-scripts/tree/master/src to use imported EO as adjusted values in Metashape and then proceed with Build Point Cloud...
First, I commented out the call to
estimate_rotation_matrices so as to force the script to calculate camera orientations from the reference.rotation values for each camera:
def align_cameras(chunk, min_latitude, min_longitude):
    if chunk.transform.scale is None:
        chunk.transform.scale = 1
        chunk.transform.rotation = ps.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
        chunk.transform.translation = ps.Vector([0, 0, 0])
    i, j, k = get_chunk_vectors(min_latitude, min_longitude)  # i || North
    # estimate_rotation_matrices(chunk, i, j)
    for c in chunk.cameras:
        if c.transform is not None:
            continue
        location = c.reference.location
        if location is None:
            continue
        chunk_coordinates = wgs_to_chunk(chunk, location)
        fi = c.reference.rotation.x + 90
        fi = math.radians(fi)
        roll = math.radians(c.reference.rotation.z)
        pitch = math.radians(c.reference.rotation.y)
So I create a project, import images from a folder, and then import the exterior orientation (EO) calculated by an external program as an Id, X, Y, Z, omega, phi, kappa file; see the first few lines:
imageName longitude latitude altitude Omega Phi Kappa
IMG_160729_071349_0000_RGB.JPG 6.54973871 46.52091500 501.22916947 5.07035207 -5.87397304 -15.07640933
IMG_160729_071351_0001_RGB.JPG 6.54985939 46.52115285 503.46431583 11.93882557 -1.83617858 -17.08473448
IMG_160729_071353_0002_RGB.JPG 6.54998798 46.52140961 503.51254484 11.13636527 -2.33842385 -9.54246526
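For illustration only, a file in that layout can be parsed with plain Python (the field names below are my own choice, not Metashape's; in practice Metashape's reference import dialog handles this):

```python
# Parse one whitespace-delimited EO line:
# imageName longitude latitude altitude Omega Phi Kappa
# Illustrative sketch only -- the dict keys are my own naming.
def parse_eo_line(line):
    name, *values = line.split()
    lon, lat, alt, omega, phi, kappa = map(float, values)
    return {"image": name, "lon": lon, "lat": lat, "alt": alt,
            "omega": omega, "phi": phi, "kappa": kappa}

sample = ("IMG_160729_071349_0000_RGB.JPG 6.54973871 46.52091500 "
          "501.22916947 5.07035207 -5.87397304 -15.07640933")
record = parse_eo_line(sample)
print(record["image"], record["kappa"])
```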
Since I have omega, phi, kappa as orientation angles, I convert them to yaw, pitch, roll, as the script expects the orientation as YPR... see the attached screen copy.
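A sketch of that conversion in NumPy. The conventions here are my assumptions, not Metashape's documented ones: I compose OPK as Rx(omega)·Ry(phi)·Rz(kappa), compose YPR as Rz(-yaw)·Rx(pitch)·Ry(roll) (the matrix order used in the script further below), and treat both as parameterizing the same rotation; the real conversion may additionally involve an axis flip between the two camera frames, so verify against your own data:

```python
import math
import numpy as np

def rx(a):  # rotation about X by angle a (radians)
    c, s = math.cos(a), math.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):  # rotation about Y
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):  # rotation about Z
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def opk_to_ypr(omega, phi, kappa):
    """Angles in degrees. ASSUMES R = Rx(w) @ Ry(p) @ Rz(k) equals
    Rz(-yaw) @ Rx(pitch) @ Ry(roll) -- a sketch, not Metashape's spec."""
    w, p, k = map(math.radians, (omega, phi, kappa))
    R = rx(w) @ ry(p) @ rz(k)
    # Solve Rz(-yaw) @ Rx(pitch) @ Ry(roll) = R for the three angles
    yaw = math.atan2(R[0, 1], R[1, 1])
    pitch = math.asin(R[2, 1])
    roll = math.atan2(-R[2, 0], R[2, 2])
    return tuple(map(math.degrees, (yaw, pitch, roll)))

# First camera of the EO file above
print(opk_to_ypr(5.07035207, -5.87397304, -15.07640933))
```

The angle extraction inverts the assumed YPR composition exactly (for pitch within ±90°), so rebuilding the matrix from the returned yaw/pitch/roll reproduces the OPK matrix.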
Now I run the script and I expect the adjusted EO parameters to be equal to the imported reference EO... but the yaw, pitch and roll are a little different... see the 2nd screen capture.
Why is this so? There is something I am not getting...
Never mind... problem solved! I just found a script that does just this:
import math
import Metashape as PhotoScan

chunk = PhotoScan.app.document.chunk
T = chunk.transform.matrix

for camera in chunk.cameras:
    # reference.rotation is (yaw, pitch, roll) in degrees; convert to radians
    yaw, pitch, roll = math.pi * camera.reference.rotation / 180.

    sinx = math.sin(pitch)
    cosx = math.cos(pitch)
    Rx = PhotoScan.Matrix([[1, 0, 0], [0, cosx, -sinx], [0, sinx, cosx]])

    siny = math.sin(roll)
    cosy = math.cos(roll)
    Ry = PhotoScan.Matrix([[cosy, 0, siny], [0, 1, 0], [-siny, 0, cosy]])

    sinz = math.sin(-yaw)
    cosz = math.cos(-yaw)
    Rz = PhotoScan.Matrix([[cosz, -sinz, 0], [sinz, cosz, 0], [0, 0, 1]])

    R = Rz * Rx * Ry

    # Rotate into the local tangent frame at the camera location
    coord = camera.reference.location
    coord = chunk.crs.unproject(coord)
    m = chunk.crs.localframe(coord)
    R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]],
                          [m[1, 0], m[1, 1], m[1, 2]],
                          [m[2, 0], m[2, 1], m[2, 2]]]).t() * R * PhotoScan.Matrix().Diag((1, -1, -1))

    # Assemble the 4x4 camera transform [R | t; 0 0 0 1]
    row = list()
    for j in range(0, 3):
        row.append(PhotoScan.Vector(R.row(j)))
        row[j].size = 4
        row[j].w = coord[j]
    row.append(PhotoScan.Vector([0, 0, 0, 1]))
    M = PhotoScan.Matrix([row[0], row[1], row[2], row[3]])

    camera.transform = T.inv() * M

chunk.updateTransform()
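For what it's worth, the row-assembly loop at the end of that script is just building a standard 4x4 homogeneous matrix from the 3x3 rotation and the camera location; an illustrative NumPy equivalent (names are mine) is:

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 transform [R | t; 0 0 0 1] from a 3x3 rotation R
    and a translation t, as the row loop in the script above does."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Sanity check: the camera centre (origin of the camera frame) maps to t
R = np.eye(3)
t = np.array([10.0, 20.0, 30.0])
M = homogeneous(R, t)
print(M @ np.array([0.0, 0.0, 0.0, 1.0]))  # camera centre lands at t
```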
Sorry
