
Messages - lfreguete

Ah! I see what the code is doing.
It just uses the transformation matrices between the two orientation systems.

I very much appreciate your thorough attention to this issue! It was a huge help indeed!

Do you mind if I share here the files I have gotten so far with the current code?
The YPR output I got seems reasonable. I am not familiar with the Metashape functions, so I am not sure about the math behind Metashape.utils.mat2ypr(Metashape.utils.opk2mat(c.reference.rotation)).
From the manual, I understood that opk2mat calculates world rotation values, but I am not sure whether this takes into account the defined CRS (as is done in the code) or whether it assumes a default CRS.
Would you know more details about it?
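In the meantime, here is my current understanding of the math such a conversion performs, as a plain-Python sketch. The axis orders and angle conventions below (OPK matrix = Rx(omega)·Ry(phi)·Rz(kappa), YPR matrix = Rz(yaw)·Ry(pitch)·Rx(roll), angles in radians) are assumptions on my part, not confirmed Metashape behaviour, and the sketch ignores any CRS handling that the real opk2mat/mat2ypr may do:

```python
import math

def rot_x(a):
    # rotation about the X axis by angle a (radians)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # rotation about the Y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    # rotation about the Z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def opk_to_matrix(omega, phi, kappa):
    """Rotation matrix for an OPK triple (radians), assumed Rx*Ry*Rz order."""
    return mat_mul(rot_x(omega), mat_mul(rot_y(phi), rot_z(kappa)))

def matrix_to_ypr(m):
    """Read yaw/pitch/roll (radians) back, assuming m = Rz(y)*Ry(p)*Rx(r)."""
    yaw = math.atan2(m[1][0], m[0][0])
    pitch = math.asin(-m[2][0])
    roll = math.atan2(m[2][1], m[2][2])
    return yaw, pitch, roll

# OPK -> YPR is then: build the matrix in one convention,
# read it back in the other (example angles are arbitrary)
yaw, pitch, roll = matrix_to_ypr(opk_to_matrix(math.radians(10.0),
                                               math.radians(-3.0),
                                               math.radians(140.0)))
```

If the conventions match, the conversion is just a matter of building the matrix in one convention and reading it back in the other.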

Here I am sharing the input, output, and code files.

I actually found another way of solving it. This way I was able to import the camera coordinates and convert the OPK to YPR.
Here is a snippet of the code:

chunk.euler_angles = Metashape.EulerAngles.EulerAnglesOPK
# chunk.importCameras(path = os.path.join(path, filename), format = Metashape.CamerasFormatOPK)
crs = Metashape.CoordinateSystem("EPSG::6440")
chunk.importReference(path=os.path.join(path, filename), format=Metashape.ReferenceFormatCSV, columns='nxyzabc', crs=crs, delimiter=' ', create_markers=False)

for c in chunk.cameras:
    # convert the imported OPK angles to yaw/pitch/roll
    c.reference.rotation = Metashape.utils.mat2ypr(Metashape.utils.opk2mat(c.reference.rotation))

I would like to add that the imported camera coordinates are in a specific CRS. Does importCameras take that into consideration?

Good evening, Mr. Pelletier.

Thank you for your suggestion.
My situation is the following: I was able to import the cameras' external parameters file and have the orientation angles as OPK. But for the next steps of my workflow, I need those angles as YPR.
I used the function yaw, pitch, roll = Metashape.utils.mat2ypr(Metashape.utils.opk2mat(Metashape.Vector((omega, phi, kappa))).t()), but I am not very sure the outputs are right.
For example, when importing the images, the first camera shows Yaw = 142, while the output from this conversion gives 124. I know that the first yaw value is from before the camera optimization, but even so it shouldn't differ that much. Should it?

Is this the correct way to go?

Thanks a lot.

Good afternoon, Mr. Pasumansky.
I was able to run this function, but when I check the current camera rotation values, they are the same as before.
If in my Reference panel I have values of yaw, pitch, and roll and I import the OPK file, then after running camera.reference.rotation the values are still in YPR.

How can I convert my OPK angles to YPR?

Hello adams,

You can use the following Python command from the Console pane to load the reference data using OPK angles:

chunk.loadReference(path = "d:/file.txt", format = "opk")

I am having the same problem here.
I had a file imported from Pix4D with all the coordinates of the calibrated cameras. However, the camera orientations are in OPK.
I tried to use this solution
yaw, pitch, roll = PhotoScan.utils.mat2ypr(PhotoScan.utils.opk2mat(PhotoScan.Vector((omega, phi, kappa))).t())

but the resulting angles are far from what they are expected to be. I am trying the solution from this topic now, but I am having issues with it (error message in the attachments).
Is omega a list of all the cameras' omegas, or just one value?
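For what it's worth, in these snippets omega, phi, and kappa are single values for one camera, not lists; with many cameras the conversion runs once per camera inside a loop. Here is a plain-Python sketch; the closed-form body assumes OPK = Rx·Ry·Rz and YPR = Rz·Ry·Rx rotation orders, which is an assumption of mine rather than a documented Metashape convention:

```python
import math

def opk_to_ypr(omega, phi, kappa):
    """Convert ONE camera's OPK angles (degrees) to yaw/pitch/roll (degrees)."""
    o, p, k = (math.radians(a) for a in (omega, phi, kappa))
    co, so = math.cos(o), math.sin(o)
    cp, sp = math.cos(p), math.sin(p)
    ck, sk = math.cos(k), math.sin(k)
    # elements of m = Rx(o) * Ry(p) * Rz(k) needed for the YPR read-out
    m00 = cp * ck
    m10 = co * sk + so * sp * ck
    m20 = so * sk - co * sp * ck
    m21 = so * ck + co * sp * sk
    m22 = co * cp
    yaw = math.atan2(m10, m00)
    pitch = math.asin(-m20)
    roll = math.atan2(m21, m22)
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))

# one (omega, phi, kappa) triple per camera (made-up example values),
# converted one triple at a time
opk_per_camera = [(12.0, -3.5, 140.2), (11.8, -3.1, 141.0)]
ypr_per_camera = [opk_to_ypr(*opk) for opk in opk_per_camera]
```

So a script would iterate over the cameras and pass each camera's own triple, never a list of all omegas at once.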

Hello dellagiu,

Using the following script you should be able to generate camera transformation matrix from OPK data:

Code:
import math, PhotoScan

#omega, phi, kappa - in radians
#X, Y, Z - coordinate information about camera position in units of the corresponding coordinate system

T = chunk.transform.matrix
v_t = T * PhotoScan.Vector( [0, 0, 0, 1] )
v_t.size = 3
m = chunk.crs.localframe(v_t)
m = m * T
s = math.sqrt(m[0, 0] **2 + m[0,1] **2 + m[0,2] **2) #scale factor

sina = math.sin(0 - omega)
cosa = math.cos(0 - omega)
Rx = PhotoScan.Matrix([[1, 0, 0], [0, cosa, -sina], [0, sina, cosa]])
sina = math.sin(0 - phi)
cosa = math.cos(0 - phi)
Ry = PhotoScan.Matrix([[cosa, 0, sina], [0, 1, 0], [-sina, 0, cosa]])
sina = math.sin(0 - kappa)
cosa = math.cos(0 - kappa)
Rz = PhotoScan.Matrix([[cosa, -sina, 0], [sina, cosa, 0], [0, 0, 1]])

t = PhotoScan.Vector([X, Y, Z])
t = chunk.crs.unproject(t) #projected to geocentric coordinates

m = chunk.crs.localframe(t)
m = PhotoScan.Matrix([ [m[0,0], m[0,1], m[0,2]], [m[1,0], m[1,1], m[1,2]], [m[2,0], m[2,1], m[2,2]] ])

R = m.inv() * (Rz * Ry * Rx).t()  * PhotoScan.Matrix().diag([1, -1, -1])

Tr = PhotoScan.Matrix([ [R[0,0], R[0,1], R[0,2], t.x], [R[1,0], R[1,1], R[1,2], t.y], [R[2,0], R[2,1], R[2,2], t.z], [0, 0, 0, 1]])

camera.transform = chunk.transform.matrix.inv() * Tr * (1. / s)
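A side note on the script above: each axis matrix is built from sin(0 - angle) and cos(0 - angle), i.e. a rotation by the negated angle, which is the same as the transpose (inverse) of the positive-angle rotation. A quick plain-Python check of that identity for the X-axis matrix (the example angle is arbitrary):

```python
import math

def rx(a):
    # rotation about X, same element layout as Rx in the script above
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

omega = math.radians(37.0)           # arbitrary example angle
neg = rx(-omega)                     # what sin(0 - omega)/cos(0 - omega) builds
tr = transpose(rx(omega))            # transpose of the positive-angle rotation
same = all(abs(neg[i][j] - tr[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

The same identity holds for the Ry and Rz matrices, so the `0 - angle` pattern is just a compact way of writing the inverse rotations.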
