Forum

Author Topic: How to convert Euler angles to Yaw, Pitch, & Roll (what are PS's conventions?)  (Read 17442 times)

dellagiu

  • Newbie
  • *
  • Posts: 10
    • View Profile
I have Euler angles (omega, phi, and kappa) for camera position, but I see that Photoscan requires Yaw, Pitch, & Roll. See the attached image for reference of how these Euler angles are defined.

I have exported these values from SOCET SET as:
omega    3.137187395607711
phi     -11.083974477310051
kappa    17.113750627601377

After spending some time online, it is still not clear to me how to make this conversion. Can someone point me in the right direction? What conventions does PhotoScan Pro use to define yaw, pitch, and roll?
« Last Edit: May 28, 2015, 08:02:11 PM by dellagiu »

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
Hello dellagiu,

Using the following script you should be able to generate the camera transformation matrix from OPK data:


Code: [Select]
import math, PhotoScan

#omega, phi, kappa - in radians
#X, Y, Z - coordinate information about camera position in units of the corresponding coordinate system


T = chunk.transform.matrix
v_t = T * PhotoScan.Vector( [0, 0, 0, 1] )
v_t.size = 3
m = chunk.crs.localframe(v_t)
m = m * T
s = math.sqrt(m[0, 0] **2 + m[0,1] **2 + m[0,2] **2) #scale factor

sina = math.sin(0 - omega)
cosa = math.cos(0 - omega)
Rx = PhotoScan.Matrix([[1, 0, 0], [0, cosa, -sina], [0, sina, cosa]])
sina = math.sin(0 - phi)
cosa = math.cos(0 - phi)
Ry = PhotoScan.Matrix([[cosa, 0, sina], [0, 1, 0], [-sina, 0, cosa]])
sina = math.sin(0 - kappa)
cosa = math.cos(0 - kappa)
Rz = PhotoScan.Matrix([[cosa, -sina, 0], [sina, cosa, 0], [0, 0, 1]])
 

t = PhotoScan.Vector([X, Y, Z])
t = chunk.crs.unproject(t)

m = chunk.crs.localframe(t)
m = PhotoScan.Matrix([ [m[0,0], m[0,1], m[0,2]], [m[1,0], m[1,1], m[1,2]], [m[2,0], m[2,1], m[2,2]] ])


R = m.inv() * (Rz * Ry * Rx).t()  * PhotoScan.Matrix().diag([1, -1, -1])

Tr = PhotoScan.Matrix([ [R[0,0], R[0,1], R[0,2], t.x], [R[1,0], R[1,1], R[1,2], t.y], [R[2,0], R[2,1], R[2,2], t.z], [0, 0, 0, 1]])

camera.transform = chunk.transform.matrix.inv() * Tr * (1. / s)
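
One pitfall worth flagging: the script assumes omega, phi, kappa are already in radians, while exports like the SOCET SET values in the original post are in degrees. Below is a standalone sketch (plain Python, no PhotoScan required) of the same negated Rz * Ry * Rx composition, using the degree values from the original post purely for illustration:

```python
import math

def opk_rotation(omega, phi, kappa):
    # Same Rx, Ry, Rz matrices as in the script above (angles in radians),
    # composed as Rz * Ry * Rx with each angle negated, as the script does
    # via sin(0 - omega), etc.
    def rx(a):
        return [[1, 0, 0],
                [0, math.cos(a), -math.sin(a)],
                [0, math.sin(a), math.cos(a)]]
    def ry(a):
        return [[math.cos(a), 0, math.sin(a)],
                [0, 1, 0],
                [-math.sin(a), 0, math.cos(a)]]
    def rz(a):
        return [[math.cos(a), -math.sin(a), 0],
                [math.sin(a), math.cos(a), 0],
                [0, 0, 1]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    return matmul(rz(-kappa), matmul(ry(-phi), rx(-omega)))

# The SOCET SET values from the original post are in degrees -- convert first.
omega, phi, kappa = (math.radians(v) for v in (3.137187, -11.083974, 17.113751))
R = opk_rotation(omega, phi, kappa)
```

A rotation matrix must be orthonormal with determinant 1, so checking row norms and the determinant is a cheap sanity test on the composition.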
Best regards,
Alexey Pasumansky,
Agisoft LLC

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
And then, to get the yaw, pitch, roll angles from the camera transform, you can use the following:

Code: [Select]
import math, PhotoScan

T = chunk.transform.matrix
v_t = T.mulp(camera.center)
m = chunk.crs.localframe(v_t)

R = m * T * camera.transform * PhotoScan.Matrix().diag([1, -1, -1, 1])
row = list()
for j in range(0, 3):
    row.append(R.row(j))
    row[j].size = 3
    row[j].normalize()
R = PhotoScan.Matrix([row[0], row[1], row[2]])

if R[2, 1] > 0.999:
    yaw = math.atan2(R[1, 0], R[0, 0])
    pitch = math.pi / 2
    roll = 0
elif R[2, 1] < -0.999:
    yaw = math.atan2(R[1, 0], R[0, 0])
    pitch = -math.pi / 2
    roll = 0
else:
    yaw = math.atan2(-R[0, 1], R[1, 1])
    roll = math.atan2(-R[2, 0], R[2, 2])
    pitch = math.asin(R[2, 1])

if yaw > 0:
    yaw -= 2 * math.pi

yaw *= -180 / math.pi
pitch *= 180 / math.pi
roll *= 180 / math.pi
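
The angle extraction above can be exercised outside PhotoScan. The sketch below builds a rotation from known angles using a parameterization chosen (as an assumption, from the indexing in the script) so that the script's formulas invert it exactly, then recovers the angles, including the gimbal-lock guards:

```python
import math

def ypr_matrix(yaw, pitch, roll):
    # Rz(yaw) * Rx(pitch) * Ry(roll); the entries are written out so that
    # mat_to_ypr below recovers the input angles exactly.
    ca, sa = math.cos(yaw), math.sin(yaw)
    cb, sb = math.cos(pitch), math.sin(pitch)
    cc, sc = math.cos(roll), math.sin(roll)
    return [
        [ca * cc - sa * sb * sc, -sa * cb, ca * sc + sa * sb * cc],
        [sa * cc + ca * sb * sc,  ca * cb, sa * sc - ca * sb * cc],
        [-cb * sc,                sb,      cb * cc],
    ]

def mat_to_ypr(R):
    # Same branch structure as the script above, including the
    # gimbal-lock guards near pitch = +/- 90 degrees.
    if R[2][1] > 0.999:
        return math.atan2(R[1][0], R[0][0]), math.pi / 2, 0.0
    if R[2][1] < -0.999:
        return math.atan2(R[1][0], R[0][0]), -math.pi / 2, 0.0
    yaw = math.atan2(-R[0][1], R[1][1])
    pitch = math.asin(R[2][1])
    roll = math.atan2(-R[2][0], R[2][2])
    return yaw, pitch, roll
```

A round trip such as `mat_to_ypr(ypr_matrix(0.5, 0.2, -0.1))` should return the original three angles (in radians) up to floating-point noise.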
Best regards,
Alexey Pasumansky,
Agisoft LLC

Diego

  • Full Member
  • ***
  • Posts: 167
    • View Profile
Hi Alexey,

But are these angles actually used in the adjustment? If so, why not work directly with omega, phi and kappa, which is the standard in photogrammetry? This would really improve the accuracy of the triangulation for those who work with high-precision GNSS/IMU data.

Testing the scripts generates an error. What requirements must be met to use them?

Best regards,

Diego

dellagiu

  • Newbie
  • *
  • Posts: 10
    • View Profile
Thank you Alexey -- this code is helpful.

I am also interested in Diego's question.

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
Hello Diego,

If the angles are loaded into the Reference pane, they are not used for the alignment process (only for pair preselection, if Ground Altitude is defined in the Reference pane settings).
But the provided code can be used to apply an existing exterior orientation to position the cameras; if the camera calibration is additionally loaded, it is possible to build the point cloud via the Tools menu or Python. This means that the camera positions wouldn't be adjusted by PhotoScan, so this approach is only applicable if the positions and orientation angles are known precisely.
So the code effectively works as the part of the Import Cameras functionality related to the exterior orientation only.


The camera and chunk variables should be defined before using the code. Additionally, for the first script the X, Y, Z and omega, phi, kappa values should be defined.
Best regards,
Alexey Pasumansky,
Agisoft LLC

dellagiu

  • Newbie
  • *
  • Posts: 10
    • View Profile
Hi Alexey,

I have tried to use your script, and have also read through the most recent documentation for the PhotoScan Python API. I get the following error when running this code:

Traceback (most recent call last):
  File "/Users/dellagiu/Desktop/photoscan_ypr.py", line 7, in <module>
    T = chunk.transform.matrix
NameError: name 'chunk' is not defined
>>>

I have defined the "chunk" using the name of my photo chunk, but still continue to get this error. Please advise

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
Hello dellagiu,

The chunk can be selected by its zero-based index in the document, like
chunk = doc.chunks[3], where doc = PhotoScan.app.document

For the active chunk you can use the chunk = doc.chunk assignment.
Best regards,
Alexey Pasumansky,
Agisoft LLC

Roy

  • Newbie
  • *
  • Posts: 19
    • View Profile
Hi Alexey,

I'm wondering what the difference is between the script you added to this topic (conversion from OPK to YPR) and the script you once posted in a different topic, where you wrote that the following code transforms OPK to YPR:

yaw, pitch, roll = PhotoScan.utils.mat2ypr(PhotoScan.utils.opk2mat(PhotoScan.Vector((omega, phi, kappa))).t())

What is the difference?

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
Hello Roy,

The opk2mat and mat2ypr functions were added to the Python API only recently.

Also, I'd like to note that in case you need to take the meridian convergence into account, the script may be a little bit more complicated.
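
As a rough illustration of what such a correction involves (not the PhotoScan implementation): for transverse Mercator projections, the meridian convergence is approximately (lambda - lambda0) * sin(phi) to first order. The coordinates and zone below are hypothetical; a proper geodesy library should be used in production.

```python
import math

def meridian_convergence_deg(lon_deg, lat_deg, central_meridian_deg):
    # First-order approximation for transverse Mercator projections:
    # gamma ~= (lambda - lambda0) * sin(phi). Adequate only near the
    # central meridian; higher-order terms are neglected.
    return (lon_deg - central_meridian_deg) * math.sin(math.radians(lat_deg))

# Hypothetical point in UTM zone 33N (central meridian 15 deg E).
gamma = meridian_convergence_deg(16.5, 54.4, 15.0)
```

The yaw angle read from the rotation matrix would then be corrected by gamma before comparing against grid-north-referenced headings.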
Best regards,
Alexey Pasumansky,
Agisoft LLC

uop360

  • Newbie
  • *
  • Posts: 18
    • View Profile
Could you please provide a full script?
I managed to add at the beginning:
import math, PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

but there is still a problem with line 16:
sina = math.sin(0 - omega)
2016-04-11 12:08:39 NameError: name 'omega' is not defined

 



Vitalijus

  • Newbie
  • *
  • Posts: 1
    • View Profile
Hey Alexey,

I just started testing Agisoft. We are a surveying-engineering company. Could you explain in a simple manner how to transform OPK degrees to YPR angles? I have no programming skills; how do you run that script?

Thank you!

Best,
Vitalijus


Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
Hello Vitalijus,

You can use the following Python function from the Console pane to load the reference data from an OPK file:
Code: [Select]
PhotoScan.app.document.chunk.importCameras(path = "d:/file.txt", format = "opk")

At the moment, the importCameras() Python API function assumes that each line contains the following information in this order:
Quote
camera_label x-coord y-coord z-coord omega phi kappa

So using this function (with the proper filepath, of course) will automatically convert OPK angles and load them to the Reference pane as YPR.
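
Assuming that column order, a minimal input file might look like the following (the labels and values are made up for illustration; the first row reuses the OPK angles from the original post):

```text
IMG_0001.jpg  348531.20  5640712.80  512.40  3.137187  -11.083974  17.113751
IMG_0002.jpg  348612.55  5640701.10  511.90  2.984511  -10.902317  17.250842
```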
Best regards,
Alexey Pasumansky,
Agisoft LLC

lena

  • Newbie
  • *
  • Posts: 15
    • View Profile
Hello!
I just ran this script in the Console:
PhotoScan.app.document.chunk.importCameras(path = "d:/e:/Arbeit/Sellin/5_Berechnung-3D/RNY.txt", format = "opk")
The path is right for sure, but I still can't run the script. I have tried it via the Run Script command and directly from the Console, and neither works.
My PhotoScan version is 1.2.6.



Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13289
    • View Profile
Hello lena,

I suggest double-checking the path; "d:/e:/" looks strange to me.
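
Since importCameras simply fails when the file cannot be opened, a quick pre-check of the path from the Console can save some guessing. A minimal sketch (plain Python, independent of PhotoScan):

```python
import os

def check_import_path(path):
    # "d:/e:/..." mixes two drive letters and can never resolve on Windows,
    # so verify the file actually exists before calling importCameras.
    if not os.path.isfile(path):
        return "not found: " + path
    return "ok"
```

For example, `check_import_path("d:/file.txt")` returns "ok" only if that file really exists.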
Best regards,
Alexey Pasumansky,
Agisoft LLC