Author Topic: Animation camera track from projected coordinates in Python API

geojules

Animation camera track from projected coordinates in Python API
« on: February 14, 2023, 08:13:58 PM »
Hello,

I am trying to generate an animation camera track that follows the path of a handheld laser scanner that we used to scan a cave site. The laser scanner generates a trajectory file that can be saved as a .txt file with projected x,y,z coordinates (NAD83 UTM 13N, EPSG::6342)  and rotation quaternions. See sample below.

Code: [Select]
"X","Y","Z","OffsetTime","q0","q1","q2","q4"
555945.841,35552877.520,1078.980,1674768640.000,0.493,0.168,-0.099,0.848
555945.590,35552877.232,1079.757,1674768640.000,0.461,0.058,-0.149,0.873
555945.610,35552876.400,1080.018,1674768640.000,0.536,0.025,-0.083,0.840
555945.459,35552875.548,1079.931,1674768640.000,0.574,0.153,-0.024,0.804
555945.283,35552874.697,1079.918,1674768640.000,0.420,0.060,0.013,0.906
555945.252,35552873.831,1079.888,1674768640.000,0.350,0.089,0.024,0.932
555945.503,35552872.997,1079.866,1674768640.000,0.113,0.033,0.083,0.990
555946.015,35552872.297,1079.894,1674768640.000,0.105,0.014,0.134,0.985
555946.452,35552871.546,1079.845,1674768640.000,0.073,0.048,0.039,0.995
555947.101,35552870.964,1079.816,1674768640.000,0.003,0.050,0.055,0.997
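
For reference, records in this format can be parsed with Python's `csv` module before any Metashape calls are involved. This is a minimal sketch using an inline copy of the header and first data row (the `StringIO` stand-in and variable names are illustrative; in practice you would read the .txt file itself):

```python
import csv
import io

# Inline copy of the trajectory header and first record; replace the
# StringIO with open(path_to_trajectory_txt) for a real file.
sample = '"X","Y","Z","OffsetTime","q0","q1","q2","q4"\n' \
         '555945.841,35552877.520,1078.980,1674768640.000,0.493,0.168,-0.099,0.848\n'

records = []
for rec in csv.DictReader(io.StringIO(sample)):
    # csv strips the quotes from the header, so the keys are X, Y, Z, ...
    records.append({key: float(value) for key, value in rec.items()})

position = (records[0]["X"], records[0]["Y"], records[0]["Z"])
quaternion = (records[0]["q0"], records[0]["q1"], records[0]["q2"], records[0]["q4"])
```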

I have some code that reads the txt file and creates and transforms new cameras, but I can't figure out how to get the transformation from projected coordinates to local chunk coordinates to work correctly. See attached code below. Based on code by Agisoft forum users Tom2L (https://www.agisoft.com/forum/index.php?topic=14522.0) and Paulo (https://www.agisoft.com/forum/index.php?topic=11313.0).

Code: [Select]
# Import necessary modules
import Metashape
import csv
import math

# Define function to convert a quaternion into euler angles (degrees)
def euler_from_quaternion(x, y, z, w):

    t0 = +2.0 * (w * x + y * z)
    t1 = +1.0 - 2.0 * (x * x + y * y)
    roll_x = math.atan2(t0, t1) * (180 / math.pi)

    t2 = +2.0 * (w * y - z * x)
    t2 = +1.0 if t2 > +1.0 else t2
    t2 = -1.0 if t2 < -1.0 else t2
    pitch_y = math.asin(t2) * (180 / math.pi)

    t3 = +2.0 * (w * z + x * y)
    t4 = +1.0 - 2.0 * (y * y + z * z)
    yaw_z = math.atan2(t3, t4) * (180 / math.pi)

    return -yaw_z, roll_x, -pitch_y


doc = Metashape.app.document
chunk = doc.chunk

if chunk.transform:
    T = chunk.transform.matrix
else:
    T = Metashape.Matrix.Diag([1, 1, 1, 1])
crs = chunk.crs
print("Script started")

track = chunk.addCameraTrack()
track.label = "Camera Track"
keyframes = list()

# Get path to GeoSLAM Trajectory .TXT file
path_trajectory = Metashape.app.getOpenFileName("GeoSLAM Trajectory .TXT File:")

with open(path_trajectory, 'r') as csvfile:
    traj = csv.reader(csvfile, delimiter=',')

    # Skip header
    next(traj)

    # Loop through trajectory file and create key frames
    for line in traj:

        # Get position vector from GeoSLAM trajectory coordinates
        projCoords = Metashape.Vector([float(line[0]), float(line[1]), float(line[2])])

        # Get position in LSE (Local Space Euclidean) geocentric coordinates
        lseCoords = crs.unproject(projCoords)

        # Get 4x4 transformation matrix from LSE to arbitrary local frame
        m = crs.localframe(lseCoords)

        # Convert quaternion rotation (q0 to q4) from GeoSLAM into euler angles (yaw, pitch, roll)
        orientation = euler_from_quaternion(float(line[4]), float(line[5]), float(line[6]), float(line[7]))

        # Convert euler angles into a 3x3 rotation matrix
        R = Metashape.Utils.ypr2mat(orientation)

        # Build 3x3 rotation matrix in the local frame
        R = Metashape.Matrix([[m[0, 0], m[0, 1], m[0, 2]],
                              [m[1, 0], m[1, 1], m[1, 2]],
                              [m[2, 0], m[2, 1], m[2, 2]]]).t() * R * Metashape.Matrix.Diag((1, -1, -1))

        # Build 4x4 transformation matrix
        row = list()
        for j in range(0, 3):
            row.append(Metashape.Vector(R.row(j)))
            row[j].size = 4
            row[j].w = lseCoords[j]
        row.append(Metashape.Vector([0, 0, 0, 1]))

        # Concatenate rows into a Metashape matrix object
        M = Metashape.Matrix([row[0], row[1], row[2], row[3]])

        pos = chunk.addCamera()

        # Set camera as keyframe type
        pos.type = Metashape.Camera.Type.Keyframe

        # Transform camera from geocentric coordinates into the internal chunk frame
        pos.transform = T.inv() * M

        # Add to list of keyframes
        keyframes.append(pos)

track.keyframes = keyframes
chunk.camera_track = track
print("Done")
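
The quaternion-to-Euler helper above can also be exercised on its own, without Metashape. A minimal standalone check (same formula, plain Python only; the test quaternions are illustrative) confirms that an identity quaternion maps to zero angles and a 90-degree rotation about x comes back as 90 degrees of roll:

```python
import math

def euler_from_quaternion(x, y, z, w):
    # Same conversion as in the script above, with no Metashape dependency
    t0 = 2.0 * (w * x + y * z)
    t1 = 1.0 - 2.0 * (x * x + y * y)
    roll_x = math.degrees(math.atan2(t0, t1))

    t2 = 2.0 * (w * y - z * x)
    t2 = max(-1.0, min(1.0, t2))  # clamp to asin's domain
    pitch_y = math.degrees(math.asin(t2))

    t3 = 2.0 * (w * z + x * y)
    t4 = 1.0 - 2.0 * (y * y + z * z)
    yaw_z = math.degrees(math.atan2(t3, t4))

    return -yaw_z, roll_x, -pitch_y

# Identity quaternion: no rotation at all
yaw, roll, pitch = euler_from_quaternion(0.0, 0.0, 0.0, 1.0)

# 90 degree rotation about the x axis: q = (sin(45), 0, 0, cos(45))
s = math.sin(math.pi / 4)
c = math.cos(math.pi / 4)
yaw90, roll90, pitch90 = euler_from_quaternion(s, 0.0, 0.0, c)
```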
« Last Edit: February 16, 2023, 07:16:37 PM by geojules »

Alexey Pasumansky

  • Agisoft Technical Support
Re: Animation camera track from projected coordinates in Python API
« Reply #1 on: February 16, 2023, 08:24:45 PM »
Hello geojules,

I have tried to create a script without looking at your code. Can you check it on your data and let me know if the animation track is created as expected?


Code: [Select]
import Metashape, math

def quaternion_to_rotation(q):
    # q -- camera attitude quaternion
    # default rotation of camera: x axis points to the right, y axis points down,
    # z axis points towards the viewing direction
    s = 1.0 / q.norm2()
    R = Metashape.Matrix([[1 - 2.0 * s * (q.y * q.y + q.z * q.z),     2.0 * s * (q.x * q.y - q.z * q.w),     2.0 * s * (q.x * q.z + q.y * q.w)],
                          [    2.0 * s * (q.x * q.y + q.z * q.w), 1 - 2.0 * s * (q.x * q.x + q.z * q.z),     2.0 * s * (q.y * q.z - q.x * q.w)],
                          [    2.0 * s * (q.x * q.z - q.y * q.w),     2.0 * s * (q.y * q.z + q.x * q.w), 1 - 2.0 * s * (q.x * q.x + q.y * q.y)]])
    return R


def pos_and_quat_to_transform(p, q):
    # p -- camera center position
    # q -- camera attitude quaternion
    return Metashape.Matrix.Translation(p) * Metashape.Matrix.Rotation(quaternion_to_rotation(q))


def import_transformations_internal(filename):
    # file line format:
    # "X","Y","Z","OffsetTime","q0","q1","q2","q4"
    # 555945.841,35552877.520,1078.980,1674768640.000,0.493,0.168,-0.099,0.848

    doc = Metashape.app.document
    chunk = doc.chunk
    crs = chunk.crs
    T = chunk.transform.matrix

    animation = chunk.addCameraTrack()
    animation.label = "From quaternions"
    frames = []

    line_num = 0
    fin = open(filename, "rt")
    for line in fin:
        if not line_num:  # skip header line
            line_num += 1
            continue

        line_num += 1
        tokens = line.split(",")
        if len(tokens) < 8:
            print("Not enough tokens at line " + str(line_num) + ": " + line)
            continue
        label = tokens[3]
        values = []
        try:
            for i in range(8):
                values.append(float(tokens[i]))
        except ValueError:
            if line_num != 1:
                print("Non-float value: " + tokens[i] + ", at line " + str(line_num))
            continue
        p = Metashape.Vector([values[0], values[1], values[2]])
        p = T.inv().mulp(chunk.crs.unproject(p))
        q = Metashape.Vector([values[4], values[5], values[6], values[7]])

        frame = chunk.addCamera()
        frame.type = Metashape.Camera.Type.Keyframe
        frame.transform = pos_and_quat_to_transform(p, q)
        frames.append(frame)

    fin.close()
    animation.keyframes = frames
    chunk.camera_track = animation
    print("Script finished")


def import_transformations():
    filename = Metashape.app.getOpenFileName(filter="*.txt")
    if filename != "" and filename is not None:
        import_transformations_internal(filename)
    else:
        print(filename)


label = "Custom menu/Import animation from quaternions"
Metashape.app.addMenuItem(label, import_transformations)
print("To execute this script press {}".format(label))
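
As a Metashape-free sanity check (the plain-float rewrite below is illustrative, not part of the script), the same quaternion formula can be verified to produce an orthonormal rotation matrix for the first quaternion in the trajectory sample:

```python
def quaternion_to_rotation(x, y, z, w):
    # Same formula as above, written with plain floats instead of Metashape types;
    # dividing by the squared norm keeps the result valid for unnormalized quaternions
    s = 1.0 / (x * x + y * y + z * z + w * w)
    return [
        [1 - 2 * s * (y * y + z * z),     2 * s * (x * y - z * w),     2 * s * (x * z + y * w)],
        [    2 * s * (x * y + z * w), 1 - 2 * s * (x * x + z * z),     2 * s * (y * z - x * w)],
        [    2 * s * (x * z - y * w),     2 * s * (y * z + x * w), 1 - 2 * s * (x * x + y * y)],
    ]

# First quaternion from the trajectory sample
R = quaternion_to_rotation(0.493, 0.168, -0.099, 0.848)

# R times its transpose should equal the identity for a proper rotation matrix
RRt = [[sum(R[i][k] * R[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
```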
Best regards,
Alexey Pasumansky,
Agisoft LLC

geojules

Re: Animation camera track from projected coordinates in Python API
« Reply #2 on: February 17, 2023, 07:03:58 PM »
Hello Alexey,

Thank you for your quick response. I have tested your code and it looks like I still do not get the expected animation track. It is strange because the Z coordinate is correct, the X coordinate is close, and the Y coordinate is completely off. With the code I posted I get the same X, Y, Z position results that your code produced, although the Yaw, Pitch, Roll values are different from your code. See below.

Original trajectory:
Code: [Select]
"X","Y","Z","OffsetTime","q0","q1","q2","q4"
555945.841,35552877.520,1078.980,1674768640.000,0.493,0.168,-0.099,0.848
555945.590,35552877.232,1079.757,1674768640.000,0.461,0.058,-0.149,0.873
555945.610,35552876.400,1080.018,1674768640.000,0.536,0.025,-0.083,0.840
555945.459,35552875.548,1079.931,1674768640.000,0.574,0.153,-0.024,0.804
555945.283,35552874.697,1079.918,1674768640.000,0.420,0.060,0.013,0.906

Animation track - your code:
Code: [Select]
X, Y, Z, Yaw, Pitch, Roll
587023.5559, -9997964.943, 1078.980, 130.331, 57.053, -155.904
587023.1608, -9997964.943, 1079.757, 130.975, 59.558, -140.959
587023.1783, -9997964.943, 1080.018, 108.977, 52.275, -161.268
587022.9296, -9997964.943, 1079.931, 111.422, 50.400, -175.798
587022.6419, -9997964.943, 1079.918, 98.777, 72.267, -178.592

Animation track - my code:
Code: [Select]
X, Y, Z, Yaw, Pitch, Roll
587023.5559, -9997964.943, 1078.980, 270.140, 60.326, -22.491
587023.1608, -9997964.943, 1079.757, 282.288, 54.191, -13.807
587023.1783, -9997964.943, 1080.018, 276.524, 64.676, -7.526
587022.9296, -9997964.943, 1079.931, 261.808, 72.186, -15.877
587022.6419, -9997964.943, 1079.918, 265.738, 49.995, -5.612
« Last Edit: February 17, 2023, 07:15:47 PM by geojules »

Alexey Pasumansky

Re: Animation camera track from projected coordinates in Python API
« Reply #3 on: February 20, 2023, 02:36:09 PM »
Hello geojules,

Can you please check that the imported coordinates are correct? It seems that the Y value should be 5,552,877 instead of 35,552,877 meters. Does the script work properly if you use modified Y values on input? Or, possibly, it should be 3,552,877 meters?
« Last Edit: February 20, 2023, 03:40:18 PM by Alexey Pasumansky »
Best regards,
Alexey Pasumansky,
Agisoft LLC

geojules

Re: Animation camera track from projected coordinates in Python API
« Reply #4 on: February 21, 2023, 07:19:39 PM »
Alexey, I double checked and the coordinates are correct. Those values are valid for the NAD83 UTM 13N projection. Regardless, the script should still be able to transform the coordinates, and both scripts should produce matching results, even if the input values were incorrect.

Any thoughts on what might be going on here? This is the same issue I ran into with my code.

Paulo

Re: Animation camera track from projected coordinates in Python API
« Reply #5 on: February 21, 2023, 09:19:29 PM »
Hello geojules,

Your Y coordinates (northing) are definitely way off. They should be in the 3.5 million meter range, not 35 million! After removing a 5 from your Y coordinates, I placed these as markers and they fall right near a cave in NM. See following...
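
Paulo's observation can be turned into a quick pre-flight check on the trajectory file. A minimal sketch (the helper name is hypothetical; the range limit follows from UTM's definition, where northings run from 0 to 10,000,000 m):

```python
# UTM northings are defined on [0, 10,000,000) meters; anything outside
# that band cannot be a valid NAD83 UTM 13N coordinate.
def northing_in_utm_range(northing_m):
    return 0.0 <= northing_m < 10_000_000.0

suspect = 35552877.520   # northing as written in the trajectory file
corrected = 3552877.520  # northing with the extra digit removed
```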
Best Regards,
Paul Pelletier,
Surveyor

Alexey Pasumansky

Re: Animation camera track from projected coordinates in Python API
« Reply #6 on: February 21, 2023, 09:28:35 PM »
Hello geojules,

Please see where the marker points to if I use 3552877 for the Y value (location.jpg) versus 35552877 (location_wrong.jpg). In both cases the coordinate system is set to EPSG::6342.

With a coordinate that far outside the system's area of use, any further conversions, such as YPR estimation, would also be wrong. So I would suggest checking whether you get a valid result from your script or mine using a file with the corrected input coordinates.
Best regards,
Alexey Pasumansky,
Agisoft LLC

geojules

Re: Animation camera track from projected coordinates in Python API
« Reply #7 on: February 22, 2023, 02:02:38 AM »
Alexey, you were exactly right, the northing coordinates were wrong (and yes, thank you Paulo for catching that mistake as well). It was a typo in the PDAL transformation filter I used to move the trajectory from CloudCompare's local frame into the projected coordinate system. Sorry for missing that initially.

Wrong PDAL code:
Code: [Select]
{
"type": "filters.transformation",
"matrix": "1  0  0  555900  0  1  0  35552800  0  0  1  0  0  0  0  1"
}

Correct PDAL code:
Code: [Select]
{
"type": "filters.transformation",
"matrix": "1  0  0  555900  0  1  0  3552800  0  0  1  0  0  0  0  1"
}
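
The difference between the two filters is purely the translation column of the matrix. A minimal sketch applying PDAL's whitespace-separated, row-major 4x4 matrix to a point by hand (the `apply_matrix` helper and the origin test point are illustrative, not part of PDAL):

```python
def apply_matrix(matrix_str, point):
    # Parse PDAL's whitespace-separated, row-major 4x4 matrix and apply
    # it to a 3D point in homogeneous coordinates.
    v = [float(t) for t in matrix_str.split()]
    rows = [v[i * 4:(i + 1) * 4] for i in range(4)]
    p = [point[0], point[1], point[2], 1.0]
    return tuple(sum(r[j] * p[j] for j in range(4)) for r in rows)[:3]

wrong = "1  0  0  555900  0  1  0  35552800  0  0  1  0  0  0  0  1"
right = "1  0  0  555900  0  1  0  3552800  0  0  1  0  0  0  0  1"

# A local trajectory point at the origin maps to the translation itself,
# making the bad northing offset obvious.
origin_wrong = apply_matrix(wrong, (0.0, 0.0, 0.0))
origin_right = apply_matrix(right, (0.0, 0.0, 0.0))
```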

But with your code and the correct coordinates the trajectory looks good. Thank you for your help!