
Author Topic: Rotate Panorama Programmatically  (Read 2752 times)

tristeng

Rotate Panorama Programmatically
« on: January 13, 2021, 08:25:40 PM »
Using the panorama tutorial as a reference: https://agisoft.freshdesk.com/support/solutions/articles/31000148830

We currently use Hugin to generate spherical panoramas, and it reliably orients the panorama so that the horizon is level. Occasionally that process fails, so I decided to try Metashape, but I am running into issues where the model comes out slightly rotated and the horizon is no longer level.

Step 4 in the above tutorial shows how to correct this manually, but I would like to do it programmatically. I have figured out that you can apply a rotation on panorama export (Tasks.ExportPanorama.rotation); my issue is determining the rotation matrix to apply so that the model comes out aligned correctly. My question is: how can I determine this rotation using the Python API?
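
For reference, the export call I'm talking about looks roughly like this (the path and size are placeholders, and the identity matrix is the part I need to replace with the correct rotation):

Code:
import Metashape

chunk = Metashape.app.document.chunk

task = Metashape.Tasks.ExportPanorama()
task.path = "panorama.jpg"    # placeholder output path
task.width = 16000            # placeholder resolution
task.height = 8000
task.rotation = Metashape.Matrix().Diag([1, 1, 1])  # identity for now - this is the matrix I need to figure out
task.apply(chunk)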

Our datasets are captured by drones, so I believe I should at least be able to determine an up/down vector from the cameras, and then rotate the model such that this vector is aligned with one of Metashape's axes. I see two cases:

1. If the drone only took horizontal images (gimbal angle of 0), then I would want to select two camera view vectors that are roughly 90 degrees apart and compute a vector orthogonal to both (a cross product, I think? - see the sketch below).
2. Most of the time, the drone will have taken horizontal images as well as images where the gimbal angle is non-zero (up to -90 degrees), so in this case I think I should be able to add all the view vectors together, which should zero out the horizontal components and leave me with a vector that points up/down.

Or is there an easier way to get this? Does the camera station have its orientation based on all the camera positions?
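
To make case 1 concrete, here is roughly what I have in mind (an untested sketch; I'm assuming camera.transform maps camera coordinates into the chunk frame, so transforming (0, 0, 1) gives the view direction, and just taking the first two aligned cameras is a placeholder for actually picking a pair ~90 degrees apart):

Code:
import Metashape

chunk = Metashape.app.document.chunk
aligned = [c for c in chunk.cameras if c.transform]

# view directions of two (ideally horizontal) cameras roughly 90 degrees apart;
# picking the actual pair is left out here, these are just the first two
v1 = aligned[0].transform.mulv(Metashape.Vector([0, 0, 1]))
v2 = aligned[1].transform.mulv(Metashape.Vector([0, 0, 1]))

# cross product written out by hand: orthogonal to both view directions,
# i.e. the up/down direction if both cameras really are level
up = Metashape.Vector([v1[1] * v2[2] - v1[2] * v2[1],
                       v1[2] * v2[0] - v1[0] * v2[2],
                       v1[0] * v2[1] - v1[1] * v2[0]])
up = up * (1.0 / up.norm())  # normalize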

Thanks,

Tristen

tristeng

Re: Rotate Panorama Programmatically
« Reply #1 on: January 14, 2021, 11:22:29 PM »
Wasn't able to figure this out - I was attempting to use the pitch angle to correct the rotation about the x-axis (this requires imagery with gimbal angle metadata that Metashape supports; DJI works).

In some datasets, it appeared the first aligned image could be used to get the original DJI gimbal angle:

Code:
# find_first_aligned_camera and task (a Metashape.Tasks.ExportPanorama) are defined elsewhere in my script
camera = find_first_aligned_camera(chunk)
# 180 about z so the image is right side up; pitch comes from the first image's
# gimbal angle (subtract 90 to get back the original DJI angle)
task.rotation = Metashape.Utils.euler2mat(Metashape.Vector([180, camera.reference.rotation[1] - 90, 0]))

and in other cases this assumption was wrong, but it did appear that the gimbal angles from other images could be used. I was not able to determine how the model was generated from the images such that it ended up rotated about x. Any clues would be helpful; otherwise I think my original theory of determining an up/down vector from the cameras might work, but unfortunately my matrix math isn't what it used to be.

tristeng

Re: Rotate Panorama Programmatically
« Reply #2 on: January 16, 2021, 04:06:39 AM »
Went with my original idea of summing the view vectors of each camera, and this seems to have put me on the right track. This algorithm assumes that the imagery is either horizontal or pointing towards the ground - some images that point up slightly are OK, but if your dataset also covers the sky (i.e. full spherical coverage), then this algorithm would not work. It also assumes that the image set covers 360 degrees, such that the x and y components of the camera view vectors essentially cancel each other out and leave only a vector that points vertically.

After the images have been loaded, assigned to a station camera group, matched, and aligned, you can sum all the camera view vectors (ignoring non-aligned cameras) by transforming the point (0, 0, 1) from camera coordinates to world space - this gives you a vector in world space of where each camera is pointing. Once you have summed all those vectors, the result should point directly down in model space. You can then determine a rotation matrix from a world axis to this vector and apply it to the export task.
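
In code, the idea looks roughly like this - a simplified sketch rather than my exact script; treating -Z as "down" in the frame the export rotation expects is an assumption on my part, so flip the target vector (or transpose the matrix with rot.t()) if your result comes out upside down:

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Sum the view directions of all aligned cameras. camera.transform maps camera
# coordinates into the chunk frame, so mulv((0, 0, 1)) is the view direction.
down = Metashape.Vector([0, 0, 0])
for camera in chunk.cameras:
    if camera.transform:  # skip cameras that failed to align
        down += camera.transform.mulv(Metashape.Vector([0, 0, 1]))
down = down * (1.0 / down.norm())  # normalize

# Build the rotation that takes the summed "down" direction onto the target
# axis (Rodrigues' formula written out by hand).
target = Metashape.Vector([0, 0, -1])  # assumption: panorama "down" is -Z
axis = Metashape.Vector([down[1] * target[2] - down[2] * target[1],
                         down[2] * target[0] - down[0] * target[2],
                         down[0] * target[1] - down[1] * target[0]])
sin_a = axis.norm()
cos_a = down[0] * target[0] + down[1] * target[1] + down[2] * target[2]
if sin_a > 1e-8:
    k = axis * (1.0 / sin_a)
    x, y, z = k[0], k[1], k[2]
    c, s, t = cos_a, sin_a, 1 - cos_a
    rot = Metashape.Matrix([[t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
                            [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
                            [t * x * z - s * y, t * y * z + s * x, t * z * z + c]])
else:
    rot = Metashape.Matrix().Diag([1, 1, 1])  # already (anti)parallel, nothing to do

task = Metashape.Tasks.ExportPanorama()
task.path = "panorama.jpg"  # placeholder output path
task.rotation = rot
task.apply(chunk)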

I still see some undulations so it's not perfect, but at least I have a starting point that I can iterate on.

The other thing I noticed is that there almost always is one camera that is pointing along an axis - so I might revise my algorithm to find this camera, and then pull out the gimbal pitch from this camera as my rotation value. Since we rely on DJI drones, we will have the gimbal angles in the image metadata, but this second algorithm wouldn't work for drones that don't record this data - or for image sets where Metashape doesn't have the camera reference rotation data.
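
Roughly what I mean by that second algorithm (untested; picking the camera by its largest view-vector component is just one way of defining "pointing along an axis", task is the same ExportPanorama task as before, and the [180, pitch - 90, 0] angles mirror the snippet from my first reply):

Code:
# find the aligned camera whose view direction is most closely lined up with
# one of the model axes, then use its gimbal pitch from the reference metadata
best_cam, best_score = None, 0.0
for camera in chunk.cameras:
    if not camera.transform or camera.reference.rotation is None:
        continue
    v = camera.transform.mulv(Metashape.Vector([0, 0, 1]))
    v = v * (1.0 / v.norm())
    score = max(abs(v[0]), abs(v[1]), abs(v[2]))  # 1.0 means exactly along an axis
    if score > best_score:
        best_cam, best_score = camera, score

if best_cam is not None:
    pitch = best_cam.reference.rotation[1]
    task.rotation = Metashape.Utils.euler2mat(Metashape.Vector([180, pitch - 90, 0]))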

JLM

Re: Rotate Panorama Programmatically
« Reply #3 on: January 10, 2023, 03:03:03 PM »
Thanks to your post and others that I dug through, I think I got to something not too bad that could work.

Basically, it aligns the coordinate system with the orientation of the most nadir ("nadirest") picture, then applies a 90° rotation at export.

Requirement: at least one picture should point at the ground.


Code:
############ export panorama
###Find the NADIRest picture
# initialize variables to store the lowest pitch and yaw values
lowest_pitch = float("inf")
lowest_yaw = float("inf")
lowest_pitch_yaw_camera = None

# iterate over the aligned cameras in the chunk
aligned_cameras = [camera for camera in chunk.cameras if camera.transform]
for camera in aligned_cameras:
    # skip cameras without yaw/pitch/roll reference metadata
    if camera.reference.rotation is None:
        continue
    # check if the pitch of the current camera is lower than the lowest pitch so far
    if camera.reference.rotation[1] < lowest_pitch:
        # update the lowest pitch and yaw values
        lowest_pitch = camera.reference.rotation[1]
        lowest_yaw = camera.reference.rotation[0]
        # update the camera with the lowest pitch and yaw
        lowest_pitch_yaw_camera = camera
    # if the pitch of the current camera is equal to the lowest pitch so far, check the yaw value
    elif camera.reference.rotation[1] == lowest_pitch:
        # if the yaw of the current camera is lower than the lowest yaw so far, update the lowest yaw value and camera
        if camera.reference.rotation[0] < lowest_yaw:
            lowest_yaw = camera.reference.rotation[0]
            lowest_pitch_yaw_camera = camera

###Get the right orientation
#From Alexey Pasumansky:
# moves the coordinate system origin to the center of the selected camera
# and transforms the orientation of the system, according to the NADIREST camera orientation.
camera = lowest_pitch_yaw_camera
T = chunk.transform.matrix
origin = (-1) * camera.center
R1 = Metashape.Matrix().Rotation(camera.transform.rotation() * Metashape.Matrix().Diag([1, -1, 1]))
origin = R1.inv().mulp(origin)
chunk.transform.matrix = T.scale() * Metashape.Matrix().Translation(origin) * R1.inv()
Metashape.app.update()


# export panorama by rotating it 90°
rot = Metashape.Utils.euler2mat(Metashape.Vector([0, 90, 0]))
exppano = Metashape.Tasks.ExportPanorama()
exppano.path = export_dir + "/panorama.jpg"  # export_dir must be set earlier in your script
# change height & width according to your needs
exppano.height = 8000
exppano.width = 16000
exppano.rotation = rot
exppano.apply(chunk)