
Author Topic: Rotate Panorama Programmatically  (Read 1442 times)

tristeng

Rotate Panorama Programmatically
« on: January 13, 2021, 08:25:40 PM »
Using the panorama tutorial as a reference: https://agisoft.freshdesk.com/support/solutions/articles/31000148830

We currently use Hugin to generate spherical panoramas; when it succeeds, it orients the panorama so that the horizon is level. Occasionally the process fails, though, so I decided to try Metashape, but I am running into issues where the model is slightly rotated such that the horizon is no longer level.

Step 4 in the above tutorial does show how we can correct this manually, but I would like to be able to do this programmatically, and have figured out that you can rotate upon Panorama export (Tasks.ExportPanorama.rotation), but my issue is determining the rotation matrix to apply to the model to get it aligned correctly. My question is how can I determine this rotation using the Python API?

Our datasets are captured by drones, so I believe I should be able to at least determine the up/down vector of the camera positions, and then rotate the model so that this vector is aligned with one of Metashape's axes. I see two cases:

1. if the drone only took horizontal images (gimbal angle is 0), then I would want to select 2 camera view vectors that are about 90 degrees apart and perform an operation to get a vector that is orthogonal to both (cross product, I think?)
2. most of the time, the drone will have taken horizontal images as well as images where the gimbal angle is non-zero (up to -90 degrees), so in this case I think I can add all the vectors together, which should zero out the horizontal components and leave me with a vector pointing up/down
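Both cases boil down to simple vector math. Here is a sketch in plain Python (helper names are my own; in Metashape the view vectors would come from each aligned camera's transform):

```python
def cross(a, b):
    """Case 1: cross product of two roughly horizontal view vectors
    gives a vector orthogonal to both, i.e. pointing up/down."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sum_view_vectors(vectors):
    """Case 2: summing view directions over a full 360-degree sweep
    cancels the horizontal components, leaving an up/down vector."""
    return (sum(v[0] for v in vectors),
            sum(v[1] for v in vectors),
            sum(v[2] for v in vectors))
```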

Or is there an easier way to get this? Does the camera station have its orientation based on all the camera positions?

Thanks,

Tristen

tristeng

Re: Rotate Panorama Programmatically
« Reply #1 on: January 14, 2021, 11:22:29 PM »
Wasn't able to figure this out - I was attempting to use the gimbal pitch angle to correct the rotation about the x-axis (this requires imagery with gimbal angle metadata that Metashape supports; DJI works).

In some datasets, it appeared the first aligned image could be used to get the original DJI gimbal angle:

Code: [Select]
import Metashape

camera = find_first_aligned_camera(chunk)  # first camera with a valid transform
# 180 about z so my image was right side up, then the first image's gimbal
# angle (subtract 90 to get the original DJI angle)
task.rotation = Metashape.Utils.euler2mat(Metashape.Vector([180, camera.reference.rotation[1] - 90, 0]))

and in other cases this assumption was wrong, but it did appear that the gimbal angles from other images could be used. I was not able to determine how the model was generated from the images such that it ended up rotated about x. Any clues would be helpful; otherwise, I think my original theory of determining an up/down vector from the cameras might work, but unfortunately my matrix math isn't what it used to be.
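For context, find_first_aligned_camera in the snippet above isn't part of the Metashape API - it's just my own loop over chunk.cameras looking for a valid transform, something like:

```python
def find_first_aligned_camera(chunk):
    """Return the first camera Metashape managed to align
    (camera.transform is None for unaligned cameras)."""
    for camera in chunk.cameras:
        if camera.transform is not None:
            return camera
    return None
```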

tristeng

Re: Rotate Panorama Programmatically
« Reply #2 on: January 16, 2021, 04:06:39 AM »
Went with my original idea of summing the view vectors of each camera, and this seems to have put me on the right track. The algorithm assumes the imagery is either horizontal or pointing towards the ground - a few images that point up slightly are OK, but if your dataset also covers the sky (i.e. full spherical coverage), the algorithm would not work. It also assumes the image set covers 360 degrees, so that the x and y components of the camera view vectors essentially cancel each other out, leaving only a vector that points vertically.

After the images have been loaded, assigned to a station camera group, matched, and aligned, you can sum all the camera view vectors (ignoring non-aligned cameras) by transforming the vector (0, 0, 1) from camera coordinates to world space - this gives you a world-space vector of where each camera is pointing. Once you have summed all those vectors, the result should point directly down in model space. You can then determine a rotation matrix from a world axis to this vector and apply it to the export task.
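The "rotation matrix from a world axis to this vector" step is just Rodrigues' formula. Here it is with plain nested lists so it runs outside Metashape (in the actual script, the summed vector comes from camera.transform.mulv(Metashape.Vector([0, 0, 1])) per aligned camera, and the result gets wrapped in a Metashape.Matrix for the export task):

```python
import math

def rotation_between(a, b):
    """3x3 rotation (row-major nested lists) taking unit vector a onto
    unit vector b, via Rodrigues' formula: R = I + K + K^2 * (1-c)/s^2,
    where K is the cross-product matrix of v = a x b."""
    ax, ay, az = a
    bx, by, bz = b
    vx, vy, vz = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
    c = ax * bx + ay * by + az * bz       # cos(angle between a and b)
    s2 = vx * vx + vy * vy + vz * vz      # sin(angle)^2 = |v|^2
    if s2 < 1e-12:
        if c > 0:  # already aligned
            return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
        # anti-parallel: 180-degree flip about x (valid when a is near the z axis)
        return [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
    K = [[0, -vz, vy], [vz, 0, -vx], [-vy, vx, 0]]
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)]
          for i in range(3)]
    k = (1 - c) / s2
    return [[(1 if i == j else 0) + K[i][j] + k * K2[i][j] for j in range(3)]
            for i in range(3)]
```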

I still see some undulations, so it's not perfect, but at least I have a starting point that I can iterate on.

The other thing I noticed is that there is almost always one camera pointing along an axis - so I might revise my algorithm to find this camera and pull out its gimbal pitch as my rotation value. Since we rely on DJI drones, we will have the gimbal angles in the image metadata, but this second algorithm wouldn't work for drones that don't record this data - or for image sets where Metashape doesn't have the camera reference rotation data.
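A rough sketch of that revision (helper name is mine, with plain tuples standing in for Metashape vectors; in the script the view vectors would come from each camera's transform, and the pitch from that camera's reference rotation):

```python
def index_closest_to_axis(view_vectors,
                          axes=((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0))):
    """Index of the (unit) view vector best aligned with any of the
    given horizontal world axes - i.e. the camera whose gimbal pitch
    we would then read from its reference metadata."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(view_vectors)),
               key=lambda i: max(dot(view_vectors[i], ax) for ax in axes))
```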