Forum

Author Topic: Getting Rotation and Transformation Matrices out to do a Perspective Warp  (Read 4236 times)

s093294

  • Newbie
  • *
  • Posts: 23
    • View Profile
Hello guys. (New here.)

In the case of low overlap between aerial photos, I have created an alternative way of stitching the images that I would like to use with the bundle-adjusted images coming out of PhotoScan.

Low overlap results in blurry lines where lines should have been straight. So instead of creating an orthophoto, we stitch the images by doing a perspective warp of each image onto the ground plane: we find the 4 corners of each image in the world coordinate system and warp the image to those coordinates.

So my question is: can I, via Python scripting, get the transformation/rotation matrix of each image so that I can project
[0,0,1], [imHeight,0,1], [imHeight,imWidth,1] and [0,imWidth,1] into the world coordinate system?
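For illustration of the warp step itself: once the four image corners and their ground positions are known, the perspective warp is a planar homography. A minimal pure-Python sketch (the helper names and corner values are hypothetical, not PhotoScan API):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    # src, dst: four (u, v) -> (x, y) corner correspondences.
    # Returns the 3x3 matrix H (with H[2][2] = 1) mapping src onto dst.
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp(H, u, v):
    # Apply the homography to one pixel coordinate.
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return ((H[0][0] * u + H[0][1] * v + H[0][2]) / w,
            (H[1][0] * u + H[1][1] * v + H[1][2]) / w)
```

With the four corner correspondences in hand, `warp` maps any interior pixel to its ground position.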




s093294

So far I got to this:

tl = a_camera.calibration.unproject([0,0])   # ray direction in camera coordinates
tl.size = 4
tl[3] = 1                                    # make the vector homogeneous
tl_geo = currentChunk.transform * a_camera.transform * tl
tl_geo.size = 3

>>> crs.project(tl_geo)
Vector([11.779803954459014, 55.226185881874194, 523.0758500489611])

but the coordinate in Google Maps doesn't fit the image corner, sadly. The altitude should also be somewhere in the 30-40 m range.
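As a side note on the `tl.size = 4` / `tl[3] = 1` step above: the 4x4 transforms act on homogeneous points, which is why the vector is padded with a 1 before multiplying. A tiny pure-Python illustration (the translation matrix here is a hypothetical stand-in, not the real chunk or camera transform):

```python
def matvec4(M, p):
    # Apply a 4x4 transform (row-major nested lists) to a homogeneous point (x, y, z, w).
    return tuple(sum(M[r][c] * p[c] for c in range(4)) for r in range(4))

# Pure translation by (10, 20, 30); the 1 in the fourth slot is what picks it up.
T = [[1, 0, 0, 10],
     [0, 1, 0, 20],
     [0, 0, 1, 30],
     [0, 0, 0, 1]]
```

A point with w = 1 gets translated; a direction with w = 0 is left unchanged, which is the reason points and ray directions must be padded differently.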

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 13210
    • View Profile
Hello s093294,

PhotoScan uses altitude above the ellipsoid, while in Google Earth it seems to be above mean sea level.
Best regards,
Alexey Pasumansky,
Agisoft LLC

s093294

Thank you. I am new to PhotoScan and just learning, so your answers are much appreciated.

I tried to write my own bundle adjuster and mapping system, but realized the complexity, and am now trying to see if I can extend PhotoScan with some Python scripts for the things I need.

Does the code posted above look correct?

I am trying to write a script that gets the coordinates of the image corners for each image, to use with our existing tiling and mapping pipeline.

Alexey Pasumansky

Hello s093294,

So do you wish to find coordinates of intersection of the ray from the camera center through image corners with the ground surface?
Best regards,
Alexey Pasumansky,
Agisoft LLC

s093294

Yes, that's exactly what I need.

I have a tiling process where we create XYZ tiles (Google tiles) based on image corner coordinates. So if I can find the corners with PhotoScan, I will be able to compare results with our own implementation, which at this point is not in a satisfying state.
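For reference, the standard XYZ ("slippy map") tile indices for a WGS84 latitude/longitude can be computed with the usual Web Mercator formula. A generic sketch, not tied to PhotoScan:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    # Standard XYZ / Google tile indices for a WGS84 lat/lon at a given zoom.
    lat = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y
```

Feeding the four projected corner coordinates through this gives the range of tiles each warped image touches.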

s093294

I have found a way, but it involves looping over all the faces (it worked okay in a model of 10 images).

In our own setup we used to do this by simply solving the linear system when the camera projection and the height are known. I can see why this approach is smarter, as it doesn't need the height; that is found from the model here. It is a bit slower but more robust, so I think I am happy with this for now.
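The linear-system approach mentioned above (intersecting the camera ray with a plane at a known height) can be sketched in plain Python as follows; the names are illustrative, with the origin and direction coming from the camera center and the unprojected corner ray:

```python
def ray_plane_z(origin, direction, z):
    # Intersect the ray origin + t * direction with the horizontal plane Z = z.
    # Returns the intersection point, or None if the ray is parallel to the
    # plane or the intersection lies behind the ray origin.
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-12:
        return None  # ray (almost) parallel to the plane
    t = (z - oz) / dz
    if t <= 0:
        return None  # plane is behind the camera
    return (ox + t * dx, oy + t * dy, z)
```

This only needs one known ground height, which is exactly the assumption the mesh-based approach below removes.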

The code was adapted from one of your other posts.


import PhotoScan

chunk = PhotoScan.app.document.chunk
camera = chunk.cameras[0]  # or whichever camera is being processed
vertices = chunk.model.vertices
faces = chunk.model.faces

def cross(a, b):
   # cross product of two 3D vectors
   return PhotoScan.Vector([a.y * b.z - a.z * b.y,
                            a.z * b.x - a.x * b.z,
                            a.x * b.y - a.y * b.x])

# the four image corners, clockwise from the top-left
steps = [(0, 0), (camera.width, 0), (camera.width, camera.height), (0, camera.height)]

for x, y in steps:

   point = PhotoScan.Vector([x, y])
   point = camera.calibration.unproject(point)  # ray direction in camera coordinates
   vect = camera.transform.mulv(point)          # rotate the ray into chunk coordinates
   p = PhotoScan.Vector(camera.center)          # ray origin: the camera center

   for face in faces:

      v = face.vertices

      # Moeller-Trumbore ray/triangle intersection
      E1 = PhotoScan.Vector(vertices[v[1]].coord - vertices[v[0]].coord)
      E2 = PhotoScan.Vector(vertices[v[2]].coord - vertices[v[0]].coord)
      D = PhotoScan.Vector(vect)
      T = PhotoScan.Vector(p - vertices[v[0]].coord)
      P = cross(D, E2)
      Q = cross(T, E1)

      # result = [t, u, v]: distance along the ray, then barycentric coordinates
      result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)

      if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
         # barycentric interpolation of the intersection point
         t = (1 - result[1] - result[2]) * vertices[v[0]].coord
         u = result[1] * vertices[v[1]].coord
         v_ = result[2] * vertices[v[2]].coord

         res = chunk.transform.mulp(u + v_ + t)  # chunk -> world coordinates
         res = chunk.crs.project(res)            # world -> projected CRS
         print("{:04d}".format(x + 1) + "\t" + "{:04d}".format(y + 1) + "\t"
               + "{:.4f}".format(res[0]) + "\t" + "{:.4f}".format(res[1]) + "\t"
               + "{:.4f}".format(res[2]) + "\n")
         break  # finish when the first intersection is found
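For anyone wanting to sanity-check the intersection math outside of PhotoScan, the same Möller-Trumbore test can be written in dependency-free Python, with the result ordered (t, u, v) to match the script above:

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Moeller-Trumbore ray/triangle intersection on plain 3-tuples.
    # Returns (t, u, v) for a hit, or None for a miss / degenerate triangle.
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(p, e1)
    if abs(det) < eps:
        return None          # ray parallel to the triangle plane
    tvec = sub(orig, v0)
    u = dot(p, tvec) / det
    if u < 0 or u > 1:
        return None
    q = cross(tvec, e1)
    v = dot(q, d) / det
    if v < 0 or u + v > 1:
        return None
    t = dot(q, e2) / det     # distance along the ray
    return (t, u, v)
```

A ray straight down onto the unit triangle in the Z = 0 plane is a convenient test case.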