Show Posts

Messages - callahb

1
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: December 07, 2016, 05:42:35 PM »
My error. The values are indeed the same, so this confirms that the geocentric coordinates honor the ellipsoid from the projected CRS.

2
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: December 06, 2016, 05:27:23 PM »
Alexey,

Thank you for the assistance, this seems to work very well.

I'm having a little trouble understanding the geocentric coordinate system used by PhotoScan and its relationship to the Chunk CRS.

Is the geocentric coordinate system supposed to honor the ellipsoid from the Chunk CRS if the chunk is georeferenced? For example, if my Chunk was georeferenced using NAD83(2011) does the geocentric coordinate system use the GRS80 ellipsoid?

If so, the values appear to be offset from those generated by the US National Geodetic Survey's XYZ Conversion tool: https://www.ngs.noaa.gov/TOOLS/XYZ/xyz.html
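For a sanity check, the geodetic-to-geocentric conversion that the NGS tool performs can be reproduced directly from the GRS80 parameters (a standalone sketch, independent of the PhotoScan API):

```python
import math

# GRS80 ellipsoid (used by NAD83)
A = 6378137.0            # semi-major axis, meters
F = 1.0 / 298.257222101  # flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def geodetic_to_geocentric(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude (degrees) and ellipsoidal
    height (meters) to geocentric XYZ on the GRS80 ellipsoid."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime vertical radius of curvature
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```

Feeding a known marker's NAD83(2011) latitude/longitude/height through this and comparing against the chunk's geocentric coordinates should show whether the offset is an ellipsoid mismatch or something else.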

Can you help explain this?

Thanks!

3
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: December 05, 2016, 07:57:30 PM »
Hi Alexey,

What is the "T" object here? It's not referenced in the script.

Code: [Select]
vect = T.mulv(camera.center - marker.position) * scale

4
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: December 05, 2016, 05:12:48 PM »
I just can't seem to get this to work. Perhaps Alexey is correct that something is wrong with the depth, since the reported value doesn't correspond to the real depth from the camera, which I measured in the field.

5
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: December 01, 2016, 06:10:52 PM »
It seems that removing the scale from the depth, as below, does not help:
Code: [Select]

#depth_scaled[x,y] = (depth[x,y][0] * scale, ) #old method

depth_scaled[x,y] = (depth[x,y][0] , ) #new method


Did I implement this incorrectly?

6
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: December 01, 2016, 05:13:53 PM »
Hi Alexey,

Have you had a chance to think about this issue yet? Thanks!

7
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 30, 2016, 07:17:36 PM »
Yes, it was a typo, sorry. Here's the full code:

Code: [Select]
#Compare Marker Coords Calculated from Depth Versus Estimated From Model Alignment
import PhotoScan

chunk = PhotoScan.app.document.chunk #active chunk
scale = chunk.transform.scale
camera = chunk.cameras[0] #first camera in the chunk
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration) #unscaled depth
depth_scaled = PhotoScan.Image(depth.width, depth.height, " ", "F32")
v_min = 10E10
v_max = -10E10

#create array of distance values
for y in range(depth.height):
    for x in range(depth.width):
        depth_scaled[x,y] = (depth[x,y][0] * scale, )
       
marker = chunk.markers[0] #first marker
x0,y0 = marker.projections[camera].coord

image_x = int(x0) #marker pixel coords in first camera
image_y = int(y0)

depth = depth_scaled[image_x,image_y][0] #marker distance from camera
focal_length = chunk.sensors[0].calibration.f #camera focal length

cX = chunk.sensors[0].calibration.width/2 #image center x
cY = chunk.sensors[0].calibration.height/2 #image center y

u = image_x - cX #col offset from image center
v = image_y - cY #row offset from image center

cam_X = u*depth/focal_length #camera coordinates X
cam_Y = v*depth/focal_length #camera coordinates Y

cam_matrix = camera.transform
cam_calib = camera.sensor.calibration
chunk_matrix = chunk.transform.matrix
point_2D = PhotoScan.Vector([image_x, image_y])
vect = cam_calib.unproject(point_2D) #vector in camera crs
ray = cam_matrix.mulv(vect) #ray in chunk crs

#find vector/ray intersection with depth plane in front of camera, not sure if this is correct
t = (depth-camera.center[2])/ray[2] #determine parameter t
point_chunk = camera.center - t * ray #find point

point_geoc = chunk_matrix.mulp(point_chunk)
point_crs = chunk.crs.project(point_geoc)
print ('point_crs: ',point_crs)

marker_est = chunk.crs.project(chunk_matrix.mulp(marker.position)) #compute projected coordinates (estimated)
print ('Estimated marker crs coords: ',marker_est)

8
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 30, 2016, 05:13:33 PM »
Hi Alexey,

I've made a little progress but I seem to be stuck again. So far I have the following:

Code: [Select]
cam_matrix = camera.transform
cam_calib = camera.sensor.calibration
chunk_matrix = chunk.transform.matrix
point_2D = PhotoScan.Vector([image_x, image_y])
vect = cam_calib.unproject(point_2D) #vector in camera crs
ray = cam_matrix.mulv(vect) #ray in chunk crs

#find vector/ray intersection with depth plane in front of camera, not sure if this is correct
t = (depth-camera.center[2])/ray[2] #determine parameter t
point_chunk = camera.center - t * ray #find point

point_chunk = PhotoScan.Vector([point_chunk_x,point_chunk_y,point_chunk_z])
point_geoc = chunk_matrix.mulp(point_chunk)
point_crs = chunk.crs.project(point_geoc)


When I test the above with an existing Marker where I already know [image_x, image_y] and the XYZ in the projected CRS, the values don't match. Can you tell me where I've gone wrong?

9
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 29, 2016, 06:24:00 PM »
I am working on an external application/process where I present the user with a 2D oblique image. When they click on the image, a 3D projected coordinate is generated for the pixel location of the click.

I am using PhotoScan to align all of the photos, which creates the necessary transforms for the pixel-to-3D-coordinate conversion, and to create the depth images as well.

10
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 29, 2016, 05:38:26 PM »
Alexey,

I've been having difficulty getting a decent result when converting from local camera coordinates to CRS coordinates. I create X,Y,Z in a camera-local coordinate system from [u,v,depth], so in that coordinate system X points right, Y points down, and Z runs along the optical axis.

Code: [Select]
cam_vector = PhotoScan.Vector([cam_X, cam_Y, cam_Z]) #camera local coordinate system
cam_matrix = camera.transform
chunk_matrix = chunk.transform.matrix
coord_internal = cam_matrix.mulp(cam_vector)
coord_geoc = chunk.transform.matrix.mulp(coord_internal)
coord_proj = chunk.crs.project(coord_geoc)

If I use this method starting with the pixel x,y from a Marker (a well known point), the ending Marker X,Y,Z in CRS coords is typically very far off. Can you help?

11
General / Marker Coords Calculated from Depth Versus From Alignment
« on: November 25, 2016, 10:08:32 PM »
I want to be able to compute projected CRS X,Y,Z from image pixel coordinates (x, y). I dug around and found computer-vision (CV) formulas for doing this, and also found some nice code on this forum for generating many of the necessary pieces. I put together the following code to compare the Estimated Marker Coordinates in a CRS against those computed using the CV formulas (i.e. x = uZ/f).

The differences were much greater than I expected, especially in the marker's Z value. I would have anticipated some variation due to things like not using an undistorted photo and bundle adjustment, but I'd like to understand the difference more fully if someone can explain it.
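For reference, the core CV back-projection the code below relies on can be written standalone (an illustrative sketch only; f is the focal length in pixels and cx, cy the principal point in pixels):

```python
def pixel_to_camera(u_px, v_px, depth, f, cx, cy):
    """Back-project a pixel with known depth into camera coordinates
    using the pinhole model: X = u*Z/f, Y = v*Z/f, Z = depth,
    where u, v are pixel offsets from the principal point."""
    u = u_px - cx
    v = v_px - cy
    return (u * depth / f, v * depth / f, depth)
```

A pixel at the principal point maps straight down the optical axis to (0, 0, depth); off-axis pixels scale linearly with depth over focal length.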

The values below are in international feet (iFT):

CV formula crs coords (1 cam):  Vector([595046.6607458998, 1078727.1350278414, 215.01010265754007])
Estimated marker crs coords:  Vector([595033.9718577337, 1078666.6464214365, 241.21929559608662])

Code: [Select]
#Compare Marker Coords Calculated from Depth Versus Estimated From Model Alignment
import PhotoScan

chunk = PhotoScan.app.document.chunk #active chunk
scale = chunk.transform.scale
camera = chunk.cameras[0] #first camera in the chunk
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration) #unscaled depth
depth_scaled = PhotoScan.Image(depth.width, depth.height, " ", "F32")
v_min = 10E10
v_max = -10E10

#create array of distance values
for y in range(depth.height):
    for x in range(depth.width):
        depth_scaled[x,y] = (depth[x,y][0] * scale, )

       
marker = chunk.markers[0] #first marker
x0,y0 = marker.projections[camera].coord

x1 = int(x0) #marker pixel coords in first camera
y1 = int(y0)

cam_Z = depth_scaled[x1,y1][0] #marker distance from camera

focal_length = chunk.sensors[0].calibration.f #camera focal length

cX = chunk.sensors[0].calibration.cx #principal point x (pixels)
cY = chunk.sensors[0].calibration.cy #principal point y (pixels)

u = x1 - cX #col offset from image center
v = y1 - cY #row offset from image center

cam_X = u*cam_Z/focal_length #camera coordinates X
cam_Y = v*cam_Z/focal_length #camera coordinates Y

cam_vector = PhotoScan.Vector([cam_X, cam_Y, cam_Z]) #convert to vector

chunk_matrix = chunk.transform.matrix
coord_geoc = chunk_matrix.mulp(cam_vector) #geocentric coordinates
coord_proj = chunk.crs.project(coord_geoc) #projected coordinates

print ('CV formula crs coords (1 cam): ',coord_proj)

coord_proj2 = chunk.crs.project(chunk_matrix.mulp(marker.position)) #compute projected coordinates (estimated)
print ('Estimated marker crs coords: ',coord_proj2)


12
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 23, 2016, 08:43:21 PM »
That's it! Thank You!


13
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 23, 2016, 08:16:38 PM »
OK Thank you.

I'm still a little confused after reviewing the values in the XML <transform> against
Code: [Select]
chunk.transform.matrix
because the values are different. Here are the results:

<transform>
<rotation>1.2920001487202448e-001 6.1813988403410725e-001 7.7537761118268644e-001 9.8298306585324424e-001 -1.8280649955923373e-001 -1.8057573609335055e-002 1.3058196048009565e-001 7.6451610021320004e-001 -6.3123964079577122e-001</rotation>
<translation>-2.4914770625398587e+006 -3.7988877821658887e+006 4.4618952101149326e+006</translation>
<scale>1.4135476110541292e+000</scale>
</transform>

>>> print (PhotoScan.app.document.chunk.transform.matrix)
Matrix([[0.18263037237050816, 0.8737701563736887, 1.096033169952144, -2491477.0625398587],      [1.3894933644435172, -0.25840569073712255, -0.025525240036909657, -3798887.7821658887],  [0.18458381828340392, 1.0806799070687882, -0.892287286249529, 4461895.210114933], [0.0, 0.0, 0.0, 1.0]])
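The two representations appear to be consistent: chunk.transform.matrix looks like the 3x3 <rotation> multiplied by <scale>, with <translation> as the fourth column. A plain-Python sketch rebuilding the 4x4 matrix from the XML values above:

```python
# Rebuild the 4x4 chunk transform from the XML components:
# M = [[s*R, t], [0 0 0 1]]
def compose_transform(rotation9, translation3, scale):
    rows = [rotation9[0:3], rotation9[3:6], rotation9[6:9]]
    m = [[scale * rows[i][j] for j in range(3)] + [translation3[i]]
         for i in range(3)]
    m.append([0.0, 0.0, 0.0, 1.0])
    return m

rotation = [1.2920001487202448e-01, 6.1813988403410725e-01, 7.7537761118268644e-01,
            9.8298306585324424e-01, -1.8280649955923373e-01, -1.8057573609335055e-02,
            1.3058196048009565e-01, 7.6451610021320004e-01, -6.3123964079577122e-01]
translation = [-2.4914770625398587e+06, -3.7988877821658887e+06, 4.4618952101149326e+06]
scale = 1.4135476110541292

M = compose_transform(rotation, translation, scale)
# M[0][0] reproduces the 0.18263037... entry printed by chunk.transform.matrix
```

So the matrix already has the scale folded into its rotation part, which is why the raw <rotation> values look different from the printed matrix.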


14
General / Re: Camera coordinates to world coordinates using 4x4 matrix
« on: November 23, 2016, 05:44:24 PM »
Thank you, that looks like it would be pretty easy using the PhotoScan Python tools.

I'd like to understand the formulas behind the functions. I just found an older post you'd written about this, where you gave the formula as:

"transform vector from camera coordinate system into world's coordinates. M x Vc = Vw (M - camera rotation matrix, Vc - vector in camera's coordinates, Vw - vector in world's coordinates)"

Based on your message, is Vw in the geocentric coordinate system if I use M from the XML file? And what is the formula to convert Vw to a projected coordinate system?

15
General / Camera coordinates to world coordinates using 4x4 matrix
« on: November 23, 2016, 05:19:44 PM »
I want to convert camera coordinates to world coordinates.

I have values in the camera coordinate system as X, Y, Z, and I have the 4x4 matrix from the PhotoScan XML camera file. Do I just multiply the 4x4 matrix (M) by the camera vector as a 4x1 homogeneous matrix (Pc) to yield a world coordinate, or is there more to it?
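As a sanity check on the math itself (not the PhotoScan API), the homogeneous multiply can be sketched in plain Python: append 1 to the camera point, multiply by the 4x4 matrix, and drop the homogeneous component. Note this only yields coordinates in whatever frame M maps into; getting to a projected CRS requires a further conversion.

```python
def transform_point(m, p):
    """Apply a 4x4 transform M to a 3D point P:
    Pw = M * [Px, Py, Pz, 1]^T, then drop the homogeneous coordinate."""
    ph = list(p) + [1.0]
    out = [sum(m[i][j] * ph[j] for j in range(4)) for i in range(4)]
    return [c / out[3] for c in out[:3]]

# Example: a pure translation by (5, -2, 3)
M = [[1.0, 0.0, 0.0, 5.0],
     [0.0, 1.0, 0.0, -2.0],
     [0.0, 0.0, 1.0, 3.0],
     [0.0, 0.0, 0.0, 1.0]]
```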
