General / Inverting Brown-Conrady equations (for camera distortion)
« on: February 25, 2021, 01:21:19 AM »
Hi,
I am working on an application where I need to project 2D image pixel coordinates into the 3D camera coordinate system. I understand that this can be done with the Metashape .unproject method, for example:
Code:
import Metashape

chunk = Metashape.app.document.chunk # currently open document
camera = chunk.cameras[0] # test using first camera
cam_calib = camera.sensor.calibration # camera 0 calibration values
test_pixel = Metashape.Vector([100, 100]) # random point in image pixel coordinates
X, Y, Z = cam_calib.unproject(test_pixel) # result using unproject function
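As a quick sanity check (assuming Calibration also exposes a project() method, which I believe it does), the result can be projected straight back:
Code:
# round-trip check: project the unprojected point back into the image
print(cam_calib.project(Metashape.Vector([X, Y, Z]))) # should come back as approximately (100, 100)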
However, I would like to understand the equations behind this reverse projection. In particular, I have radial and tangential distortion parameters which I need to include. I understand and can replicate the forward projection from the camera coordinate system to pixel coordinates, as below, using the equations for frame cameras at the end of the documentation https://www.agisoft.com/pdf/photoscan-pro_1_4_en.pdf. Using this I get the point (100, 100) back as expected.
Code:
import math

# continuing from the previous snippet (cam_calib already defined)
test_pixel = Metashape.Vector([100, 100]) # random point in image pixel coordinates
X, Y, Z = cam_calib.unproject(test_pixel) # result using unproject function
# Test projection from camera crs to test pixel...
print('Inputting ' + str(X) + ',' + str(Y) + ',' + str(Z))
print('Expecting ' + str(test_pixel))
x = X/Z
y = Y/Z
### distortion corrections
r = math.sqrt(x**2 + y**2)
x_rad = x*(1 + cam_calib.k1*r**2 + cam_calib.k2*r**4 + cam_calib.k3*r**6 + cam_calib.k4*r**8)
x_tang = (cam_calib.p1*(r**2 + 2*x**2) + 2*cam_calib.p2*x*y)*(1 + cam_calib.p3*r**2 + cam_calib.p4*r**4)
y_rad = y*(1 + cam_calib.k1*r**2 + cam_calib.k2*r**4 + cam_calib.k3*r**6 + cam_calib.k4*r**8)
y_tang = (cam_calib.p2*(r**2 + 2*y**2) + 2*cam_calib.p1*x*y)*(1 + cam_calib.p3*r**2 + cam_calib.p4*r**4) # note: p2 with the y**2 term and p1 with the xy term, per the manual
x_dash = x_rad + x_tang
y_dash = y_rad + y_tang
###
# Equivalent to multiplying x_dash, y_dash by intrinsic matrix K
u = cam_calib.width*0.5 + cam_calib.cx + x_dash*cam_calib.f + x_dash*cam_calib.b1 + y_dash*cam_calib.b2
v = cam_calib.height *0.5 + cam_calib.cy + y_dash*cam_calib.f
# i.e. K = [[f + b1, b2, cx + width*0.5], [0, f, cy + height*0.5], [0, 0, 1]]
# so [u, v, 1] = K * [x_dash, y_dash, 1]
print('Got: ' + str(u) + ',' + str(v))
I also understand that I can invert the intrinsic matrix K in the final stage, and I have separately saved depth maps so I can later use the Z values. It is really the step where the Brown-Conrady distortion equations are applied that I am unsure how to perform in reverse. Would it be possible to give any insight into how this is done in the .unproject method? I have the parameters f, cx, cy, k1, k2, k3, p1 and p2 for this particular camera model.
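For what it is worth, here is a rough sketch of my current guess at how the inverse could be computed, using the fixed-point iteration that is commonly applied to Brown-Conrady style models. I am not claiming this is what .unproject does internally, and undistort_normalized / unproject_pixel are just names I made up:
Code:
import Metashape

# Rough sketch only: invert the distortion by fixed-point iteration,
# NOT necessarily what Calibration.unproject() does internally.
def undistort_normalized(x_dash, y_dash, c, n_iter=20):
    """Distorted normalised coords (x', y') -> undistorted (x, y)."""
    x, y = x_dash, y_dash  # initial guess: ignore distortion
    for _ in range(n_iter):
        r2 = x*x + y*y
        radial = 1 + c.k1*r2 + c.k2*r2**2 + c.k3*r2**3 + c.k4*r2**4
        tang_scale = 1 + c.p3*r2 + c.p4*r2**2
        dx = (c.p1*(r2 + 2*x*x) + 2*c.p2*x*y) * tang_scale
        dy = (c.p2*(r2 + 2*y*y) + 2*c.p1*x*y) * tang_scale
        # forward model is x' = x*radial + dx, so solve for x using the
        # current estimates of radial and dx (likewise for y)
        x = (x_dash - dx) / radial
        y = (y_dash - dy) / radial
    return x, y

def unproject_pixel(u, v, c, z=1.0):
    """Pixel (u, v) -> point in the camera coordinate system at depth z."""
    # invert the intrinsic (affine) step: v has no b1/b2 terms, so solve y' first
    y_dash = (v - c.height*0.5 - c.cy) / c.f
    x_dash = (u - c.width*0.5 - c.cx - y_dash*c.b2) / (c.f + c.b1)
    # invert the Brown-Conrady distortion
    x, y = undistort_normalized(x_dash, y_dash, c)
    return Metashape.Vector([x*z, y*z, z])

# compare against cam_calib.unproject(test_pixel) from the earlier snippet
print(unproject_pixel(100, 100, cam_calib))
A handful of iterations should be enough for moderate distortion, since the correction terms are small relative to the radial scale, but I would be grateful for any insight into how the real implementation handles this.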
Thanks in advance to anyone who might be able to advise!