Python and Java API / Re: How can I compute the area that a camera is covering?
« on: January 20, 2017, 10:45:00 PM »
Hello,
I've tried the script from a previous posting (quoted below) to obtain the four corner coordinates of the image footprint, but the script is exporting 10 vertices instead of 4. My camera width/height = 3456 x 2304. Also the footprint coordinates do not seem correct when displayed on the ground. Is this a projection issue? Any ideas? Thank you!!
For example:
Code:
FileName Pixel x Pixel y Vertex Lon Vertex Lat Vertex Alt
IMG_2667.JPG 1 1 219570.17726818 611774.69670128 855.638
IMG_2667.JPG 1001 1 219563.20451736 611767.92325199 855.856
IMG_2667.JPG 2001 1 219555.44606790 611762.07288836 856.364
IMG_2667.JPG 3456 1 219542.80837430 611763.63372321 858.517
IMG_2667.JPG 3456 1001 219539.55699424 611775.10267100 856.824
IMG_2667.JPG 3456 2304 219538.20162071 611780.67793972 856.743
IMG_2667.JPG 2456 2304 219539.68925572 611781.62191410 856.337
IMG_2667.JPG 1456 2304 219541.26649972 611782.79998516 856.049
IMG_2667.JPG 1 2304 219543.54260737 611784.81929548 855.836
IMG_2667.JPG 1 1304 219547.23769201 611783.38569132 855.838
IMG_2668.JPG 1 1 219559.12016914 611759.48322563 856.301
IMG_2668.JPG 1001 1 219550.10793864 611756.67986985 856.490
IMG_2668.JPG 2001 1 219539.66037803 611767.18872259 858.610
IMG_2668.JPG 3456 1 219531.09831331 611766.29381422 859.306
IMG_2668.JPG 3456 1001 219535.20783330 611778.50147168 858.160
IMG_2668.JPG 3456 2304 219536.29827064 611782.09493826 857.551
IMG_2668.JPG 2456 2304 219537.66672029 611781.72625022 856.782
IMG_2668.JPG 1456 2304 219539.51068606 611781.74005899 856.361
IMG_2668.JPG 1 2304 219542.53686453 611782.04697940 855.919
IMG_2668.JPG 1 1304 219544.99532538 611778.70517363 855.878
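Two notes on the output above. The ten vertices per image are exactly what the quoted script produces: it walks the image border in increments of step = 1000 pixels (top, right, bottom, then left edge), so a 3456 x 2304 image yields ten border points rather than just the four corners; restricting the steps list to the corners (the commented-out block in the script further down) gives exactly four. The coordinate columns are also not geographic longitude/latitude: chunk.crs.project() returns coordinates in whatever CRS the chunk uses, which here appears to be a projected system (Easting/Northing in metres), so only the column header is misleading, not the values. Since the footprint vertices are already in metres, the covered area asked about in the thread title can be estimated with the shoelace formula; a minimal sketch, assuming the border points of one camera are collected in order into a list of (x, y) tuples called footprint (a hypothetical name):
Code:
def polygon_area(points):
    """Shoelace formula: area of a simple polygon whose vertices are
    given in order as (x, y) pairs in a projected CRS (metres)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical usage with footprint vertices like the ones above:
# footprint = [(219570.177, 611774.697), (219563.205, 611767.923), ...]
# print(polygon_area(footprint), "square metres")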
Hello everybody!
I would like to compute the area covered by each of my cameras. I have already tried to use the code given as an example. I added the loop to go through each camera of my chunk. I also tried to select only the four corners. As a result I got... nothing. It seems like it doesn't find any intersections with the faces of my mesh. I therefore tried adding some intermediate pixels. I got some results, but it is not satisfying, because I don't get the vertex positions corresponding to my corners. I attached the resulting txt file at the end so that you can see for yourself.
During the process I printed the position of the pixel I am using. It seems like I do go through all of my corners [(0,0), (6015,0), (6015,3999), (0,3999)], since my sensor.width = 6016 and my sensor.height = 4000. Nevertheless I don't get any vertex coordinates for them.
It is always the same pixels for which I manage to get intersections, but never the four corners. I don't know what I am doing wrong; I barely changed the code from Alexey. I have tried many things but I don't have any ideas left. That is why I would like to know whether anybody has run into the same problem.
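One way to narrow this down is to test only the four corners and have the script say explicitly when a corner ray never intersects any face; if the reconstructed mesh simply does not extend far enough to be hit by the rays through the extreme image corners, that will show up as "no intersection" lines instead of silence. Below is a sketch of the per-camera body, reusing cross(), chunk, faces, vertices and file exactly as they are set up in the script further down; only the no-intersection line is new, and the >= / t > 0 checks are slight variations on the original test:
Code:
corners = [(0, 0),
           (sensor.width - 1, 0),
           (sensor.width - 1, sensor.height - 1),
           (0, sensor.height - 1)]

for x, y in corners:
    # Ray through the corner pixel: direction in chunk coordinates, camera centre as origin
    vect = camera.transform.mulv(sensor.calibration.unproject(PhotoScan.Vector([x, y])))
    p = PhotoScan.Vector(camera.center)
    found = False
    for face in faces:
        fv = face.vertices
        E1 = PhotoScan.Vector(vertices[fv[1]].coord - vertices[fv[0]].coord)
        E2 = PhotoScan.Vector(vertices[fv[2]].coord - vertices[fv[0]].coord)
        T = PhotoScan.Vector(p - vertices[fv[0]].coord)
        P = cross(vect, E2)
        Q = cross(T, E1)
        det = P * E1
        if det == 0:
            continue                # ray parallel to this face
        t = (Q * E2) / det          # distance along the ray
        u = (P * T) / det           # barycentric coordinates
        w = (Q * vect) / det
        if t > 0 and u >= 0 and w >= 0 and u + w <= 1:
            hit = (1 - u - w) * vertices[fv[0]].coord + u * vertices[fv[1]].coord + w * vertices[fv[2]].coord
            res = chunk.crs.project(chunk.transform.matrix.mulp(hit))
            file.write("%s\t%d\t%d\t%.8f\t%.8f\t%.3f\n" % (camera.label, x + 1, y + 1, res[0], res[1], res[2]))
            found = True
            break
    if not found:
        file.write("%s\t%d\t%d\tno intersection\n" % (camera.label, x + 1, y + 1))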
Here is the code:
Code:
import time
import PhotoScan

def cross(a, b):
    # Cross product of two 3D vectors
    result = PhotoScan.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x])
    return result

print("Script started")

#cam_index = PhotoScan.app.getInt("Input camera index (starting from zero): ")  # manual selection of the camera
save_path = PhotoScan.app.getSaveFileName("Specify output file:")
t0 = time.time()
file = open(save_path, "wt")
file.write('FileName Pixel x Pixel y Vertex Lon Vertex Lat Vertex Alt\n')  # header (values are written in the chunk CRS)

doc = PhotoScan.app.document
chunk = doc.chunk
model = chunk.model
faces = model.faces
vertices = model.vertices

for camera in chunk.cameras:
    sensor = camera.sensor
    print(camera)  # camera label

    step = 1000  # bigger value - faster processing
    # Walk the image border in "step"-pixel increments: top, right, bottom, then left edge
    steps = list(zip(list(range(0, sensor.width - 1, step)), [0] * ((sensor.width - 1) // step)))
    steps.extend(list(zip([sensor.width - 1] * ((sensor.height - 1) // step), list(range(0, sensor.height - 1, step)))))
    steps.extend(list(zip(list(range((sensor.width - 1), 0, -step)), [sensor.height - 1] * ((sensor.width - 1) // step))))
    steps.extend(list(zip([0] * ((sensor.height - 1) // step), list(range(sensor.height - 1, 0, -step)))))

    # Selection of the four corners:
    #ltop_corner = PhotoScan.Vector([0, 0])  # left top corner
    #rtop_corner = PhotoScan.Vector([sensor.width - 1, 0])  # right top corner
    #rbottom_corner = PhotoScan.Vector([sensor.width - 1, sensor.height - 1])  # right bottom corner
    #lbottom_corner = PhotoScan.Vector([0, sensor.height - 1])  # left bottom corner
    #
    # List of the four corners
    #steps = [ltop_corner, rtop_corner, rbottom_corner, lbottom_corner]

    print(steps)

    for x, y in steps:
        # Ray through pixel (x, y): direction in chunk coordinates, camera centre as origin
        point = PhotoScan.Vector([x, y])
        point = sensor.calibration.unproject(point)
        point = camera.transform.mulv(point)
        vect = point
        p = PhotoScan.Vector(camera.center)

        for face in faces:
            # Moller-Trumbore ray/triangle intersection test
            v = face.vertices
            E1 = PhotoScan.Vector(vertices[v[1]].coord - vertices[v[0]].coord)
            E2 = PhotoScan.Vector(vertices[v[2]].coord - vertices[v[0]].coord)
            D = PhotoScan.Vector(vect)
            T = PhotoScan.Vector(p - vertices[v[0]].coord)
            P = cross(D, E2)
            Q = cross(T, E1)
            result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)  # [t, u, v]

            if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
                # Intersection point from barycentric coordinates, transformed into the chunk CRS
                t = (1 - result[1] - result[2]) * vertices[v[0]].coord
                u = result[1] * vertices[v[1]].coord
                v_ = result[2] * vertices[v[2]].coord
                res = chunk.transform.matrix.mulp(u + v_ + t)
                res = chunk.crs.project(res)
                #file.write("{:>04d}".format(x + 1) + "\t" + "{:04d}".format(y + 1) + "\t" + "{:.8f}".format(res[0]) + "\t" + "{:.8f}".format(res[1]) + "\t" + "{:.4f}".format(res[2]) + "\n")
                file.write("%s\t%d\t%d\t%.8f\t%.8f\t%.3f\n" % (camera.label, x + 1, y + 1, res[0], res[1], res[2]))
                break  # finish when the first intersection is found

file.close()
t1 = time.time()
t1 -= t0
t1 = float(t1)
print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
I am not sure I have understood everything about the code, especially this step:
Code:
result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)
if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
    t = (1 - result[1] - result[2]) * vertices[v[0]].coord
    u = result[1] * vertices[v[1]].coord
    v_ = result[2] * vertices[v[2]].coord
If someone feels like explaining it to me, I would be very grateful. I am quite new to PhotoScan, so I don't really know whether I should post my question in this topic or somewhere else.
Best wishes,
Simon
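For reference, the step quoted above is the Moller-Trumbore ray/triangle intersection test. D is the ray direction through the pixel, p (via T) is the camera centre, E1 and E2 are two edges of the triangle, and the three components of result are [t, u, v]: t is the distance along the ray to the triangle's plane, and (u, v) are barycentric coordinates, so the ray actually hits the triangle when u >= 0, v >= 0 and u + v <= 1, and the hit point is (1 - u - v)*V0 + u*V1 + v*V2, which is exactly what the t / u / v_ lines then assemble. A standalone sketch of the same test in plain Python (no PhotoScan needed, hypothetical helper names) that can be run to see the numbers:
Code:
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def intersect(origin, direction, v0, v1, v2):
    """Return (t, u, v) if the ray hits the triangle, else None.

    t is the distance along the ray; u, v are barycentric coordinates,
    so the hit point is (1 - u - v)*v0 + u*v1 + v*v2.
    """
    E1, E2 = sub(v1, v0), sub(v2, v0)
    P = cross(direction, E2)
    det = dot(P, E1)
    if abs(det) < 1e-12:            # ray parallel to the triangle plane
        return None
    T = sub(origin, v0)
    Q = cross(T, E1)
    t = dot(Q, E2) / det
    u = dot(P, T) / det
    v = dot(Q, direction) / det
    if u < 0 or v < 0 or u + v > 1 or t < 0:
        return None
    return t, u, v

# Ray pointing straight down onto a triangle in the z = 0 plane:
print(intersect((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
# -> (1.0, 0.25, 0.25): hit one unit along the ray, inside the triangle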