Hello everybody!
I would like to compute the area covered by each of my cameras. I have already tried the code given as an example: I added a loop to go through each camera of my chunk, and I also tried to select only the four corners. As a result I got... nothing. It seems the script does not find any intersections with the faces of my mesh for those pixels. I therefore tried to add more pixels along the image borders. That gives some results, but it is not satisfying, as I don't get the vertex positions corresponding to my corners. I attached the txt file I get at the end so that you can see for yourself.
During the process I printed the position of the pixel being used. It looks like I do go through all four corners [(0,0), (6015,0), (6015,3999), (0,3999)], since my sensor.width = 6016 and my sensor.height = 4000. Nevertheless, I don't get any vertex coordinates for them.
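In case it helps, this is roughly the kind of check I ran to confirm the corner pixels and the corresponding ray directions. It only reuses the calls from the full script below; skipping cameras without a transform is just my own guess about photos that are not aligned:

import PhotoScan

chunk = PhotoScan.app.document.chunk

for camera in chunk.cameras:
    if camera.transform is None:  # my assumption: skip photos that are not aligned
        continue
    sensor = camera.sensor
    corners = [(0, 0),
               (sensor.width - 1, 0),
               (sensor.width - 1, sensor.height - 1),
               (0, sensor.height - 1)]
    for x, y in corners:
        # pixel -> direction in camera coordinates -> direction in chunk coordinates
        ray = camera.transform.mulv(sensor.calibration.unproject(PhotoScan.Vector([x, y])))
        print(camera.label, (x, y), ray)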
It is always the same pixels for which I manage to get intersections, but never the four corners. I don't know what I am doing wrong; I barely changed the code from Alexey. I have tried many things but I am out of ideas, which is why I would like to know whether anybody has run into the same problem.
Here is the code:
import time
import PhotoScan


def cross(a, b):
    result = PhotoScan.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x])
    return result


print("Script started")

#cam_index = PhotoScan.app.getInt("Input camera index (starting from zero): ")  # manual selection of a single camera
save_path = PhotoScan.app.getSaveFileName("Specify output file:")

t0 = time.time()
file = open(save_path, "wt")
file.write("FileName\tPixel x\tPixel y\tVertex Lon\tVertex Lat\tVertex Alt\n")  # header, tab-separated like the data rows

doc = PhotoScan.app.document
chunk = doc.chunk
model = chunk.model
faces = model.faces
vertices = model.vertices

for camera in chunk.cameras:
    sensor = camera.sensor
    print(camera)  # current camera

    step = 1000  # bigger value - faster processing
    # pixel positions along the four image borders, 'step' pixels apart
    steps = list(zip(list(range(0, sensor.width - 1, step)), [0] * ((sensor.width - 1) // step)))
    steps.extend(list(zip([sensor.width - 1] * ((sensor.height - 1) // step), list(range(0, sensor.height - 1, step)))))
    steps.extend(list(zip(list(range((sensor.width - 1), 0, -step)), [sensor.height - 1] * ((sensor.width - 1) // step))))
    steps.extend(list(zip([0] * ((sensor.height - 1) // step), list(range(sensor.height - 1, 0, -step)))))

    # Selection of the four corners:
    #ltop_corner = PhotoScan.Vector([0, 0])                                    # left top corner
    #rtop_corner = PhotoScan.Vector([sensor.width - 1, 0])                     # right top corner
    #rbottom_corner = PhotoScan.Vector([sensor.width - 1, sensor.height - 1])  # right bottom corner
    #lbottom_corner = PhotoScan.Vector([0, sensor.height - 1])                 # left bottom corner
    #
    # List of the four corners
    #steps = [ltop_corner, rtop_corner, rbottom_corner, lbottom_corner]

    print(steps)

    for x, y in steps:
        point = PhotoScan.Vector([x, y])
        point = sensor.calibration.unproject(point)  # pixel -> direction in camera coordinates
        point = camera.transform.mulv(point)         # direction in chunk (internal) coordinates
        vect = point
        p = PhotoScan.Vector(camera.center)          # ray origin: camera centre

        for face in faces:
            v = face.vertices

            E1 = PhotoScan.Vector(vertices[v[1]].coord - vertices[v[0]].coord)
            E2 = PhotoScan.Vector(vertices[v[2]].coord - vertices[v[0]].coord)
            D = PhotoScan.Vector(vect)
            T = PhotoScan.Vector(p - vertices[v[0]].coord)
            P = cross(D, E2)
            Q = cross(T, E1)

            result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)

            if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
                t = (1 - result[1] - result[2]) * vertices[v[0]].coord
                u = result[1] * vertices[v[1]].coord
                v_ = result[2] * vertices[v[2]].coord

                res = chunk.transform.matrix.mulp(u + v_ + t)  # internal -> geocentric coordinates
                res = chunk.crs.project(res)                   # geocentric -> chunk CRS (lon/lat/alt)

                #file.write("{:>04d}".format(x + 1) + "\t" + "{:04d}".format(y + 1) + "\t" + "{:.8f}".format(res[0]) + "\t" + "{:.8f}".format(res[1]) + "\t" + "{:.4f}".format(res[2]) + "\n")
                file.write("%s\t%d\t%d\t%.08f\t%.08f\t%.3f\n" % (camera.label, x + 1, y + 1, res[0], res[1], res[2]))

                break  # finish when the first intersection is found

file.close()

t1 = time.time()
t1 -= t0
t1 = float(t1)
print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
I am not sure I have understood everything about the code, especially this step (I have put my attempted reading of it just below):
result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)
if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
    t = (1 - result[1] - result[2]) * vertices[v[0]].coord
    u = result[1] * vertices[v[1]].coord
    v_ = result[2] * vertices[v[2]].coord
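From what I can tell, this is the Moller-Trumbore ray/triangle intersection test written with PhotoScan vectors: E1 and E2 are the two triangle edges, D is the ray direction, T goes from the first vertex to the camera centre, and result holds (t, u, v), where t is the distance along the ray and (u, v) are the barycentric coordinates of the hit point inside the triangle. The t, u, v_ variables inside the if then rebuild the hit point as (1-u-v)*V0 + u*V1 + v*V2, so the t there is not the ray distance, which confused me at first. Here is how I would write the same test on its own, in plain Python, so that someone can tell me whether I have read it correctly (the helper names sub/dot/cross and ray_triangle are just mine for this sketch):

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return (t, u, v) if the ray origin + t*direction hits triangle v0-v1-v2, else None."""
    e1 = sub(v1, v0)                 # E1 in the script
    e2 = sub(v2, v0)                 # E2 in the script
    p = cross(direction, e2)         # P in the script
    det = dot(p, e1)                 # P * E1 in the script ('*' is the dot product there)
    if abs(det) < eps:               # ray (nearly) parallel to the triangle plane
        return None
    t_vec = sub(origin, v0)          # T in the script: camera centre minus first vertex
    q = cross(t_vec, e1)             # Q in the script
    u = dot(p, t_vec) / det          # result[1]
    v = dot(q, direction) / det      # result[2]
    t = dot(q, e2) / det             # result[0]: distance along the ray
    if u < 0 or v < 0 or u + v > 1:  # outside the triangle (the script uses strict '<' here)
        return None
    return t, u, v                   # hit point = (1-u-v)*v0 + u*v1 + v*v2 = origin + t*direction

If I call this with origin = camera.center, direction = the unprojected corner ray and the three vertices[...].coord values, I would expect the same numbers as in result. If my reading is right, I also wonder whether the strict 0 < result[1] and 0 < result[2] tests could reject a ray that passes exactly through a triangle edge or vertex, but I am not sure that is what happens with my corners.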
If someone feels like explaining it to me (or correcting my reading above), I would be very grateful. I am quite new to PhotoScan, so I don't really know whether I should post my question in this topic or somewhere else.
Best wishes,
Simon