Agisoft Metashape

Agisoft Metashape => Python and Java API => Topic started by: nickponline on July 29, 2014, 01:50:10 AM

Title: How can I compute the area that a camera is covering?
Post by: nickponline on July 29, 2014, 01:50:10 AM
Hey Alexey!


For each camera, I am looking to compute the coordinates of the quad that the camera sees on the ground. Is this information available (in latitude and longitude), and if not, how do I compute it?

Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on July 29, 2014, 06:20:55 AM
Hello nickponline,

You need to intersect the vectors going through camera center and each image corner with the reconstructed surface.
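
For illustration, here is a minimal sketch of building one such corner ray with the PhotoScan 1.0.x API used below (the photo index and corner pixel are placeholders); intersecting this ray with the mesh then gives the ground point:
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.activeChunk
camera = chunk.photos[0]  #any aligned photo

corner = PhotoScan.Vector([0, 0])                 #top-left pixel of the image frame
direction = camera.calibration.unproject(corner)  #ray direction in camera coordinates
direction = camera.transform.mulv(direction)      #rotate the direction into chunk coordinates
origin = PhotoScan.Vector(camera.center)          #ray origin = camera projection center
#intersecting the ray (origin, direction) with the reconstructed surface gives the footprint corner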
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on July 29, 2014, 09:39:14 AM
Actually we have a sample script that outputs the coordinates of the "footprint" borders of the image frame projected on the reconstructed surface.

If you have any questions regarding the script, please let me know, but I assume that you can easily adapt the code to your task.

Code: [Select]
#compatibility - PhotoScan Professional v 1.0.4

import time
import PhotoScan

def cross(a, b):
    result = PhotoScan.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x])
    return result

print("Script started")

cam_index = PhotoScan.app.getInt("Input camera index (starting from zero): ")
save_path = PhotoScan.app.getSaveFileName("Specify output file:")

t0 = time.time()
file = open(save_path, "wt")

doc = PhotoScan.app.document
chunk = doc.activeChunk
model = chunk.model
faces = model.faces
vertices = model.vertices

camera = chunk.photos[cam_index]
print(camera) #camera label

step = 100 #bigger value - faster processing.
steps = list(zip(list(range(0, camera.width - 1, step)), [0]*((camera.width - 1)// step)))
steps.extend( list(zip([camera.width - 1]*((camera.height - 1) // step), list(range(0, camera.height - 1, step)))) )
steps.extend( list(zip(list(range((camera.width - 1), 0, -step)), [camera.height - 1]*((camera.width - 1)// step))))
steps.extend( list(zip([0]*((camera.height - 1) // step), list(range(camera.height - 1, 0, -step)))) )

for x, y in steps:

    point = PhotoScan.Vector([x, y])
    point = camera.calibration.unproject(point)  #ray direction in camera coordinates
    point = camera.transform.mulv(point)         #rotate the direction into chunk coordinates
    vect = point
    p = PhotoScan.Vector(camera.center)          #ray origin = camera projection center

    for face in faces:

        v = face.vertices

        #Möller-Trumbore ray/triangle intersection
        E1 = PhotoScan.Vector(vertices[v[1]].coord - vertices[v[0]].coord)
        E2 = PhotoScan.Vector(vertices[v[2]].coord - vertices[v[0]].coord)
        D = PhotoScan.Vector(vect)
        T = PhotoScan.Vector(p - vertices[v[0]].coord)
        P = cross(D, E2)
        Q = cross(T, E1)

        #result = (t, u, v): ray parameter and barycentric coordinates ("*" between vectors is the dot product)
        result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)

        if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
            t = (1 - result[1] - result[2]) * vertices[v[0]].coord
            u = result[1] * vertices[v[1]].coord
            v_ = result[2] * vertices[v[2]].coord

            res = chunk.transform.mulp(u + v_ + t)  #internal -> geocentric coordinates
            res = chunk.crs.project(res)            #geocentric -> chunk coordinate system

            file.write("{:>04d}".format(x + 1) + "\t" + "{:04d}".format(y + 1) + "\t" + "{:.4f}".format(res[0]) + "\t" + "{:.4f}".format(res[1]) + "\t" + "{:.4f}".format(res[2]) + "\n")
            #face.selected = True
            break #finish when the first intersection is found


file.close()
t1 = time.time()
t1 -= t0
t1 = float(t1)
print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
Title: Re: How can I compute the area that a camera is covering?
Post by: nickponline on July 30, 2014, 12:26:02 AM
Thanks. Are camera.width and camera.height the sensor width and height of the camera?
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on August 13, 2014, 03:59:30 PM
Hello nickponline,

camera.width and camera.height are actually the image resolution (in pixels).
Title: Re: How can I compute the area that a camera is covering?
Post by: AerRob on January 23, 2015, 07:08:23 PM
Hello Alexey,

when I start the Python script, PhotoScan crashes; it continues to run the procedure but gives no result.
Where am I going wrong? I entered the camera size in the listing (camera.width, camera.height) and the name of the output file when prompted ("Specify output file:", i.e. test.txt). What else should I enter?

Thank You
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on January 24, 2015, 12:12:52 AM
Hello AerRob,

The script is for version 1.0.4, so it needs to be modified for version 1.1.0.

I'll check it a little bit later, but from what I see now:
doc.activeChunk -> doc.chunk
camera.width -> camera.sensor.width
camera.height -> camera.sensor.height
chunk.transform -> chunk.transform.matrix
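
A rough sketch of the renamed accessors in the 1.1.0 API (illustrative only; the full updated script is posted below):
Code: [Select]
import PhotoScan

cam_index = 0                          #example index
doc = PhotoScan.app.document
chunk = doc.chunk                      #was doc.activeChunk
camera = chunk.cameras[cam_index]      #chunk.photos became chunk.cameras
sensor = camera.sensor
width, height = sensor.width, sensor.height   #were camera.width / camera.height
T = chunk.transform.matrix             #was chunk.transform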
Title: Re: How can I compute the area that a camera is covering?
Post by: AerRob on January 26, 2015, 05:28:21 PM
Hello Alexey.
I have the same version of PhotoScan (1.0.4), but the program doesn't give me any result.
Can I test the program in the new version of PhotoScan, with the changes?

Thanks for the reply
Rob
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on January 26, 2015, 06:03:18 PM
Hello Rob,

You can update PhotoScan to the current version free of charge. Your license key is valid for all versions up to 1.9.x.

Note that the script requires the mesh model to be generated first.
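
A simple guard at the top of the script makes that requirement explicit (sketch only):
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
if not chunk.model:
    raise Exception("Build the mesh model before running the footprint script")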
Title: Re: How can I compute the area that a camera is covering?
Post by: AerRob on January 26, 2015, 06:32:04 PM
Hello Alexey,

I have updated PhotoScan and my workflow has reached the 3D model.
Now I have run the .py program, but a new error appears:
......in <module>
    camera = chunk.photos[cam_index]
AttributeError: 'PhotoScan.Chunk' object has no attribute 'photos'.

Thanks again.
Rob
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on January 26, 2015, 06:42:24 PM
Hello Rob,

Please try this one:
Code: [Select]
#compatibility - PhotoScan Professional v 1.1.0
#
#requires: 1) georeferenced chunk, 2) reconstructed mesh

import time
import PhotoScan

def cross(a, b):
    result = PhotoScan.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x])
    return result

print("Script started")

cam_index = PhotoScan.app.getInt("Input camera index (starting from zero): ")
save_path = PhotoScan.app.getSaveFileName("Specify output file:")

t0 = time.time()
file = open(save_path, "wt")

doc = PhotoScan.app.document
chunk = doc.chunk
model = chunk.model
faces = model.faces
vertices = model.vertices

camera = chunk.cameras[cam_index]
sensor = camera.sensor
print(camera) #camera label

step = 100 #bigger value - faster processing.
steps = list(zip(list(range(0, sensor.width - 1, step)), [0]*((sensor.width - 1)// step)))
steps.extend( list(zip([sensor.width - 1]*((sensor.height - 1) // step), list(range(0, sensor.height - 1, step)))) )
steps.extend( list(zip(list(range((sensor.width - 1), 0, -step)), [sensor.height - 1]*((sensor.width - 1)// step))))
steps.extend( list(zip([0]*((sensor.height - 1) // step), list(range(sensor.height - 1, 0, -step)))) )

for x, y in steps:

    point = PhotoScan.Vector([x, y])
    point = sensor.calibration.unproject(point)
    point = camera.transform.mulv(point)
    vect = point
    p = PhotoScan.Vector(camera.center)

    for face in faces:

        v = face.vertices

        E1 = PhotoScan.Vector(vertices[v[1]].coord - vertices[v[0]].coord)
        E2 = PhotoScan.Vector(vertices[v[2]].coord - vertices[v[0]].coord)
        D = PhotoScan.Vector(vect)
        T = PhotoScan.Vector(p - vertices[v[0]].coord)
        P = cross(D, E2)
        Q = cross(T, E1)

        result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)

        if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
            t = (1 - result[1] - result[2]) * vertices[v[0]].coord
            u = result[1] * vertices[v[1]].coord
            v_ = result[2] * vertices[v[2]].coord

            res = chunk.transform.matrix.mulp(u + v_ + t)
            res = chunk.crs.project(res)

            file.write("{:>04d}".format(x + 1) + "\t" + "{:04d}".format(y + 1) + "\t" + "{:.4f}".format(res[0]) + "\t" + "{:.4f}".format(res[1]) + "\t" + "{:.4f}".format(res[2]) + "\n")
            #face.selected = True
            break #finish when the first intersection is found


file.close()
t1 = time.time()
t1 -= t0
t1 = float(t1)
print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
Title: Re: How can I compute the area that a camera is covering?
Post by: AerRob on January 28, 2015, 05:24:02 PM
Hello Alexey,

thanks for the new script; I can test it.

P.S. In my script I had forgotten to enter one command.

Thanks again
Rob
Title: Re: How can I compute the area that a camera is covering?
Post by: markallan on March 31, 2015, 04:34:17 PM
Hi,

AerRob, did you manage to use this script?

I don't receive an error message when using the second script here (I received the same '...object has no attribute...' message with the first script), but Photoscan crashes two/three minutes after trying to run the second.

Currently running Photoscan v1.1.4 with a small image set (35 images, 5 GPS markers), 48.6 million points in the dense cloud and 9.7 million faces on the resulting model.

I would be most grateful if you could suggest anything.

Mark
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on March 31, 2015, 04:37:17 PM
Hello Mark,

Could you please try the script on the decimated model first (for example 50 000 - 100 000 polygons) and report if PhotoScan still crashes?

And have you submitted the crash report?
Title: Re: How can I compute the area that a camera is covering?
Post by: markallan on March 31, 2015, 04:53:03 PM
Hi Alexey,

Thank you for the advice - decimating the mesh to 50,000 polygons has worked. Must be time for a better machine!

Many thanks,

Mark
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on March 31, 2015, 05:03:37 PM
Hello Mark,

Actually Python in PhotoScan is quite slow, so looping across millions of polygons will take a long time.
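
If you prefer to decimate from the Python console, here is a sketch assuming your build exposes Chunk.decimateModel (check the API reference for your exact version):
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
#assumption: decimateModel is available in this build; target roughly the polygon count suggested above
chunk.decimateModel(face_count=50000)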
Title: Re: How can I compute the area that a camera is covering?
Post by: AndyC on April 27, 2015, 01:32:28 PM
I've a few questions about the latest script (for v1.1.0); apologies in advance if they're rather basic, as I'm rather new to Python.

(1) Is the user input 'camera index' the total number of 'cameras' (images) for which footprints should be exported?

(2) When specifying the path for the output file, is it necessary to specify the format? If so, which formats are permissible?

(3) Should the 'step' parameter be adjusted for the coordinate system? (e.g., if using a coordinate system in metres, with camera footprints of ~30 m, would it be best to reduce the step from 100 down to perhaps 1?)

(4) Will each camera footprint be exported as an individual unit (e.g., multiple polygons in a single shapefile), or will they be amalgamated?

Thanks, Andy
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on April 27, 2015, 02:20:48 PM
Hello Andy,

1) The camera index starts from zero, so the last camera in a chunk of N cameras has index (N - 1). Using N is incorrect and may crash the script or lead to unexpected results.

2) It's recommended to specify the format for your own convenience. I suggest using the .txt extension.

3) The step parameter is actually in pixels, going along the image frame perimeter from the top-left corner in the clockwise direction.

4) The output file is for a single camera only (the one specified by the user) and includes the intersections of the rays going through the camera center and the pixels on the image frame border.
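
For example, a quick sketch of picking the last valid index in the console:
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
n = len(chunk.cameras)
camera = chunk.cameras[n - 1]   #last camera has index N - 1; chunk.cameras[n] would raise IndexError
print(camera.label, "has index", n - 1)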
Title: Re: How can I compute the area that a camera is covering?
Post by: Paulo on April 27, 2015, 09:46:28 PM
Alexei,

it would be very interesting to have a script reporting the area covered on the mesh for every camera, but giving only the coordinates of the 4 corners of each camera frame...
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on April 27, 2015, 11:07:42 PM
Hello pap1956,

You can modify the script by removing the "stepping" and leaving only the four corners of the image. For multiple photos it's necessary to add a loop like the following:
Code: [Select]
for camera in chunk.cameras:
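
As a sketch of that per-camera, four-corner variant (illustrative skeleton; the ray/mesh intersection body is the same as in the full script above):
Code: [Select]
for camera in chunk.cameras:
    if not camera.transform:   #skip photos that were not aligned
        continue
    sensor = camera.sensor
    #only the four image corners instead of the stepped border
    steps = [(0, 0), (sensor.width - 1, 0), (sensor.width - 1, sensor.height - 1), (0, sensor.height - 1)]
    for x, y in steps:
        #...unproject the pixel, intersect the ray with the mesh faces,
        #   and write camera.label together with the projected coordinates
        pass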
Title: Re: How can I compute the area that a camera is covering?
Post by: Paulo on April 29, 2015, 07:30:56 PM
Alexey,

I put in a loop to go through all the cameras in the chunk and write the image footprints to a file, but I got the following error:

 File "C:/Users/paul.pelletier/Documents/Photoscan/ImageFootprint_4corners.py", line 26
    for camera in chunk.cameras
                              ^
SyntaxError: invalid syntax

Can you help me?

Title: Re: How can I compute the area that a camera is covering?
Post by: AndyC on April 29, 2015, 07:38:38 PM
Pap1956, probably not the cause of the error, but is your script trying to query the projected coordinates of point 0,0 twice?
Title: Re: How can I compute the area that a camera is covering?
Post by: Paulo on April 29, 2015, 07:49:58 PM
Yes, as I want to output the 4 corners of each camera plus the repeated 1st point, so that I can import the polygons into Global Mapper.
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on April 29, 2015, 10:54:29 PM
Hello pap1956,

The line with the "for" statement should end with a colon ":".

Also, if you want to include the corner export loop for each camera, you need to indent all the lines of this loop so that they are nested inside the outer per-camera loop.
Title: Re: How can I compute the area that a camera is covering?
Post by: Paulo on April 30, 2015, 04:26:07 PM
Thanks for your help, Alexei...

It works but is very slow due to the density of the mesh (a million faces). As I just want an approximate view of the image coverage on the ground, I imported a crude mesh from an ASTER DEM into PhotoScan; now the script runs fast and I get a text file with a polygon (5 vertices) for each camera...
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on April 30, 2015, 04:33:51 PM
Hello pap1956,

The script speed depends on the number of faces in the mesh model.

The fifth vertex of the exported polygon, according to the script, should be the same as the first one, as the (0,0) point is used twice.
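
For a closed footprint polygon as described, the list of corners can simply repeat the first point, e.g. (sketch, assuming sensor = camera.sensor as in the scripts above):
Code: [Select]
w, h = sensor.width, sensor.height
steps = [(0, 0), (w - 1, 0), (w - 1, h - 1), (0, h - 1), (0, 0)]   #5 vertices, first = last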
Title: Re: How can I compute the area that a camera is covering?
Post by: simon_29 on June 28, 2016, 11:07:13 AM
Hello everybody !

I would like to compute the area covered by each of my cameras. I have already tried to use the code given as an example. I added the loop to go through each camera of my chunk. I also tried to select only the four corners. As a result I got... nothing. It seems like it doesn't find any intersections with the faces of my mesh. Therefore I tried to add some pixels. I got some results, but it is not satisfying as I don't get the vertex positions corresponding to my corners. I attached the txt file that I get at the end so that you can see for yourself.

During the process I printed the position of the pixel I am using. It seems like I go through all of my corners [(0,0), (6015,0), (6015,3999), (0,3999)], as my sensor.width=6016 and my sensor.height=4000. Nevertheless I don't get any vertex coordinates.

It is always for the same pixels that I manage to have intersections, but never the four corners. I don't know what I am doing wrong. I barely changed the code from Alexey. I have tried many things but I don't have any ideas left. This is why I would like to know if anybody has met the same problem.

Here is the code:

Code: [Select]
import time
import PhotoScan

def cross(a, b):
result = PhotoScan.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y *b.x])
return result

print("Script started")


#cam_index = PhotoScan.app.getInt("Input camera index (starting from zero): ") Manual selection of the camera
save_path = PhotoScan.app.getSaveFileName("Specify output file:")


t0 = time.time()
file = open(save_path, "wt")
file.write('FileName    Pixel x    Pixel y    Vertex Lon    Vertex Lat    Vertex Alt\n') # Header
doc = PhotoScan.app.document
chunk = doc.chunk
model = chunk.model
faces = model.faces
vertices = model.vertices

for camera in chunk.cameras:
    sensor = camera.sensor
    print(camera) #camera label

    step = 1000 #bigger value - faster processing.
    steps = list(zip(list(range(0, sensor.width - 1, step)), [0]*((sensor.width - 1)// step)))
    steps.extend( list(zip([sensor.width - 1]*((sensor.height - 1) // step), list(range(0, sensor.height - 1, step)))) )
    steps.extend( list(zip(list(range((sensor.width - 1), 0, -step)), [sensor.height - 1]*((sensor.width - 1)// step))))
    steps.extend( list(zip([0]*((sensor.height - 1) // step), list(range(sensor.height - 1, 0, -step)))) )
   
# Selection of the four corners:   
    #ltop_corner=PhotoScan.Vector([0, 0]) # left top corner
    #rtop_corner=PhotoScan.Vector([sensor.width - 1, 0]) # right top corner
    #rbottom_corner=PhotoScan.Vector([sensor.width - 1, sensor.height - 1]) # right bottom corner
    #lbottom_corner=PhotoScan.Vector([0, sensor.height - 1]) # left bottom corner
    #
    # List of the four corners
    #steps=[ltop_corner,rtop_corner,rbottom_corner,lbottom_corner]     
   
    print(steps)
 
    for x,y in steps:
        point = PhotoScan.Vector([x, y])
        point = sensor.calibration.unproject(point)
        point = camera.transform.mulv(point)
        vect = point
        p = PhotoScan.Vector(camera.center)
        for face in faces:
            v = face.vertices
            E1 = PhotoScan.Vector(vertices[v[1]].coord - vertices[v[0]].coord)
            E2 = PhotoScan.Vector(vertices[v[2]].coord - vertices[v[0]].coord)
            D = PhotoScan.Vector(vect)
            T = PhotoScan.Vector(p - vertices[v[0]].coord)
            P = cross(D, E2)
            Q = cross(T, E1)
            result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)
            if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
                t = (1 - result[1] - result[2]) * vertices[v[0]].coord
                u = result[1] * vertices[v[1]].coord
                v_ = result[2] * vertices[v[2]].coord
               
                res = chunk.transform.matrix.mulp(u + v_ + t)
                res = chunk.crs.project(res)
               
                #file.write( "{:>04d}".format(x + 1) + "\t" + "{:04d}".format(y + 1) + "\t" + "{:.8f}".format(res[0]) + "\t" + "{:.8f}".format(res[1]) + "\t" + "{:.4f}".format(res[2]) + "\n")
                file.write("%s""\t""%d""\t""%d""\t""%.08f""\t""%.08f""\t""%.3f \n" % (camera.label, x+1,y+1,res[0],res[1],res[2]))
                break #finish when the first intersection is found

file.close()
t1 = time.time()
t1 -= t0
t1 = float(t1)
print("Script finished in " + "{:.2f}".format(t1) + " seconds.")

I am not sure I have understood everything about the code, especially this step:

Code: [Select]
result = PhotoScan.Vector([Q * E2, P * T, Q * D]) / (P * E1)
 if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
                t = (1 - result[1] - result[2]) * vertices[v[0]].coord
                u = result[1] * vertices[v[1]].coord
                v_ = result[2] * vertices[v[2]].coord

If someone feels like explaining it to me, I would be very grateful. I am quite new to Photoscan so I don't really know if I should post my question in this topic or somewhere else.

Best wishes,

Simon


Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on June 28, 2016, 11:10:41 AM
Hello Simon,

Maybe you can send the PSZ file with the alignment results and model (can be decimated) so that we could reproduce the problem (no need for dense cloud or depth maps in the project)?
Title: Re: How can I compute the area that a camera is covering?
Post by: simon_29 on June 28, 2016, 12:19:52 PM
Hello Alexey, first thank you very much for such a quick answer! I have sent the psz file to support@agisoft.com. You have probably received a download link. Just tell me if I need to send it again.

Simon
Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on June 28, 2016, 01:32:32 PM
Hello Simon,

Thank you for providing the project files.

Are you sure that the rays through the corners of the input images really intersect the mesh model? There are no image thumbnails in the project, but I tried to place markers in the very corners of the images (using the Create Marker option) and no automatic projections were created on the other photos.
Title: Re: How can I compute the area that a camera is covering?
Post by: simon_29 on June 28, 2016, 03:26:45 PM
I think I get what you are saying. The four corners of my pictures correspond either to the sky or the sea (background with no relevant information for me; I have attached a screenshot), so it is logical that the four corners don't intersect with my mesh.

Thank you for noticing it. I will try to estimate which are the last pixels that "see" my structure.

Thank you for your answer.

Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on June 28, 2016, 03:51:10 PM
Hello Simon,

Probably, you can use the steps only along the shorter sides with some interval (like 50-100 px) to track actual footprints.
Title: Re: How can I compute the area that a camera is covering?
Post by: simon_29 on June 28, 2016, 04:02:15 PM
Yes, this is what I plan to do. I will look at the pixels across the middle line of my image, from (1,2000) to (6016,2000), which fits with the structure I am studying.

Code: [Select]
steps = [(x, (sensor.height - 1) // 2) for x in range(0, sensor.width - 1, step)]  #middle row of the image

As a result I should get the borders of my image footprint, right?
Title: Re: How can I compute the area that a camera is covering?
Post by: bec1010 on January 20, 2017, 10:45:00 PM
Hello,

I've tried the script from a previous posting (simon_29's script above) to obtain the four corner coordinates of the image footprint, but the script exports 10 vertices instead of 4. My camera width/height = 3456 x 2304. Also, the footprint coordinates do not seem correct when displayed on the ground. Is this a projection issue? Any ideas? Thank you!

For example:

Code: [Select]

FileName    Pixel x    Pixel y    Vertex Lon    Vertex Lat    Vertex Alt
IMG_2667.JPG 1 1 219570.17726818 611774.69670128 855.638
IMG_2667.JPG 1001 1 219563.20451736 611767.92325199 855.856
IMG_2667.JPG 2001 1 219555.44606790 611762.07288836 856.364
IMG_2667.JPG 3456 1 219542.80837430 611763.63372321 858.517
IMG_2667.JPG 3456 1001 219539.55699424 611775.10267100 856.824
IMG_2667.JPG 3456 2304 219538.20162071 611780.67793972 856.743
IMG_2667.JPG 2456 2304 219539.68925572 611781.62191410 856.337
IMG_2667.JPG 1456 2304 219541.26649972 611782.79998516 856.049
IMG_2667.JPG 1 2304 219543.54260737 611784.81929548 855.836
IMG_2667.JPG 1 1304 219547.23769201 611783.38569132 855.838
IMG_2668.JPG 1 1 219559.12016914 611759.48322563 856.301
IMG_2668.JPG 1001 1 219550.10793864 611756.67986985 856.490
IMG_2668.JPG 2001 1 219539.66037803 611767.18872259 858.610
IMG_2668.JPG 3456 1 219531.09831331 611766.29381422 859.306
IMG_2668.JPG 3456 1001 219535.20783330 611778.50147168 858.160
IMG_2668.JPG 3456 2304 219536.29827064 611782.09493826 857.551
IMG_2668.JPG 2456 2304 219537.66672029 611781.72625022 856.782
IMG_2668.JPG 1456 2304 219539.51068606 611781.74005899 856.361
IMG_2668.JPG 1 2304 219542.53686453 611782.04697940 855.919
IMG_2668.JPG 1 1304 219544.99532538 611778.70517363 855.878





Title: Re: How can I compute the area that a camera is covering?
Post by: Alexey Pasumansky on January 21, 2017, 01:32:49 PM
Hello bec1010,

Some variations of the script also write intermediate points along the image border. If you need only the four corners, you need to create steps as a list of four elements.

Code: [Select]
steps = [(0, 0), (sensor.width-1, 0), (sensor.width-1, sensor.height-1), (0, sensor.height-1)]
This should go in place of the five lines defining steps via list(zip(...)).