Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - b3059651

Python and Java API / Export markers per image
« on: December 10, 2015, 05:11:16 PM »
Hi all,

I am using educational version 1.1.6 and I am interested in exporting markers (many) per camera. What I currently do is import markers through the reference panel, and then I can see the grey flags on the images. Is it possible to export only the visible markers of each image with their pixel positions, so that in the end I get a txt file that contains:

camera_label, marker_label, x_pixels, y_pixels

I am attaching a photo as an example of what I want.
So far I have been experimenting with a Python script that reads a txt file with the 3D positions X, Y, Z of all the markers and a manually entered camera label, and prints the pixel coordinates to the console. It works fine, but it also gives the pixel coordinates of points that are not included in the specified camera. I am wondering if there is any way to solve this automatically, so that I get only the included markers, as you can see in the attached picture.

Thanks a lot for any feedback!

The script I am using is:

Code:
#marker import script
#input file format:
#marker_label,x,y,z
#(comma separator)

import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

path = PhotoScan.app.getOpenFileName("Specify input file with marker coordinates:")

print("Import started...")  #informational message

file = open(path, "rt")  #input file

eof = False
line = file.readline()
if len(line) == 0:
    eof = True

while not eof:

    sp_line = line.rsplit(",", 4)  #split the line into its comma-separated fields

    oid = sp_line[0]       #marker label
    x = float(sp_line[1])  #X coordinate of the marker in the chunk coordinate system
    y = float(sp_line[2])  #Y coordinate
    z = float(sp_line[3])  #Z coordinate
    camera_index = 82      #index of the camera in chunk.cameras (hardcoded for now)

    photo_0 = chunk.cameras[camera_index]
    point3D = PhotoScan.Vector([x, y, z])
    point_geocentric = chunk.crs.unproject(point3D)  #chunk CRS -> geocentric
    point_internal = chunk.transform.matrix.inv().mulp(point_geocentric)  #geocentric -> internal frame

    proj = photo_0.project(point_internal)  #projection of the current point in pixels
    print(photo_0.label, oid, proj.x, proj.y)

    line = file.readline()  #read the next line of the input file
    if not len(line):
        eof = True
        break  #end of file

file.close()
print("Script finished")  #informational message
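A simple way to keep only the markers that actually fall inside a given photo would be to test the projected pixel coordinates against the image dimensions (in PhotoScan these would come from the camera's sensor width and height; the helper below is a plain-Python sketch with made-up marker names and values, not PhotoScan API code):

```python
def inside_image(x, y, width, height):
    """Return True if pixel coordinates (x, y) fall inside an image
    of the given width and height (in pixels)."""
    return 0.0 <= x < width and 0.0 <= y < height

def filter_visible(projections, width, height):
    """Keep only (marker_label, x, y) tuples whose pixel coordinates
    land inside the image frame."""
    return [(label, x, y) for (label, x, y) in projections
            if inside_image(x, y, width, height)]

# hypothetical projected markers for one 4000 x 3000 px photo
projections = [("M1", 120.5, 2950.0),   # inside the frame
               ("M2", -35.2, 100.0),    # left of the frame
               ("M3", 4100.0, 10.0)]    # right of the frame
visible = filter_visible(projections, 4000, 3000)
print(visible)  # only M1 survives the bounds check
```

A bounds check alone would not detect markers hidden behind geometry, but it does remove the projections that fall outside the frame.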

Python and Java API / Accuracy of the estimated camera positions
« on: October 29, 2015, 10:09:04 PM »
Hi all,

I know that estimating the precision of the camera positions after the bundle adjustment involves many parameters and is computationally expensive.
It would be ideal if this were possible to compute. Is this available in version 1.1.6, or are there any built-in Python functions to estimate this for each camera?
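In case it helps, one workaround I can think of (a plain-Python sketch with made-up coordinates, not a PhotoScan API call) is to compare the estimated camera positions against independently surveyed reference positions and report an RMSE per axis:

```python
import math

def rmse_per_axis(estimated, reference):
    """Root-mean-square difference per axis between two equal-length
    lists of (x, y, z) camera positions."""
    n = len(estimated)
    sums = [0.0, 0.0, 0.0]
    for est, ref in zip(estimated, reference):
        for i in range(3):
            sums[i] += (est[i] - ref[i]) ** 2
    return tuple(math.sqrt(s / n) for s in sums)

# hypothetical estimated vs. surveyed positions for two cameras
est = [(10.0, 20.0, 5.0), (14.0, 21.0, 5.5)]
ref = [(10.1, 19.9, 5.2), (13.9, 21.2, 5.4)]
print(rmse_per_axis(est, ref))
```

This only measures accuracy against an external reference; it is not the internal covariance of the adjustment, which is what a built-in function would ideally expose.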

Thanks a lot in advance.

Python and Java API / Reprojection error of sparse point cloud
« on: April 30, 2015, 10:20:33 PM »
Hi all,

The support we get from this forum to solve issues is great. It seems that Agisoft provides fantastic results. :)

I would like to export the reprojection error of a sparse point cloud in order to perform accuracy assessment of my results.

This follows the same question from here:

Any feedback would be greatly appreciated!
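For reference, my understanding is that the reprojection error of a tie point in one image is the pixel distance between the measured projection and the reprojection of the corresponding 3D point. A plain-Python sketch with made-up coordinates:

```python
import math

def reprojection_error(measured, reprojected):
    """Pixel distance between the measured image observation and the
    reprojection of the corresponding 3D point."""
    dx = measured[0] - reprojected[0]
    dy = measured[1] - reprojected[1]
    return math.hypot(dx, dy)

def mean_error(pairs):
    """Mean reprojection error over (measured, reprojected) pairs."""
    return sum(reprojection_error(m, r) for m, r in pairs) / len(pairs)

pairs = [((100.0, 200.0), (100.3, 200.4)),   # error 0.5 px
         ((50.0, 60.0), (50.0, 61.0))]       # error 1.0 px
print(mean_error(pairs))  # ~0.75 px
```

Averaging this value over every projection of a sparse point gives a per-point error that could be written out alongside the point's coordinates.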


General / Reprojection Error for each point of the sparse point cloud
« on: March 08, 2015, 09:07:29 PM »
Hi all,

I would like to ask if there is any way to export the reprojection error for each point of the sparse point cloud as a separate column after the X-Y-Z coordinate columns, so that the error can be visualized in another GIS package.
Is this feasible in Python or Agisoft?
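As a sketch of the export side (plain Python with made-up point values; in practice the coordinates and errors would have to come from the sparse point cloud), writing one row per point with an extra error column gives a file most GIS packages can load as CSV:

```python
def write_point_errors(path, points):
    """Write one line per sparse-cloud point: X, Y, Z, reprojection
    error, comma-separated, with a header row for GIS import."""
    with open(path, "w") as out:
        out.write("X,Y,Z,error\n")
        for x, y, z, err in points:
            out.write("%.6f,%.6f,%.6f,%.4f\n" % (x, y, z, err))

# hypothetical points: projected coordinates plus per-point error in px
points = [(305000.123456, 4200000.5, 12.3, 0.42),
          (305010.0, 4200020.0, 11.8, 0.37)]
write_point_errors("sparse_errors.csv", points)
```

The file name, column order, and precision are just illustrative choices.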

Thanks a lot for your reply in advance,


General / Image footprints in world coordinate system
« on: February 06, 2015, 03:35:15 PM »
Hi all,

Following a previous thread, I would like to ask 1) what is the purpose of the if statement, as seen in the attached script:

if (0 < result[1]) and (0 < result[2]) and (result[1] + result[2] <= 1):
    t = (1 - result[1] - result[2]) * vertices[v[0]].coord
    u = result[1] * vertices[v[1]].coord
    v_ = result[2] * vertices[v[2]].coord
    res = chunk.transform.matrix.mulp(u + v_ + t)
    res = chunk.crs.project(res)  #line truncated in the original; presumably projecting into the chunk CRS

and 2) do you know if the result of transform.matrix.mulp is the transformation of the pixel's intersection with the actual TIN into the world coordinate frame?

As I understood it, without the if statement the process runs forever, but I still cannot find an explanation for the conditional if. Can you please give me some feedback on that?
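My current guess (please correct me if I am wrong) is that result[1] and result[2] are the barycentric coordinates u and v of the ray-triangle intersection, so the if statement keeps only intersections that actually fall inside the triangle. A plain-Python sketch with made-up vertices:

```python
def inside_triangle(u, v):
    """Barycentric inside test: a point with barycentric coordinates
    (w, u, v), where w = 1 - u - v, lies inside the triangle iff
    u > 0, v > 0 and u + v <= 1 (matching the if statement above)."""
    return 0 < u and 0 < v and u + v <= 1

def barycentric_point(a, b, c, u, v):
    """Interpolate a 3D point from triangle vertices a, b, c using
    barycentric coordinates (1 - u - v, u, v), as the t/u/v_ sum does."""
    w = 1 - u - v
    return tuple(w * a[i] + u * b[i] + v * c[i] for i in range(3))

a, b, c = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(inside_triangle(0.25, 0.25))             # True: inside the face
print(inside_triangle(0.8, 0.5))               # False: u + v > 1
print(barycentric_point(a, b, c, 0.25, 0.25))  # (0.25, 0.25, 0.0)
```

That would explain the infinite loop without the check: every triangle along the ray would produce a "hit", not just the one actually intersected.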

This script seems really useful and important because it shows the footprint but I cannot make it work with my dataset.  :(
Thanks a lot for the script, anyway.

General / Photoscan process and published articles
« on: January 12, 2015, 04:35:27 PM »
Hi all,

I would like to follow up on a previous discussion about the processing in Photoscan. I found a nice recent article (Remondino, F., Spera, M.G., Nocerino, E., Menna, F. and Nex, F. (2014) 'State of the art in high density image matching', The Photogrammetric Record, 29(146), pp. 144-166.) where the authors explain how the different software packages work. I will present here some sentences of the article exactly as they are written, and ask some questions at the end, so as to understand whether what they describe is the case for Photoscan.

...Nevertheless, from the authors' experience and from the achievable 3D measurement results, the implemented image-matching algorithm seems to be a stereo semi-global matching (SGM)-like method (for this study, version 0.9.0 Photoscan was used). Normally the software delivers results that are already meshed... (pg. 151)

...PMVS employs a true multi-image matching approach, meaning that for each object point visible in multiple images only one unique 3D point (which satisfies certain geometric conditions) is computed. On the other hand, a 3D point cloud is computed for each pixel in the overlapping area of each of the stereopairs in the Photoscan and SURE methods. In such cases, for n stereopairs, n 3D points corresponding to the same object point can be computed. This is particularly true in the case of large GSD and sub-pixel matching, leading to clusters of 3D points grouped near each other in the object space (but representing the same 3D point). This large number of points can then be successively averaged or statistically reduced to a cloud of unique points, but the user needs to consider a proper workflow that takes into account the point-cloud processing requirements for point averaging, de-noising and filtering... (pg. 161)

1. From the first paragraph: with the new version of Photoscan, does the resulting dense point cloud represent the raw data or the vertices of the triangulated mesh?
2. From the second paragraph: do we actually get many points close to each other representing the same object feature when we export the final point cloud after the optimisation step? Is the point averaging, filtering or de-noising process part of the built-in 'construct the dense point cloud' command, or should we, the users, apply a separate process to get the averaged points for each object from the multiple image pairs?
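Regarding the point averaging mentioned in the second quote, my understanding of such a post-processing workflow (a plain-Python sketch with made-up points; I do not know whether Photoscan does this internally) would be a simple voxel-grid averaging that collapses each cluster of near-duplicate points to its centroid:

```python
from collections import defaultdict

def average_by_voxel(points, cell):
    """Group 3D points into cubic voxels of size `cell` and replace
    each group with its centroid -- one simple way to reduce clusters
    of near-duplicate points to unique points."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)
        buckets[key].append(p)
    return [tuple(sum(c) / len(group) for c in zip(*group))
            for group in buckets.values()]

# two near-duplicate points plus one isolated point, 10 cm voxels
pts = [(1.01, 2.02, 0.0), (1.03, 2.01, 0.0), (5.0, 5.0, 1.0)]
print(sorted(average_by_voxel(pts, 0.1)))  # two unique points remain
```

The cell size would have to be chosen relative to the GSD, since it decides which clusters count as "the same" object point.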

It would be ideal if you could confirm these statements, because they can help us, the users, to understand the concept in depth and, of course, reinforce the reasons why we get better accuracies with Photoscan than with other software when it comes to explaining that in a written report.

Thank you so much in advance for taking the time to read this and reply.


Python and Java API / Export Reprojection Error of each camera station
« on: December 18, 2014, 12:53:21 AM »
Hello all,

I would like to ask if I can export the reprojections plus the reprojection error for all camera stations as a txt file in the following format:
Photoname.tif, Reprojections, Reprojection error
I am trying to find a way to do it in Agisoft (without Python scripting), similar to how the estimated camera stations can be exported, but I cannot find one. So I assume it might be straightforward in Python.

Is there any older topic for that?

I want the reprojection error for each camera station because I want to check if there is any correlation between this error and the accuracy achieved with different numbers of ground control points (GCPs). So I am processing with Agisoft many times using different numbers of GCPs, and I want to check every time how much the reprojection error changes.
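What I have in mind for the analysis part (a plain-Python sketch with made-up per-projection errors; extracting those errors from Photoscan is the piece I am missing) is to group the errors by photo and report a count and an RMS per camera station:

```python
import math
from collections import defaultdict

def per_camera_rms(observations):
    """Given (photo_label, error_px) pairs -- one per tie-point
    projection -- return {photo_label: (n_projections, rms_error_px)}."""
    groups = defaultdict(list)
    for label, err in observations:
        groups[label].append(err)
    return {label: (len(errs),
                    math.sqrt(sum(e * e for e in errs) / len(errs)))
            for label, errs in groups.items()}

# hypothetical per-projection errors in pixels
obs = [("IMG_001.tif", 0.3), ("IMG_001.tif", 0.4), ("IMG_002.tif", 0.5)]
for label, (n, rms) in sorted(per_camera_rms(obs).items()):
    print("%s, %d, %.4f" % (label, n, rms))
# IMG_001.tif, 2, 0.3536
# IMG_002.tif, 1, 0.5000
```

That output matches the "Photoname.tif, Reprojections, Reprojection error" format I described above.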

Thank you in advance for your help
