Hi guys,
From the PhotoScan manual I found the following:
"Tie point limit parameter allows to optimize performance for the task and does not generally
effect the quality of the further model. Recommended value is 4000. Too high or too low tie point
limit value may cause some parts of the dense point cloud model to be missed. The reason is that
PhotoScan generates depth maps only for pairs of photos for which number of matching points is
above certain limit. This limit equals to 100 matching points, unless moved up by the figure "10%
of the maximum number of matching points between the photo in question and other photos,
only matching points corresponding to the area within the bounding box being considered."
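If I read that rule correctly, the effective per-pair threshold would be something like the sketch below. The function names and the exact comparison are my own reading of the manual text, just to make the rule concrete, not anything from the API:

```python
# My reading of the quoted rule: a pair of photos gets a depth map only if its
# number of matching points is above max(100, 10% of the best match count that
# the photo in question has with any other photo). Names here are hypothetical.

def depth_map_pair_threshold(best_pair_matches):
    """Threshold for one photo, given its highest match count with any other photo."""
    return max(100, 0.10 * best_pair_matches)

def pair_gets_depth_map(matches, best_pair_matches):
    """True if this pair's match count is above the threshold ("above certain limit")."""
    return matches > depth_map_pair_threshold(best_pair_matches)
```

So, for example, a pair with 120 matches would pass when the best pair has 500 matches (threshold stays at 100), but the same 120 matches would fail if the best pair has 5000 matches (threshold rises to 500). Please correct me if I misread the manual.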
Is there a way in the Python API to manually set the number of tie points taken into account when selecting the image pairs used to generate depth maps? And aside from performance, is there any other reason not to use all image pairs? I would like to test whether quality can be improved by using more tie points, without losing the depth maps that would otherwise be skipped when two images share fewer matching points than the threshold described above.
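For reference, the only related knob I have found so far is the `tiepoint_limit` keyword of `Chunk.matchPhotos` (PhotoScan 1.x API); the 100-point / 10% pair-selection threshold itself does not seem to be exposed, as far as I can tell. A minimal sketch of what I am doing now (the function name and the specific limit values are just my example):

```python
# Sketch: raise tiepoint_limit via the Python API (PhotoScan 1.x).
# tiepoint_limit is a documented matchPhotos keyword; the depth-map
# pair-selection threshold is not exposed here, as far as I can tell.

def align_with_tiepoint_limit(chunk, tiepoint_limit=10000):
    # PhotoScan's module only exists inside its bundled Python interpreter.
    import PhotoScan

    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.GenericPreselection,
                      keypoint_limit=40000,
                      tiepoint_limit=tiepoint_limit)
    chunk.alignCameras()
```

This lets me vary the tie point count, but it does not answer whether more tie points can shift which pairs clear the depth-map threshold.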
Thanks!