Agisoft Metashape
Agisoft Metashape => Python and Java API => Topic started by: rongilad on March 07, 2019, 05:36:09 PM
-
Hi,
I was processing an 800-image project using a Python script with HighestAccuracy.
The alignment process could align only a small portion of the images (~100 images),
so I tried re-processing with HighAccuracy and MediumAccuracy and got pretty much the same results (partial alignment).
Only when I degraded the accuracy to LowAccuracy did the alignment succeed.
1. What could be the reason for the partial alignment? Does the software filter out some tie points based on a certain threshold?
2. After getting a partial alignment the first time, I would like to analyze the data and decide on the optimal accuracy value without trying all possible values and wasting precious processing time. Is this possible?
Thanks!
-
Hello rongilad,
High accuracy camera alignment uses the full-size original images; Highest upscales the images by a factor of two on each side, while each lower setting downscales the images by a factor of two per side for each accuracy step down.
Low accuracy can give more aligned cameras, for example, over dense vegetation areas such as forests and crop fields, since it is less sensitive to inaccuracies that may appear in the images (greenery moving in the wind) and also less sensitive to minor details.
However, High accuracy is usually recommended in any case.
What kind of data are you working with? Is it aerial data?
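The scaling relationship described above can be sketched as follows. The factor values are inferred from the description; the key names mirror the Metashape accuracy constants, but the mapping itself is an illustration, not an API call:

```python
# Effective image scale used for feature detection at each accuracy
# setting, per the description above: Highest upscales by 2x per side,
# High uses the full-size originals, and each lower step halves each side.
ACCURACY_SCALE = {
    "HighestAccuracy": 2.0,    # 2x upscale per side
    "HighAccuracy":    1.0,    # full-size originals
    "MediumAccuracy":  0.5,    # 1/2 per side
    "LowAccuracy":     0.25,   # 1/4 per side
    "LowestAccuracy":  0.125,  # 1/8 per side
}

def effective_resolution(width, height, accuracy):
    """Approximate pixel dimensions actually matched at a given accuracy."""
    scale = ACCURACY_SCALE[accuracy]
    return int(width * scale), int(height * scale)

# A 6000x4000 image matched at LowAccuracy is effectively 1500x1000:
print(effective_resolution(6000, 4000, "LowAccuracy"))  # prints (1500, 1000)
```

This also shows why LowAccuracy can rescue a vegetation-heavy dataset: matching at 1/16 of the original pixel count suppresses the fine, wind-displaced detail that confuses feature matching at full resolution.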
-
Yes, Aerial Data.
Is there any better approach to processing a dataset at the maximal accuracy?
# Try each accuracy level, from highest to lowest, until enough cameras align.
for accuracy in [Metashape.HighestAccuracy,
                 Metashape.HighAccuracy,
                 Metashape.MediumAccuracy,
                 Metashape.LowAccuracy,
                 Metashape.LowestAccuracy]:
    chunk.matchPhotos(accuracy=accuracy,
                      generic_preselection=True,
                      reference_preselection=True,
                      keypoint_limit=keypoint_limit,
                      tiepoint_limit=tiepoint_limit,
                      keep_keypoints=True)
    # Align cameras
    chunk.alignCameras(cameras=chunk.cameras, adaptive_fitting=True)

    total_cameras = len(chunk.cameras)
    aligned_cameras = len([cam for cam in chunk.cameras if cam.transform])
    align_prcntg = (float(aligned_cameras) / float(total_cameras)) * 100.0
    print("Total cameras: {}".format(total_cameras))
    print("Aligned cameras: {}".format(aligned_cameras))
    print("Alignment percentage: {}%".format(align_prcntg))

    if align_prcntg > 90.0:
        print("Alignment was successful - {}%, continue processing...".format(align_prcntg))
        break

    print(50 * '=')
    print("Could not align images using the desired accuracy value: {}".format(accuracy))
    print("Trying again with degraded accuracy...")
    # Reset the partial alignment before retrying at a lower accuracy
    for camera in chunk.cameras:
        camera.transform = None
    chunk.point_cloud = None
-
Any better solution?