Hi,
I'm using PhotoScan Professional in a fully scripted Python workflow to process aerial images. For precise georeferencing after a rough GPS-based alignment, I use PhotoScan's non-coded GCP detection with a very relaxed tolerance, followed by a reference import that assigns coordinates to the nearest detected marker and discards all the others:
chunk.detectMarkers(PhotoScan.TargetType.CrossTarget, tolerance=1)
chunk.loadReference(_settings['campaign']["gcp_file"],
                    PhotoScan.ReferenceFormatCSV,
                    columns='nxyz', delimiter=',',
                    ignore_labels=True, threshold=0.4)
This code becomes terribly slow when many false GCPs are detected and have to be discarded: my 3.5 GHz 40-core CPU sits at 3 % utilization, yet the step takes up to 12 h. Could you please fix this performance issue, e.g. by allowing multiprocessing?
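
In the meantime I'm experimenting with pruning the false markers in Python before calling loadReference, on the assumption that the bottleneck is the nearest-GCP matching over thousands of detections. A rough sketch (untested on large projects; _settings and the 0.4 m radius come from my setup, and the horizontal-only distance check is a simplification):

import csv
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Read the GCP coordinates straight from the campaign file (n, x, y, z columns).
gcps = []
with open(_settings['campaign']["gcp_file"]) as f:
    for row in csv.reader(f):
        gcps.append((float(row[1]), float(row[2]), float(row[3])))

T = chunk.transform.matrix
radius = 0.4  # same matching radius as the loadReference threshold

def estimated_coord(marker):
    # Marker position transformed from internal coordinates to the chunk CRS.
    if marker.position is None:
        return None
    return chunk.crs.project(T.mulp(marker.position))

# Flag every detected marker whose estimate is not near any real GCP.
false_markers = []
for marker in chunk.markers:
    est = estimated_coord(marker)
    if est is None or not any(
            (est.x - x) ** 2 + (est.y - y) ** 2 <= radius ** 2
            for x, y, z in gcps):
        false_markers.append(marker)

chunk.remove(false_markers)  # drop false detections before loadReference

With only the plausible markers left, loadReference should have far less matching work to do. Still, a native fix (or at least a multithreaded matching step) would be much appreciated.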