Can anyone explain the following for the chunk.matchPhotos method:
- what are the workitem_size_cameras, workitem_size_pairs, and max_workgroup_size attributes?
- how do they affect matching results?
- (if applicable) what limits or guidelines apply to these values based on image megapixel count and GPU RAM?
I'm wondering how best to use this method together with chunk.triangulateTiePoints to "densify" tie points after initial alignment and before optimization, in order to increase intra-camera-group matches (before generating the dense cloud).
For example, I have chunks with tens of camera groups, each containing thousands of images, that are coaligned incrementally using generic preselection, a 60k keypoint limit, and a tie point limit of 0.
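For context on the keypoint budgets involved, here is some rough arithmetic relating the fixed 60k keypoint limit to the per-megapixel limit used below. This is a sketch under my assumption that keypoint_limit_per_mpx scales linearly with image area:

```python
def keypoint_budget(limit_per_mpx: int, megapixels: float) -> int:
    """Per-image keypoint budget under a per-megapixel limit
    (assumption: the limit scales linearly with image area)."""
    return int(limit_per_mpx * megapixels)

# A fixed 60k keypoint limit coincides with keypoint_limit_per_mpx=4000
# at 15 MP images; larger images get a proportionally larger budget.
print(keypoint_budget(4000, 15))  # 60000
print(keypoint_budget(4000, 24))  # 96000
```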
Experimenting with a single camera group after optimization (by copying the chunk and deleting the other groups), I was able to increase the number of intra-group tie points by running chunk.matchPhotos followed by chunk.triangulateTiePoints, using either the same matching parameters as before or switching to guided matching, for example:
chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=False, keypoint_limit_per_mpx=4000, tiepoint_limit=0, guided_matching=True, reset_matches=True)
chunk.triangulateTiePoints()
If I then re-filter with the same gradual selection parameters that I used before optimization, I end up with anywhere from 2x to 10x the original number of tie points, without changing intrinsics or extrinsics. This is nice because some of my early image collections previously didn't have enough projections to reconstruct well (many images missing from the dense cloud), and this method dramatically increases the projections on many of the trouble images without a large increase in processing time. But I'm wondering whether there are downsides to this that I'm not seeing, and whether there are even more useful ways to take advantage of it by tuning the workitem_size_cameras, workitem_size_pairs, and max_workgroup_size arguments.
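For reference, here is the densify step wrapped in a small helper, as a sketch of my current workflow. The matchPhotos keyword arguments are the ones from the call above; anything passed through match_kwargs (e.g. the workitem_size_* arguments I'm asking about) is forwarded untouched:

```python
def densify_tie_points(chunk, **match_kwargs):
    """Re-match and re-triangulate an already-aligned chunk to add tie points.

    Sketch of the workflow described above; match_kwargs are forwarded to
    chunk.matchPhotos and override the defaults below (e.g. reset_matches=False,
    or the workitem_size_* / max_workgroup_size arguments).
    """
    params = dict(
        downscale=1,
        generic_preselection=True,
        reference_preselection=False,
        keypoint_limit_per_mpx=4000,
        tiepoint_limit=0,
        guided_matching=True,
        reset_matches=True,
    )
    params.update(match_kwargs)
    chunk.matchPhotos(**params)      # re-run matching on the aligned chunk
    chunk.triangulateTiePoints()     # turn the new matches into tie points
```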
I'm also wondering if I can use this technique to increase inter-group matches by combining specific groups of cameras and passing that list to the 'cameras' argument of the matchPhotos method, and what the difference is between reset_matches=True vs. False. "True" seems to re-run the entire matching process on every image, but it generated 360k matches vs. 24k. Is that because a tie point threshold was limiting new tie points (many of the original tie points were with images from other groups)?
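To combine specific groups, I'm assuming I can build the camera list with something like the sketch below (assuming each camera exposes its camera group via camera.group, and that matchPhotos accepts the resulting list via its 'cameras' argument):

```python
def cameras_in_groups(chunk, wanted_groups):
    """Collect the cameras belonging to any of the given camera groups.

    Sketch: assumes each camera's .group attribute identifies its group.
    """
    wanted = set(wanted_groups)
    return [cam for cam in chunk.cameras if cam.group in wanted]

# Usage sketch (group_a / group_b would be camera group objects from the chunk):
# subset = cameras_in_groups(chunk, [group_a, group_b])
# chunk.matchPhotos(cameras=subset, reset_matches=False, ...)
# chunk.triangulateTiePoints()
```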
If anyone has some deeper knowledge of the inner workings of this method, I'd be most grateful for your insight.