
Author Topic: optimizing co-alignment from very different cameras/views  (Read 2204 times)

andyroo
  • Sr. Member
  • Posts: 440
optimizing co-alignment from very different cameras/views
« on: May 18, 2022, 10:25:59 PM »
We're exploring the ability of a marine survey platform we built to provide "baseline" datasets that can be compared with later small high-res collections to evaluate habitat change. In that vein, we've been working to co-register large-area survey imagery with handheld camera imagery of a small portion of the survey area, taken from a very different viewing distance (and time, resolution, lighting, etc.). Both sets have good internal overlap and align well independently; the high-res set has >90% overlap.

We tried many techniques and parameters (resampling, color correction, guided image matching, manual tie points, injecting camera position/orientation) using the full overlap region of both image sets (400 high-res close-in images and 100 images at roughly 10x lower resolution), and failed to get any matches, valid or invalid, between the sets.

As a last-ditch effort we zoomed in on a small area covered by 10 and 11 images respectively, and were able to get a couple dozen valid matches across the two cameras. Interestingly, with the full-resolution close-in images, guided image matching produced no cross-set matches, while with the high-res images resampled to 50% scale, guided matching produced more cross-set matches than unguided.
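For reference, here is roughly what those two runs look like through the Python API. This is an untested sketch against the 1.8 API; downscale=1 is full resolution, and downscale=2 (Medium) approximates our external 50% resample:

[code]
import Metashape

chunk = Metashape.app.document.chunk

# Full-resolution matching with guided image matching:
# this combination gave us zero cross-set matches.
chunk.matchPhotos(downscale=1, guided_matching=True,
                  generic_preselection=True,
                  keep_keypoints=True, reset_matches=True)

# Half-scale (Medium) matching: here guided matching gave us
# more cross-set matches than unguided.
chunk.matchPhotos(downscale=2, guided_matching=True,
                  generic_preselection=True,
                  keep_keypoints=True, reset_matches=True)

chunk.alignCameras()
[/code]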

We're trying to understand how (or whether) we can scale up to the full shared region while preserving the ability to generate cross-camera matches, and why we were able to get these matches at all. One theory is that our "matching budget" is being used up by within-set matches, and that if we minimize within-set overlap we may be able to increase our cross-set matches (within-set = matches between images from the same camera; cross-set = matches between the low-res long-range camera and the high-res close-range camera).
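One cheap way we might test that theory is to thin the high-res set before re-matching. A sketch, where "close_range" is a hypothetical label for the camera group holding the high-res set:

[code]
import Metashape

chunk = Metashape.app.document.chunk

# "close_range" is a hypothetical group label for the high-res set.
close = [c for c in chunk.cameras
         if c.group and c.group.label == "close_range"]

# Keep every other close-range image enabled to thin within-set
# overlap, then re-run matching (matching only uses enabled cameras).
for i, cam in enumerate(close):
    cam.enabled = (i % 2 == 0)
[/code]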

Wondering if anyone, especially Alexey or other Metashape devs, has insights into what parameters control our ability to find matches ACROSS camera groups vs within them. On a related note, I'm hoping to develop a couple of Python scripts that report within-camera-group vs across-camera-group valid and invalid tie points. Not sure if the API fully supports that, but if anyone else is working on similar problems I'd love to start a discussion.
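In case it's useful, here is a first, untested sketch of the report I have in mind (1.8 API, where the sparse cloud is chunk.point_cloud). It assumes each image set sits in its own camera group, and counts a shared track as valid if it survived as a 3D point and invalid otherwise:

[code]
import itertools
from collections import defaultdict

import Metashape

chunk = Metashape.app.document.chunk
pc = chunk.point_cloud  # sparse cloud: tie points + their image projections

# Map track_id -> index into pc.points; tracks with no entry never
# became 3D points (i.e. every match on them is "invalid").
point_index = {p.track_id: i for i, p in enumerate(pc.points)}

# Track ids observed by each camera. The transform guard keeps only
# aligned cameras; drop it to also count cameras that matched but
# failed to align.
cam_tracks = {}
for camera in chunk.cameras:
    if camera.transform is None:
        continue
    cam_tracks[camera] = {proj.track_id for proj in pc.projections[camera]}

# Brute-force pairwise intersection: fine for a few hundred cameras.
counts = defaultdict(lambda: [0, 0])  # (group_a, group_b) -> [valid, invalid]
for cam_a, cam_b in itertools.combinations(cam_tracks, 2):
    shared = cam_tracks[cam_a] & cam_tracks[cam_b]
    if not shared:
        continue
    ga = cam_a.group.label if cam_a.group else "ungrouped"
    gb = cam_b.group.label if cam_b.group else "ungrouped"
    key = tuple(sorted((ga, gb)))
    for tid in shared:
        i = point_index.get(tid, -1)
        if i >= 0 and pc.points[i].valid:
            counts[key][0] += 1
        else:
            counts[key][1] += 1

for (ga, gb), (valid, invalid) in sorted(counts.items()):
    kind = "within-group" if ga == gb else "cross-group"
    print(f"{kind} {ga} <-> {gb}: {valid} valid, {invalid} invalid")
[/code]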

Paulo
  • Hero Member
  • Posts: 1321
Re: optimizing co-alignment from very different cameras/views
« Reply #1 on: May 18, 2022, 10:51:44 PM »
Hello andyroo,

I have developed a script that graphically shows matches between photos as lines of different width and color; see https://www.agisoft.com/forum/index.php?topic=12924.msg57302#msg57302.

This type of script could always be adapted to work only within certain groups, or between two selected groups.

You can send me a PM for more interaction.
Best regards,
Paul Pelletier,
Surveyor

Matt
  • Full Member
  • Posts: 104
Re: optimizing co-alignment from very different cameras/views
« Reply #2 on: June 28, 2022, 05:20:56 AM »
Hi Andyroo, I have been working with these issues for a while in the terrestrial historical photo matching area. Its not easy.  I am finding that each scenario (difference in resolution or gamma) generally requires a different approach. Bare in mind that i am working with a combination of 150mp digital aerial photos from a phase one and 600 DPI scans off historical prints or films at scales from 1:500 to 1:40000. Some disparate resolution sets align much better on medium and low alignment settings and some on maxed out settings. In most cases just the addition of three to four control points even if not at high precision makes a massive difference. Once you have a match or block of matches then incrementally selecting strings of images adjoining matched images and aligning them also works. Once you have a base dataset aligned you can use that as a base image to align more recent blocks too. Happy to keep om talking.