Author Topic: Fragmented reconstruction (underwater)  (Read 1188 times)


  • Newbie
  • *
  • Posts: 12
Fragmented reconstruction (underwater)
« on: August 16, 2021, 05:34:23 PM »

Hi everyone,

I would appreciate an answer from one of the experts on Agisoft Metashape Pro.

I am sending a detailed explanation that includes screenshots of the problem briefly described here; I believe they are essential for understanding what I am referring to.

The following reconstruction corresponds to an underwater feature video recorded by ROV (28 frames per second). One frame per second was extracted for use in this reconstruction.
Number of frames (images): 592
In the 10-minute video, 26 USBL (Ultra-Short Baseline) positioning fixes were recorded. The USBL positions were loaded into R to interpolate a position for each extracted frame (one per second) and exported as a CSV. This CSV was used in Agisoft Metashape Professional (64-bit) to reference the images.
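For reference, the per-second interpolation step described above can be sketched in Python (the R workflow is equivalent). The function below is a minimal illustration, not the poster's actual script; the fix times and coordinates in the example are hypothetical:

```python
from bisect import bisect_right

def interpolate_positions(fix_times, fix_coords, frame_times):
    """Linearly interpolate sparse USBL fixes to one position per frame.

    fix_times   -- sorted timestamps (s) of the USBL fixes
    fix_coords  -- matching (x, y, z) tuples for each fix
    frame_times -- timestamps (s) of the extracted frames (one per second)
    """
    out = []
    for t in frame_times:
        i = bisect_right(fix_times, t)
        if i == 0:                    # frame before the first fix: clamp
            out.append(fix_coords[0])
        elif i == len(fix_times):     # frame after the last fix: clamp
            out.append(fix_coords[-1])
        else:
            t0, t1 = fix_times[i - 1], fix_times[i]
            w = (t - t0) / (t1 - t0)  # fractional position between fixes
            out.append(tuple(a + w * (b - a)
                             for a, b in zip(fix_coords[i - 1], fix_coords[i])))
    return out

# Hypothetical example: two fixes 10 s apart, one frame halfway between.
print(interpolate_positions([0, 10], [(0, 0, 0), (10, 0, 0)], [5]))
```

Note that with only 26 fixes over 10 minutes (roughly one every 23 seconds), most per-second positions are interpolated rather than measured, so their accuracy depends heavily on how straight and steady the ROV track was between fixes.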

Problem: when aligning the images, I get the message that some images failed to align, and the alignment jumps gaps within the reconstruction, as if the video were not continuous, but it was. (Fig 1)


Why is the reconstruction fragmented while the video where the images come from is continuous?
Why are the small chunks of images upside down?
Why are some cameras only used in the reconstruction when I aligned with the reference, but not when I aligned without a reference?


  • Hero Member
  • *****
  • Posts: 753
Re: Fragmented reconstruction (underwater)
« Reply #1 on: August 16, 2021, 06:14:14 PM »
Hello Lia,

to better appreciate your problem, maybe you could show a screen capture of the model including camera positions (blue rectangles for aligned images, small blue dots for non-aligned)...

It seems your alignment results in a few disconnected components...
Best Regards,

Paul Pelletier


  • Newbie
  • *
  • Posts: 43
    • AccuPixel Ltd - Dealer and Training Centre
Re: Fragmented reconstruction (underwater)
« Reply #2 on: August 18, 2021, 03:11:40 PM »
Whilst the video may be continuous, this does not guarantee success when it comes to aligning. Image blur caused by movement, poor focus, and a lack of detail and tie points will all trigger failures to align - take a close look at the images and see if there is any pattern to the failures.
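One quick way to look for such a pattern is to score each extracted frame for sharpness, for example with a variance-of-Laplacian measure: blurred or featureless frames score low, sharp high-contrast frames score high. This is a generic illustration of the idea, not a Metashape feature; a minimal pure-Python sketch on a 2D grayscale pixel grid:

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian over a 2D grayscale grid.

    Low values suggest a blurred / low-detail frame; high values
    suggest sharp edges that give the matcher tie points to work with.
    """
    h, w = len(gray), len(gray[0])
    lap = [gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1] + gray[y][x + 1]
           - 4 * gray[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

# Hypothetical 8x8 test frames:
flat = [[128] * 8 for _ in range(8)]                                  # featureless
sharp = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]   # checkerboard

print(laplacian_variance(flat))   # prints 0.0 (no detail at all)
print(laplacian_variance(sharp))  # much larger score
```

Ranking the 592 frames by such a score and checking whether the unaligned cameras cluster at the low end would quickly confirm or rule out blur as the cause.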

26 USBL measurements over 10 minutes will require interpolation to estimate the image locations for every frame between those captured at the same time as a measurement - I would hesitate to suggest this will help with alignment or deliver a scaled model.

Extracting images from video means the cameras are treated as N/C (not calibrated), and whilst Metashape can estimate the calibration during alignment, the lack of a known focal length may be causing alignment issues.

ROV cameras tend to be good for seeing what is in front of the vehicle, recording video in low light, and guiding the operator... but they may not deliver the high-quality stills that work best for photogrammetry - can you share the camera data?

We use a similar technique but work with GPS points taken every 2~4 seconds. Using these for scaling delivers very consistent results, but we would not use these values to aid camera alignment.

Not all images need a GPS reference for scaling and location, so the first steps would be to validate the source image quality, rerun the alignment, and then apply the GPS values during recursive optimisation.
Agisoft endorsed online Metashape training - see: