Hello,
I am working on a dataset of down-looking photos of the seafloor.
The dataset was acquired with a lawn-mower survey pattern, and the images overlap each other well.
I am also fairly sure there are redundant photos, but I have not tried to discard them.
The alignment gives quite a good result, but for the first time with this dataset I notice a strange effect: in many places the point cloud seems to consist of two layers of points reproducing the same surface. The two layers are slightly separated, one above the other, as if the cloud suffered from a "double-vision" problem.
Thinking this could be caused by too much overlap between adjacent images, I tried to reduce the redundancy by applying "Reset Alignment" to a set of adjacent cameras, but the double-layer effect still persists.
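For reference, what I did through the GUI is roughly equivalent to the following sketch (assuming the Agisoft Metashape Python API; the camera labels are just placeholders for the adjacent cameras I reset):

```python
# Rough sketch (assuming the Metashape Python API) of the "Reset Alignment"
# step I applied to a subset of adjacent cameras.
import Metashape

chunk = Metashape.app.document.chunk

# Placeholder labels for the adjacent cameras whose alignment I reset
labels_to_reset = {"IMG_0101", "IMG_0102", "IMG_0103"}

for camera in chunk.cameras:
    if camera.label in labels_to_reset:
        camera.transform = None  # drop the estimated pose for this camera
```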
Any idea what is causing this and how to correct it?
Would iterative camera optimization fix this? (I have not tried it yet.)
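If optimization is the right direction, this is what I was planning to run, again as a minimal sketch assuming the Metashape Python API and default settings:

```python
# Minimal sketch of the optimization step I have not yet tried
# (assuming the Metashape Python API; default parameters for now).
import Metashape

chunk = Metashape.app.document.chunk

# Re-estimate camera poses and calibration from the current tie points;
# roughly the "Optimize Cameras" command in the GUI.
chunk.optimizeCameras()
```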
Attached are some screenshots showing the double layers as well as the track of camera positions (the blue squares were shrunk so the underlying cloud stays visible).