I can understand why sharpness is seen as poor. This:
Sharpness is poor to start with. Water doesn't have the clarity of air, nor as much light available
And this:
This is why my Manta Point dive site scanning is done at an altitude of just 4-5m!
At that distance even a housed DSLR will quite happily produce fuzzy, soft underwater images. With a tiny sensor (compared to a DSLR), a GoPro has the odds stacked against it.
The greatest challenge is not suspended matter - backscatter - but how water robs light. When I first picked up a UW camera I was given a piece of advice:
Get close. Then get closer still.

The main reason we all shoot with ultra wide angle lenses is their ability to focus very close to the front element, maybe just 20cm from the subject. Working close to the subject removes the maximum amount of water between lens and subject.
This example was shot at a distance of 1~2m in the English Channel. Vis was maybe 3~4m max:
https://construkted.com/asset/a4wijcuy6yw/

This one was shot at the same distance, in better vis - 6~8m - but in almost pitch black conditions thanks to a layer of plankton:
https://construkted.com/asset/art1ve7lisi/

And this one was shot in much better conditions, but again at the 1~2m distance to subject:
https://construkted.com/asset/a3q7vacu0cq/

A GoPro will deliver better results when working much closer than 4~5m. It's not a question of kit (kind of...read on...) but of method and technique, and that will go a long way to improving sharpness.
The second thing that will dramatically improve sharpness is artificial lighting. We can't control the shutter speed on a GoPro...but throw more light at the sensor and the camera will raise the shutter speed automatically. Fixed, constant lighting is better than no light, but strobes are better still as they deliver a short burst of intense light that helps freeze the action and improve sharpness.
Strobes really help improve efficiency as you can work much faster - less risk of motion blur.
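To put rough numbers on that motion blur risk, here is a minimal sketch (my own illustration, with assumed values for swim speed, shutter speed and GSD) that estimates how many pixels the scene smears across during one exposure:

```python
# Rough motion blur estimate: how far does the scene move across the frame
# during a single exposure? All the inputs below are assumptions - swap in
# your own speed, shutter speed and GSD.

def blur_in_pixels(speed_m_s: float, shutter_s: float, gsd_mm: float) -> float:
    """Blur length in pixels = distance moved during the exposure / GSD."""
    moved_mm = speed_m_s * 1000.0 * shutter_s
    return moved_mm / gsd_mm

# A slow swim (0.3 m/s) with the camera picking 1/60s in dim water, at ~2mm GSD:
print(blur_in_pixels(0.3, 1 / 60, 2.0))    # ~2.5 px of smear - visibly soft
# The same swim with a strobe freezing the exposure at roughly 1/2000s:
print(blur_in_pixels(0.3, 1 / 2000, 2.0))  # ~0.08 px - effectively frozen
```

Even a modest swim speed eats into sharpness once the ambient-light shutter speed drops, which is exactly where the strobe earns its keep.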
Backscatter - the suspended particles - is always present. But photogrammetry software largely ignores it, as the particles are moving relative to the primary subject. Any clumps of tie points from backscatter can be reduced using recursive optimisation, or manually excluded.
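If it's useful, that manual exclusion can also be scripted against an exported sparse cloud: backscatter rarely matches across many images and tends to carry a high reprojection error. This is only a generic sketch with made-up field names, not any particular package's API:

```python
# Generic tie-point clean-up sketch. Assumes the sparse cloud has been
# exported to a list of records; the field names are invented for
# illustration and will differ between packages.

points = [
    {"id": 1, "track_length": 8, "reproj_error_px": 0.4},  # solid point on the subject
    {"id": 2, "track_length": 2, "reproj_error_px": 2.1},  # likely backscatter / noise
]

def keep(point, min_track=3, max_error_px=1.0):
    # Drifting particles are only ever matched in a couple of frames and
    # reproject badly, so both tests tend to catch them.
    return point["track_length"] >= min_track and point["reproj_error_px"] <= max_error_px

cleaned = [p for p in points if keep(p)]
print(f"kept {len(cleaned)} of {len(points)} tie points")
```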
Why does sharpness matter?
Photogrammetry software will cope with blurred, soft or noisy images. The images will align and things might look OK...the model always looks OK, and without any constraints who can really say whether there are accuracy issues?
But when faced with a specific issue - like the OP's bent model - the first step in working out what has gone wrong is to look at the source images.
Look at the reprojection error for each image: anything >1.0 pixels is not good. Look at the number of tie points: anything <100 is not good. Together they may (stress may - I haven't seen the data) be causing the curvature, as discussed in the paper I linked to.
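Those two checks are easy to automate against whatever per-image report your software exports. A minimal sketch, assuming a CSV with hypothetical column names (image, reproj_error_px, tie_points) - not a real export format from any specific package:

```python
import csv

# Flag images that break the rough rules of thumb above:
# reprojection error > 1.0 px, or fewer than 100 tie points.
MAX_REPROJ_PX = 1.0
MIN_TIE_POINTS = 100

with open("image_report.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        error = float(row["reproj_error_px"])
        ties = int(row["tie_points"])
        if error > MAX_REPROJ_PX or ties < MIN_TIE_POINTS:
            print(f"{row['image']}: reproj {error:.2f} px, {ties} tie points - suspect")
```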
There are few shortcuts in UW photography. We have supplied technical services to design a rig that works at 3~4m and delivers a GSD of <1mm, but each camera sensor is 64MP and there is a huge amount of artificial light available. Even with this, the ROV speed was absolutely critical...too fast and it's all blurred.
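For anyone wanting to sanity-check their own setup, the GSD is straightforward to estimate from pixel pitch, focal length and distance to subject. The numbers below are illustrative assumptions, not the figures from the rig mentioned above:

```python
# GSD (ground sample distance) = pixel pitch * distance / focal length.

def gsd_mm(pixel_pitch_um: float, focal_length_mm: float, distance_m: float) -> float:
    pixel_pitch_mm = pixel_pitch_um / 1000.0
    return pixel_pitch_mm * (distance_m * 1000.0) / focal_length_mm

# e.g. an assumed 1.4 micron pixel pitch behind a 12mm lens at 3m range:
print(f"{gsd_mm(1.4, 12.0, 3.0):.2f} mm/px")  # ~0.35 mm per pixel
```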
I would urge anyone working UW to get really close to the subject and to add artificial light. Quality jumps, and with it the chance of greater, repeatable accuracy.
I would also hesitate to suggest that adding more cameras to a method that is already producing images lacking inherent quality is the right way to fix what appears to be a fundamental image quality issue.
Unconvinced? Test it...work close and compare the results. Check the EXIF data for shutter speed and see if handholding a camera is going to induce motion blur. Borrow a light and repeat the test.
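The EXIF check is easy to do in bulk. A minimal sketch using Pillow - the file name is just a placeholder:

```python
from PIL import Image

EXIF_IFD_POINTER = 0x8769  # standard tag pointing at the Exif sub-IFD
EXPOSURE_TIME = 0x829A     # standard ExposureTime tag

img = Image.open("GOPR0001.JPG")  # placeholder file name
exif_ifd = img.getexif().get_ifd(EXIF_IFD_POINTER)
shutter = exif_ifd.get(EXPOSURE_TIME)
if shutter:
    print(f"Shutter: 1/{round(1 / float(shutter))}s")
```

As a rough rule of thumb, anything much slower than about 1/125s while hand-holding in a swell or current is a candidate for motion blur.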
ugurdemirezen - if you can share the source images, drop me a message here: https://accupixel.co.uk/contact-us/ and I will be happy to take a look and see what the root cause may be.