Hey everyone,
I've been working on a dataset of 58 pictures, taken with two different analogue cameras in the year 2000. The object of study is an ancient ship, the Belgian "Kogge" (http://kogge.be/). There's no 3D model of the site, and it would be very cool to make one 14 years after excavation.
Since the pictures have been scanned from slides and hard copies, there's no EXIF data, and some of the pictures are rather poor quality. They were also taken on different dates, so the objects around the vessel change from day to day. Unfortunately this is the data we've got to work with, so here's my workflow:
- masked the pictures so they include only the vessel (i.e. to exclude the changing objects around it)
- automatic photo alignment at high accuracy, generic pair preselection, 40k key point limit, constrain features by mask
  => after this phase 23/58 pictures were aligned, covering the starboard side of the vessel (see attachment)
- manual photo alignment: placing markers on unaligned as well as aligned pictures, then choosing "Align Selected Cameras" for the unaligned photos one by one
  => after adding roughly 120 manual markers this way, 42/58 pictures are aligned, covering both sides of the vessel (see attachment 2)
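For reference, the automatic phase above corresponds to roughly this in PhotoScan's Python console (a sketch from memory, assuming the 1.x scripting API names; the import guard is only there so the snippet doesn't crash outside PhotoScan, where the module doesn't exist):

```python
# Sketch of the automatic alignment settings described above (PhotoScan 1.x API).
# The PhotoScan module only exists inside the application's Python console,
# hence the guard; outside PhotoScan this block just defines the function.
try:
    import PhotoScan
except ImportError:
    PhotoScan = None

def align_chunk(chunk):
    """High accuracy, generic preselection, 40k key points, masks applied."""
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.GenericPreselection,
                      keypoint_limit=40000,
                      filter_mask=True)   # "constrain features by mask"
    chunk.alignCameras()

if PhotoScan is not None:
    align_chunk(PhotoScan.app.document.chunk)
```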
Now, while these manually aligned pictures do in fact cover every part of the port side, there are still massive holes in the dense reconstruction. Does anyone have ideas on how to improve this result? I suspect PhotoScan only uses the matches it found in the initial automatic "Align Photos" phase to build the sparse point cloud. Is there any way to tell the software "now that I've manually shown you where this picture goes, look really thoroughly around that area for matches in nearby pictures"?
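In case it helps with diagnosing, this is roughly how I check from the console which cameras are still unaligned and how many of my manual markers each one has (again a sketch against the 1.x Python API, guarded so it only runs inside PhotoScan; as far as I can tell, a marker's projections are keyed by camera and return None where the marker isn't placed):

```python
# Sketch: report unaligned cameras and their manual-marker coverage
# (PhotoScan 1.x API; the module exists only inside the application).
try:
    import PhotoScan
except ImportError:
    PhotoScan = None

def report_unaligned(chunk):
    """Return (camera label, marker projection count) for unaligned cameras."""
    rows = []
    for cam in chunk.cameras:
        if cam.transform is None:  # no estimated pose -> camera not aligned
            n = sum(1 for m in chunk.markers if m.projections[cam] is not None)
            rows.append((cam.label, n))
    return rows

if PhotoScan is not None:
    for label, n in report_unaligned(PhotoScan.app.document.chunk):
        print(label, n)
```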
Anyone have experience with manual picture alignment?
Thanks in advance for any advice you may have!
Cheers,
Tom