Hello all,
I'm working as an accident investigator. We use GoPro fisheye cameras to completely scan accident sites and reconstruct 3D models with Metashape.
This works fine when the accident site is compact. But we have been stuck on the reconstruction of a site that is approximately 150 m long and only 10 m wide. The site was recorded with 1266 pictures, all oriented in the same direction so as not to confuse the SIFT feature matching or the camera alignment. During recording, the camera was held pointing straight down at the street surface. The generated project report says that every camera position overlaps with at least 9 other cameras.
For camera alignment we of course made sure that Metashape uses the fisheye camera model.
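(We normally set this in the GUI under Tools > Camera Calibration; for reference, the same thing via the Python console, as a minimal untested sketch assuming all photos share one calibration group:)

```python
import Metashape

# Force the fisheye camera model on every calibration group of the chunk.
chunk = Metashape.app.document.chunk
for sensor in chunk.sensors:
    sensor.type = Metashape.Sensor.Type.Fisheye
```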
Although we made sure to scan the site in 4 rows of pictures that overlap both to the side and to the top and bottom of the frame, while simultaneously recording the GPS data of the camera positions, the camera alignment process keeps reconstructing the accident site in a sort of bent shape. This happens even though the GPS data, used as a starting point for the alignment process, shows that the accident site is completely flat in reality.
The reason seems to be that the reconstructed camera calibration deviates slightly from the correct camera calibration, which causes the reconstructed model to be bent over its whole length (see attached picture of the sparse point cloud).
We also used markers (spread over the accident site) to support the alignment process. Is there a way to assign only a height/altitude value to a detected marker, without assigning the other parameters such as longitude and latitude?
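If there is no direct option for that, would it work to give the marker full coordinates but a very large horizontal accuracy, so that effectively only the altitude constrains the solution? A rough, untested sketch via the Python API (the marker label and all values are placeholders):

```python
import Metashape

chunk = Metashape.app.document.chunk
marker = next(m for m in chunk.markers if m.label == "target 1")  # placeholder label

# Placeholder coordinates in the chunk CRS (X = longitude, Y = latitude, Z = altitude).
marker.reference.location = Metashape.Vector([7.123456, 51.123456, 102.35])
# Huge horizontal accuracy (m) effectively frees X/Y, while the tight vertical
# accuracy makes only the altitude act as a real constraint.
marker.reference.accuracy = Metashape.Vector([1000.0, 1000.0, 0.01])
marker.reference.enabled = True

chunk.updateTransform()
```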
Or do we need to set up our own coordinate system on site, placing markers at known positions to scale the model and to provide a reference for alignment?
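If so, we assume scale bars between marker pairs with tape-measured distances would at least fix the scale, something like this sketch (labels and the distance are made up):

```python
import Metashape

chunk = Metashape.app.document.chunk
m1 = next(m for m in chunk.markers if m.label == "target 1")
m2 = next(m for m in chunk.markers if m.label == "target 2")

# Scale bar between two markers with a tape-measured distance (m).
scalebar = chunk.addScalebar(m1, m2)
scalebar.reference.distance = 25.0  # placeholder value
chunk.updateTransform()
```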
How can we influence the alignment process so that the camera calibration is determined correctly?
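For example, would it be the right approach to pre-calibrate the GoPro (e.g., on a calibration target), load that calibration, and keep it fixed during alignment? A sketch of what we have in mind, assuming a calibration exported in Metashape XML format (the file path is a placeholder):

```python
import Metashape

chunk = Metashape.app.document.chunk
sensor = chunk.sensors[0]

# Load a pre-computed fisheye calibration exported as Metashape XML
# (the path is a placeholder).
calib = Metashape.Calibration()
calib.load("gopro_fisheye_calibration.xml", format=Metashape.CalibrationFormatXML)
sensor.user_calib = calib
sensor.fixed_calibration = True  # keep the calibration fixed during alignment

chunk.matchPhotos()
chunk.alignCameras()
```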
Thanks for your help!