I’m trying to determine a metric for the accuracy and reliability of measurements taken from a photogrammetric model created in PhotoScan.
If I take a picture using a 24 MP camera and a 60 mm lens from 1 meter away, the best resolution I will have is approximately 0.2 mm. That’s the footprint (roughly 0.2 mm × 0.2 mm) of a pixel normal to the camera, and I assume it’s the smallest piece of information I can obtain.
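For reference, a minimal sketch of the pixel-footprint (ground sample distance) arithmetic; the sensor dimensions below are an assumption (full-frame, 6000 × 4000 px), not a confirmed spec for this camera:

```python
# Back-of-the-envelope object-space pixel footprint (GSD) for one photo.
# Assumed sensor: full-frame 36 mm x 24 mm at 6000 x 4000 px (24 MP);
# swap in the real sensor dimensions for the actual camera body.

sensor_width_mm = 36.0    # assumed sensor width
image_width_px = 6000     # assumed image width (24 MP = 6000 x 4000)
focal_length_mm = 60.0    # lens focal length
distance_mm = 1000.0      # camera-to-object distance (1 m)

pixel_pitch_mm = sensor_width_mm / image_width_px           # pixel size on the sensor
gsd_mm = pixel_pitch_mm * distance_mm / focal_length_mm     # pixel footprint on the object

print(f"pixel pitch on sensor:  {pixel_pitch_mm:.4f} mm")   # ~0.006 mm
print(f"pixel footprint at 1 m: {gsd_mm:.3f} mm")           # ~0.1 mm
```

With these assumed sensor dimensions the footprint works out to roughly 0.1 mm per pixel; a larger sensor, a lower pixel count, or counting two pixels per resolvable feature would shift it toward the 0.2 mm figure above.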
When I take 500 of these pictures and compile them into a photogrammetric model using PhotoScan, I get a point cloud with point spacings of less than 0.05 mm. There is clearly a fair amount of interpolation and extrapolation going on.
I think the pixel size times the RMS reprojection error would fairly represent the best accuracy one could expect, but I have seen details in the models that should not have been resolved if that were the case.
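As a sketch of what I mean, using a hypothetical RMS reprojection error of 0.5 px (the real value would come from PhotoScan’s alignment report):

```python
# Rough single-point accuracy bound: pixel footprint x RMS reprojection error.
# The reprojection error here is a placeholder; use the value PhotoScan
# reports after alignment.

gsd_mm = 0.1                # object-space pixel footprint from the sketch above
rms_reprojection_px = 0.5   # hypothetical RMS reprojection error (pixels)

expected_accuracy_mm = gsd_mm * rms_reprojection_px
print(f"rough per-point accuracy estimate: {expected_accuracy_mm:.3f} mm")
```

That bound treats each point as if it came from a single image and ignores the fact that every point is intersected from many overlapping photos with sub-pixel matching, which may be part of why finer detail shows up than this simple estimate predicts.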
I assume the initial (sparse) point cloud from photo alignment effectively establishes benchmark locations over the model, and that dense cloud creation then carries that process further based on the imagery. But at what point do interpolation and extrapolation take over from accurate determination of XYZ coordinates based on the photo comparisons, and how accurate are those point determinations?
Any thoughts on this would be appreciated. I get the feeling at times that the precision of the models far exceeds the accuracy of the base information, like using a tape measure and recording the results to the fourth decimal place.