Hello all, thanks for stopping by. This is quite tricky, at least for me, to get my head around.
I've created a few models with UAV photogrammetry, utilising cheap direct-georeferencing. I know that it's not very accurate.
My question is: how does Photoscan calculate the error of camera positions without a known, fixed, georeferenced point to compare against? For example, the Total Error of these camera positions is 0.949:
http://imgur.com/a/M7Jnb (which is massive!)
Yet the Total Error of my scale bars is fairly low (calculated against lengths I measured manually in the field).
If anybody knows how the error of camera locations is calculated, could they please explain it to me? My thinking is that by lowering the camera location error (e.g. by using RTK or GCP referencing), you will generally end up with better scale measurements.
Thank you.
PS: Extra bonus discussion point! Does good photography then negate the effect of bad georeferencing?