Hello everyone, this is my first post on this forum. I have been reading a lot of threads here, and I'm also a user of PhotoScan - what a great piece of software.
I'm a student, and I'm currently working on my thesis, which focuses heavily on photogrammetry. I have been doing experiments for a while now, but naturally I also have to study the theory of the process (to better understand the mechanics, and to document the basic theory in my thesis). I have now understood most of the process, but there's one thing I really can't seem to wrap my head around: how does one determine the exterior orientation of the camera? More specifically, how is it done when we do not have a pre-determined coordinate system (such as ground control points)? Calculating with known points seems logical, but the lack of them drives me crazy, heh.
I'll quote here from the Agisoft wiki:
The parameters of the exterior orientation may be directly measured (with GPS and IMU systems), however, they are usually also estimated during photogrammetric procedures.
I understand that the lack of a pre-determined coordinate system often causes the model to be rotated arbitrarily in the software. But multiple sources heavily emphasize that in order to calculate the orientation of a camera (which, of course, is critical information, or the process fails) you need to know the world coordinate system. In PhotoScan this apparently happens during the "Align Photos" step of the workflow. So, when there are no pre-set coordinates or control points known to be in certain positions, does the software just choose some arbitrary orientation for the world coordinate system?
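To check whether I've got the first half right: my current understanding is that the relative orientation between two images can be recovered from the feature matches alone (via the essential matrix, if the cameras are calibrated), with the first camera simply fixed at the origin by convention. Here's a little sketch of what I mean using OpenCV - all the numbers are made up, and in reality pts1/pts2 would come from feature matching, so please correct me if this is not roughly what "Align Photos" does:

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Made-up scene: 3D points seen from two camera positions. In a real
# pipeline pts1/pts2 would come from feature matching, not projection.
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(30, 3))
rvec2 = np.array([0.0, 0.2, 0.0])    # second camera's rotation (made up)
tvec2 = np.array([-1.0, 0.0, 0.1])   # second camera's translation (made up)

pts1, _ = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, None)
pts2, _ = cv2.projectPoints(X, rvec2, tvec2, K, None)
pts1, pts2 = pts1.reshape(-1, 2), pts2.reshape(-1, 2)

# The essential matrix encodes the relative pose between the two views,
# computed from the matches alone -- no world coordinates needed.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)

# Decompose E into a rotation R and a translation t. Note that t comes
# out with unit length: the true scale of the baseline is unrecoverable
# from the images alone.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("recovered R:\n", np.round(R, 3))
print("recovered t (unit length):", np.round(t.ravel(), 3))
```

If I run this, the recovered R and t match the relative pose I put in, up to that unknown scale - and the first camera's pose effectively *is* the arbitrary choice of world frame.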
I have run into the term "bundle adjustment", which seems to do most of the work as far as reconstructing the scene goes. Could one say that with bundle adjustment we do not need any initial information (ground control points, initial camera positions, etc.) to start processing? Is it all just "magic" done by the apparently very powerful bundle adjustment?
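To make that question concrete, below is a toy version of what I *imagine* bundle adjustment does: jointly tweaking camera poses and 3D points to minimize reprojection error, with nothing but pixel observations as input. The data is synthetic and the setup is my own guess, not PhotoScan's actual implementation:

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Made-up ground truth: 8 points seen by two cameras (no GCPs anywhere).
points_true = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(8, 3))
rvecs_true = [np.zeros(3), np.array([0.0, 0.2, 0.0])]
tvecs_true = [np.zeros(3), np.array([-1.0, 0.0, 0.0])]

def project(points, rvec, tvec):
    proj, _ = cv2.projectPoints(points, rvec, tvec, K, None)
    return proj.reshape(-1, 2)

# The only real input to bundle adjustment: pixel observations.
obs = [project(points_true, r, t) for r, t in zip(rvecs_true, tvecs_true)]

def residuals(params):
    # params = [rvec0, tvec0, rvec1, tvec1, all 3D point coordinates]
    cams = params[:12].reshape(2, 6)
    pts = params[12:].reshape(-1, 3)
    return np.concatenate([
        (project(pts, cams[i, :3], cams[i, 3:]) - obs[i]).ravel()
        for i in range(2)
    ])

# Start from a perturbed guess (in a real pipeline this would come from
# the pairwise relative orientation) and minimize reprojection error.
x_true = np.concatenate([rvecs_true[0], tvecs_true[0],
                         rvecs_true[1], tvecs_true[1],
                         points_true.ravel()])
x0 = x_true + rng.normal(0.0, 0.05, x_true.shape)
result = least_squares(residuals, x0)

# The error drops to ~0, but the recovered frame can still differ from
# the "true" one by a rotation, translation and scale, because nothing
# in the observations pins those down.
print("cost before:", 0.5 * np.sum(residuals(x0) ** 2))
print("cost after: ", result.cost)
```

What strikes me here is that the optimization happily converges to *a* consistent solution without any world coordinates - which seems to be exactly the behaviour I'm asking about.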
I think my question can be simplified down to this: how does PhotoScan (or photogrammetry in general) solve the problem of calculating camera orientations without a way to determine the world coordinate system? (Because the software then has no way of knowing which way is which, and things could be "upside down" from our human perspective...)
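To show concretely what I mean by "no way of knowing which way is which": as far as I can tell, you can rotate, translate and scale the entire reconstruction, compensate in the camera poses, and every image stays pixel-for-pixel identical. Here's a small NumPy/OpenCV check I put together (all numbers made up):

```python
import numpy as np
import cv2

rng = np.random.default_rng(1)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], (10, 3))  # made-up points
R = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))[0]             # made-up pose
t = np.array([0.3, -0.1, 0.2])

def project(R, t, X):
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

# An arbitrary similarity transform of the whole "world":
s = 2.5
Rg = cv2.Rodrigues(np.array([0.4, 0.3, -0.7]))[0]
tg = np.array([5.0, -2.0, 1.0])
X2 = (s * Rg @ X.T).T + tg      # move and rescale every point
R2 = R @ Rg.T                   # compensate in the camera pose...
t2 = s * t - R @ Rg.T @ tg      # ...so every image stays identical

print(np.allclose(project(R, t, X), project(R2, t2, X2)))  # True
```

If both reconstructions explain the images equally well, the photos alone can never tell them apart - so I assume the software has to just pick one, but I'd love to have that confirmed.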
I'd greatly appreciate it if someone could clarify this problem for me. A simply put explanation would be great (I don't need the mathematical details, just the logic of the solution), or a pointer to some source material that explains it.