
Author Topic: Help understanding the theory of camera orientation  (Read 4896 times)

Irratium

  • Newbie
  • *
  • Posts: 2
Help understanding the theory of camera orientation
« on: April 20, 2013, 11:05:24 AM »
Hello everyone, this is my first post on this forum. I have been reading a lot of threads here, and I'm also a user of PhotoScan - what a great piece of software.

I'm a student, and I'm currently working on my thesis, which focuses heavily on photogrammetry. I have been doing experiments for a while now, but naturally I also have to study the theory behind the process (to better understand the mechanics, and to document the basic theory in my thesis). I have now understood most of the process, but there's one thing that I really can't seem to wrap my head around: how is the exterior orientation of the camera determined, specifically when there is no pre-determined coordinate system (such as ground control points)? Calculating with known points seems logical, but the lack of them drives me crazy, heh.

I'll quote here from the Agisoft wiki:
Quote
The parameters of the exterior orientation may be directly measured (with GPS and IMU systems), however, they are usually also estimated during photogrammetric procedures.

I understand that the lack of a pre-determined coordinate system often causes the model to end up arbitrarily rotated in the software. But multiple sources heavily emphasize that in order to calculate the orientation of the camera (which is, of course, critical information, or the process fails) you need to know the world coordinate system. In PhotoScan, this apparently happens during the "align photos" step of the workflow. So when there are no pre-set coordinates or control points known to be in certain positions, does the software just choose some arbitrary orientation for the world coordinate system?

I have run into the term "bundle adjustment", which seems to do most of the work as far as reconstructing the scene goes. Could one say that with bundle adjustment we do not need any initial information (ground control points, initial camera positions, etc.) to start processing? Is it all just "magic" done by the apparently very, very powerful bundle adjustment?
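
From what I've gathered, bundle adjustment essentially just minimizes the total reprojection error over all camera poses and 3D points simultaneously. Here is a toy sketch I put together to convince myself (purely illustrative and certainly not PhotoScan's actual implementation; the scene, the focal length and all the names are made up):
Code:
import numpy as np
from scipy.optimize import least_squares

def rodrigues(w):
    # Rotation matrix from an angle-axis vector w (Rodrigues' formula)
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    a = w / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def residuals(params, n_cams, n_pts, obs, f):
    # Unpack [per-camera angle-axis + translation | 3D points] and return
    # the reprojection error of every observation (cam index, point index, xy)
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for ci, pi, xy in obs:
        Xc = rodrigues(cams[ci, :3]) @ pts[pi] + cams[ci, 3:]
        res.append(f * Xc[:2] / Xc[2] - xy)   # projected minus observed
    return np.concatenate(res)

# Tiny synthetic scene: 2 cameras, 8 points, focal length f
rng = np.random.default_rng(0)
f = 800.0
true_cams = np.array([[0, 0, 0, 0, 0, 0],       # camera 1 at the origin
                      [0, 0.2, 0, -1, 0, 0]])   # camera 2 rotated and shifted
true_pts = rng.standard_normal((8, 3)) + [0, 0, 6]

obs = []
for ci in range(2):
    for pi in range(8):
        Xc = rodrigues(true_cams[ci, :3]) @ true_pts[pi] + true_cams[ci, 3:]
        obs.append((ci, pi, f * Xc[:2] / Xc[2]))

# Start from a perturbed guess; the optimizer pulls everything back so that
# the reprojections fit -- no ground control points involved anywhere.
x0 = np.concatenate([true_cams.ravel(), true_pts.ravel()])
x0 += 0.05 * rng.standard_normal(x0.shape)
fit = least_squares(residuals, x0, args=(2, 8, obs, f))
print("RMS reprojection error:", np.sqrt(np.mean(fit.fun ** 2)))

Note that nothing in there fixes an absolute position, orientation or scale - the solution only ends up near wherever the initial guess happened to be, which I suppose is exactly the ambiguity I'm asking about.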

I think my question could be simplified down to this: how does PhotoScan (or photogrammetry in general) solve the problem of calculating camera orientations without a way to determine the world coordinate system? (Because the software then has no way of knowing which way is which, and things could be "upside down" from our human perspective...)
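
To show concretely what I mean: as far as I understand, applying one and the same rotation, translation and scaling to every camera and every 3D point leaves all the image projections untouched, so the images alone can never pin the world frame down. A quick numpy check of my understanding (all the numbers are invented):
Code:
import numpy as np

def skew(a):
    # Skew-symmetric matrix so that skew(a) @ v == np.cross(a, v)
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

def rotation(axis, angle):
    # Rotation matrix from an axis-angle pair (Rodrigues' formula)
    axis = axis / np.linalg.norm(axis)
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def project(K, R, t, X):
    # Pinhole projection of a world point X into pixel coordinates
    x = K @ (R @ X + t)
    return x[:2] / x[2]

rng = np.random.default_rng(1)
K = np.array([[1000.0, 0, 500], [0, 1000, 500], [0, 0, 1]])
R = rotation(rng.standard_normal(3), 0.3)    # some camera pose
t = np.array([0.2, -0.1, 2.0])
X = rng.standard_normal(3) + [0, 0, 5]       # a point in front of the camera

# One global similarity transform applied to the whole "world":
Rg = rotation(rng.standard_normal(3), 1.1)
tg = rng.standard_normal(3)
s = 3.7

X2 = s * (Rg @ X) + tg          # move the point ...
R2 = R @ Rg.T                   # ... and move the camera consistently
t2 = s * t - R2 @ tg

print(project(K, R, t, X))      # both print the same pixel coordinates,
print(project(K, R2, t2, X2))   # so any world frame is equally valid

If that's correct, then some frame simply has to be picked by convention - which is what I'm trying to confirm.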

I'd greatly appreciate it if someone could clarify this problem for me. A simply-put explanation would be great (I don't need the mathematical details, just the logic of the solution), or a pointer to some source material that explains things.

RalfH

  • Sr. Member
  • ****
  • Posts: 344
Re: Help understanding the theory of camera orientation
« Reply #1 on: April 20, 2013, 02:04:32 PM »
As you suspected, PhotoScan does not need a world coordinate system to work. It seems the geometry reconstruction starts with a single pair of images and assumes arbitrary coordinates for the camera positions; as more cameras are added sequentially, their coordinates are derived from this. For example, in Agisoft StereoScan (which supposedly uses more or less the same algorithm), I found that the distance between the cameras always seems to be very close to 1. In general, photogrammetry always requires some kind of coordinate system, but it does not need to be a world coordinate system. What I don't know is whether PhotoScan Pro also starts with an arbitrary coordinate system and later transforms it to world coordinates, or whether world coordinates are used from the beginning.
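
The "distance close to 1" makes sense if the first pair is solved with a standard relative-orientation method: from two images alone, the translation between the cameras can only be recovered up to an unknown scale, so implementations conventionally normalize the baseline to length 1. A quick illustration with OpenCV on synthetic data (this is the generic textbook approach; I don't know what PhotoScan does internally):
Code:
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])

# Synthetic scene: camera 1 at the origin, camera 2 shifted 2.5 units
# to the side -- so the true baseline length is 2.5.
X = rng.standard_normal((50, 3)) + [0, 0, 8]
t_true = np.array([2.5, 0, 0])

def project(K, Xc):
    # Pinhole projection of points given in camera coordinates (Nx3)
    x = (K @ Xc.T).T
    return x[:, :2] / x[:, 2:]

pts1 = project(K, X)             # image 1: points seen from the origin
pts2 = project(K, X - t_true)    # image 2: same points, shifted camera

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print(np.linalg.norm(t))   # 1.0 -- the true baseline of 2.5 is gone;
print(t.ravel())           # only the direction of the motion survives

So each reconstruction lives in its own arbitrary frame with a conventional unit scale, and everything else (further cameras, the dense model) gets attached to that frame.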