After successfully digitizing an object, I am now thinking about scanning environments. When I processed the photos, the software produced a model with an arbitrary size and orientation. That is fine for freeform model building, but I see a lot of good information being wasted. In particular, PhotoScan knows the relative orientations of the camera perspectives when the photos were taken. It would be nice if I could tell the software how far apart the camera positions were and what their inclinations were; that would help in setting up a coordinate system I could use with my other software. With this information the software could give the object the right size and orientation. It would also be nice if PhotoScan could optionally export a marker for each camera position in the model, which would help in aligning the scanned environment with the CG virtual environment.
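If PhotoScan cannot do this for me, I assume I could at least recover the scale manually after export. A rough sketch of what I have in mind, using two camera positions from the arbitrary model coordinates and the real distance I measured between them (all numbers below are made up for illustration, not real measurements):

```python
import numpy as np

def scale_model(points, cam_a, cam_b, measured_dist):
    """Scale model points so the distance between two known camera
    positions matches the distance measured in the real world."""
    model_dist = np.linalg.norm(np.asarray(cam_a, float) - np.asarray(cam_b, float))
    factor = measured_dist / model_dist
    return np.asarray(points, float) * factor, factor

# Two cameras that came out 2.0 model-units apart, but which I
# measured at 1.5 m apart with the tape measure:
points = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5]])
scaled, factor = scale_model(points, [0, 0, 0], [2, 0, 0], 1.5)
# factor is 0.75, so every model coordinate shrinks by that ratio
```

It would obviously be better if the software did this internally, averaging over several measured camera pairs instead of trusting just one.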
In anticipation of assisting in this process, I have purchased a geared camera head that can be leveled. I can control the three rotational axes and measure them to a fraction of a degree. I also have tape measures for short distances and a laser range finder for distant objects up to about 1000 m away. It would be nice to feed this information to PhotoScan and have it solve my coordinate system for me (I will also need a compass). I understand that many cell phones have GPS and even embed that information in the EXIF metadata associated with the photos. However, I do not have a cell phone; I will be using a higher-quality camera for all of my photos and will not have GPS data (which is only accurate to ~3 m anyway).
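For orientation, my thinking is that the pan/tilt/roll readings from the geared head (with the compass supplying an absolute bearing for pan) define a rotation that could bring the model into alignment with my CG scene's axes. A sketch of that idea, again with made-up angle values:

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Rotation matrix from yaw (about Z), pitch (about Y), and
    roll (about X), all in degrees, applied in Z-Y-X order."""
    y, p, r = np.radians([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0,       0.0      ],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx

# A 90-degree pan with the head leveled (no tilt or roll)
# should carry the +X axis onto +Y:
R = rotation_zyx(90.0, 0.0, 0.0)
rotated = R @ np.array([1.0, 0.0, 0.0])
```

The head gives me the angles to a fraction of a degree, so the limiting factor would be how accurately I can read the compass.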
Is it possible to do all of these things in PhotoScan, or will I have to orient and scale everything manually?