Hello everyone,
I'm working on a project that involves rendering a view from a 3D model, previously reconstructed from a set of drone acquisitions, and comparing it with a photo of the real structure that the model represents, taken later with a phone or tablet. Obviously the image rendered from the model has to be taken from the same viewpoint as the later photo, and that is exactly my problem: how can I obtain this viewpoint?
I know that rendering an image from a model requires the camera transform matrix and the calibration data.
In my case I have calibration data from the EXIF of the image, but I can't compute the transform matrix: when I try to align the later photo together with the set of drone acquisitions, PhotoScan fails to align it.
Is there another way to obtain the camera transform matrix, i.e. the position and rotation of the camera?
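One fallback I've read about (not a built-in PhotoScan feature, just the classical approach) is to pick a handful of correspondences by hand: click several points in the photo and the matching 3D points on the model, then solve for the pose directly, e.g. with OpenCV's cv2.solvePnP. A minimal numpy-only sketch of the Direct Linear Transform (DLT) variant, assuming an intrinsic matrix K built from the EXIF focal length (function and variable names are my own, for illustration):

```python
import numpy as np

def estimate_pose_dlt(points_3d, points_2d, K):
    """Estimate camera rotation R and position C from >= 6 hand-picked
    2D-3D correspondences using the Direct Linear Transform (DLT).

    points_3d : (N, 3) model coordinates of the picked points
    points_2d : (N, 2) pixel coordinates of the same points in the photo
    K         : (3, 3) intrinsic matrix (focal length / principal point,
                e.g. derived from the EXIF data)
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The 3x4 projection matrix P is the null vector of A
    # (right singular vector with the smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)

    # Remove the intrinsics: M = K^-1 P is proportional to [R | t].
    M = np.linalg.inv(K) @ P
    # Fix the unknown scale: the rows of the rotation part must be unit length.
    M /= np.mean(np.linalg.norm(M[:, :3], axis=1))
    R, t = M[:, :3], M[:, 3]
    # Snap R to the closest true rotation matrix.
    U, _, Vt2 = np.linalg.svd(R)
    R = U @ Vt2
    if np.linalg.det(R) < 0:   # resolve the overall sign ambiguity of P
        R, t = -R, -t
    C = -R.T @ t               # camera position in model coordinates
    return R, C
```

The returned R and C are exactly the rotation and position needed to place a virtual camera in the model and render from the photo's viewpoint. With noisy clicks one would normally refine this initial DLT estimate (e.g. cv2.solvePnP with iterative refinement), but even the raw DLT should give a usable starting pose.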
PS: Sorry for my bad English.
