I'm trying to automate the generation of orthomosaics and models. There's currently just one problem: when I export the model, the origin point doesn't align with the "ground" plane the way I'd expect.
https://i.imgur.com/ICYrDxE.jpg

The photos are taken with a DJI drone, so there should be sufficient altitude data to rectify this issue.
My first plan was to simply get the transform location of the camera in PhotoScan and then offset the chunk by its difference from the altitude reported in the EXIF data. However, I believe some sort of coordinate system conversion needs to happen first, since the cameras' translation vectors look like, for example:
cam.transform.translation()
Out[138]: 2018-08-16 11:45:05 Vector([-0.41432089205811645, 4.574950116471244, 0.02561866387081857])
Meanwhile, the chunk translation is:
chunk.transform.translation
Out[145]: 2018-08-16 11:47:10 Vector([-2238541.869777814, -3528167.781020327, 4742206.90104938])
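For what it's worth, the magnitude of that chunk translation vector is close to the Earth's radius, which makes me suspect it's in geocentric (ECEF) meters while the camera translations are in the chunk's internal coordinates. A quick sanity check (plain Python, numbers copied from the console output above):

```python
import math

# Chunk translation vector from the console output above.
t = (-2238541.869777814, -3528167.781020327, 4742206.90104938)

# If these are geocentric (ECEF) meters, the magnitude should be
# roughly the Earth's radius (~6.37e6 m).
magnitude = math.sqrt(sum(c * c for c in t))
print(magnitude)  # comes out around 6.32e6
```

Does that reasoning hold, or is the chunk transform in some other frame?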
I'm not sure in what units these are supposed to be or how to proceed. Any help is appreciated!
The camera reports its altitude as:
2018-08-16 11:49:33 drone-dji:AbsoluteAltitude="+64.49"
2018-08-16 11:49:33 drone-dji:RelativeAltitude="+51.20"
Thanks
P.S. Is it possible that it's using the AbsoluteAltitude to determine the ground plane? How could I attempt to translate the chunk "down" by a set distance in meters?
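To make that last question concrete: if the chunk transform really is in geocentric meters, then the local "down" direction at the chunk's location should be roughly minus the normalized translation vector, so shifting down by a set number of meters would look something like the sketch below. This is plain Python with no PhotoScan calls, and assigning the result back into chunk.transform.matrix is exactly the part I'm unsure about.

```python
import math

def shift_down(translation, meters):
    """Shift a geocentric translation 'down' (toward the Earth's center)
    by the given number of meters. Assumes the input is in ECEF meters,
    where the normalized position vector approximates local 'up'."""
    norm = math.sqrt(sum(c * c for c in translation))
    up = [c / norm for c in translation]  # local "up" unit vector
    return [c - meters * u for c, u in zip(translation, up)]

# Example with the chunk translation from above, shifted down by
# AbsoluteAltitude - RelativeAltitude = 64.49 - 51.20 = 13.29 m:
t = (-2238541.869777814, -3528167.781020327, 4742206.90104938)
print(shift_down(t, 64.49 - 51.20))
```

Is something along these lines the right approach, or is there a built-in way to do this?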