General / Can Photoscan improve alignment by using drone inertial data?
« on: February 28, 2018, 07:06:20 PM »
Hello,
I have a very difficult image sequence acquired underground by a drone, presenting various challenges (non-uniform illumination, some fast movement, dust, etc.). I worked very hard to get the alignment to work, but it succeeded in aligning only 75% of the images, and even within that 75% there were many problems: the estimated camera path is nothing like the real flight path!
I thought that a great way to help the alignment would be for Photoscan to also use the drone's inertial data to better constrain the alignment solution, or even to reject candidate solutions entirely when they are physically impossible. In doing so, you would essentially get a visual-inertial SLAM algorithm, like the ones used in autonomous vehicles, instead of relying on photogrammetry alone.
Is this something Photoscan could do? If so, what format should the inertial data be in for Photoscan to use it?
Regards,
Bruno