Forum

Author Topic: Can Photoscan improve alignment by using drone inertial data?  (Read 2366 times)

bmartin

  • Newbie
  • Posts: 15
Can Photoscan improve alignment by using drone inertial data?
« on: February 28, 2018, 07:06:20 PM »
Hello,

I have a very difficult image sequence acquired underground by a drone, presenting various challenges (non-uniform illumination, some fast movement, dust, etc.). I worked very hard to get the alignment to work, but it succeeded in aligning only 75% of the images, and even within that 75% there were many problems: the estimated camera path is nothing like the real flight path!

I thought that a great way to help the alignment would be for PhotoScan to also use the inertial data to better constrain the alignment solution, or even to reject solutions completely if they are deemed impossible. In doing so, you would essentially get a visual-inertial SLAM algorithm like the ones used in autonomous vehicles, instead of relying on photogrammetry alone.

Is this something that PhotoScan could do? If so, what format would the inertial data need to be in for PhotoScan to use it?

Regards,

Bruno

SAV

  • Hero Member
  • Posts: 710
Re: Can Photoscan improve alignment by using drone inertial data?
« Reply #1 on: March 01, 2018, 10:00:07 AM »
Hi Bruno,

Inertial data from your drone cannot be used in PhotoScan (it doesn't employ SLAM).

I have processed underground UAV datasets before. Make sure that you delete/disable all blurry images resulting from fast/jerky UAV movements (use the Estimate Image Quality function) and mask out all parts of the images that show dust or any other unwanted objects.
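If you want to automate that quality check, something along these lines should work in the Python console of PhotoScan Pro. This is only a minimal sketch: it assumes the PhotoScan 1.x scripting API, and the 0.5 cut-off is just the usual rule of thumb, so adjust it for your dataset.

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Estimate image quality for every camera in the active chunk
chunk.estimateImageQuality(chunk.cameras)

threshold = 0.5  # assumed cut-off, tune for your data
for camera in chunk.cameras:
    # Quality value is stored in the photo metadata after estimation
    quality = float(camera.photo.meta["Image/Quality"])
    if quality < threshold:
        camera.enabled = False  # disable blurry images instead of deleting them

Disabling (rather than deleting) keeps the images in the project, so you can re-enable any of them later if the alignment still has gaps.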

I actually extracted still frames from a 4K video acquired by the drone, because that gives the SfM algorithm more data to work with compared to still photos, where the image overlap might not be sufficient.
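In case it helps, frame extraction can also be scripted outside PhotoScan. Here is a rough sketch using OpenCV in Python; the file names and the frame step are placeholders, so pick a step that still gives you plenty of overlap between consecutive frames.

Code:
import os
import cv2

video_path = "flight_4k.mp4"   # placeholder input video
out_dir = "frames"             # placeholder output folder
step = 15                      # keep every 15th frame (~2 fps from a 30 fps source)

os.makedirs(out_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        # Write the selected frame as a numbered JPEG for import into PhotoScan
        cv2.imwrite(os.path.join(out_dir, "frame_%05d.jpg" % saved), frame)
        saved += 1
    index += 1

cap.release()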

All the best.

Regards,
SAV