« on: February 02, 2016, 02:48:11 AM »
This is a long thank-you post to Agisoft Support.
I’ve been trying to create high-resolution DEMs in order to calculate the internal volume of containment berms around oil storage tanks, using images taken with a UAV.
The problem has been that although normal orthogonal (camera straight down) images will stitch in Photoscan and generate a DEM, they don’t detect and report variations in the tops of the structures well, because of the triangulation problems that come with two overlapping images taken from almost the same, nearly vertical angle.
On the other hand, if you fly the UAV lower and point the camera back towards the top of the structure being modeled, Photoscan will more accurately pick up altitude changes in the top of the structure, but flying a UAV mission that returns an image set where every picture will stitch is very difficult.
Usually, on a four-sided rectangular containment berm, one or two of the sides will stitch, but on at least one side some or all of the images are left non-aligned.
The nice folks at Agisoft Support looked at a small subset of clear images with an overlap above 70 percent where one row was taken directly over the berm with the camera pointed straight down, then one row of images down each side with the camera pointed at 60 degrees down from horizontal.
They told me that the reason the images didn’t stitch was that the angle between the images was too great, and they suggested that a 30 to 35 degree difference was the most Photoscan could work with.
I went back and re-designed the autonomous UAV mission to take a vertical set, then a sideways set at 30 degrees from horizontal, and the images stitched properly every time.
Then I went back and added two more legs down each side at 60 degrees from horizontal and found that now they would align reliably.
Because they took the time to look at my images and make an informed response, they saved me days of work trying to adjust the overlap and distances, all of which would have been a waste of time.
The other thing I learned was that including the camera orientation values from the mission flight log helped Photoscan tremendously in aligning all of the images properly.
For these missions I used a DJI Phantom 3 Pro. By calculating the camera direction (YAW) and camera vertical angle (GIMBAL PITCH) for each image, I was able to set the accuracy values for all images to 1, and I have yet to have a non-aligned camera, even when I include the images taken during the turns at the end of each leg.
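For anyone who wants to try the same thing, here is a rough sketch in Python of the first step, pulling the position and camera angles out of a flight log that has been exported to CSV. Every column name in it is a placeholder (log converters all name things differently), so adjust them to match whatever your export actually contains:

import csv
from datetime import datetime

# Rough sketch only: read the per-record position and camera angles out of a
# flight log that has already been exported to CSV.  Every column name below
# is a placeholder, so change it to whatever your export actually contains.
def read_log(path):
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records.append({
                "time":  datetime.strptime(row["datetime"], "%Y-%m-%d %H:%M:%S.%f"),
                "lat":   float(row["latitude"]),
                "lon":   float(row["longitude"]),
                "alt":   float(row["height_above_takeoff"]),  # relative height, not the EXIF GPS altitude
                "yaw":   float(row["aircraft_yaw"]),          # heading in degrees
                "pitch": float(row["gimbal_pitch"]),          # -90 = camera straight down
            })
    records.sort(key=lambda r: r["time"])
    return records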
The DJI Phantom 3 Pro also includes worthless altitude information in the EXIF data associated with each image, which DJI calls “GPS Altitude”. I have read that GPS altitude tends to be inaccurate in general, but with DJI it’s more than that: I live near the coast, and I have yet to export a single mission of images where the altitude values in the EXIF data were above sea level.
For my platforms sold in the US, DJI also writes the GPS altitude in feet, which Photoscan, using the WGS 84 coordinate system, takes as meters. That means all the alignment angles are off, and Photoscan quite properly starts disregarding the altitude values, which leaves some cameras positioned below the ground, some up in the sky, whatever.
By using the flight log data you can create a Reference/Import file that overwrites the incorrect DJI altitude value with a more rational “Mission Altitude” value, which is referenced from the take-off home point and seems to be very reliable.
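As a made-up example of what I mean by a more rational value: the log height is relative to the take-off point, so if you know the elevation of that point you can build the altitude yourself instead of trusting the EXIF number:

# Build a usable altitude from the flight log instead of the EXIF "GPS Altitude".
# home_elevation_m is something you have to supply yourself (survey, map, GNSS);
# the 0.3048 factor only applies if your log happens to report height in feet.
def reference_altitude(height_above_home, home_elevation_m, height_in_feet=False):
    if height_in_feet:
        height_above_home *= 0.3048   # feet to meters
    return home_elevation_m + height_above_home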
The trick with Photoscan is that you can’t just import camera orientation values.
If you’re going to import anything, then as far as I can tell, you have to import everything.
That means you also have to calculate the lat/lon values for each image based upon its EXIF time, even though the flight log info doesn’t correspond to individual images; the log just writes a record every few milliseconds, so you have to interpolate between records.
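To show what that looks like in practice, here is a stripped-down sketch, not my actual script: it reads each image’s EXIF capture time, interpolates between the two log records that bracket it, and writes one line per image that the Reference pane’s import dialog can read. It assumes the camera clock and the flight controller clock agree (in practice you usually have to apply a fixed offset), it reuses the made-up record layout from the earlier snippet, and you may still need to convert the gimbal pitch into whatever angle convention Photoscan expects, which I haven’t shown here:

import bisect
import csv
import os
from datetime import datetime

import exifread   # pip install exifread; any EXIF reader will do


def capture_time(image_path):
    # EXIF DateTimeOriginal looks like "2016:01:28 14:03:07"
    with open(image_path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    return datetime.strptime(str(tags["EXIF DateTimeOriginal"]), "%Y:%m:%d %H:%M:%S")


def interpolate(records, t):
    # records: list of dicts with keys time/lat/lon/alt/yaw/pitch, sorted by
    # time (for example the output of read_log() above).  Linear interpolation
    # between the two log records that bracket the image time; yaw wrap-around
    # at 0/360 degrees is ignored here to keep the sketch short.
    times = [r["time"] for r in records]
    i = bisect.bisect_left(times, t)
    if i <= 0:
        return records[0]
    if i >= len(records):
        return records[-1]
    a, b = records[i - 1], records[i]
    span = (b["time"] - a["time"]).total_seconds()
    w = 0.0 if span == 0 else (t - a["time"]).total_seconds() / span
    return {k: a[k] + w * (b[k] - a[k]) for k in ("lat", "lon", "alt", "yaw", "pitch")}


def write_reference(image_paths, records, out_path):
    # One row per image: label, position, orientation, and an accuracy of 1,
    # in a plain CSV whose columns you map in the Reference import dialog.
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["label", "lon", "lat", "alt", "yaw", "pitch", "roll", "accuracy"])
        for img in image_paths:
            rec = interpolate(records, capture_time(img))
            w.writerow([os.path.basename(img),
                        rec["lon"], rec["lat"], rec["alt"],
                        rec["yaw"], rec["pitch"], 0.0, 1])

Feed it the sorted list of image paths plus the parsed log, and the resulting CSV should import into the Reference pane once the columns are mapped accordingly.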
It is possible, and I’ll help anyone interested if I can.
Thanks, Agisoft, for a great product and informed support.
Hank
Texas