Author Topic: Accounting for lens and bundle adjustment error in uncertainty estimates

andyroo

Hi Everybody!

Looking for advice on error analysis to better estimate uncertainty for change-volume calculations based on surfaces I produce with PhotoScan. Also looking for ways to better calculate the uncertainty associated with a given raster cell of the DEM, or with the point cloud (possibly including camera information).

I am struggling to accurately quantify the uncertainty of my modeled surface. I have tons of checkpoints and get "pretty good" RMSE/StDev on them (StDev of checkpoints on surfaces is 10-20 cm). But I know that my error is not spatially uniform/randomly distributed, and that prevents me from accurately quantifying uncertainty (it gets much harder when you can't eliminate systematic error).
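(For reference, the RMSE/StDev numbers above are just the usual stats over the vertical checkpoint residuals. A minimal numpy sketch, where checkpoint_residuals is a made-up name standing in for however you store surface-minus-checkpoint differences:)

Code:
import numpy as np

# checkpoint_residuals: hypothetical array of vertical residuals,
# modeled surface elevation minus checkpoint elevation.
dz = np.asarray(checkpoint_residuals)

bias = dz.mean()                  # systematic offset
stdev = dz.std(ddof=1)            # spread about the mean
rmse = np.sqrt(np.mean(dz**2))    # combines bias and spread
print(f"bias {bias:.3f} m, StDev {stdev:.3f} m, RMSE {rmse:.3f} m")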

The major source of error is low frequency (hundreds of meters), low magnitude (~25cm) surface undulation between control points caused - I am pretty sure - by (1) small errors in calibration for lens distortion (consumer camera), and by (2) tiny errors (possibly) in bundle block adjustment and consequently camera position.

I've attached an image showing these low-magnitude errors (faint reddish or blueish hues shifting from yellow) and a plot of the vertical difference between two flights over an unchanged surface. Values are in meters.

I think that the best way to remove these systematic errors would be to correct the camera orientation parameters using additional check points (from a third "reference" surface like lidar), possibly in combination with GCPs, to constrain the vertical and help optimize the bundle adjustment/lens model. But that would have to be software-supported, or would require more complicated math and programming than my geologist brain is ready for (unless anyone has some pointers).

Absent that, I think my best bet is to create an estimated error (de-trending) surface from differences between the raster surface and some "reference" data source like RTK GPS points or points unchanged since lidar surveys. I've seen that done with traditional photogrammetry, but typically it's applied to a calibrated camera and it's a simple linear transformation. I am pretty sure I'd need at least a third-order polynomial, or a kriging surface.
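If I go that route, the polynomial version at least seems tractable. A rough numpy sketch of what I have in mind (x, y, dz are hypothetical arrays of reference-point coordinates and raster-minus-reference differences; grid_x, grid_y, and dem are placeholders for my DEM grid):

Code:
import numpy as np

def poly3_design(x, y):
    # Design matrix for a full third-order polynomial in x and y (10 terms)
    return np.column_stack([np.ones_like(x), x, y,
                            x**2, x*y, y**2,
                            x**3, x**2*y, x*y**2, y**3])

# Fit the error (de-trending) surface to the reference-point differences
coeffs, *_ = np.linalg.lstsq(poly3_design(x, y), dz, rcond=None)

# Evaluate the fitted trend on the DEM grid and subtract it
gx, gy = np.meshgrid(grid_x, grid_y)
trend = poly3_design(gx.ravel(), gy.ravel()) @ coeffs
dem_detrended = dem - trend.reshape(gx.shape)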

jamesm

Hi,
It looks like your longest-wavelength error here could well be due to error in the description of radial lens distortion, which will be dominated by the value of the first parameter (k1). I'm assuming you used a self-calibrating bundle adjustment(?), which can be susceptible to these systematics if the optic axes of matched images are near-parallel.

One option to explore may be to manually adjust the k1 parameter to try and get a better surface:
Fix the camera model with your current best parameter values, then make a small adjustment (e.g. 5%) to k1 and rerun the bundle adjustment. See if that reduces or increases your systematic error. You can repeat this until you have a 'best' solution. Better still, if you can quantify the systematic error so that you can plot an error metric against k1 values, you can model the relationship and define a best-fit k1 value.

For 'simple' scenarios in which systematic error shows as symmetric doming in DEM surfaces, I have just fitted a straight line to one half of the error curve and used the gradient of the line as an error metric (zero gradient = no systematic error). With this, only a few k1 values need to be tried to determine a good value. In simulations and real imagery I have found this can reduce error by at least an order of magnitude.
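In code terms the whole procedure is a small parameter sweep. A Python sketch (run_bundle_adjustment and dem_error_profile are hypothetical stand-ins for your own processing chain, and k1_current for your current best value; PhotoScan doesn't expose this as a one-liner):

Code:
import numpy as np

k1_values = k1_current * (1 + np.linspace(-0.10, 0.10, 5))  # e.g. +/-10% around current k1
gradients = []

for k1 in k1_values:
    run_bundle_adjustment(k1_fixed=k1)   # hypothetical: rerun the BA with k1 held fixed
    x, dz = dem_error_profile()          # hypothetical: position vs vertical error along a transect
    half = x > x.mean()                  # straight-line fit to one half of the error curve
    slope, _ = np.polyfit(x[half], dz[half], 1)
    gradients.append(slope)              # zero gradient = no systematic doming

# The gradient varies roughly linearly with k1, so solve the linear fit for its zero
a, b = np.polyfit(k1_values, gradients, 1)
best_k1 = -b / a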

If you are interested in more details, see:
http://onlinelibrary.wiley.com/doi/10.1002/esp.3609/abstract
or drop me a line directly if you don't have access.
Mike

David Cockey

There is no guarantee that the form of lens distortion in the model which PhotoScan uses is a perfect match for the actual lens distortion. If it is not, then PhotoScan will not be able to perfectly correct for lens distortion.

andyroo

Hi David and Jamesm,

Thank you both for responding.

jamesm,

Thanks for the tip, and thanks for the link! That's an awesome article! And very timely for me. Seems like PS must be using a self-calibrating bundle adjustment. I have about 40 flights over the same study area, with SCBA-derived camera parameters for each one, although the last several have been "seeded" with the same fit to speed up my GCP entry. My GCPs are pretty dense, which I guess is why I don't see the doming too badly, but it looks like I need to take a hard look at that paper and try to duplicate some of your techniques for removing systematic error by modeling it from a single flight line. I have flights with lidar data of the same surface with nice flat(ish) riverbed terrain, so that shouldn't be too hard to do.

--EDIT-- I was just thinking about what your paper said about convergent views and possible approaches, and wondering: would it make sense to calibrate the lens by taking pictures of some object (say a statue or building), walking all the way around it, or at least doing an arc on one face? I guess I don't even need real-world points if I am just trying to get lens calibration parameters, but I could shoot reference points on corners of a building and make a model that way, and just take a ridiculous number of converging pictures. Just not sure yet how much my lens parameters change between camera power cycles. --END EDIT--

If I can reduce the systematic error by an order of magnitude I'll be in the 2 cm range, which would be amazing. The imagery is from a Cessna at 500-600 m AGL using a Canon D10 that I hacked with CHDK.

Andy

David - I'm a little confused by your response. Do you mean that PhotoScan may not model lens distortion properly?

jamesm

Hi Andy,
David's absolutely right - you would probably see this effect if the camera model wasn't appropriate for the form of your lens distortion, too. However, the lens distortion model in PhotoScan is pretty generic (and very well tested over multiple decades) and should be fine unless you have a particularly wacky lens (which I don't expect the D10 has). But if you were shooting through a curved window, that could add effects that are not anticipated within the lens model.

Yes, unless you are fixing the camera model parameters, the project will represent a self-calibrating BA. You could certainly get a calibration from a convergent project (all your suggestions would work, although the more '3D' and frame-filling the object, the better). You wouldn't need a ridiculous number of photographs, or necessarily any ground control. But, as you identified, the fact that you are using a compact and the lens physically moves with power cycles could be a problem. You could explore this by doing a number of calibration projects, cycling the camera power between each, and seeing how reproducible the results are. There's a paper somewhere (Rene Wackrow?) on calibration reproducibility for compacts. Depending on how rigid the D10 is, you could also have variation with how the camera is held - i.e. when looking horizontally, the lens assembly may droop a little compared with when looking vertically down. I have just seen that the D10 is waterproof with a fully enclosed lens assembly, so (unlike other compacts) droop may not be an issue. Nevertheless, I'd still recommend considering a DSLR with a prime lens that you can fix to maintain geometry... but your accuracies seem pretty good given what you have.
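The reproducibility test itself is just a spread-of-parameters exercise. Something like this (a sketch assuming you export one row of parameters per calibration project to a CSV; the file name and column layout are made up):

Code:
import numpy as np

# One row per calibration project, camera power-cycled between projects;
# columns f, cx, cy, k1, k2, k3 in whatever units your export uses.
names = ["f", "cx", "cy", "k1", "k2", "k3"]
calibs = np.loadtxt("calibrations.csv", delimiter=",")

for name, col in zip(names, calibs.T):
    print(f"{name}: mean {col.mean():.6g}  StDev {col.std(ddof=1):.3g}")
# If the StDevs are comparable to the standard errors the BA reports for
# a single calibration, the lens geometry is repeatable across power cycles.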

Within a week or so, I will be releasing a new version of my geo-referencing software (sfm_georef) in which the doming analysis will be implemented. This could help provide a metric for quantifying the systematic deformation (you'd still do all the reconstruction work/BA etc. in PhotoScan though).
Mike

David Cockey

Quote
David - I'm a little confused by your response. Do you mean that PhotoScan may not model lens distortion properly?
PhotoScan models lens distortion properly, but it does not model lens distortion exactly for most lenses. Look closely enough and there will be a small amount of residual distortion for most lenses after PhotoScan corrects for distortion. Usually the amount of residual distortion is inconsequential and doesn't affect the results for the intended purposes. The residual is likely to be smaller in magnitude than other sources of error and uncertainty.

Quote
However, the lens distortion model in PhotoScan is pretty generic (and very well tested over multiple decades) and should be fine unless you have a particularly wacky lens (which I don't expect the D10 has).
The lens distortion model in PhotoScan should be fine. But the small differences between the actual lens distortion and the lens distortion model for each photo can add up enough to be significant for some applications. With long, narrow objects, where each photo shows only a short section of the object, I've had small-magnitude, long-wavelength distortions occur. Less overlap between photos tends to increase the amount of distortion.
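A toy illustration of the point (Python sketch with an invented "true" distortion curve): fit the usual k1-k3 radial model to a distortion that isn't exactly polynomial and look at what's left over.

Code:
import numpy as np

r = np.linspace(0, 1, 200)                             # normalized radial distance
true_dr = 0.02*r**3 - 0.005*r**5 + 0.001*np.sin(6*r)   # made-up "actual" lens distortion

# Least-squares fit of the standard radial model: dr = k1*r^3 + k2*r^5 + k3*r^7
A = np.column_stack([r**3, r**5, r**7])
k, *_ = np.linalg.lstsq(A, true_dr, rcond=None)

residual = true_dr - A @ k
print(f"max residual distortion: {np.abs(residual).max():.2e}")
# Small but nonzero -- and it can accumulate along long strips with low overlap.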

andyroo

Interesting. I'll try a couple of calibration experiments. Reading the manuscript more closely makes me think it would be pretty great if PS were able to use both control points and check points to take advantage of the linear relationship between the radial distortion parameter and vertical DEM error, and refine the radial distortion estimate. Off to take a bunch of convergent pics with my two D10s...

--EDIT-- I am making a spreadsheet of calibration data now, but noticed that I have k4 inconsistently selected in my models. According to forum posts it should be used only for wide lenses, but I am not sure whether the D10, being a compact, would qualify (I have it on the widest/default zoom, and always have). I am guessing that to compare like with like I should group the k4 and non-k4 models separately and figure out whether I have a better or consistent value for k1 (see the sketch after this edit).

Also, I am curious: I'm definitely seeing a grouping, and wondering whether I have that much drift in all of my variables, or whether I might get the best results by picking the average of them all for the initial parameters and then maybe even fixing those parameters. I've looked at several of the Wackrow papers, and from what they saw it looks like the parameters were stable for the Nikons. I'll have to compare my numbers with theirs when I am done grouping, I guess. --END EDIT--
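The grouping itself should be easy once the spreadsheet is exported. A pandas sketch (the CSV and the hand-entered used_k4 column are hypothetical):

Code:
import pandas as pd

# Hypothetical export of my calibration spreadsheet, one row per model,
# with a boolean column marking whether k4 was enabled for that model.
df = pd.read_csv("calibrations.csv")

print(df.groupby("used_k4")["k1"].agg(["count", "mean", "std"]))
# If the two groups disagree on k1 beyond their spread, they shouldn't be
# pooled when picking initial (or fixed) parameter values.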

I would love to use a DSLR, and I've been trying to get a mirrorless (EOS M) to work, but I am having a hell of a time getting it to focus to infinity and stay there. I want to give someone who knows what they are doing the camera with a 22 mm lens and some superglue and just make it stay there. argh.

andyroo

Thanks again Mike and David for your constructive comments and references to the literature. I've plunged into the pool of uncertainty and systematic error and I am thoroughly soaked in radial and tangential distortion. I dug up my old photogrammetry text and pored over more literature, and I'm still trying to figure out how far I can take this before I am just beating a dead horse.

After reviewing the James and Robson (2014) article and an earlier article on the stability of consumer-grade cameras (Wackrow et al. 2007), and trying some calibration with convergent imagery on highly 3D objects, I reprocessed a recent flight by aligning with a fixed camera model and got (1) a reduction of about 10-15% in my GCP error (as quantified by PS) after optimization, holding radial and tangential distortion variables fixed; and (2) a reduction in both amplitude and bias of my checkpoint error (SfM - LiDAR on unchanged surfaces).

I didn't like the fact that I couldn't pull error stats out of the camera model developed with convergent imagery, so I decided to try Agisoft Lens for calibration, running convergent imagery through it as well. Because my camera is focused to infinity, the calibration images are a little blurry, but overall the results are looking really good. So far I'm up to 251 calibration pics over several days and power cycles, and my error is still decreasing, which implies that at least over this time and power-cycle scale the lens model is relatively stable. I posted more on that in the Agisoft Lens forum.
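The convergence check I'm doing amounts to this (a sketch where n_images and k1_by_n are hypothetical lists of the photo counts I've tried and the k1 fitted at each count):

Code:
import numpy as np

counts = np.asarray(n_images)   # e.g. 50, 100, 150, 200, 251 photos used
k1s = np.asarray(k1_by_n)       # k1 fitted at each photo count

# Relative change between successive fits; I'll call the calibration
# "converged" when this drops below the standard error Lens reports for k1.
rel_change = np.abs(np.diff(k1s) / k1s[:-1])
for n, rc in zip(counts[1:], rel_change):
    print(f"n={n}: |dk1/k1| = {rc:.2e}")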

My question(s) to both of you (and folks at Agisoft): (1) Is there any advantage to calibrating on highly 3D objects rather than the Agisoft Lens screen pattern? I am finding that the Agisoft Lens screen-pattern calibration seems to be converging on relatively stable values the more images I add, while I did not see the same for a half dozen convergent-imagery alignment exercises using PS on random complex real-world scenes;

and (2) Any idea how sensitive the radial and tangential distortion parameters are? How far should/can I go before I am satisfied with this whole camera calibration exercise? I really would love to feel like I calibrated the heck out of my camera - do I need to take it to 500 pics and see how much error I have then? At what point can I say that I have calibrated adequately? I was hoping I'd stop seeing improvements in my standard error, but apparently I'm not there yet...

Really appreciating your perspective,

Andy


References:

James, M.R., Robson, S., 2014. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surface Processes and Landforms, in press. doi:10.1002/esp.3609

Wackrow, R., Chandler, J.H., Bryan, P., 2007. Geometric consistency and stability of consumer-grade digital cameras for accurate spatial measurement. The Photogrammetric Record 22, 121–134. doi:10.1111/j.1477-9730.2007.00436.x

Marcel

Quote
I would love to use a DSLR, and I've been trying to get a mirrorless (EOS M) to work, but I am having a hell of a time getting it to focus to infinity and stay there. I want to give someone who knows what they are doing the camera with a 22 mm lens and some superglue and just make it stay there. argh.

Lensrentals.com has done something like this recently (fixing the aperture as well). They might be able to help?

http://www.lensrentals.com/blog/2014/07/some-holiday-lens-mutilation