Thanks again Mike and David for your constructive comments and references to the literature. I've plunged into the pool of uncertainty and systematic error and am now thoroughly soaked in radial and tangential distortion. I dug up my old photogrammetry text and pored through more literature, and I'm still trying to figure out how far I can take this before I'm just beating a dead horse.
After reviewing the James and Robson (2014) article and an earlier article on the stability of consumer-grade cameras (Wackrow et al. 2007), and trying some calibration with convergent imagery on highly 3D objects, I reprocessed a recent flight by aligning with a fixed camera model and got (1) a reduction of about 10-15% in my GCP error (as quantified by PS) after optimization, holding the radial and tangential distortion parameters fixed; and (2) a reduction in both the amplitude and the bias of my checkpoint error (SfM minus LiDAR on unchanged surfaces).
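In case it's useful to anyone following along, the checkpoint comparison itself is nothing fancy: difference the SfM and LiDAR elevations at the same stable locations and look at the bias and spread. A minimal Python sketch, with synthetic stand-in data rather than my actual numbers:

```python
import numpy as np

# Synthetic stand-ins for elevations sampled at the same checkpoint
# locations on unchanged surfaces (replace with real samples).
rng = np.random.default_rng(0)
lidar_z = rng.uniform(100.0, 120.0, size=200)      # LiDAR reference elevations
sfm_z = lidar_z + 0.05 + rng.normal(0, 0.04, 200)  # SfM surface with a 5 cm bias

dz = sfm_z - lidar_z              # checkpoint error: SfM minus LiDAR
bias = dz.mean()                  # systematic offset (the doming/bowing signal)
spread = dz.std(ddof=1)           # amplitude of the error about the bias
rmse = np.sqrt(np.mean(dz ** 2))  # combined measure

print(f"bias {bias:+.3f} m, std {spread:.3f} m, RMSE {rmse:.3f} m")
```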
I didn't like the fact that I couldn't pull error stats out of the camera model developed with convergent imagery, so I decided to try out Agisoft Lens for calibration, shooting convergent imagery of its screen pattern as well. Because my camera is focused at infinity, the calibration images are a little blurry, but overall the results are looking really good. So far I'm up to 251 calibration pics over several days and power cycles, and my error is still decreasing, which implies that at least over this time and power-cycle scale the lens model is relatively stable. I posted more on that in the Agisoft Lens forum.
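For anyone who wants to watch convergence the same way outside of Lens, the idea is just to re-run the calibration on a growing image set and see whether the estimated coefficients settle. Here's a rough sketch using OpenCV's chessboard calibration as a stand-in (the folder path and pattern size are placeholders, and Lens itself may do things differently internally):

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed/on-screen pattern (placeholder value).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in sorted(glob.glob("calib/*.jpg")):  # placeholder folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    obj_pts.append(objp)
    img_pts.append(corners)
    size = gray.shape[::-1]

    # Recalibrate on everything collected so far and watch the
    # coefficients settle as the image count grows.
    if len(obj_pts) >= 10:
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        k1, k2, p1, p2, k3 = dist.ravel()[:5]
        print(f"{len(obj_pts):3d} imgs  RMS {rms:.3f} px  k1 {k1:+.5f}  p1 {p1:+.6f}")
```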
My question(s) to both of you (and the folks at Agisoft): (1) Is there any advantage to calibrating on highly 3D objects rather than on the Agisoft Lens screen pattern? I am finding that the screen-pattern calibration in Agisoft Lens seems to be converging on relatively stable values the more images I add, while I did not see the same for a half dozen convergent-imagery alignment exercises using PS on random complex real-world scenes;
and (2) Any idea how sensitive the radial and tangential distortion parameters are (see the quick sensitivity sketch below)? How far should/can I go before I am satisfied with this whole camera calibration exercise? I really would love to feel like I calibrated the heck out of my camera - do I need to take it to 500 pics and see how much error I have then? At what point can I say that I calibrated adequately? I was hoping I'd stop seeing improvements in my std err, but apparently I'm not there yet...
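For what it's worth, the way I've been getting a feel for question (2) is a back-of-envelope calculation from the Brown distortion model: the radial term shifts a point by roughly f * x * r^2 * dk1 pixels, so an uncertainty in k1 translates directly into a pixel shift at the image corner. Illustrative numbers only, not from my camera:

```python
# Illustrative values: focal length in pixels and a point near the
# image corner in normalized (x/z, y/z) coordinates.
f_px = 3600.0
x, y = 0.55, 0.37
r2 = x * x + y * y

# Brown model radial term: x' = x * (1 + k1*r^2 + k2*r^4 + k3*r^6),
# so a perturbation dk1 moves the corner point by ~ f * x * r^2 * dk1.
for dk1 in (1e-4, 1e-3, 1e-2):
    print(f"dk1 = {dk1:g} -> ~{f_px * x * r2 * dk1:.2f} px shift at the corner")

# Tangential term: dx = 2*p1*x*y + p2*(r^2 + 2*x^2), so a perturbation
# dp1 moves the same point by ~ f * 2*x*y * dp1.
for dp1 in (1e-5, 1e-4):
    print(f"dp1 = {dp1:g} -> ~{f_px * 2 * x * y * dp1:.2f} px shift at the corner")
```

My working assumption is that once the pixel shift implied by the remaining parameter uncertainty is well below the matching accuracy, more calibration pics probably can't buy much.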
Really appreciating your perspective,
Andy
References:
James, M.R., Robson, S., 2014. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surface Processes and Landforms 39, 1413–1420. doi:10.1002/esp.3609
Wackrow, R., Chandler, J.H., Bryan, P., 2007. Geometric consistency and stability of consumer-grade digital cameras for accurate spatial measurement. The Photogrammetric Record 22, 121–134. doi:10.1111/j.1477-9730.2007.00436.x