I am interested in what other people are doing for a 1.0 workflow with "large" aerial projects.

[Edited 12 Mar 2014 to add the camera calibration step, since this helps so much with GCP placement - thank you Porly for your suggestion]

First, I want to say thank you and cheers to the Agisoft team. I am really enjoying the changes that are taking shape in 1.0, and I love how involved you are with the user community on this forum and in PMs. It's a pleasure to work with folks who do such a great job of supporting and developing their product while keeping in touch with their users.
I collect aerial imagery via Cessna with a wing-mounted 12 MP camera at ~600 m elevation over ~25 linear km, shooting images every 3 seconds in four overlapping passes (about a 1 km wide swath). I am using a Canon D10, but I am about to do a flight with an EOS M (22 mm EF-M lens and a big beautiful sensor - for my low budget anyway). I am doing repeat flights at least once a month. Ground pixel resolution is 10-15 cm depending on elevation, but generally about 12 cm.
With the changes in processing in 1.0, and my ongoing learning, I find that I get best results for the overall dataset as follows:
0) (New step) If you have used your camera(s) on another project, export the adjusted camera calibration and import it into the new project to improve initial alignment and reduce/eliminate the bowl effect. This was a HUGE timesaver for me, since the bowl effect made placing GCPs a hunting effort. (A rough Python sketch of this hand-off appears right after this list.)
1) Align all images with Accuracy = High, Pair pre-selection = Generic, point limit = 40,000.
2) Trim flyers and sinkers. I usually just trim the obvious stuff, though I have experimented with gradual selection - I would be especially interested in other folks' experiences with gradual selection in a 1.0 workflow.
3) Set the coordinate system, import GCPs, and manually place 3 to 4 GCPs that are well distributed over the flight area (2 images each). Then update georeferencing. This generally gets all the other calculated marker locations somewhat close to their real locations.
4) Starting from one end of the project, I sort GCPs by lat or long (depending on the orientation of the project) and work my way through all GCPs by filtering photos by marker, then placing each GCP on all images where it is visible. I generally update georeferencing after doing this for each GCP.
5) After all GCPs are placed, I optimize alignment, unchecking fit aspect, fit skew, and fit k4 - this is based on what I read in earlier forum postings, and I am especially interested in feedback on this step too. It seems like skew might be useful for folks working with rolling-shutter cameras, but I'm not sure.
6) Copy the optimized model into multiple chunks and clip each chunk to about a fifth of the model (roughly 5 km sections). Trim GCPs and cameras and adjust the model extent. I generally overlap about 500 m (2 or 3 GCPs) on each side with the adjacent chunk. Note that placing the GCPs before splitting saves a lot of GCP placement time and seems to provide better continuity.
7) Generate the dense point cloud with Ultra High quality and Moderate depth filtering (I wish there were more control over depth filtering - I am still dealing with bad noise in water). This gives me about 400 million points per chunk.
8) Build mesh with Height field / Dense cloud / Interpolation enabled / custom face count = 40 million faces. This gives me good enough quality to produce a DSM with 0.5 m xy resolution, in which I can resolve features with z relief of about the same magnitude, like logs on the ground and slope breaks from coarse to fine sediment in scarps.
9) Export the color-corrected RGB-average orthophoto and the DSM.
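For the new step 0, here is a very rough sketch of how that calibration hand-off might look from the Python console instead of the GUI's Tools > Camera Calibration dialog. I have not verified that Calibration.save()/load() and sensor.user_calib exist under exactly these names in the 1.0 builds, and the file paths are made up, so treat it as a pointer to the API reference rather than a working recipe:

[code]
# Unverified sketch of the step-0 calibration hand-off (names and paths are
# assumptions - check the Python API reference for your build).
import PhotoScan

# In the *finished* project: save the adjusted calibration to an XML file.
chunk = PhotoScan.app.document.chunk
chunk.cameras[0].sensor.calibration.save("D:/calib/canon_d10.xml")

# Later, in the *new* project (before aligning): load that file as the
# initial (user) calibration for the sensor.
chunk = PhotoScan.app.document.chunk
calib = PhotoScan.Calibration()
calib.load("D:/calib/canon_d10.xml")
chunk.cameras[0].sensor.user_calib = calib
[/code]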
Notes: Steps 8 and 9 are batch-processed. I would love to develop a Python script that always exports DSMs and models with the same coordinate system and extents (and the same resolution for the DSM), but I haven't had time to sit down and play with the API yet.
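The kind of thing I have in mind looks roughly like the sketch below. It is unverified - the method and parameter names (exportDem, exportOrthophoto, projection, dx/dy, blending) are from memory of the 1.x API, and the EPSG code and output folder are placeholders - so check it against the API reference before relying on it:

[code]
# Unverified sketch: export a DSM and an average-blended orthophoto from every
# chunk with the same CRS and cell size so repeat flights line up in GIS.
# Method/parameter names, EPSG code, and paths are assumptions.
import PhotoScan

doc = PhotoScan.app.document
crs = PhotoScan.CoordinateSystem("EPSG::32610")   # placeholder UTM zone

for i, chunk in enumerate(doc.chunks):
    base = "D:/exports/chunk_%02d" % i            # placeholder output folder
    chunk.exportDem(base + "_dsm.tif", format="tif",
                    projection=crs, dx=0.5, dy=0.5)           # 0.5 m DSM cells
    chunk.exportOrthophoto(base + "_ortho.tif", format="tif",
                           blending="average", projection=crs,
                           dx=0.12, dy=0.12)                  # ~12 cm ground pixels

# Forcing identical extents across repeat flights should also be possible via a
# region argument on these export calls, but I have not confirmed its exact form.
[/code]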
Up until recently (maybe build 1684?) I generally got good orthophoto results (RGB average) by simply constructing a mesh for each chunk from the sparse cloud, and then constructing a dense mesh (custom, 40-50 million faces) for the DSM. I liked the sparse-cloud orthos because they didn't cause as many artifacts in forested areas, and the trees looked more natural in the orthophoto. Now it seems like there are blending or projection issues with the sparse cloud that are resolved when using the dense cloud for orthos, but that takes much longer (about 30 hours to build the dense cloud and mesh in my case).
I use RGB averaging because I find that it increases detail from my relatively noisy sensor by essentially working like image stacking to increase the signal-to-noise ratio.
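To show what I mean by the stacking effect, here is a toy numpy demonstration (not anything PhotoScan does internally): averaging N aligned, independently noisy views of the same ground patch cuts the random noise by roughly a factor of sqrt(N).

[code]
# Toy illustration of the stacking effect (not PhotoScan code): averaging N
# aligned, independently noisy views of the same patch reduces random noise
# by roughly sqrt(N).
import numpy as np

np.random.seed(0)
scene = np.random.uniform(0.0, 1.0, size=(100, 100))   # "true" ground brightness
sigma = 0.05                                            # per-image sensor noise

def noisy_view(truth):
    """One aligned exposure of the same patch with additive sensor noise."""
    return truth + np.random.normal(0.0, sigma, truth.shape)

for n in (1, 4, 9):
    stacked = np.mean([noisy_view(scene) for _ in range(n)], axis=0)
    rmse = np.sqrt(np.mean((stacked - scene) ** 2))
    print("%d overlapping images -> noise %.4f (expected ~%.4f)"
          % (n, rmse, sigma / np.sqrt(n)))
[/code]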
Hardware: My system is a Dell T7500 with dual Xeon X5647s @ 2.93 GHz, 192 GB RAM, and either one NVIDIA GeForce 560 Ti and one ATI HD 7970, or two ATI HD 7970s, depending on how successful I am at making everything happy together. I just ordered an R9 290, but man, those things are hard to get your hands on!