Another follow-up on my dense point cloud processing time observation above.
I decided to process a similarly sized dataset (but from an aircraft covering 30 sq km instead of a crane covering ~10,000 sq m), and the dense point cloud processing finished in under 6 hours. My guess is that the increased overlap on the project I mentioned earlier, rather than the total number of images, is the main cause of the longer processing time.
(I also updated to build 1742)
And on another note, I have a question about this build (and my workflow). Previously I had been processing roughly 20 km-long flights in four 5 km-long chunks, with vertices set at 50 million, to try to maximize detail. The disadvantage is that I had to enter many of the GCPs twice, since I overlap the chunks to improve the fit.
With the new point cloud generation step, I decided to try generating the whole model at medium (sparse -> dense -> mesh), inputting all of the GCPs, then regenerating the whole model at a higher resolution. I figure that, worst case, I can clip the model into chunks again and regenerate those if it gets too slow, but so far things seem to be working well. My big question is: to maintain the same grid resolution, should I increase my face count to ~150 million? That seems... daring...
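For reference, here is the rough back-of-envelope math behind that ~150 million figure. This is just a sketch: the ~1 km of chunk overlap is a guessed value, so the exact result shifts with the real overlap and chunk lengths.

# Scale the per-chunk face count to the full flight line, assuming the
# face density (faces per km of flight line) should stay the same as in
# the old chunked workflow. Chunk length is assumed, not measured.
chunk_faces = 50_000_000       # per-chunk setting from the old workflow
chunk_length_km = 6.0          # assumed: 5 km unique coverage + ~1 km overlap
total_length_km = 20.0         # full flight line

faces_per_km = chunk_faces / chunk_length_km
full_model_faces = faces_per_km * total_length_km
print(f"~{full_model_faces / 1e6:.0f} million faces")  # ~167 million with these numbers

With a bit more assumed overlap per chunk the answer drops toward 150 million, which is where my estimate came from.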