I'm currently doing a benchmarking exercise to compare the speeds of different machines, including Azure-based VMs, and in doing so I seem to be experiencing a massive difference in dense cloud build performance between v1.3.0 on the one hand and v1.3.2 and v1.3.4 on the other (the versions I have tested).
I have a fully aligned data set of 1,509 photographs, which I am running through an Azure NV24 VM. This is quite a powerful machine with 24 cores and 2 x Tesla M60 GPUs (see screen grab).
The comparative dense cloud build timings and resulting point counts are as follows:
1.3.0 - Depth Map 00:32:50, Dense Cloud 07:39:00, Points 56,043,481
1.3.4 - Depth Map 00:19:24, Dense Cloud 00:28:54, Points 26,606,884
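For anyone who wants to sanity-check the comparison, here is a minimal Python sketch that just takes the figures listed above, converts the HH:MM:SS timings to seconds, and prints the speedup factors and points-per-second rates (nothing here beyond simple arithmetic on the numbers I posted):

```python
def to_seconds(hms: str) -> int:
    """Convert an HH:MM:SS string to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

# Figures as posted above
runs = {
    "1.3.0": {"depth": "00:32:50", "dense": "07:39:00", "points": 56_043_481},
    "1.3.4": {"depth": "00:19:24", "dense": "00:28:54", "points": 26_606_884},
}
old, new = runs["1.3.0"], runs["1.3.4"]

# Speedup factors (old time / new time)
print("Depth map speedup:   %.2fx" % (to_seconds(old["depth"]) / to_seconds(new["depth"])))
print("Dense cloud speedup: %.2fx" % (to_seconds(old["dense"]) / to_seconds(new["dense"])))

# Point counts and throughput
print("Point count ratio:   %.2fx" % (old["points"] / new["points"]))
print("1.3.0 points/sec:    %.0f" % (old["points"] / to_seconds(old["dense"])))
print("1.3.4 points/sec:    %.0f" % (new["points"] / to_seconds(new["dense"])))
```

That works out to roughly a 1.7x faster depth map stage and a ~16x faster dense cloud stage under 1.3.4, but with a bit more than half the points.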
Note the MASSIVE difference in the time taken to build the dense cloud.
All processing parameters were of course identical.
Also interesting is that the depth map stage took a substantially SHORTER time under 1.3.4, yet the number of resulting points is much lower, roughly half, which is another strange thing.
What I noticed during depth map processing on 1.3.0 was that the GPUs were only peaking at around 10-12% utilisation each, which could explain the timing difference at that stage.
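For reference, this is the kind of rough Python snippet I would use to log per-GPU utilisation to a CSV while a job runs, so the same check can be made on other versions or machines. It just shells out to nvidia-smi (it assumes nvidia-smi is available on the VM; the log file name and polling interval are arbitrary):

```python
import subprocess
import time
import datetime

LOG = "gpu_util_log.csv"   # arbitrary output file
INTERVAL = 5               # seconds between samples

# Poll nvidia-smi and append one CSV row per GPU:
# timestamp, gpu index, utilisation %, memory used (MiB)
with open(LOG, "a") as f:
    while True:
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=index,utilization.gpu,memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        for line in out.splitlines():
            f.write(f"{stamp},{line.strip()}\n")
        f.flush()
        time.sleep(INTERVAL)
```

Leaving that running alongside the processing job makes it easy to see whether the GPUs are sitting mostly idle, as they appeared to be on 1.3.0.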
But of course the most significant factors are the timing of the dense cloud stage and the difference in point count.
Any ideas? Is this a bug in 1.3.2 and later versions?
Thanks
John