I'm using PhotoScan 1.0.4 on a 64-bit PC running Windows 8.1 Enterprise, with two GTX 780 cards, a Quadro K5000, and 128 GB of RAM.
I processed 84 images of a tropical rain forest canopy as a test to determine the best combination of density (lowest, low, medium, high, and ultrahigh) and depth filtering (mild or aggressive). I expected to see differences in point density, and for aggressive filtering to retain fewer points than mild filtering. Instead, I found (1) differences in the spatial distribution of points, with some runs producing large areas that have no points at all (holes in the point cloud), and (2) some objects being filtered out by one filter or the other (mild or aggressive). The attached images illustrate the problem.
Here's the breakdown of processing time and point count for each run:
Dense reconstruction (ultrahigh, mild): 6.99 hours, 82,435,352 points
Dense reconstruction (ultrahigh, aggressive): 7.18 hours, 111,084,053 points
Dense reconstruction (high, mild): 1.03 hours, 35,371,879 points
Dense reconstruction (high, aggressive): 1.05 hours, 39,441,465 points
Dense reconstruction (medium, mild): 0.20 hours, 10,654,843 points
Dense reconstruction (medium, aggressive): 0.20 hours, 11,497,366 points
Dense reconstruction (low, mild): 0.05 hours, 2,892,327 points
Dense reconstruction (low, aggressive): 0.05 hours, 3,025,643 points
Dense reconstruction (lowest, mild): 0.02 hours, 746,817 points
Dense reconstruction (lowest, aggressive): 0.02 hours, 754,450 points
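One way to see the oddity in these numbers: each PhotoScan quality step works on roughly 4x the pixel count of the step below it, so one might naively expect point counts to grow by about 4x per step. A quick script over the mild-filter counts above shows the ratio shrinking as quality rises, which is consistent with points being lost to holes at higher densities:

```python
# Point counts from the mild-filter runs listed above (lowest -> ultrahigh).
counts = {
    "lowest": 746_817,
    "low": 2_892_327,
    "medium": 10_654_843,
    "high": 35_371_879,
    "ultrahigh": 82_435_352,
}

levels = ["lowest", "low", "medium", "high", "ultrahigh"]

# Ratio of point counts between successive quality levels.
for lo, hi in zip(levels, levels[1:]):
    ratio = counts[hi] / counts[lo]
    print(f"{lo} -> {hi}: x{ratio:.2f}")
```

The ratios run roughly 3.9, 3.7, 3.3, and then only about 2.3 from high to ultrahigh, so the higher the density setting, the further the run falls short of the expected ~4x growth.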
So my questions are: (1) Why do higher-density reconstructions result in large areas with no points at all? (2) Is there some way to turn depth filtering off entirely, and if so, would that retain all points for a given reconstruction density?