« on: March 19, 2013, 12:28:49 PM »
At what point in the 3D processing do the depth filtering settings (mild, moderate, aggressive) play a role?
The reason I ask is the following: I have a project with approx. 450 photos of very sparse plant cover over a ground surface. Because the plant cover is so sparse, Photoscan finds many points on the ground, so in theory I can use the dense Photoscan point cloud like a lidar point cloud to measure vegetation height. I have now computed depth maps and dense point clouds at different quality settings (lowest to high), but always with the aggressive depth filtering setting. Looking at the dense point clouds, I have the impression that a great many plant points are missing, and I assume this is caused by the "aggressive" depth filtering.
If I want to compute dense point clouds with "mild" depth filtering, do I have to re-compute the depth maps? It all comes down to when Photoscan applies the depth filtering: during depth map computation, or during generation of the dense point cloud from the depth maps?
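
In case it is useful, this is roughly how I am generating the dense clouds for comparison from the Python console. It is only a sketch: I am assuming buildDenseCloud() accepts quality and filter arguments and that the MildFiltering / ModerateFiltering / AggressiveFiltering constants exist under those names (taken from the Python API reference); the exact call and constant names may differ in my Photoscan version.

[code]
# Rough sketch only: assumes buildDenseCloud() takes quality and filter
# arguments and that the filtering constants are named as below; names
# may differ between Photoscan versions.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk  # the active chunk with the ~450 aligned photos

modes = {"mild": PhotoScan.MildFiltering,
         "moderate": PhotoScan.ModerateFiltering,
         "aggressive": PhotoScan.AggressiveFiltering}

for name, mode in modes.items():
    # If filtering happens at the depth map stage, each call presumably
    # recomputes the depth maps; if not, only the cloud generation reruns.
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality, filter=mode)
    doc.save("dense_%s.psz" % name)  # one project copy per filter mode
[/code]

Each buildDenseCloud call replaces the previous dense cloud in the chunk, which is why I save a separate project copy per mode.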