Hi, I have taken more than 1000 pictures of a landscape with a Helikite.
For most of the pictures, the camera was at altitudes between 50 m and 150 m.
Because photographs taken at lower camera elevations (e.g. 50 m) have a higher spatial resolution than photographs taken at higher camera elevations (e.g. 150 m), my question is how PhotoScan deals with photos of different resolution and field of view resulting from different camera heights.
Let's say, for example, that 900 pictures were taken at 50 m height, and that those 900 pictures are enough to build the textured 3D model in PhotoScan. Let's further say that the other 100 pictures were taken at 150 m height, with lower resolution and a wider field of view.
How will PhotoScan process this mix of 1000 (900 + 100) photos? Is it smart enough to leave the 100 low-resolution photos out, since that would improve the resolution of the model? And if not, how will PhotoScan handle this situation in (i) the alignment step, (ii) the geometry-building step, (iii) the texture-building step and (iv) the orthophoto generation step?
This is important to me, because it may mean that photos taken from a high elevation are best left out of the set of photos used as input to PhotoScan. The problem is that I don't know beforehand whether the remaining set of photos overlaps sufficiently.
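For what it's worth, this is roughly how I would split the set by flying height before importing into PhotoScan. It is only a minimal sketch: it assumes the camera wrote GPS altitude into the EXIF tags, and the 100 m threshold, folder names and file pattern are just placeholders for my setup.

import glob
import os
import shutil

import exifread  # pip install exifread

SRC = "photos"        # hypothetical input folder
LOW = "photos_50m"    # hypothetical output folder for the low set
HIGH = "photos_150m"  # hypothetical output folder for the high set
THRESHOLD = 100.0     # metres; photos below this go to the low set

os.makedirs(LOW, exist_ok=True)
os.makedirs(HIGH, exist_ok=True)

for path in glob.glob(os.path.join(SRC, "*.JPG")):
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    tag = tags.get("GPS GPSAltitude")
    if tag is None:
        continue  # no altitude recorded; skip this photo
    ratio = tag.values[0]  # EXIF stores altitude as a rational number
    altitude = float(ratio.num) / float(ratio.den)
    dest = LOW if altitude < THRESHOLD else HIGH
    shutil.copy(path, dest)

Note that EXIF GPS altitude is height above sea level rather than above ground, so in practice I would first subtract the terrain elevation. And after splitting, I would still have to verify in PhotoScan that the 50 m set alone aligns without gaps, which is exactly the uncertainty I describe above.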
Kind regards,
Jan