Hi everyone,
I would like to know if anybody here has implemented a pipeline able to handle millions of photos with PhotoScan. If you have any feedback, I'm all ears!
One of the datasets we are building contains over 7 million files and will double in the coming months. All these pictures were taken with the same camera model and are georeferenced. The resolution is low (2008x1680, 3.37 Mpx) but sufficient for our use cases.
While I am used to dealing with massive lidar or raster datasets, I have not figured out how to achieve this with Agisoft PhotoScan. Even if the dataset is divided into 1400 alignment subsets, which will themselves be subdivided with the split_in_chunck.py script, I don't know how to get to a seamless textured mesh without many manual interventions.
The toolchain I'm thinking of so far:
1. align the 1400 subsets
2. divide them into chunks with an overlap for densification
3. export the point clouds and merge them back with PDAL
4. sample the cloud to get a homogeneous density (remove the overlap effect)
5. tile and mesh the output
6. bring tens of thousands of chunks back into PhotoScan for texturing
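For steps 3 and 4, the merge and resampling could be expressed as a single PDAL pipeline: readers for each exported chunk cloud, filters.merge, then filters.sample (Poisson sampling) to knock the overlap areas back to a homogeneous density. A minimal sketch that builds such a pipeline — the file names and the 0.05 m sampling radius are placeholders to adapt:

```python
import json

def build_merge_sample_pipeline(chunk_files, out_file, radius=0.05):
    """Build a PDAL pipeline (as JSON) that merges exported chunk
    clouds and Poisson-samples them to a homogeneous density.
    `radius` is a placeholder to tune for the target point spacing."""
    stages = list(chunk_files)                  # readers, inferred from extension
    stages.append({"type": "filters.merge"})    # merge all inputs into one view
    stages.append({"type": "filters.sample",    # keep no two points closer than
                   "radius": radius})           # `radius` (removes overlap densification)
    stages.append(out_file)                     # writer, inferred from extension
    return json.dumps(stages, indent=2)

pipeline = build_merge_sample_pipeline(
    ["chunk_0001.laz", "chunk_0002.laz"], "merged_sampled.laz")
print(pipeline)
```

Writing that JSON to a file and running `pdal pipeline merge_sample.json` would then do the merge and sampling in one pass, streaming where the stages allow it.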
Problems (numbered by the step they apply to):
1. Python or manual work is required for loading cameras from a file list; is there a size limit for a project?
2. Python or manual work, as this is not available as a batch process
6. Python or manual work for loading the tiles and the referenced cameras; project size limit again
And surely some others I can't think of right now.
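For problem 1, the bookkeeping could look like the sketch below: split a master text file (one image path per line) into alignment subsets, then feed each subset to a chunk. The PhotoScan calls are left as comments because exact signatures vary by version and I have not tested them at this scale; `subset_size` is a placeholder (~7M files / 1400 subsets ≈ 5000):

```python
def load_subsets(list_file, subset_size=5000):
    """Split a master image list into fixed-size alignment subsets.
    `subset_size` is a placeholder; ~7M files / 1400 subsets ≈ 5000."""
    with open(list_file) as f:
        paths = [line.strip() for line in f if line.strip()]
    # Slice the full list into consecutive subsets of `subset_size` paths.
    return [paths[i:i + subset_size]
            for i in range(0, len(paths), subset_size)]

# Hypothetical driver (PhotoScan 1.x scripting API; untested sketch):
# import PhotoScan
# doc = PhotoScan.app.document
# for subset in load_subsets("images.txt"):
#     chunk = doc.addChunk()
#     chunk.addPhotos(subset)   # loads cameras for this subset only
```

Whether one document holding 1400 chunks stays workable, or each subset needs its own saved project, is exactly the size-limit question above.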

regards,
jr