I'm attempting to generate a dense cloud from 1000 large photos (~112 MP each) taken with a large-format metric camera (Z/I DMC-1). My machine has 48 GB of RAM, so generating a dense cloud on High or Ultra High with this many images isn't possible as a single large chunk, or even as a handful of smaller chunks. I'd probably need more than 50 chunks to process this many large images, so I'm considering an alternative tiling process.
This process would involve the following steps:
1. Shrink the region bounding box to an area my available memory can handle (perhaps 5 km x 5 km)
2. Snap this small region to the geographic northwest corner of the bounding box for the whole project
3. Generate a dense cloud with Quality = High or Ultra High for this small region
4. Export the dense cloud to LAS
5. Move the region to the next tile (~5 km east) and repeat the process until every tile has an exported LAS
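The steps above could be sketched roughly like this. The tile-grid helper is plain Python; the PhotoScan calls shown in the comment (`chunk.region`, `buildDenseCloud`, `exportPoints`) are taken from the 1.x Python API reference, but signatures change between versions, so treat this as a sketch to verify against your own version. One caveat worth flagging: `chunk.region` is defined in the chunk's internal coordinate system, not the project CRS, so each geographic tile box has to be transformed (via `chunk.transform.matrix` / `chunk.crs`) before being assigned to the region.

```python
def tile_origins(min_x, min_y, max_x, max_y, tile_size):
    """Southwest corners of square tiles covering the project extent,
    starting at the northwest corner and sweeping east, then stepping
    south row by row (matching steps 2 and 5 above)."""
    origins = []
    top = max_y
    while top > min_y:
        left = min_x
        while left < max_x:
            origins.append((left, top - tile_size))
            left += tile_size
        top -= tile_size
    return origins

# Inside the PhotoScan console the per-tile loop would look roughly like
# (hypothetical glue code; API names per the 1.x reference):
#
#   import PhotoScan
#   chunk = PhotoScan.app.document.chunk
#   for i, (x, y) in enumerate(tile_origins(xmin, ymin, xmax, ymax, 5000.0)):
#       # convert the geographic box (x, y)-(x+5000, y+5000) into the
#       # chunk's internal coordinates, then set chunk.region.center
#       # and chunk.region.size accordingly
#       chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
#       chunk.exportPoints("tile_%03d.las" % i, format="las")
```

The helper is deliberately separate from the PhotoScan calls so the tiling order can be checked outside the application before committing to long dense-cloud runs.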
I see two primary benefits of this approach over chunking: (1) no need to manually break the project into chunks, and (2) no duplicate points in the overlap areas that multiple overlapping chunks produce. The obvious downside is that I couldn't generate a complete mesh or orthomosaic within PhotoScan, but I have external software I could use for those steps.
So my questions are:
Are there any other drawbacks to this approach that I'm overlooking?
Is chunking a better approach if all I want is a dense cloud?
Has anyone tried this process or implemented it in a python script that they'd be willing to share?