Well, I'm currently working on a project with 35k photos, each around 40 MP.
I'm using three workstations and a NAS to store the project. From what I've learned, you should have at least 7 times the size of your source photos in free space: if the photos take 1 TB on disk, you should have at least 7 TB spare. And of course, you can't use an HDD or any HDD RAID; that's extremely slow. Even with only 3 workstations, I've observed network bandwidth peaks of around 7 Gb/s at some moments.
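Just to make that rule of thumb concrete, here is a minimal sketch of the free-space check; the 7x factor is my own observation from this project, and the mount path and sizes below are placeholders:

import shutil

# Rule of thumb from my runs: keep at least 7x the source photo size free.
photos_size_tb = 1.0                      # total size of the source photos on disk
required_free_tb = 7 * photos_size_tb

# Check the project volume (path is a placeholder for your NAS/SSD mount).
free_tb = shutil.disk_usage("/mnt/project").free / 1e12
if free_tb < required_free_tb:
    print(f"Only {free_tb:.1f} TB free, need about {required_free_tb:.1f} TB")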
And for the generation, I can say you'll need NVIDIA cards with at least 24 GB of VRAM, or you might hit this problem: when generating textures from the blocked model, there isn't enough VRAM to compute on the GPU, so the work falls back to the CPU, which is at least 20 times slower.
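If you're scripting the processing, it's worth at least making sure all GPUs are enabled and the CPU isn't used alongside them for the GPU-accelerated stages. A rough sketch with the Metashape Python module (the bitmask line assumes you want every detected device enabled; check the API reference for your version):

import Metashape

# List detected GPU devices and enable all of them via the bitmask.
gpus = Metashape.app.enumGPUDevices()
Metashape.app.gpu_mask = (1 << len(gpus)) - 1   # one bit per device

# When GPUs are doing the work, the usual recommendation is to disable
# CPU processing for those stages.
Metashape.app.cpu_enable = False

for i, gpu in enumerate(gpus):
    print(i, gpu.get("name", "unknown"))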
And for model generation, Metashape has some defects. When you create the blocked model, which is a must-have option for a dataset this large, at least 50% of the work is single-threaded and runs on a single core. So you'll need a high-frequency CPU and, of course, high-frequency memory.
And when generating textures for the blocked model, there are some improvements that need to be made in the next release. The problem is, if your model has 300 blocks, then texturing each block needs to read all 300 blocks, which takes at least 80% of the time. And for the last step of texturing the blocked model, you'll need far more memory; otherwise it will simply fail because there isn't enough memory.
My experience is based on a single chunk of this 35k-photo / 40 MP dataset. I need to modify the generated model, and without the blocked model I think it would be impossible to edit a mesh of that size.
Of course, if you only need the tiled model, that's another story entirely; just avoid the high polygon count setting. I tested with the high setting, and my dataset produced a 95 GB Cesium tileset.
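For reference, the tiled-model route can be scripted roughly like this. This is only a sketch: the project path is a placeholder, I'm using default quality settings, and the exact export parameter and enum names (exportTiledModel, TiledModelFormatCesium) are from my memory of the Python API, so verify them against the reference for your Metashape version:

import Metashape

doc = Metashape.Document()
doc.open("project.psx")     # placeholder project path
chunk = doc.chunk

# Build the tiled model with default settings (avoid the high polygon
# count preset for a dataset this size).
chunk.buildTiledModel()
doc.save()

# Export as Cesium 3D Tiles; check the format enum name in the docs
# for your Metashape version.
chunk.exportTiledModel(path="tileset.zip", format=Metashape.TiledModelFormatCesium)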
