I've noticed on a ~45k-image project that during dense reconstruction the initial processing runs through the images in batches of ~235 at a time.
For each step, the processing loop usually spends about 1/3 of the time (per batch) on "loading images", with relatively low CPU and disk usage: CPU in bursts of up to ~35%, and disk in 9-10 "sawtooth" bursts per minute (see screenshots).
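For the record, here's a minimal sketch of how those per-second numbers could be logged rather than eyeballed from Task Manager (assumes the third-party psutil package, not anything built into Metashape):

```python
import time

import psutil  # third-party: pip install psutil

# Print CPU % and disk read throughput once per second; Ctrl+C to stop.
# Run alongside Metashape during the "loading images" phase.
prev = psutil.disk_io_counters()
while True:
    cpu = psutil.cpu_percent(interval=1.0)  # blocks for one second
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    prev = cur
    print(f"cpu {cpu:5.1f}%   disk read {read_mb:7.1f} MB/s")
```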
Then for the remaining ~2/3 of the time things proceed as I would expect: "estimating disparity" with CPU and GPU at pretty much maximum and minimal (I think no) disk access.
I'm wondering what the bottleneck is during that image-loading third, since for most of the workflow Metashape is so good at saturating at least disk reads, if not CPU or CPU+GPU.
I haven't compared TIF/JPG performance against DNG yet, and I'm wondering whether the image format or the bit depth the project is processed at makes a difference (the current chunk is processed as DNG converted from ARW), or whether I could make one by changing hardware (SSD, M.2, or RAM disk) or other system settings. I'm doubtful that the storage medium matters, since HDD access is so sporadic, and I was thinking there might be a file-index optimization issue slowing things down.
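To separate raw read speed from decode cost, this is roughly the test I have in mind (the path is hypothetical, and rawpy is just a stand-in decoder, so its numbers are only a proxy for whatever Metashape does internally):

```python
import glob
import time

import rawpy  # third-party: pip install rawpy

files = sorted(glob.glob(r"D:\project\dng\*.dng"))[:50]  # hypothetical path; small sample

# Pass 1: pull the raw bytes off disk (pure I/O cost).
t0 = time.perf_counter()
total_bytes = 0
for f in files:
    with open(f, "rb") as fh:
        total_bytes += len(fh.read())
io_s = time.perf_counter() - t0

# Pass 2: full decode/demosaic. The files are now in the OS cache,
# so this pass mostly measures CPU decode time rather than disk.
t0 = time.perf_counter()
for f in files:
    with rawpy.imread(f) as raw:
        raw.postprocess()
decode_s = time.perf_counter() - t0

print(f"read  : {total_bytes / 1e6:.0f} MB in {io_s:.1f} s "
      f"({total_bytes / 1e6 / io_s:.0f} MB/s)")
print(f"decode: {decode_s:.1f} s ({decode_s / len(files):.2f} s/image)")
```

If pass 2 dominates, faster storage (SSD/M.2/RAM disk) presumably won't help much, and the format/bit-depth side is where to look.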
The one reason I think file indexing might play a role: if I disable photos in a chunk with 45k images it takes about 2 seconds, but if I delete photos from those chunks instead (trying to build a better organized, more efficient project structure) it takes 5-10 minutes in this project.
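In case anyone wants to reproduce that asymmetry, here's a rough sketch of the same disable-vs-delete comparison from the Python console (assuming the standard Metashape scripting API, where I believe camera.enabled and chunk.remove() correspond to the two GUI actions):

```python
import time

import Metashape  # built-in module in Metashape Pro's Python console

chunk = Metashape.app.document.chunk
cameras = chunk.cameras[:1000]  # time a subset so the two runs are comparable

# Disabling just flips a per-camera flag -- the ~2 s case in my project.
t0 = time.perf_counter()
for cam in cameras:
    cam.enabled = False
print(f"disable: {time.perf_counter() - t0:.2f} s")

# Deleting removes the cameras (and their references) from the project
# structure -- the operation that takes 5-10 minutes for me in the GUI.
t0 = time.perf_counter()
chunk.remove(cameras)
print(f"remove : {time.perf_counter() - t0:.2f} s")
```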