Is having a single zip for thumbnails per chunk a real space saver? The decompression caused a noticeable delay at opening with a 100k+ file PSX project.
If you open up any of the zips, you'll see there's zero compression; they use the 'store' method only. That probably helps performance a bit, since you're writing one big file to disk instead of thousands of little ones. An HDD (less so an SSD) would otherwise spend more time seeking between files than actually reading or writing them.
Using Store also saves disk space, especially with small files, because it eliminates slack space in the drive's clusters. A cluster can only hold data from one file, so with tiny files, or just a lot of files, that slack space adds up. One big zip file only carries the slack from the very last cluster allocated to it.
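A rough sketch of the slack-space math, assuming a 4 KiB cluster size and made-up thumbnail sizes (the real numbers depend on your filesystem and files):

```python
import io
import zipfile

CLUSTER = 4096  # assumed cluster size (NTFS default on most volumes)

def slack(size: int, cluster: int = CLUSTER) -> int:
    """Bytes wasted in the last cluster allocated to a file."""
    return (-size) % cluster

# Hypothetical: 10,000 thumbnails of ~3 KB each, stored as loose files.
n, thumb_size = 10_000, 3_000
loose_slack = n * slack(thumb_size)

# The same thumbnails packed into one zip with no compression ('store').
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    for i in range(n):
        zf.writestr(f"thumb_{i:05d}.jpg", b"\x00" * thumb_size)
zip_slack = slack(buf.getbuffer().nbytes)

print(f"loose files waste ~{loose_slack / 2**20:.1f} MiB of slack")
print(f"one stored zip wastes {zip_slack} bytes (last cluster only)")
```

With these assumed numbers, the loose files waste around 10 MiB just in cluster slack, while the single stored archive wastes under 4 KiB, at the cost of a small per-entry zip header overhead.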
Looking at Procmon, it reads the thumbnails.zip file right after load, finishes with that quickly, and then gets on with reading the 300 depth map zip... which takes ages for me.
Does it really need to read them right after load? Could it read them later, only when a task actually requires them?
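A deferred-read approach could look something like this hypothetical sketch (this is not the application's actual code, just an illustration of lazy loading):

```python
import zipfile
from functools import cached_property

class DepthMapArchive:
    """Wraps a depth-map zip but only opens it on first access."""

    def __init__(self, path: str):
        self._path = path  # no disk I/O happens here

    @cached_property
    def _zip(self) -> zipfile.ZipFile:
        # Deferred: the archive is opened (and its central
        # directory read) only when a depth map is first requested.
        return zipfile.ZipFile(self._path, "r")

    def read(self, name: str) -> bytes:
        return self._zip.read(name)
```

Constructing `DepthMapArchive` at project load would then cost nothing; the expensive read only happens when a task (say, mesh refinement) actually asks for a depth map.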
Gonna continue deleting the depth maps, I guess...