Hello All,
So I am a novice with both PhotoScan and programming, but I will do my best to supply as much information as possible.
I am modelling surface texture and micro-topography of bedrock samples. Typically my models use around 55-60 photos, shot on a full-frame Nikon camera at 7378x4924 pixels. However, I have gone through and masked off around 60% of the photo area so that PhotoScan processes as little data as necessary.
I can generate my dense point cloud on all settings except "Ultra High", which is unfortunately the only resolution that I actually need. I am using our 32-core high-performance cluster with 128 GB of RAM.
Each model runs for a varying length of time, some up to 100 hours, and always fails just before completion with the same message: "Not Enough Memory". Looking at the records, PhotoScan uses around 110 GB of the cluster's RAM for most of the processing, then in the final stage it suddenly spikes and tries to consume more than 200% of the memory available on the system. I have it recorded that PhotoScan attempted to use 287 GB of RAM on our system, which only has 128 GB.
My question:
Is there any way to assign a maximum RAM allocation threshold for PhotoScan?
If needed, I can offer the program several terabytes of virtual memory, though I do not know whether that would solve anything, nor do I currently know how to set it up. All other processing steps seem to complete without a problem, but every single model, regardless of the photo set and the level of masking, fails, without fail (pun intended), at the dense cloud generation stage. I have tried at least a dozen "Ultra High" dense clouds over the past two and a half weeks with no success.
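For what it is worth, the only workaround I have thought of so far is wrapping the headless PhotoScan launch in a small Python script that caps the process's address space at the OS level, roughly like the sketch below. This assumes our cluster runs Linux; the launcher path, its arguments, and the limit value are just placeholders for however the job is actually started on our system. I have not tried it, and I do not know whether PhotoScan would handle hitting the cap gracefully or simply crash sooner, so please tell me if this is a dead end:

```python
#!/usr/bin/env python3
# Hypothetical wrapper: cap PhotoScan's virtual address space before launching it,
# so a runaway allocation fails inside PhotoScan instead of exhausting the node.
# Assumes a Linux host; RLIMIT_AS limits virtual memory, not physical RAM.

import resource
import subprocess

MAX_BYTES = 120 * 1024**3  # placeholder: ~120 GB, just under the node's 128 GB of RAM

def limit_address_space():
    # Runs in the child process right before exec, so only PhotoScan is constrained.
    resource.setrlimit(resource.RLIMIT_AS, (MAX_BYTES, MAX_BYTES))

# The command line below is a placeholder for however the cluster launches the
# headless PhotoScan job; substitute the real launcher and script names.
subprocess.run(
    ["/opt/photoscan-pro/photoscan.sh", "--platform", "offscreen", "-r", "build_dense.py"],
    preexec_fn=limit_address_space,
    check=True,
)
```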
Please let me know if you have any tips; all advice is very much appreciated!
Thank you.