I've found that I can increase the speed at which models are processed by running multiple simultaneous instances of PhotoScan on the same machine, achieving a speedup of more than 2x!
On a TR1950X workstation:
- Ran a dataset and got a processing time of 2670 sec for the "Build Mesh" step.
- Used the 'Split_in_Chunks.py' Python script to divide the model into 4 chunks, and saved each chunk as a separate file.
- Opened 4 instances of PhotoScan on the same machine, and loaded one chunk into each instance.
- Started the "Build Mesh" step in all 4 instances at nearly the same time, and got the following processing times: 577 sec, 1131 sec, 1132 sec, and 1273 sec.
Of the 4 chunks running simultaneously, the longest took 1273 sec. This means I was able to process the model faster (by a factor of 2670/1273 ≈ 2.1, which is quite significant) by splitting the work between multiple instances.
On a different workstation, running an i7-6700K:
- Ran the "Building" dataset, and got a processing time of 512 sec for the "Build Mesh" step.
- Split the model into two chunks, and processed the two chunks simultaneously.
- For the "Build Mesh" step, I got 417 sec and 415 sec.
This time the two chunks finished at almost the same time, but running multiple simultaneous instances still gave a speedup of about 1.23x (512/417) over a single instance of PhotoScan.
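Since the instances run concurrently, the effective wall-clock time is set by the slowest chunk, not the sum of the chunks. A few lines of Python make the comparison explicit (using the timings above; the function name is just for illustration):

```python
# Effective speedup from running chunks in parallel PhotoScan instances:
# wall-clock time is determined by the slowest chunk.

def parallel_speedup(single_run_sec, chunk_times_sec):
    """Speedup of N simultaneous instances vs one instance on the full model."""
    wall_clock = max(chunk_times_sec)  # instances run concurrently
    return single_run_sec / wall_clock

# TR1950X: one instance took 2670 sec; four chunks ran concurrently.
print(round(parallel_speedup(2670, [577, 1131, 1132, 1273]), 1))  # 2.1

# i7-6700K: one instance took 512 sec; two chunks ran concurrently.
print(round(parallel_speedup(512, [417, 415]), 2))  # 1.23
```

Note that the more evenly the chunks are balanced, the better this works: the 4-chunk run above lost some of its potential gain because one chunk (1273 sec) took more than twice as long as another (577 sec).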
Has anyone else noticed this kind of behavior or done any similar testing?
The relationship between the number of cores and processing time is far from linear:
https://www.pugetsystems.com/labs/articles/Agisoft-PhotoScan-Multi-Core-Performance-709/
I'm guessing this type of processing makes the best use of the cores/threads available.
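One plausible explanation (my assumption, not something I've measured) is Amdahl's-law-style scaling: if only a fraction of the "Build Mesh" step parallelizes, a single instance can't keep all cores busy, while several independent instances run their serial portions concurrently. A toy model:

```python
# Toy Amdahl's-law model (illustrative assumption, not measured PhotoScan data):
# speedup on n cores when a fraction p of the work can be parallelized.

def amdahl_speedup(n_cores, parallel_fraction):
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# If, say, 80% of the step parallelizes, sixteen cores give only 4x, not 16x:
print(round(amdahl_speedup(16, 0.8), 2))  # 4.0
```

Four instances sharing those sixteen cores would each run its own serial portion at the same time as the others, which would be consistent with the >2x gain above, though the real bottleneck in PhotoScan could just as easily be memory bandwidth or lock contention.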
I'd encourage others to try this and post their results on their multi-core systems.