Author Topic: Increase processing performance by a factor of 2  (Read 1957 times)


  • Newbie · Posts: 31
Increase processing performance by a factor of 2
« on: November 06, 2018, 12:52:12 AM »
I've found that I can increase the speed at which models are processed by running multiple simultaneous instances of PhotoScan on the same machine, achieving more than twice the performance!

On a TR1950X workstation:
- Ran a dataset and got a processing time of 2670 sec for the "Build Mesh" step.
- Used the '' python script to divide the model into 4 chunks, saving each chunk as a separate file.
- Opened 4 instances of PhotoScan on the same machine and loaded one chunk into each instance.
- Started the "Build Mesh" step in all 4 instances at nearly the same time, and got the following processing time for each instance: 577 sec, 1131 sec, 1132 sec, 1273 sec.

Of the 4 chunks running simultaneously, the longest one took 1273 sec. This means I was able to process the model faster (by a factor of 2.1, which is quite significant) by splitting the work between multiple instances.
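A quick way to check that arithmetic: when chunks run simultaneously, the job is only done when the slowest chunk finishes, so the effective wall-clock time is the longest chunk time. A minimal Python sketch using the figures above:

```python
def parallel_speedup(baseline_sec, chunk_times_sec):
    """Speedup from running chunks simultaneously: wall-clock time
    is determined by the slowest chunk, hence max()."""
    return baseline_sec / max(chunk_times_sec)

# TR1950X "Build Mesh" figures from the post
speedup = parallel_speedup(2670, [577, 1131, 1132, 1273])
print(round(speedup, 1))  # → 2.1
```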

On a different workstation, running an i7-6700K:
- Ran the "Building" dataset and got a processing time of 512 sec for the "Build Mesh" step.
- Split the model into two chunks and processed them simultaneously.
- For the "Build Mesh" step, I got 417 sec and 415 sec.

This time the two chunks finished at almost the same time, but running two simultaneous instances still gave a speedup of about 1.22× over a single instance of PhotoScan.

Has anyone else noticed this kind of behavior or done any similar testing?

The relationship between core count and processing time is far from linear.
I'm guessing this type of processing makes the best use of the available cores/threads.

I'd encourage others to try this and post their results on their multi-core systems.
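For anyone trying to reproduce this, the per-chunk runs can be scripted rather than started by hand. The sketch below only builds the command lines for launching one headless instance per chunk project; the executable name, the `-r` script-running flag, and all file names are assumptions about a typical PhotoScan Pro setup, so adjust them for your install:

```python
import subprocess

def build_commands(executable, script, project_paths):
    # One headless PhotoScan instance per chunk project; each command
    # runs the same mesh-building script against a different project file.
    return [[executable, "-r", script, path] for path in project_paths]

# Hypothetical file names for a 4-chunk split
cmds = build_commands("photoscan.sh", "build_mesh.py",
                      [f"chunk_{i}.psx" for i in range(1, 5)])
for cmd in cmds:
    print(" ".join(cmd))
    # To actually launch the instances simultaneously:
    # subprocess.Popen(cmd)
```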
« Last Edit: November 06, 2018, 01:25:34 AM by outsider »


  • Newbie · Posts: 31
Re: Increase processing performance by a factor of 2
« Reply #1 on: November 06, 2018, 01:21:55 AM »
Did another test on the i7-6700K system and split the "Building" dataset into 4 chunks.
Processed the "Build Mesh" step for all 4 chunks simultaneously; they finished in 382 sec, 390 sec, 391 sec and 372 sec.

Compared to the 512 sec it took to complete the Build Mesh step as one chunk, running the model in 4 chunks (on an i7-6700K) can yield roughly a 1.33× improvement.
Since the i7-6700K has only 4 cores, though, I imagine there's not much to be gained beyond 4 simultaneous chunks.

That is likely why the TR-1950X, with 16 cores, did much better with simultaneous processing. CPUs with more cores are likely to gain much more by parallelizing processing this way.


  • Sr. Member · Posts: 408
Re: Increase processing performance by a factor of 2
« Reply #2 on: November 06, 2018, 12:03:48 PM »
Interesting read. The first thing that comes to mind: is the processed data (from before you split it into 4 chunks) not still sitting in the cache or working memory?

When I'm running only one instance of PS I can see all of my cores being utilized, each and every time. And I definitely see a slowdown when running two unrelated projects simultaneously.

Curious to see what Agisoft has as an explanation.


  • Hero Member · Posts: 726
Re: Increase processing performance by a factor of 2
« Reply #3 on: November 06, 2018, 03:13:34 PM »
Yes, there was a lot of talk about this and related topics in this thread (quite some time ago, but the results were similar).