Forum

Author Topic: matchPhotos() not using all CPU cores?  (Read 4729 times)

WickedShell

  • Newbie
  • Posts: 17
matchPhotos() not using all CPU cores?
« on: May 07, 2014, 10:12:02 AM »
I'm using a machine that has 16 physical cores, which Linux enumerates as 32 due to hyperthreading. Agisoft only appears to be using 8 of the 32 enumerated cores (based on watching top). Is there a limit to the number of CPU cores it can actually utilize, or am I seeing lower CPU usage because of limiting factors such as the disk?
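For anyone reproducing this observation: Python's standard library reports the same logical-core count that Linux enumerates (hyperthreads included), which is the 32 figure above. A minimal check, using only the stdlib:

```python
# os.cpu_count() returns the number of *logical* CPUs the OS enumerates,
# so a 16-core machine with hyperthreading reports 32 here.
import os

logical = os.cpu_count()
print("logical cores:", logical)
```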

<speculation> Actually, based on watching the console, it is handling point detection strictly per photo, in strictly monotonic order, which leads me to believe that Agisoft is throwing all the cores at a single image at a time and then waiting to sync the results before starting the next one. If that is the case, would it be possible to instead handle point detection on an image-per-thread basis? (A lot of CPU time is sitting idle at the moment.) </speculation>
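To make the proposed scheme concrete, the idea is one worker per image rather than all cores on one image. A hypothetical stdlib-only sketch (detect_points and the file names are stand-ins, not Agisoft's actual internals):

```python
# Hypothetical sketch of image-per-worker point detection, as proposed above.
# detect_points() is a placeholder for the real (unknown) feature detector.
from multiprocessing import Pool

def detect_points(photo):
    # Placeholder: pretend each photo yields some number of keypoints.
    return (photo, len(photo) * 10)

photos = ["IMG_0001.jpg", "IMG_0002.jpg", "IMG_0003.jpg"]

if __name__ == "__main__":
    # Pool() defaults to one worker per logical core, so idle cores
    # would each get their own image instead of waiting on a sync point.
    with Pool() as pool:
        results = pool.map(detect_points, photos)
    print(results)
```

As Alexey notes in the reply below, the trade-off is memory: every in-flight image must be resident at once, so memory use scales with the number of workers.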

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 14813
Re: matchPhotos() not using all CPU cores?
« Reply #1 on: May 07, 2014, 12:30:22 PM »
Hello WickedShell,

The point detection stage does work with a single photo at a time, and we are not planning to change that, in order to avoid the much higher memory consumption that simultaneous feature point detection on multiple images would require. Usually the time required for the point detection stage is much lower than the time required for photo matching, so at the moment we are not planning to change this, at least not for the next version.

This behavior is not specific to the Python command; it is the same for the common processing workflow via the GUI.
Best regards,
Alexey Pasumansky,
Agisoft LLC

WickedShell

  • Newbie
  • Posts: 17
Re: matchPhotos() not using all CPU cores?
« Reply #2 on: May 07, 2014, 06:22:23 PM »
I'm surprised the memory consumption would be that much higher than in some of the later steps (and the final step of detecting points), but obviously I'm unfamiliar with that side of the code. On the current machine, for 2587 images, it took 1 hour 3 minutes to detect points and 1 hour 51 minutes to select pairs and match, which is the only reason I'm looking to speed this up.
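A back-of-envelope check of the reported times (63 minutes detection, 111 minutes matching) shows why detection is still worth speeding up here, even though it is the smaller of the two stages:

```python
# Share of the two reported stages spent in point detection,
# from the timings quoted above (63 min vs 111 min).
detect_min = 63
match_min = 111
share = detect_min / (detect_min + match_min)
print(f"detection is {share:.0%} of the two stages")  # prints "detection is 36% of the two stages"
```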

The workflow has moved on to reconstructing depth at the moment (no GPU attached), and I'm only seeing 75-80% CPU usage. Is this the same behavior? If so, it would be really nice to be able to modify it here, as I'm looking at 21 hours 40 minutes to build depth when it could be leveraging 20% more CPU.