Topic: Help with network processing performance

Srolin
Help with network processing performance
« on: July 17, 2018, 10:53:21 AM »
I am trying to understand why network processing is significantly slower than normal single-node processing.

Single node, Core i9, dual GTX 1080: alignment of 963 100 MPix images in 15 minutes
Single node, Xeon E5-2687W, dual GTX 1070: alignment of 963 100 MPix images in 26 minutes

Network processing, 2 nodes: alignment of 963 100 MPix images in 38 minutes

All runs were done with the same settings; see the attachments.

Gigabit network between the nodes; the project is stored on a SAN with a 10 Gbit connection.

Any ideas?

Alexey Pasumansky (Agisoft Technical Support)
Re: Help with network processing performance
« Reply #1 on: July 17, 2018, 06:23:01 PM »
Hello Srolin,

For the i9 run you had accuracy set to High instead of Highest (as in the other tests).

As for the processing time shown in the report for network processing: it records the total time spent by all nodes combined, so the report can show 40 minutes even if each of the two nodes worked for only 20 minutes of wall-clock time.

Also, do you have a properly configured GPU mask on each node, and is fine-level task distribution enabled?
Best regards,
Alexey Pasumansky,
Agisoft LLC

Srolin
Re: Help with network processing performance
« Reply #2 on: July 20, 2018, 11:35:39 AM »
Hi Alexey

Thank you for your answer. How do I set the GPU mask? I am having a hard time understanding the manual. By default the two nodes chose masks 3 and 7; the i9 now has 3 GTX 1080s and the Xeon has 2 GTX 1070s. What would be the correct masks for the two nodes?


Alexey Pasumansky (Agisoft Technical Support)
Re: Help with network processing performance
« Reply #3 on: July 20, 2018, 03:25:21 PM »
Hello Srolin,

The gpu_mask value is the integer representation of a binary mask.

For example, if you have 2 GPUs and wish to have both of them enabled, the binary mask will be 11 (each bit applies to a separate GPU); the integer representation of binary 11 is 3.
So to enable all N of N graphics cards you can use the following formula to calculate the gpu_mask value:
Code: [Select]
gpu_mask = (2 ^ N) - 1
For three GPUs it should be 7, so the value is correct if it is applied to the GTX 1080 system (and 3 is correct for the dual GTX 1070 node). You can also check the log on each node to see whether all cards are initialized at the beginning of a GPU-supported stage.
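To make the bit arithmetic concrete, here is a small illustrative sketch (plain Python, not part of Agisoft's API; the helper names are hypothetical). Each bit position in the mask corresponds to one GPU, so a mask can also enable just a subset of the cards:
Code: [Select]
# Hypothetical helpers, for illustration only: compute an integer
# gpu_mask where bit i enables GPU i.

def all_gpus_mask(n):
    # Enable all n GPUs: binary 1...1 (n ones) = 2^n - 1
    return (1 << n) - 1

def subset_mask(gpu_indices):
    # Enable only the listed GPUs, e.g. [0, 2] -> binary 101 -> 5
    mask = 0
    for i in gpu_indices:
        mask |= 1 << i
    return mask

print(all_gpus_mask(3))    # 7  (three GTX 1080s)
print(all_gpus_mask(2))    # 3  (two GTX 1070s)
print(subset_mask([0, 2])) # 5  (first and third GPU only)
A subset mask can be useful, for example, to leave one card free to drive the display.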
Best regards,
Alexey Pasumansky,
Agisoft LLC