Forum

Author Topic: Titan X benchmark  (Read 14575 times)

igor73

  • Full Member
  • Posts: 228
Re: Titan X benchmark
« Reply #15 on: April 13, 2015, 01:23:10 PM »
Asus P9X79, the standard version, not the WS. It has two full-speed x16 slots. This board should not cap performance with two GPUs, right?

http://www.asus.com/se/Motherboards/P9X79/

Wishgranter

  • Hero Member
  • Posts: 1202
Re: Titan X benchmark
« Reply #16 on: April 13, 2015, 01:51:06 PM »
It's because you use multiple GPUs; you "never" get 100% scaling in a multi-CPU/GPU environment. It's not limited by PCI-E speed...
----------------
www.mhb.sk

igor73

  • Full Member
  • Posts: 228
Re: Titan X benchmark
« Reply #17 on: April 13, 2015, 04:52:40 PM »
Wishgranter, that's what I thought too. However, some software seems to have this worked out: in Octane Render you get 100% better performance, or very close to it.

Marcel

  • Sr. Member
  • Posts: 309
Re: Titan X benchmark
« Reply #18 on: April 13, 2015, 10:08:52 PM »
I agree with Wishgranter, it's not a PCI-E bandwidth issue.

If you look at the GPU load graph during processing, you can see that the work is done in batches. A GPU gets a batch of work and has high load for a minute or two, and then goes idle for a short while until it gets a new batch. At the end of the Dense Cloud reconstruction one GPU is often idle while the other is finishing the last batch.

So with a single GPU you often have 100% utilization, while with two GPUs the work is simply spread less efficiently.
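
To make the tail effect concrete, here is a rough Python sketch. This is not the real scheduler, just my assumption of greedy per-batch dispatch with made-up batch times:

Code:
import random

# Rough sketch of per-batch GPU dispatch (assumed, not the real scheduler).
# The job finishes only when the last batch does, so with two GPUs one of
# them often idles while the other drains the final batch.

def makespan(batch_times, num_gpus):
    """Greedy dispatch: each batch goes to the GPU that frees up first."""
    free_at = [0.0] * num_gpus           # time when each GPU becomes free
    for t in batch_times:
        i = free_at.index(min(free_at))  # next available GPU
        free_at[i] += t
    return max(free_at)                  # the job ends with the last batch

random.seed(0)
# Hypothetical workload: 9 batches of roughly 60-120 s each.
batches = [random.uniform(60, 120) for _ in range(9)]

one = makespan(batches, 1)
two = makespan(batches, 2)
print(f"1 GPU:  {one:5.0f} s")
print(f"2 GPUs: {two:5.0f} s (speedup {one / two:.2f}x, not 2.00x)")

With a perfectly even split you would get close to 2x, but with a handful of uneven batches one GPU runs out of work before the other, which is exactly the idle time you see in the load graph.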

igor73

  • Full Member
  • Posts: 228
Re: Titan X benchmark
« Reply #19 on: April 14, 2015, 11:06:03 AM »
Ahh, OK, that explains a lot. I have noticed that the longer the job, the more even the card performance becomes; I suppose the idle time at the end gets amortized over more batches.