
Author Topic: Titan X benchmark  (Read 14544 times)

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Titan X benchmark
« on: April 09, 2015, 03:41:28 PM »
Got my new cards 2 days ago and ran some tests.  EVGA Titan X Superclocked.  Runs at about 1280 MHz on boost.  Noticed GPU load was only around 60% during the dense cloud generation.  SLI was enabled during the test.

Nvidia Titan X Benchmark

The building sample data was used http://www.agisoft.com/downloads/sample-data/

I used the following settings (same as AnandTech, but tie point limit set to 0 and CPU cores 0/12):

## Align Photos: High, disabled, 40000.  0 tie points
## Build Dense Cloud: Medium, Aggressive
## Build Model: Arbitrary, Dense, Interpolation=Enabled, 0 face count
## Build Texture: Generic, Mosaic, 4096, 1


System info
Microsoft Windows 7 Professional
Processor   Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz
64GB RAM
2x Titan X 12gb cards.  SLI enabled


Align photos

Finished processing in 294.814 sec (exit code 1)

Dense cloud
All CPU cores disabled in Agisoft preferences.  0/12

finished depth reconstruction in 80.37 seconds
Device 1 performance: 1004.91 million samples/sec (GeForce GTX TITAN X)
Device 2 performance: 994.448 million samples/sec (GeForce GTX TITAN X)
Total performance: 1999.36 million samples/sec
Generating dense point cloud...
selected 50 cameras in 0.22 sec
working volume: 1755x2763x1220
tiles: 1x1x1
selected 50 cameras
preloading data... done in 0.325 sec
filtering depth maps... done in 43.934 sec
preloading data... done in 7.956 sec
accumulating data... done in 1.201 sec
building point cloud... done in 0.608 sec
5706624 points extracted
Finished processing in 137.235 sec (exit code 1)


EDIT.  Ran the same test on the Ultra setting, as I noticed the cards were only at 60% load.  Ultra pushed the cards to 80-83% utilization and the boost clock went up to 1316 MHz.  Much better result.  If Agisoft could tweak something so 100% of the card was used, I'd guess performance would be even better.

Ultra Results

finished depth reconstruction in 2004.43 seconds
Device 1 performance: 1326.94 million samples/sec (GeForce GTX TITAN X)
Device 2 performance: 1279.33 million samples/sec (GeForce GTX TITAN X)
Total performance: 2606.27 million samples/sec

Finished processing in 4388.76 sec (exit code 1)
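For what it's worth, the throughput gain from the higher GPU load can be read directly off the two runs. A quick sketch using the combined figures reported above:

```python
# Combined GPU throughput reported at each dense-cloud quality setting.
medium_total = 1999.36  # million samples/sec at ~60% GPU load (Medium)
ultra_total = 2606.27   # million samples/sec at ~80% GPU load (Ultra)

gain = ultra_total / medium_total - 1
print(f"Ultra run sustained {gain:.0%} more samples/sec than the Medium run")
```

So the higher load translated into roughly 30% more sustained throughput from the same pair of cards.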


Mesh

Finished processing in 144.503 sec (exit code 1)


Build Texture
9056814 faces extracted in 32.709s
Calculating vertex colors...
processing nodes... done in 0.016 sec
calculating colors... done in 20.289 sec
Finished processing in 144.503 sec (exit code 1)
Parameterizing texture atlas...
Packing 2117 charts...
Blending textures...
blending textures... ************************************************** done in 49.94 sec
postprocessing atlas... done in 0.117 sec
Finished processing in 154.71 sec (exit code 1)




« Last Edit: April 09, 2015, 09:16:30 PM by igor73 »

dtmcnamara

  • Jr. Member
  • **
  • Posts: 73
    • View Profile
Re: Titan X benchmark
« Reply #1 on: April 09, 2015, 03:47:54 PM »
Are these the new 12GB cards?

Just read your other post and realized they are the new 12GB cards... performance is not what I was expecting. My GTX 780 cards pull 700-800 million samples/sec and ran me $200 each. Crap, I was really looking forward to buying new cards this month. Guess I will just hold off for a little longer.

On a side note, the 250W max power draw from each card is nice.
« Last Edit: April 09, 2015, 03:51:47 PM by dtmcnamara »

Wishgranter

  • Hero Member
  • *****
  • Posts: 1202
    • View Profile
    • Museum of Historic Buildings
Re: Titan X benchmark
« Reply #2 on: April 09, 2015, 06:31:34 PM »
Hi Igor, thanks for the info. I get approx. 900 million samples/sec from a GTX 980, so the Titan X is just a bit faster. Which drivers have you used? The new driver https://www.youtube.com/watch?v=CMhfQayEiXI has support for OpenCL 1.2, can you test with it? The thing is, Nvidia has been crippling OpenCL performance since the 260-270 driver versions (OpenCL 1.0-1.1). By my calculations it could be 2 times faster than it is now once they "unlock" the OpenCL performance....
----------------
www.mhb.sk

Wishgranter

  • Hero Member
  • *****
  • Posts: 1202
    • View Profile
    • Museum of Historic Buildings
Re: Titan X benchmark
« Reply #3 on: April 09, 2015, 06:52:30 PM »
Did you use the HOUSE and MONUMENT data sets for the reconstruction?
----------------
www.mhb.sk

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #4 on: April 09, 2015, 07:41:53 PM »
The GPUs barely warmed up for this test.  I checked, and only 60% of the GPUs was used according to the Nvidia control panel.  Will run again on Ultra and also on a bigger project.  Will check the drivers too.  I used the Building data set, same as AnandTech.

I bought these cards for Octane Render because they have 12GB.  Some of the scenes I render with 16K textures don't fit into my old 6GB Titan card, so for that purpose the 12GB Titan X cards are worth the money.  They kick ass at Octane Render as it's optimized for CUDA.  Agisoft is only a small part of my workflow.  For Agisoft alone I can't imagine these cards being worth the money, as the 12GB isn't needed anyway and because of the poor CUDA support.

I have a 700W PSU and have had no problems; it seems to be enough.
« Last Edit: April 13, 2015, 04:54:35 PM by igor73 »

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #5 on: April 09, 2015, 07:54:50 PM »
Wishgranter, the link to the drivers goes to a monkey video :-)

Anyway, I used this driver, which is the latest non-beta: 347.88
« Last Edit: April 13, 2015, 04:55:48 PM by igor73 »

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #6 on: April 09, 2015, 09:18:18 PM »
Ran the test on Ultra and this time the load went up to around 80% on the cards.  Much better performance now.  Under boost the clock speed stayed between 1300-1316 MHz on the 2 EVGA Titan X Superclocked cards.

finished depth reconstruction in 2004.43 seconds
Device 1 performance: 1326.94 million samples/sec (GeForce GTX TITAN X)
Device 2 performance: 1279.33 million samples/sec (GeForce GTX TITAN X)
Total performance: 2606.27 million samples/sec

The second card runs a tad hotter and slower, as it sits under the first one and does not get as good cooling.
No problems with heat though.  Even the second card never goes over 83C, even after extended periods.  Fan speed hits 2500 RPM on card 2 and stays at around 2100 on card 1.
« Last Edit: April 13, 2015, 04:56:21 PM by igor73 »

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #8 on: April 09, 2015, 09:45:26 PM »
Ran another test, this time with SLI disabled and only 1 card used in Agisoft.  All CPU cores disabled and exactly the same settings as before, except High on dense cloud instead of Medium as in the first test.  Pretty good performance, I would say!  The GPU load went up to 95% now for some reason.  Still hitting 1316 MHz as max clock speed, so no change there.

finished depth reconstruction in 541.615 seconds
Device 1 performance: 1534.46 million samples/sec (GeForce GTX TITAN X)
Total performance: 1534.46 million samples/sec
« Last Edit: April 09, 2015, 09:57:42 PM by igor73 »

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #9 on: April 09, 2015, 10:05:30 PM »
Another test on the High dense cloud setting, this time with SLI disabled but both GPUs enabled in Agisoft.  For some reason the per-card performance drops a bit compared to using a single card.  Wonder why that is?  I don't think it's heat-related, as the cards still run at 1316 MHz, same as when running a single card.  The Titan X seems to be around 40-60% faster than a 980 in Agisoft and only uses 250W.  So if you are building a serious hardcore system, the Titan X might be worth it after all, even for Agisoft?


2x Titan X EVGA Superclocked.   Dense cloud High. 
finished depth reconstruction in 322.303 seconds
Device 1 performance: 1353.73 million samples/sec (GeForce GTX TITAN X)
Device 2 performance: 1350.39 million samples/sec (GeForce GTX TITAN X)
Total performance: 2704.12 million samples/sec


1x Titan X EVGA Superclocked.Dense cloud High. 
finished depth reconstruction in 541.615 seconds
Device 1 performance: 1534.46 million samples/sec (GeForce GTX TITAN X)
Total performance: 1534.46 million samples/sec

Wishgranter

  • Hero Member
  • *****
  • Posts: 1202
    • View Profile
    • Museum of Historic Buildings
Re: Titan X benchmark
« Reply #10 on: April 10, 2015, 01:49:10 AM »
Sorry about that link to the monkey stuff :D  So, did you use the newest driver in this bench, or?
----------------
www.mhb.sk

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #11 on: April 10, 2015, 02:54:37 PM »
Yes, I used the newest driver I could find on the Nvidia website.

dtmcnamara

  • Jr. Member
  • **
  • Posts: 73
    • View Profile
Re: Titan X benchmark
« Reply #12 on: April 10, 2015, 05:50:49 PM »
Are the tests with 2 cards with or without SLI disabled?

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Titan X benchmark
« Reply #13 on: April 10, 2015, 07:48:13 PM »
This is with 2 cards, SLI disabled:

2x Titan X EVGA Superclocked.   Dense cloud High. 
finished depth reconstruction in 322.303 seconds
Device 1 performance: 1353.73 million samples/sec (GeForce GTX TITAN X)
Device 2 performance: 1350.39 million samples/sec (GeForce GTX TITAN X)
Total performance: 2704.12 million samples/sec

This is with only 1 card

1x Titan X EVGA Superclocked.Dense cloud High. 
finished depth reconstruction in 541.615 seconds
Device 1 performance: 1534.46 million samples/sec (GeForce GTX TITAN X)
Total performance: 1534.46 million samples/sec

As you can see, a Titan X is capable of 1534.46 million samples/sec, but when you run 2 cards the per-card performance drops a bit.  I don't know why, as the clock speed was the same during both tests.
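The size of that drop can be quantified from the two runs above. A quick sketch of the two-card scaling, using the reported throughput figures:

```python
# Scaling of the 2x Titan X run vs. the single-card run,
# using the throughput figures reported in this thread.

single_card = 1534.46  # million samples/sec, 1 card
dual_total = 2704.12   # million samples/sec, 2 cards combined

speedup = dual_total / single_card            # actual speedup from adding a card
efficiency = dual_total / (2 * single_card)   # fraction of ideal 2x scaling

print(f"Speedup:    {speedup:.2f}x")
print(f"Efficiency: {efficiency:.0%} of ideal 2-card scaling")
```

That works out to roughly 1.76x speedup, i.e. about 88% of ideal scaling. Since the clocks stayed at 1316 MHz in both runs, the loss looks more like scheduling or bus overhead than throttling.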

fx27

  • Newbie
  • *
  • Posts: 5
    • View Profile
Re: Titan X benchmark
« Reply #14 on: April 13, 2015, 11:18:35 AM »
...
System info
Microsoft Windows 7 Professional
Processor   Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz
64GB RAM
2x Titan X 12gb cards.  SLI enabled

...

Sounds like not enough PCIe lanes on your board...

Which mainboard do you use?

At the moment the best choice is the X99-E WS from Asus...

Daniel