Agisoft Metashape
Agisoft Metashape => General => Topic started by: Magnus on September 20, 2014, 10:07:38 AM
-
Hello!
Seems that the new 900-series from Nvidia might be good for PS: http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20
This brings me to a question though.
Since PS doesn't work with the newer Nvidia drivers (I tried the newest release, GeForce 344.11, with my old GTX 570), I assume PS won't work with the 900 series in its current release?
I have an Asus GTX 970 Strix (it turns its fans off when not under load, I love that) hopefully coming next week, so I will soon find out, I guess, hehe.
Best, Magnus.
-
Hello Magnus,
The problems with the latest drivers (340.52 & 344.11) will be fixed in the next update. I think the pre-release will be available in a few days, so the GTX 900 series can be tested.
-
We haven't got a GTX 980 yet, but looking at the specifications we assume it might be on the same performance level as the GTX TITAN.
-
Thanks a lot Alexey! That's great to hear!
Best, Magnus.
-
On a little sidenote;
The 900 series has destroyed the Titan in several benchmarks of pure processing power.
-
Hello Marius,
I hope we will get a GTX 980 soon to check its PhotoScan performance.
-
I have to decide between buying a GTX 780 or an ASUS STRIX-GTX980-DC2OC-4GD5.
Is there any news regarding compatibility?
Thanks in advance.
-
Compatibility should be OK. As for performance, it's hard to say without any benchmarks, BUT on various sites I have seen very encouraging OpenCL performance, even better than the 290X cards...
So go with the 980.
-
Any news on the GTX 980 card? :) Here at work we are planning to buy new graphics cards, but they had planned for the GTX 780, so I'm really interested to know if I should try convincing them to buy the 980 instead, and whether the 980 is really worth the extra money over the 780.
-
I think the 980 is the better choice (if this card works in PS ;) ).
Power consumption is reduced by 50 watts in comparison to the 780, and you get more processing power.
Great card... I also want 2 of them 8)
-
Actually, I just found out that one guy here at work installed a 980 card today; I will try that with Agisoft ASAP. What should I look for to really know that Agisoft supports it? Is it that the GPU kicks in when calculating dense clouds?
-
OK, I did a test and the GPU didn't work; I guess it's the Nvidia driver problem.
-
Hello bmc130,
Please try GTX 980 with PhotoScan 1.1.
-
I will try to borrow my colleague's computer again and install 1.1 on it. Is it possible to run 1.1 in evaluation/demo mode?
-
Hello bmc130,
Sure, version 1.1 can be run in demo mode without even a Trial key.
-
Hmm, I don't know if I did something completely wrong here or used some weird setting in Agisoft... but the GTX 980 was actually slower, at least when looking at the Show Info for my chunk.
Pic 1 is my machine: GTX 670, 24 GB RAM and a Xeon 3.47 GHz.
Pic 2 is the machine with the GTX 980, 24 GB RAM and a Xeon 2.6 GHz (can't remember exactly). In the OpenCL settings I had 2 CPU cores deactivated.
Pic 3 is the machine with the GTX 980, 24 GB RAM and a Xeon 2.6 GHz (can't remember exactly). In the OpenCL settings I had all CPU cores except one deactivated.
This is just a scan with 18 pictures; maybe it will be much different on a larger scan.
-
Hello bmc130,
You need to check the Device Performance lines in the Console pane right after the depth maps estimation phase, or enable the "Keep depth maps" option in the PhotoScan Preferences window.
The screenshots show no time for the depth maps estimation, and that is the only step where GPUs are used for processing.
-
Aha, OK, I had no idea. So if I keep depth maps, will that show up in the Show Info dialog?
-
Yes.
-
OK, so now I enabled "Keep depth maps" and these are the times I got. Strange that it's slower; could it be that the computer with the 980 card has a weaker processor?
Pic1. GTX 670
Pic2. GTX 980
-
Was it run on the SAME PC? What CPU was in there?
It's best to use the same benchmark scene: http://www.agisoft.com/forum/index.php?topic=651.msg2971#msg2971
I will send the 2nd benchmark scene, but I need to search for it first.
That way we compare apples with apples.
-
No, that's the thing: they were not run on the same computer. This was just a quick test, since one of my co-workers got the card yesterday. His processor is slower than mine.
The specs for his computer are:
Xeon X5650 2.67 GHz, 24 GB RAM, GTX 980
My computer:
Xeon W3690 3.47 GHz, 24 GB RAM, GTX 670
Unfortunately we are in full production, so I can't hijack his computer as much as I would like to :)
-
Move the GPU to the machine with the speedier CPU, as it delivers data to the GPU much faster. Disable ALL CPU cores in the OpenCL settings, benchmark, and report back; you should see much higher speed.
-
Ahhh... I'm still waiting for the 4 GTX 980s I ordered some weeks ago for the new machine...
-
Just in case this may help somebody:
When I installed the new Nvidia driver for my GTX 675M and started PS, the preference settings for OpenCL had changed: it used both the GTX 675M and the onboard HD 4000 graphics (which is only able to use OpenCL ver. 4.0). After I changed the Nvidia Optimus settings for PS so that it only uses the GTX 675M, it worked fine and used OpenCL ver. 4.4.
-
For GPU speed reference: these cards were introduced in November of 2010... The Quadro came much later and is supposedly superior for OpenCL calculations. Don't believe the hype. I can't wait to get 5 x 970s in this system.
-
0/8 CPU cores, 2/2 R9 290X:
Device 1 performance: 932.164 million samples/sec (Hawaii)
Device 2 performance: 926.353 million samples/sec (Hawaii)
Total performance: 1858.52 million samples/sec
0/8 CPU cores, 2/2 GTX 980:
Device 1 performance: 1007.78 million samples/sec (GeForce GTX 980)
Device 2 performance: 1003.94 million samples/sec (GeForce GTX 980)
Total performance: 2011.73 million samples/sec
Monument Testfile, Resolution high...
Hope that helps...
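-
A side note on reading those numbers: PhotoScan's total is just the sum of the per-device lines, so you can total (or compare) console output from different machines mechanically. A minimal sketch, assuming the "Device N performance" console format shown in the post above; the `console_output` string here just reuses the 290X figures already posted:

```python
import re

# Sample console output in the format posted above (two R9 290X "Hawaii" devices).
console_output = """\
Device 1 performance: 932.164 million samples/sec (Hawaii)
Device 2 performance: 926.353 million samples/sec (Hawaii)
"""

def total_performance(text: str) -> float:
    """Sum per-device throughput (million samples/sec) from PhotoScan console text."""
    values = re.findall(r"Device \d+ performance:\s*([\d.]+) million samples/sec", text)
    return sum(float(v) for v in values)

print(round(total_performance(console_output), 2))  # 1858.52, matching the posted total
```

The same function applied to the GTX 980 lines gives the ~2011 million samples/sec total quoted above, which is how the two dual-GPU setups can be compared head to head.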