Agisoft Metashape
Agisoft Metashape => General => Topic started by: an198317 on July 08, 2014, 01:04:37 AM
-
Hi all,
I am curious about how much the GPU is involved in the modeling process. I can see that during the dense point cloud step the GPU is used for computing, but during mesh generation, which is the heaviest computing process, I didn't see much GPU involvement. So I am curious: in which step(s) is the GPU actually used for processing?
Also, how should I set up OpenCL to optimize speed? My workstation has 12 CPU cores, and the Quadro 4000 has 8 cores according to the PhotoScan OpenCL interface. Based on the interface's suggestion, will deactivating one CPU core (using 11 of them) make full use of all 8 Quadro 4000 cores?
Thanks,
-
GPU is not used during Align Photos.
GPU is used heavily during Reconstructing depth portion of Build Dense Cloud.
GPU is not used during Build Mesh.
GPU is not used during Build Texture.
With an i7-3770 CPU (4 cores / 8 virtual cores) and a Radeon HD 7770 GPU, best performance is with 6/8 "Active CPU Cores". However, during Align Photos 100% of all 8 virtual cores are used.
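The rule of thumb implied above (deactivate roughly one CPU core per enabled OpenCL GPU, so the CPU threads don't starve the GPU feeder thread) can be sketched as a tiny helper. The function name and logic here are illustrative only, not part of PhotoScan:

```python
def suggested_active_cores(cpu_cores: int, enabled_gpus: int) -> int:
    """Rule of thumb: free one CPU core per enabled OpenCL GPU,
    but always keep at least one core active."""
    return max(1, cpu_cores - enabled_gpus)

# 12 CPU cores with one GPU enabled -> 11 active cores, as the OP guessed
print(suggested_active_cores(12, 1))
```

In practice the sweet spot varies (the i7-3770/HD 7770 setup above did best at 6 of 8 virtual cores), so it is worth benchmarking a small dataset at a couple of settings.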
-
Is there a possibility that PhotoScan could also use the GPU for more steps than Build Dense Cloud in the near future?
This would be a huge performance upgrade for the software.
-
That's something I would hope PhotoScan Pro can change....
-
I am confused, since I have seen this a couple of times now: people saying that the Build Mesh step is the most computationally heavy step? I keep seeing dense cloud generation as the most intensive on my side. Even using a GPU that is much more powerful than the CPU, it takes much longer than the mesh generation.
Am I the only one seeing this?
Leo
-
Current project has 250 to 750 photos per chunk. The Medium setting for Build Dense Cloud results in 1 million to 7 million points per chunk, and takes considerably longer than Build Mesh.
-
Hello,
although the information given in the previous posts seems quite clear, there is something I do not understand.
I'm working with an i7-2770 CPU (4 cores / 8 virtual cores) and a GTX 660 graphics card.
First I disabled 1 CPU core. When starting the depth reconstruction, no GPU activity takes place.
OK, so I thought that has to do with the virtual cores, meaning that in practice I have to disable 2 cores. I restarted the depth reconstruction, but GPU activity is still at 0%.
Now, what am I doing wrong, or what did I misunderstand?
Should depth reconstruction and dense point cloud generation both happen on the GPU, or is it only the DPC generation?
If the former is correct, then why does PS not use the GPU?
Thanks for any hint.
-
I would be interested in the same question... I use a GTX 770 though, also an i7.
-
Hello Patribus,
PhotoScan uses every OpenCL-supported device that is checked in the corresponding tab of the PhotoScan Preferences window.
Note that if you are using Windows Remote Desktop to connect to another machine, the list of OpenCL devices will likely be empty and PhotoScan will not be able to use the GPUs installed on the remote computer.
-
Hello Alexey,
I do indeed use TeamViewer as a remote desktop service. But my OpenCL devices are listed in PS and 'activated' accordingly.
So I would expect them to be used as well.
But my GPU remains unused.
Some other possible reason for this?
PS: just added a screenshot of my preferences.
-
Hello Patribus,
I missed which tool you are using to check GPU usage. Can you please share the method?
Best regards,
Alex
-
Hello Patribus,
I missed which tool you are using to check GPU usage. Can you please share the method?
Best regards,
Alex
I just searched for "GPU monitor" and found some widgets for Windows (7 in my case) which show some (or all) parameters of my graphics card (temperature, usage, fan activity, memory usage, etc.). When I scroll in the browser I can see the GPU activity go up from 0% to 1% or 2%. The rest of the time it's at 0%.
-
Hello Patribus,
Could you please also check the console output in PhotoScan? If the GPU is used you'll see lines starting from [CPU] and [GPU] during depth maps estimation process.
-
Can you share the widget name?
I suggest you do a Build Dense Cloud on a dataset of 100+ photos with OpenCL enabled and another with it disabled, and compare the times. That is, if you can enable it now. ;)
If you would like to test the GPU, try the GPU Shark utility. I don't know if it is exact, but it shows extended usage of the GPU and the temperature rising. 8)
Best regards,
Alex
-
Hello Patribus,
Could you please also check the console output in PhotoScan? If the GPU is used you'll see lines starting from [CPU] and [GPU] during depth maps estimation process.
Yes, the lines are present.
Just strange that no activity appears in the GPU monitor.
-
Can you share the widget name?
Well, they are all called GPU Monitor, so it's a bit difficult to distinguish them.
One is attached; the other is called GPU_Meter_V2.4 (too big to attach).
That is, if you can enable it now. ;)
Hehe, I do not know why, but it works now... Good so!
-
Thanks, will check them out.
-
I did check it out and it works for me.
GPU usage actually moves from 1% to max, but the utility has a 1-second refresh, so it is not too accurate. Try to do the Dense Cloud with OpenCL enabled and disabled. You will see the difference. :)
Best regards,
Alex
-
Hello Patribus,
Could you please also check the console output in PhotoScan? If the GPU is used you'll see lines starting from [CPU] and [GPU] during depth maps estimation process.
Yes, the lines are present.
Just strange that no activity appears in the GPU monitor.
Hello Alexey,
the other day I just searched quickly for the [GPU] lines; today I had a closer look, and there seems to be an error with the GPU process.
....
timings: rectify: 0.016 disparity: 0.593 borders: 0.015 filter: 0.156 fill: 0
[GPU] estimating 213x476x96 disparity using 213x476x8u tiles, offset 0
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_OUT_OF_RESOURCES
GPU processing failed, switching to CPU mode
[CPU] estimating 213x476x96 disparity using 213x476x8u tiles, offset 0
timings: rectify: 0.016 disparity: 0.764 borders: 0 filter: 0.062 fill: 0
[CPU] estimating 296x587x96 disparity using 296x587x8u tiles, offset -33
timings: rectify: 0.031 disparity: 0.593 borders: 0.078 filter: 0.047 fill: 0
[GPU] estimating 375x567x96 disparity using 375x567x8u tiles, offset -15
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_OUT_OF_RESOURCES
GPU processing failed, switching to CPU mode
[CPU] estimating 375x567x96 disparity using 375x567x8u tiles, offset -15
timings: rectify: 0.032 disparity: 0.781 borders: 0.047 filter: 0.109 fill: 0
[CPU] estimating 509x532x96 disparity using 509x532x8u tiles, offset -30
timings: rectify: 0.078 disparity: 0.874 borders: 0.047 filter: 0.094 fill: 0
[GPU] estimating 464x623x96 disparity using 464x623x8u tiles, offset -23
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_OUT_OF_RESOURCES
GPU processing failed, switching to CPU mode
[CPU] estimating 464x623x96 disparity using 464x623x8u tiles, offset -23
timings: rectify: 0.109 disparity: 1.093 borders: 0 filter: 0.031 fill: 0
[CPU] estimating 453x528x96 disparity using 453x528x8u tiles, offset -22
...
Can you recognize this?
Best regards
-
Hello Patribus,
Could you please provide the full log related to the depth maps generation? Please also specify whether this problem is reproducible on any project with any reconstruction settings.
We can suggest running some OpenCL tests; for example, GPU Caps should have such functionality.
-
Hi Alexey,
please find attached the full report of the very short processing run.
The GPU error appears every time (i.e. for all PS projects).
Also, GPU Caps and another OpenCL benchmark tool I downloaded both crash on startup.
There seems to be a deeper problem on the GPU side, although I have the newest drivers.
I suppose this is also related to the problems I had in the past with the GPU options in PS, i.e., PS crashed each time I tried to access the settings.
Well, I'll also do some research to see if I find something.
-
Any new ideas on what it could be?
best regards
-
Hello Patribus,
In a similar topic (http://www.agisoft.ru/forum/index.php?topic=561.msg14253#new) it has been reported that the latest nVidia drivers cause this problem, so you should probably roll back to the previous driver version.
-
If that doesn't help, I would suggest you do a fresh reinstall of the system, drivers, software... based on your previous issues...
The latest version of the GPU gadget you were using was infected with a bug... ;)
-
If that doesn't help, I would suggest you do a fresh reinstall of the system, drivers, software... based on your previous issues...
The latest version of the GPU gadget you were using was infected with a bug... ;)
Yes, I would very much like to do that, but this workstation is my main working PC for almost everything, so reinstalling everything would be a lot of stress. Mmmm, maybe I'll find time in the next few weeks.
I'll try the driver downgrade first.
Cheers
-
Hi All!
I would like to ask: how fast is it compared with using the CPU alone for the dense point cloud process? 1x or 10x?
:)
Cheers,
Eric
-
In my case (I am using a GTX 560 video card, which is not even close to being the fastest card) and a quad-core CPU, it takes around 4 to 5 times longer to do the dense cloud with only the CPU instead of the GPU.
Some people have reported more than 10 times faster processing in other threads, especially with one of the higher-end video cards.
Leo
-
I only notice a difference when building the depth maps. Building the dense point cloud still takes a huge amount of time regardless of whether the GPU is being used:
from the log:
finished depth reconstruction in 1397.45 seconds
Device 1 performance: 83.8203 million samples/sec (CPU)
Device 2 performance: 368.878 million samples/sec (GeForce GTX 590)
Device 3 performance: 401.292 million samples/sec (GeForce GTX 590)
Total performance: 853.991 million samples/sec
That was for 375 depth maps at Medium Quality on a chunk with 430 cameras.
On another scene with 810 cameras and 730 depth maps at Medium Quality:
Device 1 performance: 104.977 million samples/sec (CPU)
Device 2 performance: 639.976 million samples/sec (Tesla k20c)
Depth maps calculated in 38 minutes
Generating the dense cloud itself still takes the most time - sometimes in excess of 60 hours (at high quality).
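The per-device figures in such logs add up to the reported total, which makes it easy to see how much of the depth-map throughput actually comes from the GPUs. Using the numbers from the first log above (the variable names are my own):

```python
# Per-device throughput from the log, in million samples/sec
devices = {"CPU": 83.8203, "GTX 590 #1": 368.878, "GTX 590 #2": 401.292}

total = sum(devices.values())
gpu_share = (devices["GTX 590 #1"] + devices["GTX 590 #2"]) / total

print(f"total: {total:.3f} Msamples/s")  # ~853.990, matching the log's 853.991
print(f"GPU share: {gpu_share:.1%}")     # roughly 90%
```

So on that machine the two GTX 590 halves did roughly nine-tenths of the depth-map work, which is consistent with the observation that only the depth-map stage benefits: the later dense cloud generation runs on the CPU regardless.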
-
If that doesn't help I would suggest you do a fresh reinstall of the system, drivers, software.... based on your previous issues...
So, after my PS started to crash when generating the dense point cloud, I went crazy today.
I reinstalled Win 7 completely from scratch.
Installed the newest NVIDIA drivers... and PS did not crash any more, BUT I got the GPU error when generating depth maps. :'( OK, downgraded the NVIDIA drivers to the former version... and finally everything is working. ;D
The latest version of the GPU gadget you were using was infected with a bug... ;)
Uii, sorry about that; I do have several anti-virus, anti-malware, etc. programs running... They did not find it... ai ai ai...
Cheers
-
Thanks all for providing the reference information.
Is there anything else the GPU could provide in terms of quality of the geometric modeling? :)
-
I just had the same problem where the Dense Cloud generation was taking too long, and when I checked, VOILA, the GPU was not being used. So, as you guys suggested, I rolled back to an earlier version of the driver and now it works fine :0
Thanks all for the info!
Leo
-
I'm having the same problem. Using GeForce GTX 580. Which old driver should I use? How far back should I go?
Thanks.
-
I went back to driver "GeForce 332.21 Driver WHQL, January 7, 2014" on the NVidia drivers download page and it works fine.
I think the bad one is 340.52, so anything older than that should be fine.
Leo
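Since NVIDIA driver versions sort numerically, checking whether an installed driver predates the problematic 340.52 release is a simple tuple comparison. This is an illustrative helper I wrote for this thread, not an official check, and it only encodes what was reported here (340.52 bad, 332.21 known good):

```python
def predates_bad_driver(version: str, bad: str = "340.52") -> bool:
    """True if an NVIDIA driver version string is older than the
    340.52 release reported in this thread to break OpenCL."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) < as_tuple(bad)

print(predates_bad_driver("332.21"))  # True: the version confirmed working above
print(predates_bad_driver("340.52"))  # False: the problematic release itself
```

Comparing as integer tuples rather than strings avoids surprises like "99.99" sorting after "340.52" lexicographically.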
-
great. thanks.
-
Hi, I've just started using PhotoScan, so please bear with me.
When I'm trying to build a dense point cloud, I can see in the console that it says:
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_MEM_OBJECT_ALLOCATION_FAILURE
GPU processing failed, switching to CPU mode
My initial thought is a driver issue.
Is there anyone here who can recommend an Nvidia driver that works well with PhotoScan?
I have an Nvidia GTX 770 2GB.
Best Regards
Marius Skancke
-
Hello Marius,
I've merged your topic with this one, as it is connected to the latest posts.
Actually, the latest Nvidia 340.52 drivers cause the mentioned OpenCL processing problem, so we recommend rolling back to the previous driver version while we try to find out whether it can be fixed on our side.
-
Thanks Alexey.
Would it be possible to have a pinned post where new drivers are verified as working or not working?
-
Hello everybody,
I've got a problem. I've changed graphics cards and installed a GTX 770 4GB. Yesterday I tried it with a dataset, but I got this message:
CL_MEM_OBJECT_ALLOCATION_FAILURE
GPU processing failed, switching to CPU mode
So the software wasn't able to use the GPU?
Any suggestions? I think I've made a configuration mistake while installing the GPU...
Thanks,
Diego.
-
Hello Diego,
Please roll back from the 340.52 nVidia drivers to the previous version.
-
Hiya,
Are people still getting GPU issues?
I have a new Win 7 build with a 980GTX and the latest drivers (Nov 11) and I'm getting the GPU failed issue.
Cheers
-P
-
Hello pjenness,
Could you please try installing PhotoScan 1.1.0 pre-release (http://www.agisoft.com/forum/index.php?topic=2883.0) and run a short test to check if the problem is solved?
-
Hello pjenness,
Could you please try installing PhotoScan 1.1.0 pre-release (http://www.agisoft.com/forum/index.php?topic=2883.0) and run a short test to check if the problem is solved?
Success, thank you!!
And this is with a 980GTX installed, plus a 970GTX in a ViDock eGPU enclosure as a secondary GPU card (as I share it between a desktop and a mobile MacBook).
Running test scene to benchmark now
Cheers!!
-P