Agisoft Metashape

Agisoft Metashape => General => Topic started by: an198317 on July 08, 2014, 01:04:37 AM

Title: GPU processing during the modeling and settings
Post by: an198317 on July 08, 2014, 01:04:37 AM
Hi all,
I am curious how much the GPU is involved in the modeling process. I can see that the GPU is used for computing during the dense point cloud step, but during mesh generation, which is the most computationally heavy process, I didn't see much GPU involvement. So during which step(s) is the GPU really used for processing?

And also, how should I set up OpenCL to optimize speed? My workstation has 12 CPU cores, and the Quadro 4000 has 8 cores according to PhotoScan's OpenCL interface. Based on the interface's suggestion, will deactivating one CPU core (using 11 of them) make use of all 8 Quadro 4000 cores?

Thanks,
Title: Re: GPU processing during the modeling and settings
Post by: David Cockey on July 08, 2014, 06:19:30 AM
GPU is not used during Align Photos.

GPU is used heavily during Reconstructing depth portion of Build Dense Cloud.

GPU is not used during Build Mesh.

GPU is not used during Build Texture.

With an i7-3770 CPU (4 cores / 8 virtual cores) and a Radeon HD 7770 GPU, best performance is with 6 of 8 "Active CPU Cores". However, during Align Photos 100% of all 8 virtual cores are used.
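The rule of thumb in the posts above (deactivate one CPU core per active GPU so the threads feeding the GPU are not starved) can be sketched as follows. The helper name is hypothetical; PhotoScan makes this suggestion itself in its OpenCL preferences tab.

```python
import os

def suggested_active_cores(num_gpus, total_logical_cores=None):
    """Rule of thumb: reserve one logical CPU core per active GPU,
    keeping at least one core active. (Hypothetical helper.)"""
    if total_logical_cores is None:
        total_logical_cores = os.cpu_count()
    return max(1, total_logical_cores - num_gpus)

# 12 CPU cores and one Quadro 4000 -> 11 active CPU cores
print(suggested_active_cores(1, 12))  # 11
```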
Title: Re: GPU processing during the modeling and settings
Post by: mwittnebel on July 11, 2014, 10:06:16 AM
Is there a possibility that PhotoScan could also use the GPU for more steps than Build Dense Cloud in the near future?

This would be a huge performance upgrade for the software.
Title: Re: GPU processing during the modeling and settings
Post by: an198317 on July 29, 2014, 12:35:42 AM
That's what I would hope PhotoScan Pro can change....
Title: Re: GPU processing during the modeling and settings
Post by: Lambo on July 29, 2014, 08:38:20 PM
I am confused, since I have seen this a couple of times now: people saying that the Build Mesh step is the most computationally heavy step? On my side I keep seeing dense cloud generation as the most intensive. Even using a GPU that is much more powerful than my CPU, it takes much longer than mesh generation.
Am I the only one seeing this?
Leo
Title: Re: GPU processing during the modeling and settings
Post by: David Cockey on July 30, 2014, 12:27:39 AM
The current project has 250 to 750 photos per chunk. The Medium setting for Build Dense Cloud results in 1 million to 7 million points per chunk, and takes considerably longer than Build Mesh.
Title: no activity on my GPU ?!
Post by: Patribus on July 30, 2014, 01:50:33 PM
Hello,

Although the information in the previous posts seems quite clear, there is something I do not understand.

I'm working with an i7-2770 CPU (4 cores / 8 virtual cores) and a GTX 660 graphics card.

First I disabled 1 CPU core. When starting the depth reconstruction, no GPU activity takes place.
OK, so I thought that has to do with the virtual cores, meaning that in practice I have to disable 2 cores. I restarted the depth reconstruction, but GPU activity is still at 0%.

Now, what am I doing wrong, or what did I understand wrong?

Should depth reconstruction and dense point cloud generation both happen on the GPU, or is it only the DPC generation?

If the first case is correct, then why does PS not use the GPU?

thanks for any hint
Title: Re: GPU processing during the modeling and settings
Post by: sitzsack on July 31, 2014, 11:44:07 AM
I would be interested in the same question... I use a GTX 770 though, also an i7.
Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on July 31, 2014, 02:11:44 PM
Hello Patribus,

PhotoScan uses every OpenCL-supported device that is checked in the corresponding tab of the PhotoScan Preferences window.
Note that if you are using Windows Remote Desktop to connect to another machine, the list of OpenCL devices will likely be empty and PhotoScan will not be able to use the GPUs installed on the remote computer.
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on July 31, 2014, 02:16:08 PM
Hello Alexey,

I do indeed use TeamViewer as a remote desktop service, but my OpenCL devices are listed in PS and 'activated' accordingly.
So I would expect them to be used as well.
But my GPU remains unused.

Some other possible reason for this?

PS: just added a screenshot of my preferences.
Title: Re: GPU processing during the modeling and settings
Post by: mobilexcopter on July 31, 2014, 02:18:42 PM
Hello Patribus,

I missed which tool you are using to check GPU usage. Can you please share the method?

Best regards,

Alex
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on July 31, 2014, 02:35:54 PM
Hello Patribus,

I missed which tool you are using to check GPU usage. Can you please share the method?

Best regards,

Alex

I just searched for "GPU monitor" and found some widgets for Windows (7 in my case) which show some (or all) parameters of my graphics card (temperature, usage, fan activity, memory usage, etc.). When I scroll in the browser I can see the GPU activity go up from 0% to 1% or 2%. The rest of the time it's at 0%.
Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on July 31, 2014, 02:39:10 PM
Hello Patribus,

Could you please also check the console output in PhotoScan? If the GPU is used, you'll see lines starting with [CPU] and [GPU] during the depth maps estimation process.
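One quick way to act on this advice is to save the console output to a file and count the [CPU] and [GPU] lines plus any fallback messages. A minimal sketch; the function name is hypothetical, and the line formats are taken from the log excerpts in this thread:

```python
import re

def summarize_depth_log(log_text):
    """Count depth-map tiles processed on GPU vs CPU, plus GPU
    fallbacks, in a saved PhotoScan console log."""
    gpu_tiles = len(re.findall(r"^\[GPU\] estimating", log_text, re.MULTILINE))
    cpu_tiles = len(re.findall(r"^\[CPU\] estimating", log_text, re.MULTILINE))
    fallbacks = log_text.count("GPU processing failed, switching to CPU mode")
    return {"gpu_tiles": gpu_tiles, "cpu_tiles": cpu_tiles,
            "gpu_fallbacks": fallbacks}

# Sample lines in the format PhotoScan prints during depth estimation:
sample = (
    "[GPU] estimating 213x476x96 disparity using 213x476x8u tiles, offset 0\n"
    "GPU processing failed, switching to CPU mode\n"
    "[CPU] estimating 213x476x96 disparity using 213x476x8u tiles, offset 0\n"
)
print(summarize_depth_log(sample))
# {'gpu_tiles': 1, 'cpu_tiles': 1, 'gpu_fallbacks': 1}
```

If `gpu_tiles` stays at zero (or every GPU line is followed by a fallback), the GPU is effectively unused even though it appears enabled in Preferences.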
Title: Re: GPU processing during the modeling and settings
Post by: mobilexcopter on July 31, 2014, 02:44:43 PM
Can you share the widget name?

I suggest you do a Build Dense Cloud on a dataset of 100+ photos with OpenCL enabled and another with it disabled, and compare the times. That is, if you can enable it now.  ;)

If you would like to test the GPU, try the GPU Shark utility. I don't know how exact it is, but it shows extended GPU usage and the temperature rising.  8)

Best regards,

Alex
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on July 31, 2014, 03:01:15 PM
Hello Patribus,

Could you please also check the console output in PhotoScan? If the GPU is used, you'll see lines starting with [CPU] and [GPU] during the depth maps estimation process.

Yes, the lines are present.
Just strange that no activity appears in the GPU monitor.
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on July 31, 2014, 03:16:23 PM
Can you share the widget name?
Well, they are all called GPU Monitor, so it's a bit difficult to distinguish them.

One is attached, the other is called GPU_Meter_V2.4 (too big to attach)

That is, if you can enable it now.  ;)

Hehe, I do not know why, it works now... Good so!
Title: Re: GPU processing during the modeling and settings
Post by: mobilexcopter on July 31, 2014, 03:25:21 PM
Thanks, will check them out.

Title: Re: GPU processing during the modeling and settings
Post by: mobilexcopter on July 31, 2014, 03:47:57 PM
I did check it out and it works for me.

GPU usage actually moves from 1% to max, but the utility has a 1-second refresh, so it is not too accurate. Try doing the Dense Cloud with OpenCL enabled and then disabled; you will see the difference.  :)

Best regards,

Alex

 
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on August 01, 2014, 08:43:16 PM
Hello Patribus,

Could you please also check the console output in PhotoScan? If the GPU is used, you'll see lines starting with [CPU] and [GPU] during the depth maps estimation process.

Yes, the lines are present.
Just strange that no activity appears in the GPU monitor.

Hello Alexey,

The other day I just searched quickly for the [GPU] lines; today I had a closer look, and there seems to be an error in the GPU process.

Code:
....
timings: rectify: 0.016 disparity: 0.593 borders: 0.015 filter: 0.156 fill: 0
[GPU] estimating 213x476x96 disparity using 213x476x8u tiles, offset 0
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_OUT_OF_RESOURCES
GPU processing failed, switching to CPU mode
[CPU] estimating 213x476x96 disparity using 213x476x8u tiles, offset 0
timings: rectify: 0.016 disparity: 0.764 borders: 0 filter: 0.062 fill: 0
[CPU] estimating 296x587x96 disparity using 296x587x8u tiles, offset -33
timings: rectify: 0.031 disparity: 0.593 borders: 0.078 filter: 0.047 fill: 0
[GPU] estimating 375x567x96 disparity using 375x567x8u tiles, offset -15
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_OUT_OF_RESOURCES
GPU processing failed, switching to CPU mode
[CPU] estimating 375x567x96 disparity using 375x567x8u tiles, offset -15
timings: rectify: 0.032 disparity: 0.781 borders: 0.047 filter: 0.109 fill: 0
[CPU] estimating 509x532x96 disparity using 509x532x8u tiles, offset -30
timings: rectify: 0.078 disparity: 0.874 borders: 0.047 filter: 0.094 fill: 0
[GPU] estimating 464x623x96 disparity using 464x623x8u tiles, offset -23
ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_OUT_OF_RESOURCES
GPU processing failed, switching to CPU mode
[CPU] estimating 464x623x96 disparity using 464x623x8u tiles, offset -23
timings: rectify: 0.109 disparity: 1.093 borders: 0 filter: 0.031 fill: 0
[CPU] estimating 453x528x96 disparity using 453x528x8u tiles, offset -22
...

Do you recognize this?

Best regards
Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on August 04, 2014, 12:22:22 PM
Hello Patribus,

Could you please provide the full log related to the depth maps generation? Please also specify whether this problem is reproducible on any project with any reconstruction settings.

We can suggest running some OpenCL tests; for example, GPU Caps should have such functionality.
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on August 04, 2014, 02:23:51 PM
Hi Alexey,

please find attached the full report of the very short processing run.

The GPU error appears every time (i.e. in all PS projects).

Also, the GPU Caps software and another OpenCL benchmark I downloaded both crash on start.

There seems to be something more fundamentally wrong on the GPU side, although I have the newest drivers.

I suppose this is also related to the problems I had in the past with the GPU options in PS, i.e. PS crashed each time I tried to access the settings.

Well, I'll also do some research to see if I can find something.
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on August 08, 2014, 01:47:53 PM
Any new ideas about what it could be?

best regards
Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on August 08, 2014, 01:53:34 PM
Hello Patribus,

In a similar topic (http://www.agisoft.ru/forum/index.php?topic=561.msg14253#new) it has been reported that the latest nVidia drivers cause this problem, so you should probably roll back to the previous driver version.
Title: Re: GPU processing during the modeling and settings
Post by: mobilexcopter on August 08, 2014, 02:06:16 PM
If that doesn't help, I would suggest a fresh reinstall of the system, drivers, and software... based on your previous issues...

The latest version of the GPU gadget you were using was infected with a bug...  ;)

Title: Re: GPU processing during the modeling and settings
Post by: Patribus on August 08, 2014, 05:42:58 PM
If that doesn't help, I would suggest a fresh reinstall of the system, drivers, and software... based on your previous issues...

The latest version of the GPU gadget you were using was infected with a bug...  ;)

Yes, I would very much like to do that, but this workstation is my working PC for almost everything, so reinstalling everything would be a lot of stress. Mmmm, maybe I'll find time in the next few weeks.

I'll try it with the downgrade of the drivers.

Cheers
Title: Re: GPU processing during the modeling and settings
Post by: 4xdrones on August 11, 2014, 12:23:42 PM
Hi All!

I would like to ask: how fast is it compared with using the CPU alone for the dense point process? 1x or 10x?

:)

Cheers,
Eric
Title: Re: GPU processing during the modeling and settings
Post by: Lambo on August 11, 2014, 07:58:20 PM
In my case (I am using a GTX 560 video card, which is not even close to being the fastest card) and a quad-core CPU, it takes around 4 to 5 times longer to do the dense cloud with only the CPU instead of the GPU.
Some people have reported more than 10 times faster processing in other threads, especially with one of the higher-end video cards.
Leo
Title: Re: GPU processing during the modeling and settings
Post by: mrb on August 11, 2014, 09:15:35 PM
I only notice a difference when building the depth maps. Building the dense point cloud still takes a huge amount of time regardless of whether the GPU is being used:

from the log:

finished depth reconstruction in 1397.45 seconds
Device 1 performance: 83.8203 million samples/sec (CPU)
Device 2 performance: 368.878 million samples/sec (GeForce GTX 590)
Device 3 performance: 401.292 million samples/sec (GeForce GTX 590)
Total performance: 853.991 million samples/sec

That was for 375 depth maps at Medium Quality on a chunk with 430 cameras.

On another scene with 810 cameras and 730 depth maps at Medium Quality:

Device 1 performance: 104.977 million samples/sec (CPU)
Device 2 performance: 639.976 million samples/sec (Tesla k20c)
Depth maps calculated in 38 minutes

Generating the dense cloud itself still takes the most time - sometimes in excess of 60 hours (at high quality).
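The per-device throughput lines quoted above can be parsed to see how much of the total each device contributes during depth reconstruction. A minimal sketch, assuming the log format shown in this thread (the function name is made up):

```python
import re

def device_shares(log_text):
    """Parse 'Device N performance' lines and return each device's
    share of the total throughput, as (name, fraction) pairs."""
    devices = re.findall(
        r"Device \d+ performance: ([\d.]+) million samples/sec \(([^)]+)\)",
        log_text,
    )
    total = sum(float(rate) for rate, _ in devices)
    return [(name, round(float(rate) / total, 3)) for rate, name in devices]

# The GTX 590 example from the post above:
log = (
    "Device 1 performance: 83.8203 million samples/sec (CPU)\n"
    "Device 2 performance: 368.878 million samples/sec (GeForce GTX 590)\n"
    "Device 3 performance: 401.292 million samples/sec (GeForce GTX 590)\n"
)
for name, share in device_shares(log):
    print(f"{name}: {share:.1%}")
# CPU: 9.8%
# GeForce GTX 590: 43.2%
# GeForce GTX 590: 47.0%
```

In that configuration the CPU contributes under 10% of the depth-map throughput, which matches the observation that the GPU speedup only shows up in this phase.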
Title: Re: GPU processing during the modeling and settings
Post by: Patribus on August 12, 2014, 01:36:23 AM
If that doesn't help, I would suggest a fresh reinstall of the system, drivers, and software... based on your previous issues...

So, after my PS started to crash when generating the dense point cloud, I went crazy today.
I reinstalled Win 7 completely from scratch.

Installed the newest NVIDIA drivers... and PS did not crash any more, BUT I got the GPU error when generating depth maps.  :'( OK, so I downgraded the NVIDIA drivers to the former version... And finally everything is working.  ;D

The latest version of the GPU gadget you were using was infected with a bug...  ;)

Uii, sorry about that; I do have several anti-virus, anti-malware, etc. programs running... They did not find it... ai ai ai...

Cheers
Title: Re: GPU processing during the modeling and settings
Post by: 4xdrones on August 12, 2014, 04:28:51 AM
Thanks all for providing the reference information.

One more thing: could the GPU also provide anything in terms of quality of the geometric modeling? :)
Title: Re: GPU processing during the modeling and settings
Post by: Lambo on August 12, 2014, 09:44:59 AM
I just had the same problem where Dense Cloud generation was taking too long, and when I checked, voilà, the GPU was not being used. So, as you guys suggested, I rolled back to an earlier version of the driver and now it works fine :)
Thanks all for the info!
Leo
Title: Re: GPU processing during the modeling and settings
Post by: DCK on August 25, 2014, 11:46:11 PM
I'm having the same problem. Using GeForce GTX 580. Which old driver should I use? How far back should I go?

Thanks.
Title: Re: GPU processing during the modeling and settings
Post by: Lambo on August 26, 2014, 09:55:28 AM
I went back to driver "GeForce 332.21 Driver WHQL, January 7, 2014" on the NVidia drivers download page and it works fine.
I think the bad one is 340.52, so anything older than that should be fine.
Leo
Title: Re: GPU processing during the modeling and settings
Post by: DCK on August 27, 2014, 01:55:00 AM
great. thanks.
Title: Not using GPU when building dense point cloud
Post by: mskancke on August 27, 2014, 02:07:40 PM
Hi, I've just started using PhotoScan, so please bear with me.

When I try to build the dense point cloud, I can see in the console that it says:

ocl_engine.cpp line 231: clEnqueueWriteBuffer failed, CL_MEM_OBJECT_ALLOCATION_FAILURE
GPU processing failed, switching to CPU mode

My initial thought is that it's a driver issue.

Is there anyone here who can recommend a Nvidia driver that works well with Photoscan?

I have Nvidia GTX770 2GB,


Best Regards

Marius Skancke
Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on August 27, 2014, 02:13:40 PM
Hello Marius,

I've merged your topic with this one, as it is connected to the latest post.

Actually, the latest Nvidia driver, 340.52, causes the mentioned OpenCL processing problem, so we recommend rolling back to the previous driver version while we try to find out whether it can be fixed on our side.
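Assuming 340.52 is the first affected release (as reported in this thread), a minimal version check might look like the sketch below. Whether later driver releases are also affected is unknown at this point, so treating everything at or above 340.52 as suspect is only a conservative assumption, and the function name is hypothetical:

```python
def driver_is_affected(version, first_bad="340.52"):
    """Compare a dotted Nvidia driver version string against the
    first version reported broken in this thread. Conservatively
    flags that version and anything newer."""
    def as_tuple(v):
        return tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(first_bad)

print(driver_is_affected("340.52"))  # True
print(driver_is_affected("332.21"))  # False
```

The tuple comparison avoids the classic string-comparison pitfall where "340.9" would sort after "340.52".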
Title: Re: GPU processing during the modeling and settings
Post by: mskancke on August 27, 2014, 02:39:52 PM
Thanks Alexey.

Would it be possible to have a pinned post where new drivers are verified as working or not working?
Title: GPU processing failed CL_MEM_OBJECT_ALLOCATION_FAILURE
Post by: Diego ROn on September 04, 2014, 10:27:11 AM
Hello everybody,
I've got a problem. I changed graphics cards and installed a GTX 770 4GB. Yesterday I tried it with a dataset, but I got this message:
CL_MEM_OBJECT_ALLOCATION_FAILURE
GPU processing failed, switching to CPU mode
So the software wasn't able to use the GPU?
Any suggestions? I think I made a configuration mistake while installing the GPU…

Thanks,

Diego.
Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on September 04, 2014, 11:06:25 AM
Hello Diego,

Please roll back from the 340.52 nVidia drivers to the previous version.
Title: Re: GPU processing during the modeling and settings
Post by: pjenness on November 16, 2014, 04:57:24 AM

Hiya

Are people still getting GPU issues?

I have a new Win 7 build with a GTX 980 and the latest drivers (Nov 11), and I'm getting the GPU failed issue.

Cheers

-P

Title: Re: GPU processing during the modeling and settings
Post by: Alexey Pasumansky on November 16, 2014, 08:42:57 AM
Hello pjenness,

Could you please try installing PhotoScan 1.1.0 pre-release (http://www.agisoft.com/forum/index.php?topic=2883.0) and run a short test to check if the problem is solved?
Title: Re: GPU processing during the modeling and settings
Post by: pjenness on November 16, 2014, 08:53:17 AM
Hello pjenness,

Could you please try installing PhotoScan 1.1.0 pre-release (http://www.agisoft.com/forum/index.php?topic=2883.0) and run a short test to check if the problem is solved?

Success, thank you!!

And this is with a GTX 980 installed, plus a GTX 970 in a ViDock eGPU enclosure as a secondary GPU card (as I share it between my desktop and a mobile MacBook).

Running a test scene to benchmark now.

Cheers!!

-P