
Author Topic: GPU Usage in Background processing mode?  (Read 2612 times)

MikePelton

  • Newbie
  • *
  • Posts: 4
GPU Usage in Background processing mode?
« on: July 28, 2021, 03:12:48 PM »
Hi - we're trying to get to the bottom of close-to-zero GPU usage in Metashape and have stumbled on a possible issue: it looks like the GPU is detected but otherwise ignored when we run mesh and texture building in Background mode. Does that seem possible/likely?

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 14854
Re: GPU Usage in Background processing mode?
« Reply #1 on: July 29, 2021, 05:37:37 PM »
Hello MikePelton,

Do you observe a similar problem with other GPU-supported stages (image matching or depth maps generation, for example)?

The Build Mesh process uses the GPU only when the depth-maps-based mesh generation approach is selected, and only during certain sub-stages. Texture blending is performed on the GPU if there is sufficient VRAM for the required operation.
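
If it helps, you can also check which devices Metashape detects and which are currently enabled from the Console pane. A minimal sketch using the standard Metashape Python module (enumGPUDevices and the gpu_mask bitmask are part of the regular API):

Code:
import Metashape

# List the GPU devices Metashape has detected
for i, device in enumerate(Metashape.app.enumGPUDevices()):
    print(i, device['name'])

# Bitmask of enabled GPUs: bit N corresponds to device N
print("gpu_mask:", Metashape.app.gpu_mask)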

Could you share the processing log related to the procedures in question?
Best regards,
Alexey Pasumansky,
Agisoft LLC

Matt Holmes

  • Newbie
  • *
  • Posts: 1
Re: GPU Usage in Background processing mode?
« Reply #2 on: September 03, 2021, 01:20:44 AM »
We are having similar issues, but have managed to push GPU usage for some calculations up to around 66% (measured with GPU-Z, not Task Manager).

I can run tests using GPU-Z that push that to 100% GPU usage, but they hit the power limit for the card.

We run 2x RTX 3070 cards. I have enabled both GPUs in the program and turned off the option to use the CPU in addition to the GPUs.
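
For reference, that can also be set from the Console pane rather than the GUI; a minimal sketch, assuming the two 3070s are devices 0 and 1 (gpu_mask and cpu_enable are the documented Metashape API attributes):

Code:
import Metashape

# Enable the first two GPU devices: bit 0 = first card, bit 1 = second card
Metashape.app.gpu_mask = 0b11

# Don't use the CPU alongside the GPUs during GPU-accelerated stages
Metashape.app.cpu_enable = False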

Wondering what other techniques or scripts we can run that will force Metashape to use more of the GPUs for calculations.

Looking forward to finding other methods - anything to test.

Thanks for your input

Bzuco

  • Full Member
  • ***
  • Posts: 181
Re: GPU Usage in Background processing mode?
« Reply #3 on: September 03, 2021, 10:33:48 AM »
@Matt Holmes
Tasks on the GPU's CUDA cores are executed in "warps": one warp means a subtask is assigned to a group of 32 CUDA cores and executed, then another subtask goes to the next group of 32 cores, and so on. All the CUDA cores in the GPU are split into SM blocks, and in the latest generations each SM block contains another 4 blocks of CUDA cores. So from the programmer's side it is harder to prepare a task for the GPU than for the CPU. The overall GPU usage you see in GPU-Z depends on:

  • how many tasks there are for all the CUDA cores;
  • how well they can be split into warps/blocks;
  • the algorithm each group is calculating (some groups finish sooner than others because they got an easier subtask at that time);
  • how long the CUDA cores sit idle while data is transferred to and from the GPU.

Different data sets and settings can therefore push the GPU to different usage percentages (a rough sketch of the arithmetic follows below).
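
A back-of-the-envelope sketch of that arithmetic in Python (the SM and core counts are NVIDIA's published RTX 3070 figures; the work item count is purely illustrative):

Code:
# Rough occupancy arithmetic for an RTX 3070
WARP_SIZE = 32         # threads per warp, fixed on NVIDIA GPUs
SM_COUNT = 46          # streaming multiprocessors on an RTX 3070
CORES_PER_SM = 128     # CUDA cores per SM (4 partitions per SM on Ampere)

total_cores = SM_COUNT * CORES_PER_SM        # 5888 CUDA cores
parallel_warps = total_cores // WARP_SIZE    # 184 warps' worth of cores

# If a processing stage launches fewer warps than that, or warps stall
# waiting on memory transfers, GPU-Z shows usage well below 100%.
work_items = 100_000                         # hypothetical subtask count
warps_needed = -(-work_items // WARP_SIZE)   # ceiling division
print(f"{warps_needed} warps of work, {parallel_warps} warp slots by core count")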

If you were running one of the graphics tests in GPU-Z, those load the whole GPU, all of its units. Metashape utilizes only the CUDA cores, so it is less power hungry.

One thing you can test is slowing the whole process down, to confirm that the GPU usage GPU-Z reports really does reach 100%, just only for a fraction of the time: lock the GPU frequencies to some low value (300-600 MHz) using the nvidia-smi.exe utility.
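
For example, a sketch wrapping nvidia-smi from Python (the --lock-gpu-clocks/--reset-gpu-clocks options are documented nvidia-smi flags; running them needs administrator rights):

Code:
import subprocess

# Lock GPU core clocks to 300-600 MHz so short bursts of 100% usage
# stretch out and become visible in GPU-Z
subprocess.run(["nvidia-smi", "--lock-gpu-clocks=300,600"], check=True)

# ...run the Metashape job, then restore the default clock behaviour:
subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)
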
What you can try next is switching from CUDA to OpenCL in Metashape to see whether your GPU usage comes out slightly higher or lower. I think the tweak settings are "main/gpu_enable_cuda" set to "false" and "main/gpu_enable_opencl" set to "true".
When I tried switching to OpenCL on my data sets, if I remember correctly I got a 3-5% boost in the alignment phase but slower depth map generation, and vice versa with CUDA.
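
If you prefer scripting the switch over the Tweaks dialog, something like this should work (a sketch; these are the same tweak keys as above, and I believe Metashape has to be restarted for them to take effect):

Code:
import Metashape

# Switch the GPU backend from CUDA to OpenCL, then restart Metashape
Metashape.app.settings.setValue("main/gpu_enable_cuda", "false")
Metashape.app.settings.setValue("main/gpu_enable_opencl", "true")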

Undervolting the GPU (setting a lower power limit and overclocking at the same time) gives you higher sustained frequencies, lower power consumption and lower temperatures. GPU usage will probably stay about the same, but computation times get slightly faster.
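
The power-limit half of that can also be done with nvidia-smi (a sketch; 180 W is just an example value for a 3070, and the clock offset itself still needs a separate tool such as MSI Afterburner):

Code:
import subprocess

# Cap the board power draw; combined with an undervolt/overclock set in
# a separate tool, the card holds higher clocks per watt (needs admin)
subprocess.run(["nvidia-smi", "--power-limit=180"], check=True)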

I think 66% GPU usage at maximum clock is still a good portion of performance compared to what CPUs can do.
I wonder if the Agisoft programmers will succeed in moving the depth maps filtering process from the CPU to the GPU in the future :D ;)