Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Mak11

Pages: [1]
Bug Reports / AMD RDNA2 GPUs compute units
« on: April 06, 2022, 10:39:11 PM »

Not sure if this is an actual bug or a driver issue, but I noticed that the number of compute units reported/detected by Metashape is half of what's actually supposed to be on AMD RDNA2 GPUs (a 6700 XT in my case).
Metashape reports 20 compute units instead of 40.
I'm not even sure if performance is affected at all, by the way.
Windows 10 with the latest Radeon 22.3.1 WHQL drivers.

Edit: Looks like a driver thing. Every other OpenCL app I tested reports the same thing (20 compute units).
My Radeon VII (Vega-architecture GPU) is correctly reported as having 60 compute units. Performance doesn't seem to be affected from what I can see (the 6700 XT is consistently slightly faster than the Radeon VII).
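For context, my guess at the cause: RDNA/RDNA2 pairs two Compute Units into one Work Group Processor (WGP), and the driver appears to report CL_DEVICE_MAX_COMPUTE_UNITS in WGPs, while GCN cards like the Radeon VII report plain CUs. A toy sketch of that counting (my assumption, not anything from Metashape):

```python
# Toy helper illustrating the RDNA WGP vs. CU counting mismatch.
# On RDNA/RDNA2, two Compute Units form one Work Group Processor
# (WGP), and the OpenCL driver seems to report WGPs as "compute units".

def reported_to_cus(reported_units: int, is_rdna: bool) -> int:
    """Convert the driver-reported unit count to physical CUs."""
    return reported_units * 2 if is_rdna else reported_units

# RX 6700 XT (RDNA2): driver reports 20 units -> 40 physical CUs
print(reported_to_cus(20, is_rdna=True))    # 40
# Radeon VII (GCN/Vega): the 60 reported units are already CUs
print(reported_to_cus(60, is_rdna=False))   # 60
```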


Feature Requests / The option to only generate depth maps.
« on: May 24, 2020, 05:26:04 PM »
As the thread title says, it would be great to have the option to generate depth maps separately from the mesh generation.
(Metashape Standard edition)
I often have to generate depth maps for a bigger volume (so that all of the cameras' resolution is used), then manually stop the process, resize the bounding box (to only reconstruct what I need), and finally reconstruct the mesh reusing the depth maps I've previously generated. A separate depth-maps step would eliminate the need to wait in front of the screen until the depth maps are generated and then manually cancel the process.
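For what it's worth, the Pro edition's Python API can already script roughly this workflow (no help for Standard users; the exact calls below are a sketch from memory of the 1.6-era API, not runnable without Metashape Pro, so treat the names as assumptions):

```python
import Metashape  # Pro edition only; not available in Standard

chunk = Metashape.app.document.chunk

# Build depth maps on the full (large) region so every camera's
# resolution is used.
chunk.buildDepthMaps(downscale=1, filter_mode=Metashape.MildFiltering)

# Shrink the bounding box to just the part worth reconstructing
# (factor is illustrative only).
chunk.region.size = chunk.region.size * 0.5

# Mesh from the depth maps already computed, instead of redoing them.
chunk.buildModel(source_data=Metashape.DepthMapsData)
```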




I'm encountering an issue in Metashape 1.6.1 with a project that gets stuck at exactly the same stage each time (no way to finish it or even cancel it without terminating Metashape in Task Manager).

The project seems to switch to using the CPU halfway through mesh reconstruction using depth maps, and nothing happens besides the CPU being stuck at 14% usage, with only one CPU core working for 10 seconds (then another core for 10 s, then another one, etc.).

Same project worked fine in previous 1.6 builds.

UPDATE: Just ran another project and the exact same thing happened. This is while reconstructing on Ultra High settings; no issue on High. It doesn't make sense that the CPU is being invoked halfway through even though the kernels have just been loaded on the GPU (and CPU OpenCL is disabled in the settings tab).



Feature Requests / More control over Normal & AO map baking
« on: January 16, 2020, 05:39:01 PM »
It would be great to have more control over the normal-map baking process, e.g. the ability to tweak the dilation width. Right now the baked maps suffer from seam artifacts (the UV seams are clearly visible on the normal map under certain lighting angles). Also, could we choose the tangent space being used (Mikk/xNormal, Marmoset, Unity, UE4, Maya, etc.)?

The ability to bake bent-normal and height maps would also be great, but more control over the normal (and AO) baking process comes first.
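To illustrate what I mean by dilation width: after baking, texels just outside each UV island should be padded with the nearest island color, so bilinear filtering at the seam doesn't sample the background. A toy sketch of a single dilation pass on a grid (pure illustration, not Metashape's code):

```python
# Toy edge-padding ("dilation") pass on a 2D texture grid.
# None marks background texels; one pass copies a directly adjacent
# baked texel into each neighbouring background texel. Running the
# pass `width` times gives a dilation of `width` texels.

def dilate_once(tex):
    h, w = len(tex), len(tex[0])
    out = [row[:] for row in tex]
    for y in range(h):
        for x in range(w):
            if tex[y][x] is not None:
                continue  # already part of a baked UV island
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and tex[ny][nx] is not None:
                    out[y][x] = tex[ny][nx]
                    break
    return out

tex = [
    [None, None, None],
    [None, "A",  None],
    [None, None, None],
]
print(dilate_once(tex)[0][1])  # "A" -- the island bled one texel outward
```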



General / De-lighting experiment
« on: November 13, 2019, 01:38:58 PM »
Just had a go with the De-Lighter for the first time. I usually do de-lighting manually in Substance Designer/Painter, because my source photos are well lit with no hard shadows. But this time I had some pretty abysmal shots taken in a museum with my smartphone, with the subject lit by three different light sources of varying color temperature, so I decided to give the De-Lighter a go.

The first step, before any photogrammetry work, was to tweak the photos in Lightroom. Once reconstruction was done and the low-poly model was finished and UV-mapped, I first removed shading & AO, then did one pass to remove hard shadows with highlight & color-artifact suppression set to high. The result was relatively good but not perfect, which was expected given the source. I painted the rest out in Substance Painter.




Renders (horribly compressed images sorry):


Windows 10, Metashape 1.5.4 with a Radeon VII on the latest 19.8.1 drivers:

[GPU 1] Using device: AMD Radeon VII (gfx906), 60 compute units, free memory: 12953/16192 MB, OpenCL 2.0
2019-08-20 20:12:28 Warning: CL_DEVICE_GLOBAL_FREE_MEMORY_AMD returned too big free memory! Data[8]=a8,54,ff,0,0,0,0,0 free_mem_size=16733352 global_mem_size=16580608

Warning: CL_DEVICE_GLOBAL_FREE_MEMORY_AMD returned too big free memory! Data[8]=54,99,fe,0,0,0,0,0 free_mem_size=16685396 global_mem_size=16580608

It also happens during the Refine Mesh process. The warning doesn't appear from the get-go, but after a job has been started and finished (or canceled) and a new one is started. Closing & restarting Metashape "fixes" it until it reappears.
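Decoding the Data[8] dump in those warnings shows what the driver actually returned: the eight bytes are just the reported free-memory value (in KiB, since global_mem_size=16580608 KiB matches the "16192 MB" on the device line) in little-endian order, and it exceeds the device's total global memory, which is exactly what the warning flags. A quick sketch (my reading of the log, not Agisoft's code):

```python
# Decode the little-endian byte dump from the driver warning and
# compare it to the reported global memory size (values are in KiB).

def decode_le(byte_csv: str) -> int:
    """'a8,54,ff,0,...' -> little-endian integer."""
    data = bytes(int(b, 16) for b in byte_csv.split(","))
    return int.from_bytes(data, "little")

GLOBAL_MEM_KIB = 16580608  # global_mem_size from the log

for dump in ("a8,54,ff,0,0,0,0,0", "54,99,fe,0,0,0,0,0"):
    free = decode_le(dump)
    print(free, free > GLOBAL_MEM_KIB)
# 16733352 True  (matches free_mem_size in the first warning)
# 16685396 True  (matches free_mem_size in the second warning)
```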


General / Major speed and memory management overhaul needed
« on: July 21, 2018, 12:32:23 PM »
I've recently decided to upgrade my gear and went from a 14.3 MP camera to a 24.2 MP one, and suffice it to say that PhotoScan sure didn't "love" it at all. Neither did my PC or my electricity bill. The processing times are now unbearable (yes, even the alignment process, which can take 40 minutes for 190 16-bit TIFF photos in PS 1.4.2, while the exact same set takes 2 minutes in RC, which doesn't even use the GPU for this stage!).
Reconstruction is now faster compared to last year when using the experimental method, but I now can't do it in ultra-high quality because I run out of memory given the resolution of the cameras, so I will have to use the old method, which can take days now (build an ultra-high dense cloud, then a high-poly mesh...). PhotoScan seriously needs to support out-of-core computation after all these years.
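To put rough numbers on the resolution jump (my own back-of-the-envelope arithmetic, assuming cost scales with processed pixel count, and that each quality step downscales every image dimension by 2x, i.e. Ultra = full resolution, High = 1/4 of the pixels):

```python
# Rough workload arithmetic for depth-map generation. Assumption:
# cost and memory scale with total processed pixels; real scaling
# is messier, but the ratios show why the upgrade hurts.

def processed_mpix(n_photos: float, mpix: float, downscale: int) -> float:
    """Megapixels fed to depth-map generation at a given downscale.

    downscale is the per-dimension factor: 1 = Ultra, 2 = High,
    4 = Medium (so pixel count drops by downscale**2 per step).
    """
    return n_photos * mpix / downscale ** 2

old = processed_mpix(190, 14.3, 1)   # old camera, Ultra quality
new = processed_mpix(190, 24.2, 1)   # new camera, Ultra quality
print(round(new / old, 2))           # ~1.69x more pixels per project
print(processed_mpix(190, 24.2, 2) / new)  # High is 0.25x of Ultra
```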

I just couldn't bring myself to use RealityCapture before because of its horrible UI, serious lack of features compared to PS, horrible pricing, and the fact that it's CUDA-only. But I'm on the verge of grabbing an NVIDIA GPU and a 3/month license just to process some of my scans, given how slow PS has become for me.

I really hope the Agisoft team is working on improving PhotoScan's memory management in particular, because not everybody has 64 GB in their PC (32 GB here, which has become nearly useless with 150+ 24 MP cameras...).

Sorry for the small rant, folks, but PS is such an awesome piece of software, especially in terms of features. A boost in performance for high-res photo sets is really the only thing missing.


P.S. Using chunks can be a band-aid solution, but it's definitely not the easiest or most logical one, especially for single-object scan projects where you simply aim to create a super-high-poly mesh to use for baking.

General / PhotoScan Roadmap
« on: June 07, 2018, 03:23:39 PM »
Hi there Agisoft Team,

I was wondering if you guys would ever communicate PhotoScan's roadmap.
Things like new features currently being worked on, performance improvements, target dates for the release of new builds, etc.


Hi everybody,

As stated in the title: when exactly is OpenCL used during Dense Point Cloud generation?
As far as I can see, it is used at the "beginning", and then the whole "filtering depth maps" stage is done on the CPU. Is that right?
Also, are you guys planning on using OpenCL for more computational tasks in the near future?


Pages: [1]