Messages - geo_enth3

16
General / Re: Boost computation speed for photo subalignment
« on: April 21, 2022, 01:52:35 PM »
Dear all, thanks a lot for all your input. It will take some time to try out and understand these suggestions myself, so I will come back to you later. In the meantime, I really appreciate your help!


17
Hi,

I was wondering if there is a way to read the stored GPS coordinates from the EXIF data. My camera is equipped with a GNSS module and stores approximate camera positions in the EXIF data. I know it is possible to read EXIF data via this command:

Code: [Select]
camera.photo.meta["Exif/GPSLongitude"]
However, this only returns a string of the rounded coordinates (e.g. '16,22.116E'), while I need full precision (preferably as float or double, but this is less important).
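
For reference, this is how I currently turn the rounded string into decimal degrees (just a workaround sketch assuming the value always looks like 'DD,MM.mmmH'; it obviously cannot recover the lost precision):

Code: [Select]
import re

def dm_string_to_decimal(value):
    """Convert an EXIF-style 'DD,MM.mmmH' string (e.g. '16,22.116E') to decimal degrees."""
    match = re.fullmatch(r"(\d+),([\d.]+)([NSEW])", value.strip())
    if match is None:
        raise ValueError("unexpected coordinate format: %r" % value)
    degrees, minutes, hemisphere = match.groups()
    decimal = float(degrees) + float(minutes) / 60.0
    # south and west hemispheres are negative in decimal notation
    return -decimal if hemisphere in "SW" else decimal

print(dm_string_to_decimal(camera.photo.meta["Exif/GPSLongitude"]))  # '16,22.116E' -> 16.3686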

Thanks,

geo_enth

18
General / Boost computation speed for photo subalignment
« on: April 19, 2022, 10:57:54 AM »
Dear Metashape team,

My task is to subalign images to a very big project. Specifically, I want to subalign ca. 100 images (on a daily basis) to a chunk which consists of ca. 27000 photos.

Everything works fine and the photos get subaligned accurately. Here are the main parameters I am using:

Code: [Select]
downscale=4
generic_preselection=True
keypoint_limit=10000
tiepoint_limit=4000
keep_keypoints=True
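
For completeness, this is roughly how I pass them in the Python API (a simplified sketch of my script; new_cameras is a placeholder for the list of the ca. 100 newly added cameras):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# match only what is new, keeping the existing keypoints and matches
# of the ~27000 already aligned photos
chunk.matchPhotos(downscale=4,
                  generic_preselection=True,
                  keypoint_limit=10000,
                  tiepoint_limit=4000,
                  keep_keypoints=True,
                  reset_matches=False)
# new_cameras: list of Metashape.Camera built elsewhere (placeholder)
chunk.alignCameras(cameras=new_cameras, reset_alignment=False)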

However, it takes quite a long time to compute (several hours), which is understandable given the project size. I still wanted to ask if you have any tips for reducing the computation times. Specifically:

a) Could you recommend hardware specifically for the subalignment task? My current setup consists of CPU: AMD EPYC 7302 16-core, GPU: GeForce GTX 1650, RAM: 206GB. I know this is not the ultimate hardware for this task, but observing the task manager during the process shows me that none of my resources are fully used, so would a better CPU/GPU even make sense? If so, is there specific hardware you would recommend for subalignment tasks?

b) How much (approximately) would using externally measured camera coordinates (via a GNSS antenna mounted on the camera, with an accuracy of a few cm) reduce the computation times by activating reference preselection from "Source" (see the sketch below)?
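
For clarity, this is the variant I would switch to in that case (same call as above, just with source preselection enabled; whether generic preselection can then be disabled is an assumption on my side):

Code: [Select]
chunk.matchPhotos(downscale=4,
                  generic_preselection=False,  # possibly no longer needed once coordinates are available
                  reference_preselection=True,
                  reference_preselection_mode=Metashape.ReferencePreselectionSource,
                  keypoint_limit=10000,
                  tiepoint_limit=4000,
                  keep_keypoints=True,
                  reset_matches=False)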

Thank you very much in advance!

geo_enth



19
Feature Requests / Duplicating chunk (only for selected cameras)
« on: April 05, 2022, 04:47:54 PM »
Hi Metashape Team,

I think this topic has been brought up already, but I wanted to emphasize how nice it would be to allow duplicating a chunk for selected cameras only. I think that would save many people a lot of time and nerves.
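
In the meantime I am helping myself with a small Python workaround that copies the whole chunk and then removes everything that was not selected (a rough sketch; it assumes camera labels are unique within the chunk):

Code: [Select]
chunk = Metashape.app.document.chunk
selected_labels = {camera.label for camera in chunk.cameras if camera.selected}

new_chunk = chunk.copy()  # duplicates the full chunk, hence this feature request
new_chunk.remove([camera for camera in new_chunk.cameras
                  if camera.label not in selected_labels])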

Cheers and all the best,

geo_enth3

20
Dear Paulo,

Thank you very much! That works :)


21
Hi Alexey,

my code for creating the dense cloud is the following:

Code: [Select]
chunk = doc.chunks[-1] # selecting the last chunk
task = Metashape.Tasks.BuildDepthMaps(downscale=downScalingDenseMatching, filter_mode=Metashape.AggressiveFiltering)
task["pm_enable"] = False
task.apply(chunk)
chunk.buildDenseCloud()

where the choice of the variable downScalingDenseMatching seems to have no effect. I tried 1, 4, and 16 as values for it, but the result is always "Medium Quality" and "Mild filtering". Printing the task confirms that the parameters are not picked up:

Code: [Select]
>>> task = Metashape.Tasks.BuildDepthMaps(downscale=16, filter_mode=Metashape.AggressiveFiltering)
>>> task
2022-04-04 11:21:16 <BuildDepthMaps(quality = Medium, depth filtering = Mild, PM version)>

What am I missing here?
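
In case it helps to narrow it down: one thing I still want to try is setting the parameters as attributes after construction, in case the constructor keywords are silently ignored (untested sketch):

Code: [Select]
task = Metashape.Tasks.BuildDepthMaps()
task.downscale = 16                                # set as attribute instead of constructor keyword
task.filter_mode = Metashape.AggressiveFiltering
task["pm_enable"] = False
task.apply(chunk)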



All the best

22
Thanks a lot! It now runs on my GPU (but, as you predicted, the computation time has not decreased very much).
I have one more question: I have changed the "downscale" parameter in the BuildDepthMaps function several times (1, 4, 16), but the dense point cloud result was always the same ("Medium" quality in the GUI). Could it be that the downscaling requires a different input in this case?

Here's my code:
Code: [Select]
    task = Metashape.Tasks.BuildDepthMaps(downscale=16, filter_mode=Metashape.AggressiveFiltering)
    task["pm_enable"] = False
    task.apply(doc.chunks[-1])

This results in a dense point cloud with "Medium" quality, while it should be "Lowest", right?

23
Dear Alexey,

Thank you very much for the hint!
Could you also tell me how I can use this tweak in the Python API?

Cheers!

24
Hi, I would really appreciate any information on this issue. I am even willing to change my GPU, but as other operations such as keypoint matching seem to work, I want to make sure that the GPU really is my problem.

Here is the (GPU related) console output for the image matching of a random chunk (which runs smoothly):

Code: [Select]
Found 1 GPUs in 2.045 sec (CUDA: 0 sec, OpenCL: 2.045 sec)
Using device: AMD Radeon HD 7700 Series (Capeverde), 8 compute units, free memory: 974/1024 MB, OpenCL 1.2
  driver version: 3240.6, platform version: OpenCL 2.1 AMD-APP (3240.6)
  max work group size 256
  max work item sizes [1024, 1024, 1024]
  max mem alloc size 652 MB
  wavefront width 64
[GPU] processing 8256x5504 image using 4128x5504 tiles

But during depth map estimation all camera samples are filtered out (which then causes the zero resolution error, I suppose). Here is one example from the filtering output (they all end in "final filtering: 0%"):

Code: [Select]
[GPU 1] Camera 1587 samples after final filtering: 0% = 100% - 1% (not matched) - 18% (bad matched) - 2% (no neighbors) - 37% (no cost neighbors) - 42% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 0% (speckles filtering)
Thanks!

25
Dear Metashape-Team,

When building my dense cloud with the GPU, I always get the "zero resolution" error. As it works flawlessly (but very slowly) on my CPU, I suspect it is caused by my GPU (AMD Radeon HD 7700 Series, discrete). My GPU drivers are up to date. Is there a solution to this problem?

Thanks!

26
Python and Java API / Gradual model selection - Python API
« on: March 28, 2022, 11:56:30 AM »
Dear Metashape Team,

I was wondering if the tool "Gradual Selection" for models is also accessible via the Python API? I could not find such a function in the documentation, but I might have overlooked it. Specifically, I am looking for the "Connected Component Size" criterion to filter my model.
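
The closest function I could spot is Model.removeComponents, though I am not sure it corresponds to the same criterion (untested sketch; the threshold value is a placeholder):

Code: [Select]
chunk = Metashape.app.document.chunk
chunk.model.removeComponents(100)  # drop connected components below the given size; 100 is a placeholder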

Thanks!
