
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - maurello

1
I am trying to automate a Reconstruction Uncertainty process via script but failing. Basically I would like to run Reconstruction Uncertainty, set a starting value (e.g. 70), remove those tie points, run Camera Alignment Optimisation, re-run Reconstruction Uncertainty at a value of 60, remove those tie points, re-run Camera Alignment Optimisation, and so forth until Reconstruction Uncertainty reaches a set target value (e.g. 20). The script should also check that the tie point removal step does not remove more than 50% of the total available tie points.

The first challenge is that I do not know how to refer to tie points for this action; it seems the Python library only refers to points in the point cloud. Even then, the script I tried is not working. I have to admit I loaded the 2.1.1 Python API documentation into GPT and used some old forum conversations as a starting point, since I am not proficient with this kind of scripting.

Below is the script I have been trying. My first attempt used chunk.point_cloud and failed with the error: 'Metashape.PointCloud' object has no attribute 'points'. The version below switches to chunk.tie_points (the sparse cloud in the 2.x API), but I would appreciate a check that the logic is right.

import Metashape

# Function to optimize camera alignment and clean up the sparse cloud based on
# reconstruction uncertainty.
# Note: in the Metashape 2.x API the sparse cloud is chunk.tie_points
# (Metashape.TiePoints); Metashape.PointCloud now refers to the dense cloud,
# which is why chunk.point_cloud.points raised "no attribute 'points'".
def optimize_and_clean(chunk, start_uncertainty=70, end_uncertainty=20, step=-1):
    # Ensure tie points are available
    if chunk.tie_points is None or len(chunk.tie_points.points) == 0:
        print("The chunk does not contain tie points.")
        return

    # Iterate over the specified range of reconstruction uncertainty thresholds
    for threshold_RU in range(start_uncertainty, end_uncertainty + step, step):
        pointCount = len(chunk.tie_points.points)

        # Initialize the gradual selection filter on Reconstruction Uncertainty
        f = Metashape.TiePoints.Filter()
        f.init(chunk, criterion=Metashape.TiePoints.Filter.ReconstructionUncertainty)
        f.selectPoints(threshold_RU)
        pointSelected = len([p for p in chunk.tie_points.points if p.selected])

        # Only remove if the selection is less than half of the remaining tie points
        if pointSelected < (pointCount / 2):
            f.removePoints(threshold_RU)
            print("Points removed with RU threshold:", threshold_RU)
        else:
            print("More than 50% of tie points selected at threshold", threshold_RU, "- skipping removal.")
            # Clear the selection so these points are not removed by accident later
            for p in chunk.tie_points.points:
                p.selected = False

        # Optimize camera alignment after the adjustment
        chunk.optimizeCameras()
        print("Camera alignment optimized with RU threshold:", threshold_RU)
       
# Assuming 'chunk' is the currently active chunk
chunk = Metashape.app.document.chunk
optimize_and_clean(chunk)

2
General / Re: How to use ruler for precise measurements?
« on: September 13, 2022, 08:08:16 PM »
Hello maurello,

If you need to take measurements in a specific plane, you can create three markers that lie in that plane: just input their coordinates in the Source values of the Reference pane, check them on, but do not define any projections on the images. Then select those markers in the Model view, right-click and choose the Set Drawing Plane option from the context menu. You will see the plane (shown in a certain color), and new shapes will be drawn and ruler measurements performed based on points placed on that plane.

Let me know if you are able to set up the drawing plane and whether the measurement results on it fit your needs.

This sounds quite tricky. How can I make sure the plane created is parallel to the model?
We recreate underwater models of shipwrecks, so the input material has a lot of noise in it. Furthermore, we do not have a "ground" or any reference point to create the plane from; it would be an arbitrary choice of coordinates in space. Unless I did not understand what you are suggesting.
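
For anyone following along, here is a rough, untested sketch of how I imagine the marker part could be scripted (my assumption, based on the 2.x Python API: addMarker(), marker.reference.location and marker.reference.enabled), with three placeholder coordinates that share the same Z so the drawing plane ends up parallel to the X-Y plane. The Set Drawing Plane step itself I would still do in the Model view as described above.

import Metashape

chunk = Metashape.app.document.chunk

# Placeholder coordinates: three points spanning the area of interest,
# all with the same Z so the plane they define is parallel to X-Y
plane_points = [(0.0, 0.0, 1.5), (10.0, 0.0, 1.5), (0.0, 10.0, 1.5)]

for i, (x, y, z) in enumerate(plane_points, start=1):
    marker = chunk.addMarker()                                 # marker with no image projections
    marker.label = "plane_%d" % i
    marker.reference.location = Metashape.Vector([x, y, z])    # Source coordinates
    marker.reference.enabled = True                            # "check on" in the Reference pane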

3
General / Re: How to use ruler for precise measurements?
« on: August 26, 2022, 09:17:12 AM »
Any help on this one?

4
General / How to use ruler for precise measurements?
« on: August 02, 2022, 06:08:01 PM »
I used scale bars in the last model I created in order to take precise measurements afterwards.

How do you take a measurement in Metashape in a plane, or parallel to a plane (like the plane of the display)? I have not been able to figure that out. I am trying to make engineering drawings of the model. With great care, I placed the model as parallel to the X-Y plane as possible. For the model dimensions, dimensions A, B and C need to be in, or parallel to, the X-Y plane; that is, A and (B & C) are orthogonal, defining the maximum dimensions in the X and Y directions.
But the Metashape "ruler" tool doesn't allow free measurements. It doesn't give you the option to measure distances between arbitrary points (whatever their coordinates) in 3D space, such as points in the X-Y plane. The ruler tool snaps only to model points, which makes it practically impossible to measure distances between points projected onto a preferred plane: the measured distance always sits at some angle to the wanted plane and therefore has a component in the Z direction. So there is always an error caused by that angle (the in-plane distance is roughly the measured distance times the cosine of the angle between it and the X-Y plane).
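
As a workaround, the projection onto the plane can at least be done outside Metashape: note the coordinates of the two model points the ruler snapped to and drop the Z component before computing the distance. A minimal sketch of the arithmetic, with made-up example coordinates:

import math

# Hypothetical coordinates of two points picked on the model (X, Y, Z)
p1 = (12.40, 3.75, 0.62)
p2 = (15.10, 9.20, 1.38)

d_3d = math.dist(p1, p2)          # what the ruler tool reports (full 3D distance)
d_xy = math.dist(p1[:2], p2[:2])  # the same distance projected into the X-Y plane

# The in-plane distance equals the 3D distance times the cosine of the
# angle between the measured segment and the X-Y plane
angle = math.degrees(math.acos(min(1.0, d_xy / d_3d)))
print("3D: %.3f   X-Y plane: %.3f   out-of-plane angle: %.1f deg" % (d_3d, d_xy, angle))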


5
Feature Requests / prioritise nodes based on CPU vs GPU capabilities
« on: July 24, 2022, 05:22:40 PM »
In Network Monitor it is possible to prioritise nodes, but only in a generic way. It would be good to be able to prioritise nodes based on specific workflow tasks: for example, GPU-intensive tasks could be prioritised on nodes with powerful GPUs, while CPU-intensive tasks go to nodes with powerful CPUs.

6
General / Re: Texture problems with large project
« on: July 12, 2022, 08:59:16 AM »
Bzuco, I would appreciate guidance on this topic, since you have lots of experience.
I have now tried to build the texture with size 3840 (the same as the resolution width of the 4K frames) and count 8, because I used Reduce Overlap with 8 as the surface coverage. My assumption is that Metashape retains at least 8 cameras covering any specific surface, so I used 8 as the texture count to "use all the cameras". A bit of a creative assumption; if you know better, let me know.
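
For reference, this is roughly how I would set those values by script if I were automating it (a sketch under my assumptions: texture_size=3840 and page_count=8 mirror the settings above, and the mapping/blending modes are simply the defaults I normally use in the 2.x API):

import Metashape

chunk = Metashape.app.document.chunk

# UV layout: 8 texture pages of 3840 x 3840 px, matching the 4K frame width
chunk.buildUV(mapping_mode=Metashape.MappingMode.GenericMapping,
              page_count=8, texture_size=3840)

# Blend the texture from the cameras onto those pages
chunk.buildTexture(blending_mode=Metashape.BlendingMode.MosaicBlending,
                   texture_size=3840)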

7
General / Re: match photos using only 25-50% of GPU capacity
« on: July 12, 2022, 08:13:24 AM »
Clear, I did some tests with larger images taken as stills and it does indeed use the GPU more. Thanks!

8
Feature Requests / Number of cameras removed during optimisation
« on: July 10, 2022, 04:47:05 PM »
Sometimes when removing tie points through Projection Accuracy or other gradual selection tools, Metashape indicates that some cameras will be removed if we run a camera optimisation after removing the points. It would be good to know how many cameras would be removed before we press Yes. Currently that information is only displayed in the log while the process is running.

9
General / Re: match photos using only 25-50% of GPU capacity
« on: July 10, 2022, 02:23:28 PM »
Hi, how many nodes are you using, and what are the hardware specs and alignment parameters?

The speed of the point matching process depends on the Key point limit value. With lower values (5000-10000) the GPU has very little computation to do, and after each batch of the matching task some calculations are needed on the CPU; during that moment the GPU is not utilised. Therefore you do not see 100% GPU utilisation all the time, but only 25-50% as you described in your case.
But if you increase the Key point limit to e.g. 40000, you will see the GPU spending more time at 100%.

It doesn't make sense to set the Key point limit to a high value if you don't have large photos, e.g. 24 Mpix or more. If you are using 4K photos from a camera, it is almost impossible to increase GPU utilisation during the point matching task. Maybe, if it is possible, the devs could increase the matching batch size for the GPU, but I don't have deeper insight into that process  :)

In other alignment subtasks you can also see low GPU utilisation, which can be caused by low disk read speeds or by the fact that some processes are single-threaded on the CPU, so the GPU is not fed with data fast enough (first phase - detecting points).

Clear, that is indeed my case. At the moment I am processing models by extracting frames from 4K videos. I use the standard settings of 40000 key points and 4000 tie points. In your message you said that if I used 40000 as the key point limit my GPU would run at 100%, but that is not the case. Maybe you meant the tie point limit?

I do not see my disk drive or CPU being a limiting factor at the moment. Disk reads/writes are far below the performance limit, and so is the CPU. The overall feeling is that processing idles at 25-50% utilisation of my server resources, and I do not understand what causes the bottleneck.
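
For completeness, these are the limits I am referring to; by script they would be passed roughly like this (a sketch assuming the 2.x API, with the standard values mentioned above):

import Metashape

chunk = Metashape.app.document.chunk

# Alignment with the standard limits: keypoint_limit caps the features detected
# per image (this is where the GPU does its work), tiepoint_limit caps the
# matches kept per image
chunk.matchPhotos(downscale=1,              # High accuracy
                  keypoint_limit=40000,
                  tiepoint_limit=4000,
                  generic_preselection=True,
                  reference_preselection=False)
chunk.alignCameras()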

10
General / Re: Texture problems with large project
« on: July 06, 2022, 06:29:12 AM »
I wrote two posts in this topic https://www.agisoft.com/forum/index.php?topic=14145.0

If your workflow is only in Metashape, the fastest way to guess the size and count is simply to build one texture at 4K or 8K size, which does not take much time, and then compare the result with the original photos. If the result looks about 4 times less sharp, then increase the count 4x or the resolution 4x... so one quick test, one quick visual comparison, and then one final texture calculation :)

Hey Bzuco, this is an interesting topic for me too and I'd like to understand it better. I usually shoot 4K video underwater (3840x2160) and extract the frames using Metashape. What would be the best texture resolution and count in Metashape to get the most out of the frames?
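
Back-of-the-envelope numbers I used to think about this (my own rough arithmetic, ignoring overlap between cameras and UV packing losses):

# Rough texel-budget comparison for 4K video frames (3840 x 2160 px per frame)
frame_px = 3840 * 2160            # ~8.3 Mpix of source data per frame

page_4k = 4096 * 4096             # ~16.8 M texels per 4K texture page
page_8k = 8192 * 8192             # ~67.1 M texels per 8K texture page

print("frames per 4K page:", page_4k / frame_px)   # ~2.0
print("frames per 8K page:", page_8k / frame_px)   # ~8.1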

11
General / match photos using only 25-50% of GPU capacity
« on: July 05, 2022, 11:03:57 PM »
I noticed that my nodes do not fully utilise their GPUs during certain steps where the GPU is engaged. For example, Match Photos during the alignment process does use the GPU, but only at 25-50% of its capacity. The CPU is also barely used, so it cannot be the bottleneck.

Is this an issue, or is there a setting that would fully utilise the GPUs?

12
Feature Requests / GPU load in Network Monitor
« on: July 05, 2022, 01:37:06 PM »
It would be good to also see the load on the GPU(s) in the Network Monitor app.

13
General / Re: model confidence disappear after textures are processed
« on: June 26, 2022, 10:49:30 PM »
Hello maurello,

I was not able to reproduce the problem on any system, including macOS. Do you still observe the loss of confidence information in the 1.8.4 pre-release version?
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_1_8_4.dmg

I tried with this version and it now works. No idea what the issue was.

14
General / Re: building textures does not use GPU on macOS
« on: June 26, 2022, 10:46:38 PM »
I tried installing MoltenVK, without luck. Metashape still does not use the GPU for texture blending on macOS.

15
I always use Reduce Overlap to simplify the model by reducing the number of cameras covering the same area of my models. However, I noticed that the Reduce Overlap functionality does not take into account the estimated quality of the cameras. This means that, for example, out of 10 cameras Reduce Overlap might disable 8 of them, keeping only 2 enabled, but not necessarily the 2 with the highest quality. As a rule of thumb I try to discard all cameras with an estimated quality below 0.6 (if possible). At the moment this process has to be done manually, which takes a long time when one has thousands of cameras.

It would be great to have added functionality in Reduce Overlap to prioritise the cameras with the highest quality; best would be if the quality value could be input as a preference. The feature would then try to disable only the lowest-quality cameras, keeping the highest-quality ones enabled, based on the input value where possible (a preference, not a hard requirement, to avoid losing part of the model).
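
In the meantime, this is roughly how I imagine scripting the manual step (a sketch under my assumptions: analyzeImages() and the "Image/Quality" meta value are what I believe the 2.x API exposes for Estimate Image Quality):

import Metashape

chunk = Metashape.app.document.chunk
QUALITY_THRESHOLD = 0.6   # my rule-of-thumb cut-off

# Estimate Image Quality for all cameras (same as the GUI command)
chunk.analyzeImages()

disabled = 0
for camera in chunk.cameras:
    quality = float(camera.frames[0].meta["Image/Quality"])
    if quality < QUALITY_THRESHOLD:
        camera.enabled = False
        disabled += 1

print("Disabled %d of %d cameras below quality %.2f" % (disabled, len(chunk.cameras), QUALITY_THRESHOLD))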
