Hello fellow Metashape users,
I'm dealing with a persistent issue in Metashape Professional 2.1.0 while processing a drone dataset of over 50,000 images captured with my Mavic 3 Enterprise. The images carry highly accurate RTK positioning and include a mix of low-altitude and handheld photos (4-5 thousand) alongside a large number of oblique and nadir aerial shots (around 50k). I consistently process at the highest and ultra settings, yet I keep running into a recurring problem with the depth maps.
Problem Description:
Throughout the dataset, the depth maps at one very specific location, associated with images that overlap significantly, come out black or nearly black. Curiously, when I isolate this area (a pier) in a separate project with a smaller dataset but the same images, the results look normal. The model generally looks good at the alignment stage but falters during depth map generation. The images whose depth maps come out black are the same ones that fail to align during the initial alignment pass, yet end up with a lot of valid matches afterwards.
My Workflow:
- Verify image accuracy and quality.
- Create camera groups for all clusters of photos from one location / one flight / one type (low altitude, high altitude, nadir, etc.)
- Add all images to one chunk or just smaller sub-chunk regions and align with the following settings:
- General Alignment Settings:
- Accuracy: Highest
- Generic preselection: Yes
- Reference preselection: Disabled (I disabled this because the dataset mixes low-altitude and handheld photos with many oblique aerial photos. My reasoning was that these images need to match against the aerial photos, and the positional distance between them might prevent that if reference preselection were enabled. However, I suspect this may also contribute to the issue, as I've noticed some matches between completely unrelated photos across the entire island.)
- Reset current alignment: No
- Advanced Alignment Settings:
- Key point limit: 90,000
- Tie point limit: 6,000
- Apply masks to: None
- Exclude stationary tie points: Yes
- Guided image matching: No
- Adaptive camera model fitting: Yes
- Remove all poor tie points.
- Identify and realign any photos that didn't align properly initially. (I rarely needed this step before, but now many images that should clearly align fail to do so.)
- Either subdivide the chunk into smaller segments or use the Block Model approach to generate all depth maps once before building smaller meshes.
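For reference, the alignment part of the workflow above can be sketched with the Metashape Python API. This is only a sketch transcribing the GUI settings I listed; the `downscale` values follow the usual mapping of the accuracy presets (0 = Highest for matching), and the tie-point filter criterion and threshold are my own illustrative choices, not something Metashape prescribes:

```python
import Metashape

chunk = Metashape.app.document.chunk

# Match photos with the settings from the GUI description above.
chunk.matchPhotos(
    downscale=0,                   # Accuracy: Highest
    generic_preselection=True,
    reference_preselection=False,  # disabled, as described above
    keypoint_limit=90000,
    tiepoint_limit=6000,
    guided_matching=False,
    reset_matches=False,
)
chunk.alignCameras(adaptive_fitting=True, reset_alignment=False)

# Remove poor tie points, e.g. by reprojection error
# (the 0.5 threshold is just an example value).
f = Metashape.TiePoints.Filter()
f.init(chunk, criterion=Metashape.TiePoints.Filter.ReprojectionError)
f.removePoints(0.5)
```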
Model Building Settings:
- Source data: Depth maps
- Surface type: Arbitrary
- Depth maps quality: Ultra high
- Face count: High
- Advanced:
- Depth filtering: Moderate
- Interpolation: Enabled (default)
- Point classes: All
- Calculate vertex colors: No
- Use strict volumetric masks: No
- Reuse depth maps: No
- Replace default model: No
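The depth map and model settings above translate to roughly the following API calls (again a sketch of my GUI settings, assuming the usual preset mapping where `downscale=1` corresponds to Ultra high depth maps quality):

```python
import Metashape

chunk = Metashape.app.document.chunk

# Depth maps quality: Ultra high, moderate filtering, no reuse.
chunk.buildDepthMaps(
    downscale=1,
    filter_mode=Metashape.ModerateFiltering,
    reuse_depth=False,
)

# Build the mesh from the depth maps with the settings listed above.
chunk.buildModel(
    source_data=Metashape.DepthMapsData,
    surface_type=Metashape.Arbitrary,
    interpolation=Metashape.EnabledInterpolation,
    face_count=Metashape.HighFaceCount,
    vertex_colors=False,
)
```

Inspecting the per-camera depth maps from such a script (e.g. via `chunk.depth_maps`) might also help narrow down which cameras produce the black outputs.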
Interestingly, no matter which settings I adjust, the same images consistently exhibit these issues. The problem persists even when I limit the processing area to just the pier. I also tried removing many pictures from the surrounding areas and aligning only a smaller subset, but the result looks the same.
Attachments:
- Matches on a photo with a black depth map.
- Screenshot of the tie points at the location.
- Screenshot of the model and black depth maps for a specific point.
I hope to gain insights or suggestions on how to resolve this issue, as it appears consistently regardless of the varied settings or the scope of images processed.
Thank you in advance for any help you can provide!