
Author Topic: Deleted tie points still create mesh, how to clean up?  (Read 3980 times)

Martin888

  • Newbie
  • Posts: 2
Deleted tie points still create mesh, how to clean up?
« on: December 29, 2022, 03:48:27 PM »
Call me an idiot, but I spent two days trying to figure out something that should be obvious, yet to me it isn't: deleting tie points via Selection > Delete or Gradual Selection makes no difference when building a new mesh or point cloud; the deleted areas still show up. I understand why: depth maps are used, not tie points. But:

- Is deleting tie points only needed before re-optimizing the cameras? (Should I re-optimize the cameras at all, given that I calibrated my Mavic 2 Pro with a checkerboard?)

- If so, how do I remove the garbage before it gets baked into the mesh?

Masking photos, as suggested here, is not an option: (a) there are too many small creeks, (b) I want the textures, and (c) I don't want holes in my model. It seemed so simple: reflections of trees in the water appear as if they were underground, so the tie points are easy to select and delete.
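
For reference, the same cleanup can also be scripted. Below is a minimal sketch using the Metashape Python API; the class names follow the 2.x API (in 1.x the class is Metashape.PointCloud.Filter) and the reprojection-error threshold is purely illustrative:

Code:
import Metashape

chunk = Metashape.app.document.chunk  # active chunk of the open project

# Gradual selection by reprojection error; in the 1.x API the tie points
# live in chunk.point_cloud and the class is Metashape.PointCloud.Filter.
f = Metashape.TiePoints.Filter()
f.init(chunk, criterion=Metashape.TiePoints.Filter.ReprojectionError)
f.removePoints(0.5)  # illustrative threshold - pick it from the filter histogram

# Re-optimize the alignment after the cleanup
chunk.optimizeCameras()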

Looking forward to any clues, thanks.

« Last Edit: December 29, 2022, 05:57:43 PM by Martin888 »

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 15307
Re: Deleted tie points still create mesh, how to clean up?
« Reply #1 on: December 30, 2022, 03:32:31 PM »
Hello Martin,

Tie point removal has no direct impact on mesh generation. Metashape uses the tie points only to estimate the overlapping image pairs and to get a general estimate of the scene depth from the point distribution.

Could you show the mesh generation result that you need to clean up? Also, have you checked whether depth-maps-based mesh generation works well on your data?
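
For example, a depth-maps-based reconstruction can be scripted roughly like this (a minimal sketch; parameter names follow the current Python API, and downscale=4 corresponds to Medium quality):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Depth maps: MildFiltering keeps more fine detail,
# AggressiveFiltering suppresses more noise.
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)

# Build the mesh from the depth maps, not from the tie points
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 interpolation=Metashape.EnabledInterpolation)

Metashape.app.document.save()  # save the open project
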
Best regards,
Alexey Pasumansky,
Agisoft LLC

Martin888

  • Newbie
  • Posts: 2
Re: Deleted tie points still create mesh, how to clean up?
« Reply #2 on: December 30, 2022, 05:33:33 PM »
Hi Alexey, thank you for your time. Attached are two images: underwater.jpg shows the blobs - these are reflections in the water. The other image shows the result after masking, which works better than expected (masking just a few photos and using strict volumetric masks).
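
As an aside, importing such per-photo masks can also be scripted; a minimal sketch with the Metashape Python API is below (the mask file name pattern is an assumption, and the strict volumetric masking option itself is then ticked in the Build Mesh dialog):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Hypothetical pattern: one mask image per photo, e.g. DJI_0001_mask.png,
# stored next to the source photos; {filename} is substituted per camera.
chunk.importMasks(path="{filename}_mask.png",
                  source=Metashape.MaskSourceFile,
                  operation=Metashape.MaskOperationReplacement,
                  cameras=chunk.cameras)  # or pass only the water frames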

I was just hoping to fill the gaps in the mesh and use the image of the water as a texture (I know water doesn't reconstruct well, but in other areas it is fairly smooth, with a nice-looking texture).

Nowater.jpg shows a medium-quality mesh built from depth maps, and that works great (it's a test dataset with motion blur; it's amazing how well Metashape performs).

If there is a way to texture the water I'm eager to learn it, but it is no longer a deal breaker: I'll create a water plane and add a manual texture.

P.S. When working with a checkerboard-calibrated camera, is it still advisable to refine the lens parameters during Optimize Camera Alignment?
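
For what it's worth, locking a checkerboard pre-calibration before optimization can be sketched with the Metashape Python API as follows (the calibration file name and XML format are assumptions):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Load the externally estimated calibration (file name/format are assumptions)
calib = Metashape.Calibration()
calib.load("mavic2pro_checkerboard.xml", format=Metashape.CalibrationFormatXML)

# Assign it as a fixed pre-calibration so Optimize Cameras leaves it untouched
sensor = chunk.sensors[0]
sensor.user_calib = calib
sensor.fixed_calibration = True

# Optimization then refines the camera poses only
chunk.optimizeCameras()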