
Topic: What is best practice if the final mesh has unwanted surroundings next to it?

Steve003

Hi,
Metashape 1.8.2
I had 20 minutes to photograph something I had planned 40 minutes for. Even so, what I have is good and perhaps enough to model from in CAD.

I aligned the photos in PhotoScan, as Metashape rejected 19 of them (Metashape quite often fails on a number of photos; PhotoScan never fails), then imported the project into Metashape 1.8.2
and deleted most of the scene, which was unwanted background.
I also removed the unwanted points visible through the open parts of the object, an old window frame without glass.
The next step I chose was to create the mesh from depth maps, skipping the dense point cloud stage, since it's better at detail and quicker.
The result has some background next to the item that I wish to remove, and the items visible through the window frame have been modelled, despite my having deleted those points!

Should I delete the unwanted mesh adjacent to and touching the window frame and then build the mesh again? But surely that will look at the same data source and put those bits back in.

What are the exact steps to get rid of parts I can only see once the mesh is made, and not have them reappear?

Cheers

Steve003

Steve003

Please, anyone.

What are the exact steps to get rid of parts I can only see once the mesh is made, and not have them reappear?

This is a fundamental thing when using the program, but how is it done?

Steve003

JJ

Sounds like you use Metashape similarly to me: a rough site scan, then a rebuild in CAD and/or another freeform modeller?

My understanding is that the sparse cloud (generated when aligning cameras) will not influence the final mesh or dense cloud at all. AFAICT they are two distinct processes. The sparse cloud exists only to align the cameras so that the next steps can be performed.

So the process is: align the cameras, then optimise them. However few points you're left with will have no effect on the next steps, as long as removing points/optimising does not reduce the number of aligned cameras.

Once aligned, if you generate the mesh from depth maps, the depth maps are created from the photos, not the sparse cloud, so deleting some points will have no effect on the final model.
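
If it helps, that same mesh-from-depth-maps route can be scripted from the Python console (Tools > Run Script). This is only a minimal sketch assuming the Metashape 1.8 Python API; the downscale, filtering and face-count values are illustrative examples, not recommendations:

```python
# Minimal sketch, assuming the Metashape 1.8 Python API.
# Parameter values below are illustrative examples only.
import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# Depth maps are computed from the aligned photos, not from the sparse cloud,
# so points deleted from the sparse cloud do not change this step.
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)

# Build the mesh straight from the depth maps, skipping the dense cloud.
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.HighFaceCount)

doc.save()
```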

If you want to remove parts of the model, you need to do it after the model has been generated.
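
In the GUI that usually means selecting the stray faces (or using Gradual Selection > Connected component size) and deleting them; scripted, something roughly like the sketch below should work. This again assumes the Metashape 1.8 Python API, the method names are as I remember them (double-check the API reference), and the face-count threshold is a made-up example:

```python
# Assumes the Metashape 1.8 Python API; method names and the threshold
# below are from memory / example values, not verified settings.
import Metashape

chunk = Metashape.app.document.chunk

# Drop small disconnected "islands" (floating background fragments)
# whose face count is below the example threshold.
chunk.model.removeComponents(10000)

# Delete whatever faces are currently selected in the model view
# (same effect as deleting the selection in the GUI).
chunk.model.removeSelection()
```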

The only way to limit the model's extent before generation is to rotate/resize the bounding box (the region).
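
Scripted, shrinking the box around its current centre looks roughly like this; again a minimal sketch assuming the Metashape 1.8 Python API, with an arbitrary scale factor:

```python
# Minimal sketch, assuming the Metashape 1.8 Python API.
# The 0.5 scale factor is an arbitrary example.
import Metashape

chunk = Metashape.app.document.chunk
region = chunk.region

# Halve the reconstruction region around its current centre so geometry
# outside the box is ignored when depth maps and the mesh are built.
region.size = region.size * 0.5
chunk.region = region
```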

Users with more experience will hopefully chime in, but this is my understanding.