Author Topic: How does Metashape decide which cameras to use for texturing?  (Read 1084 times)

jenkinsm

  • Jr. Member
  • **
  • Posts: 72
How does Metashape decide which cameras to use for texturing?
« on: January 31, 2022, 01:03:20 PM »
I found this note from Alexey in an old post from 2013, and I was wondering if anyone has more information about how Metashape chooses which cameras to use for texturing:


"For texturing PhotoScan uses cameras that are looking in the direction parallel to the surface normal in the area being textured."

My question is: What happens if two (or more) cameras are looking in the direction parallel to the surface normal?
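Metashape doesn't publish its exact selection and blending rule, but one plausible way to picture the quoted behaviour is ranking each candidate camera by how well its viewing direction lines up with the (inward) surface normal of the face being textured. This is only an illustration of that geometry, not Metashape's actual algorithm; the camera names and vectors below are made up:

```python
import math

def alignment_score(view_dir, surface_normal):
    """Cosine of the angle between the camera's viewing direction and the
    inward surface normal: 1.0 = camera looks straight at the face,
    0.0 = camera grazes the face edge-on."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(v):
        return math.sqrt(dot(v, v))
    inward = [-c for c in surface_normal]
    return dot(view_dir, inward) / (norm(view_dir) * norm(inward))

# Hypothetical setup: a road face whose normal points straight up (+Z),
# one nadir iPhone view and one oblique GoPro view.
road_normal = (0.0, 0.0, 1.0)
cameras = {
    "iphone_nadir": (0.0, 0.0, -1.0),   # looking straight down at the road
    "gopro_oblique": (0.7, 0.0, -0.7),  # roughly 45-degree oblique view
}
best = max(cameras, key=lambda name: alignment_score(cameras[name], road_normal))
print(best)  # the nadir view scores ~1.0, the oblique view ~0.71
```

If two cameras score identically under a rule like this, the blending mode chosen in Build Texture (e.g. Mosaic vs. Average) would decide how their contributions are combined, so in practice exact ties just blend.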

In my case, I captured a road using an iPhone (the high-quality dataset) and a 360 GoPro (the reference dataset for GPS and image alignment).

I wasn't expecting to use the 360 photos for mesh and texture generation, but it turns out that the 360 cam contributes to the tree meshes in areas that were out of frame on the iPhone, so I'd like to use them.

So what I need to know is how to create the textures in such a way that the road and surroundings are textured from the iPhone images, while the treetops are textured from the GoPro.

My assumption is that I will have to mask out the bottom half of the GoPro images, but I would love to hear any insights into how Metashape handles this.
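For what it's worth, if you go the masking route, Metashape can import per-photo masks from image files (the Import Masks command matches masks to photos by filename template). As a rough illustration, here is a stdlib-only script that writes a grayscale PNG mask with the top half white and the bottom half black; the filename, dimensions, and the assumption that black = excluded are placeholders you'd adapt to your photos and import settings (Metashape lets you invert masks on import if the convention is the other way around):

```python
import struct
import zlib

def write_half_mask(path, width, height):
    """Write an 8-bit grayscale PNG: white (255) on the top half,
    black (0) on the bottom half, intended as a per-photo mask
    (assumed convention: black = masked out, invert on import if needed)."""
    def chunk(tag, data):
        # PNG chunk: length, tag, payload, CRC over tag+payload.
        return (struct.pack(">I", len(data)) + tag + data
                + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

    # Each scanline is prefixed with filter byte 0 (no filtering).
    raw = b"".join(
        b"\x00" + (b"\xff" if y < height // 2 else b"\x00") * width
        for y in range(height)
    )
    # IHDR: width, height, bit depth 8, color type 0 (grayscale),
    # default compression/filter, no interlace.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", ihdr)
           + chunk(b"IDAT", zlib.compress(raw))
           + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

# Placeholder size; use your GoPro photo resolution in practice.
write_half_mask("gopro_mask.png", 64, 32)
```

You would generate one such mask per 360 photo (or a single shared mask, since the horizon sits at the same row in every equirectangular frame) and import it before rebuilding the texture.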