I've read recommendations on this forum to mask photos from model on more than one occasion, but whenever I tried the workflow (align unmasked photos first, build a low/medium quality mesh, and use it to generate masks for the photos), there were enough imprecisions in the masks to scare me away from using this feature. A mask that encompasses some of the background is not the end of the world, but often the mask would clip the tip of the nose, the edge of the ears, etc.
And so, my questions are:
- where do these imprecisions in masking from model come from? Could it be that the masks are generated for undistorted photos, but the resulting mask is displayed over the original photo, which still carries optical distortion? In that case the mask could actually be perfect even though it appears to be off.
- does Photoscan respect photo masks down to pixel precision? Or is there some tolerance threshold that causes PS to look for points even when they fall just slightly beyond the mask edge?
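For what it's worth, here's a rough back-of-the-envelope sketch of the first hypothesis. All the numbers (focal length, k1) are made up for illustration, not taken from Photoscan; it only uses a simple Brown-Conrady radial term to show how far a mask edge drawn in undistorted coordinates could land from the same feature in the original (distorted) photo:

```python
# Hypothetical sketch: how much a mask built in undistorted image space
# might appear shifted when overlaid on the original distorted photo.
# Simple Brown-Conrady radial model, k1 term only: r_d = r_u * (1 + k1*r^2)

def distort(x, y, k1):
    """Map undistorted normalized coords to distorted ones (radial k1 only)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# Assumed example values: 2000 px focal length, mild barrel distortion.
focal = 2000.0
k1 = -0.1

# A point off-center, toward the image corner (normalized coordinates).
xu, yu = 0.4, 0.3
xd, yd = distort(xu, yu, k1)

# Pixel offset between the mask edge (undistorted space) and where the
# feature actually sits in the original photo (distorted space).
dx = (xu - xd) * focal
dy = (yu - yd) * focal
print(f"offset: {dx:.1f}px, {dy:.1f}px")  # → offset: 20.0px, 15.0px
```

Even this mild made-up distortion puts the displayed mask edge tens of pixels away from the feature near the frame edges, which would look a lot like a clipped nose or ear while still being correct internally.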
Cheers,
Andrew