I'm wondering if the camera angle relative to the subject (and thus to the normal of the resulting reconstructed face(s)) makes a significant difference in the quality of the pixels that get mapped onto that face.
For example, say you took photos at 90 degrees to the surface of a wall and modeled that, then repeated the process with the camera at some angle other than 90 degrees to the surface (say, 70 or 60 degrees). Does the software do more interpolation of the texture pixels as the camera angle (relative to the face) moves away from 90 degrees?
I'm just trying to get a sense of how the textures get mapped to the faces, and how to optimize that process during the shoot.
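To put some rough numbers on the intuition behind the question: under pure foreshortening (ignoring lens distortion, focus falloff, and whatever blending the software actually does), the number of sensor pixels covering a patch of the surface scales with the sine of the camera angle to the surface plane, i.e. the cosine of the tilt away from the normal. This is just a sketch of that geometric relationship; the function name here is made up for illustration:

```python
import math

def effective_texel_density(angle_deg: float) -> float:
    """Relative number of sensor pixels covering a unit of surface area,
    normalized to 1.0 for a head-on shot.

    angle_deg is the camera angle relative to the surface plane,
    so 90 = perpendicular, smaller values = more oblique.
    """
    # Foreshortening: the surface patch's projection onto the sensor
    # shrinks by sin(angle), equivalently cos(tilt off the normal).
    return math.sin(math.radians(angle_deg))

for angle in (90, 70, 60, 45, 30):
    print(f"{angle:3d} deg -> {effective_texel_density(angle):.2f}x pixel density")
```

By this simple model, a 60-degree shot still captures about 87% of the head-on pixel density, while a 30-degree shot drops to 50%, so the software has proportionally fewer real pixels to work with and must interpolate more.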