You can right-click one of your photos and export diffuse, depth and normal. I usually need more than that, so I run everything through xNormal. Baking some maps in Photoscan and others in different apps usually gives tiny offsets that don't perfectly align in the end, so I would advise against mixing them.
Oh wow, thanks for that!!
I hadn't seen that feature. Results look interesting, will have to try again with intentionally shot photos.
The main caveat with this approach seems to be that it requires using one of the supplied camera positions, so you have to pick a single viewpoint.
Is there a way to, for example, supply a UVed mesh into the project (say an average ground plane, or for a tree trunk a low-poly cylinder wrapped around the space) and extract the same colour, depth and normals from the dense data onto that UVed mesh?
The only way I can think of at the moment is generating the highest-resolution mesh you can handle, retopo and UV it in ZBrush etc., bring it back in and extract. But you always lose some of the detail in the shape edges (bark, rock edge etc.) this way. If we could extract from the depth information, which is per pixel, the result should be more accurate, right? (Though potentially a little noisy, or with holes in places with no data.)
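To make the per-pixel idea concrete, here's a rough sketch of the first step: unprojecting a depth map into 3D points, then measuring displacement against an average ground plane. This is not a Photoscan feature, just a plain pinhole-camera sketch; the function names and the intrinsics (fx, fy, cx, cy) are my own assumptions, and a real export would also need the camera's world pose applied.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Turn a per-pixel depth map into camera-space 3D points
    using a simple pinhole model (intrinsics are assumed known)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coords
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

def bake_height_to_plane(points, plane_z=0.0):
    """Bake per-pixel displacement relative to a flat 'average
    ground plane' at z=plane_z (the UVed-plane idea above)."""
    return points[..., 2] - plane_z
```

Pixels with no data would come through as NaN or zero depth and leave holes, which matches the noise/holes caveat, but there is no remeshing step in between to soften the sharp edges.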
-P