Hi there!
I have a project where I shot a lot of still photos on set to recreate a dummy version of the set. The set is a large square with small surrounding buildings. Recreating the model in Photoscan has worked fine so far.
Now my idea for the actual shots (most of them locked-off cameras) was to take one frame from each shot, import it into my final Photoscan scene, add the camera data from the film camera (focal length, sensor size, etc.) and have those frames aligned as well. That way I can export the camera positions for all the shots together with my model, build all the additional stuff in relation to this dummy set, and render with the camera I got for each shot.
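For the export step I was planning on something along these lines in the Python console (just a rough sketch; the calibration attribute names are my assumption and may differ between Photoscan versions):

```python
# Rough sketch for the PhotoScan Python console: dump the transform and
# calibration of every aligned camera so they can be rebuilt in Nuke/3D.
# Attribute names (f, cx, cy, k1...) are my assumption and may differ
# between Photoscan versions.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

for camera in chunk.cameras:
    if camera.transform is None:
        continue  # this frame was not aligned
    calib = camera.sensor.calibration
    print(camera.label)
    print(camera.transform)  # 4x4 camera-to-chunk transform
    print("f:", calib.f, "cx:", calib.cx, "cy:", calib.cy)
    print("k1:", calib.k1, "k2:", calib.k2, "k3:", calib.k3)
    print("p1:", calib.p1, "p2:", calib.p2)
```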
But there is one problem I see with this process: camera calibration / lens distortion.
If I get correct camera positions for all my shots, they'll match the undistorted images but not my original footage.
Normally when using camera tracking software I'd use a matching lens distortion node in Nuke to apply the lens distortion to the 3D rendering, so it fits on top of my original footage. For Syntheyes and other apps there are Nuke nodes that use the same algorithm as the tracking software. But how do I apply the Photoscan lens distortion in Nuke? Is there a node or algorithm that gives the same result as the undistort in Photoscan?
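My fallback idea would be to bake the calibration into an STMap and use Nuke's STMap node to redistort the renders. Something like the sketch below, assuming Photoscan uses a Brown-style model with coefficients k1-k3, p1, p2 (that part is a guess on my side; the exact model and conventions may differ, which is exactly what I'd like to confirm):

```python
# Rough sketch: bake a "redistort" STMap from Brown-style coefficients
# (k1, k2, k3, p1, p2) so Nuke's STMap node can warp the undistorted CG
# render back onto the original footage. The distortion model itself is
# my assumption about what Photoscan does internally.
import numpy as np

def distort(xu, yu, k1, k2, k3, p1, p2):
    """Forward model: undistorted normalized coords -> distorted coords."""
    r2 = xu * xu + yu * yu
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xu * radial + p1 * (r2 + 2 * xu * xu) + 2 * p2 * xu * yu
    yd = yu * radial + p2 * (r2 + 2 * yu * yu) + 2 * p1 * xu * yu
    return xd, yd

def undistort(xd, yd, k1, k2, k3, p1, p2, iterations=10):
    """Invert the forward model by fixed-point iteration."""
    xu, yu = xd.copy(), yd.copy()
    for _ in range(iterations):
        xt, yt = distort(xu, yu, k1, k2, k3, p1, p2)
        xu += xd - xt
        yu += yd - yt
    return xu, yu

def redistort_stmap(width, height, f, cx, cy, k1, k2, k3, p1, p2):
    """For every pixel of the distorted plate, the UV (0..1) to sample in
    the undistorted render -- what Nuke's STMap node expects."""
    px, py = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    # pixel -> normalized coords (principal point = image centre + cx/cy)
    xd = (px - width * 0.5 - cx) / f
    yd = (py - height * 0.5 - cy) / f
    xu, yu = undistort(xd, yd, k1, k2, k3, p1, p2)
    u = (xu * f + width * 0.5 + cx) / width
    v = (yu * f + height * 0.5 + cy) / height
    # depending on Nuke's bottom-left origin the V channel may need flipping
    return u, v  # write these into the R and G channels of an EXR

# Example with made-up numbers; real values would come from the Photoscan
# calibration of the film camera.
u, v = redistort_stmap(1920, 1080, f=1800.0, cx=4.2, cy=-3.1,
                       k1=-0.12, k2=0.03, k3=0.0, p1=0.0002, p2=-0.0001)
```

But I don't know whether that actually matches what Photoscan's undistort does, hence the question.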
Using Photoscan for this kind of VFX work (especially for shots with a locked camera) would be really helpful, but if I can't get the resulting model (or more precisely: renderings of this model) to match my original footage, it's pretty much useless.
Any ideas? Thanks!