General / Cameras from PhotoScan to Blender
« on: January 31, 2017, 05:59:20 PM »
PhotoScan not only manages to automagically determine camera positions, it also saves them in an XML format together with some calibration and region info, e.g.:
Code: [Select]
[...]
<sensors>
<sensor id="0" label="EOS 700D (50 mm)" type="frame">
<resolution width="3476" height="5208"/>
<property name="focal_length" value="50"/>
<property name="fixed" value="false"/>
<calibration type="frame" class="adjusted">
<resolution width="3476" height="5208"/>
<fx>12203.080961106312</fx>
<fy>12203.080961106312</fy>
<cx>1738</cx>
<cy>2604</cy>
</calibration>
</sensor>
</sensors>
[...]
<cameras>
<camera id="0" label="CAM01.tiff" sensor_id="0" enabled="true">
<transform>8.6286025325444304e-01 2.2784429461927105e-01 4.5117531045384701e-01 -3.4971964005367551e+00 -1.4353836822127128e-01 9.6634264388175029e-01 -2.1349152550863368e-01 2.1643585846070095e+00 -4.8463276839484676e-01 1.1945238392378640e-01 8.6652305668855178e-01 7.4260627770935783e-01 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 1.0000000000000000e+00</transform>
</camera>
<camera id="1" label="CAM02.tiff" sensor_id="0" enabled="true">
<transform>7.6467032539386148e-01 2.3526678856041383e-01 5.9994069012071138e-01 -4.4155923884498929e+00 -9.9125727675139899e-02 9.6283739634956111e-01 -2.5123343388902708e-01 2.3044878071764003e+00 -6.3675221521006109e-01 1.3264119417159839e-01 7.5957417678184158e-01 1.3527794460798552e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 1.0000000000000000e+00</transform>
</camera>
[...]
</cameras>
[...]
<region>
<center>-4.4272670928260860e-01 1.1203212821847490e+00 6.3181234130290411e+00</center>
<size>2.7849106232325234e+00 1.8806472222010293e+00 1.1353885094324743e+00</size>
<R>-2.6297166709341913e-01 -9.6314773970068590e-01 -5.6500741725820057e-02 -7.7401942195951801e-01 1.7564821840744785e-01 6.0831047812752914e-01 -5.7596860742193368e-01 2.0370109199449452e-01 -7.9168556156133685e-01</R>
</region>
[...]
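The <calibration> block is where the lens settings asked about at the end will come from; a minimal sketch of pulling it out with Python's ElementTree (the file name is just a placeholder):
Code: [Select]
import xml.etree.ElementTree as ET

root = ET.parse('project.xml').getroot()  # placeholder name for the exported XML
calib = root.find('./chunk/sensors/sensor/calibration')
fx = float(calib.find('fx').text)   # focal length in pixels
cx = float(calib.find('cx').text)   # principal point in pixels
cy = float(calib.find('cy').text)
width = float(calib.find('resolution').get('width'))
height = float(calib.find('resolution').get('height'))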
How would one use this information to replicate the camera setup in Blender, so that an object exported from PhotoScan can be rendered from multiple cameras corresponding to the original ones?
Reading the data into Blender is simple enough, e.g.:
Code: [Select]
import numpy as np

transformations = []
for element in root.findall('./chunk/cameras/camera'):  # root: the parsed XML, as above
    # <transform> holds 16 space-separated values: a row-major 4x4 camera pose in chunk coordinates
    transformation = element.find('transform').text
    matrix = np.reshape(np.matrix(transformation), (4, 4))
    transformations.append(matrix * global_transformation)  # global_transformation: defined elsewhere
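One detail to get right (stated here as an assumption): the <transform> appears to be the camera pose in chunk coordinates using the usual computer-vision camera frame (X right, Y down, Z looking forward), while a Blender camera looks down its local -Z axis with +Y up. Under that assumption the two frames differ by a 180° rotation about the camera's local X axis, applied in this sketch (the helper name blender_world_matrix is made up):
Code: [Select]
import math
from mathutils import Matrix

# 180° about local X flips Y and Z, taking the CV camera frame to Blender's.
CV_TO_BLENDER = Matrix.Rotation(math.pi, 4, 'X')

def blender_world_matrix(m):
    """m: one of the 4x4 numpy matrices collected above (camera pose)."""
    return Matrix(m.tolist()) * CV_TO_BLENDER  # use '@' instead of '*' in Blender 2.80+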
However, creating cameras in Blender requires explicit positions and Euler angles, e.g.:
Code: [Select]
bpy.ops.object.camera_add(location=(x, y, z), rotation=rot)
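Given such a matrix, the explicit location and Euler angles can be read off with mathutils; a sketch building on the blender_world_matrix helper above (alternatively, one could just assign obj.matrix_world after adding the camera):
Code: [Select]
import bpy

for i, matrix in enumerate(transformations):
    world = blender_world_matrix(matrix)
    loc = world.to_translation()
    rot = world.to_euler('XYZ')  # new objects use XYZ Euler rotation by default
    bpy.ops.object.camera_add(location=loc, rotation=rot)
    bpy.context.object.name = 'CAM%02d' % (i + 1)  # label to match the source images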
How do you figure out that data (and the lens parameters) from the information provided by PhotoScan?
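For the lens, the calibration already gives the focal length in pixels (fx), so the horizontal field of view is 2 * atan(width / (2 * fx)); in this particular export cx and cy sit exactly at the image centre, so no lens shift is needed. A sketch (function name made up), which also matches the render resolution to the photos:
Code: [Select]
import math
import bpy

def apply_calibration(cam, fx, width, height):
    # Field of view across the image width, from the pinhole model;
    # equivalently: cam.data.lens = fx * cam.data.sensor_width / width
    cam.data.sensor_fit = 'HORIZONTAL'
    cam.data.angle = 2.0 * math.atan(width / (2.0 * fx))
    # Render at the photo resolution so frames line up with the originals.
    scene = bpy.context.scene
    scene.render.resolution_x = int(width)
    scene.render.resolution_y = int(height)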