
Author Topic: Export Camera positions and calibration for pytorch3d  (Read 2598 times)

mrohr

  • Newbie
  • *
  • Posts: 2
Export Camera positions and calibration for pytorch3d
« on: December 02, 2024, 07:25:40 PM »
Hello,

I collected a dataset of facial images from different angles (taken sequentially), which I used to create a model in Metashape. Now I want to compute the reprojection error of a texture that I computed outside of Metashape, using the exported camera positions. For this computation I use pytorch3d.
The pseudocode for loading the positions and calibration is the following:

import torch
import pytorch3d.utils as p3dutils

# Per-camera pose and chunk transform, parsed from the XML created by Export Cameras.
R_cam, t_cam, R_global, t_global, scale_global = parse_metashape_camera("Metashape XML")

# Intrinsics from the OpenCV calibration export, e.g.
# C_in = [1.84421015914974055e+03, 0., 6.115e+02, 0., 1.84421015914974055e+03, 5.115e+02, 0., 0., 1.]
C_in, resolution = parse_opencv_calib("OpenCV XML")

# Apply the chunk transform to the camera-to-world pose. I don't know why, but dividing by
# scale_global here and multiplying again below works for scans that were scaled in Metashape.
t = (torch.matmul(R_global, t_cam) + t_global) / scale_global
R = torch.matmul(R_global, R_cam)

# Invert to get the world-to-camera pose expected by the OpenCV convention.
R_inv = torch.transpose(R, dim0=1, dim1=2)
t_inv = -torch.matmul(R_inv, t) * scale_global  # multiply the scale back in here

cameras = p3dutils.cameras_from_opencv_projection(R_inv, t_inv.squeeze(-1), C_in, resolution)
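
For completeness, parse_metashape_camera is roughly the following (a sketch that assumes the usual layout of the Export Cameras XML, i.e. a 4x4 camera-to-world matrix in each camera's <transform> and a chunk-level <transform> with <rotation>, <translation> and <scale>; the element names should be checked against the actual file):

import numpy as np
import torch
import xml.etree.ElementTree as ET

def parse_metashape_camera(xml_path):
    chunk = ET.parse(xml_path).getroot().find("chunk")

    # Per-camera 4x4 camera-to-world transforms.
    R_cam, t_cam = [], []
    for cam in chunk.find("cameras").iter("camera"):
        node = cam.find("transform")
        if node is None:  # cameras that were not aligned have no transform
            continue
        T = np.array(node.text.split(), dtype=np.float64).reshape(4, 4)
        R_cam.append(T[:3, :3])
        t_cam.append(T[:3, 3:4])

    # Chunk transform: rotation (9 values), translation (3 values) and scale.
    tf = chunk.find("transform")
    R_global = np.array(tf.find("rotation").text.split(), dtype=np.float64).reshape(3, 3)
    t_global = np.array(tf.find("translation").text.split(), dtype=np.float64).reshape(3, 1)
    scale_global = float(tf.find("scale").text)

    to_t = lambda a: torch.tensor(a, dtype=torch.float32)
    return (to_t(np.stack(R_cam)), to_t(np.stack(t_cam)),
            to_t(R_global), to_t(t_global), scale_global)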


I render the model from all camera angles and overlay the renders on the original images.
The renders are slightly off in every case.
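
For reference, the render-and-overlay step is roughly the following (a sketch; "model.obj", the image size (1024, 1224) and original_images are placeholders, and cameras is the object built above):

import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (MeshRasterizer, MeshRenderer, PointLights,
                                RasterizationSettings, SoftPhongShader)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cameras = cameras.to(device)

# Textured mesh exported from Metashape (placeholder path).
mesh = load_objs_as_meshes(["model.obj"], device=device)

# The raster size must match the calibration resolution, here assumed to be (H, W) = (1024, 1224).
raster_settings = RasterizationSettings(image_size=(1024, 1224), blur_radius=0.0, faces_per_pixel=1)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=PointLights(device=device)))

# Render from every camera and blend 50/50 with the matching photos,
# where original_images is an (N, H, W, 3) tensor in [0, 1] on the same device.
renders = renderer(mesh.extend(len(cameras)), cameras=cameras)  # (N, H, W, 4) RGBA
overlay = 0.5 * renders[..., :3] + 0.5 * original_images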

It seems to me that Metashape does something slightly different internally, since the texture alignment with the model inside Metashape looks quite good.
In a different shoot, I tried pre-calibrating the cameras with a checkerboard (that is also where I used scaling inside Metashape, to reach the same scan size as in the version without external calibration; hence the scaling in the code above). This pre-calibration seems to have slightly decreased the error. I also loaded the camera positions into Blender using the .abc format, but there the error seems to be the same.
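
The pre-calibration itself was the usual OpenCV checkerboard routine, roughly like this (a sketch; the board dimensions, square size and image paths are placeholders):

import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)   # inner corners per row/column (placeholder)
SQUARE_SIZE = 0.025   # square edge length in metres (placeholder)

# 3D checkerboard corner coordinates in the board's own frame.
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):  # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)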

Can you tell me whether Metashape uses different parameters than I do when texturing the model from the camera views? Or is there an existing script that works well that I could use?
It would also help to know whether the camera calibration could be the reason. I haven't used a rigorous calibration procedure so far, because I was told that Metashape calibrates internally anyway.

Best,
Maurice



ilia

  • Jr. Member
  • **
  • Posts: 79
Re: Export Camera positions and calibration for pytorch3d
« Reply #1 on: December 05, 2024, 02:57:30 PM »
Were the estimated distortion coefficients not taken into account?
What do the estimated camera calibration parameters look like? And what format does pytorch3d expect for C_in? Does it also take distortion coefficients into account?

James

  • Hero Member
  • *****
  • Posts: 769
Re: Export Camera positions and calibration for pytorch3d
« Reply #2 on: December 06, 2024, 05:36:11 PM »
I can't help with the matrix maths, or the internals of Metashape, and I don't follow what you're doing...

But if I did understand, I might try using File -> Export -> Convert Images in Metashape to create a set of 'input' images without distortion.

You get a kind of 'pinhole equivalent' version of your input images, where the distortion parameters are effectively zeroed and the image is centred on the principal point, ready to compare to model renders.

mrohr

  • Newbie
  • *
  • Posts: 2
Re: Export Camera positions and calibration for pytorch3d
« Reply #3 on: December 10, 2024, 01:13:18 PM »
Thanks for your responses. The camera calibration parameters (including distortion) are given in the attached XML files in Metashape format.
I found that the main problem was a bad calibration result inside Metashape. Now that I use a proper calibration target, the error is almost zero.
The distortion could explain the remainder, so I will try exporting the "distortion-free" images from Metashape.
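
If that is not enough, I could also undistort the photos directly with the exported OpenCV calibration, roughly like this (a sketch; the file and node names are placeholders that depend on the export, and the result is not guaranteed to match Metashape's Convert Images output exactly):

import cv2

# Intrinsics and distortion from the OpenCV calibration XML (node names may differ).
fs = cv2.FileStorage("calibration.xml", cv2.FILE_STORAGE_READ)
camera_matrix = fs.getNode("camera_matrix").mat()
dist_coeffs = fs.getNode("distortion_coefficients").mat()
fs.release()

img = cv2.imread("IMG_0001.jpg")  # placeholder image

# Reuse the original camera matrix so the undistorted image matches the pinhole
# model used for rendering; the principal point is left where the calibration puts it.
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, camera_matrix)
cv2.imwrite("IMG_0001_undistorted.jpg", undistorted)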