Messages - mrohr

1
General / Re: Export Camera positions and calibration for pytorch3d
« on: December 10, 2024, 01:13:18 PM »
Thanks for your responses. The camera calibration parameters (including distortion) are given in the attached XML files in Metashape format.
I found that the main problem was a bad calibration result inside Metashape; I now use a proper calibration target and the error is almost zero.
The distortion could explain the remainder, so I will try exporting the "distortion-free" images from Metashape.
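
(As an alternative to exporting "distortion-free" images from Metashape, the originals could also be undistorted directly with the exported OpenCV calibration. A minimal sketch, assuming the camera matrix quoted in the original post below; the distortion coefficients and the image path are placeholders:)

import cv2
import numpy as np

# Intrinsics from the exported OpenCV calibration (values from the post below).
K = np.array([[1844.21015914974055, 0.0, 611.5],
              [0.0, 1844.21015914974055, 511.5],
              [0.0, 0.0, 1.0]])
# Placeholder: fill in k1, k2, p1, p2, k3 from the exported calibration file.
dist = np.zeros(5)

img = cv2.imread("image_0001.jpg")            # placeholder path
undistorted = cv2.undistort(img, K, dist)     # remove lens distortion
cv2.imwrite("image_0001_undistorted.jpg", undistorted)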

2
General / Export Camera positions and calibration for pytorch3d
« on: December 02, 2024, 07:25:40 PM »
Hello,

I collected a dataset of facial images taken sequentially from different angles and used it to create a model in Metashape. Now I want to compute the reprojection error of a texture that I computed outside of Metashape using the camera positions. For this computation I use PyTorch3D.
The pseudocode for loading the positions and calibration is the following:

import torch
import pytorch3d.utils as p3dutils

# Per-camera (camera-to-world) poses plus the chunk transform (rotation, translation, scale).
# "Metashape XML" is created via Export Cameras.
R_cam, t_cam, R_global, t_global, scale_global = parse_metashape_camera("Metashape XML")
# C_in (3x3 intrinsics): [1.84421015914974055e+03, 0., 6.11500000000000000e+02, 0., 1.84421015914974055e+03, 5.11500000000000000e+02, 0., 0., 1.]
C_in, resolution = parse_opencv_calib("OpenCV XML")

# Compose the chunk transform with the per-camera transforms (still camera-to-world).
# I don't know why, but dividing by scale_global here and multiplying later works for scans that were scaled in Metashape.
t = (torch.matmul(R_global, t_cam) + t_global) / scale_global
R = torch.matmul(R_global, R_cam)

# Invert to world-to-camera, as expected by the OpenCV convention.
R_in_inv = torch.transpose(R, dim0=1, dim1=2)
t_in_inv = -torch.matmul(R, t) * scale_global  # scale_global multiplied back in here

cameras = p3dutils.cameras_from_opencv_projection(R_in_inv, t_in_inv, C_in, resolution)
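
A rough sketch of what a parse_metashape_camera helper along these lines could look like. The tag names assume the usual layout of the Export Cameras XML (a chunk-level <transform> with <rotation>, <translation> and <scale>, plus a per-camera 4x4 row-major <transform>) and may differ between Metashape versions:

import numpy as np
import torch
import xml.etree.ElementTree as ET

def parse_metashape_camera(path):
    chunk = ET.parse(path).getroot().find("chunk")

    # Chunk transform: chunk (local) coordinates -> world coordinates.
    ct = chunk.find("transform")
    R_global = np.array(ct.find("rotation").text.split(), dtype=np.float32).reshape(3, 3)
    t_global = np.array(ct.find("translation").text.split(), dtype=np.float32).reshape(3, 1)
    scale_global = float(ct.find("scale").text)

    # Per-camera transforms: 4x4 row-major camera-to-chunk matrices.
    R_cam, t_cam = [], []
    for cam in chunk.find("cameras").iter("camera"):
        tr = cam.find("transform")
        if tr is None:  # skip cameras that were not aligned
            continue
        T = np.array(tr.text.split(), dtype=np.float32).reshape(4, 4)
        R_cam.append(T[:3, :3])
        t_cam.append(T[:3, 3:4])

    return (torch.from_numpy(np.stack(R_cam)),
            torch.from_numpy(np.stack(t_cam)),
            torch.from_numpy(R_global),
            torch.from_numpy(t_global),
            scale_global)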


I rendered the model from all camera angles and overlaid the renders with the original images.
The render is slightly offset in every view.
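
(One way to separate the renderer from the camera conversion: project a few known world-space points manually with the plain OpenCV pinhole model and compare them with PyTorch3D's screen-space projection. A minimal sketch, assuming X_w is a (P, 3) tensor of world points, e.g. a few model vertices, together with the variables from the snippet above; as far as I understand, transform_points_screen of cameras built by cameras_from_opencv_projection should reproduce OpenCV pixel coordinates:)

import torch

def project_opencv(X_w, R_w2c, t_w2c, K):
    # X_w: (P, 3), R_w2c: (3, 3), t_w2c: (3,), K: (3, 3) -> (P, 2) pixel coordinates
    X_c = X_w @ R_w2c.T + t_w2c     # world -> camera coordinates
    uv = X_c @ K.T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

# For camera i (continuing the snippet above):
# px_manual = project_opencv(X_w, R_in_inv[i], t_in_inv[i].reshape(3), C_in[i])
# px_p3d = cameras[i].transform_points_screen(X_w)[..., :2]
# print((px_manual - px_p3d).abs().max())  # should be close to zero if the conversion is right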

To me it seems like Metashape does something slightly different internally, as the texture alignment with the model inside Metashape looks quite good.
In a different shoot I tried precalibrating the cameras with a checkerboard (that is also where I scaled the scan inside Metashape to match the size of the version without external calibration, hence the scaling in the code above). This precalibration seems to have slightly decreased the error. I also loaded the camera positions into Blender using the .abc format, but the error there seems to be the same.
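
(For reference, this is the kind of checkerboard precalibration I mean; a minimal OpenCV sketch in which the board pattern, square size and image paths are placeholders:)

import glob
import cv2
import numpy as np

pattern = (9, 6)          # inner corners per row/column (placeholder)
square_size = 25.0        # square edge length in mm (placeholder)

# Planar object points of the checkerboard corners in board coordinates.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)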

Can you tell me whether Metashape uses other parameters than I do to texture the model from the camera views? Or is there an existing script that works well that I could use?
It would also be helpful to know whether the camera calibration could be the reason. I haven't used a rigorous calibration procedure so far, because I was told that Metashape calibrates internally anyway.

Best,
Maurice


