
Author Topic: Clarifications Needed on Multi-Camera System Calibration and Data Interpretation  (Read 693 times)

ManyPixels

  • Newbie
  • Posts: 40
Dear Agisoft Community,

I am currently working on a project involving the calibration of a multi-camera system integrated with a laser scanner. While I have successfully completed the calibration process, I am uncertain how to interpret the calibration results, specifically in the camera calibration pane.

My primary concern revolves around the representation of angles. The system, comprising four cameras and a laser scanner, appears visually correct (see attachment, 0 is the master). However, the lack of reference materials on how angles are displayed in the calibration pane is perplexing. In the Slave Offsets, we see XYZ and OPK. When examining the XYZ translation values, they suggest an unusual orientation of the cameras, seemingly facing towards -Y. This orientation is atypical and raises questions about the accuracy of these readings.

The confusion escalates when delving into the OPK values. It is unclear how these values are applied - whether in the sequence O->P->K or P->O->K, and whether the rotations are about the fixed XYZ axes or about the successively transformed axes (X, Y', Z''). Although a conversion function exists in the Python API, it provides little clarity without a foundational understanding of the initial parameters. Even knowing the initial orientation (where does OPK = 0,0,0 face?) does not resolve the ambiguity...

My ultimate goal is to determine accurate lever arms in XYZ and orientations in quaternion format. This data is essential for correctly positioning the cameras and scans within an E57 file. A significant challenge I face is establishing the laser scan as the reference point. I have aligned it by converting point coordinates into spherical coordinates and interpreting these as a depth map in JPEG format with a visible color scale. This method enables alignment using Ground Control Points (GCPs). However, this approach seems effective only when the laser scan is not set as the master camera. It appears that the alignment process in a multi-camera setup disregards matches in slave cameras.
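For context, the spherical-coordinate step I describe above can be sketched roughly as follows. This is a minimal illustration, not my actual code: the angle conventions (azimuth from +X in the XY plane, elevation from the XY plane toward +Z) and the equirectangular pixel mapping are assumptions, and the function names are hypothetical.

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a Cartesian scan point to (range, azimuth, elevation).

    Convention (an assumption): azimuth measured in the XY plane from +X,
    elevation measured from the XY plane toward +Z.
    """
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)                       # -pi .. pi
    elevation = math.asin(z / r) if r > 0 else 0.0   # -pi/2 .. pi/2
    return r, azimuth, elevation

def to_pixel(azimuth, elevation, width, height):
    """Map the two angles to pixel coordinates of an equirectangular
    depth image (the range value then becomes the pixel intensity)."""
    u = (azimuth + math.pi) / (2 * math.pi) * (width - 1)
    v = (math.pi / 2 - elevation) / math.pi * (height - 1)
    return int(round(u)), int(round(v))
```

With such a mapping, each scan point lands at one pixel of the JPEG depth map, and GCPs marked on that image can be traced back to 3D points.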

As a workaround, I am considering translating and rotating the point cloud during the final E57 file construction. Since the orientation of the point cloud is not a concern for my purposes, this seems viable. However, this again hinges on understanding how OPK angles translate into XYZ rotations and how to apply a correct translation.
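The workaround itself is just a rigid transform applied point by point. A minimal sketch in plain Python (no external libraries; `R` and `t` would come from the calibration once the OPK convention is pinned down):

```python
def rigid_transform(points, R, t):
    """Apply p' = R @ p + t to each 3-D point.

    points: iterable of (x, y, z) tuples
    R: 3x3 rotation matrix as nested lists
    t: translation as a 3-tuple
    """
    out = []
    for p in points:
        out.append(tuple(
            sum(R[i][j] * p[j] for j in range(3)) + t[i]
            for i in range(3)
        ))
    return out
```

During E57 export, each scan's points would be pushed through this transform before being written, instead of storing a pose in the file header.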

I would greatly appreciate any insights or references that could help clarify these issues. Understanding these details is crucial for the success of my project.

Thank you for your assistance.

Paulo

  • Hero Member
  • Posts: 1350
Hello Manypixels,

I think the following extract from the latest user manual will help you in understanding the slave-to-master offsets:

Code: [Select]
Metashape uses the following coordinate system for slave camera offsets:
• X axis points to the right side of the image,
• Y axis points to the bottom of the image,
• Z axis points along the viewing direction of the master camera.
Slave Camera Offset is calculated by the following formula:
Pmaster = Rx(omega) * Ry(phi) * Rz(kappa) * Pslave + T
Where:
Pslave – point coordinates in slave camera coordinate system,
Pmaster – point coordinates in master camera coordinate system,
Rx(omega), Ry(phi), Rz(kappa) – rotation matrices around corresponding axes in clockwise direction,
T – slave camera offset.
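To make the formula concrete, here is a minimal Python sketch of it. Two caveats: the clockwise sense of the rotation matrices is taken literally from the manual text (so these are the transposes of the usual right-handed matrices), and the matrix-to-quaternion conversion is a standard method, not anything Metashape-specific - verify against your own rig before trusting the signs.

```python
import math

def rot_x(a):
    # Clockwise rotation about X, per the manual extract above
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def opk_to_matrix(omega, phi, kappa):
    # Pmaster = Rx(omega) * Ry(phi) * Rz(kappa) * Pslave + T
    return matmul(rot_x(omega), matmul(rot_y(phi), rot_z(kappa)))

def matrix_to_quaternion(R):
    """Return (w, x, y, z) for a 3x3 rotation matrix (standard method)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    if tr > 0:
        s = math.sqrt(tr + 1.0) * 2
        return (s / 4,
                (R[2][1] - R[1][2]) / s,
                (R[0][2] - R[2][0]) / s,
                (R[1][0] - R[0][1]) / s)
    # Fall back to the largest diagonal element for numerical stability
    i = max(range(3), key=lambda k: R[k][k])
    j, k = (i + 1) % 3, (i + 2) % 3
    s = math.sqrt(R[i][i] - R[j][j] - R[k][k] + 1.0) * 2
    q = [0.0] * 4
    q[0] = (R[k][j] - R[j][k]) / s
    q[i + 1] = s / 4
    q[j + 1] = (R[j][i] + R[i][j]) / s
    q[k + 1] = (R[k][i] + R[i][k]) / s
    return tuple(q)
```

With `opk_to_matrix` in hand, the slave camera quaternion for the E57 file is `matrix_to_quaternion(opk_to_matrix(omega, phi, kappa))`, and `T` is the lever arm directly.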
Best Regards,
Paul Pelletier,
Surveyor