
Author Topic: Coordinates system with renderImage  (Read 1530 times)

Tom2L

Coordinates system with renderImage
« on: July 08, 2022, 12:10:45 PM »
Hi all,
I'm trying to better understand how Metashape relates to the usual computer-vision conventions.
I am currently using renderImage with a custom location and orientation to capture virtual images of my model. The model was created in a georeferenced chunk.
Code:
position = Metashape.Vector([x, y, z])              # placeholder X, Y, Z in chunk.crs 'WGS 84 + EGM96 height (EPSG::9707)'
orientation = Metashape.Vector([yaw, pitch, roll])  # placeholder yaw, pitch, roll angles

# build the camera pose in geocentric (ECEF) coordinates
position = ref_model.crs.unproject(position)  # position in ECEF
orientation = ref_model.crs.geogcs.localframe(position).rotation().t() * Metashape.Utils.ypr2mat(orientation)  # rotation matrix in ECEF

# convert the pose to the chunk's internal coordinate system
transform = Metashape.Matrix.Translation(position) * Metashape.Matrix.Rotation(orientation)
transform = ref_model.transform.matrix.inv() * transform * Metashape.Matrix.Diag((1, -1, -1, 1))
cameraT = Metashape.Matrix.Translation(transform.translation()) * Metashape.Matrix.Rotation(transform.rotation())  # 4x4 camera transform (rotation + translation)

# capture the rendered image
image = ref_model.model.renderImage(cameraT, ref_model.sensors[0].calibration)
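
For context, this is the kind of round-trip check I have in mind for the CRS conversion (just a sketch; the coordinate values are placeholders):
Code:
# sanity check: unproject to geocentric (ECEF) and project back to the chunk CRS
pt_crs = Metashape.Vector([2.35, 48.85, 100.0])   # placeholder longitude, latitude, height in chunk.crs
pt_ecef = ref_model.crs.unproject(pt_crs)         # geocentric X, Y, Z in metres
pt_back = ref_model.crs.project(pt_ecef)          # should match pt_crs up to numerical noise
print(pt_ecef, pt_back)

# local frame at that point: its rotation relates the ECEF axes to the local east/north/up axes
local = ref_model.crs.geogcs.localframe(pt_ecef)
print(local.rotation())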

But I have a few questions regarding the transform matrices and coordinate systems:
- The line 'position = ref_model.crs.unproject(position)' gives me the position in the ECEF coordinate system. Is this the world coordinate system used in computer vision?
- What is the 'transform' matrix here? Is it just an intermediate matrix used to compute cameraT?
- Is the cameraT 4x4 matrix equivalent to the extrinsic camera matrix? (See the small check after this list.)
- Does renderImage convert from the camera coordinate system to the image coordinate system? Does it inherit the calibration parameters of the images used to build my model?
- Also, when I import the captured images into a new chunk, they appear in local coordinates. What is the local coordinate system of a chunk? Is it relative to the first imported picture, whose center is taken as the origin? Is it the 3D camera coordinate system used in computer vision?
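
To make the extrinsics question concrete, this is the comparison I had in mind (just a sketch; it assumes the chunk contains at least one aligned camera):
Code:
# compare my computed cameraT with what Metashape stores for an aligned camera
camera = ref_model.cameras[0]            # assumed to be an aligned camera
if camera.transform is not None:
    print(camera.transform)              # 4x4 camera-to-chunk (internal coordinates) transform
    print(cameraT)                       # the matrix built above for my custom viewpoint

    # the same camera pose expressed in ECEF, then projected back to chunk.crs, for comparison
    T = ref_model.transform.matrix * camera.transform
    print(ref_model.crs.project(T.translation()))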

I hope all of this is clear. Thanks for your help!