Messages - Tom2L

1
I will. Thanks a lot, as always

2
Hi, I was trying to generate a mission path through the Python API:
Code: [Select]
plan_mission_task = Metashape.Tasks.PlanMission()
# mission parameters
plan_mission_task.sensor = 0  # choose the camera that was used for input photos (DJI for x2 test)
plan_mission_task.min_altitude = 5
plan_mission_task.capture_distance = 30
plan_mission_task.horizontal_zigzags = True
plan_mission_task.min_waypoint_spacing = 1
plan_mission_task.overlap = overlap
plan_mission_task.attach_viewpoints = True
plan_mission_task.safety_distance = 10
home = ref_model.shapes[0]
plan_mission_task.home_point = home.key

But I got:
Code: [Select]
RuntimeError: buildRoundTrip: traversal point not reachable
Any idea what could cause it?
Thanks in advance
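For completeness, here is a minimal sketch of how the configured task would then be run. The apply() call and the project path (reused from the later posts in this thread) are assumptions, not necessarily the exact code that produced the error:
Code: [Select]
import Metashape

doc = Metashape.Document()
doc.open(path=r"C:\Users\Thomas\Documents\projet_x2.psx")  # project path reused from the later posts
chunk = doc.chunk

plan_mission_task = Metashape.Tasks.PlanMission()
plan_mission_task.min_altitude = 5
plan_mission_task.capture_distance = 30
plan_mission_task.safety_distance = 10
# ... remaining parameters as in the snippet above ...
plan_mission_task.apply(chunk)  # assumption: the task is executed on the chunk via the generic apply() call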

3
Python and Java API / Coordinates system with renderImage
« on: July 08, 2022, 12:10:45 PM »
Hi all,
I'm trying to better understand how Metashape deals with computer vision.
I am currently rendering images from custom locations and orientations in order to capture virtual images of my model. The model was created in a georeferenced chunk.
Code: [Select]
position = Metashape.Vector((X, Y, Z))  # placeholder position X, Y, Z in chunk.crs 'WGS 84 + EGM96 height (EPSG::9707)'
orientation = Metashape.Vector((yaw, pitch, roll))  # placeholder orientation vector
# calculate the relevant transform matrix
position = ref_model.crs.unproject(position)  # position in ECEF
orientation = ref_model.crs.geogcs.localframe(position).rotation().t() * Metashape.Utils.ypr2mat(orientation)  # orientation matrix in ECEF
transform = Metashape.Matrix.Translation(position) * Metashape.Matrix.Rotation(orientation)
transform = ref_model.transform.matrix.inv() * transform * Metashape.Matrix.Diag((1, -1, -1, 1))
cameraT = Metashape.Matrix.Translation(transform.translation()) * Metashape.Matrix.Rotation(transform.rotation())  # 4x4 transform matrix: translation and rotation
# capture the image to a new folder
image = ref_model.model.renderImage(cameraT, ref_model.sensors[0].calibration)

But I have a few questions regarding the transformation matrix and the coordinate systems:
- The line 'position = ref_model.crs.unproject(position)' gives me the position in the ECEF CS. Is that the world coordinate system that we use in computer vision?
- What is the transform matrix? Is it like an intermediate computational matrix for cameraT?
- Is the cameraT 4x4 matrix equivalent to the extrinsic camera matrix?
- Does renderImage convert from the camera coordinate system to the image coordinate system? Does it inherit the calibration parameters of the images used to create my model?
- Also, when I import the captured images into a new chunk, they appear in local coordinates. What is the local coordinate system of a chunk? Is it relative to the first imported picture, whose center is taken as the origin? Is it the (3D) camera coordinate system that we use in computer vision?

I hope all of this is clear and thanks for your help,
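For what it's worth, here is how I currently picture the relation between cameraT and the image coordinate system. This is only a sketch of my understanding, assuming cameraT follows the same camera-to-chunk convention as Camera.transform and that Calibration.project maps camera-frame points to pixel coordinates; the test point is a hypothetical placeholder:
Code: [Select]
# project a point of the model into the rendered image (same ref_model / cameraT as above)
point_internal = Metashape.Vector((0, 0, 0))        # hypothetical point in the chunk's internal frame
point_camera = cameraT.inv().mulp(point_internal)   # internal frame -> camera frame (extrinsic part)
if point_camera.z > 0:                              # only points in front of the camera project meaningfully
    uv = ref_model.sensors[0].calibration.project(point_camera)  # camera frame -> pixel coordinates (intrinsic part)
    print(uv)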


4
General / Re: Directional vector from yaw, pitch, and roll?
« on: June 05, 2022, 10:03:37 PM »
Did you manage to get it? I'm currently trying!

5
Did you manage to get it? I'm currently trying.

6
Hi all,
I'd like to add markers in a circle around the bbox center of my model. The circle should turn around the Z axis at a custom altitude.
To do so, I have created a custom function that uses the model center as the center of the circle and the distance between the center and the corners as the radius. It returns a DataFrame containing the X and Y positions of points around the center.
Code: [Select]
import math

import numpy as np
import pandas as pd


def pic_in_circle(center_x, center_y, radius, snapshots_number):
    '''
    :param center_x: X coordinate of the center of the bbox
    :param center_y: Y coordinate of the center of the bbox
    :param radius: distance between the corners and the center of the bbox
    :param snapshots_number: number of markers to create
    :return: df containing the X and Y coordinates of the selected points, with length based on the number of snapshots
    '''
    arr = []
    for i in range(360):
        # math.cos/math.sin expect radians, so convert the degree index first
        x = center_x + radius * math.cos(math.radians(i))
        y = center_y + radius * math.sin(math.radians(i))
        # create an array with all the x and y coordinates of the circle
        arr.append([x, y])
    df = pd.DataFrame(arr)
    df.columns = ['X', 'Y']

    # iloc needs integer positions, so cast the (possibly fractional) step indices to int
    indexes = np.arange(0, len(df), step=(360 / snapshots_number)).astype(int)
    df = df.iloc[indexes]
    return df

I call this function with:
Code: [Select]
circle = pic_in_circle(center.position[0], center.position[1], radius, 50)
where the center of the bbox containing the ref model is Vector([4.521869862724644, 50.6898008733669, 4.897874583534293]).
It creates a df with X and Y positions that are NOT in the same CRS, for example:
             X           Y
0    38.008388 -199.830643
7    30.059443 -178.610007
14   10.125042 -167.834073
...
I now try to add those points to my project with:
Code: [Select]
for index, row in circle.iterrows():
    pos = Metashape.Vector((row['X'], row['Y'], 0))
    circleM = ref_model.addMarker()
    circleM.label = "circle " + str(index + 1)
    circleM.reference.location = ref_model.crs.project(ref_model.transform.matrix.mulp(pos))
    circleM.reference.enabled = True
    print(circleM.reference.location)

But it creates a circle that turns around the Y axis instead of the Z axis. What is weird is that I set the altitude to 0, but it turns into negative and positive values:
Vector([4.521858890254145, 50.68958594593813, 6.024067006404548])
Vector([4.5218386528562995, 50.68962537116644, 18.59051823922667])
Vector([4.521751947695833, 50.68971443077309, 25.186119392769566])

I suppose that my error comes from the crs / transform.matrix conversion, but I'm unable to see what I did wrong. Could anyone help me?
Code: [Select]
circleM.reference.location = ref_model.crs.project(ref_model.transform.matrix.mulp(pos))
Thanks a lot,
Thomas
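For reference, here is how I currently understand the two directions of the conversion used in the suspect line, based on the renderImage snippets elsewhere in this thread (a sketch only; the point values are placeholders): transform.matrix maps the chunk's internal frame to ECEF, and crs.project / crs.unproject maps between ECEF and geographic coordinates.
Code: [Select]
# internal -> geographic (what the marker loop above does)
pos_internal = Metashape.Vector((0, 0, 0))  # hypothetical point in the chunk's internal frame
loc_geo = ref_model.crs.project(ref_model.transform.matrix.mulp(pos_internal))

# geographic -> internal (the opposite chain, for a point already expressed in ref_model.crs)
pos_geo = Metashape.Vector((4.521869862724644, 50.6898008733669, 0))  # bbox center from above, altitude 0
pos_internal_back = ref_model.transform.matrix.inv().mulp(ref_model.crs.unproject(pos_geo))
print(loc_geo, pos_internal_back)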

7
Python and Java API / Re: yaw pitch roll from quaternions
« on: May 25, 2022, 03:35:17 PM »
UPDATE:
I managed to make it work.
My custom function is:
Code: [Select]
# create function to transform quaternion into rotation matrix ---------------------------------------------------------
def euler_from_quaternion(x, y, z, w):
    """
    Convert a quaternion into euler angles (roll, pitch, yaw)
    roll is rotation around x in radians (counterclockwise)
    pitch is rotation around y in radians (counterclockwise)
    yaw is rotation around z in radians (counterclockwise)
    """
    t0 = +2.0 * (w * x + y * z)
    t1 = +1.0 - 2.0 * (x * x + y * y)
    roll_x = math.atan2(t0, t1)*(180/math.pi)

    t2 = +2.0 * (w * y - z * x)
    t2 = +1.0 if t2 > +1.0 else t2
    t2 = -1.0 if t2 < -1.0 else t2
    pitch_y = math.asin(t2)*(180/math.pi)

    t3 = +2.0 * (w * z + x * y)
    t4 = +1.0 - 2.0 * (y * y + z * z)
    yaw_z = math.atan2(t3, t4)*(180/math.pi)

    return -yaw_z, roll_x, -pitch_y   # in degrees; in fact this corresponds to alpha, nu, kappa (weird)

and my code to capture the image is:
Code: [Select]
doc = Metashape.Document()
doc.open(path=r"C:\Users\Thomas\Documents\projet_x2.psx")
chunk = doc.chunk
position = Metashape.Vector((4.522160467, 50.68980499, 132.3963475)) # position vector X, Y, Z in chunk.crs 'WGS 84 + EGM96 height (EPSG::9707)'
orientation = Metashape.Vector(euler_from_quaternion(0.3773313583,0.3340252627,0.5725150359,0.646741605)) # orientation vector Yaw, Pitch, Roll in chunk.crs
#calculate relevant transform matrix t
position = chunk.crs.unproject(position)  # position in ECEF
orientation = chunk.crs.geogcs.localframe(position).rotation().t() * Metashape.Utils.ypr2mat(orientation) # orientation matrix in ECEF
transform = Metashape.Matrix.Translation(position) * Metashape.Matrix.Rotation(orientation)
transform = chunk.transform.matrix.inv() * transform * Metashape.Matrix.Diag((1, -1, -1, 1))
cameraT = Metashape.Matrix.Translation(transform.translation()) * Metashape.Matrix.Rotation(transform.rotation()) # 4x4 transform matrix 3 translations and 3 rotations

image = chunk.model.renderImage(cameraT, chunk.sensors[0].calibration)
image.save(r"C:\Users\Thomas\Documents\render.jpg")

What is weird is that what the API calls yaw, pitch, roll corresponds to the alpha, nu, kappa of the GUI application.
Anyway, I get the same world transformation matrix in both cases.
Hope it helps someone!

8
Python and Java API / yaw pitch roll from quaternions
« on: May 25, 2022, 12:43:19 PM »
Hi all,
I'm still trying to recreate the Capture Track option of the mission planning in the Metashape Python API. I think I'm very close, but I need to transform the quaternion data from the .path file into yaw, pitch, roll.
I'm able to capture an image of my model using the code below:
Code: [Select]
doc = Metashape.Document()
doc.open(path=r"C:\Users\Thomas\Documents\projet_x2.psx")
chunk = doc.chunk
position = Metashape.Vector((4.522160467, 50.68980499, 132.3963475)) # position vector X, Y, Z in chunk.crs 'WGS 84 + EGM96 height (EPSG::9707)'
orientation = Metashape.Vector((276.698,60.522,0)) # orientation vector Yaw, Pitch, Roll in chunk.crs

#calculate relevant transform matrix t
position = chunk.crs.unproject(position)  # position in ECEF
orientation = chunk.crs.geogcs.localframe(position).rotation().t() * Metashape.Utils.ypr2mat(orientation) # orientation matrix in ECEF
#orientation = chunk.crs.geogcs.localframe(position).rotation().t() * quaternion_rotation_matrix(0.3773313583,0.3340252627,0.5725150359,0.646741605) # orientation matrix in ECEF
transform = Metashape.Matrix.Translation(position) * Metashape.Matrix.Rotation(orientation)
transform = chunk.transform.matrix.inv() * transform * Metashape.Matrix.Diag((1, -1, -1, 1))
cameraT = Metashape.Matrix.Translation(transform.translation()) * Metashape.Matrix.Rotation(transform.rotation()) # 4x4 transform matrix 3 translations and 3 rotations

image = chunk.model.renderImage(cameraT, chunk.sensors[0].calibration)
image.save(r"C:\Users\Thomas\Documents\render.jpg")
Unfortunately, the .path file contains only quaternion info as qX, qY, qZ, qW.

In order to automate my process, I'm trying to convert the quaternions into yaw, pitch, roll, knowing that the Metashape notation for these is [-yaw, pitch, roll] relative to the traditional [roll, pitch, yaw] Euler angles.
I try to do so with a custom function:
Code: [Select]
def euler_from_quaternion(x, y, z, w):
    """
    Convert a quaternion into euler angles (roll, pitch, yaw)
    roll is rotation around x in radians (counterclockwise)
    pitch is rotation around y in radians (counterclockwise)
    yaw is rotation around z in radians (counterclockwise)
    """
    t0 = +2.0 * (w * x + y * z)
    t1 = +1.0 - 2.0 * (x * x + y * y)
    roll_x = math.atan2(t0, t1)*(180/math.pi)

    t2 = +2.0 * (w * y - z * x)
    t2 = +1.0 if t2 > +1.0 else t2
    t2 = -1.0 if t2 < -1.0 else t2
    pitch_y = math.asin(t2)*(180/math.pi)

    t3 = +2.0 * (w * z + x * y)
    t4 = +1.0 - 2.0 * (y * y + z * z)
    yaw_z = math.atan2(t3, t4)*(180/math.pi)

    return -yaw_z, roll_x, pitch_y   # in degrees

print(euler_from_quaternion(0.3773313583,0.3340252627,0.5725150359,0.646741605)) # quaternions corresponding to (276.698,60.522,0)

Unfortunately, I do not manage to get the correct values: I end up with (-83.03244255204716, 60.5215387197955, 1.81473867143059e-05) from my custom function instead of the (-276.968, 60.522, 0.00) that I get from the animation panel of the Metashape GUI application.
I suppose that the pitch and roll values are correct (up to approximation), but my yaw value seems wrong. So, how is yaw computed in the software?
Is there maybe a way to convert the quaternion data to a world rotation matrix without converting it to yaw, pitch, roll?
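Regarding the commented-out quaternion_rotation_matrix(...) call in the snippet above: that helper isn't shown here, but a generic quaternion-to-rotation-matrix conversion would look like the sketch below (standard formula wrapped in a Metashape.Matrix; whether it matches Metashape's quaternion convention and axis order is an assumption I still need to verify).
Code: [Select]
import Metashape

def quaternion_rotation_matrix(x, y, z, w):
    """Standard unit quaternion (x, y, z, w) -> 3x3 rotation matrix (column-vector convention)."""
    return Metashape.Matrix([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])

print(quaternion_rotation_matrix(0.3773313583, 0.3340252627, 0.5725150359, 0.646741605))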
 

9
Amazing man!
As always, thank you for the support you provide!

10
Hi all,
Once again, thanks for the amazing software and the support that you all provide.
I'm actually trying to create the Capture Photos... option of the Plan Mission tool in Python using the Metashape API.
In order to do so, I'd like to export the camera track as a .path file, then use it with the viewpoint and capture viewpoint classes.

Unfortunately, I do not manage to export the camera track as a .path file.
I actually have a Metashape project with an animation generated via the mission planning, which I am trying to export using Python.
My code is:
Code: [Select]
doc = Metashape.Document()
doc.open(path=r"C:\Users\Thomas\Documents\projet_x2.psx")
chunk = doc.chunk
camtrack = Metashape.CameraTrack
camtrack.chunk = chunk
camtrack.save(r"C:\Users\Thomas\Documents\CameraTrack.path")
and I always get the error message
Code: [Select]
TypeError: descriptor 'save' for 'Metashape.Metashape.CameraTrack' objects doesn't apply to a 'str' object
that I can't solve.
In the API reference, the save method is really confusing; it expects a path as a string:
Quote
save(path[, file_format, max_waypoints, projection])
Save camera track to file.
Parameters
• path (string) – Path to camera track file
• file_format (string) – File format. “deduce”: Deduce from extension, “path”: Path, “earth”: Google Earth KML, “pilot”: DJI Pilot KML, “trinity”: Asctec Trinity CSV, “autopilot”: Asctec Autopilot CSV, “litchi”: Litchi CSV
• max_waypoints (int) – Max waypoints per flight
• projection (CoordinateSystem) – Camera track coordinate system.

I also tried loading an existing .path file with:
Code: [Select]
camtrack.load(r"C:\Users\Thomas\Documents\CameraTrack_001.path")
but I get the same error.
Is anyone able to help me with this?
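One thing I notice when re-reading the error: it looks as if save() is being called on the CameraTrack class itself rather than on an instance, so the path string ends up where the instance was expected. A minimal sketch of the instance-based call (whether Metashape.CameraTrack() can be constructed directly like this, rather than obtained from the chunk, is an assumption on my part):
Code: [Select]
camtrack = Metashape.CameraTrack()  # note the parentheses: an instance, not the class object
camtrack.chunk = chunk
camtrack.save(r"C:\Users\Thomas\Documents\CameraTrack.path")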

11
Python and Java API / Re: align model to model.py without GUI
« on: May 17, 2022, 04:45:01 PM »
Thanks a lot for your answer. Unfortunately, I don't quite get it: should I export my models as .ply files and then read them back as numpy arrays?
Similarly to:
Code: [Select]
                    self.chunk.exportModel(path=filename, binary=True,
                                      save_texture=False, save_uv=False, save_normals=False, save_colors=False,
                                      save_cameras=False, save_markers=False, save_udim=False, save_alpha=False,
                                      save_comment=False,
                                      format=Metashape.ModelFormatPLY)
                else:
                    self.chunk.dense_cloud = None
                    for dense_cloud in self.chunk.dense_clouds:
                        if dense_cloud.key == key:
                            self.chunk.dense_cloud = dense_cloud
                    assert(self.chunk.dense_cloud is not None)
                    self.chunk.exportPoints(path=filename,
                                       source_data=Metashape.DenseCloudData, binary=True,
                                       save_normals=False, save_colors=False, save_classes=False, save_confidence=False,
                                       save_comment=False,
                                       format=Metashape.PointsFormatPLY)

            v1 = read_ply(tmp1.name)
            v2 = read_ply(tmp2.name)
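The read_ply helper from the script isn't shown here; as a rough sketch of what I assume it does (reading the exported PLY vertices back as an (N, 3) numpy array), using open3d purely as an example reader, not necessarily what the original script uses:
Code: [Select]
import numpy as np
import open3d as o3d  # assumption: any PLY reader would do, open3d is just an example

def read_ply_vertices(filename):
    """Read the vertex coordinates of a PLY file into an (N, 3) numpy array."""
    pcd = o3d.io.read_point_cloud(filename)
    return np.asarray(pcd.points)

v1 = read_ply_vertices(tmp1.name)
v2 = read_ply_vertices(tmp2.name)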

12
Python and Java API / align model to model.py without GUI
« on: May 17, 2022, 10:37:37 AM »
Hi all,
I'd like to use the align model to model.py script in a larger-scale Python project.
In order to do so, I've extracted the functions from align model to model.py into my script, but I get this error:

Code: [Select]
Traceback (most recent call last):
  File "C:/Users/Thomas/Desktop/pycharm/crash_code.py", line 532, in <module>
    align_two_point_clouds(chunk.models[-1],chunk.models[1], scale_ratio = 0.99358)
  File "C:/Users/Thomas/Desktop/pycharm/crash_code.py", line 27, in align_two_point_clouds
    assert(isinstance(points1_source, np.ndarray) and isinstance(points2_target, np.ndarray))
AssertionError


I'm fairly new to Python, so I don't have enough experience to see what went wrong; that's why I'm asking for help.
Could anyone help me with this?
I've attached the whole code below.
Thanks in advance!
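As a side note on the assertion itself: align_two_point_clouds apparently expects numpy arrays, while chunk.models[...] are Metashape.Model objects. A minimal sketch of pulling a model's vertex coordinates into an (N, 3) numpy array, purely to illustrate the type mismatch (not necessarily how the original script feeds the function, and slow for large meshes):
Code: [Select]
import numpy as np

model = chunk.models[-1]
# vertex coordinates are in the chunk's internal frame
points = np.array([[v.coord.x, v.coord.y, v.coord.z] for v in model.vertices])
print(points.shape, isinstance(points, np.ndarray))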

13
General / Mission planning requirement
« on: May 03, 2022, 03:21:08 PM »
Hi,
I am currently experimenting with the mission planning tool.
Unfortunately, I can't figure out what the requirements to use it are. Do I need the dense cloud? The 3D model? Georeferenced pictures?
So, what do I need to use this tool?
Thanks in advance for your help,
Tom
