

Messages - geo_enth3

1
Hello Alexey,

Ah, now I see; I overlooked this in your initial post. My project is not in local coordinates but uses a projected coordinate reference system (EPSG::31256). What would need to be changed in this case?
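
For reference, a snippet showing the CRS in question (illustrative only; EPSG::31256 is MGI / Austria GK East):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
crs = Metashape.CoordinateSystem("EPSG::31256")   # MGI / Austria GK East
print(chunk.crs)                                  # reports the same projected CRS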

Best,

geo_enth

2
Dear Alexey,

thank you very much for your help and the code! That looks very promising; however, I still have some trouble, as the results are not as expected. Using the "current view" option, for example, does not deliver a correct orthomosaic; it is arbitrarily oriented rather than aligned with the current view (please see screenshots).

Also, the measured coordinates do not match the expected coordinates. I understand most parts of the code; I am only having trouble understanding what the following lines are doing exactly:

Code: [Select]
lf = chunk.crs.localframe(T.mulp(Metashape.Vector([-1000000, -1000000, 0])))
....
proj.matrix = Rot * Metashape.Matrix.Rotation(lf.rotation())

But this code is definitely going in the right direction, so I think a little more guidance will resolve the problem. Thank you very much again!

PS: if needed, I can of course also share my project (with a testing chunk)

3
Hello,

While researching how to transform orthophotos (2D) into real-world coordinates, I stumbled upon the OrthoProjection class, which should contain the information I need (i.e. the transformation matrix that maps the derived orthophoto of a vertical surface into the CRS of the chunk). However, I do not fully understand how to use its transform function: the "source" and "target" parameters must both be of type OrthoProjection, which confuses me. Maybe you can give an example of how to use this function.

I would also appreciate information on the definition of OrthoProjection.matrix. Which transformation does this matrix define, exactly? So far I have had no luck retrieving meaningful results by applying this matrix in my project, maybe because I am misinterpreting it.
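
For reference, this is what I have been inspecting so far (minimal snippet; attribute names as in the API reference):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
proj = chunk.orthomosaic.projection   # Metashape.OrthoProjection
print(proj.crs)                       # coordinate system of the projection
print(proj.matrix)                    # 4x4 matrix whose exact definition I am asking about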

Any help is very much appreciated.

Best,

geo_enth

4
To make my problem clearer, I am providing an example of what I am looking for:

Screenshot A: marker placed on the 3D model
Screenshot B: same marker displayed in the orthophoto

Between A and B there must be a transformation (matrix), which I am trying to access.

I hope this makes my problem clearer.


5
Done

6
Thanks for the example. Could you maybe provide me with the complete code as well?

Best,

geo_enth

7
Hi,

thanks for the quick reply and the help! After doing some research, I am now one step closer to solving my problem:

Within the OrthoProjection class there is also the transform function (see screenshot). As I understand it, this function allows me to transform the 2D ortho coordinates into 3D world coordinates (by applying the transformation matrix you posted in your answer).

In my understanding, the following code should return the 3D world coordinates (in the chunk's CRS) of the upper-left corner of the orthophoto (0, 0):

Code: [Select]
point2D_img = Metashape.Vector([0,0])
source_crs = chunk.orthomosaic.projection.crs
target_crs = chunk.crs
point3D_world = chunk.orthomosaic.projection.transform(point2D_img, source_crs, target_crs)

I am, however, receiving the following type error:
Code: [Select]
TypeError: transform() argument 2 must be Metashape.OrthoProjection, not Metashape.CoordinateSystem
Do you have an idea how to solve this issue?
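
My current guess, based on the error message, is that both "source" and "target" must be OrthoProjection objects rather than plain CoordinateSystem objects. Something along these lines might satisfy the signature (untested sketch; I am assuming OrthoProjection() can be instantiated directly and assigned a crs):

Code: [Select]
point2D_img = Metashape.Vector([0, 0])
source_proj = chunk.orthomosaic.projection   # the orthomosaic's own OrthoProjection
target_proj = Metashape.OrthoProjection()    # assumption: default-constructible
target_proj.crs = chunk.crs                  # wrap the chunk CRS in an OrthoProjection
point3D_world = chunk.orthomosaic.projection.transform(point2D_img, source_proj, target_proj)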

Best,

geo_enth

8
Hi,

I am currently facing the problem that I have to create orthophotos of vertical structures (walls). I also need these orthophotos to be georeferenced, which is why I wanted to ask how and where I can access the transformation parameters from the 2D orthophoto/orthomosaic image coordinates to the 3D real-world coordinates. I know Metashape stores them somewhere, because polygons drawn on the (georeferenced) 3D model are displayed perfectly on the orthophoto, but I don't know how to access them. Ideally the transformation parameters would be accessible via the Python API.

If you have any hints on how to achieve this, I would be very happy.

Best,

geo_enth

9
I solved the issue myself.

Just in case somebody faces the same issue, here is the code that solved the problem explained above:

Code: [Select]
import Metashape
import numpy as np

# camera, R, C and C_camera are assumed to be defined as in my related
# post: R, C = getProjectionPlane(chunk) fits a plane to the sparse
# cloud, and C_camera is the camera center in the project CRS.
pt_2d  = Metashape.Vector([3091, 2034])                      # picked pixel [px]
img_3d = camera.unproject(pt_2d)                             # point on the pixel ray, internal coordinates
ray = chunk.crs.project(chunk.transform.matrix.mulp(img_3d)) - chunk.crs.project(chunk.transform.matrix.mulp(camera.center))
b = ray/ray.norm()                                           # unit direction of the camera ray in the project CRS

C_plane = Metashape.Vector(C)                                # barycenter of the fitted plane
a1 = np.asarray(R).reshape(3,3)[0,0:3]                       # first plane direction vector
a2 = np.asarray(R).reshape(3,3)[1,0:3]                       # second plane direction vector

A = np.transpose(np.array([a1, a2, -b]))                     # linear system: intersect the plane with the image ray
param = np.matmul(np.linalg.inv(A), (C_camera - C_plane))    # solve for the plane/ray parameters

S1 = a1*param[0] + a2*param[1] + C_plane                     # intersection evaluated on the plane
S2 = b*param[2] + C_camera                                   # intersection evaluated on the ray (S1 == S2)

10
Python and Java API / Re: Mesh: Select faces by "Polygon size"
« on: May 25, 2022, 01:40:03 PM »
Thanks Alexey,

I tried exactly this, but it is indeed taking much too long...
Not a big deal; I solved it for my case by adapting the bounding box before generating the mesh.

Cheers!

11
Hi,

So I am trying to compute 3D world coordinates for certain pixels within my aligned photo. However, I do not want to intersect the camera ray with the 3D model (as the pickPoint() function does); instead I want to intersect it with a custom plane (which I have computed from the sparse point cloud). The reason is that the 3D model is sometimes not available or has holes.

For intersecting the camera ray of a certain pixel (X/Y) with the plane, I need to compute the direction vector (camera center to pixel). As I understand it, the upper-left 3x3 submatrix of
Code: [Select]
rotation = np.asarray(segmentCam.transform).reshape(4,4)[0:3,0:3]
contains the rotation matrix that allows transforming the pixel coordinates into 3D world coordinates. To this I then apply the chunk transformation matrix to derive the real-world coordinates:

Code: [Select]
M = np.asarray(chunk.transform.matrix).reshape(4,4)[0:3,0:3]                 # extracting the transformation from internal Metashape to world coordinates
rotation_M = np.matmul(M,rotation)   

and this matrix I finally apply to the internal camera coordinates that I picked (and corrected using the camera calibration parameters):

Code: [Select]
b = np.matmul(np.transpose(rotation_M), b_loc)       
However, the results are not correct, and I strongly suspect that I am misunderstanding the two transformation matrices. Could somebody help me with estimating the direction vector (camera center to pixel)?

Thanks!

Here is the complete code for a better understanding:

Code: [Select]
import Metashape
import numpy as np

segmentCam = chunk.cameras[0]
R, C = getProjectionPlane(chunk)                                             # user-defined: derive plane from sparse point cloud (works fine)
                                                                             # R...3x3 matrix containing the plane-defining direction vectors
                                                                             # C...barycenter of the plane (EPSG:31256)

C_camera = chunk.crs.project(chunk.transform.matrix.mulp(segmentCam.center)) # 3D coordinates of the camera center [m] (EPSG:31256)
row = np.asarray([3000,1000])                                                # image coordinates [px]
b_temp1 = np.append(row,0)                                                   # homogeneous image coordinates [px]
b_temp2 = np.array([segmentCam.calibration.cx, segmentCam.calibration.cy,    # calibration parameters of the camera
                    segmentCam.calibration.f])
b_loc = b_temp1-b_temp2                                                      # image coordinates in the "local" camera coordinate system

rotation = np.asarray(segmentCam.transform).reshape(4,4)[0:3,0:3]            # extracting the camera rotation (3x3 matrix)
M = np.asarray(chunk.transform.matrix).reshape(4,4)[0:3,0:3]                 # extracting the transformation from internal Metashape to world coordinates
rotation_M = np.matmul(M,rotation)                                           # combining the matrices by multiplying
b = np.matmul(np.transpose(rotation_M), b_loc)                               # applying the transformations to the picked camera coordinates
b_norm = b / np.linalg.norm(b)                                               # normalizing (not strictly necessary)

C_plane = Metashape.Vector(C)                                                # barycenter of the plane
a1 = np.asarray(R).reshape(3,3)[0,0:3]                                       # first plane direction vector
a2 = np.asarray(R).reshape(3,3)[1,0:3]                                       # second plane direction vector

A = np.transpose(np.array([a1,a2,-b]))                                       # linear system (intersect plane with image ray)
param = np.matmul(np.linalg.inv(A),(C_camera-C_plane))                       # solve for the plane/ray parameters

S1 = a1*param[0] + a2*param[1] + C_plane                                     # apply the parameters and retrieve the intersection
S2 = b*param[2] + C_camera                                                   # of the image ray with the plane -> S1 == S2
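
For clarity, this is the linear system the last block sets up (my own formalisation of the code above, nothing new): a point on the plane, \(C_{plane} + s\,a_1 + t\,a_2\), must equal a point on the ray, \(C_{camera} + u\,b\), which rearranges to

\[
\begin{bmatrix} a_1 & a_2 & -b \end{bmatrix}
\begin{bmatrix} s \\ t \\ u \end{bmatrix}
= C_{camera} - C_{plane},
\]

so param holds \((s, t, u)\), and S1 and S2 evaluate the same intersection point from the plane side and the ray side, respectively.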

12
Python and Java API / Mesh: Select faces by "Polygon size"
« on: May 17, 2022, 11:41:59 AM »
Dear all,

I was wondering if it is possible to perform something like Gradual Selection -> Polygon size with the Python API as well (please find the corresponding GUI screenshot attached). I would like to remove very large faces from my model, as they are usually erroneous. I didn't find any function in the API docs that would allow me to do this, but maybe somebody already has code to perform a similar operation (a rough sketch of what I mean is below).
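
To make concrete the kind of selection I am after, here is a rough sketch (my own attempt, not the GUI's Gradual Selection; it iterates the mesh in pure Python, so it will presumably be slow on large models):

Code: [Select]
import numpy as np
import Metashape

def selectLargeFaces(model, areaThreshold):
    # select all faces whose triangle area (in internal chunk units)
    # exceeds the given threshold
    coords = [np.asarray(v.coord) for v in model.vertices]
    for face in model.faces:
        p0, p1, p2 = (coords[i] for i in face.vertices)
        area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
        if area > areaThreshold:
            face.selected = True

selectLargeFaces(Metashape.app.document.chunk.model, areaThreshold=0.01)  # threshold value is hypothetical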

Thanks in advance,

geo_enth

13
Python and Java API / Re: gradual selection > model confidence
« on: April 22, 2022, 12:25:59 PM »
Thanks for the hint!

I managed to implement such a function. For anybody interested, I paste it here:

Code: [Select]
def filterModelBasedOnConfidence(model, confidenceThreshold=4):
    # select every face that touches at least one low-confidence vertex,
    # then remove the selected faces
    for face in model.faces:
        for i in face.vertices:
            if model.vertices[i].confidence <= confidenceThreshold:
                face.selected = True
                break  # this face is already selected; move on to the next one
    model.removeSelection()
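
A hypothetical call on the active chunk (assuming a model with vertex confidence has been built):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
filterModelBasedOnConfidence(chunk.model, confidenceThreshold=4)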


14
Python and Java API / gradual selection > model confidence
« on: April 21, 2022, 06:02:51 PM »
Hi,

I was delighted to see that in version 1.8 it is now possible to perform gradual model selection with "confidence" as a parameter.

I wanted to ask if this is also accessible via the Python API (like the dense cloud filtering). I couldn't find it in the Python API manual.

Thanks!

15
I already solved this myself. It's as simple as this:

Code: [Select]
camera.reference.location
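
For context, a minimal usage example (assuming an active chunk with aligned cameras):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
camera = chunk.cameras[0]
print(camera.reference.location)   # source reference coordinates (Vector in the reference CRS)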
