Topics - geo_enth3

1
Hello,

When researching how to transform orthophotos (2D) into real-world coordinates, I stumbled upon the OrthoProjection class, which should contain the information I need (i.e. the transformation matrix to transform the derived orthophoto of a vertical surface into the CRS of the chunk). However, I do not fully understand how to use it. The "source" and "target" parameters must both be of type OrthoProjection, which confuses me. Maybe you can give an example of how to use this function.

I would also appreciate information on the definition of OrthoProjection.matrix. Which transformation exactly does this matrix define? So far I have had no luck retrieving meaningful results by applying this matrix in my project, maybe because I am misinterpreting it.
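
Here is a minimal sketch of how I currently assume the pieces fit together (the pixel-to-plane mapping via the orthomosaic extents and the role of projection.matrix are my assumptions, not something I could confirm in the docs):

Code:
import Metashape

chunk = Metashape.app.document.chunk
ortho = chunk.orthomosaic
proj = ortho.projection                       # Metashape.OrthoProjection

# pixel (col, row) -> planar coordinates in the projection plane,
# using the orthomosaic extents and resolution
col, row = 1000, 2000
x = ortho.left + col * ortho.resolution
y = ortho.top - row * ortho.resolution

# assumption: for a planar projection of a vertical surface, proj.matrix maps
# chunk-CRS coordinates onto the projection plane, so its inverse should map
# plane coordinates (z = 0) back into the chunk CRS
p = proj.matrix.inv().mulp(Metashape.Vector([x, y, 0]))
print(p)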

Any help is very much appreciated

Best,

geo_enth

2
Hi,

I am currently facing the problem that I have to create orthophotos of vertical structures (walls). These orthophotos also need to be georeferenced, which is why I wanted to ask how and where I can access the transformation parameters from the 2D orthophoto/orthomosaic image coordinates to the 3D real-world coordinates. I know Metashape stores them somewhere, because polygons drawn on the (georeferenced) 3D model are displayed perfectly on the orthophoto, but I don't know how to access them. Ideally the transformation parameters would be accessible via the Python API.
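
From what I can tell so far, the candidates are the orthomosaic extents plus Metashape.OrthoProjection; this is what I have been inspecting (the attribute meanings are my assumptions):

Code:
import Metashape

chunk = Metashape.app.document.chunk
ortho = chunk.orthomosaic

# extents and resolution should give the affine pixel -> plane mapping
print(ortho.left, ortho.top, ortho.resolution)

# the projection carries the CRS and, for planar projections of walls, a 4x4
# matrix that (I assume) relates the projection plane to the chunk CRS
print(ortho.projection.crs)
print(ortho.projection.matrix)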

If you have any hints on how to achieve this, I would be very happy.

Best,

geo_enth

3
Hi,

I am trying to compute 3D world coordinates for certain pixels within my aligned photo. However, I do not want to intersect the camera ray with the 3D model (as the pickPoint() function does); instead I want to intersect it with a custom plane (which I have computed from the sparse point cloud). The reason is that the 3D model is not always available or sometimes has holes.

For intersecting the camera ray of a certain pixel (x/y) with the plane I need to compute the direction vector (camera center to pixel). As I understand it, the upper-left 3x3 submatrix of

Code:
rotation = np.asarray(segmentCam.transform).reshape(4,4)[0:3,0:3]

contains the rotation matrix that transforms pixel coordinates into 3D world coordinates. To this I then apply the chunk transformation matrix to derive the real-world coordinates:

Code:
M = np.asarray(chunk.transform.matrix).reshape(4,4)[0:3,0:3]                 # extracting the transformation from internal Metashape to world coordinates
rotation_M = np.matmul(M, rotation)

This matrix I finally apply to the internal camera coordinates that I picked (and corrected using the camera calibration parameters):

Code:
b = np.matmul(np.transpose(rotation_M), b_loc)

However, the results are not correct, and I strongly suspect that I am misunderstanding the two transformation matrices. Could somebody help me with estimating the direction vector (camera center to pixel)?

Thanks!

Here is the complete code for a better understanding:

Code:
segmentCam = chunk.cameras[0]
R, C = getProjectionPlane(chunk)                                             # derive plane from sparse point cloud (works fine)
                                                                             # R ... 3x3 matrix containing the plane-defining direction vectors
                                                                             # C ... barycenter of the plane (EPSG:31256)

C_camera = chunk.crs.project(chunk.transform.matrix.mulp(segmentCam.center)) # 3D coordinates of the camera center [m] (EPSG:31256)
row = np.asarray([3000, 1000])                                               # image coordinates [px]
b_temp1 = np.append(row, 0)                                                  # homogeneous image coordinates [px]
b_temp2 = np.array([segmentCam.calibration.cx, segmentCam.calibration.cy,    # calibration parameters of the camera
                    segmentCam.calibration.f])
b_loc = b_temp1 - b_temp2                                                    # image coordinates in the "local" camera coordinate system

rotation = np.asarray(segmentCam.transform).reshape(4,4)[0:3,0:3]            # extracting the camera transformation parameters (3x3 matrix)
M = np.asarray(chunk.transform.matrix).reshape(4,4)[0:3,0:3]                 # extracting the transformation from internal Metashape to world coordinates
rotation_M = np.matmul(M, rotation)                                          # combining the matrices by multiplying
b = np.matmul(np.transpose(rotation_M), b_loc)                               # applying the transformations to the picked camera coordinates
b_norm = b / np.linalg.norm(b)                                               # normalizing (not strictly necessary)

C_plane = Metashape.Vector(C)                                                # barycenter of the plane
a1 = np.asarray(R).reshape(3,3)[0,0:3]                                       # first plane direction vector
a2 = np.asarray(R).reshape(3,3)[1,0:3]                                       # second plane direction vector

A = np.transpose(np.array([a1, a2, -b]))                                     # set up the equation system (intersect plane with image ray)
param = np.matmul(np.linalg.inv(A), (C_camera - C_plane))

S1 = a1*param[0] + a2*param[1] + C_plane                                     # apply the parameters to the plane equation and retrieve the
                                                                             # intersection of the image ray with the plane -> S1 == S2
S2 = b*param[2] + C_camera
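
For reference, here is an alternative direction computation I have been considering, based on calibration.unproject() and crs.localframe(); the overall recipe is my own sketch and should be checked against the reference manual:

Code:
# sketch: calibration.unproject() gives the pixel ray in camera coordinates,
# camera.transform rotates it into the chunk frame, chunk.transform.matrix into
# geocentric coordinates, and chunk.crs.localframe() into a local frame whose
# axes should (approximately) coincide with those of the projected CRS
T = chunk.transform.matrix
pixel = Metashape.Vector([3000, 1000])
ray_cam = segmentCam.calibration.unproject(pixel)              # camera frame
to_local = chunk.crs.localframe(T.mulp(segmentCam.center))     # geocentric -> local ENU
b_alt = (to_local * T * segmentCam.transform).mulv(ray_cam)    # direction in CRS axes
b_alt = np.array([b_alt.x, b_alt.y, b_alt.z])
b_alt = b_alt / np.linalg.norm(b_alt)                          # unit direction for the intersection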

4
Python and Java API / Mesh: Select faces by "Polygon size"
« on: May 17, 2022, 11:41:59 AM »
Dear all,

I was wondering if it is possible to perform something like Gradual Selection -> Polygon size with the Python API as well (please find the corresponding GUI screenshot attached). I would like to remove very large faces from my model, as they are usually erroneous. I didn't find any function in the API docs that would allow me to do this, but maybe somebody already has code to perform a similar operation.
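
In case it helps, this is the direction I was thinking of: computing each face's area by hand and selecting the large ones. The loop is my own sketch (and presumably slow on large meshes), and the final removal call is an assumption to be checked against the manual:

Code:
import Metashape

chunk = Metashape.app.document.chunk
model = chunk.model
verts = model.vertices
threshold = 1.0                        # assumed area threshold (chunk-internal units)

for face in model.faces:
    v0, v1, v2 = (verts[i].coord for i in face.vertices)
    area = 0.5 * Metashape.Vector.cross(v1 - v0, v2 - v0).norm()  # triangle area
    if area > threshold:
        face.selected = True

model.removeSelection()                # assumed to delete the selected faces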

Thanks in advance,

geo_enth

5
Python and Java API / gradual selection > model confidence
« on: April 21, 2022, 06:02:51 PM »
Hi,

I was delighted to see that in version 1.8 it is now possible to perform gradual model selection with "confidence" as a parameter.

I wanted to ask if this is also accessible via the Python API (as it is for the dense cloud filtering). I couldn't find it in the Python API manual.
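
For comparison, this is the dense-cloud confidence recipe I am using (the removePoints(list(range(128))) idiom, picked up from this forum, removes the filtered points across all point classes), and I was hoping to find an analogue for the model:

Code:
dense = chunk.dense_cloud
dense.setConfidenceFilter(0, 1)        # keep only points with confidence 0-1 visible
dense.removePoints(list(range(128)))   # remove the visible points in all point classes
dense.resetFilters()                   # restore visibility of the remaining points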

Thanks!

6
Hi,

I was wondering if there is a way to read the stored GPS coordinates from the EXIF data. My camera is equipped with a GNSS module and stores approximate camera positions in the EXIF data. I know it is possible to read EXIF data via this command:

Code:
camera.photo.meta["Exif/GPSLongitude"]

However, this only returns a string of the rounded coordinates (e.g. '16,22.116E'), while I need full precision (preferably as a float or double, but this is less important).
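
One thing I noticed (and would like confirmed) is that Metashape seems to parse the EXIF GPS tags into the reference data on import, so camera.reference.location may already hold the coordinates as floats:

Code:
loc = camera.reference.location          # Metashape.Vector or None
if loc is not None:
    lon, lat, alt = loc.x, loc.y, loc.z  # in the chunk's reference CRS (typically WGS 84)
    print(lon, lat, alt)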

Thanks,

geo_enth

7
General / Boost computation speed for photo subalignment
« on: April 19, 2022, 10:57:54 AM »
Dear Metashape team,

My task is to subalign images to a very big project. Specifically, I want to subalign ca. 100 images (on a daily basis) to a chunk that consists of ca. 27,000 photos.

Everything works fine and the photos get subaligned accurately. Here are the main parameters I am using (put together in the sketch below):

downscale = 4
generic_preselection=True
keypoint_limit=10000
tiepoint_limit= 4000
keep_keypoints=True
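
Put together as a script, the step looks roughly like this (a sketch; new_cameras stands for the ~100 newly added cameras, and the reference_preselection arguments relate to question b below):

Code:
chunk.matchPhotos(cameras=new_cameras,
                  downscale=4,
                  generic_preselection=True,
                  reference_preselection=True,
                  reference_preselection_mode=Metashape.ReferencePreselectionSource,
                  keypoint_limit=10000,
                  tiepoint_limit=4000,
                  keep_keypoints=True)
chunk.alignCameras(cameras=new_cameras)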

However, it takes quite a long time to compute (several hours), which is understandable considering the project size. I still wanted to ask if you have any tips for reducing the computation time. Specifically:

a) Could you recommend hardware specifically for the subalignment task? My current setup consists of CPU: AMD EPYC 7302 16-core; GPU: GeForce GTX 1650; RAM: 206 GB. I know this is not the ultimate hardware for this task, but observing the task manager during the process shows that none of my resources are fully used, so would a better CPU/GPU even make sense? If so, is there specific hardware you would recommend for subalignment tasks?

b) How much (approximately) would using externally measured camera coordinates (via a GNSS antenna mounted on the camera, with an accuracy of a few cm) reduce the computation time (by activating reference preselection from source)?

Thank you very much in advance!

geo_enth



8
Feature Requests / Duplicating chunk (only for selected cameras)
« on: April 05, 2022, 04:47:54 PM »
Hi Metashape Team,

I think this topic has been brought up already, but I wanted to emphasize how nice it would be to allow duplicating a chunk for selected cameras only. I think that would save many people a lot of time and nerves.
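
In the meantime I am using a workaround along these lines (a sketch; it assumes camera keys are preserved by Chunk.copy()):

Code:
# duplicate the chunk, then drop the cameras that were not selected
selected = {cam.key for cam in chunk.cameras if cam.selected}
new_chunk = chunk.copy()
new_chunk.remove([cam for cam in new_chunk.cameras if cam.key not in selected])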

Cheers and all the best,

geo_enth3

9
Dear Metashape-Team,

When building my dense cloud on the GPU I always get the "zero resolution" error. As it works flawlessly (but very slowly) on my CPU, I suspect it is caused by my GPU (AMD Radeon HD 7700 Series, discrete). My GPU drivers are up to date. Is there a solution to this problem?

Thanks!

10
Python and Java API / Gradual model selection - Python API
« on: March 28, 2022, 11:56:30 AM »
Dear Metashape Team,

I was wondering if the "Gradual Selection" tool for models is also accessible via the Python API. I couldn't find such a function in the documentation, but I might have overlooked it. Specifically, I am searching for the "Connected Component Size" criterion to filter my model.
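
The closest candidate I have spotted so far is Model.removeComponents(); this is how I assume it would be used, although I could not confirm the exact meaning of the threshold (face count vs. percentage of the largest component):

Code:
chunk.model.removeComponents(10000)    # assumed: drops connected components below the threshold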

Thanks!
