Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - mitchgalea

Pages: [1]
Bug Reports / Re: Assertion "3478972301441" failed at line 720
« on: March 11, 2023, 07:54:44 AM »
I get the same error in my project; I've tested it on both Metashape 2.0.0 and Metashape 2.0.1.

Python and Java API / Metashape 2.0 python laser scan depth map
« on: February 23, 2023, 06:58:27 AM »

Is there a way to access the depth maps for the terrestrial laser scans in Metashape 2.0? I have attached a screenshot of what the depth map looks like when opening the TLS spherical image. When accessing `` though, it is None. I assume the depth maps are computed from the laser scan rather than stored? If so, what is the calculation per pixel? My goal is to access the depth of certain pixels.
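Lacking an API answer, one way to approximate the depth map yourself is to project the scan's points back into the spherical image and keep the nearest range per pixel. Below is a minimal numpy sketch, assuming an equirectangular layout with z up and azimuth measured from the +x axis (Metashape's actual spherical convention may differ, so treat the mapping as an assumption):

```python
import numpy as np

def splat_depth(points, width, height):
    """Splat scanner-local 3D points into an equirectangular depth image.

    Assumes z is up and azimuth is measured from the +x axis; the real
    spherical convention used by the scanner/Metashape may differ.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)            # range per point
    azimuth = np.arctan2(y, x)                        # [-pi, pi]
    elevation = np.arcsin(np.clip(z / depth, -1, 1))  # [-pi/2, pi/2]
    # Equirectangular mapping: 360 degrees horizontally, 180 vertically.
    u = ((azimuth + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = ((np.pi / 2 - elevation) / np.pi * height).astype(int).clip(0, height - 1)
    img = np.full((height, width), np.inf)
    # Keep the nearest return per pixel.
    np.minimum.at(img, (v, u), depth)
    return img

# A point 5 m away on the +x axis lands mid-height, with depth 5.
img = splat_depth(np.array([[5.0, 0.0, 0.0]]), 360, 180)
```

Reading the depth of a given pixel is then just an index into `img`; pixels with no return stay at infinity.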


General / Re: Metashape 2.0 Laser Scan Referencing
« on: February 21, 2023, 08:12:16 AM »
I figured out how to fix this. I had to compute the chunk transform that puts the laser scans in the correct position. To do this, prior to alignment I duplicated the chunk and then aligned the original chunk. From there I had to use the Python console.

Code: [Select]
# get the chunks to use (the duplicated chunk holds the original positions)
non_aligned_chunk = Metashape.app.document.chunks[1]
aligned_chunk = Metashape.app.document.chunks[0]

# function to get camera based on label
def get_camera(chunk, label):
    for camera in chunk.cameras:
        if camera.label == label:
            return camera
    return None

# get the spherical camera for the same laser scan in both chunks
non_aligned_camera = get_camera(non_aligned_chunk, "Scan_001")
aligned_camera = get_camera(aligned_chunk, "Scan_001")

# the non-aligned camera is in the correct position, so compute the chunk
# transform that maps the aligned camera onto it
aligned_chunk.transform.matrix = non_aligned_camera.transform * aligned_camera.transform.inv()
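The algebra behind that last line can be checked outside Metashape: if T is the chunk transform and C_aligned is the camera pose in the aligned chunk, we want T * C_aligned to equal the desired pose, so T = C_desired * C_aligned⁻¹. A small numpy sketch of the same idea with 4x4 pose matrices (the helper names are mine):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z axis, in degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose(R, t):
    """Build a 4x4 pose matrix from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# camera pose in the freshly aligned chunk, and the desired (original) pose
aligned = pose(rot_z(30), [1.0, 2.0, 3.0])
desired = pose(rot_z(75), [-4.0, 0.5, 2.0])

# chunk transform that maps the aligned pose onto the desired pose
T_chunk = desired @ np.linalg.inv(aligned)
assert np.allclose(T_chunk @ aligned, desired)
```

In the Metashape snippet above, the non-aligned camera's transform plays the role of the desired pose.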

General / Metashape 2.0 Laser Scan Referencing
« on: February 21, 2023, 02:32:49 AM »

I am using the Metashape 2.0 terrestrial laser scans feature to align TLS scans captured with a Leica RTC360 with images captured from other sources. I have followed the procedure listed here. It works, and I can align the TLS scans with the images. My issue is that the position of the laser scans after alignment is different. This makes sense because we reset the alignment, but is there a way to add reference information to the laser scans so that they keep their initial position after alignment? I have tried adding reference information to the laser scan images based on the initial position, but this doesn't do anything (it seems that images attached to laser scans are ignored when it comes to referencing).

Any help would be appreciated

Can this be answered? It seems like a yes-or-no question as to whether it is currently possible.


I've experimented with the Metashape 2.0 laser scan features and really like how powerful they can be. We would like to use this in a pipeline that uses the Python API.

The steps are:
 - importing the point clouds: can be done in Python with chunk.importPointCloud(...)
 - adding a laser scan group: ?
 - setting the laser scan group to fixed: ?
 - adding laser scans to the group: ?
 - unlocking the laser scan group transform: ?

I have attached screenshots of these steps in the GUI, but I would like to know how to do all of them in Python, if that is actually possible.


I have a question about the 'cameras' argument of the matchPhotos function.

In a project where I use the 'cameras' argument, points are only detected and matches generated for these cameras, but matches are also generated between these cameras and other cameras already in the project that have key points detected. I would expect the argument to limit matching to only the cameras in the list.

I am trying to access the estimated camera yaw, pitch, and roll values (as they appear in the GUI Reference pane).

To get the transformed camera position I am using the following code:
Code: [Select]
chunk = Metashape.app.document.chunk
camera = chunk.cameras[2]
transform = chunk.transform.matrix * camera.transform
# this works as expected, values are matching those in reference pane
pos = transform.translation()

rot = transform.rotation()
# this is not working, expected values are 93.583, -0.937, -44.785, outputted values are 273.58297001960716, 1.784351814162492, -135.21509554607456
ypr = Metashape.Utils.mat2ypr(rot)

I can do this using external libraries, but I thought there would be a way to do it inside Metashape. This issue persists across different documents.
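For reference, a mat2ypr-style extraction can be reproduced with plain numpy. The sketch below uses the common Z-Y-X (yaw, pitch, roll) convention; note that the Reference pane angles are likely expressed in the local coordinate frame at the camera rather than in the raw chunk frame, which would explain the roughly 180-degree offset seen above, so the rotation from chunk.transform.matrix * camera.transform generally needs converting into that local frame first (this is an assumption on my part, not confirmed here):

```python
import numpy as np

def ypr_to_matrix(yaw, pitch, roll):
    """Build a rotation from Z-Y-X (yaw, pitch, roll) angles in degrees."""
    y, p, r = np.radians([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx

def matrix_to_ypr(R):
    """Recover (yaw, pitch, roll) in degrees from a Z-Y-X rotation matrix."""
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(-np.arcsin(np.clip(R[2, 0], -1, 1)))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll

# round-trip the expected Reference pane values
ypr = matrix_to_ypr(ypr_to_matrix(93.583, -0.937, -44.785))
```

If Metashape uses a different angle convention, the extraction formulas change accordingly, but the round-trip test is the same.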

Bug Reports / Unexpected Error on Startup
« on: June 16, 2022, 05:34:40 PM »

When opening Metashape (this error started a couple of weeks ago), I have been getting this error:

Unexpected error occurred.
Details: System clock has been set back (-40)

This prevents Metashape from being activated, and I cannot activate it manually either. I have done some research and found that this is an issue with license managers where a file somewhere in the filesystem is corrupted or has had its timestamp modified; however, I haven't had any luck fixing it. I have tried searching for files with modification times in the future or far in the past and correcting their dates, and I have also tried repairing my Ubuntu install, but nothing has been successful. I have only noticed errors with Metashape. Other than reinstalling Ubuntu, does anyone have suggestions for fixing this error?
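For what it's worth, the search for future-dated files can be scripted with GNU find. A hedged sketch (the search roots below are just examples; adjust them for your system, and note this only finds candidates, it does not fix the license state):

```shell
# List files whose modification time is in the future -- the usual trigger
# for "System clock has been set back" license-manager errors.
find "$HOME" /var/tmp -xdev -newermt now -print 2>/dev/null
```

Any file this prints can then be given a sane timestamp with `touch`, after which the license check may recover.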



General / Re: Cylindrical Orthomosaic Generation for long thin wall scan
« on: January 26, 2022, 05:48:25 PM »
Thanks for the feedback. I manually found an appropriate value for f and it flattened out the model. I tried it on multiple datasets and it worked a treat. Thank you!

General / Cylindrical Orthomosaic Generation for long thin wall scan
« on: January 24, 2022, 02:16:27 PM »

We have been capturing some datasets of a port quay wall underwater. The datasets are single strips approximately 60 m in length. Due to the high distortion of our lens and the data being captured underwater, the resulting model exhibits the 'bowl effect'. This is not too much of an issue, as we are not interested in the model itself but in an orthomosaic. We tried generating a planar orthomosaic, but the edges came out quite dark due to the curve of the model. I have been experimenting with the cylindrical orthomosaic generation process and have read through the following article.
This was quite helpful for background, but I haven't been able to get it to work. I've been trying to use the marker method. To do this I added IMU readings to align the model, then used a circle-fitting algorithm on the data to get the center of the cylinder so that I could generate an axis. When creating markers, giving them the center-axis coordinates, and trying to generate the orthomosaic, I get the following error: 'location of marker is undefined'.
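For anyone following along, the circle fit mentioned above can be done with a simple least-squares (Kåsa) fit; a minimal numpy sketch (the function name is mine, and it works even when only an arc of the wall is observed):

```python
import numpy as np

def fit_circle(xy):
    """Least-squares (Kasa) circle fit: returns center (cx, cy) and radius.

    Linearizes (x-cx)^2 + (y-cy)^2 = r^2 into
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# points on half of a circle of radius 30 centred at (10, -5),
# mimicking a single strip along a curved wall
t = np.linspace(0, np.pi, 50)
pts = np.column_stack([10 + 30 * np.cos(t), -5 + 30 * np.sin(t)])
cx, cy, r = fit_circle(pts)
```

The recovered center, swept along the strip direction, gives the cylinder axis for placing the markers.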

I have attached an image of the model curve, the mesh and planar orthomosaic.

What would be the best method to generate an orthomosaic of this model?

Thanks in advance
