Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - mitchgalea

Pages: [1]
General / Cleaning Tiled Model (Remove Floating blobs)
« on: May 29, 2023, 01:53:15 AM »

We have internally imaged a large water tank (80 m in diameter). Due to the close proximity to the ceiling, we have a lot of images in the dataset (around 12,000). We want a high-resolution model, so I am experimenting with tiled models (I haven't had any prior use cases for them).
The settings we used were:
- Source Data: Depth Maps
- Quality: High
- Pixel resolution: 0.004m
- Surface count: High
- Ghosting filter: False

There are a lot of black noise blobs under and behind the tiled model, and regular cleaning approaches such as gradual selection are not available for tiled models.

1. Is there any way to clean a tiled model?
2. Would enabling ghosting filter reduce the noise?
3. Would it work to produce a regular mesh, clean it, and then use that mesh as the source data for the tiled model?
4. I am also considering dividing the model into chunks and processing them separately, but I would like to deliver a single model at the end. Is this possible?

I have attached a screenshot.

General / Dense Cloud / Laser Scan Rendering quality
« on: May 07, 2023, 04:19:00 AM »

For our workflows we have a step where we import quite a lot of laser scans into a chunk, and then select laser scans to remove, as we don't need that many. The problem is that when viewing the chunk to select which laser scans to remove, Metashape is very slow, to the point of being unusable. We have to hide the laser scans and select them based on the camera locations instead. Is there a way to reduce the rendering quality of laser scans so that Metashape is smoother to use?

Python and Java API / Metashape 2.0 python laser scan depth map
« on: February 23, 2023, 06:58:27 AM »

Is there a way to access the depth maps for terrestrial laser scans in Metashape 2.0? I have attached a screenshot of what the depth map looks like when opening the TLS spherical image. When accessing `` through the API, though, it is None. I assume the depth maps are computed from the laser scan on the fly rather than stored? If so, what would be the calculation per pixel? My goal is to access the depth of certain pixels.
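If the depths are indeed derived from the scan rather than stored, one workaround would be to project the laser-scan points into the spherical image yourself. Below is a minimal pure-Python sketch; the equirectangular conventions assumed here (azimuth measured from +X toward +Y, +Z up, azimuth mapped across the image width) are my guess and may not match what the scanner or Metashape actually uses:

```python
import math

def spherical_project(x, y, z, width, height):
    """Project a 3D point in the scanner frame onto an equirectangular
    image of size (width, height). Returns (column, row, depth).

    Assumed convention: +Z up, azimuth measured from +X toward +Y,
    azimuth -pi..pi mapped to columns 0..width."""
    depth = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)                       # -pi .. pi
    elevation = math.asin(z / depth)                 # -pi/2 .. pi/2
    col = (azimuth + math.pi) / (2 * math.pi) * width
    row = (math.pi / 2 - elevation) / math.pi * height
    return col, row, depth
```

Running this over every point of the scan and keeping the nearest depth per pixel would reconstruct a depth image matching the spherical photo.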


General / Metashape 2.0 Laser Scan Referencing
« on: February 21, 2023, 02:32:49 AM »

I am using the Metashape 2.0 terrestrial laser scans feature to align TLS scans captured with a Leica RTC360 with images captured from other sources. I have followed the procedure listed here. It works, and I can align the TLS scans with the images. My issue is that the position of the laser scans after alignment is different. This makes sense because we reset the alignment, but is there a way to add reference information to the laser scans so that they keep their initial position after alignment? I have tried adding reference information to the laser scan images based on the initial position, but this doesn't do anything (it seems that images attached to laser scans are ignored when it comes to referencing).

Any help would be appreciated


I've experimented with the Metashape 2.0 laser scan features and really like how powerful they can be. We would like to use them in a pipeline built on the Python API.

The steps are:
 - importing the point clouds: can be done in Python with chunk.importPointCloud(...)
 - adding a laser scan group: ?
 - setting the laser scan group to fixed: ?
 - adding laser scans to the group: ?
 - unlocking the laser scan group transform: ?

I have attached screenshots of these steps in the GUI, but I would like to know how to do all of them in Python, if that is actually possible.


I have a question about the 'cameras' argument of the matchPhotos function.

In a project where I use the 'cameras' argument, Metashape only detects points and generates matches for those cameras, but it also generates matches between those cameras and other cameras already in the project that have key points detected. I would expect the argument to limit matching to pairs within the given list.

I am trying to access the camera yaw, pitch, and roll estimated values (as they appear in the GUI reference pane).

To get the transformed camera position I am using the following code:

```python
chunk = Metashape.app.document.chunk
camera = chunk.cameras[2]
transform = chunk.transform.matrix * camera.transform
# this works as expected: values match those in the reference pane
pos = transform.translation()

rot = transform.rotation()
# this is not working: expected values are 93.583, -0.937, -44.785,
# but the output values are 273.58297001960716, 1.784351814162492, -135.21509554607456
ypr = Metashape.Utils.mat2ypr(rot)
```

I can do this using external libraries, but I thought there would be a way to do it inside Metashape. This issue persists across different documents.
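For what it's worth, a plain-Python Euler extraction under the aerospace Z-Y-X (yaw-pitch-roll) convention looks like the sketch below. This is only one of several conventions, and the reference pane may also express angles relative to a local frame, so a convention mismatch along these lines could explain the discrepancy:

```python
import math

def mat2ypr_zyx(r):
    """Yaw, pitch, roll in degrees from a 3x3 rotation matrix r
    (given as a list of rows), assuming R = Rz(yaw) * Ry(pitch) * Rx(roll).
    This is the aerospace Z-Y-X convention, which is not necessarily
    the convention Metashape.Utils.mat2ypr uses."""
    yaw = math.degrees(math.atan2(r[1][0], r[0][0]))
    pitch = math.degrees(math.asin(-r[2][0]))
    roll = math.degrees(math.atan2(r[2][1], r[2][2]))
    return yaw, pitch, roll
```

Comparing this against Metashape.Utils.mat2ypr on a known rotation is one way to pin down which convention the API uses.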

Bug Reports / Unexpected Error on Startup
« on: June 16, 2022, 05:34:40 PM »

When opening Metashape (this error started a couple of weeks ago), I have been getting this error:
`Unexpected error occurred.
Details: System clock has been set back (-40)`
This causes Metashape to not be activated, and I cannot activate it manually either. I have done some research and found that this is an issue with license managers where a file somewhere in the filesystem is corrupted or has had its timestamp modified; however, I haven't had any luck fixing it. I have tried searching for files with modification dates in the future or far in the past and correcting the dates, and I have also tried repairing my Ubuntu install, but nothing has been successful. I have only noticed errors with Metashape. Other than reinstalling Ubuntu, does anyone have any suggestions for fixing this error?



General / Cylindrical Orthomosaic Generation for long thin wall scan
« on: January 24, 2022, 02:16:27 PM »

We have been capturing some datasets of a port quay wall underwater. The datasets are single strips approximately 60 m in length. Due to the high distortion of our lens and the data being captured underwater, the resulting models exhibit the 'bowl effect'. This is not too much of an issue, as we are not interested in the model itself but in an orthomosaic. We tried generating a planar orthomosaic, but the edges came out quite dark due to the curve of the model. I have been experimenting with the cylindrical orthomosaic generation process and have read through the following article
This was quite helpful for background, but I haven't been able to get it to work. I've been trying to use the marker method. To do this, I added IMU readings to align the model, then used a circle-fitting algorithm on the data to get the center of the cylinder so that I could generate an axis. When I create markers, give them the center-axis coordinates, and try to generate the orthomosaic, I get the following error: 'location of marker is undefined'.
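For reference, the circle fit I mention can be done with a simple least-squares (Kåsa) fit; a minimal sketch, assuming the scan points have already been projected onto a plane perpendicular to the intended cylinder axis:

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solves x^2 + y^2 = 2*a*x + 2*b*y + c
    for (a, b, c) via the normal equations; the centre is (a, b) and the
    radius is sqrt(c + a^2 + b^2). Points are (x, y) pairs."""
    # Accumulate the 3x3 normal equations A^T A u = A^T d for u = (2a, 2b, c),
    # where each row of A is (x, y, 1) and d = x^2 + y^2.
    ata = [[0.0] * 3 for _ in range(3)]
    atd = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        d = x * x + y * y
        for i in range(3):
            atd[i] += row[i] * d
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [atd[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    u = [m[i][3] / m[i][i] for i in range(3)]
    a, b = u[0] / 2.0, u[1] / 2.0
    radius = math.sqrt(u[2] + a * a + b * b)
    return (a, b), radius
```

Fitting circles at several heights and joining the fitted centers would give an estimate of the cylinder axis to place the markers on.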

I have attached an image of the model curve, the mesh, and the planar orthomosaic.

What would be the best method to generate an orthomosaic of this model?

Thanks in advance
