Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Alexey Pasumansky

Pages: 1 2 [3] 4 5 ... 876
General / Re: Depth maps being "stuck" for long time
« on: October 17, 2021, 09:27:43 PM »
Hello cadforcam,

If you have the CPU enabled alongside the GPU for the GPU-supported stages, it could be the CPU that is still trying to finish the job. According to the provided log, depth maps generation sub-tasks can take tens of times longer on CPU than on GPU, so you can try to disable the "Use CPU" option in the GPU tab of Metashape Preferences and check on a similar project whether the process takes less time.

Bug Reports / Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« on: October 17, 2021, 08:32:32 PM »
Hello Andy,

Were there any errors/warnings in the export log? If you can reproduce the problem fairly easily with large point clouds on certain nodes, could you please check whether the problem persists in the 1.7.5 release version?

Hello MichalDV,

We have analyzed the issue and it appears that the problem is related to the mixture of different image formats (8-bit and floating-point data in your case) in the tiled model generation procedure (in particular, during the image preprocessing stage). The fix will be included in the next version update.

Good to hear that it works now! If any problems come up when using the compound coordinate system with this geoid, please let us know.

Feature Requests / Re: Edit size of 'Show Cameras' rectangles
« on: October 15, 2021, 08:20:20 PM »
Hello CheeseAndJamSandwich,

Yes, it was SHIFT + mouse-wheel, I've fixed the typo in the previous post.

As for the color, it cannot be changed at the moment, but as a workaround, you can select all cameras - and the placeholders will be shown in red color.

Hello Stef,

I'm not sure I follow: you wish to keep the original registration of the TLS data, but have it georeferenced in geographic coordinates?

If you mean keeping the relative orientation of the TLS positions, then I can suggest the following workaround which, I assume, should work for this task, although I haven't yet tested it on real data:
- align TLS data with the drone imagery as described in the tutorial,
- set coordinate system for the chunk (including cameras and markers) to Local Coordinates in the Reference pane settings dialog,
- place marker projections for GCPs both on the drone photos and on the spherical panoramas,
- uncheck all drone cameras and markers in the Reference pane,
- check on TLS cameras (which have local coordinates) in the Reference pane,
- press Update button on the Reference pane toolbar,
- reset alignment for TLS cameras,
- use a script to apply the source registration information to the TLS cameras,
- uncheck TLS cameras in the Reference pane,
- input markers' coordinates to the Reference pane in geographic/projected coordinate system,
- check on all markers in the Reference pane,
- in the Reference pane settings dialog, switch the chunk's coordinate system to the geographic/projected system used for the GCPs measurement,
- press Update button on the Reference pane toolbar.

I don't think I have missed any important steps here, but as mentioned above, I haven't yet tried it on real data myself. If you are going to try it on your project data, you may start with a subset to reduce the processing time.
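For reference, the enable/disable toggles in this workflow can also be driven from the Metashape Python console. Below is a rough, untested sketch using the 1.x Python API; the `is_tls` helper and the EPSG code are placeholders that you would adapt to your project, and the snippet only runs inside Metashape's bundled Python environment:

```python
import Metashape  # available only inside Metashape's Python environment

chunk = Metashape.app.document.chunk

def is_tls(camera):
    # placeholder condition: adapt to however your TLS stations are labeled
    return camera.label.startswith("scan_")

# Stage 1: use only the TLS reference (local coordinates), ignore drone data
for camera in chunk.cameras:
    camera.reference.enabled = is_tls(camera)
for marker in chunk.markers:
    marker.reference.enabled = False
chunk.updateTransform()

# Stage 2 (later): switch to the GCP markers in the geographic/projected system
for camera in chunk.cameras:
    camera.reference.enabled = False
for marker in chunk.markers:
    marker.reference.enabled = True
chunk.crs = Metashape.CoordinateSystem("EPSG::4326")  # placeholder CRS
chunk.updateTransform()
```

The GUI steps listed above remain the authoritative procedure; the script is just a way to repeat them on several chunks without clicking through the Reference pane each time.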

Maybe you can also find some additional ideas in the following thread related to TLS data processing:

General / Re: Masking
« on: October 15, 2021, 07:07:24 PM »
Hello Steve,

Would it be possible for you to share some screenshots (if not the Metashape project itself) that demonstrate the result of the default model reconstruction, together with the model that you would like to get (even some rough manual editing of the mesh could give an idea of what you are trying to achieve)? An overview of the thumbnails of the original images would also help to better understand the project specifics.

Thought I had seen a youtube video of a statue where she masks one and applies to all cameras.
If it was our old tutorial on 3D Model Generation (for PhotoScan Standard 0.9), then it showed how to draw masks manually and how to import masks (from B&W images) for all the images. Some users were confused and thought that after drawing a single mask, the same object was masked out on all the other photos; however, those masks were just imported from files, which was actually mentioned in the video.

General / Re: What needs doing to be able to use Reduce Overlap command ?
« on: October 15, 2021, 06:43:26 PM »
Hello Steve,

I have just done mesh from sparse point cloud, and I am unable to click on a camera, what am I doing wrong ? You said they can be selected in a model window with mesh made, but I cant get them to . I am using the arrow tool immediate left icon to the selection dashed line tools, the one that can tumble a perspective view or moves ortho views etc. The arrow tool as we know it.
You need to switch from the Navigation tool (arrow) to the Selection instruments and draw a selection that includes the cameras (the ends of the black lines, opposite to the blue rectangles representing the cameras).

Hello MichalDV,

Looks like the problem occurs when the source images are being read. If the issue is caused by corrupted files, you can try to disable half of the images in the set and run again; if it still crashes, repeat the procedure with the second half of the dataset.
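The halving procedure above is a plain binary search over the image set. A minimal sketch in pure Python, assuming exactly one corrupted image and a `crashes(subset)` check that stands in for "disable everything else in Metashape and run the failing step":

```python
def find_bad_image(images, crashes):
    """Binary-search for a single corrupted image.

    `images`  : list of image identifiers (e.g. file names).
    `crashes` : callable taking a subset of images and returning True
                if processing that subset fails (a manual Metashape run
                in practice; any predicate for testing purposes).
    """
    while len(images) > 1:
        half = len(images) // 2
        first, second = images[:half], images[half:]
        # keep whichever half still reproduces the crash
        images = first if crashes(first) else second
    return images[0]
```

With N images this needs about log2(N) runs instead of N, which matters when each run takes minutes before the crash.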

Hello Stef,

If you can get structured scans from the Leica RTC 360 in E57 format, then I'd recommend following the approach mentioned in the article you are referring to. Following it, you should be able to align the photos from your drone with the laser scans and reference the complete solution using the GCPs. Then generate the surface (mesh, DEM or dense cloud) using the depth maps source, which will use depth data both from the laser scanner and from the photogrammetric reconstruction.

General / Re: Masking
« on: October 15, 2021, 05:50:54 PM »
Hello Steve,

Actually, I meant that you can import the masks from an external source using the alpha band or B&W images. Following this approach, Metashape wouldn't create new masks that differ from the shape of the existing external masks. But if you need to mask out the same area on multiple images (for example, some fixed element in the image corner that moves together with the camera), then it can be done with a single mask applied to multiple cameras.
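For the "fixed element in the image corner" case, the single shared B&W mask can be generated programmatically. A stdlib-only sketch (the box coordinates and file name are examples, and plain-text PGM is chosen only because it needs no imaging library; Metashape itself accepts common B&W image formats for mask import):

```python
def make_corner_mask(width, height, box):
    """Build a binary mask: 255 = keep pixel, 0 = masked out.

    box = (x0, y0, x1, y1) is the rectangle to black out, e.g. a
    timestamp overlay burned into the frame by the camera firmware.
    """
    x0, y0, x1, y1 = box
    return [
        [0 if (x0 <= x < x1 and y0 <= y < y1) else 255 for x in range(width)]
        for y in range(height)
    ]

def write_pgm(path, mask):
    """Save the mask as a plain-text (P2) PGM file."""
    height, width = len(mask), len(mask[0])
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")
        for row in mask:
            f.write(" ".join(map(str, row)) + "\n")
```

The resulting file can then be imported once and applied to all cameras through the mask import dialog.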

We are planning to publish some tutorials related to different masking approaches; I'm not sure, though, that they will be videos - most likely text with screenshots and, maybe, animated illustrations.

General / Re: Agisoft Metashape 1.8.0 pre-release
« on: October 15, 2021, 05:43:50 PM »
Hello fjgarciam,
Only for flir format? non dji format?
Yes, currently only FLIR format for R-JPEG images is supported.

Hello Steve,

Masking from model can be helpful in the following cases:

1. In projects where the object is scanned in several sessions, for example top and bottom separately, after rotating the object itself. Each sub-set of images can then be processed individually in separate chunks up to the model stage (for example, in Medium quality); everything not related to the object of interest (including the background on which the object is standing) should be removed from the mesh in each chunk, and the masks from the model are generated in each chunk. After that the chunks are merged (without caring about relative orientation, as merging is only needed to bring the images and masks into a single chunk) and the processing is restarted from scratch for the complete dataset with the masks applied.

2. For projects where some features are detected in the background or around the model and are still present after mesh reconstruction. The mesh model for such a project can be cleaned up manually (or with the help of the Gradual Selection -> Connected Components filter), masks are generated from the clean model, and the processing is restarted either from the very beginning or from the depth maps generation stage (to make the depth maps cleaner and focused on the object of interest only).

3. This approach can also be used in common cases: after building the mesh model and getting acceptable quality, you may want to retry the processing with the masks applied in order to check whether it gives a better result when the key and tie points are detected only on the surface of the object of interest.
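In all three cases, the mask generation from the (cleaned) model can also be triggered from the Python console. An untested sketch against the 1.x Python API, assuming `importMasks` with a model mask source as described in the API reference; it only runs inside Metashape:

```python
import Metashape  # available only inside Metashape's Python environment

chunk = Metashape.app.document.chunk
# Generate masks from the current mesh for all cameras in the chunk,
# replacing any masks that already exist.
chunk.importMasks(source=Metashape.MaskSourceModel,
                  operation=Metashape.MaskOperationReplacement,
                  cameras=chunk.cameras)
```

This is equivalent to the File -> Import -> Import Masks dialog with "From Model" selected, just scriptable per chunk.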

Hello Steve,

As I said before, you can generate a rough model from the sparse point cloud, which takes about a minute of calculation. Then use this model for the Reduce Overlap feature. Unfortunately, Metashape currently cannot identify excessive or unsuitable imagery automatically before any processing is started.

The selection of the cameras in the Model view should be performed with the selection tools (like Free-form selection, for example).

Feature Requests / Re: Edit size of 'Show Cameras' rectangles
« on: October 15, 2021, 05:17:23 PM »
Hello CheeseAndJamSandwich,

You can change the size of camera placeholders by using mouse-wheel in the Model view, while holding SHIFT key on the keyboard.
