Show Posts


Topics - ilia

1
Hi,

I have quite a weird issue with a multi-camera system. I was using a camera with 6 synced modules. Meshing went really well, but the textures had visible artifacts. I tried to narrow it down by simplifying the mesh and throwing away more and more cameras, and ended up with a single multi-camera, a simple mesh, and the same issue.

One of the slave cameras, despite having a proper relative orientation estimate with respect to the master camera, doesn't produce nice textures for part of its view. Some polygons in the view of this slave camera are textured properly (and this slave camera is the one used for texturing, as no other cameras see these polygons), but other polygons just get a weird splat of a single color.

Here is the project uploaded to GDrive with the data:
https://drive.google.com/drive/folders/1iV6ubNF6bi8TzZXByRm-VK4u4704lhds?usp=sharing

I suspect a gimbal lock issue here: two of the slave offset angles for this camera are close to 180 degrees.

I tried rolling this camera's data by 180 degrees to see if the problem would disappear, but it didn't help.

Metashape 2.0.3; also tested with 2.1.1 and got the same issue. Windows 10, RTX 4090, driver 536.99.

2
Hi

I have a project which is challenging to get matched properly. I use neural-network masks to hide some objects, sometimes quite aggressively (e.g. trees or bushes). Sometimes I build a mesh and use it as a source for masks, for example to avoid matching points on water.

During these iterations I would sometimes like to clean up the masks and rematch the images using more keypoints (I apply the "mask keypoints" option during alignment). But I don't see anywhere I can reset the keypoints for an image and have them recomputed after updating the masks to be less conservative. If I reset the alignment, the keypoints stored for these photos are kept (the keep-keypoints setting is on, and it is useful most of the time for my projects). But sometimes I want to force keypoints to be recomputed even with this option on.

I've looked through the Python API, and it seems it is also not possible at a lower level to clean up the keypoints for a specific set of images or to reset their alignment, so that the project treats them as images without keypoints and recalculates them.

May I ask how to do that? The only option I have in mind is exporting the project, cleaning up the keypoints for some photos, and importing it back.
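
The closest workaround I can think of is a forced re-match. A minimal sketch, assuming Chunk.matchPhotos in your version still accepts the keep_keypoints, reset_matches, filter_mask and cameras arguments (the behavior on a camera subset is my assumption, not verified):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Cameras whose masks were relaxed and whose keypoints should be redone;
# selecting them in the GUI first is just one hypothetical way to pick them.
to_rematch = [camera for camera in chunk.cameras if camera.selected]

# keep_keypoints=False makes Metashape detect keypoints afresh instead of
# reusing the stored ones; reset_matches clears the old matches first.
# As far as I can tell this re-runs detection rather than surgically
# deleting stored keypoints, which the API does not seem to expose.
chunk.matchPhotos(
    cameras=to_rematch,
    keep_keypoints=False,
    reset_matches=True,
    filter_mask=True,  # respect the updated masks
)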

3
Hi,

I'm using the Python API to load calibrated rig parameters into Metashape. This also includes the positions of the slave sensors relative to the master sensor in a multi-camera setup.
I can fix the intrinsic parameters; that option is available both in the GUI and in the Python API.

But I can't find a way (reading the documentation and also experimenting in the console) to fix the slave sensor positions to values predefined by a calibration procedure, I mean the angles and the lever arm (x, y, z). I have the same problem in the GUI, but I hoped I could get it done through the Python API.
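
What I would expect to work, sketched below: Sensor objects expose location / rotation for the slave offsets plus fixed_location / fixed_rotation flags. The exact types and units (metres for the lever arm, degrees for the angles) are my assumption, so check them against the API reference for your version:

Code:
import Metashape

chunk = Metashape.app.document.chunk

for sensor in chunk.sensors:
    if sensor.master == sensor:
        continue  # skip the master sensor itself
    # Hypothetical calibrated offsets: lever arm in metres,
    # rotation offset angles in degrees.
    sensor.location = Metashape.Vector([0.10, 0.00, 0.02])
    sensor.rotation = Metashape.Vector([180.0, 0.0, 90.0])
    # Pin the offsets so alignCameras / optimizeCameras keeps them fixed.
    sensor.fixed_location = True
    sensor.fixed_rotation = True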

4
Python and Java API / Import feature points / matches
« on: February 28, 2023, 07:31:48 PM »
Hi,

I'm trying to find in the Python API a way to import matches. I'm experimenting with other feature detection and matching algorithms, and I would like to bring these matches back into Metashape if possible.

Is there a simple way to use the Python API to do so? What would be the easiest and fastest way to import them through a Python script?

5
General / How to merge tiled models
« on: December 21, 2022, 10:41:16 PM »
Hi all!

Because of the size of the dataset I was reconstructing, I split the whole dataset (the mesh, to be specific) into smaller subparts, each of which was tiled individually with its own cameras.

I would like to merge them back, but so far it is not clear how I can merge them into a single tiled model, although for some reason I thought it could be done quite easily. It seems to be doable only if you keep working on the same tiled model and tick the "merge" checkbox during a new tiling run.

My goal is to merge 4 big tiled models into a single bigger one somehow in Metashape, without spending much time re-running the tiling for these parts. Is that doable?

6
Hi all,

I'm playing around with Tiled model generation for a big project.
I've noticed that using, for example, 3 mm precision instead of 5 mm significantly increases the computation time: it makes it about 4 times slower.

I generate the tiled model from a mesh, and from what I observe, Metashape spends most of the time selecting cameras, at this stage:
Quote
Processing window [144, 48, 16] - [160, 64, 32]
filtered 2812 cameras by frustrum and 549 by depth out of 3446
selected 85 cameras in 359.183 sec
During this stage the CPU load is at 100%.

Are there any ways to speed up the process, or maybe to estimate how long it will take? Perhaps making the tiled model less granular (with a different tile size, for example) would make it faster?

Maybe there is some jump in complexity if you select a texture precision below some value when the scene has objects at various distances from the camera (I have ground data and aerial data merged in the same model)?
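
For reference, this is roughly how I build it; a sketch assuming buildTiledModel takes pixel_size and tile_size as in the API reference, with the values from my experiments:

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Build the tiled model from the existing mesh. pixel_size is the texture
# precision in metres discussed above: 0.005 (5 mm) instead of 0.003 (3 mm)
# was roughly 4x faster for me. tile_size controls the granularity.
chunk.buildTiledModel(
    source_data=Metashape.DataSource.ModelData,  # use the mesh as source
    pixel_size=0.005,
    tile_size=256,
)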

7
I'm trying to predefine the reconstruction region by selecting a rectangular area on a map. From this area I have two diagonal corner points in the format (lat0, lon0, alt0) and (lat1, lon1, alt1).

Based on this I would like to define some projection which, in the next step, will give me a region if I also supply some height for the bounding box. The idea is to get the reconstruction region from the map, without running Metashape's GUI for it. I've tried to find possible solutions, but haven't found anything suitable. The closest topic I found so far is here:
https://www.agisoft.com/forum/index.php?topic=13222.0

But I would like to understand this conversion from (lat, lon, height) into local coordinates (x, y, z) better, for a selected datum like WGS 84. Is there any guide about such transforms in Metashape?

For example, I know that Metashape can calculate the ruler's endpoints into (lat, lon, height) quite quickly. I would like to know how it does this transformation, which functions are applied, and, most importantly, how to reverse this transform, if it is not obvious to derive from the forward direction.
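
From what I've pieced together so far, here is a sketch of both directions, assuming a geographic chunk CRS such as WGS 84; the corner coordinates are hypothetical, and the region box at the end ignores the region rotation:

Code:
import Metashape

chunk = Metashape.app.document.chunk
crs = chunk.crs                 # e.g. WGS 84
T = chunk.transform.matrix      # chunk-internal -> geocentric (ECEF)

def geo_to_internal(lon, lat, alt):
    # Geographic -> geocentric -> chunk-internal; note the (lon, lat, alt) order.
    return T.inv().mulp(crs.unproject(Metashape.Vector([lon, lat, alt])))

def internal_to_geo(point):
    # The reverse transform, i.e. what the ruler shows: internal -> (lon, lat, alt).
    return crs.project(T.mulp(point))

# Hypothetical diagonal corners of the rectangle picked on the map.
lat0, lon0, alt0 = 48.100, 11.500, 500.0
lat1, lon1, alt1 = 48.101, 11.502, 550.0

p0 = geo_to_internal(lon0, lat0, alt0)
p1 = geo_to_internal(lon1, lat1, alt1)

region = chunk.region
region.center = (p0 + p1) / 2
# region.size is measured along the region.rot axes in internal units,
# so this axis-aligned box is only correct if region.rot suits your scene.
region.size = Metashape.Vector([abs(p1[0] - p0[0]),
                                abs(p1[1] - p0[1]),
                                abs(p1[2] - p0[2])])
chunk.region = region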

8
Python and Java API / Camera Rig configuration through Python API
« on: May 26, 2022, 11:53:31 AM »
Hi!

I didn't find any way to set up the rig, including the offsets, through the Python API.
May I ask where to look? I have a precalibrated camera rig to experiment with, but so far I have to load the whole configuration into Metashape manually every time.

Also, are there functions to call if I want to load all the cameras as a rig, in the same manner as when I click "Load Folder" and select the rig configuration?
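
For the loading part, this is the sketch I would try, assuming addPhotos accepts filegroups and Metashape.MultiplaneLayout as described in the API reference; the paths and the grouping below are hypothetical:

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Files ordered station by station: all planes of rig position 1,
# then all planes of rig position 2, and so on.
images = ["pos1/cam0.jpg", "pos1/cam1.jpg",
          "pos2/cam0.jpg", "pos2/cam1.jpg"]

# filegroups lists the number of files in each multiplane group
# (here: 2 planes per rig position); MultiplaneLayout tells Metashape
# the groups are synced planes of one rig.
chunk.addPhotos(filenames=images,
                filegroups=[2, 2],
                layout=Metashape.MultiplaneLayout)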

9
Hi!

I'm trying to set up the texturing of big models with decent texture quality, and the solution I found is to divide an already meshed model into smaller meshes, texture them separately with the maximum atlas that fits into the GPU, and merge them afterwards.

It has worked out well so far, but there is a single issue which bothers me a lot.

It seems that the only way to select part of the mesh from a script is to use a Shape; there is no other way to subdivide a mesh, is that correct? For example, I can subdivide the sparse point cloud using smaller regions which do have a 3D form, as it is done in split_in_chunks_dialog.py. There we have full control (up to a rectangular cuboid) in 3D. But with Shapes I'm only able to select x and y and have them projected onto some "common plane". I believe that plane is the one the model is referenced to; if I set the reference frame to local, it is the grid I see in the viewer.

Problems arise when I use georeferenced models. Sometimes the model is tilted with respect to this grid, and when dividing the mesh by Shapes it gets projected onto this inclined reference frame. To solve it I need to switch to a local reference, align the model with the grid axes, and only then do the division.

My questions are:
1. Is there a way to select parts of a mesh and divide it other than by Shapes? I can split the sparse point cloud into chunks using .region, but splitting the mesh requires Shapes (a possible workaround is sketched after this list).
2. I don't think I understand Shapes fully. I see that I'm able to draw them in 3D, but in the end every shape is projected onto some plane: the image plane or the reference plane. Is there a way to have some sort of "3D Shapes" without being forced to drop one of the three coordinates? I've read both the Python API and the Pro documentation, and I still don't have a full picture of the selection capabilities, Shapes especially.
3. I would also like to select and switch cameras on and off based on their proximity to subparts of the mesh. For this I would like to use a spherical selection, but it seems that so far only a cylindrical selection based on Shapes is available.
4. Is it possible to use the same selection tool as in the GUI from Python? Pick a center point, draw a circle, project it based on the current view, and select cameras/polygons to operate on.
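
To make question 1 (and the spherical selection from question 3) concrete, here is the kind of direct selection I have in mind; a sketch in chunk-internal coordinates with hypothetical box and radius values:

Code:
import Metashape

chunk = Metashape.app.document.chunk
model = chunk.model

# Hypothetical axis-aligned box in chunk-internal coordinates;
# a true 3D selection, no projection onto a reference plane.
box_min = Metashape.Vector([-1.0, -1.0, -1.0])
box_max = Metashape.Vector([1.0, 1.0, 1.0])

vertices = model.vertices
for face in model.faces:
    # Triangle centroid in internal coordinates.
    c = (vertices[face.vertices[0]].coord +
         vertices[face.vertices[1]].coord +
         vertices[face.vertices[2]].coord) / 3
    face.selected = all(box_min[i] <= c[i] <= box_max[i] for i in range(3))

# The selected part can then be cropped or removed,
# e.g. model.removeSelection() to delete it.

# Spherical camera selection by proximity (question 3): enable only
# the cameras within a hypothetical radius of the box center.
center = (box_min + box_max) / 2
radius = 2.0  # internal units
for camera in chunk.cameras:
    if camera.center is not None:  # camera position in internal coordinates
        camera.enabled = (camera.center - center).norm() < radius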
