

Topics - ilia

1
General / How to merge tiled models
« on: December 21, 2022, 10:41:16 PM »
Hi all!

Because of the size of the dataset I was reconstructing, I split the whole dataset (the mesh, to be specific) into smaller subparts, each of which was tiled individually with its own cameras.

I would like to merge them back. So far it is not clear how I can merge them into a single tiled model, although for some reason I thought it could be done quite easily. It seems to be doable only if you keep working on the same tiled model and tick the "merge" checkbox during a new tiling run.

My goal is to merge 4 big tiled models into a single bigger one in Metashape, without spending much time on re-running tiling for these parts. Is that doable?
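
To make it concrete, here is the kind of call I was hoping to write, with each subpart sitting in its own chunk. To be clear, the merge_tiled_models flag is purely my invention: I only see flags like merge_models and merge_markers documented for Document.mergeChunks(), so treat this as the call I wish existed rather than something that works today.

Code
import Metashape

doc = Metashape.app.document

# the four chunks holding the individually tiled subparts
parts = [chunk for chunk in doc.chunks if chunk.label.startswith("part_")]

# HYPOTHETICAL: 'merge_tiled_models' is not a documented parameter of
# mergeChunks() as far as I can tell -- this is the call I wish existed
doc.mergeChunks(chunks=[chunk.key for chunk in parts],
                merge_tiled_models=True)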

2
Hi all,

I'm playing around with Tiled model generation for a big project.
I've noticed that using, for example, 3 mm precision instead of 5 mm significantly increases the computation time, roughly by a factor of 4.

I generate the tiled model from a mesh, and from what I observe, Metashape spends most of the time selecting cameras, at this stage:
Quote
Processing window [144, 48, 16] - [160, 64, 32]
filtered 2812 cameras by frustrum and 549 by depth out of 3446
selected 85 cameras in 359.183 sec
During this stage the CPU load is at 100%.

Are there any ways to speed up the process, or at least to estimate how long it will take? Would making the tiled model less granular (a different tile size, for example) make it faster?

Or is there some jump in complexity when you select a texture precision below a certain value on a scene with objects at various distances from the camera (I have ground data and aerial data merged in the same model)?
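
For reference, this is roughly how I time the comparison from a script, assuming the active chunk already holds the source mesh; the parameter names are as I read them in the buildTiledModel() reference.

Code
import time
import Metashape

chunk = Metashape.app.document.chunk   # active chunk with the source mesh

# compare tiled-model build time for two precisions (pixel sizes in metres)
for pixel_size in (0.005, 0.003):      # 5 mm vs 3 mm
    t0 = time.time()
    chunk.buildTiledModel(source_data=Metashape.DataSource.ModelData,
                          pixel_size=pixel_size,
                          tile_size=256)
    print("pixel_size = %.3f m: %.1f s" % (pixel_size, time.time() - t0))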

3
I'm trying to predefine the reconstruction region by selecting a rectangular area on a map. From this area I get two diagonal corner points in the format (lat0, lon0, alt0) and (lat1, lon1, alt1).

Based on this, I would like to define a projection that, in the next step, will give a region once I also supply a height for the bounding box. The idea is to set the reconstruction region from the map without running Metashape's GUI. I've tried to find possible solutions, but haven't found anything suitable. The closest topic I found so far is here:
https://www.agisoft.com/forum/index.php?topic=13222.0

But I would like to better understand the conversion from (lat, lon, height) in a selected CRS like WGS 84 into the model's local coordinates (x, y, z). Is there any guide to such transforms in Metashape?

For example, I know that Metashape can very quickly convert the ruler's endpoints into (lat, lon, height). I would like to know how it does this transformation, which functions are applied, and, most importantly, how to reverse it, if the inverse is not obvious to derive from the forward transform.
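
To show where I currently stand, here is my sketch of what I believe the chain is: the chunk CRS unprojects (lon, lat, alt) to geocentric coordinates, and the inverse of chunk.transform.matrix then maps geocentric into the chunk-internal frame where the region lives. The corner values are placeholders, and the axis-aligned box will only match the map rectangle if the internal frame is roughly aligned with it.

Code
import Metashape

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix        # chunk-internal -> geocentric (ECEF)
crs = chunk.crs                   # e.g. WGS 84

def geo_to_internal(lon, lat, alt):
    # CRS coords -> geocentric -> chunk-internal; note the (lon, lat, alt) order
    return T.inv().mulp(crs.unproject(Metashape.Vector([lon, lat, alt])))

def internal_to_geo(p):
    # the reverse chain, as I understand it (what the ruler must be doing)
    return crs.project(T.mulp(p))

# placeholder corner values from the map selection
lat0, lon0, alt0 = 59.000, 30.000, 0.0
lat1, lon1, alt1 = 59.002, 30.004, 40.0

p0 = geo_to_internal(lon0, lat0, alt0)
p1 = geo_to_internal(lon1, lat1, alt1)

region = chunk.region
region.rot = Metashape.Matrix.Diag([1, 1, 1])   # axis-aligned in the internal frame
region.center = (p0 + p1) * 0.5
region.size = Metashape.Vector([abs(p1.x - p0.x),
                                abs(p1.y - p0.y),
                                abs(p1.z - p0.z)])
chunk.region = region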

4
Python and Java API / Camera Rig configuration through Python API
« on: May 26, 2022, 11:53:31 AM »
Hi!

I couldn't find a way to set up a camera rig, including offsets, through the Python API.
Where should I look? I have a pre-calibrated camera rig to experiment with, but I have to manually load the whole configuration in Metashape every time.

Also, are there functions I can call to load all cameras as a rig, the same way as when I click "Add Folder" and select the rig configuration?
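
So far my best guess, pieced together from the API reference and unverified, looks like this; the layout value and the use of Sensor.master / Sensor.location / Sensor.rotation for slave offsets are assumptions on my part.

Code
import Metashape

chunk = Metashape.app.document.chunk

image_paths = ["cam0/img_0001.jpg", "cam1/img_0001.jpg"]   # placeholders

# ASSUMPTION: MultiplaneLayout is the scripted counterpart of the
# multi-camera option in the "Add Folder" dialog -- not verified
chunk.addPhotos(filenames=image_paths, layout=Metashape.MultiplaneLayout)

master = chunk.sensors[0]
for sensor in chunk.sensors[1:]:
    sensor.master = master                    # link slave sensor to the master
    # ASSUMPTION: pre-calibrated slave offsets relative to the master;
    # location in metres, rotation as angles in degrees
    sensor.location = Metashape.Vector([0.10, 0.0, 0.0])
    sensor.rotation = Metashape.Vector([0.0, 0.0, 0.0])
    sensor.fixed_location = True              # keep the offsets fixed ...
    sensor.fixed_rotation = True              # ... during optimization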

5
Hi!

I'm trying to set up texturing of big models with decent texture quality. The solution I found is to divide an already meshed model into smaller meshes, texture each of them separately with the maximum atlas that fits into the GPU, and merge them afterwards.

This has worked out well so far, but there is a single issue which bothers me a lot.
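
For context, the per-part texturing step itself is simple; a minimal sketch, assuming a single 16K atlas page fits in GPU memory:

Code
import Metashape

chunk = Metashape.app.document.chunk   # chunk holding one mesh subpart

# UV-map and texture one subpart into a single large atlas page;
# 16384 is an assumption -- tune to whatever fits your GPU memory
chunk.buildUV(mapping_mode=Metashape.GenericMapping,
              page_count=1, texture_size=16384)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                   texture_size=16384)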

It seems the only way to select part of the mesh from a script is to use a Shape; there is no other way to subdivide a mesh, is that correct? For example, I can subdivide a sparse point cloud by defining smaller regions that have a true 3D form, as done in split_in_chunks_dialog.py; there we have full control (up to a rectangular cuboid) in 3D. But with Shapes I'm only able to select x and y, which are projected onto some "common plane". I believe this plane belongs to the coordinate system the model is referenced to, and if I set the reference frame to local, it is the grid I see in the viewer.

Problems arise when I use georeferenced models. Sometimes the model is tilted relative to this grid, and when dividing the mesh by Shapes, it gets projected onto this inclined reference frame. To solve it, I need to switch to the local reference frame, align the model with the grid axes, and only then do the division.

My questions are:
1. Is there a way to select parts of a mesh and divide it other than by Shapes? I can split a sparse point cloud into chunks using .region, but splitting a mesh seems to require Shapes.
2. I don't think I fully understand Shapes. I see that I can draw them in 3D, but in the end every shape is projected onto some plane: the image plane or the reference plane. Is there a way to get true "3D shapes" without being forced to project away one of the three coordinates? I have read both the Python API and the Pro documentation and still don't have a full picture of these selection capabilities, Shapes especially.
3. I would also like to select and switch cameras on and off based on their proximity to subparts of the mesh. For this I would like some spherical selection, but so far it seems I can only get cylindrical selection based on Shapes (see the sketch after this list for how I imagine scripting it).
4. Is it somehow possible to use the same selection tool as in the GUI from Python? Select a center point, draw a circle, project it based on the current view, and select cameras/polygons to operate on.
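
For question 3, this is the Shape-free workaround I have in mind: measure each camera's distance to a subpart's centre directly in the chunk-internal frame and disable the distant ones. The centre and radius are placeholders, and I'm assuming chunk.transform.scale relates internal units to metres.

Code
import Metashape

chunk = Metashape.app.document.chunk

center = Metashape.Vector([0.0, 0.0, 0.0])  # subpart centre, internal coords (placeholder)
radius_m = 50.0                             # selection radius in metres (placeholder)
scale = chunk.transform.scale or 1.0        # internal units -> metres, if georeferenced
radius = radius_m / scale                   # radius expressed in internal units

for camera in chunk.cameras:
    if camera.center is None:               # skip cameras that are not aligned
        continue
    camera.enabled = (camera.center - center).norm() <= radius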
