

Messages - jrh87

1
General / Re: Poor reconstruction
« on: March 22, 2025, 03:49:29 AM »
I am adding an image to better illustrate the case. It shows two vertical passes taken with little overlap in the middle. Due to obstructions, the middle area cannot be captured. But, as mentioned, I believe Metashape should still be able to stitch both sides of the facade; I am just not sure about the best settings for this case.

2
General / Re: Poor reconstruction
« on: March 20, 2025, 04:31:52 PM »
I have noticed that Metashape tends to produce poor results (i.e. two disconnected areas) compared to other 3D engines for building facades with an obstacle in the middle, where images can only be taken from the left and right sides with little overlap between them, even when top-down images are added to cover the entire facade. Are there any settings that could help improve the reconstruction?

I am using these settings:

chunk.addPhotos(images)  #, Metashape.MultiplaneLayout)
chunk.matchPhotos(downscale=2, generic_preselection=True, reference_preselection=False)  # 2 = medium accuracy
chunk.alignCameras()
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.AggressiveFiltering)  # 4 = medium quality
chunk.buildModel(source_data=Metashape.DepthMapsData, surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)
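
For reference, a variation I am considering trying next. guided_matching, keypoint_limit and tiepoint_limit are real matchPhotos parameters, but the specific values are only my guess at a reasonable starting point for the weak-overlap case:

# Full-resolution matching with guided matching enabled (untested values)
chunk.matchPhotos(downscale=1,                   # 1 = high accuracy
                  generic_preselection=True,
                  reference_preselection=True,   # use the GPS reference to preselect pairs
                  guided_matching=True,          # denser matching, may help weak overlap
                  keypoint_limit=60000,          # assumed limits, not tuned
                  tiepoint_limit=0)              # 0 = unlimited tie points
chunk.alignCameras()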

3
General / Re: Segmentation
« on: March 20, 2025, 03:53:44 AM »
Yes, given a 3D model containing different buildings, I need some automated way to extract each building separately. Note that the buildings may overlap with each other, so they mostly differ by height, texture, etc.

4
General / Re: Segmentation
« on: March 19, 2025, 04:46:10 PM »
I was looking for something automated, as in either changing the settings or using some AI segmentation models.

5
General / Segmentation
« on: March 18, 2025, 06:48:26 PM »
Is there any way to apply auto-segmentation to a 3D model? For example, I am taking drone pictures of a building, but other buildings in the background also appear in the scene, and I would like a method to automatically segment only the building of interest and remove the rest. Alternatively, can Metashape generate the 3D model only for the area at the front and skip the background (e.g. some sort of masking)?
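
One alternative I am considering is shrinking the reconstruction region (the chunk's bounding box) before building depth maps, so the background is never meshed in the first place. A minimal sketch, assuming the building of interest sits near the region center (the 0.5 scale factor is arbitrary):

# Shrink the chunk's region so only the foreground building gets reconstructed
region = chunk.region
region.size = Metashape.Vector([s * 0.5 for s in region.size])  # halve each extent
chunk.region = region  # geometry outside this box is ignored during reconstruction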

6
Python and Java API / Re: Orientating mesh as per camera view
« on: March 18, 2025, 09:25:26 AM »
This should make it then:

import numpy as np

T1 = np.array(camera.transform).reshape(4, 4)[:3, :3]        # camera-to-chunk rotation
T2 = np.array(chunk.transform.matrix).reshape(4, 4)[:3, :3]  # chunk-to-world rotation
rotation_matrix = (T2.dot(T1)).T                             # transpose inverts the combined rotation
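
To then apply it, a minimal sketch of rotating exported mesh vertices with numpy; the (N, 3) vertices array is a hypothetical input, however you load the OBJ:

# Rotate every vertex into the camera-aligned frame
# vertices: hypothetical (N, 3) numpy array of mesh vertex positions
rotated = vertices @ rotation_matrix.T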

7
General / Re: Poor reconstruction
« on: March 14, 2025, 08:42:31 AM »
Thanks! I have done so and, while significantly better, the reconstruction is still not ideal.

Reference preselection refers to using GPS coordinates to better find matches between images, is that right?

For downscale, I take it that using 1 or 2 refers to matchPhotos. I assume buildDepthMaps can use a larger downscale for efficiency (e.g. 4)?
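
For reference, the combination I am converging on; the exact values are my own quality/runtime tradeoff:

# Higher-accuracy matching, coarser depth maps for speed (assumed tradeoff)
chunk.matchPhotos(downscale=2,                  # 2 = medium accuracy matching
                  generic_preselection=True,
                  reference_preselection=True)  # preselect pairs from the GPS reference
chunk.alignCameras()
chunk.buildDepthMaps(downscale=4,               # 4 = medium-quality depth maps
                     filter_mode=Metashape.AggressiveFiltering)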

8
Python and Java API / Orientating mesh as per camera view
« on: March 13, 2025, 01:20:08 PM »
Hello,

I am capturing a building facade using drone images. From the resulting 3D model, I would like to obtain a planar view of the facade as per the drone view. How could I orientate the mesh accordingly?

My current approach: for one of the (5280x3956) pictures looking straight at the facade, I unproject 3 image points to build a plane, then try to create the rotation matrix from them as per the script below. But the results are wrong.

import Metashape
import numpy as np

doc = Metashape.Document()
doc.open(path=project_path)
chunk = doc.chunk

chunk.crs = None # Set <CoordinateSystem 'Local Coordinates (m)'>
T = chunk.transform.matrix # Get the transformation matrix
origin = Metashape.Vector([T[0, 3], T[1, 3], T[2, 3]])
T = Metashape.Matrix.Translation(-origin)*T
chunk.transform.matrix = T  # Preserve transformations

for camera in chunk.cameras:

    if camera.label == 'DJI_0268_V':

        T = chunk.transform.matrix

        p1 = camera.unproject(Metashape.Vector([0,0]))
        p2 = camera.unproject(Metashape.Vector([5280,0]))
        p3 = camera.unproject(Metashape.Vector([0,3956]))

        ortho = np.cross(np.array(p2-p1), np.array(p3-p1))

        p1 = p1/np.linalg.norm(p1)
        p2 = p2/np.linalg.norm(p2)
        p3 = Metashape.Vector(ortho/np.linalg.norm(ortho))

        p1 = chunk.crs.project(T.mulp(p1))
        p2 = chunk.crs.project(T.mulp(p2))
        p3 = chunk.crs.project(T.mulp(p3))

        rotation_matrix = np.column_stack((p1, p2, p3))

        break

Any suggestions?
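
One suspicion I have: I normalize the points p1 and p2 themselves instead of the edge directions lying in the facade plane. A sketch of the construction I think was intended, using the unprojected corners before any normalization (still untested):

# Build orthonormal axes from the facade's edge directions, not point positions
right = np.array(p2 - p1)          # along the image x axis
down = np.array(p3 - p1)           # along the image y axis
normal = np.cross(right, down)     # facade normal

right = right / np.linalg.norm(right)
normal = normal / np.linalg.norm(normal)
down = np.cross(normal, right)     # re-orthogonalized third axis

rotation_matrix = np.column_stack((right, down, normal))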

9
General / Poor reconstruction
« on: March 12, 2025, 05:49:52 AM »
I am often encountering scenarios where the reconstruction fails. For example, having taken pictures of a single building facade, the resulting reconstruction comes out with multiple parts that are not properly aligned / connected (see screenshot). Does anyone have any suggestions as to what's going wrong? The script I use is as follows:

import Metashape
import os

img_path = '/Users/User/Desktop/Data/images/'
project_path = '/Users/User/Desktop/Data/project.psz'

# Create project
doc = Metashape.Document()
doc.save(path=project_path)
chunk = doc.addChunk()

# Get images
images = []
for root, _, files in os.walk(img_path):
    for file in files:
        if file.endswith(".JPG"):
            images.append(os.path.join(root, file))

# Processing
chunk.addPhotos(images)  #, Metashape.MultiplaneLayout)
chunk.matchPhotos(downscale=4, generic_preselection=True, reference_preselection=True)  # 4 = low accuracy
chunk.alignCameras()
chunk.buildDepthMaps(downscale=8, filter_mode=Metashape.AggressiveFiltering)  # 8 = low quality
chunk.buildModel(source_data=Metashape.DepthMapsData, surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)

# Save
doc.save()

10
Thanks for the suggestion, Paulo. Looks like the script below does the trick ;)

chunk.crs = None # Set the local CRS <CoordinateSystem 'Local Coordinates (m)'>
T = chunk.transform.matrix # Get the transformation matrix
origin = Metashape.Vector([T[0, 3], T[1, 3], T[2, 3]])
T = Metashape.Matrix.Translation(-origin)*T
chunk.transform.matrix = T  # Preserve transformations

chunk.exportModel(path=mesh_path,
                  format=Metashape.ModelFormatOBJ,
                  texture_format=Metashape.ImageFormat.ImageFormatJPEG,
                  save_texture=True,
                  crs=chunk.crs)

11
Hi Paulo,

The chunk.crs is <CoordinateSystem 'WGS 84 (EPSG::4326)'>

I am exporting my model using chunk.exportModel(path=mesh_path, format=Metashape.ModelFormatOBJ, texture_format=Metashape.ImageFormat.ImageFormatJPEG, save_texture=True).

Thanks!

12
I want to project pixel coordinates from an image onto the 3D mesh model as per the script below. However, no matter the approach or the conversions, it does not return the results I see when opening the mesh and selecting a point in either CloudCompare or Open3D (as per the attached screenshot). What am I missing?

import Metashape

mesh_path = "model.obj"

def project_pixel_to_mesh(camera, pixel_coord, chunk):

    mesh = chunk.model  # Get the 3D model (mesh)

    T = chunk.transform.matrix
    pt_2d = Metashape.Vector(pixel_coord)
    pt_3d = mesh.pickPoint(camera.center, camera.unproject(pt_2d))
    pt_3d_world = chunk.crs.project(T.mulp(pt_3d))
    print('Camera_center', camera.center)
    print('pt_3d', pt_3d)
    print('pt_3d_world', pt_3d_world)

doc = Metashape.Document()
doc.open(path="test.psx")
chunk = doc.chunk
camera = chunk.cameras[0]
pixel_coord = (4000, 3000)  # Example pixel coordinates
project_pixel_to_mesh(camera, pixel_coord, chunk)

The console output is:

Camera_center Vector([0.25185996974317976, -9.252179636717418, -2.411546712801472])
pt_3d Vector([1.6831388473510742, -9.926933288574219, -15.181743621826172])
pt_3d_world Vector([113.96841070820898, 22.387860599545277, 89.6002808787151])

Whereas the projected coordinates should be something close to: [-21, -23, 85]
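
For reference, since chunk.crs here is geographic (WGS 84), pt_3d_world comes out as longitude/latitude/altitude, which will never match the metric picks from CloudCompare/Open3D. A quick way to inspect where the frames diverge (my assumption about where the mismatch enters):

# Inspect each frame: internal -> transformed -> projected (lon/lat/alt for WGS 84)
pt_internal = pt_3d                          # chunk-internal coordinates
pt_world = T.mulp(pt_3d)                     # after chunk.transform.matrix
pt_projected = chunk.crs.project(pt_world)   # geographic coordinates
print(pt_internal, pt_world, pt_projected, sep='\n')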
