Forum

Show Posts



Topics - jrh87

1
General / Segmentation
« on: March 18, 2025, 06:48:26 PM »
Is there any way to apply auto-segmentation to a 3D model? For example, I am taking drone pictures of a building, but other buildings in the background appear in the scene, and I would like a method to automatically segment only the building of interest and remove the rest. Alternatively, can Metashape generate the 3D model only for the area in the foreground and not for the background (e.g. through some sort of masking)?
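For illustration of the masking idea only: a per-image mask is just a per-pixel keep/discard array applied to each photo, so that background pixels never contribute to matching or reconstruction. Metashape imports masks through its own tools; the tiny arrays below are a hypothetical stand-in for a real segmentation, sketched in plain numpy.

```python
import numpy as np

def apply_mask(image, mask):
    """Zero out pixels outside the region of interest.

    image: (H, W, 3) uint8 array; mask: (H, W) boolean array, True = keep.
    """
    return image * mask[:, :, None]

# Toy 4x4 "photo", all white, with a 2x2 "building of interest" kept
# in the top-left corner and everything else discarded.
image = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

masked = apply_mask(image, mask)
```

In practice the mask would come from a segmentation model run on each drone image before import, rather than being hand-built as here.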

2
Python and Java API / Orientating mesh as per camera view
« on: March 13, 2025, 01:20:08 PM »
Hello,

I am capturing a building facade using drone images. From the resulting 3D model, I would like to obtain a planar view of the facade matching the drone's view. How could I orient the mesh accordingly?

My current approach: for one of the (5280x3956) pictures looking straight at the facade, I unproject three points to define a plane, then build the corresponding rotation matrix as in the script below. But the results are wrong.

import Metashape
import numpy as np

project_path = "project.psx"  # placeholder: path to your Metashape project

doc = Metashape.Document()
doc.open(path=project_path)
chunk = doc.chunk

chunk.crs = None  # switch to <CoordinateSystem 'Local Coordinates (m)'>
T = chunk.transform.matrix  # chunk transformation matrix
origin = Metashape.Vector([T[0, 3], T[1, 3], T[2, 3]])
T = Metashape.Matrix.Translation(-origin) * T
chunk.transform.matrix = T  # preserve the transformation, re-centred at the origin

for camera in chunk.cameras:
    if camera.label == 'DJI_0268_V':
        T = chunk.transform.matrix

        # Unproject three image corners to 3D points in internal coordinates
        p1 = camera.unproject(Metashape.Vector([0, 0]))
        p2 = camera.unproject(Metashape.Vector([5280, 0]))
        p3 = camera.unproject(Metashape.Vector([0, 3956]))

        # Normal of the plane spanned by the three points
        ortho = np.cross(np.array(p2 - p1), np.array(p3 - p1))

        p1 = p1 / np.linalg.norm(p1)
        p2 = p2 / np.linalg.norm(p2)
        p3 = Metashape.Vector(ortho / np.linalg.norm(ortho))

        # chunk.crs is None after the assignment above, so fall back to
        # the chunk transform alone to avoid an AttributeError
        p1 = chunk.crs.project(T.mulp(p1)) if chunk.crs else T.mulp(p1)
        p2 = chunk.crs.project(T.mulp(p2)) if chunk.crs else T.mulp(p2)
        p3 = chunk.crs.project(T.mulp(p3)) if chunk.crs else T.mulp(p3)

        rotation_matrix = np.column_stack((p1, p2, p3))
        break

Any suggestions?
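For reference, a rotation whose axes span a plane through three points is usually built from two in-plane direction vectors and their normal, each normalised after subtracting the common origin (note the difference from the script above, which normalises the points themselves rather than the direction vectors). A minimal sketch in plain numpy, independent of Metashape:

```python
import numpy as np

def plane_rotation(p1, p2, p3):
    """Rotation matrix whose columns are the in-plane x-axis, the
    in-plane y-axis, and the plane normal.

    p1 is the origin; p2 and p3 lie along the image x and y directions.
    """
    x = p2 - p1
    x = x / np.linalg.norm(x)
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    y = np.cross(n, x)  # completes a right-handed orthonormal basis
    return np.column_stack((x, y, n))

# Sanity check: three points on the z = 0 plane, axis-aligned, so the
# resulting rotation should be the identity.
R = plane_rotation(np.array([0.0, 0.0, 0.0]),
                   np.array([1.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]))
```

A rotation matrix built this way is orthonormal by construction (R.T @ R is the identity), which the column-stack of three normalised points in the original script generally is not.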

3
General / Poor reconstruction
« on: March 12, 2025, 05:49:52 AM »
I often encounter scenarios where the reconstruction fails. For example, having taken pictures of a single building facade, the resulting reconstruction comes out with multiple parts not properly aligned/connected (see screenshot). Does anyone have any suggestions as to what is going wrong? The script I use is as follows:

import Metashape
import os

img_path = '/Users/User/Desktop/Data/images/'
project_path = '/Users/User/Desktop/Data/project.psz'

# Create project
doc = Metashape.Document()
doc.save(path=project_path)
chunk = doc.addChunk()

# Get images
images = []
for root, _, files in os.walk(img_path):
    for file in files:
        if file.lower().endswith(".jpg"):  # case-insensitive extension match
            images.append(os.path.join(root, file))

# Processing
chunk.addPhotos(images) #, Metashape.MultiplaneLayout)
chunk.matchPhotos(downscale=4, generic_preselection=True, reference_preselection=True)
chunk.alignCameras()
chunk.buildDepthMaps(downscale=8, filter_mode=Metashape.AggressiveFiltering)
chunk.buildModel(source_data=Metashape.DepthMapsData, surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)

# Save
doc.save()

4
I want to project pixel coordinates from an image to the 3D mesh model as per the script below. However, no matter the approach or the conversions, it does not return the result I see when opening the mesh and selecting the point in CloudCompare or Open3D (as per the attached screenshot). What am I missing?

import Metashape

mesh_path = "model.obj"

def project_pixel_to_mesh(camera, pixel_coord, chunk):

    mesh = chunk.model  # Get the 3D model (mesh)

    T = chunk.transform.matrix
    pt_2d = Metashape.Vector(pixel_coord)
    pt_3d = mesh.pickPoint(camera.center, camera.unproject(pt_2d))
    pt_3d_world = chunk.crs.project(T.mulp(pt_3d))
    print('Camera_center', camera.center)
    print('pt_3d', pt_3d)
    print('pt_3d_world', pt_3d_world)

doc = Metashape.Document()
doc.open(path="test.psx")
chunk = doc.chunk
camera = chunk.cameras[0]
pixel_coord = (4000, 3000)  # Example pixel coordinates
project_pixel_to_mesh(camera, pixel_coord, chunk)

The console output is:

Camera_center Vector([0.25185996974317976, -9.252179636717418, -2.411546712801472])
pt_3d Vector([1.6831388473510742, -9.926933288574219, -15.181743621826172])
pt_3d_world Vector([113.96841070820898, 22.387860599545277, 89.6002808787151])

Whereas the projected coordinates should be something close to: [-21, -23, 85]
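For context, pickPoint is a ray/mesh intersection computed in the chunk's internal coordinates. The classic Möller–Trumbore ray–triangle test captures what such an intersection does for a single triangle; this is only an illustration of the geometry in plain numpy, not Metashape's implementation:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore: return the point where a ray hits the triangle
    (v0, v1, v2), or None if the ray misses it."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:          # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:    # outside the triangle (first barycentric coord)
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside the triangle (second coord)
        return None
    t = f * np.dot(e2, q)     # distance along the ray
    return origin + t * direction if t > eps else None

# Ray pointing straight down onto a unit triangle in the z = 0 plane;
# the hit is (0.25, 0.25, 0).
hit = ray_triangle(np.array([0.25, 0.25, 1.0]), np.array([0.0, 0.0, -1.0]),
                   np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]))
```

One thing to check in the script above: the hit returned by pickPoint is in internal coordinates, and the exported model.obj that CloudCompare/Open3D open may be in local rather than geographic coordinates, in which case the value to compare could be T.mulp(pt_3d) alone, without the chunk.crs.project step.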
