
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - glennk

1
General / Re: Transfer camera parameters to a minimal OpenGL renderer
« on: February 15, 2023, 06:03:52 PM »
Thank you Alexey,

I ended up using this guide: https://strawlab.org/2011/11/05/augmented-reality-with-OpenGL/

I had to transpose the matrix before using it.
Also, since cx and cy are already centered (offsets from the image centre), the guide's terms simplify:
(width - 2*cx)/width -> -2*cx/width
and
(-height + 2*cy)/height -> 2*cy/height
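
For reference, the resulting projection matrix can be sketched in numpy (a minimal sketch: the near/far clip values are placeholders, the layout follows the linked guide with the centred cx/cy terms above, and the final transpose is the one mentioned, needed because OpenGL expects column-major storage):

```python
import numpy as np

def opengl_projection(fx, fy, cx, cy, width, height, near=0.01, far=100.0):
    """OpenGL projection from pinhole intrinsics.

    Assumes cx, cy are offsets from the image centre, hence the
    -2*cx/width and 2*cy/height terms instead of the guide's originals.
    """
    P = np.array([
        [2 * fx / width, 0.0, -2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
    # Transpose before uploading: OpenGL reads matrices column-major.
    return P.T
```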

2
General / Transfer camera parameters to a minimal OpenGL renderer
« on: February 14, 2023, 01:41:46 PM »
Hello,

I want to export the model and cameras acquired by Metashape to a minimal OpenGL renderer. In this renderer I want to add a background image and some geometry matching that image (for example, Metashape's 3D model).

Thanks to Metashape's Python API, I exported:
  • the camera transforms
  • the camera focal lengths
  • the pixel widths and heights

How can I create the camera projection matrix and camera model matrix from this information?
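
The export side can be sketched roughly like this (a hedged sketch: attribute names such as `camera.transform` and `sensor.calibration` follow the Metashape Python API, but the function is duck-typed so it only assumes objects exposing them):

```python
def export_cameras(chunk):
    """Collect per-camera data for an external renderer.

    Duck-typed on Metashape-like attributes, so it can be exercised
    without the Metashape module itself.
    """
    cameras = []
    for cam in chunk.cameras:
        if cam.transform is None:          # camera was not aligned
            continue
        calib = cam.sensor.calibration
        cameras.append({
            "transform": cam.transform,    # 4x4 camera-to-chunk-space matrix
            "f": calib.f,                  # focal length in pixels
            "cx": calib.cx,                # principal point offsets
            "cy": calib.cy,
            "width": calib.width,          # image size in pixels
            "height": calib.height,
        })
    return cameras
```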

3
Python and Java API / Set custom UVs
« on: May 02, 2022, 04:01:04 PM »
Hello,

I would like to set custom UV coordinates to my model via Python API.

I tried the following:
Code:
for i in range(len(uv)):
    model.tex_vertices[i].coord.x = uv[i, 0]
    model.tex_vertices[i].coord.y = uv[i, 1]

where uv is the [#vertices x 2] numpy array containing the UV coordinates I want to apply to the model.

As a result, model.tex_vertices is not modified (it remains a list of (0, 0) tuples).
I checked my custom UVs and they are correct.

What am I doing wrong here?
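
One possible culprit (an assumption, not a confirmed fix): wrapped C++ containers often return copies from indexing, so mutating `.coord.x` in place may be silently discarded. Writing whole elements back through the container is worth trying; a minimal, Metashape-free sketch of the pattern, where `make_vector` stands in for `Metashape.Vector`:

```python
import numpy as np

def assign_uvs(tex_vertices, uv, make_vector):
    """Write a [#vertices x 2] numpy array of UVs into a tex-vertex list.

    `make_vector` builds a coordinate object (e.g. Metashape.Vector);
    passing it in keeps this sketch testable without Metashape.
    """
    for i in range(len(uv)):
        tv = tex_vertices[i]        # may be a copy in wrapped containers
        tv.coord = make_vector([float(uv[i, 0]), float(uv[i, 1])])
        tex_vertices[i] = tv        # write the whole element back
    return tex_vertices
```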

Thanks,

Glenn

PS: Maybe I should mention that I am working with a multi-frame chunk.

4
General / Re: UV transfer
« on: April 14, 2022, 05:58:18 PM »
Thank you Bzuco,

I get your point, but texture stretching is not a big issue in my case.

5
General / UV transfer
« on: April 14, 2022, 03:24:12 PM »
Hello,

I am working with a multi-frame chunk. Is there a way to transfer the UVs from one frame to the others? Moreover, is it possible to generate the texture as a single block (without seams)?

Thanks.

Best regards,

Glenn

6
General / Re: Confidence math
« on: April 08, 2022, 10:54:30 AM »
Hello Alexey,

This is very clear and helpful, thank you.

Best regards,

Glenn

7
General / Re: Confidence math
« on: April 07, 2022, 03:55:22 PM »
Hello Alexey,

Thank you for the quick reply  :)

I do not know the difference between dense cloud point confidence and mesh vertex confidence, so I guess I mean both.

Best regards,

Glenn

8
General / Confidence math
« on: April 05, 2022, 06:28:27 PM »
Hello,

I can't find any documentation on how Metashape's confidence is computed.
It is a value in [0, 255] in the GUI, while in Python the range is [0, 33].

This thread does not help much, since I am looking for an idea of the math behind confidence.

Can someone explain confidence to me in a little more depth?

I am using this value for research purposes, which is why I would like to know more about it.

Best regards,

Glenn

9
Python and Java API / Re: Multiframe color calibration
« on: April 02, 2022, 01:01:12 AM »
Struggling for hours, and finding the solution minutes after posting...  ::)

Setting the frame as active worked:

Code:
# ms is an alias for the Metashape module
for frame in doc.chunk.frames:
    doc.chunk.frame = frame.key  # make this frame the active one
    frame.calibrateColors(ms.PointCloudData, white_balance=True)

I hope this is helpful for others.

10
Python and Java API / Multiframe color calibration
« on: April 01, 2022, 11:34:21 PM »
Hello,

I am trying to calibrate the colors of each frame of my multi-frame chunk.

This is my code:
Code:
for frame in doc.chunk.frames:
    frame.calibrateColors(Metashape.PointCloudData)

Only the colors of the first frame get calibrated.

What am I doing wrong here?

Best regards,

Glenn

PS: Using Metashape.ModelData instead of Metashape.PointCloudData gives me very weird results.


11
Hello Alexey, thank you for your answer.

I am afraid this is not exactly related to my question, which is more Python-oriented. In fact, even though I did not test your solution, I imagine it will not be any faster, since the model is temporarily stored on disk.

I lowered the vertex density of my models, and my solution now takes 0.5 seconds for the Metashape -> Numpy (Trimesh) transfer. That is more acceptable in my case than the 15 seconds I was experiencing with bigger models.

It is common for 3D libraries with Python bindings to have a numpy-compatible interface (e.g. Trimesh, Open3D). I guess Metashape's Python bindings are designed for easy script automation rather than as a Python 3D library, so I can see why there is no need for such compatibility in Metashape.

Best regards,

Glenn

12
Hello,

I am using Metashape in Python as part of a bigger Python project.

I would like to access the Model's attributes (vertices, faces, texture coordinates) as numpy ndarrays.

For the moment, I loop over the vertices/faces/texture coordinates like so:

Code:
# Vertices
verts = []
for v in model.vertices:
    c = v.coord
    verts.append([c.x, c.y, c.z])
verts = np.array(verts)

# Faces
tris = []
for f in model.faces:
    tris.append(f.vertices)
tris = np.array(tris)

# Texture coordinates
uvs = []
for tv in model.tex_vertices:
    uvs.append(tv.coord)
uvs = np.array(uvs)

But this is not optimized at all, so I would like to know: is there a way to access this data faster?
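
For the record, the loops above can be compacted into comprehensions, which removes some per-append overhead but is still Python-level iteration (a sketch, duck-typed so it runs without Metashape; reading the UVs as `.coord.x`/`.coord.y` is an assumption):

```python
import numpy as np

def mesh_to_numpy(model):
    """Gather vertices, faces and UVs into numpy arrays, one pass each."""
    verts = np.array([[v.coord.x, v.coord.y, v.coord.z] for v in model.vertices])
    tris = np.array([list(f.vertices) for f in model.faces])
    uvs = np.array([[tv.coord.x, tv.coord.y] for tv in model.tex_vertices])
    return verts, tris, uvs
```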

As a subsidiary question: when aligning the chunks, for example with
Code:
doc.alignChunks(reference=0, method=2, fit_scale=True)
the vertices of the Model are not affected by the alignment.
In fact, I have to apply the transforms manually, vertex by vertex:

Code:
for chunk in doc.chunks:
    model = chunk.models[0]
    matrix = chunk.transform.matrix
    for i, v in enumerate(model.vertices):
        model.vertices[i].coord = matrix.mulp(v.coord)

Is there a better solution for this too?
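
As a side note on that second loop: once the vertices are in a numpy array, the chunk transform can be applied in one shot instead of per-vertex `matrix.mulp` calls (a sketch assuming `chunk.transform.matrix` can be read as a plain 4x4 matrix):

```python
import numpy as np

def transform_vertices(verts, matrix4):
    """Apply a 4x4 transform to an [N x 3] vertex array in one operation.

    Equivalent to p' = (M @ [p, 1])[:3] for every vertex p, i.e. what
    matrix.mulp() computes one point at a time.
    """
    M = np.asarray(matrix4, dtype=float)
    R, t = M[:3, :3], M[:3, 3]   # rotation/scale block and translation
    return verts @ R.T + t
```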

Thanks,

Glenn

13
General / Questions about Metashape's 4D processing tool
« on: March 04, 2022, 07:32:48 PM »
Hello,

I am scanning human eyes in several poses.

I used the 4D processing tool (GUI) to align my scans, and it does not work very well:
https://agisoft.freshdesk.com/support/solutions/articles/31000155179-4d-processing
I tried a simple ICP algorithm with outlier handling, and the alignment behaves better (see attachments).

I have several questions (they might be correlated):

  • How does 4D processing align the frames?
  • Does 4D processing affect the final textures (by somehow taking information from each frame)?
  • How can I perform 4D processing from Python?
  • What are the differences between 4D processing and processing all the chunks followed by a chunk alignment?

Thanks,

Glenn

14
Face and Body Scanning / Re: Image masking for chunk alignment
« on: March 04, 2022, 01:00:58 PM »
Update:

I made a fixed mask for all of my cameras. As a result, the scanned surfaces are limited to the region I am interested in.
The scans can then be aligned with "align chunks" or with a custom alignment algorithm such as ICP.

15
Face and Body Scanning / Image masking for chunk alignment
« on: March 01, 2022, 07:13:39 PM »
Hello,

I am scanning human eyes in various poses for research purposes using Metashape pro for Windows.

I want to align the raw meshes to isolate the deformations between poses for the same participant.
When using Metashape's alignChunks with 40,000 points, the scans get aligned according to the chin-rest device, because the points are denser in those areas.
Since the participants might move a bit during the capture session, the alignment is wrong.

Is it possible to define a mask in image space to exclude areas from the chunk alignment process?
I attached a photo of mine and an ideal mask I would like to use for alignment.

Regards,

Glenn
