Hi!
I’m new-ish to photogrammetry and am currently creating 3D models of small objects (usually between 2.5 and 7 cm) using a turntable. I’m aiming for complete 360° models (texture isn’t necessary) that are correctly scaled and can be imported into MeshLab for further analysis. More specifically, I’m measuring certain areas of the objects (hence the need for correct scaling) and need to select those areas based on their surface elevations. For this I usually apply the “Discrete Curvatures” filter in MeshLab.
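In case it’s relevant, this is roughly the script equivalent of what I do in the MeshLab GUI, via PyMeshLab (the filter name is the one from recent PyMeshLab releases; older versions expose the same filter as discrete_curvatures, and the file name is just a placeholder):

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("object.ply")  # placeholder: one of my exported models

# "Discrete Curvatures" filter; the default curvature type is mean curvature
ms.apply_filter("compute_scalar_by_discrete_curvature_per_vertex")

# the curvature is stored as per-vertex quality, which drives the colour ramp
q = ms.current_mesh().vertex_quality_array()
print("curvature range:", q.min(), "to", q.max())
```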
I created some models previously without scaling them, and the filter worked without a problem.
Now that I’ve repeated the process in Metashape and added a scale, I get a model of the same object with a similar number of vertices and faces as before, but the filter just colours everything bright green and doesn’t show the elevations. In MeshLab the values of the “curvature range” are also all displayed as 0.
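To make sure it isn’t just a display issue, I also dumped some basic numbers for the scaled model with the same PyMeshLab setup as above (file name again a placeholder):

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("object_scaled.ply")  # placeholder: the new, scaled export
m = ms.current_mesh()

print("vertices:", m.vertex_number(), "faces:", m.face_number())
# with the scale applied, the bbox diagonal should be a few centimetres
# in whatever unit the export uses
print("bbox diagonal:", m.bounding_box().diagonal())

ms.apply_filter("compute_scalar_by_discrete_curvature_per_vertex")
q = ms.current_mesh().vertex_quality_array()
print("curvature min/max:", q.min(), q.max())  # both come back as 0 here
```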
Since the filter works with the older models, I’m assuming the problem lies in how I create the models (maybe the scaling? the export?) and not in MeshLab. The first models were created a while back, so I may also have changed the workflow since then (though apparently not for the better).
My current workflow is as follows (I’ve put rough Python-console equivalents of steps 2–4, 7, 10, and 13 after the list):
1. Creating separate chunks for the different camera angles I used (e.g., top, bottom, side)
2. Aligning the photos of each chunk (accuracy: medium; reference preselection: source; generic preselection disabled; key point limit: 40,000; tie point limit: 10,000; apply masks to: key points; exclude stationary tie points and adaptive camera model fitting enabled)
3. Building the dense clouds for each chunk (quality: medium; depth filtering: mild; calculate point colours and calculate point confidence enabled)
4. Building the meshes for each chunk (source data: dense cloud; surface type: arbitrary 3D; face count: medium; interpolation enabled; calculate vertex colours also ticked)
5. Importing masks for the photos of each respective chunk (method: from model, operation: replacement)
6. Removing the meshes again
7. At this point I usually select one of the chunks and automatically detect the markers (I’m using CHI scale bars in the photos) and make sure they are correct; if they aren’t, I make sure that at least two points are consistent throughout the photos and delete the others. I change the accuracy to 0.0001 and create a scale bar, then refresh and make sure the object is scaled correctly.
8. Aligning the chunks (method: point based; fix scale and generic preselection disabled; accuracy: high; key point limit: 150,000; apply masks to: key points)
9. Merging the chunks (merge dense clouds enabled)
10. I then usually try to get rid of photos that have a high error, recalibrate the cameras, and try to bring down the RMS reprojection error using gradual selection.
11. Building a new dense cloud for the merged chunk and deleting points that have low confidence
12. Building the mesh (face count: high this time)
13. Exporting the model. I’ve tried different file formats, but usually I export as PLY (local coordinate system; defaults loaded; export parameters: vertex colours, vertex normals, and vertex confidence; raster transform: none; binary encoding enabled).
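For reference, here’s roughly what steps 2–4 look like when I run them from the Metashape Python console instead of the GUI (Metashape 1.x API; as far as I know downscale=2 and downscale=4 are how the “medium” presets map, but treat the exact values as my assumption):

```python
import Metashape

chunk = Metashape.app.document.chunk  # one of the per-angle chunks

# Step 2: align photos (downscale=2 should correspond to "medium" accuracy)
chunk.matchPhotos(
    downscale=2,
    generic_preselection=False,
    reference_preselection=True,
    reference_preselection_mode=Metashape.ReferencePreselectionSource,
    keypoint_limit=40000,
    tiepoint_limit=10000,
    filter_mask=True,                # apply masks to key points
    filter_stationary_points=True,   # exclude stationary tie points
)
chunk.alignCameras(adaptive_fitting=True)

# Step 3: dense cloud (downscale=4 should correspond to "medium" quality)
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud(point_colors=True, point_confidence=True)

# Step 4: mesh from the dense cloud
chunk.buildModel(
    source_data=Metashape.DenseCloudData,
    surface_type=Metashape.Arbitrary,
    face_count=Metashape.MediumFaceCount,
    interpolation=Metashape.EnabledInterpolation,
    vertex_colors=True,
)
```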
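Step 7 is roughly this (the marker indices, target type, and the 0.02 m distance are placeholders for whatever the CHI bar actually specifies):

```python
import Metashape

chunk = Metashape.app.document.chunk  # the chunk I'm scaling

# Step 7: detect the coded targets on the scale bar
chunk.detectMarkers(target_type=Metashape.CircularTarget12bit, tolerance=50)

# after checking the detections, keep two markers that are consistent
m1, m2 = chunk.markers[0], chunk.markers[1]

scalebar = chunk.addScalebar(m1, m2)
scalebar.reference.distance = 0.02    # placeholder: printed distance in metres
scalebar.reference.accuracy = 0.0001  # the accuracy I set in the GUI

chunk.updateTransform()               # the "refresh"
print("chunk scale:", chunk.transform.scale)
```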
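Step 10 is essentially the script version of Gradual Selection (the 0.5-pixel threshold is just an example value, not necessarily what I use):

```python
import Metashape

chunk = Metashape.app.document.chunk  # the merged chunk

# Step 10: gradual selection on reprojection error for the sparse points
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
f.selectPoints(0.5)                   # example threshold in pixels
chunk.point_cloud.removeSelectedPoints()

# recalibrate the cameras afterwards
chunk.optimizeCameras(adaptive_fitting=True)
```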
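And step 13, the export, is equivalent to something like this (I’m not certain save_confidence exists in every 1.x build, so treat that flag as an assumption; the path is a placeholder):

```python
import Metashape

chunk = Metashape.app.document.chunk

# Step 13: export the scaled mesh as binary PLY in local coordinates
chunk.exportModel(
    path="object_scaled.ply",  # placeholder output path
    format=Metashape.ModelFormatPLY,
    binary=True,               # binary encoding enabled
    save_colors=True,          # vertex colours
    save_normals=True,         # vertex normals
    save_confidence=True,      # vertex confidence (assumption: may not exist in older builds)
)
```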
Any ideas at what point I lose the curvatures?
-ornithorinc