Messages - 3create

1
Thanks, but that "reddish" bounding box has nothing to do with the region I define in Metashape.
It's probably more of an indication that something is wrong with Metashape's internal point cloud transformation.
Btw, this bug is really easy to reproduce: just import a large mesh (500 m?), e.g. as OBJ.
As mentioned, I'm using MS 2.1.1.

2
Bug Reports / Bug when creating dense point cloud from large mesh
« on: May 24, 2024, 06:17:58 PM »
The attached screenshot should actually be a hemisphere.

I've noticed this bug on medium- to large-scale scenes (e.g. 100+ m).
If I crop to a small region, the dense point cloud is fine; however, when the region is larger, these strange cropping and distortion errors occur.
To reproduce:
Select a large mesh within a current Agisoft project or import a large mesh into a scene -> Build point cloud from mesh.
Metashape Pro 2.1.1
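
For reference, this is roughly how I trigger it from the Python console (just a sketch; the path is a placeholder and the DataSource spelling may differ between API builds):

[code]
import Metashape

doc = Metashape.app.document
chunk = doc.addChunk()

# Import any large (100 m+) mesh, e.g. an OBJ -- placeholder path.
chunk.importModel(path="large_mesh.obj")

# Build a point cloud with the mesh as source data
# (GUI: Build Point Cloud with the mesh selected as source).
chunk.buildPointCloud(source_data=Metashape.DataSource.ModelData)
[/code]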

3
General / Re: Hole filling and cleaning up of human skull
« on: January 04, 2024, 07:35:22 AM »
It's a bit hard to tell the exact cause of the problems without seeing the whole dataset.
But one alternative approach would be to create masks within the individual chunks after mesh reconstruction and basic mesh editing (Tools -> Mesh -> Generate Masks), create a 4th chunk with all the images, and import those masks from the 3 chunks.
Then process that 4th chunk (alignment and mesh reconstruction); this solution bypasses merging chunks.

Blurry textures: it seems that parts of the skull images are out of focus (probably due to a small depth of field?). The blurry parts of the images would also need to be masked.
Tools -> Mesh -> Generate Masks -> "Mask defocus areas" usually does a pretty good job.
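
If you'd rather script it than click through the menus, the rough idea looks like this (only a sketch; parameter and enum names are from memory and may well differ between API versions):

[code]
import Metashape

doc = Metashape.app.document
skull_chunks = doc.chunks[0:3]   # the 3 chunks with the cleaned-up meshes
combined = doc.chunks[3]         # the 4th chunk holding all images

for chunk in skull_chunks:
    # Generate masks from the edited mesh (GUI: Tools -> Mesh -> Generate Masks).
    chunk.generateMasks(masking_mode=Metashape.MaskingMode.MaskingModeModel)
    # Write them to disk so the combined chunk can pick them up (folder must exist).
    chunk.exportMasks(path="masks/{filename}_mask.png")

# Import the masks into the combined chunk before alignment.
combined.importMasks(path="masks/{filename}_mask.png",
                     source=Metashape.MaskSource.MaskSourceFile,
                     operation=Metashape.MaskOperation.MaskOperationReplacement)
[/code]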

4
I'd also like to add that Reality Capture's camera export behaves just as one would expect, and the resulting NeRF is therefore in the correct orientation/scale (NeRFs then internally scale to a 0-1 space, but this transformation is documented and can be correctly reverted for further steps).

And another motivation I didn't mention in the OP: it's not only Metashape's alignment process that is of interest, but also its superior mesh reconstruction and texturing. With known transformations, the mesh can be used for precise 3D/2D compositing work with the NeRF renderings, rather than doing guesswork with the Blender NeRF plugin.

5
As has been mentioned in a few forum discussions here, exporting the aligned cameras as XML results in strange transformations: Metashape uses its local coordinate system with no obvious correlation/transformation matrix to the "real" coordinates. The only option is exporting as "omega phi kappa", but this lacks other information, such as camera intrinsics.

Both Instant NGP and Nerfstudio have scripts for converting the Metashape camera XML into their own format (transforms.json). The resulting file, however, inherits the "arbitrary" transformations of the original Metashape XML.
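
For anyone hitting the same wall: the chunk-internal values can at least be converted via the Python API, since chunk.transform.matrix maps the internal frame to world space (geocentric coordinates for georeferenced projects). A rough sketch:

[code]
import Metashape

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix  # chunk-internal frame -> world (geocentric / user CS)

for camera in chunk.cameras:
    if camera.transform is None:
        continue  # skip unaligned cameras
    # 4x4 camera-to-world matrix in "real" coordinates instead of the
    # chunk-internal values that end up in the XML export.
    world = T * camera.transform
    print(camera.label, world)
[/code]

But that still means post-processing every converted transforms.json, which is exactly what a world-space XML export would make unnecessary.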

Sure, NeRFs are still in an experimental phase, but IMHO the future potential is obvious! Thus, the solid alignment from Metashape (with all the options in this phase, e.g. individual pre-calibration, masks, and so on) makes it an ideal fit for NeRFs.

So PLEASE adjust the XML export so it obviously reflects the user-defined local coordinates! That would be such an opportunity!

6
General / Re: Wide angle lens for full frame Nikon mirrorless camera
« on: December 16, 2022, 09:08:07 PM »
Just a note concerning the 20mm 1.8 Nikkor (I use it): it is great for photogrammetry, distortion is no issue in Metashape, and it has very low chromatic aberration (something which shouldn't be neglected for photogrammetry).
Wide-angle lenses are generally beneficial for architectural reconstructions, as every image has more information on surfaces parallel to the viewing direction (e.g. pillars sticking out of the facade), fewer images are needed (increasing the likelihood of more consistent lighting outdoors), etc.
However, there is the issue of GSD and texture detail (e.g. walls with little texture variation for alignment and reconstruction) with 24 MP.

Concerning GoPros: this is a completely different topic (the distortion is close to fisheye, not to mention lens quality, sensor size, trigger delay...).
If I get around to it, I can hopefully post more details on GoPros soon, as they also have their use cases.

Guy

7
Feature Requests / Re: AprilTag
« on: November 25, 2022, 09:33:21 AM »
+1 | would be really useful!

8
General / Re: How rotate region 0,0,0 please
« on: September 05, 2022, 09:13:43 PM »
Actually, I've often wished that "Reset region" did precisely what mauovernet is asking for: reset the rotation of the region to 0 0 0.
A workflow example:

- one moves and rotates the object using ortho views, no problem
- then one uses "Reset region" to have the region aligned to world space

Using the "copy bounding box" script from a reference chunk is of course fine for more elaborate projects.
But for average workflows and users without the Pro license? Bzuco's advice is cool, but maybe not quite as intuitive as pressing a simple button ;)
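
For those who do have Pro and the console, the missing button boils down to something like this (a minimal sketch; for a georeferenced chunk you'd additionally have to factor in the rotation part of chunk.transform.matrix):

[code]
import Metashape

chunk = Metashape.app.document.chunk
region = chunk.region

# Reset the region rotation to identity, i.e. axis-aligned
# in the chunk's internal coordinate frame.
region.rot = Metashape.Matrix([[1, 0, 0],
                               [0, 1, 0],
                               [0, 0, 1]])
chunk.region = region
[/code]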

Guy


9
General / Re: Camera export: distortion model
« on: August 15, 2022, 09:21:21 PM »
Thanks for the workaround advice Alexey!

But I agree with jedfrechette's comment: "Easy access to undistorted photos is a very important aspect of some workflows".
It really is a workflow issue, I'm using this method all the time (matching 3d-reconstructed data with the original image footage).

Guess we could start a poll?!? ;)

10
General / Re: Camera export: distortion model
« on: August 14, 2022, 03:51:06 PM »
Thanks for the reply Bzuco.
I figured out what the problem was: I loaded my generic pre-calibration XML for the camera/lens combo into 1.7.2 instead of the field calibration from the 1.8.4 project.
This explains the slight mismatches.

But this does pose the general question of why the useful undistortion option has been removed. Then I wouldn't need to use a second (older) installation, export and import calibration models, or alternatively resort to scripting.
@Agisoft: would love to have this feature back in "Convert Images" in the latest builds :)

11
General / Camera export: distortion model
« on: August 12, 2022, 10:12:01 PM »
To set the stage:
my previous workflow was to export the cameras (e.g. as FBX) and export the undistorted images out of Metashape.
Exporting the undistorted images is no longer directly supported in the latest Metashape builds (only via scripting).
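
(For completeness, the scripting route looks roughly like this; it's a sketch from memory, and I'm not certain Image.undistort() has exactly this signature in current builds:)

[code]
import os
import Metashape

chunk = Metashape.app.document.chunk
os.makedirs("undistorted", exist_ok=True)  # placeholder output folder

for camera in chunk.cameras:
    if camera.transform is None:
        continue  # skip unaligned cameras
    calib = camera.sensor.calibration
    image = camera.photo.image()
    # Remove lens distortion using the adjusted calibration
    # (center principal point, keep square pixels).
    undistorted = image.undistort(calib, True, True)
    undistorted.save(os.path.join("undistorted", camera.label + ".jpg"))
[/code]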

So I undistorted the images with the calibration parameters from a 1.8.4 project in 1.7.2. Works fine.
However, I noticed that these undistorted images don't match the exported scene/model.

So now I am a bit confused.
In short: what is currently the process for exporting the model with matching cameras? And of course the camera images need to be undistorted for correct alignment in 3rd-party apps?!

Thanks, Guy

12
Brilliant, thanks!

13
Haven't followed every Metashape change log in the past, so briefly the question:
is it now possible to extract the stills from videos as "image_0001", "image_0002" (instead of "image_1", "image_2") or do I still need to use an external program for this?
Thanks, Guy
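
P.S. In case it's still not supported, this is the kind of one-off rename I do externally (plain Python; folder, prefix and extensions are just placeholders):

[code]
import os
import re

folder = "frames"  # wherever the extracted stills ended up
for name in os.listdir(folder):
    match = re.fullmatch(r"image_(\d+)\.(jpg|png|tif)", name)
    if match:
        # image_1.jpg -> image_0001.jpg etc.
        padded = "image_{:04d}.{}".format(int(match.group(1)), match.group(2))
        os.rename(os.path.join(folder, name), os.path.join(folder, padded))
[/code]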

14
Adrian,

could you post an example image with the coded targets on the object you are reconstructing?

Guy

15
There's the script "split in chunks.py" on GitHub, so you don't have to split the scene into chunks manually.

But dealing with high-res meshes is a general pain point. Would there be an advantage in retopology (with a more animatable/UV-editable quad layout) for your scene, or would a mesh decimation in Agisoft be enough (or better, due to curvature-based reduction)?
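
If decimation is enough, it's also scriptable per chunk, e.g. (a sketch; the target face count is just a placeholder):

[code]
import Metashape

doc = Metashape.app.document
for chunk in doc.chunks:
    if chunk.model:
        # Decimate the active model of each chunk to a target face count.
        chunk.decimateModel(face_count=500000)
[/code]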

Guy
