
Messages - 3create

1
General / Re: Hole filling and cleaning up of human skull
« on: January 04, 2024, 07:35:22 AM »
It's a bit hard to tell the exact cause of the problems without seeing the whole dataset.
But one alternative approach would be to create masks within the individual chunks after mesh reconstruction and basic mesh editing (Tools -> Mesh -> Generate Masks), then create a 4th chunk with all the images and import those masks from the three chunks.
Then process that 4th chunk (alignment and mesh reconstruction); this solution bypasses merging chunks.

Blurry textures: it seems that parts of the skull images are out of focus (probably due to a shallow depth of field?). The blurry parts of the images would also need to be masked.
Tools -> Mesh -> Generate Masks -> "Mask defocus areas" usually does a pretty good job.
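If you want to pre-check which shots (or which parts of them) are too soft before even importing, a generic variance-of-Laplacian test works reasonably well. This is only an illustrative Python/OpenCV sketch, not what Metashape's "Mask defocus areas" does internally; the filename, tile size and threshold are placeholders to tune:

Code:
import cv2
import numpy as np

def defocus_map(image_path, tile=128, threshold=60.0):
    """Flag image tiles whose variance of Laplacian falls below a (hand-tuned) threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    blurry = np.zeros((h // tile, w // tile), dtype=bool)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = gray[y:y + tile, x:x + tile]
            sharpness = cv2.Laplacian(patch, cv2.CV_64F).var()
            blurry[y // tile, x // tile] = sharpness < threshold
    return blurry   # True where a tile is likely out of focus

print(defocus_map("skull_0001.jpg").mean())   # rough fraction of blurry tiles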

2
I'd also like to add that Reality Capture's camera export behaves just as one would expect, and the resulting NeRF is therefore in the correct orientation/scale (NeRFs then internally scale to a 0-1 space, but this transformation is documented and can be correctly reverted for further steps).
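Just to illustrate the "reverted" part: assuming the framework's normalization is a uniform scale plus offset (as instant-ngp, for example, records it), undoing it is a one-liner. The scale/offset values below are placeholders to be read from your own config:

Code:
import numpy as np

# Placeholder values -- read the actual ones from your NeRF framework's config / transforms file
scale = 0.33
offset = np.array([0.5, 0.5, 0.5])

def nerf_to_world(p):
    """Invert p_nerf = p_world * scale + offset."""
    return (np.asarray(p, dtype=float) - offset) / scale

print(nerf_to_world([0.5, 0.5, 0.5]))   # maps back to the original scene origin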

And another motivation I didn't mention in the OP: it's not only Metashape's alignment process that is of interest, but also its superior mesh reconstruction and texturing: with known transformations, the mesh can be used for precise 3D/2D compositing work with the NeRF renderings, rather than doing guesswork with the Blender NeRF plugin.

3
As has been mentioned in a few forum discussions here, exporting the aligned cameras as XML results in strange transformations: Metashape uses its local coordinate system with no obvious correlation/transformation matrix to the "real" coordinates. The only option is exporting as "omega phi kappa", but this lacks other information, such as the camera intrinsics.
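For anyone who just needs the poses in the referenced/world frame in the meantime: the relation between the internal frame and world coordinates is chunk.transform.matrix, so a small console snippet can at least dump world-space camera poses. A rough sketch, assuming an aligned (and, where relevant, referenced) chunk:

Code:
import Metashape

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix                 # internal (chunk) frame -> world frame

for camera in chunk.cameras:
    if camera.transform is None:           # skip cameras that didn't align
        continue
    pose_world = T * camera.transform      # 4x4 camera pose in world coordinates
    position = T.mulp(camera.center)       # camera centre as a world-space point
    # for a georeferenced chunk, chunk.crs.project(position) gives projected coordinates
    print(camera.label, position)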

Both instant NGP and Nerfstudio have scripts for converting the Metashape camera XML into their own format (transforms.json). This resulting file, however, inherits the "arbitrary" transformations of the original Metashape XML.

Sure, NeRFs are still in an experimental phase, but IMHO the future potential is obvious! Thus, the solid alignment from Metashape (with all the options in this phase, e.g. individual pre-calibration, masks, and so on) makes it an ideal fit for NeRFs.

So PLEASE, adjust the XML export so that it clearly reflects the user-defined local coordinates! That would be such an opportunity!

4
General / Re: Wide angle lens for full frame Nikon mirrorless camera
« on: December 16, 2022, 09:08:07 PM »
Just a note concerning the 20mm 1.8 Nikkor (I use it): it is great for photogrammetry, distortion is no issue in Metashape, and it has very low chromatic aberration (something that shouldn't be neglected for photogrammetry).
Wide-angle lenses are generally beneficial for architectural reconstructions, as every image carries more information on surfaces parallel to the viewing direction (e.g. pillars sticking out of the facade), fewer images are needed (increasing the likelihood of consistent lighting outdoors), etc.
However, at 24 MP there is the issue of GSD and texture detail (e.g. walls with little texture variation for alignment and reconstruction).
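For a feel of the GSD concern, a back-of-envelope check helps; the numbers below are purely illustrative (full-frame sensor, 24 MP, 20mm lens, 5 m to the facade), not measured values:

Code:
# Back-of-envelope ground sample distance (GSD) check
sensor_width_mm = 36.0      # full-frame sensor width (assumed)
image_width_px = 6000       # ~24 MP image width (assumed)
focal_mm = 20.0             # lens focal length
distance_mm = 5000.0        # camera-to-facade distance (assumed)

pixel_pitch_mm = sensor_width_mm / image_width_px        # ~0.006 mm per pixel on the sensor
gsd_mm = distance_mm / focal_mm * pixel_pitch_mm         # ~1.5 mm per pixel on the wall
print(f"GSD ~ {gsd_mm:.2f} mm/px")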

Concerning GoPros: this is a completely different topic (the distortion is close to fish-eye, not to mention lens quality, sensor size, trigger delay...).
If I get around to it, I can hopefully post more details on GoPros soon, as they also have their use cases.

Guy

5
Feature Requests / Re: AprilTag
« on: November 25, 2022, 09:33:21 AM »
+1 | would be really useful!

6
General / Re: How rotate region 0,0,0 please
« on: September 05, 2022, 09:13:43 PM »
Actually, I've often wished that "Reset region" would do precisely what mauovernet is asking for: reset the rotation of the region to 0, 0, 0.
A workflow example:

- one moves and rotates the object using ortho views, no problem
- then one uses "Reset region" to have the region aligned to world space

Using the "copy bounding box" script from a reference chunk is of course fine for more elaborate projects.
But for average workflows and users without the Pro license? Bzuco's advice is cool, but maybe not quite as intuitive as pressing a simple button ;)
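Until such a button exists, a console snippet can at least zero the region rotation (Pro only, which is exactly why a plain button would still be nicer). A rough sketch; note that the identity aligns the region with the chunk's internal axes, so for a referenced chunk you'd fold in chunk.transform the way the copy-bounding-box script does:

Code:
import Metashape

chunk = Metashape.app.document.chunk
region = chunk.region
region.rot = Metashape.Matrix([[1, 0, 0],
                               [0, 1, 0],
                               [0, 0, 1]])   # identity rotation; center and size untouched
chunk.region = region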

Guy


7
General / Re: Camera export: distortion model
« on: August 15, 2022, 09:21:21 PM »
Thanks for the workaround advice Alexey!

But I agree with jedfrechette's comment: "Easy access to undistorted photos is a very important aspect of some workflows".
It really is a workflow issue; I'm using this method all the time (matching 3D-reconstructed data with the original image footage).

Guess we could start a poll?!? ;)

8
General / Re: Camera export: distortion model
« on: August 14, 2022, 03:51:06 PM »
Thanks for the reply Bzuco.
I figured out what the problem was: I had loaded my generic pre-calibration XML for the camera/lens combo into 1.7.2 instead of the field calibration from the 1.8.4 project.
This explains the slight mismatches.

But this does pose the general question of why the useful undistortion option was removed. Then I wouldn't need to use a second (older) installation, export and import calibration models, or resort to scripting.
@ Agisoft: would love to have this feature back in the "Convert Images" dialog in the latest builds :)

9
General / Camera export: distortion model
« on: August 12, 2022, 10:12:01 PM »
To set the stage:
my previous workflow was to export the cameras (e.g. FBX) and the undistorted images out of Metashape.
Exporting the undistorted images is no longer directly supported in the latest Metashape builds (only via scripting).
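For reference, the scripting route would presumably look something like the snippet below; I'm assuming the Image.undistort() helper used in Agisoft's published example scripts is still exposed under that name, so treat the exact calls and arguments as version-dependent (the output folder is a placeholder):

Code:
import os
import Metashape

chunk = Metashape.app.document.chunk
out_dir = "D:/undistorted"                     # placeholder output folder
os.makedirs(out_dir, exist_ok=True)

for camera in chunk.cameras:
    if camera.transform is None:               # skip unaligned cameras
        continue
    calib = camera.sensor.calibration
    # Assumption: Image.undistort(calibration, center_principal_point, square_pixels)
    img = camera.image().undistort(calib, True, True)
    img.save(os.path.join(out_dir, camera.label + ".jpg"))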

So I undistorted the images in 1.7.2, using the calibration parameters from the 1.8.4 project. Works fine.
However, I noticed that these undistorted images don't match the exported scene/model.

So now I am a bit confused.
In short: what is currently the recommended process for exporting the model with matching cameras? And of course the camera images need to be undistorted to align correctly in 3rd-party apps?!

Thanks, Guy

10
Brilliant, thanks!

11
I haven't followed every Metashape changelog in the past, so just a brief question:
is it now possible to extract the stills from videos as "image_0001", "image_0002" (instead of "image_1", "image_2"), or do I still need to use an external program for this?
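If not, here is the tiny external workaround I'd use in the meantime (plain Python, just renames already-extracted stills; the folder name and file pattern are placeholders):

Code:
from pathlib import Path
import re

folder = Path("frames")                        # wherever the extracted stills sit
for f in sorted(folder.glob("image_*.jpg")):
    m = re.fullmatch(r"image_(\d+)", f.stem)
    if m:
        f.rename(folder / f"image_{int(m.group(1)):04d}{f.suffix}")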
Thanks, Guy

12
Adrian,

could you post an example image with the coded targets on the object you are reconstructing?

Guy

13
There's the script "split in chunks.py" on GitHub, which saves you from having to split the scene into chunks manually.

But dealing with high-res meshes is a general pain point. Is there an advantage to retopology (with a more animatable/UV-editable quad layout) for your scene, or would mesh decimation in Agisoft be enough (or even better, due to its curvature-based reduction)?
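If plain decimation turns out to be enough, it is also scriptable per chunk. A minimal sketch, assuming decimateModel() with a face_count target as in recent Python API versions; the target itself is just a placeholder:

Code:
import Metashape

doc = Metashape.app.document
for chunk in doc.chunks:
    if chunk.model:                              # only chunks that already have a mesh
        chunk.decimateModel(face_count=500000)   # placeholder target, pick per chunk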

Guy

14
General / Re: Cylindrical Orthomosaic Generation for long think wall scan
« on: January 25, 2022, 10:25:56 PM »
Another approach for non-planar surfaces (in this case unintended) would be to create a simplified UV-mapped mesh model (in 3D software) and use the textured (bent) model as the source for transferring its texture onto this new one (in Metashape). The resulting texture represents an accurate unwrapping of the source, as long as the UV coordinates of the simplified mesh are dimensionally correct (i.e. not stretched).
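A loose console sketch of that idea follows; it is a variant that re-textures the imported mesh from the aligned photos rather than transferring from the old model. importModel()/buildTexture() exist in the Python API, but whether the imported UV layout is reused without an explicit "Keep uv" mapping step differs between versions, so in practice the GUI route (Import Model, then Build Texture with mapping mode "Keep uv") is the safer path:

Code:
import Metashape

chunk = Metashape.app.document.chunk
# Placeholder path: the simplified, UV-mapped stand-in mesh made in external 3D software
chunk.importModel(path="wall_flattened.obj")
# Re-texture onto the imported model; in the GUI, pick mapping mode "Keep uv"
# so its UV layout is preserved
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)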

Guy

15
General / Re: SfM from Archival images with inconsistent lighting
« on: January 08, 2022, 09:09:54 PM »
Hi danuhl,

I've worked on projects with archival images: very challenging!
Basically, SfM alone won't get you very far; there are usually too many textural changes between archival and current images for dense stereo matching to work.
The main work is a manual process, not well suited to SfM apps.

The first step is to learn more about the intrinsic parameters of the archival images. This can be done by matching well-spread points in the old and new images (with known coordinates, e.g. from laser scans), a process known as "reverse calibration".
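If it helps to see the idea in code: one generic way to prototype this outside Metashape is a single-view calibration from scan-to-image correspondences with OpenCV. This is only an illustrative sketch, not the exact method from the literature; the file names and image size are placeholders:

Code:
import numpy as np
import cv2

# Correspondences measured by hand: 3D points from the laser scan and their
# pixel positions in ONE archival image (same order, at least ~6 well-spread points)
object_pts = np.loadtxt("scan_points.txt", dtype=np.float32)    # N x 3, scan coordinates
image_pts = np.loadtxt("image_points.txt", dtype=np.float32)    # N x 2, pixel coordinates
w, h = 4000, 3000                                               # scan resolution of the print (assumed)

K0 = cv2.initCameraMatrix2D([object_pts], [image_pts], (w, h))  # rough starting intrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [object_pts], [image_pts], (w, h), K0, None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)

print("reprojection RMS:", rms)
print("focal length (px):", K[0, 0], "principal point:", (K[0, 2], K[1, 2]))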

A well-known "classic" in photogrammetry is the reconstruction of the destroyed Buddha statues; maybe it will give you a few hints:
https://www.researchgate.net/publication/227635047_Photogrammetric_Reconstruction_of_the_Great_Buddha_of_Bamiyan_Afghanistan

Guy
