

Messages - 3create

1
Feature Requests / Re: Export COLMAP in standard
« on: October 07, 2024, 04:44:00 PM »
I'd like to chime in here:

1. Dense point clouds exported from Metashape as points3D work fine in both Nerfstudio and Postshot.
I have a customized version of the Gaussian export script which does exactly this. A friend programmed it for me, but it's kind of a workaround, which is why we couldn't offer it as open source (unlike the masks implementation we added to the script).
But @ PolarNick, I can of course provide you with this script version for internal examination.
One has to install open3d for Metashape manually, and the script internally creates a temporary PLY version, reads it back in and then exports it as points3D. It's a hack that the Metashape developers could probably solve more cleanly, with direct access to the dense cloud points via the Python API.
The script chooses the currently active dense point cloud in Metashape for export (which I find is a neat solution if the project contains multiple dense point clouds).
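For reference, here is a minimal sketch of that round-trip idea (not the actual script; the export call and enum names are my assumptions and may differ between Metashape versions, so please check the API reference):

Code:
# Sketch only: export the active point cloud to a temporary PLY,
# read it back with open3d and write a COLMAP-style points3D.txt.
import os, tempfile
import numpy as np
import open3d as o3d
import Metashape

chunk = Metashape.app.document.chunk
tmp_ply = os.path.join(tempfile.gettempdir(), "dense_tmp.ply")

# Assumed 2.x-style export call and enum (1.x used chunk.exportPoints instead).
chunk.exportPointCloud(tmp_ply, format=Metashape.PointCloudFormatPLY)

pcd = o3d.io.read_point_cloud(tmp_ply)
pts = np.asarray(pcd.points)
cols = (np.asarray(pcd.colors) * 255).astype(np.uint8)

# points3D.txt line: POINT3D_ID X Y Z R G B ERROR TRACK[]
# (an empty track and zero error are enough for 3DGS initialization)
with open("points3D.txt", "w") as f:
    for i, (p, c) in enumerate(zip(pts, cols), start=1):
        f.write(f"{i} {p[0]} {p[1]} {p[2]} {c[0]} {c[1]} {c[2]} 0\n")

The points of course have to end up in the same coordinate frame as the exported cameras.txt/images.txt.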

I guess replacing the points3D.txt is still compatible with COLMAP, unlike changing the images.txt with its matching points (although that 2D-matching information isn't needed for Gaussian Splatting reconstructions).
And I personally favour such a points3D approach rather than mixing it with a PLY file or similar.

2. Yes, it would be _extremely_ useful to be able to disable the export of images (like in the script version), given how long the undistorting takes.
There are many use cases for this (as also mentioned in the post above).

3. Looking ahead, it's only a matter of time until fisheye camera models are fully supported for Gaussian Splatting.
The first steps have already been made in Nerfstudio's latest gsplat version (1.4).
That's why I think one of the following export options would be very valuable:
a) create idealized fisheye images on export, with the cameras.txt parameters following the OpenCV fisheye convention, or
b) leave the images as-is but adapt the cameras.txt (the p1/p2 and b1/b2 intrinsics get lost in this case, but perhaps that's acceptable for OpenCV compatibility?!).
The example entries below illustrate roughly what the two variants would look like in cameras.txt.
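As an illustration only (made-up values; parameter ordering as defined by COLMAP's PINHOLE and OPENCV_FISHEYE camera models):

Code:
# CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]
# a) idealized pinhole:   fx fy cx cy
1 PINHOLE 6000 4000 4700.0 4700.0 3000.0 2000.0
# b) fisheye kept as-is:  fx fy cx cy k1 k2 k3 k4
1 OPENCV_FISHEYE 6000 4000 1800.0 1800.0 3000.0 2000.0 0.02 -0.01 0.001 0.0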

Point 3) should probably be in an advanced section of the export dialog, as users will need to know what they are doing. Maybe also point 2)?

For any discussion of this closer to the 3DGS applications, I'm on the Nerfstudio and Postshot Discord servers as @pics23d.

Thanks!

2
General / Re: Splitting up a multicam rig after the alignment process?
« on: September 19, 2024, 09:23:50 PM »
Yes, being able to "break apart" the master/slave rig structure after successful alignment would be really useful!
This should then be compatible with the COLMAP export by default, without any changes needed there?!
@ Alexey, I can send you an example if required.

3
Python and Java API / Re: Metashape Script - Export Issues
« on: August 27, 2024, 07:48:22 PM »
Hi Egor,

I'd be grateful for some details about your changes to the script and how they relate to Denislaq's original issue (the error message).

@ DenisIaq: I may be stating the obvious, but the image-export part of the script undistorts the images, so that for Gaussian Splatting the OpenCV PINHOLE convention with zero distortions can be used (-> cameras.bin). So if the original images were distorted, using them directly with this script's output will produce subpar results.

Thanks, Guy

4
Bug Reports / Re: Bug when creating dense point cloud from large mesh
« on: June 30, 2024, 04:18:47 PM »
Thanks for the fix; I can verify that this is now solved in 2.1.2 :)

5
Thanks, but that "reddish" bounding box has nothing to do with the region I define in Metashape.
It's probably more of an indication that something is wrong with Metashape's internal point cloud transformation.
Btw, this bug is really easy to reproduce: just import a large mesh (500 m?), e.g. as OBJ.
As mentioned, I'm using MS 2.1.1.

6
The attached screenshot should actually be a hemisphere.

I've noticed this bug on medium to large scale scenes (i.e. 100+ m).
If I crop to a small region, the dense point cloud is fine; however, when the region is larger, these strange cropping and distortion errors occur.
To reproduce:
Select a large mesh within a current Agisoft project (or import a large mesh into a scene), then Build Point Cloud from mesh.
Metashape Pro 2.1.1

7
General / Re: Hole filling and cleaning up of human skull
« on: January 04, 2024, 07:35:22 AM »
It's a bit hard to tell the exact cause of the problems without seeing the whole dataset.
But one alternative approach would be to create masks within the individual chunks after mesh reconstruction and basic mesh editing (Tools -> Mesh -> Generate Masks), then create a 4th chunk with all the images and import the masks from the 3 chunks.
Then process that 4th chunk (alignment and mesh reconstruction); this solution bypasses merging chunks.

Blurry textures: it seems that parts of the skull images are out of focus (probably due to a shallow depth of field?). The blurry parts of the images would also need to be masked.
Tools -> Mesh -> Generate Masks -> "Mask defocus areas" usually does a pretty good job.

8
I'd also like to add that Reality Capture's camera export behaves just as one would expect, and the resulting NeRF is therefore in the correct orientation/scale (NeRFs then internally scale to a 0-1 space, but this transformation is documented and can be correctly reverted for further steps).

And another motivation I didn't mention in the OP: it's not only Metashape's alignment process that is of interest, but also its superior mesh reconstruction and texturing. With known transformations, the mesh can be used for precise 3D/2D compositing work with the NeRF renderings, rather than doing guesswork with the Blender NeRF plugin.

9
As has been mentioned in a few forum discussions here, exporting the aligned cameras as XML results in strange transformations: Metashape uses its local coordinate system with no obvious correlation/transformation matrix to the "real" coordinates. The only alternative is exporting as "omega phi kappa", but this lacks other information, such as camera intrinsics.

Both Instant NGP and Nerfstudio have scripts for converting the Metashape camera XML into their own format (transforms.json). The resulting file, however, inherits the "arbitrary" transformations of the original Metashape XML.
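For anyone who needs the poses in the chunk's "real" coordinates right now, a rough Python sketch (my assumption of how the documented chunk.transform / camera.transform values compose; not an official recipe):

Code:
# Sketch: compose the chunk transform with each camera transform to get
# camera-to-world matrices in the chunk's georeferenced/user-defined frame.
import Metashape

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix  # internal -> world transform of the chunk

for camera in chunk.cameras:
    if camera.transform is None:  # skip cameras that are not aligned
        continue
    world_pose = T * camera.transform   # 4x4 camera-to-world matrix
    center = T.mulp(camera.center)      # camera position in world coordinates
    print(camera.label, center)

For a georeferenced chunk, a further conversion via chunk.crs may still be needed to get projected coordinates; having the XML export do this directly would obviously be much nicer.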

Sure, NeRFs are still in an experimental phase, but IMHO the future potential is obvious! The combination of Metashape's solid alignment (with all the options at this stage, e.g. individual pre-calibration, masks, and so on) makes it an ideal fit for NeRFs.

So PLEASE adjust the XML export so that it clearly reflects the user-defined local coordinates! That would be such an opportunity!

10
General / Re: Wide angle lens for full frame Nikon mirrorless camera
« on: December 16, 2022, 09:08:07 PM »
Just a note concerning the 20mm f/1.8 Nikkor (I use it): it is great for photogrammetry, distortion is no issue in Metashape, and it has very low chromatic aberration (something that shouldn't be neglected for photogrammetry).
Wide-angle lenses are generally beneficial for architectural reconstructions: every image carries more information on surfaces parallel to the viewing direction (e.g. pillars sticking out of the facade), fewer images are needed (increasing the likelihood of consistent lighting outdoors), etc.
However, at 24 MP there is the issue of GSD and texture detail (e.g. walls with little texture variation for alignment and reconstruction).

Concerning GoPros: this is a completely different topic (the distortion is close to fisheye, not to mention lens quality, sensor size, trigger delay...).
If I get around to it, I can hopefully post more details on GoPros soon, as they also have their use cases.

Guy

11
Feature Requests / Re: AprilTag
« on: November 25, 2022, 09:33:21 AM »
+1 | would be really useful!

12
General / Re: How rotate region 0,0,0 please
« on: September 05, 2022, 09:13:43 PM »
Actually, I've often wished that "Reset region" did precisely what mauovernet is asking for: resetting the rotation of the region to 0 0 0.
A workflow example:

- one moves and rotates the object using ortho views, no problem
- then one uses "Reset region" to have the region aligned to world space

Using the "copy bounding box" script from a reference chunk is of course fine for more elaborate projects.
But for average workflows and users without the Pro license? Bzuco's advice is cool, but maybe not quite as intuitive as pressing a simple button ;)
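For those who do have Pro and scripting, my understanding is that something along these lines would align the region with the world/grid axes (a sketch only, based on the documented region.rot and chunk.transform attributes; for a non-referenced chunk, simply setting region.rot to the identity matrix should do):

Code:
# Sketch: rotate the region so its axes follow the world axes
# instead of the chunk's internal coordinate axes.
import Metashape

chunk = Metashape.app.document.chunk
region = chunk.region
# region.rot is expressed in internal coordinates, so the inverse
# (= transpose) of the chunk rotation aligns it with the world frame.
region.rot = chunk.transform.rotation.t()
chunk.region = region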

Guy


13
General / Re: Camera export: distortion model
« on: August 15, 2022, 09:21:21 PM »
Thanks for the workaround advice Alexey!

But I agree with jedfrechette's comment: "Easy access to undistorted photos is a very important aspect of some workflows".
It really is a workflow issue; I'm using this method all the time (matching 3D-reconstructed data with the original image footage).

Guess we could start a poll?!? ;)

14
General / Re: Camera export: distortion model
« on: August 14, 2022, 03:51:06 PM »
Thanks for the reply Bzuco.
I figured out what the problem was: I loaded my generic pre-calibration XML for the camera/lens combo into 1.7.2 instead of the field calibration of the 1.8.4 project.
This explains the slight mismatches.

But this does raise the general question of why the useful undistortion option was removed. With it, I wouldn't need to use a second (older) installation, export and import calibration models, or resort to scripting.
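If it helps anyone in the meantime, an external workaround could look roughly like this (a sketch only: it uses the f/cx/cy/k1-k3 values from an exported calibration XML, treats cx/cy as offsets from the image centre, and deliberately ignores the tangential p1/p2 and affinity b1/b2 terms, whose exact mapping to OpenCV's convention should be checked against the Metashape documentation):

Code:
# Sketch: undistort an image outside Metashape with OpenCV,
# using radial-only parameters from an exported calibration.
import cv2
import numpy as np

def undistort_image(image_path, out_path, width, height, f, cx, cy, k1, k2, k3):
    # camera matrix; cx/cy are offsets from the image centre in pixels
    K = np.array([[f, 0.0, width / 2.0 + cx],
                  [0.0, f, height / 2.0 + cy],
                  [0.0, 0.0, 1.0]])
    dist = np.array([k1, k2, 0.0, 0.0, k3])  # k1 k2 p1 p2 k3 (OpenCV order)
    img = cv2.imread(image_path)
    cv2.imwrite(out_path, cv2.undistort(img, K, dist))

But it's obviously not a replacement for the removed built-in option.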
@ Agisoft: would love this feature back in "Convert Images" in the latest builds :)

15
General / Camera export: distortion model
« on: August 12, 2022, 10:12:01 PM »
To set the stage:
my previous workflow was to export the cameras (e.g. FBX) and export the undistorted images out of Metashape.
Exporting the undistorted images is no longer directly supported in the latest Metashape builds (only via scripting).

So I undistorted the images in 1.7.2 with the calibration parameters from a 1.8.4 project. That works fine.
However, I noticed that these undistorted images don't match the exported scene/model.

So now I am a bit confused.
In short: what is currently the process for exporting the model with matching cameras? And the camera images of course need to be undistorted for correct alignment in 3rd-party apps?!

Thanks, Guy
