

Messages - Limpopo_River

1
General / Re: Normal map visualization in Metashape Pro
« on: June 29, 2021, 04:48:06 PM »
Thanks Wojtek, you've pointed me to the right explanation.  However, I wasn't attempting to bake the normal map onto a lower-resolution mesh; I only wanted to visualize the normals of my 3D mesh at full resolution in the Metashape model window.  The mesh has 34 million faces and 17 million vertices.  A related question: is the normal map generated from the face normals or the vertex normals?

I notice that when I generate the normal map, Metashape automatically decimates the mesh from 34 million to 1 million polygons.  There doesn't appear to be a setting one can use to avoid decimating the mesh when generating the normal map, unless I'm missing something.  Could you or anyone suggest a way to generate the normal map without decimating the mesh?

My objects are relatively flat, so what I'm hoping to see in the model window is something like a map of the normals to the 3D mesh surface projected onto a 2D plane, like an orthomosaic or DEM with a normal for each pixel.  Because the mesh is relatively flat overall, despite having a lot of fine surface variation, the normal map appears as a single uniform color once the mesh is so heavily decimated.
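
In case it helps anyone else, here's what I plan to try from the Python console to bake the normal map without the decimated intermediate.  This is only a sketch: I'm assuming the 1.6 buildUV/buildTexture calls expose the texture type and source model the way the GUI dialog does, and the exact argument and enum names may differ in your build (check your version's API reference and dir(Metashape.Model)):

Code:
# Hypothetical sketch for the Metashape 1.6 Python console -- argument names
# and the texture-type enum are assumptions; verify against your build.
import Metashape

chunk = Metashape.app.document.chunk

# UV-map the current full-resolution mesh first...
chunk.buildUV(mapping_mode=Metashape.GenericMapping, texture_size=8192)

# ...then bake the normal map with the same mesh as the source, so no
# decimated copy should be introduced by the GUI wizard.
chunk.buildTexture(texture_type=Metashape.Model.NormalMap,
                   source_model=chunk.model,
                   texture_size=8192)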

2
General / Normal map visualization in Metashape Pro
« on: June 28, 2021, 08:33:44 AM »
I've generated normal maps of archaeological objects using Metashape Pro (v. 1.6.6) and I'm unable to see any detail when viewing the normal maps in the model window.  I used the arbitrary 3D mesh processed at "ultra-high" resolution as the source for the normal maps. The normal maps appear to be completely uniform in color, with no discernible features.  I'm trying to understand if there's a problem with the normal maps themselves or if it's just a problem with the visualization of the normals in the model window.  Has anyone had a similar experience viewing normal maps generated in Metashape and/or can you suggest a reason for the absence of any detail in the visualization?

3
General / Re: Best quality on heavy project
« on: October 20, 2018, 08:51:56 PM »
Hi VHZN,

It looks like your latest version has much improved texture from processing it as a height field with generic/mosaic texture.  Given that, with your parameters, time and model resolution aren't a major worry, I think Option 3, realigning the photos at the next lower quality setting (high, medium, or low), would also be a good choice.  This has the advantage that you could still process the model either as an arbitrary mesh or as a height field before building the texture.  That said, since you're getting good enough texture results with the current model processed as a height field, there's probably no need to go back to Option 3 and realign the photos.

Whenever I build a large model, I try not to split it into chunks, to avoid stair-steps or other discontinuities in the model.  To answer your question about aligning and merging chunks: Photoscan will merge the chunks into a single model, but I've seen enough alignment problems with this approach to suggest it doesn't work as well as one might hope.

Best of luck

4
Another benefit of using a moderately short focal length (generally the equivalent of 20-30 mm on a full-frame camera works best, but you can calculate the equivalent FL for your camera sensor) is that you can fly at a lower elevation and also increase the depth of field.  You might, however, have some problems with motion blur or distortion from a rolling shutter effect.  It's best to use a camera with a global shutter and/or reduce the speed of the UAS to avoid these problems. 
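
If you want to check the equivalent focal length for your own sensor, it's just a crop-factor scaling; the 8.8 mm / 13.2 mm numbers below are only an example for a typical 1-inch UAS sensor:

Code:
# Full-frame-equivalent focal length: plain arithmetic, no Photoscan API needed.
def full_frame_equivalent(focal_mm, sensor_width_mm):
    return focal_mm * 36.0 / sensor_width_mm   # 36 mm = full-frame sensor width

# Example: an 8.8 mm lens on a 13.2 mm wide (1-inch) sensor
print(full_frame_equivalent(8.8, 13.2))        # 24.0 -> inside the 20-30 mm range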

5
General / Re: Best quality on heavy project
« on: October 20, 2018, 05:43:53 PM »
VHZN,

I think the problem with building the mesh in Arbitrary mode is that any resolution you select (high, medium, or low) requires the same amount of RAM (as you said, up to 45 GB for 500 cameras).  This is because Photoscan builds the mesh from the full density of the point cloud and only then decimates it to the resolution setting you chose.  Reducing the resolution of the mesh after you've built the point cloud doesn't use any less RAM.

I believe you have at least three choices:  1) split the model into chunks; 2) reduce the bounding box; or 3) reduce the density of the point cloud before building the mesh. 

Since you've already aligned the photos, I'd suggest trying option 2: duplicate the chunk with the point cloud two or more times and reduce the bounding box in each copy, so you have two or more slightly overlapping bounding boxes in separate copies of the point cloud.  Then process the mesh for each of the bounding boxes separately.  This is akin to processing the model in separate chunks but wouldn't require you to realign the photos for each chunk.  A possible variant of this method would be to select or turn off photos in different regions of the model in each copy of the point cloud to reduce the amount of data being processed into the mesh, but I haven't tried this.  Either way, you'd still need to process the mesh for each region from a separate copy of the point cloud, and then merge them using Merge Chunks with "Merge models" selected.
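
If you're comfortable with the Python console, here's a rough sketch of option 2 against the 1.4-era API (chunk.copy() and the region properties exist there, but double-check the names in your version).  It splits the region into two slightly overlapping halves along the region's local X axis:

Code:
import PhotoScan

doc = PhotoScan.app.document
src = doc.chunk                                    # chunk with the aligned photos

for i, shift in enumerate([-0.25, 0.25]):          # quarter-size offsets: left, right
    dup = src.copy()
    dup.label = "half_%d" % (i + 1)
    reg = dup.region
    # keep 55% of the X extent in each copy so the halves overlap slightly
    reg.size = PhotoScan.Vector([reg.size.x * 0.55, reg.size.y, reg.size.z])
    reg.center = reg.center + reg.rot * PhotoScan.Vector([src.region.size.x * shift, 0, 0])
    dup.region = reg
    dup.buildModel()    # mesh each half separately (build the dense cloud per copy first)

# finally: Workflow > Merge Chunks with "Merge models" checked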

Option 1, splitting the model into chunks, is less desirable because you'll need to realign the photos for each chunk, then align the chunks, and build the mesh for each chunk.  This often results in slight discontinuities between the chunks that are difficult to align perfectly. 

Option 3, reducing the density of the point cloud, would result in a model with lower resolution, but it might allow you to process it in a single chunk, avoiding the pitfalls of option 1.  Often you don't need the highest point cloud density to get a model with acceptable resolution.  For this option, try realigning the photos at high or medium instead of the highest accuracy.  Each lower accuracy setting halves the image dimensions used for alignment, so each step decreases the resolution of the model by a factor of 4 in pixel count.
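
Option 3 is also only a few lines in the Python console, if you prefer scripting it (1.4-era API; in later versions the accuracy argument was renamed downscale):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.matchPhotos(accuracy=PhotoScan.MediumAccuracy)   # one step below High
chunk.alignCameras()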

Maybe others have better suggestions.  Let us know what method you choose and what works the best for you.

6
General / Align model axes with coordinate system
« on: October 20, 2018, 12:28:41 AM »
Is there a way to ensure that the XYZ axes in the model window are aligned exactly with the model's coordinate system?  For example, I'm working with images captured in the WGS84 coordinate system.  After aligning the images, Photoscan generates a sparse point cloud and bounding box (region) with X, Y, and Z axes that seem randomly rotated relative to the coordinate system.   I would like to fix the X and Y axes to point to true North and East, and for the Z direction to be vertical, but I haven't found a way to rotate the axes so they're aligned exactly with the WGS84 coordinates.  I can select "rotate region" to rotate the model by eye until the axes look fairly close to alignment with the coordinate system, but this is just an approximation.  How can one actually specify the axes to align exactly with WGS84 or any other coordinate system (northing & easting, local coordinates, U.S. State Planes coordinates, etc.)?
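
The closest I've found so far is a Python-console script adapted from snippets posted on this forum (a sketch only, untested on my own project; written against the 1.4-era API, so verify the property names in your version).  It rotates the region so its axes follow the local East/North/Up frame of the chunk's coordinate system at the region center:

Code:
import math
import PhotoScan

chunk = PhotoScan.app.document.chunk
T = chunk.transform.matrix

# geocentric position of the region center, then the local ENU frame there
center = T.mulp(chunk.region.center)
local = chunk.crs.localframe(center)

m = local * T
R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]],
                      [m[1, 0], m[1, 1], m[1, 2]],
                      [m[2, 0], m[2, 1], m[2, 2]]])
s = math.sqrt(m[0, 0] ** 2 + m[0, 1] ** 2 + m[0, 2] ** 2)
R = R * (1.0 / s)                   # strip the uniform scale in the transform

region = chunk.region
region.rot = R.t()                  # region stores the rotation transposed
chunk.region = region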

7
The pull-down menu for Tools>Dense Cloud>Select Points by Masks opens a window with an "Edge softness" parameter, set with a slider that takes values from 0 to 10.  I've tried various settings for this parameter, but I couldn't detect any obvious difference in how the points are selected using masks.  Can someone describe what effect "edge softness" is supposed to have on point selection?

Thanks


8
General / Re: Convert NAVD88 to NGVD29 vertical datum
« on: December 31, 2016, 12:39:10 AM »
The approach I described to convert from NAD83/NAVD88 (EPSG::2871) to NAD27/NGVD29 (EPSG::26742) in the second-to-last paragraph of my previous post works, provided that you have the surveyed coordinates (northing, easting, and elevation) for the GCPs in both datums:

As a possible workaround, I'm thinking of duplicating the chunk with the model in NAD83/NAVD88, applying the Photoscan conversion from NAD83 (EPSG::2871) to NAD27 (EPSG::26742), then re-importing the GCPs using Northing and Easting coordinates from ArcGIS in NAD27 and elevations in NGVD29 provided by the surveyor. 

After converting and re-importing the GCP coordinates in the new datum (NAD27/NGVD29), the dense point cloud has the correct coordinates.  Note that the conversion only re-projects the point cloud: afterwards, you need to regenerate the mesh, DEM, orthomosaic, and contours if you want any of those products in the new coordinate system.
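
For anyone repeating this, the workaround can also be scripted from the Python console.  A sketch only: I used the GUI, loadReference's arguments vary between versions, and the CSV name and column order below are placeholders.  Here I simply set the target CRS on the duplicate and re-import, rather than pressing Convert, since the reference gets replaced anyway:

Code:
import PhotoScan

doc = PhotoScan.app.document
src = doc.chunk                                        # NAD83/NAVD88 (EPSG::2871)

dup = src.copy()
dup.label = "NAD27_NGVD29"
dup.crs = PhotoScan.CoordinateSystem("EPSG::26742")    # NAD27 / California zone II

# re-import the surveyed GCPs: NAD27 northing/easting + NGVD29 elevations
dup.loadReference("gcps_nad27_ngvd29.csv",             # placeholder file name
                  format=PhotoScan.ReferenceFormatCSV,
                  columns="nyxz", delimiter=",")       # label, northing, easting, elev
dup.updateTransform()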

LR

9
General / Re: Convert NAVD88 to NGVD29 vertical datum
« on: December 29, 2016, 05:41:11 AM »
Hi Alexey,

I've optimized and generated the dense point cloud, DEM, mesh, tiled model, and contours in "NAD83(HARN) / California zone 2 (ftUS)  (EPSG::2871)."  The surveyor has also provided elevations in both NAVD88 and NGVD29 vertical datums, but they provided horizontal (Northing and Easting) coordinates only in NAD83, not NAD27.

I used "Convert" in the Reference pane to convert the model generated in NAD83/NAVD88 to NAD27 by selecting "NAD27 / California zone II (EPSG::26742)."  My thinking was to do the horizontal conversion NAD83 --> NAD27 first, then manually enter the surveyor's elevations of GCPs relative to NGVD29 into the model to obtain accurate X,Y,Z coordinates in NAD27 / NGVD29.   

The resulting horizontal (X,Y) coordinates for my GCPs in NAD27 after the conversion are shifted approximately 17 feet in the X-Y plane relative to their northing and easting positions when converted to the same coordinate system (EPSG::26742) using ArcGIS or AutoCAD.  I'm surprised that Photoscan's default conversion gives such a different result from ArcGIS or AutoCAD, and I'm curious what the reason could be.
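
One way to isolate the discrepancy is to reproject a single GCP with the Python console and compare against the ArcGIS/AutoCAD value (a sketch; the coordinates below are placeholders, not my real GCPs):

Code:
import PhotoScan

nad83 = PhotoScan.CoordinateSystem("EPSG::2871")    # NAD83(HARN) / CA zone 2 (ftUS)
nad27 = PhotoScan.CoordinateSystem("EPSG::26742")   # NAD27 / California zone II

pt83 = PhotoScan.Vector([6000000.0, 2000000.0, 100.0])    # easting, northing, elev
pt27 = PhotoScan.CoordinateSystem.transform(pt83, nad83, nad27)
print(pt27)   # compare against the same point reprojected in ArcGIS/AutoCAD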

As a possible workaround, I'm thinking of duplicating the chunk with the model in NAD83/NAVD88, applying the Photoscan conversion from NAD83 (EPSG::2871) to NAD27 (EPSG::26742), then re-importing the GCPs using Northing and Easting coordinates from ArcGIS in NAD27 and elevations in NGVD29 provided by the surveyor.  Does this approach sound like it would work?

To resolve this difficulty for future projects, I'd like to create a custom PRJ conversion from NAD83/NAVD88 (EPSG::2871) to NAD27/NGVD29 (EPSG::26742).   Can you advise how to generate such a custom PRJ conversion and how to use it in Photoscan?

Thanks in advance for your help,
LR


10
General / Re: Convert NAVD88 to NGVD29 vertical datum
« on: December 07, 2016, 06:48:09 PM »
Thank you, Alexey!  I have GCP survey data in both NAD83/NAVD88 (using Geoid 12a) and NAD27/NGVD29.  I see you have the TIFF for Geoid 12a at the link you sent (very helpful). 

Could you please walk me through the steps for conversion or point me to the right place in the user manual or tutorials?  I'm hoping it won't require me to run the model again from the optimization through dense cloud generation, since they're the same points, just different coordinate systems.  For this project, there's a need to use both coordinate systems for comparison with historical maps using different horizontal / vertical datums, and we have survey data for the GCPs in both NAD27/NGVD29 and NAD83/NAVD88 (Geoid 12a). 

The images were aligned in WGS84 using the "reference" setting and using the embedded GPS metadata in the photos.  After alignment, I "converted" the coordinate system from WGS84 to NAD83/NAVD88 (EPSG::2871) and imported the GCPs in NAD83/NAVD88 (Geoid 12a).  (However, I didn't use the Geoid 12a file from the link you just sent me.  I hope this is ok since the surveyor already converted the data using this Geoid.)  Then I optimized and built the dense cloud.  All seems well with the error statistics for the GCPs.

To make the conversion to NAD27/NGVD29, I'm expecting that I need to put the Geoid 12a file in a specific location, and import the GCP survey data in NAD27/NGVD29,  thereby replacing the NAD83/NAVD88 survey data in the Reference pane.  Should I use the "convert" button before or after importing the GCPs in NAD27/NGVD29?  Or, instead of using "convert," do I simply change the coordinate system in the "Settings" dialogue box since I have the option of using survey data for either coordinate system?
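
If it helps anyone later: as far as I understand, the "specific location" for the geoid file is the geoids subfolder of the Photoscan installation directory.  The copy itself is trivial (both paths below are placeholders for your own download and install locations):

Code:
import os, shutil

geoid_tif = r"C:\Downloads\us_noaa_g2012a.tif"                   # placeholder path
geoids_dir = r"C:\Program Files\Agisoft\PhotoScan Pro\geoids"    # placeholder path

os.makedirs(geoids_dir, exist_ok=True)
shutil.copy(geoid_tif, geoids_dir)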

11
General / Convert NAVD88 to NGVD29 vertical datum
« on: December 07, 2016, 10:47:31 AM »
Hi,

I'm trying to convert a model with a vertical datum of NAVD88 [specifically, NAD83(HARN) / California zone 2 (ftUS) (EPSG::2871)] to a vertical datum of NGVD29 by using the "Convert" button in the Reference pane.  When I select the coordinate system [NAD27 + NGVD29 height (EPSG::7406)], I get the error message, "Unsupported vertical datum."  Any help converting this model to NGVD29 would be appreciated.

Thanks,
Limpopo River

12
General / Re: GCP Help
« on: November 28, 2016, 07:01:39 AM »
Hi Alexey,

I made an error about the tie point accuracy in the Reference Settings; the value I mentioned referred to setting the Projection Accuracy to 0.1 in an earlier version of the Reference Settings in Photoscan (http://www.agisoft.com/index.php?id=31), not the Tie Point Accuracy.  The question still stands about whether it's ok or not recommended to change any of the Reference Settings while performing iterations of the optimization steps.

A further question:  When do you recommend using Gradual Selection to select Projection Accuracy and Reconstruction Uncertainty?  Elsewhere, I've seen recommendations to use threshold values of 2 for Projection Accuracy and 10 for Reconstruction Uncertainty in the Gradual Selection dialogue box.
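
For reference, the same Gradual Selection passes can be run from the Python console (a sketch using the documented PointCloud.Filter API, with the threshold values mentioned above):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
f = PhotoScan.PointCloud.Filter()

f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReconstructionUncertainty)
f.selectPoints(10)                            # threshold 10, as recommended
chunk.point_cloud.removeSelectedPoints()

f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ProjectionAccuracy)
f.selectPoints(2)                             # threshold 2, as recommended
chunk.point_cloud.removeSelectedPoints()

chunk.optimizeCameras()                       # re-optimize after removing points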

Thanks again!

13
General / Re: GCP Help
« on: November 28, 2016, 02:41:45 AM »
Hi Alexey,

Under Optimize Camera Alignment on p. 9 of the linked tutorial (PDF), it says "On the Reference pane uncheck all photos..."  What is the reason for this?  How does it optimize the camera alignment and the calibration if all the images are turned off?  Or does this simply tell Photoscan not to use the GPS data associated with the images while continuing to perform bundle adjustments to optimize camera alignment?

Also, is it ok or not recommended to change any settings in the Reference Settings pane as the optimization proceeds?  For example, after several iterations of Gradual Selection for Reprojection Error to remove points with higher errors, could one use a lower value for marker accuracy and tie point accuracy as the calibration proceeds?  Another expert who teaches Photoscan has told me that the tie point accuracy of 4 seems very high and that it should be possible to reduce this value to 0.5 or less (he typically sets it at 0.1).
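
For what it's worth, if changing the setting mid-optimization is acceptable, the tie point accuracy is also settable from the Python console (chunk.tiepoint_accuracy is the property name in recent versions; older builds used chunk.accuracy_tiepoints):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.tiepoint_accuracy = 0.5    # pixels; lowered from the old default of 4
chunk.optimizeCameras()          # re-run the optimization with the new weight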

Thanks

14
General / Re: 3D Printing file standards? pre-flight checklist?
« on: January 29, 2016, 09:29:23 AM »
I'd echo all of ChrisD's list, and would also recommend decimating the mesh to fewer than ~100K polygons.  The polygon limit will depend on the type of 3D printer and the software used to slice the model into layers to produce G-code instructions for the printer.  Larger meshes can take a very long time to slice, especially if the layers are very thin.  I found that Meshlab (open source) and Meshmixer (free) have useful tools for extruding a surface into a printable volume, performing minor edits and repairs, and checking the mesh for the kinds of errors that ChrisD mentioned.
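
If you're scripting the pipeline, the decimation step looks like this in the Photoscan Python API.  A sketch only: decimateModel is documented, while exportModel's arguments vary by version (it should pick the format from the extension), so check yours:

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.decimateModel(face_count=100000)   # stay under the ~100K slicer-friendly budget
chunk.exportModel("print_model.stl")     # placeholder path; STL is what most slicers take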

I'll offer some additional thoughts based on my admittedly limited experience using PLA printers (others' experience may differ, and feel free to correct anything below that may be out of date).  An alternative to shelling is to print the object with a honeycomb-like internal structure that both provides structural rigidity and reduces the volume of material in the print.  This can usually be specified in the slicer software's parameters, so you don't necessarily have to produce a shelled model before handing it to the printer; you can simply provide a water-tight 3D mesh.  The slicer software will let you choose the amount of open space and the type of internal structure.  The 3D printing service will often have suggestions for parameters to use, since printers vary.  Many 3D printers and slicers will accept either an STL or OBJ file.  If you produce an OBJ file, you'll also likely need to provide an MTL file that describes material properties for the mesh.

Color (RGB) printing typically requires more expensive 3D printers and different materials than single-filament PLA printers.  Some filament printers have multiple print heads that allow printing with different colors of filament, but I don't think they can reproduce mixed RGB color.  ABS has some qualities that PLA lacks, but it's typically a bit more expensive, and printing it successfully requires a 3D printer with a heated bed.  The next step up for color printing is a gypsum-like material, often described somewhat misleadingly as "sandstone."  You'll see examples of this material on Shapeways.com, but there are other similar vendors.

You'll find a variety of 3D printing services on 3DHubs.com (an Uber-like service), for example.  Browsing this site is a good way to learn about the variety of available printers, materials, and costs, which can vary a lot.  I searched a while before I found a local person who provided a high-quality PLA print of a 100-percent scale 3D model that was about 10 x 3 x 6 inches (180 cubic inches) for what I considered a reasonable cost.

Best of luck,
LR

15
General / Re: Realignment of selected (unaligned) cameras
« on: January 29, 2016, 08:32:21 AM »
I recently had exactly the same problem, and there is a simple fix that doesn't require realigning the entire image set.  First, save your work.  Then select the misaligned cameras, right-click the selection in the Workspace pane, and choose "Reset Camera Alignment" from the drop-down menu.  It might help to include some of the properly aligned cameras that overlap the misaligned images.  (If View>Show Cameras is turned on, you'll see the blue rectangles representing the selected cameras change to blue spheres.)  Then right-click the selection again and choose "Align Selected Cameras."  This runs the alignment step only on the cameras you selected, while the non-selected cameras retain their alignments.  The selected cameras are realigned using the same settings you used for the initial camera alignment.
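
The same workflow from the Python console looks roughly like this (a sketch; the cameras argument to alignCameras exists in recent API versions, but older builds may not accept it, in which case stick to the GUI route above):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
bad = [cam for cam in chunk.cameras if cam.selected]   # cameras picked in the GUI

for cam in bad:
    cam.transform = None          # equivalent of "Reset Camera Alignment"

chunk.alignCameras(cameras=bad)   # realign only these; others keep their alignment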

This is a nice feature of Photoscan, and I thank the developers for providing it.  I had already aligned and optimized over 600 images when I discovered that a subset of them had become misaligned during one of the optimization iterations.  I didn't want to repeat the alignment and re-optimize the entire data set, and "Reset Camera Alignment" saved the day.  After the misaligned cameras were reset and realigned, I did re-optimize the sparse point cloud, but it went very quickly and I didn't lose the optimization of the rest of the camera positions.

Best,
LR
