Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - James

Pages: [1] 2 3 ... 51
General / Re: Scanning objects in an uncontrolled environment
« on: July 11, 2024, 10:54:02 AM »
For improving the texture you could try:

  • Estimate image quality - this will give each image a score according to how sharp it is, which you can use to filter out blurry images. Once quality estimation has run, you can sort by quality in the details view of the Photos pane and disable the n% worst images.
  • Add frame images - if you use a normal camera (rather than extracting frames from your existing 360 images) to get higher-quality coverage of the object, then align these images with your current dataset, you can leave them disabled for mesh generation, then enable them and disable the 360 images for texture generation.
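Purely to illustrate the first bullet, here's a sketch of the "disable the n% worst images" step, assuming quality scores like those produced by Metashape's image quality estimation (one sharpness score per image, higher = sharper). The image names, scores, and the 40% threshold are invented example values.

```python
# Given per-image sharpness scores, pick the worst n% to disable.
# Scores and names below are illustrative, not real Metashape output.

def worst_images(quality_scores, percent):
    """Return image names making up the worst `percent` by quality."""
    ranked = sorted(quality_scores, key=quality_scores.get)  # blurriest first
    n = round(len(ranked) * percent / 100)
    return ranked[:n]

scores = {"IMG_001": 0.81, "IMG_002": 0.42, "IMG_003": 0.77,
          "IMG_004": 0.35, "IMG_005": 0.69}
to_disable = worst_images(scores, 40)  # disable the worst 40%
```

In the GUI the same thing is done by sorting the Photos pane by quality and disabling the bottom slice by hand; a script just makes the cutoff explicit.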

I'm a bit dubious about 360 cameras, because I'm fairly sure it's physically impossible for them to truly have a single 'spherical centre': they normally work by having two sensors looking opposite ways, just very close together. They might be stitched very nicely, but I don't think the geometry quite works as a perfect spherical image, so I wouldn't expect the alignment to work very well.

Once the images are converted to equirectangular there's probably not much you can do about that, but if you are able to access the individual raw fisheye images before they are stitched, you may get a better alignment. If they are provided as a pair of fisheyes in the same image, then make a copy of each image and mask out the left lens in one set and the right lens in the other.

If you're stuck with equirectangular video frames, then you may have luck identifying where the stitch line is - it should be a vertical seam down the middle, or a pair of vertical seams somewhere else (I guess!). You can then take the same approach as above: copy the images and mask out either side of the seam(s), so that Metashape is able to ignore the incorrect geometry caused by assuming a single spherical sensor when there were really two close together - so long as the images weren't warped too badly in the process of stitching.
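The bookkeeping for the dual-fisheye variant could be sketched like this: make two copies of each image and mask a different half in each, so every copy only shows one physical lens. The box format (left, top, right, bottom) and the image size are invented for illustration; the actual masks would be drawn in an image editor or imported as Metashape masks.

```python
# For a side-by-side dual-fisheye frame, compute which half to mask in
# each of the two copies so that each copy shows only one lens.
# Box format (left, top, right, bottom) is an illustrative convention.

def dual_fisheye_masks(width, height):
    half = width // 2
    return {
        "copy_right_lens": (0, 0, half, height),     # mask LEFT half here
        "copy_left_lens": (half, 0, width, height),  # mask RIGHT half here
    }

masks = dual_fisheye_masks(3840, 1920)  # example 2:1 frame size
```

The same idea extends to the equirectangular seam case: instead of halves, you'd mask either side of the identified seam column(s).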

I've never tried it, it's all totally hypothetical, and probably a dangerous rabbit hole to get lost down.

A better solution would be to use a normal camera for scanning 'objects', whereas 360 cameras are useful for enclosed spaces, based on the simple rule of thumb that the thing you are scanning should fill the image. In an enclosed or interior space the 360 camera is great, as the image is full of what you are trying to scan - it's everything around you.

General / Re: Scanning objects in an uncontrolled environment
« on: July 10, 2024, 04:53:01 PM »
In general you need to try to fill the frame of each image with the object you're scanning.

For 'thin' objects this means getting closer to them, and therefore taking a lot more photos.

If the thin object doesn't take up much of the image Metashape won't know that it's 'important' and it will just align based on what else it can see.

If you mask out everything in the background and only leave a small part of the frame unmasked (the thin object), then the angles to matching points won't be well enough distributed to give a good 3D position estimate (alignment), or there might simply not be enough unmasked image to find matching points at all.

If you do fill the frame of each image with your object but it has no texture, then it also won't find many useful matching points.

So you may need to include some background in the images to help it align. Don't worry too much about cars, people or clouds - Metashape won't find many points there anyway, and only worry about masking moving trees once you've got a good alignment and want to make it better.

If you get closer to your object then background objects will start to go out of focus anyway, assuming your camera is not fixed-focus like a GoPro etc.

Using an external point cloud as a data source is only applicable to the Pro version, and won't really help photos that can't align by themselves. Markers are also a Pro only feature, and manually aligned photos that wouldn't align by themselves will still not be very useful when you come to the meshing stage.

Trying to force photos to align that just won't align by themselves is a thankless task, even with the Pro version, and it's almost always necessary to go back and capture more well-focussed photos with plenty of the object in them and lots and lots of overlap.

Yes, depth filtering will have more impact on thin objects, but I've never found that changing that setting helped when I was getting bad results with the default.

I would almost always use depth maps method for mesh generation and skip the dense point cloud generation step.

For a ~2m fence 'post', for example, I would take ~30 photos in a circle around it, all facing in towards it and angled down to include the bottom half and some of the ground. Then take another ring of 30 photos looking more horizontally to get the middle and some background. Finally, hold the camera high above your head so you can capture the top half without pointing the camera at the sky. The sky is no use, cloudy or clear!

That should give you 90 photos that align OK, but depending on the texture of the post and the distance to it, it might still not give you a nice clean model. If it's a shiny or clean plain post then it will be very hard, but if it's a nice old wooden one then you can get better results just by getting closer and taking more photos.
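The three-ring pattern above can be sketched numerically: positions on circles around a post at the origin, each camera yawed to face the centre. The radius and ring heights below are invented example values, not a recommendation.

```python
import math

# Three rings of camera positions around a post at the origin, each
# camera yawed to point back at the post. Radius/heights are examples.

def camera_ring(n, radius, height):
    """n positions on a circle of given radius, each facing the origin."""
    cams = []
    for i in range(n):
        a = 2 * math.pi * i / n
        x, y = radius * math.cos(a), radius * math.sin(a)
        yaw = math.degrees(math.atan2(0 - y, 0 - x))  # towards the post
        cams.append((x, y, height, yaw))
    return cams

ring_low = camera_ring(30, 2.0, 1.0)   # angled down: bottom half + ground
ring_mid = camera_ring(30, 2.0, 1.6)   # roughly horizontal: middle
ring_high = camera_ring(30, 2.0, 2.5)  # held overhead: top half
positions = ring_low + ring_mid + ring_high  # 90 photos in total
```

In practice you'd walk the circle rather than compute it, of course; the point is the even angular spacing and the three height bands.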

General / Re: Depth Maps Turning Black with Large Drone Dataset
« on: June 26, 2024, 01:13:10 PM »
Not sure if this is relevant, but see page 40 of the manual (if you haven't already):

  • Tie point limit parameter allows to optimize performance for the task and does not generally
    affect the quality of the further model. Recommended value is 10 000. Too high or too low a tie
    point limit value may cause some parts of the point cloud model to be missed. The reason is that
    Metashape generates depth maps only for pairs of photos for which the number of matching points is
    above a certain limit. This limit equals 100 matching points, unless moved up by the figure "10%
    of the maximum number of matching points between the photo in question and other photos,
    only matching points corresponding to the area within the bounding box being considered".
  • The number of tie points can be reduced after the alignment process with the Tie Points - Thin Point
    Cloud command available from the Tools menu. As a result, the tie point cloud will be thinned, yet the
    alignment will be kept unchanged.

I can't quite get my head around what this really means, because if depth maps are generated for 'pairs of photos', how is it that they appear to be assigned to individual photos in the UI? But without twisting my head around it too painfully, I can imagine that if a photo has an unreasonably large number of tie points with another photo - perhaps because it's actually taken from a very similar position - then that may raise the '10%' threshold too high for more suitable matching photos to be able to contribute to a proper depth map, or something.
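The pair-selection rule quoted from the manual, and the failure mode speculated about here, can be sketched as follows. The match counts are invented to show how one near-duplicate photo with a huge match count could raise the 10% threshold above otherwise-suitable pairs.

```python
# Sketch of the rule: a pair contributes to depth maps only if its
# matching-point count exceeds max(100, 10% of the best pair's count).
# The counts below are invented to illustrate the failure mode.

def usable_pairs(match_counts):
    """match_counts: matching-point count per candidate photo pair."""
    threshold = max(100, 0.10 * max(match_counts.values()))
    return [pair for pair, n in match_counts.items() if n > threshold]

matches = {
    "near_duplicate": 9000,   # taken from almost the same position
    "good_neighbour_1": 700,
    "good_neighbour_2": 650,
}
# threshold = max(100, 900) = 900, so only the near-duplicate survives
print(usable_pairs(matches))
```

Thinning the tie points would pull the 9000 down towards the other counts, lowering the threshold and letting the good neighbours back in, which is presumably why the manual suggests it.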

So anyway, maybe a red herring, but if you're stuck you could try the suggestion in the note and use 'thin point cloud' to ensure a more uniform density of tie points, to rule out this possibility.

General / Re: Gradual Selection Order?
« on: June 20, 2024, 12:49:29 PM »
If you do all three and only optimise at the end, then it makes no difference what order you do them in.

Try duplicating a chunk and doing it one way in one, and the other way in the other, and see if you can see any difference in number of points or error values, or anything.

It would only make a difference if you optimised after each gradual select/delete, but I don't know why you would do that.
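A toy demonstration of why order doesn't matter when you only optimise at the end: each gradual-selection pass deletes points failing a fixed threshold, and without re-optimisation in between, the thresholds are applied to unchanged values, so the survivors are the same intersection either way. The point values, criteria names, and limits are all invented.

```python
# Toy model: three filter passes over fixed per-point values. Without
# re-optimisation between passes, the order of passes cannot change
# which points survive. All numbers are illustrative.

points = {
    "p1": {"reproj": 0.4, "recon": 20.0, "proj_acc": 3.0},
    "p2": {"reproj": 1.2, "recon": 12.0, "proj_acc": 2.0},
    "p3": {"reproj": 0.3, "recon": 55.0, "proj_acc": 8.0},
    "p4": {"reproj": 0.5, "recon": 18.0, "proj_acc": 4.0},
}

def select(pts, key, limit):
    return {k: v for k, v in pts.items() if v[key] <= limit}

def run(pts, order):
    limits = {"reproj": 1.0, "recon": 50.0, "proj_acc": 5.0}
    for key in order:
        pts = select(pts, key, limits[key])
    return set(pts)

a = run(points, ["reproj", "recon", "proj_acc"])
b = run(points, ["proj_acc", "recon", "reproj"])
assert a == b  # same survivors regardless of order
```

If you optimised between passes, the per-point values would change after each deletion, and then order genuinely could matter, which is the distinction made above.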

General / Re: Texture Quality - what affects this?
« on: June 17, 2024, 07:27:33 PM »
I'd still like to hear from anyone that knows what the term "intensity" refers to.

I believe it's pixel 'brightness'.

So of all the source photos that cover any given point on the model, it will choose the source photo that is brightest at that point, or for min intensity it will choose the darkest.

But I've never found a good use for it.
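Under the assumption stated above (that 'intensity' means pixel brightness), max/min intensity blending would behave like this per texel: of all the source photos covering a point, pick the brightest (max) or darkest (min). Photo names and brightness values are invented.

```python
# Toy per-texel max/min 'intensity' blending, assuming intensity means
# pixel brightness. Photos and brightness values are illustrative.

def blend(per_photo_brightness, mode="max"):
    """Pick the source photo with the max (or min) brightness at a texel."""
    pick = max if mode == "max" else min
    return pick(per_photo_brightness, key=per_photo_brightness.get)

texel = {"IMG_A": 142, "IMG_B": 201, "IMG_C": 97}
winner_max = blend(texel, "max")  # IMG_B contributes this texel
winner_min = blend(texel, "min")  # IMG_C contributes this texel
```

One could imagine using min intensity to suppress specular highlights (the brightest view of a shiny point is usually the reflection), but as noted above, it rarely seems to help in practice.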

Another good reason to take your photos perpendicular to the surface you are shooting is that you will get more of your surface in the region of the focal plane.

General / Re: Striping artifacts when orthorectifying drone imagery
« on: June 05, 2024, 04:07:11 PM »
Were cameras gimbal mounted on both drones?

I guess the Mavic 3 would have been, allowing it to maintain a perfect nadir angle, whereas if the MicaSense used a fixed mount, then if it was anything other than straight down as it flew eastward, it would be off by the same amount in the opposite direction on the return westward (assuming that the drone rotates 180° to return, and that the pitch of the drone itself is negligible - I'm not a drone person!).

So if the Mavic maintained a perfect nadir angle and the Matrice/MicaSense combo didn't, then you would expect different reflections on alternating stripes with the latter, which wouldn't be evident in the former.
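The geometry guessed at above can be sketched numerically: with a fixed (non-gimbal) mount, a small constant pitch tilts the view in the direction of travel, so when the drone turns 180° for the return leg the off-nadir offset flips sign, giving alternating view angles on alternating flight lines. The 3° pitch is an invented example value.

```python
import math

# East-west component of a fixed along-track camera tilt, as a
# function of heading (0 degrees = eastward). A constant mount pitch
# flips sign between the outbound and return legs of a lawnmower
# pattern. The pitch value is illustrative.

def offnadir_component_deg(mount_pitch_deg, heading_deg):
    """East-west component of the fixed along-track tilt (heading 0 = east)."""
    return mount_pitch_deg * math.cos(math.radians(heading_deg))

east = offnadir_component_deg(3.0, 0.0)    # eastward pass: +3 degrees
west = offnadir_component_deg(3.0, 180.0)  # westward pass: -3 degrees
```

That sign flip is exactly the alternating-stripe signature: every other flight line views the ground (and its reflections) from the mirrored direction.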

Did you try calibrate colors, or check if the images on alternating stripes were visibly different side by side?

As I understand it, the 'convert images' function will not convert a fisheye image to rectilinear; it'll just convert it to an ideal fisheye image. So perhaps you got lucky in the variation workflow where the camera type was still set to the default 'Frame' rather than 'Fisheye'?

Just a thought anyway!

Not sure if this is a bug or a feature, but using 'convert images' on a set of fisheye images, the output still seems to be a fisheye rather than a rectilinear image.
Hello James,

The undistortion operation result corresponds to the selected camera type, so if the camera type is Fisheye, then you should get an "ideal" fisheye image.

Take a look at this Orthomosaic seamline editing (patching) tutorial. I think it should be all you need to fix your orthomosaic.

General / Re: Export Model with Scale Bar Visual
« on: January 10, 2024, 05:49:48 PM »
If your model is to scale, and you have/make your own model of a scale bar, then you can import the scale bar model to your chunk and merge the models.

In order to place the scale bar where you want it in relation to your model, you would have to import it to a separate chunk first and then turn on 'show aligned chunks' (though you may have to tweak the scale/position/rotation of the scale bar chunk, using the transform object tools, before it will be interpreted as an 'aligned chunk'). Then you can use the transform object tools to put it where you want it. Finally, merge the chunks to get a single model including both your scan and the scale bar.

I've not tried it!

General / Re: Problems aligning cameras
« on: December 12, 2023, 07:43:27 PM »
I would first try to identify 1 single unaligned photo that you think has sufficient overlap with at least 2 of your 390 aligned photos, and post it here along with the 2 aligned ones to see if anyone can identify why it didn't work.

One possible reason is insufficient overlap, but you did say it was high so it might not be that.

Another possibility is that 'generic preselection' didn't identify those particular images as a good match (because it only looks at lower resolution versions to work out the pairs) and so it never tried to align them. You could test that by using a smaller number of images (a handful from the 390 aligned, and a handful of close but unaligned neighbours from the rest) in a chunk with all preselection disabled (because that's much slower) just to check if the images do overlap enough.

If that works, then you could try grouping your photos into chunks of ~400 that do align with generic preselection, plus some connecting chunks of fewer, but definitely overlapping, images which can be aligned with preselection disabled. Once you have everything aligned in separate chunks, you can align the chunks by 'cameras', then remove the connecting chunks and merge the rest, and optionally do Tools -> Tie Points -> Build Tie Points to generate actual matches between the groups. But this is getting a bit advanced, and if you've had to go to these lengths to get here then it's likely that your images aren't very good or the overlap is inadequate, so results may still be poor.
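The grouping step can be sketched as follows, assuming a roughly sequential capture order so that neighbouring images in the list overlap. The chunk size of 400 and connector size of 20 are invented example values.

```python
# Split an ordered image list into main chunks of ~400 (aligned with
# generic preselection) plus small connector chunks straddling each
# boundary (aligned with preselection disabled). Sizes are examples,
# and this assumes capture order roughly follows spatial overlap.

def plan_chunks(images, chunk_size=400, connector_size=20):
    mains = [images[i:i + chunk_size]
             for i in range(0, len(images), chunk_size)]
    half = connector_size // 2
    connectors = [images[b - half:b + half]           # straddles boundary
                  for b in range(chunk_size, len(images), chunk_size)]
    return mains, connectors

imgs = [f"IMG_{i:04d}" for i in range(1000)]
mains, connectors = plan_chunks(imgs)
# 3 main chunks (400 + 400 + 200) and 2 connectors of 20 images each
```

Each connector shares images with the main chunks on both sides of its boundary, which is what lets 'align chunks by cameras' tie everything together before the connectors are discarded.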

General / Re: Scan inside and outside a house?
« on: November 30, 2023, 08:13:26 PM »
One possibility would be to disable all interior images while building the mesh of the exterior.

General / Re: Seamlines re-use
« on: October 27, 2023, 12:16:25 PM »
+1 for a method to do this please.

I often spend a long time manually assigning images, only to realise later that I should have done colour correction, or not done it, or added some more photos, or improved the alignment - and then the subsequent rebuild of the orthomosaic causes all the shape/image assignments to be lost.

I'm not saying it can't be done currently, but I don't know how.

General / Re: Build Point Cloud
« on: October 27, 2023, 11:44:07 AM »
It seems like Metashape doesn't automatically switch to the dense cloud view after building the dense cloud.

You can switch to it by either double clicking the newly created "Point Cloud" in the workspace/chunk explorer bit, or Model -> View Mode -> "Point Cloud - (various options)" or the Point Cloud icon in the toolbar (showing a 3x3 grid of points).

If none of those work then I guess it really didn't do anything, like you say!

General / Re: How to get extrinsics from XML
« on: September 01, 2023, 06:30:24 PM »
Don't know if this helps, but I struggled with the XML format almost 10 years ago and think I came up with something that worked for me, though I haven't returned to it since.

Take a look anyway, and see if this helps:

General / Re: New Depth Map each mesh generation
« on: August 29, 2023, 05:36:04 PM »
Is this the way it's supposed to work?

Yes, except building the mesh doesn't require a dense point cloud to be built first. Both the dense point cloud and the mesh are derived directly from depth maps.

The quality you specify in the 'build mesh' dialog corresponds to the resolution of the depth maps, so each time you change the quality setting, new depth maps need to be generated at the appropriate resolution.

I would think that once I've generated the dense point cloud any mesh is going to be a derivative of that and a new depth map shouldn't be required.

If you do need/want both dense point cloud and mesh, then the depth maps for one can be reused for the other, but only if you opt to generate both with the same quality setting.

Furthermore, though I think this goes beyond what you're asking, it is possible to keep depth maps for multiple quality levels, but you have to manually right-click the depth maps in the chunk explorer workspace and uncheck the 'Set as default' option so that they will not be overwritten when you generate at a different quality level. Then, when it comes to reusing them to build a mesh or point cloud, you have to ensure that the appropriate set of depth maps (for the quality you want to build at) is set as default before you can opt to reuse them in the build mesh/build point cloud dialog.
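A toy model of the reuse behaviour described above: depth maps are keyed by quality level, and a build step reuses them only if a set at the same quality exists and is marked default. The class, method names, and labels are all invented for illustration; this is not the Metashape API.

```python
# Simplified model of depth-map reuse across builds: reuse only when a
# default set at the requested quality already exists, otherwise
# regenerate. Names are invented; this is not the real API.

class DepthMapStore:
    def __init__(self):
        self.maps = {}        # quality -> depth map set (just a label here)
        self.default = None   # which quality is currently 'set as default'

    def generate(self, quality):
        self.maps[quality] = f"depth_maps@{quality}"
        self.default = quality
        return self.maps[quality]

    def build(self, quality, reuse=True):
        if reuse and self.default == quality and quality in self.maps:
            return self.maps[quality]   # reused, no recomputation
        return self.generate(quality)   # regenerated at this quality

store = DepthMapStore()
store.build("high")    # first build: generates depth maps
store.build("high")    # same quality and still default: reuses them
store.build("medium")  # different quality: regenerates
```

The manual 'Set as default' toggle corresponds to switching `store.default` between kept sets by hand, which is what lets you hop between quality levels without regenerating each time.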
