

Messages - James

76
Face and Body Scanning / Re: Issue with Alignment in Photogrammetry rig
« on: November 11, 2020, 05:32:55 PM »
Guessing from the file naming convention that you are using 6 columns of 10 images? Regardless of the size of the room, I think 60 degrees between columns (360°/6) is going to be too much for Metashape to work with.

My suggestion would be to try 12 columns of 5 images, or 15 columns of 4 images, even if it means you can only cover the top half of the body. (You can then tweak the configuration once you have a working alignment!)

Also you might as well turn the cameras to portrait orientation to get more of the subject and less of the background in each shot.

If that helps, then maybe try rearranging cameras so you have a bit more density around the head/face or wherever you need it.

I'd say it's better to start with a well-aligned set and then tweak it towards what you want (because when it stops aligning, you'll know which change caused it) than to start with a non-aligned set where you have nothing and no idea what to tweak.

77
General / Re: Agisoft Metashape 1.7.0 pre-release
« on: October 28, 2020, 05:54:18 PM »

[For laser scans] Current approach is the following:
...
- run Align Photos operation as usual.
Does that mean that the poses of the laser scans will be adjusted in addition to the internal and external parameters calculated for the standard cameras?

As a user who commonly combines laser scan data with photogrammetry, that is not at all what I would want to happen. The laser scan data will almost always come from other software, where we have much more control over the alignment process. The laser scan data should therefore be treated as fixed (no pose adjustments) and the photos aligned to it.

...

Yes, it can be said that the laser scans are aligned together with the common images.

If you need to rely on the accurate scanner locations, you can input corresponding coordinates to the Reference pane for the related camera instances and adjust the accuracy value for them in order to increase the weight for them.

...

I would also like to see an option (a simple checkbox or similar) to force the laser scan alignment to be preserved exactly when it has already been registered in a 3rd-party package; otherwise the function would not be useful. It seems like a lot of manual work to input the coordinates (and presumably orientation?) of each laser setup just to ensure they stay fixed. I take great care over my registration and wouldn't trust any algorithm to do a better job! We treat our laser data as sacred: once it is registered, everything else (so many things...) is derived from it, but cannot be allowed to affect it!
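For what it's worth, if this ends up being scripted, I imagine the Reference-pane workaround would look something like the sketch below in the Python API (untested; the scan label and coordinates are made up):

Code: [Select]
import Metashape

# Untested sketch: give each laser scan "camera" the position it was assigned in
# the registration package, with a very tight accuracy, so the optimizer has
# almost no freedom to move it. Labels and coordinates here are made up.
chunk = Metashape.app.document.chunk
scan_positions = {"Scan_001": (1000.000, 2000.000, 50.000)}  # from the 3rd-party registration

for camera in chunk.cameras:
    if camera.label in scan_positions:
        camera.reference.location = Metashape.Vector(scan_positions[camera.label])
        camera.reference.location_accuracy = Metashape.Vector([0.001, 0.001, 0.001])  # metres
        camera.reference.enabled = True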

I haven't actually tried 1.7 yet so I might not know what I'm talking about, but it seemed like a potential issue!

78
General / Re: "Camera" vs "Image"
« on: October 25, 2020, 12:36:10 AM »
No idea of the history behind the current terminology, but here's a related thread which might be of interest, if not helpful.

https://www.agisoft.com/forum/index.php?topic=12259.0

I'm terrible at using the proper words for things, so I'm in no place to pass judgment myself! I can imagine, though, that in a large team where effective communication is vital, ambiguous or inaccurate terminology could lead to confusion in some situations.

79
This suggests it is expecting to load a multipage texture file.

texture.png is the first page and texture1.png is the second. It may even be expecting more pages after that. You can check in the GUI by going to Tools->Mesh->View Mesh UVs and looking at the page numbers in the bottom left.

But I've had a look through the API documentation and I can't find any good way to determine how many texture pages a model has, other than by specifying it at the point of building it.

One bad way I did find: each face of the model contains a reference to the texture page (tex_index) it uses, so if you loop through all the faces you could take the maximum tex_index to get the page count. That won't help if all the faces referring to the 'higher' page numbers have been removed from the model (which may be what happened in your case), and it won't help if you simply don't have the texture page files to load anyway!
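For the record, the loop I mean would be something like this (untested, and slow on a big mesh):

Code: [Select]
import Metashape

# Walk every face of the active model and take the highest texture page
# index (tex_index) it references. Page indices are 0-based, hence the +1.
model = Metashape.app.document.chunk.model
page_count = max(face.tex_index for face in model.faces) + 1
print("Texture pages referenced by faces:", page_count)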

80
I think this may shed some light on it:

In the version 1.6 you can disable markers in the Reference pane (not just uncheck, but right-click and choose Disable option) - in this case neither their coordinates nor projections will be considered for the optimization stage.

Read the whole thread for the full context and it might make more sense:
https://www.agisoft.com/forum/index.php?topic=11793.0

81
General / Re: How to use Spherical Panorama Images as a source?
« on: October 16, 2020, 03:13:25 PM »
2-3m between camera positions is probably too great a distance in such an enclosed area.

I have used spherical imaging inside a large old church with a spacing of about 5m, and that worked OK because when you move 5m and look up you still see almost the same thing from almost the same angle. Also, in an old church everything is textured and full of detail for Metashape to find and match between images.

Between the two images you shared, I can see that you only moved ~2m, but the only thing common to both with any detail for Metashape to work with is the timber framing, and the angle and distance to it are quite different in each image, so it will have a hard time making the connection. The ceiling and floor are also common to both images, but again the distance you moved means the potentially 'matchable' areas look too different to be matched.

I would suggest trying images at more like 0.5m spacing. If you can't get back to the property to do that, then the other option is to add markers to your images, which will help Metashape align them and might help it find other matching points (I'm not too sure about that last part).

If you do the marker-based approach, I suggest attempting to align all images as normal, and if any align badly, reset their alignment (right-click -> Reset Camera Alignment). If you aren't sure whether they're aligned correctly, reset the alignment too. If none align, that's OK.

Methodically start adding a small number of markers to a small number of images. The easiest way to add markers to two images is to open the first image by double-clicking on it as normal, then right-click on the second image and select 'Open in New Tab'. Now right-click on the new tab that opened and select 'Move to Other Tab Group', and you can see the two images side by side. Find two images that you believe should align and add ~5-8 markers to each of them, well distributed across the images. Then select the two images and right-click -> Align Selected Cameras.

If that works, try adding your ~5-8 markers to another image that you believe should align to the first two. If you have to add more markers to the first two to get 5-8 in the third image, that's OK, but you will need ~5-8 markers in all three images to get the third to align. If you can't find an image containing 5-8 features that also exist in the two aligned images, then it's just never going to work. 5-8 is just a number I made up; you might get away with fewer, but obviously you can't go much lower! The fewer markers you use, the less chance of making mistakes and getting in a muddle (hence the 5 in 5-8), but I think you should probably aim for at least 8.

For each subsequent image, you need to add 5-8 markers that exist in at least two other aligned images. If you can add markers to more than three images, then you should. The easiest way, once you have a few images aligned, is to select a marker, right-click -> Filter by Markers, then cycle through the photos using Page Up/Page Down; the view will automatically centre on the estimated marker location, which you can then refine by clicking/dragging. If you just see a red-white line instead of an estimated marker position, it means the marker is only placed in one aligned image (not two or more), so in the current image Metashape is saying it could be anywhere along that line (although it could be somewhere off the line if the alignment is bad!).

Save often, and after aligning each new image click the 'Optimize Cameras' button. This adjusts the alignment of all currently aligned images to take the newly aligned images and any newly added markers into account. It's important to save because optimization can't be undone, and if you've misplaced any markers or done something else wrong, the whole alignment could go wrong.

This is the most fun you can have with a computer, as far as I am concerned. It's great. If you get in a muddle, you've probably added too many markers to too many images and there are some misplaced ones contradicting the well-placed ones.

At the end you will probably have a set of aligned images but still very few points in your point cloud. That's because the walls of the apartment are all painted white and there is nothing that can be matched between images: white is white wherever you see it on the wall. You will have a big 'cloud' of markers, though, and if you're 3D-minded you could export those points to something like Blender and manually build the geometry, if geometry is what you are after.
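If you do go down that route, here's a rough (untested) sketch of dumping the marker coordinates to a plain XYZ text file that Blender or anything else can read:

Code: [Select]
import Metashape

# Write label + world coordinates for every triangulated marker.
# Assumes the chunk has a coordinate system; skip the crs.project() step if not.
chunk = Metashape.app.document.chunk
T = chunk.transform.matrix

with open("markers.xyz", "w") as out:
    for marker in chunk.markers:
        if marker.position is None:  # marker not triangulated yet
            continue
        p = T.mulp(marker.position)  # internal -> world coordinates
        if chunk.crs:
            p = chunk.crs.project(p)
        out.write("%s %.3f %.3f %.3f\n" % (marker.label, p.x, p.y, p.z))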

82
General / Re: Selection Mode
« on: September 28, 2020, 12:41:35 AM »
No, but the Tools -> Mesh -> Filter by Selection tool might be a good workaround here.

You make a selection, apply the filter, then reorient the view to make a further selection and filter again, repeating as necessary. Then make your measurements unimpeded, and reset the filter when you want to see the full model again.

83
General / Re: Intelligent paint... is it of any use for you?
« on: September 22, 2020, 05:21:23 PM »
I had a fiddle with it after I saw your first post, but came up with nothing!

84
Bug Reports / Re: agisoft.com - Private messages broken (?)
« on: September 15, 2020, 01:32:29 PM »
As soon as they switched the 'look and feel' of the forum, I found the way to switch it back to the SMF default theme because I really didn't like the new one.

I just tried the 'new' one again and yes the private messages are overlapping for me too.

You could try the old SMF default curve theme (my preference), as messages seem to be working fine there.

You have to hunt through profile settings to change it, or this link might take you straight there https://www.agisoft.com/forum/index.php?action=theme;sa=pick;u=2673;theme=1;

85
General / Re: Calibrate colors - Undo (revert)
« on: September 11, 2020, 11:08:22 AM »
It's a bit counterintuitive.

In the Calibrate Colors dialog there is a <Reset> button. The colour calibration is reset as soon as you click it, but if you then click <OK> it begins calibration again, so you have to hit <Cancel> instead.

It won't necessarily update the photo thumbnails, though, at least not straight away, but if you open each photo you should see that the colours have been reset.

As for the dense cloud: Reset won't update point colours that have already been generated, but Tools -> Dense Cloud -> Colorize Dense Cloud will reapply the colours from the source images, although I don't know whether that is much faster than regenerating the dense cloud (presumably it is!).

86
General / Re: Validate invalid matches
« on: September 04, 2020, 12:39:53 PM »
I think the terms 'valid' and 'invalid' might be misleading; they would more accurately be named 'used' and 'unused'.

A set of key points can only be 'used' to align an image if they are matched in at least two other aligned images.

Your screenshot shows that the selected image has 120 matches with one other image, and just 20 and 15 with another two images.

Assuming the 20 and 15 matches are garbage, or not enough to do much aligning with, the 120 'good' matches are no use because they only link this image to one other. And if that other image can't be aligned for similar reasons, they're no use anyway.

What I would probably do is copy the chunk, reset alignment for all images, then select the two images you have shown above (..913_419 & ..913_779) and align just those two selected images. They will probably align to each other, but if the overlap doesn't carry through to other images then you might not be able to align anything else to that small subset; it would be one way to find out, though.
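In Python that experiment would look roughly like this (an untested sketch; the labels are shortened placeholders for your real filenames):

Code: [Select]
import Metashape

# Duplicate the chunk, reset every alignment, then match and align only the
# two cameras in question to see whether they will align to each other at all.
chunk = Metashape.app.document.chunk.copy()

for camera in chunk.cameras:
    camera.transform = None  # reset alignment

pair = [c for c in chunk.cameras if c.label.endswith(("913_419", "913_779"))]
chunk.matchPhotos(cameras=pair, reset_matches=True)
chunk.alignCameras(cameras=pair)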

87
General / Re: Inverted normals laserdata
« on: September 02, 2020, 02:14:34 PM »
Just in case you are using Cyclone to export your laser data, the latest version (2020) allows you to export normals, whereas previously it seems it wasn't doing that properly/at all.

I have only tried it with the E57 format, but I was having the same problem you show until I upgraded my version of Cyclone.

It does seem like Metashape should be able to do a better job of estimating normals, though, as quite often it estimates them pointing away from the scanner position, which I believe is included in the E57 and PTX file formats, so it would have something to check against.

Even if you do get your normals all pointing the right way, the filter dense cloud function seems to scramble them all again, so do any decimation before you import if possible!

88
General / Re: problem with generating dense cloud and mesh
« on: September 02, 2020, 12:40:30 PM »
Could it be that your 'region' (bounding box) is not deep enough to contain all of the data?

If you look from the side view instead of from above, you might find that the region is slicing off just the top of the data.

If you started with only a few cameras aligned and then aligned more in subsequent steps, that could also explain it, as below:

The bounding box region is not updated after using Align Selected Cameras, so it does not include the area covered by the newly aligned cameras, and that area is not processed during Build Dense Cloud.

The solution is to use Tools -> Reset Region, or the rotate/resize region tools, to include the required volume before building the dense cloud.
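If you happen to be scripting this, I believe the equivalent is just a one-liner:

Code: [Select]
import Metashape

# Recompute the region so it encloses the currently aligned data,
# then Build Dense Cloud will cover everything.
chunk = Metashape.app.document.chunk
chunk.resetRegion()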

89
General / Re: Reprojection Error: Gradual Selection vs. Chunk Info
« on: August 19, 2020, 06:24:32 PM »
It looks like your slider in the gradual selection dialog box is not all the way to the left.

If it were all the way to the left, the max number should read 0.848666, which is the max reprojection error in the units of the key point scale.

I don't know what 'units of the key point scale' means, but it's what Alexey said here: https://www.agisoft.com/forum/index.php?topic=11869.msg53289#msg53289  ;D
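If you want to read that number without fiddling with the slider, something like this should print the same maximum value (an untested sketch against the 1.6-era Python API):

Code: [Select]
import Metashape

# Ask the tie point filter for the per-point reprojection error values and
# take the maximum - this should match the right-hand end of the slider range.
chunk = Metashape.app.document.chunk
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
print("Max reprojection error:", max(f.values))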

90
General / Re: "Zero Resolution" Error
« on: August 05, 2020, 11:55:37 AM »
In that case my guess is that it's probably something like this: https://www.agisoft.com/forum/index.php?topic=9968.0

If the focal length was wrongly estimated, that could potentially compress the Z values of everything, so even an undulating or badly aligned surface could come out flat, and the camera altitudes could also be compressed, leading to positions on or close to the surface. At least that's how I explain it to myself without being very aware of the maths involved!

If you look in the Camera Calibration window and compare the initial and adjusted focal lengths, they should be fairly similar (assuming the initial estimate was sensible), but if Metashape has estimated a really dodgy focal length it could be wildly different. My guess from the numbers you gave is that it should be about 3500 px.

If that's the case, and you have a previous project that worked well with this drone/camera, perhaps you could export the camera calibration from it to use here; otherwise I believe it is possible to manually enter and fix the focal length. I'm a bit out of practice so I don't remember the exact procedure, and I don't have the software handy to look at - I normally just trial-and-error these things anyway!
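For reference, I think manually entering and fixing the focal length via the Python API looks roughly like this (untested, and I'm not certain of the exact attribute names; 3500 is just my guess above, not a known value for this camera):

Code: [Select]
import Metashape

# Give the sensor an initial calibration with the guessed focal length (in
# pixels) and hold 'F' fixed so alignment/optimization can't wander off it.
chunk = Metashape.app.document.chunk
sensor = chunk.sensors[0]

calib = Metashape.Calibration()
calib.width = sensor.width
calib.height = sensor.height
calib.f = 3500  # assumed focal length in pixels - just the guess above
sensor.user_calib = calib
sensor.fixed_params = ["F"]  # keep the focal length fixed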

Other things to try might be enabling rolling shutter correction, if your drone camera is of that type (I found it worked best to enable this at the alignment stage, rather than only in subsequent optimisation). Also maybe uncheck/discard the camera orientation (yaw/pitch/roll) values in the Reference pane if they exist and are likely to be at all inaccurate (I'm not sure if that helps, but I'd try it if it were me!).
