Agisoft Metashape
Agisoft Metashape => General => Topic started by: Andreas1206 on February 09, 2021, 03:55:17 PM
-
Hello!
In a project with a depth-maps-based reconstruction of a whole violin, I am getting a mesh with holes in V1.7.1 where V1.6.5 was able to build a continuous, closed mesh. This is independent of mesh interpolation being turned on/off and of the depth map filtering parameter (Mild vs. Aggressive). I noticed in the log file that a lot of images are not used in depth map generation due to:
"2021-02-09 10:30:33 filtering neighbors with too low common points, threshold=50..."
In V1.65 all images were used.
@Alexey: Is there maybe a tweak to change the threshold so that images with fewer neighbors are used as well? Do other people experience similar problems, where meshes are reconstructed worse in V1.7.1 than in the previous version?
Thanks and best wishes!
Andreas
-
Hello Andreas,
Unless I am mistaken, the old depth map generation method (in the 1.6 version) also used the same threshold of 50 valid tie points for the depth map generation process.
You can try to reduce the default threshold for the new depth maps via the following tweak: BuildDepthMaps/pm_point_threshold. If that does not help, it would be helpful if you could share the dataset so that we can reproduce the problem on our side.
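For anyone new to tweaks: they are entered as name/value pairs in the Tweaks dialog, which I believe is reached via the Advanced tab of the Preferences window. A sketch of the entry; the dialog path and the value 25 are my assumptions (the default threshold is reportedly 50):

```text
# Tools > Preferences > Advanced > Tweaks > Add   (dialog path assumed)
Name:  BuildDepthMaps/pm_point_threshold
Value: 25
```

The tweak takes effect on the next Build Depth Maps run; existing depth maps are not changed.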
-
Hi Alexey,
Thank you for your reply and the information. I tried again with the suggested tweak and several different common-point threshold values, down to 5. Even though more images are now used, the holes in the mesh constructed by V1.7.1 persist.
Attached is a screenshot of a reconstruction of part of a violin with V1.7.1 compared to V1.6.5. It is the exact same Metashape file with the exact same .dng images, opened and processed in each corresponding version. This time interpolation is turned on, so the holes are closed, but the surface topography is much cleaner in V1.6.5.
What could be the reason that V1.7.1 seems to have much more trouble reconstructing the surface from the exact same data set? I would like to send you the data set so you can verify the problem, but it is a very large set with 655 .dng files. How could I transfer it to you?
Thank you and best regards,
Andreas
-
Hello Andreas,
The dataset would be helpful. If you can share it via any file transfer service, please send the link to support@agisoft.com. Otherwise, please send an email requesting an FTP account for uploading the data to our server.
-
Hello Andreas,
Thank you for sharing the problematic dataset.
It seems that the problem is similar to the issue reported in another thread:
https://www.agisoft.com/forum/index.php?topic=13075.0
The problem is caused by the presence of masks for the aligned images: due to a bug in 1.7.1, it may lead to incomplete depth maps being generated in the project. The bug will be fixed in version 1.7.2.
-
Hello,
Very good to know, as I also have this problem. Thank you for the fast resolution.
Are the errors in mesh reconstruction from depth maps, as shown in the attachment, also linked to this specific issue?
Thank you
-
Yup this is my main issue with 1.7.x I surely hope that this can get resolved sooner rather than later.
Mak
-
Hello jnb,
If you can send the sample data (original images and the project file with the alignment results) to support@agisoft.com, we will check if there is any issue with the new depth maps or mesh generation for this problem.
-
Hello Alexey,
Thank you for looking into it. I can send you the data, but the images are big (a 37 GB folder). Would it be possible to have FTP access to your servers?
-
Hello jnb,
I have sent the FTP access credentials via PM. Let me know when the data is uploaded.
-
Hello Alexey,
I have uploaded the data to your FTP server and sent an email to support.
Thank you for your help.
-
Hello jnb,
Thank you for sharing the sample dataset.
Below is a screenshot of the same area for the depth maps + mesh generated in the 1.7.2 pre-release (High quality and Mild filtering). So the problem could be related to the same issue mentioned before: depth map generation in the presence of masks in 1.7.1.
Please check if you are able to get proper results in the 1.7.2 pre-release in Medium quality as well.
-
Alexey,
1.7.2 still exhibits the same issues (maybe slightly less severe than in 1.7.1):
(https://i.postimg.cc/W3bx8NtJ/Screenshot-2021-02-22-002539.jpg)
Mak
-
Thanks for your response.
For this particular area, 1.7.2 seems to work (Medium quality, Mild filtering), but the main issue of holes/incorrect reconstruction in the mesh is still present, as stated by Mak.
See the top of the model in the attachment.
-
Hello Alexey,
By any chance, did you have time to look further into this issue?
I can confirm that this behaviour is present every time in 1.7.2, though its severity fluctuates.
-
FYI, version 1.7.2 build 12040 doesn't seem to bring any improvement to this issue on Medium/High, unfortunately.
(https://i.postimg.cc/MpykTYJv/Screenshot-2021-03-15-035159.jpg)
(https://i.postimg.cc/Gm0wd5NW/Screenshot-2021-03-15-035239.jpg)
But Ultra High looks better (albeit still with unwanted geometry floating around; thankfully that can easily be discarded in two clicks).
(https://i.postimg.cc/0jbjRB36/Screenshot-2021-03-15-040209.jpg)
Mak
-
And in the full release? (https://www.agisoft.com/downloads/installer/)
-
This latest 1.7.2 version seems to be very buggy too.
This one just popped up while building a dense cloud:
"clSetKernelArg pm_estimate_cost_per_neighb_uchar_uchar_3_2#19 (4 bytes) failed: CL_INVALID_ARG_SIZE"
-
Hello Renato,
Can you provide additional lines from the log related to the failed operation, and specify whether you are using the same hardware as before?
-
Hi Alexey
I'm going to test a little further to collect data, but the problem is there and it's reproducible.
Attached are the processing logs from two different machines.
Best Regards
-
Hi Alexey
I tested several sets of UAV imagery and it fails on all of them. The only project I was able to finish was one with images without any GPS info (handheld camera).
Something is indeed wrong with this version with regard to imagery from UAV aerial surveys. I hope you nail down the problem quickly.
Best Regards
-
Hello Renato,
We have just updated 1.7.2 to build 12065, please check if it solves the GPU-based processing problem.
-
Hi Alexey
The bug is gone! This one seems to be perfectly nailed.
Thank you for your diligence and speed :-)
Best regards
-
I took the time to test again with v1.7.2 build 12070 and the mesh is still not good. I am still using v1.6, the last bug-free version for me.
Edit: added an example of the issue in the depth maps and the correct result from 1.6.
-
Unfortunately, the pre-release of 1.7.3 (https://www.agisoft.com/forum/index.php?topic=13045.msg58730#msg58730) doesn't fix the issue at all, and that's a real shame.
Mak
-
Indeed, exactly the same results, with and without masks... There is a real problem with the new depth maps.
-
Can you show the photographs used to reconstruct this area?
Full resolution, ideally.
-
Here is a link to a data sample: https://1drv.ms/u/s!Apz9qKkthJ4ywl8fYCfKLbMlbGi1?e=YvOVhp
The photos are 8-bit TIFFs converted from 16-bit raw.
But feature extraction is pretty much the same between v1.6 and v1.7.
-
Thank you Alexey for looking into it.
Here are the differences between the PM mark and SGM mark depth maps. As you can see, on some PM mark depth maps, the depth is inaccurate.
-
What is this SGM & PM depth maps thing?
Mak
-
PM mark is the new depth map generation method introduced in v1.7, that's all I know!
-
I've tested jnb's sample photos and the reconstruction is near perfect. You might have been using a wrong parameter. I used everything at default and at the highest quality in the latest 1.7.3 version. See the attached image. Also check this small video of the reconstruction.
https://u.pcloud.link/publink/show?code=XZxiCWXZkVzTqnhjlx8MdH14gX8cXLssAlrV
-
Thank you for testing, @RHenriques. Unfortunately, the error in the mesh is difficult to see on the sample, as the data set is vastly incomplete and was only made to share image quality.
Alexey reprocessed the whole set on his side in High quality, and the result in v1.7.3 is, alas, not good, as you can see in the attachments. He also processed the sample, and v1.7.3 is also incorrect compared to v1.6.
-
It's strange because, on my side, with default parameters and version 1.7.3, everything comes out fine, as you can see in the above images, even when done with your 11-image subset. Can you make an extraction of all the parameters used? My test was done with mesh reconstruction from depth maps with all parameters at default. Try to do things from the beginning in version 1.7.3 and keep the 40,000/4,000 values for the key point and tie point limits, respectively, in the alignment step. Also check that you are not using too much overlap in the complete set; sometimes too many photos can lead to a wrong reconstruction.
If you use only the 11 images from your test set, are the results similar to mine?
Could it be related to hardware differences?
-
If someone wants to play with it, I updated the sample data set to be more comprehensive.
Just to be on the safe side, I also reprocessed it with PM mark and SGM mark, with all other settings at default: Medium quality, Mild filtering.
Here is the tweak given by Alexey to switch the depth map method: BuildDepthMaps/pm_enable
Edit: @RHenriques, I just saw your post. If you have the time, I am curious to see your results with the more complete data sample. I also didn't try to process it myself in High quality, as it is too time-consuming and not needed for my work. That is, I think, the only difference.
-
Yes, either that or mask your photos by depth, so only sharp areas remain (see Generate Masks command).
-
I think it's perfectly clear from the results shown in this thread that the new depth map generation method suffers from issues, given that the results are vastly different (better) when the depth maps are generated via the previous method with the exact same data set, using exactly the same settings.
Mak
-
Hi Jnb,
I processed a model with 14 images downloaded from your shared drive. Comparing the two depth map generation versions (the latest PM and the previous one), I do not see a great difference between the two. Both models were generated from depth maps with Medium quality and Mild filtering. Note that the PM version generates much more detail (double the number of faces). However, in the case of a low-performance GPU (GeForce GTX 670MX), depth map generation time is greatly increased for the latest PM version. See the attachment... maybe this brings some more insight into this issue.
-
Hello Paulo,
The fewer images, the less visible this issue seems to be. In my mind, missing parts are likely to be extrapolated the same way, leading to close results between the two versions.
But if I want this area to be fully reconstructed, I need to use more images, which leads to the previously mentioned problem.
Nevertheless, you can see on the depth maps that depth is not estimated in this area with the new method, whereas it is with the old one.
But I agree with you: the new mesh is really good and detailed, except on this spot, which is a shame!
-
This is the most relevant information as JNB noted.
https://www.agisoft.com/forum/index.php?topic=13066.msg58741#msg58741
Some of the depth maps seem to be completely wrong.
BTW jnb, which GPU are you using?
Mak
-
Hello Mak,
Yes indeed !
i7-7820HK and GTX 1070 SLI
-
Jnb,
As a test, did you try to reconstruct the mesh after disabling the depth maps that seem to be wrong (keeping all the others, obviously)?
Disable the photos, then regenerate the mesh with the "Reuse depth maps" option selected.
Mak
-
Hi,
You mention 1.7.3. It is not available for download. Where did you get it from? Thanks.
-
https://www.agisoft.com/forum/index.php?topic=13045.msg58730#msg58730
-
I did not, as this seems to have no impact on the concerned area. Still, this is strange behaviour. The main issue here, I think, is the lack of depth information in this particular area of interest.
The PM mark depth map method also seems not to take the bounding box into account.
I have uploaded my .psx with both PM and SGM versions, tested with about 70 and about 35 cameras, if someone wants to look at it: https://1drv.ms/u/s!Apz9qKkthJ4ywl8fYCfKLbMlbGi1?e=XuLJh2
Also, if it helps to visualise the problem, here are some comparisons (heatmap and points) between the SGM model as reference and the PM models (on about 35 cameras).
-
Hello,
I am still experiencing the same issues of very unstable mesh reconstruction in the V1.7.x versions as well.
Reducing the image count seems to prevent some reconstruction defects (see attached images), but of course this causes other areas of the mesh to be badly reconstructed, as information is missing there.
Masking out defocus areas also appears to help, but the "Mask defocus areas" option very often chooses the masked areas unpredictably, masking out quite in-focus areas and leaving completely blurry parts. Turning on the "Fix coverage" option pretty much just seems to create masks that contain all the image parts used in the model, no matter whether they are in focus or not. Strangely enough, defocus mask generation appears to work much better when the defocus masks are based on a scaled mesh created in V1.6.5, so I assume this has something to do with the new depth map generation process too?
It's too bad the new depth map generation method causes so many problems, as it really is quite impressive how much more detailed the meshes are in the areas where the reconstruction works well. But compared to earlier versions, V1.7.x at the moment is not reliable for my work (close-range photogrammetry of cultural heritage). This really is quite frustrating and is taking up a lot of time testing, recalculating meshes, comparing them, using different versions, etc.
I really hope this can soon be improved,
Andreas
-
After a few tests with jnb's complete set, I reinforce the suspicion that the problem is linked to excess overlap. Even if we use the old-school dense cloud generation (which I still prefer for UAV images), things clearly get worse with the complete 73-image set. If we lower it to 37 images, the reconstruction is more reliable, with fewer holes in the problematic area. This happens no matter which quality we choose; I've tested from Medium to Ultra High and the problem is similar. Maybe Agisoft needs to apply some sort of tweak to the reconstruction parameters depending on the overlap used. Excess overlap consistently increases the probability of holes in the mesh and noisy geometry.
-
Wouldn't this just be the tweak suggested by Alexey in earlier threads? It limits the maximum number of neighbors used in depth map creation, right?
main/depth_pm_max_neighbors
-
Hi Andreas1206. Probably this is the one. My suggestion to Agisoft is that the parameter (this one or another) might adapt automatically, without user intervention, based on the detected image overlap. If we tweak it by hand, it will be a huge guessing exercise.
-
Hi Andreas,
I believe the correct syntax for the tweak is BuildDepthMaps/pm_max_neighbors 70
The above example would set the maximum number of neighbors used in PM depth map generation to 70.
But I agree with RHenriques: it would be great if the program set these tweaks according to the type of imagery, overlap, resolution, and CPU/GPU hardware so as to get optimised results for the desired products (mesh model, dense cloud, ...).
This would avoid trial and error with various tweaks to find the best result!
I hope support can look at this :)
-
That is dangerous, though. I am, on the contrary, of the opinion that more parameters should be exposed so that we can test things out (and ignore them if we don't need them).
-
Unfortunately, this tweak has no impact on this issue. To my understanding, it is a valid tie point threshold; see Alexey's post here: https://www.agisoft.com/forum/index.php?topic=13066.msg57974#msg57974
It seems to me that the new depth map method deals differently with low-tie-point areas, leading to no depth reconstruction there.
I really don't know if this is relevant, but lowering the threshold to 10 doesn't change the "limiting neighbors to 40" behaviour:
filtering neighbors with too low common points, threshold=10...
avg neighbors before filtering: 72 (0% filtered out)
limiting neighbors to 40 best...
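Reading those log lines, the selection appears to happen in two stages: first drop neighbors whose common tie point count falls below the threshold, then keep only the N best. A toy sketch of that logic (my assumption of how it works, not Metashape's actual code):

```python
# Toy sketch (assumed logic, not Metashape internals): two-stage neighbor
# selection as suggested by the log lines above.
def select_neighbors(common_points, threshold=10, max_best=40):
    """common_points: dict mapping neighbor id -> number of common tie points."""
    # Stage 1: filter out neighbors with too few common points.
    kept = {n: c for n, c in common_points.items() if c >= threshold}
    # Stage 2: limit to the max_best neighbors with the most common points.
    return sorted(kept, key=kept.get, reverse=True)[:max_best]

# With 72 candidates all above the threshold, stage 1 filters nothing
# ("0% filtered out"), yet stage 2 still trims the list to 40 best.
candidates = {f"cam_{i}": 50 + i for i in range(72)}
assert len(select_neighbors(candidates, threshold=10)) == 40
```

If this reading is right, it would explain why lowering pm_point_threshold cannot push the neighbor count past the cap applied in the second stage.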
-
Hi wojtek,
Good point. But I think both could be profitable. Let MS set the tweaks according to your particular case, but still have these tweaks and their parameter values appear in an advanced list so that the user can change some of them if he wants. For this, though, it is important that a description of each tweak and its use is given, because without that we are working pretty much blind.
-
Spot on.
-
Hi jnb,
If I understood this correctly, those are different tweaks:
BuildDepthMaps/pm_point_threshold: this is a tie point threshold, the minimum number of tie points needed between two neighbors for them to be used in depth map creation. At first I thought this was the reason for the bad mesh reconstruction in my case, since my log said that many photos were not used because of too few common tie points. But then Alexey said that this threshold did not change compared to V1.6.5, and while lowering the threshold did lead to more photos being used, the mesh quality did not improve.
BuildDepthMaps/pm_max_neighbors (sorry for the outdated syntax in my earlier post; thanks, Paulo): this one does change the maximum number of neighbors used for depth map creation, so I guess it should limit the overlapping images used. But strangely enough, I just finished a series of reconstructions of the same area as in the image in my post above, once with a 40, once with a 20, and once with a 10 neighbor limit, and all of them show more or less the same amount of mesh defects. Maybe this is because the total number of images/depth maps used for the mesh generation does not change with this tweak; what changes is only the number of neighbors used for the depth map creation of each individual image. What definitely does seem to improve mesh quality is using fewer images in total.
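The point about pm_max_neighbors not reducing the number of depth maps can be sketched as a toy model (my assumption about the behaviour, not Metashape's actual code):

```python
# Toy model (assumed behaviour, not Metashape internals): capping neighbors
# per image shrinks each depth map's input list, but every enabled image
# still yields one depth map, so mesh generation fuses the same number of maps.
def plan_depth_maps(neighbor_lists, max_neighbors):
    """neighbor_lists: dict mapping image -> ordered list of neighbor images."""
    return {img: nbrs[:max_neighbors] for img, nbrs in neighbor_lists.items()}

images = [f"img_{i}" for i in range(5)]
neighbors = {img: [o for o in images if o != img] for img in images}

jobs_40 = plan_depth_maps(neighbors, 40)
jobs_2 = plan_depth_maps(neighbors, 2)
assert len(jobs_40) == len(jobs_2) == 5           # depth-map count unchanged
assert all(len(v) == 2 for v in jobs_2.values())  # per-image neighbors capped
```

This would be consistent with the observation that only disabling images outright (not lowering the cap) changes the mesh result.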
So neither of these tweaks appears to really solve the problem. I somehow can't imagine this is a problem that can simply be solved by the user applying one of the few tweaks provided in this forum; that sounds like guesswork with very limited understanding and tools to me. Of course I agree that having more adjustable parameters in the reconstruction process would be useful and interesting and would allow the user to optimize results. It would also make the whole photogrammetric reconstruction process in Metashape a bit less of a "black box". But I believe the issues we are experiencing clearly have to be resolved on the developer's side, as they appear to be more fundamental than anything that can be fixed by experimentally playing around with tweak settings.
Don't get me wrong: I'm very impressed by and grateful for this amazing software. But right now there definitely seems to be an issue with the last version step, causing problems (at least in close-range photogrammetry, which I do) that were simply not there in the previous 1.6 versions.
best wishes,
Andreas
-
Indeed, I read the tweak too quickly...
For the rest, I couldn't agree more
-
Alexey,
Can we get an official word on this issue from the team? An ETA?
cheers
Mak
-
Hello,
Unfortunately, the issue is still there in version 1.7.3 build 12473.
Bad mesh in difficult areas compared to 1.6, and occasionally bad depth estimation in the depth maps.
See screenshots attached.
-
I'm unfortunately not expecting a fix until v1.8, but I hope I'm wrong :(
Mak