Messages - Arie

31
Bug Reports / Re: Computer Specification Advice
« on: September 10, 2020, 01:08:41 PM »
andyroo made some great suggestions, and Puget Systems is a really good resource for deciding on hardware.
If time permits, I would wait for Nvidia's new 3000 series of GPUs. They seem very promising for FP32 calculations, which Metashape relies on.
Cheers!

32
General / Re: Depth maps
« on: September 10, 2020, 12:46:13 PM »
Hi Paul,
take a look in the project folder (*.files\1\0\depth_maps\depth_maps.zip). There you'll find the depth maps as 32-bit EXR files.
If I remember correctly, you have to check "Keep depth maps" under the advanced settings for them to be stored.
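If you don't want to dig through the zip manually, the depth maps can also be saved via the Python API. A minimal sketch, assuming the 1.6-era scripting interface ("Keep depth maps" must have been enabled, and the output file names are just examples):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
for camera in chunk.cameras:
    if camera.transform is None:
        continue  # skip unaligned cameras, which have no depth map
    # image() decodes the stored depth map; saving as .exr keeps 32-bit floats
    depth = chunk.depth_maps[camera].image()
    depth.save("depth_{}.exr".format(camera.label))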
Cheers!

33
General / Re: Sparse and dense cloud export
« on: September 10, 2020, 12:40:27 PM »
Hi Paul,
are the sparse and dense clouds in the same position in Metashape when you toggle between the views? Are the export settings identical?

I've just tested exporting the sparse and dense clouds from an unreferenced project, and after loading both into CloudCompare, they are in the same position (Metashape 1.6.4).
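For reference, scripting the two exports guarantees identical settings. A minimal sketch against the 1.6-era Python API (the file names are arbitrary):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
# Export both clouds the same way, then compare them in CloudCompare
chunk.exportPoints("sparse.ply", source_data=Metashape.PointCloudData)
chunk.exportPoints("dense.ply", source_data=Metashape.DenseCloudData)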
Cheers.

34
General / Re: Minimum percent overlap
« on: September 10, 2020, 12:27:51 PM »
With 30% overlap, how will you create a DEM/mesh for proper orthorectification?

35
General / Re: Minimum percent overlap
« on: September 09, 2020, 01:34:10 PM »
Hi,
the usual recommendation for aerial projects is 80% forward and 60% side overlap. Some even fly with 80% and 80%.
You can get away with less than that, but your accuracy can suffer quite a bit, and it's no fun trying to correct that afterwards.
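To make the numbers concrete, overlap translates directly into trigger spacing and flight-line spacing. A quick worked example (the footprint values are hypothetical, for illustration only):

Code: [Select]
# Ground footprint of a single image, e.g. GSD multiplied by sensor resolution
footprint_along = 60.0   # metres, along the flight direction
footprint_across = 40.0  # metres, across the flight direction

forward_overlap = 0.80   # 80% forward overlap
side_overlap = 0.60      # 60% side overlap

# Each photo advances by the non-overlapping fraction of the footprint
trigger_spacing = footprint_along * (1 - forward_overlap)  # 12 m between shots
line_spacing = footprint_across * (1 - side_overlap)       # 16 m between lines
print(trigger_spacing, line_spacing)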
Cheers!

36
General / Re: Problem alignment
« on: September 08, 2020, 11:35:06 AM »
Hi,
this has happened to me before. Have you tried re-aligning the misaligned cameras (right-click the selected images, "Align Selected Cameras")?
I don't have an example project at hand to test, but you might have to reset the alignment first (right-click the selected images, "Reset Camera Alignment").
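The scripted equivalent of that reset-and-realign sequence would be roughly this. A sketch, assuming the 1.6-era Python API:

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
misaligned = [cam for cam in chunk.cameras if cam.selected]
for cam in misaligned:
    cam.transform = None  # same effect as "Reset Camera Alignment"
# Re-align only the selected cameras; the others keep their pose
chunk.alignCameras(cameras=misaligned, reset_alignment=False)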
Cheers!

37
General / Re: good photos, good dense cloud, BAD TEXTURE
« on: September 07, 2020, 02:33:41 PM »
Hi Costas,
since I don't reprocess old projects super frequently, I might have made an error. I did a little research and in an older thread I found this statement from Alexey:

"If the depth maps are present in the project and you are selecting the further processing option that assumes depth maps generation, then you will be able to re-use them (skip there generation again), providing that the same quality/filtering settings are selected.
Note that new mesh currently requires only depth maps generated with Mild filtering option, so in your workflow you would be able to re-use depth on Step 6 (build mesh) only if on set 3 (build dense cloud) you have used Mild filtering option, and on Step 6 select the same quality as on the Step 3."

The last sentence no longer seems to be true. I just tested it with depth maps generated with the "Mild" and "Aggressive" filters (dense cloud), and both could be reused in the meshing step. In my quick comparison, the depth maps (and the resulting mesh) generated with the "Mild" filter setting (dense cloud) are equal to the new depth maps created by "Build mesh". In "Build mesh" you can no longer adjust the filter; it is automatically set to "Mild" when calculating new depth maps.
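If you want to reproduce the test, the scripted version looks roughly like this. A sketch, assuming the 1.6-era Python API (downscale=4 corresponds to Medium quality):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
# Depth maps with Mild filtering, matching what "Build mesh" generates itself
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()
# Mesh directly from the existing depth maps
chunk.buildModel(source_data=Metashape.DepthMapsData)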

I probably misinterpreted my results when testing the two approaches and actually compared the Mild/Aggressive filter settings rather than the resulting depth maps.
Thanks for pointing that out!

38
General / Re: good photos, good dense cloud, BAD TEXTURE
« on: September 06, 2020, 05:47:40 PM »
Hi Dave,
I'll try to keep it short and simple. Regarding your first question, the sparse cloud represents the feature points that were used to calculate the camera pose*. If you delete points from the sparse cloud, either by selecting them manually or with the gradual selection tool, and then run "Optimize Cameras", the camera pose will be recalculated without the deleted points (warning: if you delete too many points, it might not be possible to re-align the images).
This can improve the accuracy of the camera pose, since some feature points may be "unstable". For example, vegetation can move slightly during image acquisition; if such points are used, the overall accuracy of the camera pose decreases. And the more accurate your camera pose, the better the quality of your high-resolution model.
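Scripted, that clean-up and optimize cycle looks roughly like this. A sketch against the 1.6-era Python API; the 0.3 threshold is only an example:

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
# Gradual selection: remove sparse points with a high reprojection error
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
f.removePoints(0.3)  # example threshold; watch how many points this removes
# Recalculate the camera pose without the removed points
chunk.optimizeCameras(adaptive_fitting=True)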

With the new depth-map-based approach to calculating the high-res model, you will have to clean up the mesh instead of the point cloud. But it is a lot more efficient, so you do not have to clean as many outliers etc. as with the dense cloud approach. AFAIK you cannot classify a mesh, but it should be possible to export the mesh, use CloudCompare to extract its vertices as points, and re-import those into Metashape for classification. But I haven't tried that yet.
The quality and speed of the depth-map-based approach are quite an improvement, and I would highly recommend it.

Hope that helps.
Cheers!


* Just in case you are not familiar with the term: camera pose describes the position and orientation of the images (i.e. the result of "Align images").

39
General / Re: good photos, good dense cloud, BAD TEXTURE
« on: September 05, 2020, 04:49:08 PM »
Hi majou,
thank you for the screenshots. Since all texturing settings are identical (except for c being processed with 1.6.3), the error must be somewhere else.
There are some settings you can optimize that might influence the quality of your model, including the texture:

1. For alignment, please try accuracy "High" so that feature detection is computed at full image resolution. With the "Medium" setting, the images are downscaled, which can lead to a slightly larger error in the camera pose.

2. Are you familiar with the optimization process? It can improve quality to delete stray points in the sparse cloud, including points in the area surrounding the trench (vegetation and other "moving" subjects) as well as points detected in out-of-focus areas, and then run a camera optimization (the star symbol under "Reference"). You can also use gradual selection to automatically select points with a high reprojection error (under Model - Gradual Selection...).

As an example: when I process excavation sites, I first run the alignment (accuracy: High). After creating the sparse cloud, I add my GCPs to get an initial impression of the accuracy. I uncheck all markers (so they are used only as checkpoints) and start cropping the sparse cloud, i.e. removing the surrounding area (usually full of shrubs, grass etc., which move ever so slightly during image acquisition). Additionally, I run the gradual selection with a reprojection error of about 0.25 - 0.3 (watch the number of sparse points selected) and delete those points as well. After that I run "Optimize Cameras" with "adaptive camera model fitting". Finally, check the accuracy of your GCPs and, if necessary, enable some of them to adjust the camera alignment (some should always stay unchecked for validation).

3. You should try Agisoft's new algorithm for computing the high-res model: instead of building the dense cloud, go directly to "Build mesh" and use "Source data: Depth maps". Please note that if you reprocess old projects, the depth maps from the dense cloud might still be present, so make sure to uncheck "Reuse depth maps" (under Build Mesh - Advanced settings).* The quality and speed of this approach are a significant improvement over the old algorithm.

4. When texturing your model, do not hesitate to use texture sizes larger than 4096. Of course it depends a lot on your intended use, but I frequently use texture sizes from 8192 to 16384 px and sometimes even more. Performance-wise it is better to have one large texture than multiple smaller ones (e.g. one 8192 x 8192 px texture holds as many pixels as four 4096 x 4096 px textures).
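For step 4, the scripted version would be along these lines. A sketch, assuming the 1.6-era Python API:

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
# One large 8192 px texture page instead of several small ones
chunk.buildUV(mapping_mode=Metashape.GenericMapping, page_count=1)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)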

Good luck!

*Sorry, bad advice on my part.

40
General / Re: Camera Alignment
« on: September 03, 2020, 04:44:03 PM »
Hi Anton,
I took a look at the images and your approach with masks etc. is solid. Which settings did you use for aligning the cameras?
In general, you should try to optimize the coverage of the subject, i.e. use the entire image frame to capture it.
Cheers.

41
General / Re: good photos, good dense cloud, BAD TEXTURE
« on: September 03, 2020, 04:41:32 PM »
Hi majou,
could you show the processing settings for the recent and the old project (right-click the chunk, Show Info)?

42
General / Re: Streaks in orthomosaic
« on: September 03, 2020, 12:37:56 PM »
Well, then it's obviously the cloudy conditions causing these streaks, since the lighting changed during the flight.

A recommendation to save some time: copy the data to a new chunk (right-click the chunk, Duplicate...) and include the model. Delete most of the images, leaving only a subset where the streaks are (see attached image), and run Calibrate Colors (under "Tools") using "Model" as the source (include white balance). It will be a lot faster than running the entire dataset, and it lets you check whether that helps eliminate the streaks.
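The scripted equivalent would be roughly the following. A sketch, assuming the 1.6-era Python API; the image labels are hypothetical placeholders, and the white-balance option may need to be set in the GUI:

Code: [Select]
import Metashape

doc = Metashape.app.document
copy = doc.chunk.copy()  # duplicate the chunk, as in Duplicate...
# Keep only a subset of images around the streaks (labels are examples)
keep = {"DJI_0101", "DJI_0102", "DJI_0103"}
copy.remove([cam for cam in copy.cameras if cam.label not in keep])
# Calibrate colors against the model, as in Tools - Calibrate Colors
copy.calibrateColors(source_data=Metashape.ModelData)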
Cheers!

43
General / Re: Streaks in orthomosaic
« on: September 03, 2020, 11:39:07 AM »
Definitely looks like different exposure times (or cloud shadows) during image acquisition. Have you tried Agisoft's color calibration?

44
General / Re: LIDAR Point Cloud Mesh
« on: September 01, 2020, 07:45:55 PM »
Hi Darko,
IMHO, you will not achieve very satisfying results with that LIDAR scan, since its resolution and accuracy seem fairly low. You might have better luck processing just the drone images with Metashape's new depth-map-based reconstruction, which achieves more detailed results than meshing the dense point cloud.
Cheers!

45
General / Re: LIDAR Point Cloud Mesh
« on: September 01, 2020, 12:37:49 PM »
Hi,
to be honest, the quality of the LIDAR data looks fairly bad, actually even worse than Agisoft's dense cloud. So you can't expect that "bad quality + mediocre quality = good quality".
AFAIK, Agisoft uses Poisson surface reconstruction for calculating meshes from points. This algorithm requires good normal estimation to achieve nice results. In this case, it seems the normals were not estimated very well, which led to the "bubble" on the front side.
What's the source of the LIDAR point cloud? You could try using CloudCompare to estimate the normals (it offers several settings, so you might be able to achieve better results).
Cheers.
