Show Posts


Messages - Malalako

1
I have processed Mavic 3M multispectral images into an orthomosaic (see attached screenshot). Even though I calibrated reflectance using the sun sensor, the orthomosaic is uneven, with shadowed areas and bright spots. I thought the sun sensor reflectance calibration was supposed to alleviate this?

The processing steps were:
- Import photos
- Calibrate reflectance of images using the sun sensor
- Align photos
- Create point cloud
- Create model
- Build orthomosaic from model
- Use raster calculator to get reflectance values
- Export using index value.

Have I done something wrong or is the sun sensor just not doing a good job?
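For context on the raster-calculator step, index values in a workflow like this are typically computed pixel-wise from the calibrated reflectance bands. A minimal numpy sketch of an NDVI-style calculation (the band arrays below are synthetic placeholders, not data from this project):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from reflectance bands.

    Equivalent to the raster-calculator expression (NIR - Red) / (NIR + Red).
    Pixels where NIR + Red is zero are returned as NaN (no data).
    """
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        index = np.where(denom > 0, (nir - red) / denom, np.nan)
    return index

# Tiny synthetic example: vegetation reflects more NIR than red,
# so vegetated pixels get high positive index values.
nir_band = np.array([[0.50, 0.40], [0.10, 0.00]])
red_band = np.array([[0.10, 0.20], [0.08, 0.00]])
print(ndvi(nir_band, red_band))
```

The same expression entered in the Metashape raster calculator would be applied to the calibrated bands of the orthomosaic rather than to arrays in memory.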

2
General / How to create orthomosaic showing only ground layer?
« on: March 07, 2025, 04:53:54 AM »
I am creating orthomosaics from drone imagery over forested areas. I am then using supervised image classification techniques to classify the orthomosaic into classes (e.g. trees, grass, bare ground, shadows, etc.). The aim is to be able to quantify the proportion of a site that is, for example, grass. The issue is that if there is a lot of canopy cover, then obviously a large portion of the ground is not visible on the orthomosaic (because it is under the canopy), and therefore estimates like grass cover are not accurate.

Is there a way to generate an orthomosaic where the trees are excluded, so I just have an orthomosaic of the ground layer? Perhaps excluding areas >30 cm AGL or something. I collected the drone images in a way that maximises how much ground is observed (flying off-nadir in a cross-hatch pattern), and in the point cloud you can see that there are many ground points under the trees. I accept that there will likely still be areas without ground data.

I have tried reducing the height of the bounding box, but because my sites have undulating terrain that doesn't work properly.
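One way to approximate this outside Metashape is to normalise point heights against a ground DEM and keep only near-ground points, rather than clipping with a flat bounding box. A simplified sketch on synthetic data (the 0.3 m threshold follows the ">30cm AGL" idea above; the nearest-cell DEM lookup is a deliberate simplification):

```python
import numpy as np

def ground_layer(points: np.ndarray, ground_dem: np.ndarray,
                 cell: float, origin: tuple, max_agl: float = 0.3) -> np.ndarray:
    """Keep only points within max_agl metres of the ground surface.

    points: (N, 3) array of x, y, z coordinates
    ground_dem: 2D grid of ground elevations (rows = y, cols = x)
    cell: grid cell size in metres; origin: (x0, y0) of the grid
    """
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell).astype(int)
    ground_z = ground_dem[rows, cols]          # nearest-cell ground elevation
    agl = points[:, 2] - ground_z              # height above ground level
    return points[agl <= max_agl]

# Synthetic example: 1 m cells, ground sloping from z=0 to z=1 across x.
dem = np.array([[0.0, 1.0],
                [0.0, 1.0]])
pts = np.array([
    [0.5, 0.5, 0.1],   # grass, 0.1 m above ground -> kept
    [1.5, 0.5, 1.2],   # bare ground on the slope, 0.2 m AGL -> kept
    [1.5, 1.5, 6.0],   # tree canopy, 5 m AGL -> excluded
])
filtered = ground_layer(pts, dem, cell=1.0, origin=(0.0, 0.0))
print(filtered.shape)
```

Because the heights are measured relative to the ground DEM rather than an absolute elevation, this handles undulating terrain where a bounding-box clip fails. The filtered cloud could then be used to build the ground-only surface for the orthomosaic.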

3
General / Re: Reduce distortion for trees in orthomosaic
« on: January 29, 2025, 08:32:59 AM »
I've just realised the issue may have been my use of the 'average' blending mode rather than the default 'mosaic' blending mode. I've rebuilt the orthomosaic based on the DEM (see attached) and it is much better, though there are still some warping artefacts.

4
General / Reduce distortion for trees in orthomosaic
« on: January 29, 2025, 08:26:54 AM »
I am wondering if anyone has advice on how to reduce the distortion for trees in an orthomosaic? The flight settings were:
Mavic 3M at 120m with 90% front 90% side overlap.
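As a side note, the photo footprint and spacing implied by flight settings like these follow from basic camera geometry: spacing = footprint x (1 - overlap). A sketch using illustrative sensor parameters (the 12 mm focal length, 17 mm sensor width, and 5280 px image width below are assumptions for the example, not Mavic 3M datasheet values):

```python
def flight_geometry(altitude_m: float, focal_mm: float,
                    sensor_width_mm: float, image_width_px: int,
                    overlap: float):
    """Estimate ground footprint width, GSD, and photo spacing for nadir imagery.

    With 90% overlap, consecutive photo centres are only 10% of a
    footprint apart, so each ground point appears in many images.
    """
    footprint_m = altitude_m * sensor_width_mm / focal_mm   # similar triangles
    gsd_cm = 100.0 * footprint_m / image_width_px           # cm per pixel
    spacing_m = footprint_m * (1.0 - overlap)               # centre-to-centre
    return footprint_m, gsd_cm, spacing_m

# Illustrative numbers only: 120 m altitude, 90% overlap.
fp, gsd, spacing = flight_geometry(120.0, 12.0, 17.0, 5280, 0.90)
print(fp, gsd, spacing)
```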

My Agisoft process is:
1. Import photos
2. Align photos (accuracy setting highest)
3. Optimise cameras (default settings)
4. Build point cloud (quality setting high, all others default)
5. Create DEM
6. Create orthomosaic (default values).

The resulting orthomosaic (see attachment) is not clear, with strange artefacts around the edges of the canopy and a very blurred reconstruction. I then created models from both the point cloud and the depth maps and built orthomosaics using those models, but the results are similar (see attachments).

Does anyone have any other suggestions? I have also processed the exact same images in DroneDeploy and there is no distortion at all in the orthomosaic (see attached); I don't understand how the results can be so different. Unfortunately there is no way to tell what DroneDeploy has done, as that system is a bit of a black box.

5
Hello Malalako,

If you need only the multispectral orthomosaic as the processing result, then you can calibrate reflectance at any time before building the orthomosaic. But if a point cloud colored by the original bands is also required, then reflectance calibration should be performed before building the point cloud.

Thanks for all the help :)

6
Hello Malalako,

If you are using the steps as in your original post (5. Build point cloud, 6. Create orthomosaic), then you are missing a generated surface, which can be a DEM or mesh; such a surface should be used in the "Surface" field of the Build Orthomosaic dialog. In your workflow you are likely using the Point Cloud surface option, which means that the points themselves are rendered to the orthomosaic.

As for the reflectance calibration, you can use the Sun Sensor option in the Calibrate Reflectance dialog without a panel.

So I would suggest building a DEM from the point cloud, then calibrating reflectance using the Sun Sensor option (images in the Photos pane may appear visually darker), then building the orthomosaic using the DEM as the surface.

Hi Alexey,

Thank you so much. I had forgotten the step of creating a DEM before the orthomosaic. And I hadn't worked with multispectral data before, so I didn't realise I could use the inbuilt sun sensor to calibrate reflectance. Adding those two steps has resulted in a nice smooth orthomosaic.

Out of interest, does it matter at what stage I do the reflectance calibration? You've mentioned doing it after building the DEM, but the tutorial below (for the Phantom drone) suggests doing it as a first step (even before the alignment stage):
https://agisoft.freshdesk.com/support/solutions/articles/31000159853-dji-phantom-4-multispectral-data-processing


7
Hello Malalako,

We would probably need some sample subset of images from this project that can be used to reproduce the issue on our side.

Please also specify whether you are using reflectance calibration in the project, and whether the orthomosaic is generated from a surface built from the image data rather than having the point cloud rendered directly to the orthomosaic.

Hi Alexey,

I can provide a subset of images; can you please clarify how these are best shared? I haven't used reflectance calibration in the project. I'm assuming you mean those reflectance panels? I couldn't afford the panels, and given this was a one-time flight (not part of a time series), I decided not to pursue them.

I don't understand your question about whether the point cloud is based on the image data or rendered to the orthomosaic. Could you please explain? I have outlined the steps I took in my initial post.

8
Hi Malalako,

Thanks for sharing your insights on this issue. I noticed that part of your response seems to reference support from DroneDeploy. Could you clarify if the solution you mentioned was provided by DroneDeploy or Agisoft, or if it was a combined effort? Additionally, could you provide any extra details about how you implemented the fix? I’d like to better understand the process to ensure accurate results in my own projects.

Thanks in advance for your help,

Regards,

Apologies, my reply was posted in response to a different question on a different forum. I've now deleted it.

9
Message deleted as posted in wrong forum

10
I've processed multispectral images (G, R, NIR, red-edge) captured from the Mavic 3M, and everything seemed to work well until the orthomosaic stage. The orthomosaic (attached) has a strange streaking/checkerboard pattern. The steps I took were:
1. Import photos
2. Change primary channel/band to NIR
3. Align photos
4. Optimise cameras
5. Build point cloud
6. Create orthomosaic (default values).

Any suggestions on what is happening here and how to correct it? The point cloud looks fine.

11
Guided image matching option is designed to help in areas with a lot of vegetation, so try to enable it and set Key point limit per Mpx to ~2000-3000.

Thanks for the suggestion. That has improved the reconstruction slightly (canopy cover [CC] up from 15.3% to 15.7%), but it is still nowhere near the complete canopy (42.7% CC according to the LiDAR flight).

The only thing that has worked is using the multispectral images and changing the primary channel to NIR (resulting in 32.4% CC) or red-edge (29.1% CC). Using the default (green) or red bands as the primary channel results in very poor reconstruction: 14.3% and 12.9% CC respectively.

12
I've had a play around with different settings, varying the alignment quality, preselection settings, density of the point cloud, and filtering (see screenshot for the canopy cover % for each). Reducing the density of the point cloud had a negative effect (which makes sense), and nothing else substantially changed the results.

13

Try disabling filtering and see what happens.

Thanks for the suggestion. I've just finished running the exact same project with filtering disabled, and unfortunately there was effectively no change to the point cloud.

14
I am wondering if anyone has advice on how to improve the reconstruction of forest canopy. I know that trees are difficult to stitch together, but it’s a relatively open canopy so I’m surprised the software is struggling so much.

I flew a Mavic 3M at 70 m AGL with 80% front and 80% side overlap. At this stage I'm just processing the RGB images, but I will try the multispectral images in case they work better. There was a bit of wind, which I know doesn't help. I'm not looking for suggestions on changing the data collection moving forward; I'm hoping for suggestions on how to improve the outputs with the data I already have.

Processing:
1. Make sure image qualities are high (all over 0.8)
2. Align cameras – accuracy setting highest and all other options default
3. Do camera optimisation – default settings
4. Create dense point cloud – quality setting high, all other settings default

I have played around with changing the accuracy and quality settings on the align cameras and dense point cloud stage and that hasn't really seemed to help.

Everything works OK, but the canopy has been very poorly reconstructed. Attached are screenshots showing the dense point cloud produced from the Mavic 3M and one produced over the same area using LiDAR. I understand the point cloud from photogrammetry will not be as complete as from LiDAR, but I was expecting something a bit better than I got.
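The canopy cover percentages quoted in these threads can be derived by rasterising a point cloud into a canopy height model (CHM) and counting the fraction of cells above a height threshold. A minimal sketch on a synthetic grid (the 2 m canopy threshold is an assumption for illustration, not a value from these posts):

```python
import numpy as np

def canopy_cover_pct(chm: np.ndarray, height_threshold: float = 2.0) -> float:
    """Percent of valid CHM cells whose canopy height exceeds the threshold.

    chm: canopy height model (height above ground per cell); NaN = no data.
    No-data cells are excluded from both numerator and denominator.
    """
    valid = ~np.isnan(chm)
    canopy = (chm > height_threshold) & valid
    return 100.0 * canopy.sum() / valid.sum()

# Synthetic 2x4 CHM: three canopy cells, four near-ground cells, one no-data.
chm = np.array([
    [0.1, 5.2, 7.8, np.nan],
    [0.0, 3.1, 0.4, 0.2],
])
print(round(canopy_cover_pct(chm), 1))
```

Running the same metric over the photogrammetric and LiDAR point clouds of one site gives a like-for-like measure of how much canopy the reconstruction recovered.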

15
Thanks Alexey. I tried that script and the same error came up. I've attached a copy of the console output. Sorry if I'm just making a really basic mistake.
