Show Posts


Topics - Malalako

Pages: [1]
1
I have processed Mavic 3M multispectral images into an orthomosaic (see attached screenshot). Even though I calibrated reflectance using the sun sensor, the orthomosaic still shows uneven illumination, with shadowed areas and bright spots. I thought the sun-sensor reflectance calibration was supposed to alleviate this?

The processing steps were:
- Import photos
- Calibrate reflectance of images using the sun sensor
- Align photos
- Create point cloud
- Create model
- Build orthomosaic from model
- Use raster calculator to get reflectance values
- Export using index value.
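For reference, the raster-calculator step is just per-pixel band arithmetic. Here is a rough numpy sketch of what an index like NDVI computes from the reflectance bands (the band names and the zero-division guard are my own assumptions, not Metashape's exact formula):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel normalized difference vegetation index.

    nir, red: 2-D arrays of reflectance values (0..1).
    eps guards against division by zero in no-data pixels.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters
nir = np.array([[0.6, 0.5], [0.4, 0.3]])
red = np.array([[0.1, 0.1], [0.2, 0.3]])
print(np.round(ndvi(nir, red), 3))
```

The point being that the index is computed from whatever reflectance values the calibration produced, so any illumination unevenness survives into the index raster.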

Have I done something wrong or is the sun sensor just not doing a good job?

2
General / How to create orthomosaic showing only ground layer?
« on: March 07, 2025, 04:53:54 AM »
I am creating orthomosaics from drone imagery over forested areas. I then use supervised image classification to classify the orthomosaic into classes (e.g. trees, grass, bare ground, shadows, etc.). The aim is to be able to quantify the proportion of a site that is, for example, grass. The issue is that if there is a lot of canopy cover, then obviously a large portion of the ground is not visible in the orthomosaic (because it is under the canopy), so estimates of things like grass cover are not accurate.

Is there a way to generate an orthomosaic where the trees are excluded, so I just have an orthomosaic of the ground layer? Perhaps excluding areas more than 30 cm above ground level, or something similar. I collected the drone images in a way that maximises how much ground is observed (flying off-nadir and in a cross-hatch pattern), and in the point cloud you can see that there are many ground points under the trees. I accept that there will likely still be areas without ground data.

I have tried reducing the height of the bounding box but because my sites have undulating terrain that doesn't work properly.
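To make the idea concrete, what I am picturing is filtering points by height above a ground model (DTM) rather than by absolute elevation, which is why a flat bounding box fails on undulating terrain. A rough numpy sketch with toy data (the DTM lookup function is a stand-in for sampling a real ground raster):

```python
import numpy as np

# Toy point cloud: columns are x, y, z
points = np.array([
    [0.0, 0.0, 10.1],   # 0.1 m above ground -> keep
    [1.0, 0.0, 10.2],
    [2.0, 0.0, 13.0],   # canopy point -> drop
    [3.0, 0.0, 11.5],
])

def ground_elevation(x, y):
    # Stand-in for sampling a DTM raster; here the terrain
    # rises 0.1 m per metre in x (hypothetical).
    return 10.0 + 0.1 * x

agl = points[:, 2] - ground_elevation(points[:, 0], points[:, 1])
ground_layer = points[agl <= 0.30]   # keep points within 30 cm of ground
print(len(ground_layer))
```

Something equivalent inside Metashape (or via an exported classified point cloud) is what I am after.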

3
General / Reduce distortion for trees in orthomosaic
« on: January 29, 2025, 08:26:54 AM »
I am wondering if anyone has advice on how to reduce the distortion for trees in an orthomosaic? The flight settings were:
Mavic 3M at 120m with 90% front 90% side overlap.

My Agisoft process is:
1. Import photos
2. Align photos (accuracy setting highest)
3. Optimise cameras (default settings)
4. Build point cloud (quality setting high, all others default)
5. Create DEM
6. Create orthomosaic (default values).
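One thing I have been trying to understand is the surface the orthomosaic is projected onto at step 5. Edge smearing is often blamed on the DEM interpolating across sharp height jumps at canopy edges. As a mental model (not Metashape's actual algorithm), a DSM can be sketched as a max-z gridding of the point cloud, where empty cells then get filled by interpolation:

```python
import numpy as np

def dsm_max_grid(points, cell=1.0):
    """Grid a point cloud (x, y, z) into a max-z surface raster.

    Cells with no points stay NaN; interpolated fill over such
    gaps is where edge smearing tends to appear.
    """
    ix = (points[:, 0] // cell).astype(int)
    iy = (points[:, 1] // cell).astype(int)
    grid = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for cx, cy, z in zip(ix, iy, points[:, 2]):
        if np.isnan(grid[cy, cx]) or z > grid[cy, cx]:
            grid[cy, cx] = z
    return grid

# Toy cloud: two points in one cell (ground + canopy), one in the next
pts = np.array([[0.2, 0.2, 5.0], [0.8, 0.1, 12.0], [1.5, 0.5, 6.0]])
print(dsm_max_grid(pts))
```

If that mental model is right, a sparse or noisy canopy in the point cloud would directly cause the blurred edges I am seeing.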

The resulting orthomosaic (see attachment) is not clear, with strange artefacts around the edges of the canopy and a very blurred reconstruction. I then created a model from both the point cloud and the depth maps and built orthomosaics from those models, but the result is similar (see attachments).

Do people have any other suggestions? I have also processed the exact same images in DroneDeploy and there is no distortion at all in the orthomosaic (see attached); I don't understand how the results can be so different. Unfortunately there is no way to tell what DroneDeploy has done, as that system is a bit of a black box.

4
I've processed multispectral images (G, R, NIR, Red-edge) captured with the Mavic 3M and everything seemed to work well until the orthomosaic stage. The orthomosaic (attached) has a strange streaking/checkerboard pattern. The steps I took were:
1. Import photos
2. Change primary channel/band to NIR
3. Align photos
4. Optimise cameras
5. Build point cloud
6. Create orthomosaic (default values).
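One thing I checked outside Metashape: a checkerboard in a blended mosaic can come from adjacent source images differing in overall brightness, so the seams between them become visible. A quick numpy sketch of the sanity check I mean, comparing per-image statistics for one band (toy arrays standing in for exported images):

```python
import numpy as np

def band_stats(images):
    """Mean reflectance per image for one band.

    Large jumps between neighbouring images suggest exposure or
    calibration differences that can show up as visible seams.
    """
    return np.array([img.mean() for img in images])

# Toy NIR images: the middle one is clearly darker than its neighbours
imgs = [np.full((4, 4), 0.40), np.full((4, 4), 0.22), np.full((4, 4), 0.41)]
means = band_stats(imgs)
print(np.round(means, 2))
spread = means.max() - means.min()
print(spread > 0.1)   # flag a suspicious spread between images
```

I am not sure that is what is happening here, which is why the point cloud looking fine confuses me.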

Any suggestions on what is happening here and how to correct it? The point cloud looks fine.

5
I am wondering if anyone has advice on how to improve the reconstruction of forest canopy. I know that trees are difficult to stitch together, but it’s a relatively open canopy so I’m surprised the software is struggling so much.

I flew a Mavic 3M at 70m AGL with 80% front and 80% side overlap. At this stage, I’m just processing the RGB images but will try the multispec images in case that works better. There was a bit of wind which I know doesn’t help. I’m not looking for suggestions on changing the data collection moving forward, but hoping for suggestions on how to improve the outputs with the data I already have.

Processing:
1. Make sure image qualities are high (all over 0.8)
2. Align cameras (accuracy setting highest, all other options default)
3. Camera optimisation (default settings)
4. Create dense point cloud (quality setting high, all other settings default)
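On step 1: Metashape's Estimate Image Quality score is computed internally, but when double-checking sharpness myself I use the variance of a Laplacian as a stand-in metric, since blurry frames (e.g. wind-shaken canopy shots) score low. A minimal sketch; note this is not on the same scale as Metashape's 0.8 threshold and is only useful for ranking images against each other:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a 4-neighbour Laplacian.

    Not Metashape's image-quality metric; only a rough
    stand-in for comparing images with each other.
    """
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))        # high-frequency detail
blurry = np.ones((32, 32)) * 0.5    # no detail at all
print(laplacian_variance(sharp) > laplacian_variance(blurry))
```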

I have played around with changing the accuracy and quality settings on the align cameras and dense point cloud stage and that hasn't really seemed to help.

Everything runs OK, but the canopy has been very poorly reconstructed. Attached are screenshots showing the dense point cloud produced from the Mavic 3M and one produced over the same area using LiDAR. I understand that a point cloud from photogrammetry will not be as complete as one from LiDAR, but I was expecting something a bit better than I got.

6
Hi

So I've used Agisoft a few times in the past, but the project I'm currently working on is much larger and I think I'm going to have to split it into chunks to get it processed. I understand there is a "Split in Chunks" Python script that I can use to do this, but having never used a script before, let alone Python, I really need some basic newbie instructions.

I went to the wiki and downloaded the Split_in_chunks.py file. Do I need to save this in a specific location associated with my Agisoft files, or just somewhere I can find it? I then go to Tools > Run Script in Agisoft, and in the Run Python Script box that pops up I browse to where I saved the script. If I just press OK, it comes up with the error "Can't Run Script". I'm guessing that's because I need to specify arguments for how I want it to run (e.g. quality, depth filtering, etc.). But how do I do this? Do I change the actual script document, or do I input them into the "Arguments" section of the "Run Python Script" box? In either case, how exactly do I type it out? I'm using Agisoft version 1.3.2 build 4205.

For example I see the script starts off with:
QUALITY = {"1":PhotoScan.UltraQuality,
         "2":PhotoScan.HighQuality,
         "4":PhotoScan.MediumQuality,
         "8":PhotoScan.LowQuality,
         "16":PhotoScan.LowestQuality}

Do I delete all the entries I don't want? Or do I just change things down in this section:

if buildDense:
    if new_chunk.depth_maps:
        reuse_depth = True
        quality = QUALITY[new_chunk.depth_maps.meta['depth/depth_downscale']]
        filtering = FILTERING[new_chunk.depth_maps.meta['depth/depth_filter_mode']]
        try:
            new_chunk.buildDenseCloud(quality = quality, filter = filtering, keep_depth = False, reuse_depth = reuse_depth)
        except RuntimeError:
            print("Can't build dense cloud for " + chunk.label)
    else:
        reuse_depth = False
        try:
            new_chunk.buildDenseCloud(quality = PhotoScan.Quality.HighQuality, filter = PhotoScan.FilterMode.AggressiveFiltering, keep_depth = False, reuse_depth = reuse_depth)
        except RuntimeError:
            print("Can't build dense cloud for " + chunk.label)




Apologies for such basic questions. If anyone could provide very detailed instructions about how to go about this, then it'd really help me out.
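For what it's worth, my current understanding of the general pattern (which may be wrong) is that a script can read options either from constants edited at the top of the file, or from the "Arguments" field of the Run Python Script box, which the application passes to the script as command-line arguments (visible in Python as sys.argv). A minimal sketch of the second pattern, with my own hypothetical option names rather than the actual Split_in_chunks.py interface:

```python
import sys

def parse_options(argv):
    """Parse positional options like: <quality> <filtering>.

    Hypothetical example only; the real script's comments define
    the argument order it actually expects.
    """
    quality = argv[0] if len(argv) > 0 else "4"      # default: medium
    filtering = argv[1] if len(argv) > 1 else "mild"
    return quality, filtering

# In the dialog, typing `1 aggressive` in the Arguments box
# would arrive in the script roughly as:
opts = parse_options(["1", "aggressive"])
print(opts)
```

So my question boils down to: is that how this particular script expects its arguments, and if so, in what order?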

