« on: October 12, 2023, 11:04:35 PM »
Hello,
I am comparing the results of GUI vs. API processing of MicaSense imagery. I place several shapefiles over the orthomosaic and compare the pixel values of each band using raster zonal statistics. There are five bands: blue, green, red, red edge, and NIR.
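For reference, the zonal comparison I run is roughly the following (a minimal NumPy-only sketch; the real script reads the orthomosaics and rasterizes the shapefiles into masks with GIS tooling, and the array and mask names here are just illustrative):

```python
import numpy as np

def zonal_band_means(ortho, masks):
    """Mean pixel value per band inside each shapefile zone.

    ortho: (bands, rows, cols) float array, one plane per band.
    masks: dict mapping zone name -> (rows, cols) boolean array.
    """
    band_names = ["blue", "green", "red", "red edge", "nir"]
    stats = {}
    for zone, mask in masks.items():
        # Boolean-mask each band plane and average the selected pixels.
        stats[zone] = {name: float(ortho[i][mask].mean())
                       for i, name in enumerate(band_names)}
    return stats

# Toy example: a 2-pixel zone on a 5-band, 4x4 mosaic.
ortho = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = mask[0, 1] = True
print(zonal_band_means(ortho, {"plot1": mask}))
```

Running the same function on the GUI-produced and API-produced orthomosaics and differencing the per-zone means is how I spot which bands diverge.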
The green and red orthomosaic results agree well: the pixel averages for the shapefile areas match closely, and the red edge is also acceptable.
The issue is that the blue and NIR orthomosaic pixel values differ noticeably over the shapefile areas. I manually set the calibration panel albedo in both the GUI and the API to {"Blue": "0.66", "Green": "0.67", "Red": "0.67", "NIR": "0.67", "Red edge": "0.63"}. Is there anything else that could explain this? Here is the code used for this step:
albedo = {"Green": "0.67", "Red": "0.67", "NIR": "0.63", "Red edge": "0.67"}
for camera in chunk.cameras:
    if camera.group and camera.group.label == "Calibration images":
        for plane in camera.planes:
            plane.meta["ReflectancePanel/Calibration"] = albedo[plane.sensor.bands[0]]
for sensor in chunk.sensors:
    sensor.normalize_sensitivity = True
task = Metashape.Tasks.CalibrateReflectance()
task.use_reflectance_panels = True
task.use_sun_sensor = True
task.apply(chunk)
doc.save()

for sensor in chunk.sensors:
    sensor.normalize_sensitivity = True
chunk.locateReflectancePanels()
chunk.calibrateReflectance(use_reflectance_panels=True, use_sun_sensor=True)
doc.save()
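One sanity check I can share: the albedo dictionary in the script above does not have the same keys as the values I listed earlier ("Blue" is absent, and the NIR / red edge values look swapped). A quick way to catch this kind of mismatch is to compare the dictionary keys against the band labels the sensors report before the assignment loop; a missing key raises a KeyError for that band, and a wrong value calibrates it incorrectly. A minimal sketch (plain Python; the band-label list here is what I assume the MicaSense sensors report, not read from the project):

```python
# Albedo dict exactly as used in the calibration script above.
albedo = {"Green": "0.67", "Red": "0.67", "NIR": "0.63", "Red edge": "0.67"}

# Assumed band labels; in the real script these would come from
# plane.sensor.bands across the calibration cameras.
sensor_bands = ["Blue", "Green", "Red", "NIR", "Red edge"]

missing = [b for b in sensor_bands if b not in albedo]   # bands with no panel value
extra = [k for k in albedo if k not in sensor_bands]     # keys matching no band
print("bands without an albedo entry:", missing)
print("albedo keys matching no band:", extra)
```

With the dictionary as posted, this flags "Blue" as missing, which may be related to the blue-band discrepancy I am seeing.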
Thanks in advance!
- Robin