Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - nprokofyev

Camera Calibration / Re: Multispectral calibration
« on: February 14, 2023, 03:25:43 PM »
Yes, yes and yes.

- Example: if the exposure of the image changes with the same DLS value but with different camera exposure settings (ISO, shutter, aperture), will the radiometric value be the same?
- Yes, to the precision of the (mostly DLS) data.

Does it depend on the camera used?
- It may, if the camera writes some incorrect metadata. But generally it's highly unlikely.

General / Re: DJI Phantom 4 multispectral in non ideal conditions
« on: February 14, 2023, 03:06:22 PM »
Hi, how many flights do you have here? Are they processed in a single chunk? Please note that each chunk can have only one set of calibration parameters (set by the reflectance panel). If you process several flights in one chunk, you will have one properly calibrated flight and the others will be miscalibrated.

Camera Calibration / Re: Close-Range Multispectral Imagery calibration
« on: November 18, 2022, 04:59:33 PM »
File - Export - Convert Images...

General / Processing non-square pixels
« on: March 09, 2022, 04:40:11 PM »
Hi, could someone advise on processing images with non-square pixels?

We have an industrial camera with several drive modes; initially all the pixels are square. One of the drive modes is "Vertical FD binning", which means pixels are grouped in pairs in the vertical direction in order to reduce noise and increase dynamic range. The vignetting profile is calculated for the normal all-pixel mode.

Is it OK to align the "binned" images as is, or is it better to restore the original aspect ratio first? Which resolution will the orthomosaic have, "horizontal" or "vertical"?
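For what it's worth, restoring the aspect ratio before alignment can be done outside Metashape. Here is a minimal pure-Python sketch (assuming a fixed 2x vertical binning factor) that replicates each binned row; a real pipeline would resample the RAW/TIFF data with an image library instead.

```python
# Restore the original aspect ratio of a vertically binned image by
# replicating each binned row. This sketch assumes the image is a list of
# rows of pixel values and a 2x1 vertical binning factor (both assumptions,
# not taken from the camera's specification).

def unbin_vertical(image, factor=2):
    """Duplicate every row `factor` times to restore square pixels."""
    restored = []
    for row in image:
        restored.extend([list(row) for _ in range(factor)])
    return restored

binned = [[10, 20, 30],
          [40, 50, 60]]                   # 3 wide x 2 tall: half the height
restored = unbin_vertical(binned)
print(len(restored), len(restored[0]))    # -> 4 3
```

Row replication keeps pixel values untouched (no interpolation), which matters if a radiometric calibration is applied afterwards.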

General / Re: Multispectral and RGB cameras workflow
« on: December 27, 2021, 04:20:28 PM »
The section of the forest was captured with Sony ILCE-6000 and MicaSense RedEdge-M cameras. The cameras were mounted on the same fixed-wing UAV, but the multispectral and RGB images are not synchronized, and the resolution of the cameras also differs.
I need to build a dense point cloud and an orthomosaic for both the RGB and multispectral data. I tried to build two separate projects for the cameras, but the final data has a shift of several meters. What is the correct workflow for my problem? I want to have information about a particular tree (the coordinates of the crown are known) in both RGB and multispectral form.
Thank you!

Hi Alexandra,
First of all, you can use RGB data from the RedEdge-M. However, the a6000 data should be significantly more detailed.

1. I'd suggest aligning in a single chunk. This is the way to align both sets "pixel-to-pixel". There could be some issues with camera positions, so check the Reference tab after aligning before going on.
2. After successful alignment, create a copy of the chunk.
3. I assume the a6000 GSD is better, so build the dense cloud based on the a6000's photos. In other words, disable all images captured by the RedEdge and build the dense cloud.
4. Build the DEM/DTM as usual.
5. Build the RGB ortho as usual. Copy it when it's done.
6. Now enable the multispectral images, disable the a6000's images, run Reflectance Calibration, and build the ortho again.

Hi Leonardo,
I don't have a P4M myself, but I am very interested in its calibration procedure because our customers keep asking us about the P4M.

(1) I've read the guide that you attached, and it seems like ρNIR = pCamNIR / pLSNIR, but they don't provide pLSNIR; they only provide it multiplied by NIRLS in [Irradiance].
I inspected metadata from a P4M and didn't find anything that looks like either ρNIR or pLSNIR. I used quite old data (Dec 2019); maybe it was added in later firmware versions. If you have P4M images with recent firmware, would you please share them so I could look at their metadata?

BTW, I believe there is (or was) no way to convert P4M images to spectral irradiance in SI units, because DJI does not provide the corresponding conversion constants. There is a tag, XMP.Camera.RadiometricCalibration, which contains such constants in the case of the MicaSense RedEdge camera. Unfortunately, DJI simply stores an additional copy of XMP.dji-drone.SensorGainAdjustment, which can't be used in a "MicaSense-style" radiometric calibration with SI units in the output.

(2) In the case of MicaSense and a clear sky or an evenly overcast sky, panels perform great. Moreover, if you use a reflectance panel, you don't need to worry about the intermediate step with spectral irradiance, because panel calibration is something like the inverse operation of sensor response calibration plus an additional correction for incident light. Do you use a Spectralon with ~99% reflectance? Most cameras will overexpose such targets; something less reflective (~50%) would be fine.

(3) Yes, Metashape corrects vignetting. You'll see the corresponding information in the console right after adding photos. You may also open Tools - Camera Calibration, then right-click on the camera and open Distortion plots; vignetting plots are there as well.

(4) Yes, File - Export - Convert Images, and select the required checkboxes.

General / Re: Issue of orthophoto with DJI Phantom4 Multispectral
« on: March 30, 2021, 12:10:44 PM »

Through the NDVI image, we can see the problem more clearly in the attachment.

I see the problem but have no clue what caused it. Could you process up to the orthomosaic WITHOUT any calibration at all? I'd like to make sure the calibration process is the reason.

We often see similar issues. They seem to follow the seamlines exactly, so it might also be related to image blending (in addition to flight conditions and camera calibration). We had this issue with this sensor, but also with the MicaSense RedEdge & Altum.

Hi, could you show a couple of examples? In the sample shown, it seems like whole lines lost their contrast completely.

General / Re: Selecting camera parameters to be optimized
« on: March 30, 2021, 11:39:06 AM »
Hi Lara,
First of all, please refer to User Manual Appendix C (Camera models), where the optimization parameters and corresponding formulae are listed.

Yes, you should select camera parameters based on the camera type used. The DJI Mavic Pro has a "frame camera". This type of camera should be processed well with the default parameter set, namely
f, cx, cy, k1-k3, p1-p2.
If you are not satisfied with the optimization results, you could add other parameters, but generally there is much less mathematical basis behind that.
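For orientation, the default parameter set corresponds to a Brown-style distortion model. The sketch below reproduces its general shape; it is simplified for illustration (k4 and the affinity/skew terms b1, b2 are omitted), so check the exact formulae against Appendix C of your manual version rather than relying on this.

```python
# Hedged sketch of a Brown-style frame-camera projection with the default
# parameter set (f, cx, cy, k1-k3, p1-p2). Simplified: k4 and affinity/skew
# terms are left out; verify against User Manual Appendix C.

def project(X, Y, Z, f, cx, cy, w, h, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Project a point in camera coordinates onto the image plane (pixels)."""
    x, y = X / Z, Y / Z                          # normalized coordinates
    r2 = x * x + y * y                           # squared radial distance
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    u = w * 0.5 + cx + xd * f                    # (cx, cy): principal point offset
    v = h * 0.5 + cy + yd * f
    return u, v

# With zero distortion, a point on the optical axis lands at the image center:
print(project(0, 0, 10, f=3000, cx=0, cy=0, w=4000, h=3000))  # -> (2000.0, 1500.0)
```

Each extra parameter you enable adds one more term of this kind, which is why parameters beyond the defaults need a good reason (and enough data) to be estimated reliably.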

General / Re: Generate Prescription Map in latest release 1.7.2
« on: March 29, 2021, 12:51:22 PM »
I am facing a problem with the Boundary shape layer: it doesn't appear in the Prescription Map dialog.
It appears in another project where some shapes were created some time ago, though.

Version 1.7.2, build 12070

General / Re: Phantom 4 Multispectral
« on: August 21, 2020, 05:25:49 PM »
I haven't tried the P4M myself, but our customers are also interested in whether it's possible to use the MicaSense panel for P4M calibration.

It seems unclear to me what could go wrong. It is supposed to be just a reflectance target...

Within the RedEdge calibration model, which AFAIK is implemented in Metashape, there are two possible sources of differences between the P4M and RedEdge: V(x,y) or a1.

I am not sure, but I suppose a1 is a band multiplier (listed in the metadata) which is supposed to represent integral band transmittance (I could be wrong).
V(x,y) is a vignette correction function. Does the P4M provide vignette corrections? Even if it doesn't, V(x,y) turns into 1 and the rest of the formula may still work.
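To illustrate why a missing vignette model is harmless, here is a hedged sketch of a radial vignette correction of the kind the RedEdge metadata describes. The polynomial form, center, and coefficients below are illustrative, not DJI's or MicaSense's actual values; the point is that with no coefficients V(x,y) is identically 1 and the correction becomes a no-op.

```python
# Illustrative radial vignette correction: V(x, y) is a polynomial in the
# distance r from an assumed vignette center, and the corrected pixel is
# raw / V. Center and coefficients are made up for this sketch.
import math

def vignette_factor(x, y, center, coeffs):
    """V(x, y) = 1 + c1*r + c2*r^2 + ... for radial distance r."""
    r = math.hypot(x - center[0], y - center[1])
    v = 1.0
    for i, c in enumerate(coeffs, start=1):
        v += c * r**i
    return v

def correct_pixel(raw, x, y, center=(640, 480), coeffs=()):
    return raw / vignette_factor(x, y, center, coeffs)

print(correct_pixel(1000, 0, 0))                        # -> 1000.0 (no coefficients: V == 1)
print(correct_pixel(1000, 0, 0, coeffs=(-1e-4,)) > 1000)  # -> True (V < 1 at the corner, so it brightens)
```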

I'd appreciate any updates on the topic.

General / Re: Validating invalid tie points
« on: August 21, 2020, 04:49:52 PM »
Do you suggest changing the primary channel to a particular band, or loading only images from one band?

I don't remember for sure how primary channel selection works :-\
Try loading one single band.

General / Re: Preprocessing RGB pictures
« on: July 17, 2020, 08:04:15 PM »
1. Basically, yes, reflectance panels are for multispectral data, because you need to convert DN to reflectance.
2. There are two sets of parameters to determine: spectral and directional. Real-life reflection is described by a bidirectional function, which means that the parameter set depends on the aircraft's direction. Theoretically, you could measure all the values and calculate the corrections, but it is a hard task.

3. Well, you may use a white/gray panel to determine the light conditions at the moment a picture was shot, and this can be applied to pictures with the same light conditions. But, once again, due to the directional component, IRL you have more than two parameter sets (not only "sunny" and "cloudy").

4. Actually, you don't need color panels to set WB; you only need a white/gray card. And in my opinion, some cameras may benefit from a custom WB in JPG mode, even expensive ones. Auto-WB works really simply: the camera averages the whole image and adjusts the multipliers so that the average tone is neutral gray. Sounds coarse, doesn't it?

I'd suggest trying several flights with a fixed WB (not necessarily RAW). It may improve the overall color situation under constant light conditions.
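The gray-world averaging described in point 4 can be sketched in a few lines. This is a toy model on (R, G, B) tuples; real cameras work on sensor data with extra heuristics on top.

```python
# Minimal sketch of "gray-world" auto-WB: average each channel over the
# whole image, then scale the channels so their averages become equal
# (neutral gray). Pixel values here are made up for illustration.

def gray_world(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3                      # target neutral level
    gains = [gray / m for m in means]          # per-channel multipliers
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A uniformly blue-tinted "image" comes out neutral:
balanced = gray_world([(80, 100, 120), (80, 100, 120)])
print([round(v, 6) for v in balanced[0]])      # -> [100.0, 100.0, 100.0]
```

This also shows why auto-WB is coarse: any scene whose true average is not gray (a forest canopy, a sandy field) gets a color cast, which is exactly why a fixed WB across a flight is more consistent.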

General / Re: Preprocessing RGB pictures
« on: July 15, 2020, 05:43:18 PM »
I used to use RAW images with IR-converted cameras. That is the only way you can extract something useful from them.

And if there were clouds, neither RAW nor color panels would improve the situation, because the ambient light spectrum is completely different in direct sunlight and under clouds. To eliminate the effect of clouds, you need to use a downwelling light sensor and correct every picture with its own parameters. That is how it works on multispectral cameras.

1. I am not sure about the P4, but generally, working with raw pictures allows you to achieve a better dynamic range, gives you an additional ~1 EV to correct exposure, and lets you apply any white balance you want. But IRL, JPG quality isn't dramatically bad, and JPG processing is easier.
Gray panels are quite good for setting a manual WB before the flight, especially with JPG.

2. Multispectral data have to be calibrated, because their main derivative is reflectance values. Think of it not as photography, but as spectrometry.

3. Yes, you can make it worse if you don't apply chromatic aberration correction in RAW mode xD Seriously, you may achieve slightly better sharpness at the price of increased noise. Doubtful.

4. RAW allows you to convert to 16-bit TIFF and then perform raster calculations with a significantly lower statistical error. Do you use raster calculations on RGB data?
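To illustrate point 4, here is a small sketch (with made-up reflectance values) showing that quantizing to 8 bits before computing an index such as NDVI introduces a larger error than quantizing to 16 bits first.

```python
# Why bit depth matters for raster calculations: quantize two reflectance
# values to 8 and 16 bits, compute NDVI from each, and compare the error
# against the exact result. The reflectance values are made up.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def quantize(value, bits):
    """Store a 0..1 reflectance in an integer of the given bit depth."""
    levels = 2**bits - 1
    return round(value * levels) / levels

nir, red = 0.431, 0.073               # hypothetical reflectances
exact = ndvi(nir, red)
err8 = abs(ndvi(quantize(nir, 8), quantize(red, 8)) - exact)
err16 = abs(ndvi(quantize(nir, 16), quantize(red, 16)) - exact)
print(err16 < err8)                    # -> True: 16-bit quantization error is smaller
```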

Well, you know, I understand you'd like to have all 12 bands together, but look at your functions' ranges first.
And answer for yourself: which bit depth can you assign to the resulting TIFF to store all the desired bands "losslessly"?

- Reflectances are unsigned float32, normalized to 0..1.
- Temperature is a SIGNED float and not normalized to 1.
- Indices are mostly -1 to 1 and also signed.
- The simple ratio (the one labeled as Fe2O3) is a function with an infinite range.

I'd suggest normalizing all the channels you need so that they have similar ranges and can be stored well at one predefined bit depth.
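The normalization suggested above can be sketched as a simple min-max rescale per channel. The channel values below are made up; for signed indices you would fix the range explicitly (e.g. -1..1) so that bands stay comparable across images.

```python
# Rescale each channel (reflectance, temperature, index, ratio...) to a
# common 0..1 range so all bands can share one bit depth in a single TIFF.
# Channel values are invented for illustration.

def normalize(values, lo=None, hi=None):
    """Min-max rescale a channel to 0..1 (pass lo/hi to fix the range)."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    return [(v - lo) / (hi - lo) for v in values]

temperature = [-12.5, 3.0, 41.0]           # signed, not bounded by 1
ndvi_band = [-0.2, 0.1, 0.8]               # signed, -1..1
print(normalize(temperature))              # min maps to 0.0, max to 1.0
print(normalize(ndvi_band, lo=-1, hi=1))   # fixed range keeps bands comparable
```

Record the lo/hi used for each band (e.g. in metadata), otherwise the normalization is not invertible and the original physical values are lost.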


Since this camera takes 16-bit imagery, I am surprised that reflectance values are higher than 32,768 after reflectance calibration. What could we do to correct this?

Hi Isaac,

I've also seen values exceeding reflectance > 1 in my projects, and here is an explanation. The reflectance calibration model assumes that objects reflect light almost like a diffuse (Lambertian) reflector. This is not always true, especially under complex lighting conditions such as a combination of direct and diffuse light. In such cases, plants may reflect light in a more mirror-like way, and the reflectance can then (mathematically) exceed a value of 1 because the reflection is no longer diffuse. This is particularly noticeable in NIR.

The other source of this effect is situations where the illumination angle is high. The physics is the same here: plants and other objects don't behave as diffuse reflectors.
