Hi all!
I'm trying to understand how multi-camera systems and band layers work.
My goal is to obtain an RGB orthoimage and the corresponding radiometric orthoimage (the two sensors acquire "simultaneously").
I organized the zoom-sensor (JPEG) and thermal-sensor (TIFF) images into two subfolders, imported them as a multi-camera system, and set band layer 0 to the master RGB camera and band layer 1 to the slave thermal sensor.
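In case it's useful, here is a minimal sketch of how I understand the import step, assuming Metashape's Python API (the file paths are hypothetical placeholders and the snippet is untested, so treat it as a sketch rather than my exact script):

```python
import Metashape

doc = Metashape.app.document
chunk = doc.addChunk()

# One entry per shot: the RGB frame and the simultaneous thermal frame
# (hypothetical paths; in reality they come from the two subfolders).
pairs = [
    ("zoom/IMG_0001.jpg", "thermal/IMG_0001.tiff"),
    ("zoom/IMG_0002.jpg", "thermal/IMG_0002.tiff"),
]
images = [path for pair in pairs for path in pair]

# MultiplaneLayout groups every 2 consecutive files into one multi-layer
# image: layer 0 = RGB, layer 1 = thermal.
chunk.addPhotos(filenames=images, filegroups=[2] * len(pairs),
                layout=Metashape.MultiplaneLayout)

# Check the master/slave relation between the two band sensors
# (I believe a master sensor points to itself).
for s in chunk.sensors:
    role = "master" if s.master == s else "slave of " + s.master.label
    print("layer", s.layer_index, s.label, "->", role)
```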
What I thought I could do was this:
- do the photogrammetric alignment with the RGB images only and build the orthophoto from the RGB images only;
- then, since the sensors shoot simultaneously, reuse the alignment and orthorectification parameters of the RGB photos to build the radiometric orthophoto, without aligning the radiometric images among themselves (taking advantage of the more precise photogrammetric alignment of the RGB images); a sketch of this intended pipeline follows the list.
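To make the intended workflow concrete, this is roughly the pipeline I have in mind, again assuming Metashape's Python API (2.x names, if I have them right; the band formulas, with B1-B3 as RGB and B4 as thermal, are my assumption):

```python
import Metashape

chunk = Metashape.app.document.chunk

# Align the photos (which images get matched is exactly my open
# question below).
chunk.matchPhotos(downscale=1, generic_preselection=True,
                  reference_preselection=True)
chunk.alignCameras()

# Build ONE surface and ONE orthomosaic, so every band layer is
# orthorectified with exactly the same geometry.
chunk.buildDepthMaps(downscale=4)
chunk.buildPointCloud()          # buildDenseCloud() in 1.x, I believe
chunk.buildDem(source_data=Metashape.DataSource.PointCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.DataSource.ElevationData)

# Export the RGB bands and the thermal band separately via raster
# transform formulas ("B4" as the thermal band is my assumption).
chunk.raster_transform.formula = ["B1", "B2", "B3"]
chunk.exportRaster("ortho_rgb.tif",
                   raster_transform=Metashape.RasterTransformValue)
chunk.raster_transform.formula = ["B4"]
chunk.exportRaster("ortho_thermal.tif",
                   raster_transform=Metashape.RasterTransformValue)
```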
That said, I ran into two problems:
- the software tries to match RGB-RGB, RGB-TIFF (even though the valid matches between RGB and TIFF are zero), and TIFF-TIFF; instead, I expected that, with the images on two different layers and RGB as master, the software would only match RGB-RGB and then use the same parameters to construct the radiometric orthophoto (see the snippet after this list);
- as a consequence (I think) of the previous problem, the final result is that the RGB orthophoto is perfect while the radiometric one is not (the frames are poorly merged with each other and the result is bad).
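One thing I still want to try: if I read the API reference correctly, chunk.primary_channel selects the band layer used for feature matching, so forcing it to the RGB layer before matching might avoid the RGB-TIFF and TIFF-TIFF attempts entirely (the 0-based index is an untested assumption on my part):

```python
import Metashape

chunk = Metashape.app.document.chunk

# List the band layers so I pick the right index.
for s in chunk.sensors:
    print(s.layer_index, s.label)

# Force feature matching to use only the RGB layer, then re-run
# matchPhotos / alignCameras (0-based index is my assumption).
chunk.primary_channel = 0
```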
I don't know whether this approach is the right one or whether what I had in mind makes little sense.
I hope this discussion is interesting and that we can come up with some useful ideas.