I've been doing some field testing in Metashape. I took a set of 40 photos with the same camera in both JPEG (8-bit) and RAW/DNG (the sensor captures 10-bit colour); the camera saves each shot in both formats simultaneously, so I did not retake anything between formats. I then imported the JPEGs and RAW/DNGs into Metashape as they are, and also converted the RAWs to 16-bit ProPhoto RGB TIFFs in Photoshop. I did not touch the colours or white balance for the TIFFs, only slightly increased sharpness and exposure. On visual inspection the TIFFs look almost identical to the JPEGs colour-wise.
All processing in Metashape used exactly the same parameters for all three sets (same alignment, dense cloud, mesh and texture settings).
During image alignment in Metashape I noticed that the JPEGs produced better mean and max RMS reprojection error than the TIFFs. Why is that? Isn't higher colour depth supposed to be better for image alignment and model accuracy?
When building textures, the TIFF set came out with a greenish tint that is not present in the source images! Why does this happen? I know Metashape supports input up to 32-bit EXR, so 16-bit ProPhoto should not be an issue.
I'm attaching a couple of screenshots with close-ups of the textured model from the 8-bit JPEGs and the 16-bit TIFFs. Both the quality drop and the colour shift are evident in the 16-bit version. Am I doing something wrong, or is it a software issue? Is there a way to tell Metashape what colour space the imported data are in?
Note that the RAW/DNG model produced slightly better mean and max RMS than the JPEGs, and similar colours, so the issue seems specific to the 16-bit TIFFs in ProPhoto RGB.
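In case it helps anyone reproduce or work around this: if the green tint comes from Metashape interpreting the ProPhoto-encoded pixel values as if they were sRGB (i.e. ignoring the embedded ICC profile), converting the TIFFs to sRGB before import should make the textures match the JPEGs. Below is a minimal numpy sketch of that conversion; the combined linear ProPhoto(D50)→sRGB(D65) matrix is the commonly published one, and the pure-power gamma curves (1.8 for ProPhoto, 2.2 for sRGB) are simplifying assumptions, not the exact piecewise transfer functions:

```python
import numpy as np

# Assumed combined matrix: XYZ_to_sRGB @ Bradford(D50 -> D65) @ ProPhoto_to_XYZ.
# Rows sum to ~1, so neutral greys map to neutral greys.
PROPHOTO_TO_SRGB = np.array([
    [ 2.0341926, -0.7274198, -0.3067655],
    [-0.2288108,  1.2317292, -0.0029216],
    [-0.0085649, -0.1534726,  1.1623390],
])

def prophoto_to_srgb(img):
    """img: float array in [0, 1], ProPhoto RGB encoded with a ~1.8 gamma.
    Returns sRGB-encoded floats, using a 2.2 power-law approximation
    of the sRGB transfer curve."""
    linear = np.clip(img, 0.0, 1.0) ** 1.8       # decode ProPhoto gamma
    srgb_lin = linear @ PROPHOTO_TO_SRGB.T       # re-express in sRGB primaries
    srgb_lin = np.clip(srgb_lin, 0.0, 1.0)       # out-of-gamut colours clip
    return srgb_lin ** (1.0 / 2.2)               # approximate sRGB encode

# Neutral greys and white should survive the conversion essentially unchanged
grey = np.array([[0.5, 0.5, 0.5]])
print(prophoto_to_srgb(grey))
```

In practice I'd do this with a proper ICC-aware converter (Photoshop's "Convert to Profile", or any tool that honours the embedded profile) rather than hand-rolled maths, but the sketch shows why a profile mismatch shifts the colours: the two spaces share neutrals but disagree on the primaries.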