Thanks for your comments Alexey. Please see some questions I have below.
I think that there could be several reasons for the effect that you are observing:
- low-quality images - they seem noisy and the surface of interest looks to be out of focus (possibly related to aperture settings or unstable camera orientation), so I cannot fully agree that the features on the surface of interest are sharp,
I agree that the sharpness of the surface of interest in these images is not perfect. The images were taken with a Raspberry Pi camera module, and as such the pixel size is not retained in the EXIF data. I've had to enter this manually as 0.00112 mm/px based on what is posted here:
https://www.raspberrypi.org/documentation/hardware/camera/
Focusing this camera is done manually by moving the lens towards or away from the sensor, which makes focusing subjective.
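As a sanity check on that manually entered value, the pixel pitch can be derived from the sensor specs. This sketch assumes the Camera Module v2 (Sony IMX219: 3280 x 2464 px on a roughly 3.68 x 2.76 mm active area); the v1 module (OV5647) has 1.4 um pixels instead, so the assumed sensor is the thing to verify.

```python
# Sanity check for the manually entered pixel size, assuming the
# Raspberry Pi Camera Module v2 (Sony IMX219). If the camera is the
# v1 (OV5647, 2592 x 1944 px, 1.4 um pixels), these numbers change.

sensor_width_mm = 3.68   # IMX219 active sensor width (assumed spec)
width_px = 3280          # IMX219 horizontal resolution

pixel_size_mm = sensor_width_mm / width_px
print(f"{pixel_size_mm:.5f} mm/px")  # → 0.00112 mm/px
```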
- inaccurate marker placement - projections do not seem to be adjusted on the source images, so there's quite a large placement error compared to the accuracy settings used (the image quality also doesn't allow the marker locations to be determined precisely),
Would using coded markers and auto-detecting them in the images be significantly more precise than placing markers manually as I've done?
- accuracy of 0.001 mm doesn't seem to be a reasonable value, as the nodes of the printed checkerboard pattern cannot be measured with such precision (I would suggest using 0.1 mm at most),
Can you please explain what changing this value in Metashape actually affects in the sparse / dense cloud generation process? Does this affect the SIFT algorithm somehow, or the bundle adjustment? How does a grossly wrong accuracy value come into play in the reconstruction?
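To make my question concrete, here is my current (possibly wrong) understanding as a toy weighted-least-squares example: if the accuracy setting acts as the sigma of the marker observations, each observation would be weighted by 1/sigma^2, so an optimistic sigma forces the solver to honor noisy marker coordinates almost exactly. This is not Metashape's actual solver, just a one-parameter illustration.

```python
# Toy illustration (NOT Metashape's solver): observations weighted by
# 1/sigma^2, as in a standard weighted least-squares adjustment.

def weighted_estimate(obs):
    """obs: list of (value, sigma). Returns the weighted LS estimate."""
    weights = [1.0 / s**2 for _, s in obs]
    return sum(w * v for (v, _), w in zip(obs, weights)) / sum(weights)

# Marker says 100.05 mm with a claimed sigma of 0.001 mm (optimistic);
# tie points suggest 100.00 mm with sigma 0.1 mm.
print(weighted_estimate([(100.05, 0.001), (100.00, 0.1)]))  # ~100.05
# With a realistic marker sigma of 0.1 mm, the estimates average out:
print(weighted_estimate([(100.05, 0.1), (100.00, 0.1)]))    # 100.025
```

If this picture is right, a grossly optimistic accuracy would push the residual error into the camera poses and tie points rather than the markers; is that what happens?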
- disabled filtering for the dense cloud would result in considerably higher noise compared to standard settings (mild, moderate, aggressive),
My understanding is that depth filtering should be disabled if small surface details are important to retain. This was the advice in one of the tutorials or white papers listed on your site, in the context of drone-captured aerial imagery. There is obviously a difference of several orders of magnitude in scale between drone footage and what I'm doing, but I assume the mathematics of the depth filtering applies equally at both scales. If I'm trying to retain micron-level detail, shouldn't it be disabled?
- the surface of interest may have some inclination compared to the base level defined by markers, in this case the contour lines would represent the certain levels of the "slope".
I agree, there is likely a differential slope between the checkerboard surface and the surface of interest. This would account for the gradual elevation change across the surface of interest, but not for the sawtooth pattern.
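One way I could separate the two effects is to fit and subtract a linear trend from an extracted elevation profile, leaving only the residual pattern. A minimal sketch on synthetic data (the sawtooth here is a hypothetical stand-in for what I see in the contour lines):

```python
# Sketch: detrend an elevation profile to separate the overall slope
# from the residual sawtooth. Synthetic data only.

def linear_fit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

xs = list(range(100))
# 0.01 mm/step slope plus a sawtooth of ~0.05 mm amplitude, period 10
ys = [0.01 * x + 0.05 * ((x % 10) / 10.0) for x in xs]

b, a = linear_fit(xs, ys)
residual = [y - (a + b * x) for x, y in zip(xs, ys)]
# residual now shows the sawtooth with the slope removed
```

If I do this on the real data and the residual still shows the sawtooth, would that point at something other than the marker-defined base level?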