As I can see in the Alignment parameters section of the PhotoScan manual
http://www.agisoft.com/pdf/photoscan-pro_1_4_en.pdf:
Accuracy
Higher accuracy settings help to obtain more accurate camera position estimates. Lower accuracy
settings can be used to get the rough camera positions in a shorter period of time.
While at High accuracy setting the software works with the photos of the original size, Medium setting
causes image downscaling by factor of 4 (2 times by each side), at Low accuracy source files are
downscaled by factor of 16, and Lowest value means further downscaling by 4 times more. Highest
accuracy setting upscales the image by factor of 4. Since tie point positions are estimated on the
basis of feature spots found on the source images, it may be meaningful to upscale a source photo
to accurately localize a tie point. However, Highest accuracy setting is recommended only for very
sharp image data and mostly for research purposes due to the corresponding processing being quite
time consuming.
As I understand it, the accuracy setting can be interpreted as a per-side image scale factor:

Highest: x2 up
High: original size (?)
Medium: x2 down
Low: x4 down
Lowest: x8 down (?)
Is that right?
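
For concreteness, this is how I read those factors in a small Python sketch (the dictionary, function name and factors are just my own interpretation of the manual text, not part of PhotoScan's API):

```python
# My reading of the manual: per-side scale applied to the source images
# at each Accuracy setting (area scale is the square of this value).
ACCURACY_SCALE = {
    "Highest": 2.0,   # upscaled x2 per side (x4 by area)
    "High":    1.0,   # original size
    "Medium":  0.5,   # downscaled x2 per side (x4 by area)
    "Low":     0.25,  # downscaled x4 per side (x16 by area)
    "Lowest":  0.125, # downscaled x8 per side (x64 by area)
}

def effective_size(width, height, accuracy):
    """Image size PhotoScan would effectively work with at a given setting."""
    s = ACCURACY_SCALE[accuracy]
    return round(width * s), round(height * s)

# Example: a 6000x4000 photo at Low accuracy -> (1500, 1000)
print(effective_size(6000, 4000, "Low"))
```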
Also, when I measure the accuracy of the model using GCPs (check points that were not included in the alignment process), I would expect higher accuracy settings to give smaller errors. However, the dependence does not seem to be monotonic: at the Lowest setting I get the smallest error, which is weird. Can someone confirm that this can happen in some cases? I have also tried running these experiments without Adaptive camera model fitting, and this strange behaviour persists.
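
For reference, this is roughly how I compute the check-point error (a minimal sketch with placeholder numbers, not my real measurements; it assumes the surveyed GCP coordinates and the model estimates of the same points are both Nx3 arrays in the same CRS and units):

```python
import numpy as np

# Placeholder check-point data: 'surveyed' are field-measured GCP coordinates,
# 'estimated' are the coordinates of the same points read off the model.
surveyed = np.array([
    [1000.00, 2000.00, 50.00],
    [1010.00, 2005.00, 51.00],
    [1020.00, 1995.00, 49.50],
])
estimated = np.array([
    [1000.03, 1999.98, 50.06],
    [1009.97, 2005.02, 50.95],
    [1020.02, 1994.96, 49.57],
])

residuals = estimated - surveyed
rmse_per_axis = np.sqrt(np.mean(residuals ** 2, axis=0))        # RMSE in X, Y, Z
rmse_total = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))   # 3D RMSE

print("RMSE X/Y/Z:", rmse_per_axis)
print("Total 3D RMSE:", rmse_total)
```

This is the error I am comparing across the different accuracy settings.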