To answer BobvdMeij:
Your proposal looks good. However, because bundle block adjustment involves so many parameters, you would first need sufficiently accurate georeferenced photos and well-known camera parameters.
Indeed, adjusting an unscaled, non-georeferenced model onto GCPs is a big step in itself! To my mind it would be too hard, for the moment, to add an automatic evaluation of which coordinates correspond to which non-coded GCPs.
But if you already have a reasonably well georeferenced model thanks to the geotags, it may in the end just be a matter of matching the theoretical coordinates (given in your CSV file) to the non-coded detected markers. To avoid a completely wrong adjustment, though, a key condition of this process is that the geotag accuracy be much smaller than the distance between your targets in the field, so that coordinates cannot be associated with the wrong target.
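To illustrate what I mean by that condition, here is a minimal sketch in plain Python (no PhotoScan API involved; the threshold values are purely hypothetical): each estimated marker position is matched to the nearest surveyed coordinate, but a match is accepted only when it is unambiguous.

```python
import math

def match_targets(estimated, surveyed, max_dist=5.0, ratio=0.5):
    """Associate estimated marker positions with surveyed coordinates.

    estimated: dict of label -> (x, y), marker positions from the geotagged model
    surveyed:  dict of label -> (x, y), RTK coordinates from the CSV file
    max_dist:  reject matches farther than the expected geotag error (hypothetical)
    ratio:     best candidate must be this much closer than the runner-up
    """
    matches = {}
    for m_label, (mx, my) in estimated.items():
        # Distance from this marker to every surveyed target, nearest first
        dists = sorted(
            (math.hypot(mx - sx, my - sy), s_label)
            for s_label, (sx, sy) in surveyed.items()
        )
        best_d, best_label = dists[0]
        second_d = dists[1][0] if len(dists) > 1 else float("inf")
        # Accept only if the nearest target is close AND clearly closer than
        # the second nearest -- this is the "geotag accuracy much smaller
        # than target spacing" condition expressed in code.
        if best_d <= max_dist and best_d <= ratio * second_d:
            matches[m_label] = best_label
    return matches
```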
For the rest, we would need some help from Alexey, who may give us some starting advice if this looks feasible (Python scripting may be mandatory at this point!)
Regards
That makes sense, but I reckon it should still work.
I have a dataset flown at 50 m AGL with a GSD of approximately 1.3 cm. We use non-coded cross targets measuring 35 x 35 cm. The on-board GPS we utilize is decent when it comes to XY accuracy (5 m), but the Z component is useless. Hence we do use the GPS EXIF data for our first alignment, but then use GCPs (and uncheck all photos) to optimize camera positions.
I conducted a test run yesterday using the same data, aligning the photographs on High settings. Alignment worked out surprisingly well. Then I used 'Detect Markers' with a tolerance value of 50. Agisoft detected 40 markers, although we only placed 30 targets in the field. Of these 40 markers, 8 were located (far) outside the studied region, where not a single photograph was taken (strange...), and 2 were located within the study area but nowhere near an actual GCP marker. I reckon I could get rid of these by changing the tolerance value, but I haven't tried this yet. The remaining 30 detected markers, however, were SPOT ON. I must say I was sincerely impressed.
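As a side note, instead of tweaking the tolerance I suspect the spurious detections could be culled by how many photos each marker was found in, since a real target should be projected in many overlapping images. A rough, untested sketch against the PhotoScan Python API (I am assuming marker.projections and chunk.remove() behave as documented, and the cut-off value is hypothetical):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk
MIN_PROJECTIONS = 3  # hypothetical cut-off; real targets appear in many photos

# Detections with very few projections are likely false positives, such as
# the 10 markers that ended up outside or away from the actual GCP locations.
spurious = [m for m in chunk.markers if len(m.projections) < MIN_PROJECTIONS]
for m in spurious:
    print("Removing suspect marker:", m.label)
chunk.remove(spurious)
```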
Unfortunately, however, Agisoft names the detected markers in the order 1, 2, 3, etc., whereas we named our targets 1001, 1002, 1003, etc. in the field using our RTK-GPS equipment. If I now import the CSV file with marker coordinates, Agisoft only compares the IDs of the imported markers with those already created in the software. Because the naming conventions differ, Agisoft simply loads my CSV coordinates but does not relate them to the markers it previously detected.
I could, of course, remove the 10 incorrect markers and then visually locate the remaining 30 (correctly identified) markers, changing their IDs to the ones we gave them in the field. If I then import my CSV file again, Agisoft should identify a match and overwrite the (empty) coordinate fields of the detected markers with the coordinates from my CSV file.
Even though this would still save me some time, it should be possible to quite accurately predict (and then relate!) which of the detected markers corresponds to which GCP in my CSV file. I mean, the relative positions of the 30 detected markers in my sparse point cloud are known quite well (in a local coordinate system), and so are the relative positions of the 30 in-situ measured markers in my CSV file.
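To make that concrete, I imagine the matching could be scripted along these lines (an untested sketch against the PhotoScan 1.x Python API; the file name, the CSV layout of label,x,y,z without a header, and the distance thresholds are my assumptions, and the CSV must be in the same coordinate system as the chunk):

```python
import csv
import PhotoScan

chunk = PhotoScan.app.document.chunk
MAX_DIST = 2.5  # hypothetical: should stay well below the spacing between targets

# 1. Read the surveyed GCPs (label,x,y,z per row, no header assumed).
surveyed = {}
with open("gcps.csv") as f:  # hypothetical file name
    for label, x, y, z in csv.reader(f):
        surveyed[label] = PhotoScan.Vector([float(x), float(y), float(z)])

# 2. Estimate each detected marker's geocoordinate from the geotag-based
#    alignment, then assign the nearest surveyed point if unambiguous.
T = chunk.transform.matrix
for marker in chunk.markers:
    if marker.position is None:
        continue  # marker was never triangulated in the sparse cloud
    est = chunk.crs.project(T.mulp(marker.position))  # internal -> map coordinates
    dists = sorted(((est - coord).norm(), label) for label, coord in surveyed.items())
    best_d, best_label = dists[0]
    second_d = dists[1][0] if len(dists) > 1 else float("inf")
    # Require a close AND unambiguous match; spurious detections fail this test.
    if best_d < MAX_DIST and best_d < 0.5 * second_d:
        marker.label = best_label                      # e.g. "point 3" -> "1001"
        marker.reference.location = surveyed[best_label]
```

After running something like this, the renamed markers should pick up their RTK coordinates in the Reference pane, and Optimize Cameras could then be run as usual. Perhaps Alexey can confirm whether the API calls are right.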