Agisoft Metashape

Agisoft Metashape => General => Topic started by: BobvdMeij on August 06, 2018, 01:40:18 PM

Title: Your opinion on USGS Agisoft Processing Workflow
Post by: BobvdMeij on August 06, 2018, 01:40:18 PM
Dear all,

While randomly scouting across the internet in search of clarifications on certain terminology used in Agisoft I came across this (https://rmgsc.cr.usgs.gov/outgoing/UAS/workshop_data/2017_UAS_Federal_Users_Workshop/USGS%20PhotoScan%20Workflow.pdf) (also see the attachment below) seemingly well-structured Agisoft PhotoScan workflow formulated by USGS (United States Geological Survey) in March 2017. USGS being a globally renowned organization, I like to believe considerable thought and extensive testing and validation led to this document.

I personally very much like the column named ‘Function’, which supposedly describes what each step does and how it affects the output, especially because such information is often lacking in the rather technical and somewhat limited explanation provided in Agisoft’s official user manual. This is particularly true of the various and seemingly important Gradual Selection stages. I believe this ranks among the most frequently discussed themes on this forum, although a comprehensive, understandable explanation of what it does exactly and how it should be applied is still missing.

I'm eager to learn what you all think of this USGS workflow, how it relates to your own, and whether you could perhaps comment on why certain steps are executed in this order with these particular settings. I’m particularly intrigued by the presented order of the marking of GCPs, the Camera Optimization and the subsequent Optimization Parameters.

I personally always use the following methodology (a rough Python sketch of this loop follows the list):

1.   Align Photos
2.   Mark Ground Control and Checkpoints
3.   Optimize Cameras (checking all parameters except p3 and p4)
4.   Gradual Selection Reprojection Error at 0.5 > delete points > Optimize Cameras (check all except p3/p4)
5.   Gradual Selection Reconstruction Uncertainty at 10 > delete points > Optimize Cameras (check all except p3/p4)
6.   Gradual Selection Projection Accuracy at 2-3 > delete points > Optimize Cameras (check all except p3/p4)
7.   Dense Cloud > DEM > Orthomosaic
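For anyone who prefers scripting, steps 3-6 boil down to roughly the following. This is only a minimal sketch against the Python API (PhotoScan at the time of writing; the module is called Metashape in newer builds and keyword names vary slightly between versions), with the thresholds from the list above:

Code:
import PhotoScan  # "import Metashape" in newer builds

chunk = PhotoScan.app.document.chunk
F = PhotoScan.PointCloud.Filter

def optimize():
    # "check all parameters except p3 and p4", as in steps 3-6
    # (p3/p4 are simply left at their default of not being fitted)
    chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                          fit_b1=True, fit_b2=True,
                          fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=True,
                          fit_p1=True, fit_p2=True)

def remove_worst(criterion, threshold):
    # gradual selection: select everything above the threshold and delete it
    f = F()
    f.init(chunk, criterion=criterion)
    f.selectPoints(threshold)
    chunk.point_cloud.removeSelectedPoints()

optimize()                                      # step 3 (after marking GCPs)
remove_worst(F.ReprojectionError, 0.5)          # step 4
optimize()
remove_worst(F.ReconstructionUncertainty, 10)   # step 5
optimize()
remove_worst(F.ProjectionAccuracy, 3)           # step 6
optimize()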

The USGS workflow, however, employs a much more complex procedure. Rather than running ALL Gradual Selection stages AFTER marking the GCPs (as I do), the USGS workflow applies Reconstruction Uncertainty and Projection Accuracy BEFORE any GCPs are marked. Only the Reprojection Error step is executed AFTER GCPs are included. Also note that the USGS workflow suggests changing the Tie Point Accuracy setting within the Reference Settings from 1.0 to 0.1 along the way. The workflow furthermore suggests checking/unchecking different Camera Parameters for distinct Gradual Selection stages, rather than keeping these the same across the board as I do (and as I believe many others do).

Again, the workflow seems well thought out, but I still cannot wrap my head around certain details. I’m hoping some of you, and Agisoft’s developers in particular, are able to reflect on the matter!

Thanks in advance.

Bob
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: GPC on August 07, 2018, 05:48:41 AM
There are more good files in the directory:

https://rmgsc.cr.usgs.gov/outgoing/UAS/workshop_data/2017_UAS_Federal_Users_Workshop/ (https://rmgsc.cr.usgs.gov/outgoing/UAS/workshop_data/2017_UAS_Federal_Users_Workshop/)

Some great info there, even sample image files.

Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: Hydro_Ydé on August 11, 2018, 07:49:22 PM
Thanks, BobvdMeij, for this very interesting topic.
It seems normal to me to clean the cloud as much as possible before including GCPs. That gives a clean "aero side only" solution before trying to adjust to real-world coordinates. But in my opinion it would not change much to follow your own process (i.e. include GCPs first).
Also, the 0.1 tie point accuracy seems very low to me, as my images are often not so crisp. 0.3 is better in my case.
One thing they don't mention is having a look at the camera calibration values. This is something I look at really closely, i.e. whether the values are consistent between various surveys. A big Z error can come from a bad f value, even with good GCPs.
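For comparing those values between surveys, a few lines in the Python console are enough. A minimal sketch, assuming the standard PhotoScan/Metashape API attribute names:

Code:
import PhotoScan  # "import Metashape" in newer builds

# print the adjusted calibration of every sensor, to compare across surveys
for sensor in PhotoScan.app.document.chunk.sensors:
    cal = sensor.calibration  # values after the last optimization
    print(sensor.label,
          "f=%.2f" % cal.f,
          "cx=%.2f cy=%.2f" % (cal.cx, cal.cy),
          "k1=%.5f k2=%.5f k3=%.5f" % (cal.k1, cal.k2, cal.k3))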
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: andyroo on August 15, 2018, 09:53:22 PM
Just wanted to provide a little context on these docs and credit where credit is due -

The material is from a 2017 workshop presented by the USGS National UAS Project Office, and based heavily on training provided by Tom Noble (US BLM, Ret.) and Neffra Matthews (US BLM).

Tom's methodology "...works for fixed cameras, a wide angle lens, decent photos and is based on years of experience, testing and comparing..." and he is the first to tell you that photogrammetry is as much an art as it is a science.

I have been to several workshops Tom has instructed or participated in, and he often cautions that these methods are based around DSLRs with prime lenses, or other fixed-lens and even metric cameras. The workflow as summarized in a table or a few pages of a PDF is dramatically oversimplified compared to the use cases and guidance he presents in his workshops.

That said, the referenced USGS materials are pretty useful background, and a great point for discussion. For example, Tom would tell you that the 0.1 pixel parameter for tie point accuracy is only something you'd constrain after you go through iterations to remove the points with bad reconstruction uncertainty and projection accuracy using the gradual selection tools, and it might not be achievable if you're not using a fixed-lens camera with minimal motion blur and a good lens.

I know Tom is pretty busy right now, but hopefully he'll provide more background when he has a chance.

Andy

Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: Hydro_Ydé on August 16, 2018, 03:32:05 PM
Thank you Andy for the response.
Much agreed about the art part; mapping always also involves a bit of artistic creation.
I hope Tom will find some time to share his experience here. This field is still quite new, and at least we are happy to see that PhotoScan is a good choice as the main tool for photogrammetry. Maybe we could also talk about pre- and post-PhotoScan workflows.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: mks_gis on August 23, 2018, 05:37:16 PM
This is really interesting! How would you update the workflow with the adaptive fitting option that PhotoScan now has? In the expanded document they say to use the default values or the checked parameters for optimisation; which is best?

Cheers
Martin
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: karad on August 24, 2018, 08:40:17 AM
I think that the PhotoScan developers should explain the whole process in photogrammetric and adjustment terms, for better understanding and judgment.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: mks_gis on August 24, 2018, 01:42:42 PM
I'm unclear on one thing, amongst others. Perhaps someone can help.

In the gradual selection process they mention the percentage of points selected in each step. Is that a percentage of the total tie points you started with, or a percentage of the tie points left after each step? The latter obviously reduces the tie points considerably; when I tried it with some P4 data, about 90% of the tie points we started with were removed.
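For context, the two readings diverge quickly: if every pass selects 10% of the points remaining, the surviving fraction after k passes is 0.9^k. A quick illustration in plain Python:

Code:
# fraction of the original tie points left if each pass removes 10% of what remains
for k in (1, 5, 10, 22):
    print(k, "passes ->", round(100 * 0.9 ** k, 1), "% of the original points left")
# 1 -> 90.0 %, 5 -> 59.0 %, 10 -> 34.9 %, 22 -> 9.8 %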
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: macsurveyr on August 24, 2018, 11:44:03 PM
Hello All,

Learning and executing photogrammetry properly is so much more than following a workflow, a recipe or a checklist. Understanding the physics of light and the workings of a camera/lens system and the identification of sources of error in order to either eliminate or effectively model those errors is more challenging than checking all the boxes of a checklist or following all the steps of a workflow.

It is very difficult to teach and learn in a software forum - there are just too many variables - so many variables.

I will tell you that the reason to do any optimization is to improve the camera calibration. However, if the camera is not stable, has no consistent focus, has rolling shutter effect, has a small sensor, if the focal length is too long, the images are compressed, etc., then improving the calibration - no matter what parameters are checked - might be a futile effort. If the camera calibration can be improved, how far things can or should be optimized is also not always the same - many variables. The workflow cited in this forum topic - which has its origins with me - is for high quality cameras with a relatively large sensor, a relatively wide angle lens and high quality photos, and is modeled after being able to achieve high quality results comparable to several other tried and true, more traditional photogrammetry software suites from days gone by, which are still the gold standard as far as quantifiable results. Much more effort was necessary, and no, it wasn’t at all obvious or straightforward how to achieve those results. It took a lot of experience. PhotoScan is very good and capable of achieving excellent results, and while I sometimes wish good results were possible with no background knowledge and/or no experience, that just is not the case. PhotoScan has made incredible strides to that end. Maybe someday it will be the case.

Tom
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: AndyC on August 25, 2018, 05:14:09 PM
Well said Tom, thanks.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: Dave Martin on August 25, 2018, 05:46:05 PM
Well said Tom, thanks.
+1
Dave
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: jmaeding on August 30, 2018, 09:50:01 PM
Oh geeze, the results after using the USGS methodology are 100x better than without it.
This should be part of the Agisoft standard workflow IMO.

I wonder if the gradual selection steps could be automated. I am a .NET guy mainly but can adapt to other languages.
I sent my "re-write" of the USGS workflow to Agisoft sales. Hope they do something with it.
In the end, the PS results were better than DroneDeploy and Pix4D.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: mks_gis on August 31, 2018, 11:08:11 AM
Quote from: jmaeding on August 30, 2018, 09:50:01 PM
Oh geeze, the results after using the USGS methodology are 100x better than without it.
This should be part of the Agisoft standard workflow IMO.

I wonder if the gradual selection steps could be automated. I am a .NET guy mainly but can adapt to other languages.
I sent my "re-write" of the USGS workflow to Agisoft sales. Hope they do something with it.
In the end, the PS results were better than DroneDeploy and Pix4D.

I've automated gradual selection here using Python, loosely based on this, but not fully (I only do RecUncert/ProcAcc once each):

http://www.agisoft.com/forum/index.php?topic=9578.0

Workflow with GCPs (a bare-bones sketch of the script skeleton follows this list):
1. Run the script
2. Set the export folder
3. Set any file prefix for the export files
4. In the Custom menu: Align with grad
5. Do your GCP work
6. In the Custom menu: Run all after alignment - Geo with grad
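For anyone who wants to roll their own, the core is just two custom menu entries wrapping the gradual selection calls. A simplified sketch, not my actual script (module name and defaults depend on your version; shown here with the PhotoScan name, Metashape in newer builds):

Code:
import PhotoScan  # "import Metashape" in newer builds

def align_with_grad():
    chunk = PhotoScan.app.document.chunk
    chunk.matchPhotos()      # accuracy/limits left at the project defaults here
    chunk.alignCameras()
    # ...one pass each of Reconstruction Uncertainty / Projection Accuracy,
    #    followed by optimizeCameras(), would go here...

def run_all_after_alignment():
    chunk = PhotoScan.app.document.chunk
    # ...Reprojection Error passes, optimizeCameras(), dense cloud, DEM,
    #    orthomosaic and the exports would go here...

PhotoScan.app.addMenuItem("Custom/Align with grad", align_with_grad)
PhotoScan.app.addMenuItem("Custom/Run all after alignment - Geo with grad",
                          run_all_after_alignment)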

Opinions welcome.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: stihl on August 31, 2018, 11:55:26 AM
Quote from: jmaeding on August 30, 2018, 09:50:01 PM
Oh geeze, the results after using the USGS methodology are 100x better than without it.
This should be part of the Agisoft standard workflow IMO.

I wonder if the gradual selection steps could be automated. I am a .NET guy mainly but can adapt to other languages.
I sent my "re-write" of the USGS workflow to Agisoft sales. Hope they do something with it.
In the end, the PS results were better than DroneDeploy and Pix4D.
Funny, seeing how I followed this workflow and got worse results than with our own developed workflow.

I suppose it all depends on your data set.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: mks_gis on August 31, 2018, 11:58:48 AM

Quote from: stihl on August 31, 2018, 11:55:26 AM
Funny, seeing how I followed this workflow and got worse results than with our own developed workflow.

I suppose it all depends on your data set.

Could you outline your workflow and describe your type of data, out of interest?
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: stihl on August 31, 2018, 05:31:26 PM

Quote from: mks_gis on August 31, 2018, 11:58:48 AM
Could you outline your workflow and describe your type of data, out of interest?
Unfortunately, I prefer not to go into the depths of our workflow. I can say that it's quite different from what USGS follows for their gradual selection filtering.
The USGS workflow yielded slightly worse absolute errors against independent check points than our own workflow, and showed a decrease in the amount of detail in the dense cloud, most likely due to larger misalignment errors.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: mks_gis on August 31, 2018, 05:52:12 PM
I understand the commercial implications; it's a shame though, as sharing our settings and experiences helps the community overall. We'll slog on, cheers.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: TXPE on March 27, 2019, 05:21:50 PM
How, if at all, would the use of higher precision geotags (5-20 cm) affect step 1 (p. 4) of the instructions, which states "keep the accuracy settings to the default values..."?
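For reference, those accuracy settings map to chunk-level properties in the Python API, and with 5-20 cm geotags the usual move would be to tighten the camera accuracy from its 10 m default rather than leave it. A sketch with purely illustrative values (property names per the 1.x API):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# default camera (geotag) accuracy is 10 m; with 5-20 cm geotags you would
# lower it to match the expected error, e.g. 10 cm horizontal / 20 cm vertical
chunk.camera_location_accuracy = Metashape.Vector([0.1, 0.1, 0.2])
chunk.marker_location_accuracy = Metashape.Vector([0.02, 0.02, 0.03])  # GCPs
chunk.tiepoint_accuracy = 1.0  # the default the instructions say to keep at first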
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: 2bForgotten on April 08, 2019, 03:13:02 PM
Hi there,

So I don't know about y'all, but I am still eager to hear something from our Agisoft pros on this forum regarding the questions raised in the OP.

Regards,
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: wizprod on October 06, 2019, 05:47:59 PM
The link in the OP is now broken. Did anyone save the full workflow PDF, and can you post it here?
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: PROBERT1968 on October 06, 2019, 06:13:13 PM
Is this what you are looking for?


https://uas.usgs.gov/nupo/otherresources.shtml

Apparently they may have moved it or changed it...
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: wizprod on November 14, 2019, 05:58:28 PM
Yep! Thanks!
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: bgroff on November 30, 2019, 08:20:05 PM
Hi. I've tried the USGS workflow for a few weeks now on several projects and these are my observations:

  • Number of projections per photo seems better at a minimum of 150, rather than 100 as USGS suggests
  • The mean error in each photo of 0.3 pixels seems fine, though 0.4 is my usual result
  • The whole gradual selection workflow appears okay. No major hangups with it
  • Tie point tolerance has the greatest effect on increasing SEUW to 1, but I find that my GCP and check point errors are much higher as I lower tie point tolerance toward 0.1. My preference is to set tie point tolerance to around 0.3 and this appears to result in my desired 5cm accuracy for GCP and check points.

All of my mapping operations are using a P4P with 20MP camera, no RTK/PPK yet, but GCP set with diff. GNSS receivers. We're looking at higher quality mapping drones.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: working_guy on December 05, 2019, 11:13:59 AM
bgroff,

Where do you define the tie point tolerance? I can't find it.

Thanks

HRS

Quote from: bgroff on November 30, 2019, 08:20:05 PM
Hi. I've tried the USGS workflow for a few weeks now on several projects and these are my observations:

  • Number of projections per photo seems better at a minimum of 150, rather than 100 as USGS suggests
  • The mean error in each photo of 0.3 pixels seems fine, though 0.4 is my usual result
  • The whole gradual selection workflow appears okay. No major hangups with it
  • Tie point tolerance has the greatest effect on increasing SEUW to 1, but I find that my GCP and check point errors are much higher as I lower tie point tolerance toward 0.1. My preference is to set tie point tolerance to around 0.3 and this appears to result in my desired 5cm accuracy for GCP and check points.

All of my mapping operations are using a P4P with 20MP camera, no RTK/PPK yet, but GCP set with diff. GNSS receivers. We're looking at higher quality mapping drones.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: bgroff on December 06, 2019, 03:09:19 AM
Hi HRS. It's in the Reference Settings dialog, opened from the toolbar of the Reference panel.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: working_guy on December 06, 2019, 07:59:15 PM
Thanks, I'll go look at that.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: MaciekK on March 20, 2020, 12:37:53 AM
Hello everyone,

I'm new to the forum and also new to working with Metashape. I have some experience with photogrammetry - a few years of work on Z/I Imaging. Now I am starting with Agisoft.... But back to the topic: I practiced the USGS workflow on several data sets - 1.5 ha (75%, 75%) 100 photos, 12 ha (80%, 80%) 530 photos and 12 ha (60%, 40%) 370 photos grid (the same area) - with GCPs measured to an accuracy of 1 cm in xyz, a 2 cm pixel, and a P4RTK drone. Alignment was carried out until the errors assumed by USGS were reached; camera parameters: all checked, without additional corrections. I recommend this workflow, very good results were obtained. At 33 check points, errors were no larger than 2 cm. The dense cloud finally generates quickly, about 40 minutes on the High setting, with height accuracy of +/- 2 cm at the check points. I really recommend it.
PS. I want to introduce some modifications to the workflow in order to obtain more accurate image matching, around 0.2 pix.

Regards
Maciek
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: working_guy on March 20, 2020, 01:57:03 PM
Quote from: MaciekK on March 20, 2020, 12:37:53 AM
Hello everyone,

I'm new to the forum and also new to working with Metashape. I have some experience with photogrammetry - a few years of work on Z/I Imaging. Now I am starting with Agisoft.... But back to the topic: I practiced the USGS workflow on several data sets - 1.5 ha (75%, 75%) 100 photos, 12 ha (80%, 80%) 530 photos and 12 ha (60%, 40%) 370 photos grid (the same area) - with GCPs measured to an accuracy of 1 cm in xyz, a 2 cm pixel, and a P4RTK drone. Alignment was carried out until the errors assumed by USGS were reached; camera parameters: all checked, without additional corrections. I recommend this workflow, very good results were obtained. At 33 check points, errors were no larger than 2 cm. The dense cloud finally generates quickly, about 40 minutes on the High setting, with height accuracy of +/- 2 cm at the check points. I really recommend it.
PS. I want to introduce some modifications to the workflow in order to obtain more accurate image matching, around 0.2 pix.

Regards
Maciek

Don't forget to tell us what you changed and your results. Great work!  ;)
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: cronair on March 20, 2020, 06:28:23 PM
Thank you for bumping this thread back to the top. This is exactly what I have been looking for. It's a couple of years old but it sounds like it's still a valid workflow. I am trying it now.

And echoing what working_guy said, don't forget to tell us your modifications, MaciekK.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: 2bForgotten on March 24, 2020, 01:00:26 PM
Quote from: MaciekK on March 20, 2020, 12:37:53 AM
Hello everyone,

I'm new to the forum and also new to working with Metashape. I have some experience with photogrammetry - a few years of work on Z/I Imaging. Now I am starting with Agisoft.... But back to the topic: I practiced the USGS workflow on several data sets - 1.5 ha (75%, 75%) 100 photos, 12 ha (80%, 80%) 530 photos and 12 ha (60%, 40%) 370 photos grid (the same area) - with GCPs measured to an accuracy of 1 cm in xyz, a 2 cm pixel, and a P4RTK drone. Alignment was carried out until the errors assumed by USGS were reached; camera parameters: all checked, without additional corrections. I recommend this workflow, very good results were obtained. At 33 check points, errors were no larger than 2 cm. The dense cloud finally generates quickly, about 40 minutes on the High setting, with height accuracy of +/- 2 cm at the check points. I really recommend it.
PS. I want to introduce some modifications to the workflow in order to obtain more accurate image matching, around 0.2 pix.

Regards
Maciek

Hi there,

Thanks for sharing!

Some time ago I gave it a try myself. I did get some mixed results on a project with sandy dunes and low vegetation.

But the questions raised in the OP regarding the workflow with the gradual selection tool are still open to debate. Have you experimented with that too?

Regards,
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: MaciekK on March 24, 2020, 11:19:39 PM
Hello,
I think gradual selection is a very good tool for cleaning up the aerotriangulation. I went a little further than the USGS shows and I got accuracy below 0.2 pix. This required the gradual removal of less accurate projections. I used 3 options: Reconstruction Uncertainty, Projection Accuracy, Reprojection Error. After each correction, I optimized the camera and watched for SEUW ~ 1. I am a bit worried that a lot of points (about 50%) occur in only 2 pictures - I have to work on it. Still, the construction seems very rigid. After enabling the coordinates of the projection centers and optimizing the camera, I obtained estimated GCP xyz coordinates differing by at most 0.02 m from those measured in the field with the GNSS receiver. In addition, there is no dense cloud noise and it generates very quickly.
All this on a P4RTK with a relatively weak camera. Metashape always calculates the quality of my photos at above 0.7.
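If anyone wants to keep an eye on those two-image points, they can be counted from the Python console. A rough sketch against the Metashape 1.x API (it just tallies projections per track):

Code:
import Metashape
from collections import Counter

chunk = Metashape.app.document.chunk
pc = chunk.point_cloud

# count in how many aligned images each track (tie point) is measured
counts = Counter()
for camera in chunk.cameras:
    if camera.transform is None:   # skip unaligned cameras
        continue
    for proj in pc.projections[camera]:
        counts[proj.track_id] += 1

valid_tracks = set(p.track_id for p in pc.points if p.valid)
two_image = sum(1 for t in valid_tracks if counts[t] == 2)
print(two_image, "of", len(valid_tracks), "tie points are seen in only 2 images")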
I look forward to your experiences
Regards MK
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: Paulo on March 25, 2020, 04:49:36 AM
Hi Maciek,

your final numbers are very impressive, but I think your tie point reduction is extreme: from 729 000 to 69 000 for a 280-image data set. That represents an average of about 626 projections per image... Some images may have far fewer projections, and that may affect depth map generation (which needs at least 100 common points between image pairs for a pair to be used).
I have processed a similar P4RTK data set, and I reduce tie points using gradual selection but set a limit of not removing more than 50% of the original points... See the comparative screenshots of your adjustment and mine. My reprojection error is also under 0.2 pixel....
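As a quick check, the projection count per image is easy to print from the console, so low images stand out before the dense cloud step. A sketch, assuming the Metashape 1.x API:

Code:
import Metashape

chunk = Metashape.app.document.chunk
pc = chunk.point_cloud

# report tie point projections per aligned image; flag anything under ~100
for camera in chunk.cameras:
    if camera.transform is None:
        continue
    n = len(pc.projections[camera])
    print(camera.label, n, "<-- low" if n < 100 else "")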
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: RoryG on March 25, 2020, 12:35:21 PM
Our workflow pretty much follows the USGS processing workflow; however, we do try not to remove over 50% of the sparse cloud points. We always aim for a reprojection error under 0.3 px, which gives us very accurate outputs: on a recent test project we achieved 1.4 cm RMSE XYZ on our surveyed check points. You can read the article here: https://www.linkedin.com/pulse/post-processed-kinematic-gnss-mapping-rory-gillies/

Here's a couple of screen grabs from the report:

Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: MaciekK on March 25, 2020, 01:08:44 PM
Thanks for the feedback, guys. I'm also wondering if I'm doing the right thing by deleting so many points. On the other hand, I worked for a few years with classic aerial photogrammetry and there was no question of needing 100 points per stereo pair; a good relative orientation came out at about 20 points. Hence, I have no resistance when I have only 100 projections in a picture (usually these are pictures on the edge of the block). I read somewhere - I do not remember the source - that the quality of the dense cloud depends primarily on the correct parameters of the relative orientation, and I hope that these ~100 good points guarantee that; this is confirmed by my tests. The topic of camera calibration also remains in my head - I do not know to what extent the relative orientation is responsible for the correct calibration and to what extent the exterior orientation is. Please, Alexey, join the discussion..... In the coming days I will work on a block of about 1500 photos. This will be a more representative example...
Thanks again for your data. I've been working with Metashape for about a month and I don't yet understand all the procedures.
Regards
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: gsmarshall on April 10, 2020, 01:49:26 AM
Hello all,

This may not fall directly under the topic of this thread, but it seems there are lots of experienced metashape users with helpful insights on here, so here we are.

I am working with two fairly small sets of historic aerial photos (30-50 images each, from 1943 and 1978), and have some questions about what the error rates mean and how I might improve my workflow. I am not certain of exactly how the images were digitized, but I believe they came from negatives scanned in with a high quality (but not photogrammetric) scanner, and are stored as .tifs. After initially aligning, I have been incorporating the basic elements of the USGS workflow mentioned above: editing the sparse cloud by reconstruction uncertainty and projection accuracy, importing control points, editing by reprojection error, and optimizing cameras after each step. The dense point cloud and DEM generated from the images has too much noise and too many holes to produce a good orthomosaic, so I have been using a lidar DEM and have gotten better results with that.

The fiducial marks on the images are not auto-detectable so I have so far not used them (I picked up this project fairly recently), but I would be very interested to hear about anyone's experience using them/how much difference they make.

So far, camera errors for one set are pretty rough (1-2px), with better control point error. The orthomosaic has few visual errors in my area of interest (near the center of the scene, with good dispersal of GCPs, and at higher elevations than the surrounding area), but significant visual errors in the valleys. The images for this set are not cropped perfectly - there is still some border on one side of the image, and some of the fiducials are cut off - so that may be hampering the quality of my results, but I am unsure of how much difference that makes.
 
The other set has much better camera error (~0.4px) and control point error, and the orthomosaic is good but still has some relatively small visual errors (disjointed roads, duplicated areas over seams, etc).

I have two main questions: For the worse image set, placing more GCPs in the scene increases the camera error and disperses the visual errors across the scene, so that the valleys aren't as bad but the uplands (where I'm interested) are worse. Why might this happen? Is it the lack of geometric correction from fiducials, poor overlap, poor quality GCPs (all of which are possible)?

Second, regarding the better image set, how are the camera errors, control point errors, and visual errors related? My goal is to measure vegetation change with a supervised classification, so it seems to me like the visual errors in both orthomosaics would harm the accuracy of this measurement; but maybe a lower camera error produces better results across the whole image, while the visual errors are isolated to the seamlines?

I am quite new to metashape and photogrammetry, and any help would be much appreciated.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: Alexey Pasumansky on July 05, 2021, 03:29:39 PM
It seems that just a couple of weeks ago a new version of the detailed USGS workflow (for Metashape 1.6) was published:

https://pubs.er.usgs.gov/publication/ofr20211039
https://pubs.usgs.gov/of/2021/1039/ofr20211039.pdf
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: c-r-o-n-o-s on July 06, 2021, 03:20:11 PM
The procedure in the instructions is very interesting, and I had already tested it some time ago.
Unfortunately, I have to say that the effort is out of all proportion to the result.

I use levelled fixed points (+/- 1 mm) to check the accuracy.
 
1.) If you calculate without a tie point limit, the calculation time increases immensely.
2.) Up to a certain point of the gradual selection I get nearly the same results as if I use the "normal" 40000/4000 values.
3.) But the worst thing is:
The more I run the "optimisation" after a certain point, the worse the control points fit!
It leads to a kind of overcorrection, which delivers worse results for me.


In the best case, I am 1 mm better, but with a significantly longer calculation time.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: PROBERT1968 on July 07, 2021, 05:42:17 PM
Hey Alexey,

There is another document, from the US BLM, written about close-range photogrammetry on their website that might be helpful for others who need help understanding this. When I took the training from the US Forest Service I learned how to use the software; I think that was back in 2017. It was PhotoScan before you rebranded to Metashape.

I have that copy as a PDF, but it is too big a file to upload here, so I found the link; everyone here who wants to read it can download it there. It is not the workflow, but it explains how it works.

https://www.blm.gov/documents/national-office/blm-library/technical-note/aerial-and-close-range-photogrammetric

The BLM and US Forest Service are more focused on using UAS to fight the wildfires here in the USA.

I think I have posted a few of these on this site for anyone who wants to learn about UAS and drones.


May I suggest that it might be a good idea to add a new technical or workshop board to your forum for anyone to share such links. Just saying...
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: dpitman on July 13, 2021, 10:10:01 PM
For anyone using MS with aerial drone image sets, this is very interesting. It all seems straightforward until you get to the removal of points using gradual selection. I did a quick test with the project I'm working on and, if I follow the recipe, the values for the photo errors go down but the error for the check points goes up. I'm talking very small movement, but movement in the wrong direction. Since my confidence is highest in the check point coordinates, I abandoned the idea of reducing sparse cloud points if the reported check point errors go up.

Perhaps I'm misunderstanding this "subjective step" (used in the later USGS workflow), and perhaps it's not really suitable for projects where, in the end, I uncheck all of the photos (cameras) and use only the coordinates of the ground control points?

It would be great if you experienced guys would weigh in.

Thanks,
Dave
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: c-r-o-n-o-s on July 14, 2021, 10:01:36 AM
Quote from: dpitman on July 13, 2021, 10:10:01 PM
For anyone using MS with aerial drone image sets, this is very interesting. It all seems straightforward until you get to the removal of points using gradual selection. I did a quick test with the project I'm working on and, if I follow the recipe, the values for the photo errors go down but the error for the check points goes up. I'm talking very small movement, but movement in the wrong direction. Since my confidence is highest in the check point coordinates, I abandoned the idea of reducing sparse cloud points if the reported check point errors go up.

Perhaps I'm misunderstanding this "subjective step" (used in the later USGS workflow), and perhaps it's not really suitable for projects where, in the end, I uncheck all of the photos (cameras) and use only the coordinates of the ground control points?

It would be great if you experienced guys would weigh in.

Thanks,
Dave

That is exactly what I have found.
I think it depends on the project you are working on: what camera, what altitude, etc.
Metashape does a good job here "out-of-the-box".
However, turning off RTK coordinates altogether should not be the goal.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: Paulo on July 14, 2021, 04:41:13 PM
Hello all,

I have adapted my gradual selection procedure according to the latest USGS workflow, and on 2 drone projects flown at 4 cm GSD I have found that:
- in the 1st case, it does increase check point accuracy during the last gradual selection phase (Reprojection Error);
- in the 2nd case it does not; accuracy actually decreases during the 3rd phase (Reprojection Error) optimization, so stopping after the Reconstruction Uncertainty and Projection Accuracy phases gives better results.....
So results are mixed regarding check point accuracy. Of course it does decrease the average reprojection error as well as the number of tie points. The key is to iterate the reprojection error gradual selection so that each iteration only selects 10% of the remaining tie points, up to a user-defined limit (in my case I stop when the remaining points are 20% of the original number of tie points), and to check whether the control/check point errors increase or decrease.
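For those who want to script that iteration, one way is to bisect the reprojection error threshold until roughly 10% of the remaining points are selected, then remove and re-optimize. A rough sketch only (Metashape 1.x API; the 10% target and search range are illustrative, and check point errors should be reviewed between iterations):

Code:
import Metashape

chunk = Metashape.app.document.chunk
pc = chunk.point_cloud
F = Metashape.PointCloud.Filter

def count_selected_at(f, threshold):
    for p in pc.points:           # clear any previous selection first
        p.selected = False
    f.selectPoints(threshold)
    return sum(1 for p in pc.points if p.valid and p.selected)

def select_about(fraction=0.10, lo=0.0, hi=2.0, steps=20):
    # bisect a reprojection error threshold so ~fraction of valid points is selected
    total = sum(1 for p in pc.points if p.valid)
    f = F()
    f.init(chunk, criterion=F.ReprojectionError)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if count_selected_at(f, mid) > fraction * total:
            lo = mid              # too many selected, raise the threshold
        else:
            hi = mid
    count_selected_at(f, hi)      # leave the final selection in place
    return hi

threshold = select_about(0.10)
print("removing points with reprojection error above", round(threshold, 3))
pc.removeSelectedPoints()
chunk.optimizeCameras()  # then compare control/check point errors before iterating again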

But for alignment, I do not use a tie point limit of 0, as it increases processing time excessively (v1.7.3); instead I use a 40,000 key point limit and a 7,000 tie point limit, and it gives good results for 16 Mpix imagery.
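In script form those alignment limits are just keyword arguments (a sketch; other matchPhotos parameters left at their defaults, and a limit of 0 disables the cap):

Code:
import Metashape

chunk = Metashape.app.document.chunk
# 40 000 key points / 7 000 tie points per image; tiepoint_limit=0 would mean no limit
chunk.matchPhotos(keypoint_limit=40000, tiepoint_limit=7000)
chunk.alignCameras()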

Also, like dpitman, in the last phase (reprojection error) optimization I uncheck all my cameras, as my drone is equipped with a consumer-grade GPS (5 to 10 m accuracy), and check all my GCPs (accuracy 2 to 3 cm) except my check points....
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: dpitman on July 14, 2021, 05:29:05 PM

Quote from: c-r-o-n-o-s on July 14, 2021, 10:01:36 AM
[snip] However, turning off RTK coordinates altogether should not be the goal.


Quote from: Paulo on July 14, 2021, 04:41:13 PM
[snip] Also, like dpitman, in the last phase (reprojection error) optimization I uncheck all my cameras, as my drone is equipped with a consumer-grade GPS (5 to 10 m accuracy), and check all my GCPs (accuracy 2 to 3 cm) except my check points....

Also in my case it's a P4P (not RTK), so the camera coordinates are not highly accurate.
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: andyroo on July 14, 2021, 09:41:47 PM
Enjoying the discussion on this topic a lot.

We developed the most recent USGS workflow (https://pubs.er.usgs.gov/publication/ofr20211039) using fixed-lens DSLR cameras with survey-grade RTK positions and precise (sub-millisecond) event marks, so I wouldn't be at all surprised if the "best" workflow for built-in drone cameras (often with rolling shutter), using only GCPs and/or consumer-grade drone GPS, is much different.

I am especially interested in the discussion limiting keypoints and tiepoints - since as Paulo noted, that increases processing time substantially. I hope to get around to doing some experiments on that soon, but again I expect the results will be somewhat specific to different camera types.

A couple of notes re optimization from my current "best" workflow, which is very similar to the current USGS published workflow (a short sketch of the two optimization calls follows this list):

1) I've found that there doesn't appear to be any significant difference in how many/which points are selected, whether or not you optimize between performing gradual selection on Reconstruction Uncertainty (RU) and on Projection Accuracy (PA), and because I'm trying to minimize the number of times I optimize (both for speed and error propagation) I perform both RU and PA gradual selection before my first optimization.

2) At the moment I'm only performing 1 RU, 1 PA, and 2 Reprojection Error (RE) optimizations.

3) For all but the last RE optimization, I only optimize f, cx, cy, k1, k2, k3, p1, p2, and for the last RE optimization, I add Fit Additional Corrections (FAC), but I DO NOT add b1, b2, k4, p3, p4. I haven't re-evaluated recently, but last time I took a deep dive into the lens model parameters, I found that enabling b1, b2, k4, p3, p4 resulted in residual errors for those variables that were a significant fraction of the variable value (much more so than the other variables), without significantly improving camera position errors or GCP errors. This strongly implied to me that I was overfitting. Again, this is specific to fixed-lens DSLR cameras with precise camera and ground control positions.

4) I've noticed that if I do multiple optimizations with FAC enabled, error (camera and GCP) appears to increase, so I only enable Fit Additional Corrections on my final optimization.

5) I've had mixed results, sometimes worse, and at best insignificant improvement using the above methods and trying to tighten tie point accuracy values (tighten to the previously optimized RE step) on the last iteration, so at the moment I'm leaving tie point accuracy at the default. I expect that if I do multiple RE optimizations, I might be able to improve tie point accuracy, but in general I'm able to meet accuracy targets using the above methods with our camera systems.
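To make points 1-4 concrete, the two optimization "recipes" look roughly like this in script form. This is only a sketch with the 1.6/1.7-era keyword names (verify them against your version's API reference; p3/p4 are simply left at their default of not being fitted):

Code:
import Metashape

chunk = Metashape.app.document.chunk

def optimize(final=False):
    chunk.optimizeCameras(
        fit_f=True, fit_cx=True, fit_cy=True,
        fit_k1=True, fit_k2=True, fit_k3=True,
        fit_p1=True, fit_p2=True,
        fit_b1=False, fit_b2=False, fit_k4=False,
        fit_corrections=final,   # "Fit additional corrections" only on the last RE pass
        adaptive_fitting=False)

optimize()            # after RU + PA selection, and after the first RE pass
optimize(final=True)  # final RE pass only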
Title: Re: Your opinion on USGS Agisoft Processing Workflow
Post by: c-r-o-n-o-s on July 14, 2021, 11:50:12 PM
Quote from: andyroo on July 14, 2021, 09:41:47 PM
I am especially interested in the discussion limiting keypoints and tiepoints - since as Paulo noted, that increases processing time substantially. I hope to get around to doing some experiments on that soon, but again I expect the results will be somewhat specific to different camera types.

Unfortunately, Agisoft's automatic limitation of tie points is a black box.
We don't know which "formulas" are used to reduce the points.
I only know that it works very well.
As already mentioned, it is certainly possible to get another millimetre out of it, but the effort for this increases a lot.
It is possible that Metashape uses a different but similar algorithm to limit the tie points.

By the way, the whole workflow is similar to the one described here:
https://www.hcu-hamburg.de/fileadmin/documents/Geomatik/Labor_Photo/publik/cncg2018_mayer_et_al.pdf (https://www.hcu-hamburg.de/fileadmin/documents/Geomatik/Labor_Photo/publik/cncg2018_mayer_et_al.pdf)