
Mission Planning & Redundant Image Detection in Agisoft Metashape Professional

Metashape 1.5.1 introduces functionality for obtaining optimal sets of camera positions based on a rough model and creating mission plans from these optimal sets. The same functionality can also be used to analyze excessive image sets and determine which images are useful and which are redundant. This tutorial covers both scenarios. Please note that these features are at an experimental stage and any feedback is highly appreciated.

Mission planning

Overview

The mission planning feature works as follows. First, photos captured during a simple overhead flight are used to create a rough model. Then a set of viewpoints sufficient to cover the surface of the object with adequate overlap is generated. Finally, a round trip passing through all the generated viewpoints is computed and saved in KML format. The complete workflow is described in the next section.

Workflow

  1. Make an overhead flight over the region of interest to capture a basic imagery set. To capture vertical and concave surfaces better, you may use a 3D photogrammetry survey preset in your drone app instead of taking only nadir photos.
  2. Import the photos into Metashape and align them, then build a mesh (a rough model of the object) from the sparse cloud. A scripted sketch of steps 2 and 3 is given after this list.
  3. Specify a "home point" for the drone close to the expected take-off point. This can be done by placing a Point shape on the rough model (use the toolbar instrument to draw shapes in the Model View) and assigning the label "home" (case-insensitive) to it (the Properties dialog is available from the Point shape context menu).
  4. Open the Tools -> Plan Motion dialog. Enable the following steps: Generate candidates, Optimize coverage, and Create flight plan. Specify the KML export path and other parameters according to the description given below (see the Plan Motion dialog parameters overview) and run processing.
  5. Import the files into a drone app that supports KML flight plans with gimbal orientation, e.g. Litchi.
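
For users who prefer scripting, steps 2 and 3 can be reproduced with the Metashape Python API. The snippet below is a minimal sketch, not a verified recipe: the photo paths and home point coordinates are hypothetical, and some keyword arguments and shape attributes (accuracy, source, Shape.Type) have changed between Metashape versions, so check them against the Python API reference for your build.

    import Metashape

    doc = Metashape.app.document
    chunk = doc.addChunk()

    # Step 2: import and align photos, then build a rough mesh from the sparse cloud
    chunk.addPhotos(["flight/IMG_0001.JPG", "flight/IMG_0002.JPG"])  # hypothetical paths
    chunk.matchPhotos(accuracy=Metashape.HighAccuracy)  # newer versions use downscale instead of accuracy
    chunk.alignCameras()
    chunk.buildModel(surface=Metashape.Arbitrary,
                     source=Metashape.PointCloudData,   # build from the sparse cloud
                     face_count=Metashape.MediumFaceCount)

    # Step 3: place a Point shape labelled "home" near the expected take-off point
    chunk.shapes = Metashape.Shapes()
    chunk.shapes.crs = chunk.crs
    home = chunk.shapes.addShape()
    home.label = "home"          # the label Plan Motion looks for (case-insensitive)
    home.type = Metashape.Shape.Type.Point
    home.has_z = True
    home.vertices = [Metashape.Vector([24.1234, 56.5678, 10.0])]  # hypothetical coordinates in the shapes CRS

    doc.save("project.psx")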

Notes

Please note that each flight must be supervised by a pilot ready to take manual control of the drone in case of GPS issues or unreconstructed obstacles such as wires or trees.

Coverage optimization

Overview

The Optimize coverage feature is designed for analyzing excessive image sets to determine which images are useful and which are redundant.

Workflow

  1. Align photos using the entire dataset and build a rough mesh from the sparse cloud.
  2. Open the Tools -> Plan Motion dialog. Enable the Optimize coverage step and disable the Generate candidates and Create flight plan steps. Also specify the preferred capture distance (in meters) if your model is scaled.
  3. Run processing. Afterwards, all cameras not included in the optimal subset will be disabled; the sketch below shows how to inspect the result.
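
The following sketch (assuming the Metashape Python API) lists how many cameras stayed enabled after the Optimize coverage step and saves the labels of the disabled (redundant) ones. The output path is hypothetical.

    import Metashape

    chunk = Metashape.app.document.chunk

    kept = [camera.label for camera in chunk.cameras if camera.enabled]
    dropped = [camera.label for camera in chunk.cameras if not camera.enabled]

    print("Cameras kept in the optimal subset: {}".format(len(kept)))
    print("Redundant (disabled) cameras: {}".format(len(dropped)))

    # Save the redundant camera labels for later reference (hypothetical output path)
    with open("redundant_cameras.txt", "w") as f:
        f.write("\n".join(dropped))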

Original dataset with an excessive number of images (2480 images, 6 h 50 min processing time):

Automatically selected cameras sufficient for a quality model (833 images, 1 h 59 min processing time):

Notes

Successive runs of the Optimize coverage step will remove some cameras each time. This is expected behaviour caused by the current implementation of the stop criterion (see the Total quality threshold property in the description below). To start the next run from the complete image set again, re-enable all cameras beforehand, either from the Photos pane or with the short script shown below.
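
A minimal sketch using the Metashape Python API for re-enabling all cameras in the active chunk:

    import Metashape

    chunk = Metashape.app.document.chunk

    # Re-enable every camera so that the next Optimize coverage run
    # considers the complete image set rather than the previous subset
    for camera in chunk.cameras:
        camera.enabled = True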

Plan Motion dialog parameters overview

Steps:
  • Generate candidates: generates an excessive set of viewpoints using the default model in the current chunk. Note that this step replaces the cameras in the chunk with synthetic cameras and removes the point cloud.
  • Optimize coverage: selects an optimal subset of viewpoints using the enabled and aligned cameras of the current chunk.
  • Create flight plan: finds an optimal traversal order of the existing viewpoints and exports the mission plan in KML format. Note that this step replaces the cameras in the chunk with synthetic cameras and removes the point cloud.
General:
  • Focus on selection: consider only the selected triangles of the model as the target for reconstruction. All available triangles are still used for obstacle avoidance.
Texture resolution:
  • Capture distance: preferred distance (in meters) from which photos should be taken, measured from the surface of the rough model. If some parts of the surface cannot be observed by cameras at the specified distance, more distant cameras will be used for those parts. For an unscaled model this option is disabled and the closest cameras are given higher priority.
Coverage optimization:
  • Coverage saturation threshold: value that defines how much coverage per point is considered sufficient for a complete model to be reconstructed. It can be understood as the weighted fraction of the hemisphere covered by cameras at each point of the surface. Typical values lie in the range between 0.3 and 0.5.
  • Total quality threshold: value that specifies what percentage of the total achievable coverage is enough to stop adding cameras to the optimal set.
  • Max cameras: maximum number of cameras to be selected for the optimal set.
Obstacle avoidance:
  • Restricted distance: distance from the object within which placing waypoints or planning a path is not allowed. The Focus on selection parameter is ignored here: all parts of the object available in the rough model are considered.
  • Min altitude: altitude relative to the home point (in meters) below which placing waypoints or planning a path is not allowed.
Flight plan properties:
  • KML file path: path to which the flight plan is exported as a KML file.
  • Min pairwise distance: minimum distance between consecutive waypoints. The default value is 0.5 meters, in accordance with DJI drone firmware.
  • Roundtrip chunk size: maximum number of waypoints per KML file. If the flight plan has more waypoints than the specified value, it is split into multiple files. The default value is 99, in accordance with DJI drone firmware.
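
As an illustration of the splitting rule only (plain Python, not part of the Metashape API), a plan with more waypoints than the chunk size is exported as ceil(waypoints / chunk size) KML files:

    import math

    waypoints = 250   # hypothetical number of waypoints in the flight plan
    chunk_size = 99   # default Roundtrip chunk size for DJI firmware

    print("KML files produced: {}".format(math.ceil(waypoints / chunk_size)))  # -> 3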