Forum

Author Topic: Quality Control: Image, Point Cloud, Calibration, and Python API  (Read 9128 times)

jpvega

  • Newbie
  • *
  • Posts: 10
    • View Profile
Hi everyone,

I recently started working with PhotoScan to evaluate its Structure From Motion (camera alignment) performance as well as its Python API. A lot has been said in this forum about these concepts, and I will do my best to cite all the posts I used, to give credit to the original authors. If you find any missing citations (please forgive me), I will edit the post to include them.

Scope

This post is meant to be a mix of:
  • A go-to post for frequently asked questions about quality SFM processing.
  • A place to describe SFM concepts, share opinions about it, and centralize Agisoft staff's explanations.
This post is not meant to be:
  • A go-to post for MVS, mesh generation, texture generation, color correction, point classification, nor ortho-rectification of images.
  • A one-way dissertation. Please, feel free to participate.

SFM vs MVS

I am only interested in talking about Structure From Motion (SFM) and not Multi-View Stereo (MVS). The difference, although somewhat subjective and dependent on the literature, could be summarized as follows:

SFM: SFM focuses on the joint estimation of camera parameters, including pose and sometimes calibration, as well as 3D world points, based on a set of images and, potentially, initial estimates of camera calibration, point positions, markers, etc. For those coming from other fields, it can be thought of as offline visual SLAM.

MVS: MVS focuses on building a dense, colored, and unified point cloud of the scene. Density is the priority here, then coloring, and finally filtering and merging of disjoint chunks of points. In short, it can be thought of as multi-camera stereo, which is exactly what it is.

Python API vs GUI

There is one simple reason why I want to include the Python API in this post. There are many types of users for this kind of software, each with a very different background. Some only swear by experience, others by theoretical definitions. I am a fervent defender of the need to build a bridge between the two in order to really, fully understand something. And having access to those pieces of computation, grounded in theory, that explain why one's intuition or experience is right or wrong is uplifting.

SFM Pipeline

There may be as many variations of the SFM pipeline as people trying to implement it. To be consistent with the spirit of the post, I will try to describe the key concepts as presented in PhotoScan, which are probably known to all of you.

Feature Detection: For each image, extract singular and hopefully unique visual attributes, called key points. These key points should be scale invariant (distinguishable independently of their distance from the camera), rotation invariant (distinguishable independently of their orientation with respect to the camera), as unique as possible, as sharp and free of noise as possible, and, finally, as abundant as possible.

Feature Matching: This stage consists of one piece of computation per image pair. Whether these pairs are found through brute-force methods, quick-and-dirty matching, or reference information is only relevant to processing time and, to some extent, to output quality. For each image pair, try to match their respective key points in a coherent way. This is better achieved with high-quality key points, and can be improved with additional information such as priors on relative camera poses and initial camera calibrations. These matches are called tie points.
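In PhotoScan 1.2, feature detection and matching are both performed by a single Python API call. A minimal sketch (the parameter values are illustrative, and enum/argument names may differ slightly between versions):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Accuracy controls the image scale used for feature detection;
# the limits cap the number of key/tie points kept per image.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection,
                  keypoint_limit=40000,
                  tiepoint_limit=4000)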

Structure Estimation: This stage is where camera calibration really comes into play. Whether you have initially calibrated cameras or not, each camera will have an intrinsic matrix and intrinsic distortion coefficients associated with it. The real issue is whether you want to (re)calibrate your cameras as part of the estimation. Needless to say, including this (re)calibration not only makes the process more time consuming, it can also decrease quality and, in some cases, cause the process to fail or diverge. This stage consists of global or bundled pieces of computation over image n-tuples, although the basic concept is better explained using a single image pair. For each image pair, and for each tie point, we need to find its position in the world. Assuming that all tie points come from correctly matched key points, i.e. there are no mismatches or outliers, each one of them defines a geometric constraint on both cameras, and hence a constraint on their parameters. These constraints, in conjunction with some error function, are then used to estimate the structure of the scene, including tie point positions in the world. When some of the tie points come from mismatched key points, additional procedures such as RANSAC are used.
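In the PhotoScan 1.2 Python API, this stage corresponds to alignCameras. A hedged sketch (the fixed-calibration step is optional and assumes pre-calibrated sensors):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Optionally fix pre-calibrated sensors so alignment does not
# re-estimate the intrinsics as part of the structure estimation.
for sensor in chunk.sensors:
    sensor.fixed = True

# Estimate camera poses and triangulate tie points.
chunk.alignCameras()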

Structure Optimization: This stage is what its name suggests: an optimization of the scene. The process is similar to structure estimation, but differs in that additional intrinsic distortion coefficients can be used, and tie points can be left out due to poor performance, as measured by quality control metrics. Finally, this stage is expected to build upon the results of previously applied structure estimations and optimizations.
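In the Python API this stage corresponds to optimizeCameras, where each fit_* flag toggles the estimation of one intrinsic/distortion parameter group. A sketch against the 1.2 API (flag names may differ in other versions):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Re-run the bundle adjustment, choosing which intrinsic and
# distortion parameters are (re)estimated.
chunk.optimizeCameras(fit_f=True, fit_cxcy=True,
                      fit_k1k2k3=True, fit_p1p2=True,
                      fit_k4=False, fit_p3=False)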

Quality Control Metrics

The quality control metrics mentioned earlier, some of them used in structure optimization, can and should be used to evaluate the overall performance. After all, if we are not using ground control points or markers to close the loop, they are our only references. There are five such metrics in PhotoScan: image quality; image count and effective overlap; projection accuracy; reconstruction uncertainty; and reprojection error. Some of these metrics are only descriptive; others are actionable. In PhotoScan, they are accessible in the Photos pane, in the Show Info window, or through the Gradual Selection option.

Let's start with a little bit of RTFM.

Quote
Alignment Accuracy: Higher accuracy settings help to obtain more accurate camera position estimates. Lower accuracy settings can be used to get the rough camera positions in a shorter period of time. While at High accuracy setting the software works with the photos of the original size, Medium setting causes image downscaling by factor of 4 (2 times by each side), at Low accuracy source files are downscaled by factor of 16, and Lowest value means further downscaling by 4 times more. Highest accuracy setting upscales the image by factor of 4. Since tie point positions are estimated on the basis of feature spots found on the source images, it may be meaningful to upscale a source photo to accurately localize a tie point. However, Highest accuracy setting is recommended only for very sharp image data and mostly for research purposes due to the corresponding processing being quite time consuming.

Image quality: Poor input, e. g. vague photos, can influence alignment results badly. To help you to exclude poorly focused images from processing PhotoScan suggests automatic image quality estimation feature. Images with quality value of less than 0.5 units are recommended to be disabled and thus excluded from photogrammetric processing, providing that the rest of the photos cover the whole scene to be reconstructed. PhotoScan estimates image quality for each input image. The value of the parameter is calculated based on the sharpness level of the most focused part of the picture.

Image count: PhotoScan reconstruct all the points that are visible at least on two photos. However, points that are visible only on two photos are likely to be located with poor accuracy. Image count filtering enables to remove such unreliable points from the cloud.

Expected overlap: In case of aerial photography the overlap requirement can be put in the following figures: 60% of side overlap + 80% of forward overlap.

Projection Accuracy: This criterion allows to filter out points which projections were relatively poorer localised due to their bigger size.

Reconstruction uncertainty: High reconstruction uncertainty is typical for points, reconstructed from nearby photos with small baseline. Such points can noticeably deviate from the object surface, introducing noise in the point cloud. While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearance of the point cloud.

Reprojection error: High reprojection error usually indicates poor localization accuracy of the corresponding point projections at the point matching step. It is also typical for false matches. Removing such points can improve accuracy of the subsequent optimization step.

Now, some explanation.

Image Quality: The first stage in the SFM pipeline consists of extracting quality features, or key points, from images. It is therefore expected that images should be sharp, textured, and free of unwanted noise. Common image problems to avoid are: blur, lack of focus, noise, chromatic aberration, specularities, exposure variation, saturation, vignetting, and residual distortion. In other words, images should not be blurry, and all non-blurriness should come from recognizable and repeatable texture. PhotoScan matches images at different scales, forming image pyramids, to improve robustness with blurred or difficult-to-match images. The accuracy parameter of the matchPhotos method sets the minimum scale at which images are processed; using blurred images produces a similar effect to lowering the accuracy value.

Image Count and Effective Overlap: For a world point, the number of key points is equal to the number of cameras from which it is observed. This number is, by definition, the image count. As the image count of a world point increases, its uncertainty tends to decrease. That's why a 90%-overlap set of blurry images may yield a low number of low-quality points, whereas a 60%-overlap set of high-quality images may yield a high number of high-quality points. Averaging the image counts over all world points gives an indicator of the effective overlap of the images. It is called effective because only useful key points are considered, not image areas. See Figure 1.
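As a toy, plain-Python illustration with made-up per-point image counts, effective overlap is simply the mean image count over the points:

```python
# Hypothetical image counts: number of cameras observing each world point.
image_counts = [2, 3, 3, 4, 6, 2]

# Effective overlap: mean image count over all points.
effective_overlap = sum(image_counts) / len(image_counts)  # ~3.33
```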

Projection accuracy: Each detected key point has a coordinate in the image and a size. Given a camera and a key point in the image plane, the error in projecting the key point into the world can be viewed as a cone whose diameter is directly proportional to the key point size. Assuming that its coordinate projects towards the exact world point position, the size is directly related to projection accuracy. For a world point coming from multiple key points, its projection accuracy is calculated as the mean key point size. See Figure 2.
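Again as a toy illustration (made-up sizes), the projection accuracy of a single world point is just the mean size of its key points:

```python
# Hypothetical key point sizes, in pixels, of one world point's projections.
key_point_sizes = [2.0, 3.0, 4.0]

# Projection accuracy of the point: mean key point size.
projection_accuracy = sum(key_point_sizes) / len(key_point_sizes)  # 3.0
```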

Reconstruction uncertainty: When intersecting key point projections with differing directions and accuracies, an uncertainty volume surrounding the world point estimate can be defined using a contour surface of equally uncertain points in the world. By approximating the uncertainty with Gaussian noise, the surface becomes an ellipsoid, represented by a 3D matrix calculated from a PCA approximation of the original uncertainty. This matrix already includes the information coming from all key points. We can then directly define the reconstruction uncertainty as the condition number of this matrix, i.e. the ratio of its largest and smallest eigenvalues. As the condition number grows, so does the reconstruction uncertainty. See Figure 3.
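The condition-number idea can be sketched in plain Python for a symmetric 2x2 covariance (a made-up example; PhotoScan works with a 3D matrix, but the 2x2 closed form shows the mechanics):

```python
import math

# Hypothetical 2x2 covariance [[a, b], [b, c]] of a point's position uncertainty.
a, b, c = 5.0, 2.0, 1.0

# Closed-form eigenvalues of a symmetric 2x2 matrix.
mean = (a + c) / 2.0
delta = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
lam_max, lam_min = mean + delta, mean - delta

# Reconstruction uncertainty as the condition number:
# ratio of the largest to the smallest eigenvalue.
condition_number = lam_max / lam_min
```

An elongated uncertainty ellipsoid (typical of points triangulated from a small baseline) gives a large ratio; a sphere gives 1.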

Reprojection Error: Assume that we have, on one hand, the estimated pose of a camera and the estimated world point corresponding to one of its key points and, on the other hand, the (u, v) coordinate of that key point in the image. We can reproject the imperfectly but optimally estimated world point into the image and compare its (x, y) coordinate to (u, v). The discrepancy, in the image plane, between these values is called the reprojection error. For a world point coming from multiple key points, the reprojection error is obtained by aggregating the reprojection errors of all the key points; we can then compute a Max and an RMS value. Because key points are not actually points but rather areas in the image, there is a normalization issue involved: normalizing by key point size makes the error relative to key point sizes. See Figure 4.
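A toy, plain-Python version of the aggregation (made-up, already size-normalized errors):

```python
import math

# Hypothetical reprojection errors of one world point, one per key point,
# already normalized by key point size.
errors = [0.12, 0.30, 0.05]

# RMS and Max aggregation of the per-key-point errors.
rms_error = math.sqrt(sum(e ** 2 for e in errors) / len(errors))
max_error = max(errors)
```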

Acceptable Values

Should we use a higher number of key points and then filter them out, or look for a lower number of points and hope that optimization improves the result? What should the minimum image quality be? What about the other quality control metrics? This is a matter of much debate, and the most common answer probably is: it depends. Many answers are based on experience, and that's fine by me, but I hope that this post helps enlighten that experience.

Camera Calibration

Camera calibration involves all intrinsic parameters, including distortion parameters. When running multiple projects with the same camera, its calibration parameters are re-estimated over and over, based on potentially very different scenes, thus producing dissimilar results. Repeating the same project may even give different calibration outputs.
To prevent these discrepancies from occurring, it is recommended to pre-calibrate the camera as best as you can and use those constant values as a fixed calibration for all your projects. This provides quality, consistency, and unity across all your projects. You can use Agisoft Lens for this purpose. In addition, using a fixed pattern provides more reliable values.
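A sketch of fixing a pre-computed calibration through the 1.2 Python API (the file name is hypothetical; a calibration exported from Agisoft Lens can be loaded this way):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Load a pre-computed calibration, e.g. exported from Agisoft Lens.
calibration = PhotoScan.Calibration()
calibration.load('calibration.xml')  # hypothetical file path

# Fix it so it is not re-estimated during alignment or optimization.
for sensor in chunk.sensors:
    sensor.user_calib = calibration
    sensor.fixed = True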

Questions

  • Is there a way to define an absolute quality metric for images (i.e. not dependent on the whole image set, perhaps by adding a reference image)?
  • Could anyone give me pointers on how to implement reconstruction uncertainty using PhotoScan Python API?
  • Why is effective overlap computed using points marked as not valid, while mean key point size is not?
  • What are tracks and how do they relate to points?
  • Once a good scene structure has been found, there is plenty of new information for key point detection and matching. Nevertheless, all this information is discarded when repeating the process. Do you plan to change this behaviour in future releases?
  • Is camera referencing used in the matching stage, or is it only used as an initial estimate for camera alignment?
  • Why does Agisoft Lens provide fewer distortion parameters than Agisoft PhotoScan, when its process is more reliable?

References

Optimization workflow: http://www.agisoft.com/forum/index.php?topic=738.msg3821#msg3821
Image quality: http://www.agisoft.com/forum/index.php?topic=5325.msg26216#msg26216, http://www.agisoft.com/forum/index.php?topic=2179.msg11596#msg11596, http://www.agisoft.com/forum/index.php?topic=1924 (!)
Precision accuracy: http://www.agisoft.com/forum/index.php?topic=4149.msg22550#msg22550
Reconstruction uncertainty: http://www.agisoft.com/forum/index.php?topic=738.msg3575#msg3575, http://www.agisoft.com/forum/index.php?topic=2653.msg14014#msg14014
Acceptable values: http://www.agisoft.com/forum/index.php?topic=4279.msg21997#msg21997, http://www.agisoft.com/forum/index.php?topic=4513.msg22877#msg22877, http://www.agisoft.com/forum/index.php?topic=3559 (!)
Camera Calibration: http://www.agisoft.com/forum/index.php?topic=3747 (!)
Reference manual: http://www.agisoft.com/pdf/photoscan-pro_1_2_en.pdf
Python API: http://www.agisoft.com/pdf/photoscan_python_api_1_2_5.pdf
« Last Edit: June 30, 2016, 08:52:41 PM by jpvega »

jpvega

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #1 on: June 28, 2016, 10:05:41 PM »
Additional material:
  • Figure 1: Image quality = 0
  • Figure 2: Image quality = 0.55
  • Figure 3: Image quality = 0.95

jpvega

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #2 on: June 28, 2016, 10:13:25 PM »
Python API

There is no code for reconstruction uncertainty yet.

EDITED

Code: [Select]
import PhotoScan
import math


def get_quality(chunk, force=False):

    cameras = chunk.cameras

    if force is False:
        cameras = [camera for camera in cameras
                            if 'Image/Quality' not in camera.photo.meta]

    if len(cameras) > 0:
        chunk.estimateImageQuality(cameras)

    images_quality = [float(camera.photo.meta['Image/Quality'])
                            for camera in chunk.cameras]

    return images_quality


def show_cameras_info(chunk):

    images_quality = get_quality(chunk)
    cameras = chunk.cameras

    width = 30
    print('-------------------------------------------------')
    print('Label'.ljust(width), 'Quality'.ljust(width))
    for camera in cameras:
        print('+' if camera.enabled else '-',
              camera.label.ljust(width - 2),
              camera.photo.meta['Image/Quality'].ljust(width))
    print('-------------------------------------------------')


def select_cameras(chunk, min_image_quality=None, verbose=False):

    cameras = chunk.cameras

    selected_cameras = []
    if min_image_quality is not None:
        images_quality = get_quality(chunk)
        selected_cameras = [camera for i, camera in enumerate(cameras)
                                    if camera.enabled is True and
                                       images_quality[i] < min_image_quality]
    for camera in selected_cameras:
        camera.selected = True

    nselected = len(selected_cameras)
    ncameras = len(cameras)

    if verbose is True:
        print('-------------------------------------------------')
        print('{:,}'.format(ncameras), 'cameras,',
              '{:,}'.format(nselected), 'selected')
        print('-------------------------------------------------')

    return selected_cameras


def disable_cameras(chunk, verbose=False):
   
    cameras = chunk.cameras
    ndisabled = sum([True for camera in cameras if camera.selected is True and
                                                camera.enabled is True])
    ncameras = len([True for camera in cameras if camera.enabled is True]) \
                 - ndisabled

    for camera in cameras:
        if camera.selected is True:
            camera.selected = False
            camera.enabled = False

    if verbose is True:
        print('-------------------------------------------------')
        print('{:,}'.format(ndisabled), 'disabled,',
              '{:,}'.format(ncameras), 'cameras')
        print('-------------------------------------------------')

def filter_cameras(chunk, min_image_quality=None):
    select_cameras(chunk, min_image_quality=min_image_quality)
    disable_cameras(chunk)


def get_reprojection_error(chunk):

    cameras = chunk.cameras
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    projections_per_camera = point_cloud.projections
    tracks = point_cloud.tracks
    point_squared_errors = [[] for i in range(len(points))]
    point_key_point_size = [[] for i in range(len(points))]
    track_cameras = [[] for i in range(len(tracks))]
    track_projections = [[] for i in range(len(tracks))]

    for camera_id, camera in enumerate(cameras):
        if camera not in projections_per_camera:
            continue

        projections = projections_per_camera[camera]
        for projection_id, projection in enumerate(projections):
            track_id = projection.track_id
            track_cameras[track_id].append(camera_id)
            track_projections[track_id].append(projection_id)

    for i, point in enumerate(points):
        if point.valid is False:
            continue

        track_id = point.track_id

        for idx in range(len(track_cameras[track_id])):
            camera_id = track_cameras[track_id][idx]
            projection_id = track_projections[track_id][idx]
            camera = cameras[camera_id]
            projections = projections_per_camera[camera]
            projection = projections[projection_id]
            key_point_size = projection.size
            error = camera.error(point.coord, projection.coord) / key_point_size
            point_squared_errors[i].append(error.norm() ** 2)
            point_key_point_size[i].append(key_point_size)

    total_squared_error = sum([sum(el) for el in point_squared_errors])
    total_errors = sum([len(el) for el in point_squared_errors])
    max_squared_error = max(max(el + [0]) for el in point_squared_errors)
    rms_reprojection_error = math.sqrt(total_squared_error/total_errors)
    max_reprojection_error = math.sqrt(max_squared_error)
    max_reprojection_errors = [math.sqrt(max(el+[0]))
                                for el in point_squared_errors]

    return rms_reprojection_error, \
           max_reprojection_error, \
           max_reprojection_errors


def get_overlap(chunk, only_points=False):

    cameras = chunk.cameras
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    projections_per_camera = point_cloud.projections
    tracks = point_cloud.tracks
    track_idx = []

    if only_points is False:
        overlap = [0 for i in range(len(tracks))]

        for camera in cameras:
            if camera not in projections_per_camera:
                continue

            projections = projections_per_camera[camera]

            for projection in projections:
                track_id = projection.track_id
                overlap[track_id] += 1   

    else:
        overlap = [0 for i in range(len(points))]
        point_tracks = [None for i in range(len(tracks))]

        for i, point in enumerate(points):
            track_id = point.track_id
            point_tracks[track_id] = i

        for camera in cameras:
            if camera not in projections_per_camera:
                continue

            projections = projections_per_camera[camera]

            for projection in projections:
                track_id = projection.track_id
                point_id = point_tracks[track_id]

                if point_id is not None and points[point_id].valid is True:
                    overlap[point_id] += 1   

    effective_overlap = sum(overlap) / len(overlap)

    return effective_overlap, overlap


def get_key_point_size(chunk):

    cameras = chunk.cameras
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    projections_per_camera = point_cloud.projections
    tracks = point_cloud.tracks
    key_point_size = [[] for i in range(len(points))]
    point_tracks = [None for i in range(len(tracks))]

    for i, point in enumerate(points):
        track_id = point.track_id
        point_tracks[track_id] = i

    for camera in cameras:
        if camera not in projections_per_camera:
            continue

        projections = projections_per_camera[camera]

        for projection in projections:
            track_id = projection.track_id
            point_id = point_tracks[track_id]

            if point_id is not None and points[point_id].valid is True:
                key_point_size[point_id].append(projection.size)

    total_key_point_size = sum([sum(el) for el in key_point_size])
    total_key_points = sum([len(el) for el in key_point_size])
    mean_key_point_size = total_key_point_size / total_key_points
    # Guard against points with no projections (e.g. invalid points).
    mean_key_point_sizes = [sum(el)/len(el) if el else 0
                                for el in key_point_size]

    return mean_key_point_size, key_point_size, mean_key_point_sizes


def show_points_info(chunk, only_points=False, verbose=False):
   
    rre, mre, mres = get_reprojection_error(chunk)
    mkps, kps , mkpss = get_key_point_size(chunk)
    eo, o = get_overlap(chunk, only_points=only_points)

    point_cloud = chunk.point_cloud
    points = point_cloud.points
    tracks = point_cloud.tracks

    npoints = len([True for point in points if point.valid is True])
    ntracks = len(tracks)

    width = 30
    print('-------------------------------------------------')
    print('Points'.ljust(width), '{:,}'.format(npoints),
          'of', '{:,}'.format(ntracks))
    print('RMS reprojection error'.ljust(width), '{:6g}'.format(rre))
    print('Max reprojection error'.ljust(width), '{:6g}'.format(mre))
    print('Mean key point size'.ljust(width), '{:6g}'.format(mkps), 'pix')
    print('Effective overlap'.ljust(width), '{:6g}'.format(eo))
    print('-------------------------------------------------')


def select_points(chunk, max_reprojection_error=None, min_image_count=None,
    max_projection_accuracy=None, verbose=False):
   
    point_cloud = chunk.point_cloud
    points = point_cloud.points
   
    selected_points = []
    if max_reprojection_error is not None:
        rre, mre, mres = get_reprojection_error(chunk)
        selected_points = [point for i, point in enumerate(points)
                                    if point.valid is True and
                                       mres[i] > max_reprojection_error]
    for point in selected_points:
        point.selected = True

    selected_points = []
    if max_projection_accuracy is not None:
        mkps, kps , mkpss = get_key_point_size(chunk)
        selected_points = [point for i, point in enumerate(points)
                                    if point.valid is True and
                                       mkpss[i] > max_projection_accuracy]
    for point in selected_points:
        point.selected = True

    selected_points = []
    if min_image_count is not None:
        eo, o = get_overlap(chunk, only_points=True)
        selected_points = [point for i, point in enumerate(points)
                                    if point.valid is True and
                                       o[i] <  min_image_count]
    for point in selected_points:
        point.selected = True

    nselected = len([True for point in points if point.valid is True and
                                                 point.selected is True])
    npoints = len([True for point in points if point.valid is True])

    if verbose is True:
        print('-------------------------------------------------')
        print('{:,}'.format(npoints), 'points,',
              '{:,}'.format(nselected), 'selected')
        print('-------------------------------------------------')

    return selected_points


def remove_points(chunk, verbose=False):
 
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    nremoved = sum([True for point in points if point.selected is True and
                                                point.valid is True])
    npoints = len([True for point in points if point.valid is True]) - nremoved
    point_cloud.removeSelectedPoints()

    for point in points:
        point.selected = False

    if verbose is True:
        print('-------------------------------------------------')
        print('{:,}'.format(nremoved), 'removed,',
              '{:,}'.format(npoints), 'points')
        print('-------------------------------------------------')


def filter_points(chunk, max_reprojection_error=None, min_image_count=None,
    max_projection_accuracy=None):
   
    select_points(chunk, max_reprojection_error=max_reprojection_error,
                         min_image_count=min_image_count,
                         max_projection_accuracy=max_projection_accuracy)
    remove_points(chunk)


if __name__ == '__main__':
    #pass
    PhotoScan.app.console.clear()
    chunk = PhotoScan.app.document.chunk

    # IMAGE QUALITY TOOLS
    show_cameras_info(chunk)
    #select_cameras(chunk, min_image_quality=0.7, verbose=True)
    #disable_cameras(chunk, verbose=True)

    # POINT CLOUD QUALITY TOOLS
    show_points_info(chunk, only_points=False)
    #selected_points = select_points(chunk, max_reprojection_error=0.1,
    #                                       min_image_count=5,
    #                                       max_projection_accuracy=25,
    #                                       verbose=True)
    #remove_points(chunk, verbose=True)
« Last Edit: June 30, 2016, 03:28:07 PM by jpvega »

Uygar

  • Newbie
  • *
  • Posts: 16
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #3 on: June 30, 2016, 02:14:02 PM »
I have this error ...

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 9871
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #4 on: June 30, 2016, 02:18:40 PM »
Hello Uygar,

To run the script you need to save it in plain text format to the file with .py extension. Then start the script using Run Script command in the Tools Menu.
Best regards,
Alexey Pasumansky,
AgiSoft LLC

Uygar

  • Newbie
  • *
  • Posts: 16
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #5 on: June 30, 2016, 02:38:49 PM »
How do I save in plain text format? With which software?

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 9871
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #6 on: June 30, 2016, 02:41:32 PM »
You can copy the contents of code section from the forum post to Notepad, for example, then save it to the file with .py extension.
Best regards,
Alexey Pasumansky,
AgiSoft LLC

Uygar

  • Newbie
  • *
  • Posts: 16
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #7 on: June 30, 2016, 02:47:12 PM »
I can't save in .py format, only .txt.

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 9871
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #8 on: June 30, 2016, 03:07:25 PM »
Hello Uygar,

Can you switch to "All files" option and type in filename with .py extension manually? Like script.py.
Best regards,
Alexey Pasumansky,
AgiSoft LLC

Uygar

  • Newbie
  • *
  • Posts: 16
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #9 on: June 30, 2016, 03:13:31 PM »
I tried but it does not work.

jpvega

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #10 on: June 30, 2016, 03:27:02 PM »
Hi Uygar,

I tried but it does not work.

My bad, you need to uncomment the following line at the end of the script:

Code: [Select]
chunk = PhotoScan.app.document.chunk

Also, notice that I have edited the content of the code snippet a few times; you should make sure that you have the latest version.



Uygar

  • Newbie
  • *
  • Posts: 16
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #11 on: June 30, 2016, 05:48:10 PM »
Thanks, it works.

But I may have found a bug :). If there is a disabled camera in the chunk, the program stops responding.

Also, can this script choose bad photos automatically?

jpvega

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #12 on: June 30, 2016, 06:18:47 PM »
Hi Uygar

Let me try to replicate your bug. If I find something, I'll get back to you.

Also, can this script choose bad photos automatically?

As for your request, by uncommenting some of the lines at the end of the script you can filter bad images and bad points.

Code: [Select]
# IMAGE QUALITY TOOLS
show_cameras_info(chunk)
#select_cameras(chunk, min_image_quality=0.7, verbose=True)
#disable_cameras(chunk, verbose=True)

# POINT CLOUD QUALITY TOOLS
show_points_info(chunk, only_points=False)
#selected_points = select_points(chunk, max_reprojection_error=0.1,
#                                       min_image_count=5,
#                                       max_projection_accuracy=25,
#                                       verbose=True)
#remove_points(chunk, verbose=True)

Notice that you can also use the following methods:

Code: [Select]
def filter_cameras(chunk, min_image_quality=None):
    ...

def filter_points(chunk, max_reprojection_error=None, min_image_count=None,
    max_projection_accuracy=None):
    ...

Uygar

  • Newbie
  • *
  • Posts: 16
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #13 on: June 30, 2016, 08:47:51 PM »
Thank you so much. Can I share this script on my Facebook group?


jpvega

  • Newbie
  • *
  • Posts: 10
    • View Profile
Re: Quality Control: Image, Point Cloud, Calibration, and Python API
« Reply #14 on: June 30, 2016, 08:50:06 PM »
Hi Uygar,

As long as you cite this post to honour all the authors of the references I have made, you may.

Quote
http://www.agisoft.com/forum/index.php?topic=5536.0
« Last Edit: June 30, 2016, 10:40:21 PM by jpvega »