Topics - LFSantosgeo

1
Python and Java API / Reprojection Errors (script x chunk info)
« on: November 20, 2019, 01:39:54 PM »
Hello!

I'm running the following script to get the reprojection error for each point of the sparse point cloud:

Code: [Select]
import math
import time
import PhotoScan as PS

def tiepointsRMS(path, i=[0]):
    """
    Generates a TXT file with the point index of the sparse cloud, associated
    coordinates and the calculated reprojection error (pIndex, X, Y, Z, error).
    """
    i[0] += 1  # mutable default argument used as a persistent call counter

    # Assumes the global "chunk" is set, e.g. chunk = PS.app.document.chunk
    point_cloud = chunk.point_cloud
    projections = point_cloud.projections
    points = point_cloud.points
    npoints = len(points)

    M = chunk.transform.matrix

    file = open(path + str(i[0]) + ".txt", "wt")
    print("Tiepoints reprojection error calculations started...")

    t0 = time.time()

    points_errors = {}

    for photo in chunk.cameras:

        if not photo.transform:
            continue

        T = photo.transform.inv()
        calib = photo.sensor.calibration

        pIndex = 0
        for proj in projections[photo]:
            track_id = proj.track_id
            while pIndex < npoints and points[pIndex].track_id < track_id:
                pIndex += 1
            if pIndex < npoints and points[pIndex].track_id == track_id:
                if not points[pIndex].valid:
                    continue

                coord = T * points[pIndex].coord
                coord.size = 3
                dist = calib.error(coord, proj.coord).norm() ** 2
                v = M * points[pIndex].coord
                v.size = 3

                if pIndex in points_errors:
                    points_errors[pIndex].x += dist
                    points_errors[pIndex].y += 1
                else:
                    points_errors[pIndex] = PS.Vector([dist, 1])

    for pIndex in range(npoints):

        if not points[pIndex].valid:
            continue

        if chunk.crs:
            w = M * points[pIndex].coord
            w.size = 3
            X, Y, Z = chunk.crs.project(w)
        else:
            X, Y, Z, w = M * points[pIndex].coord

        if pIndex not in points_errors:
            continue  # valid point without any accumulated projections

        error = math.sqrt(points_errors[pIndex].x /
                          points_errors[pIndex].y)

        file.write("{:6d}\t{:.6f}\t{:.6f}\t"
                   "{:.6f}\t{:.6f}\n".format(pIndex, X, Y, Z, error))

    t1 = time.time()

    file.close()
    print("Script finished in " + str(int(t1-t0)) + " seconds.")

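For reference, this is roughly how I pull the maximum value from the script's output for the comparison below (untested sketch; the file name is just an example, the column layout is the tab-separated one written above):

Code: [Select]
# Read the error column (5th field) from the generated TXT and take the maximum.
# "output1.txt" is only an example name for one of the files written by tiepointsRMS().
with open("output1.txt") as f:
    errors = [float(line.split("\t")[4]) for line in f if line.strip()]
print("Max reprojection error from script: {:.4f} pix".format(max(errors)))
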
While the script succeeds in calculating the reprojection error for each tie point, when I compare the maximum error obtained with the script (14.5876 pix) against the maximum error reported in the chunk info (21.8388 pix), they don't match. Is that supposed to happen? Looking forward to your reply.

Regards!

2
Python and Java API / Reprojection error
« on: March 10, 2019, 01:46:14 PM »
Hello,

I have the following code to calculate the max and RMS reprojection errors:

Code: [Select]

import math
import PhotoScan

def RMS_MAX_reprojection_error(chunk):

    cameras = chunk.cameras
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    projections_per_camera = point_cloud.projections
    tracks = point_cloud.tracks

    point_squared_errors = [[] for i in range(len(points))]
    point_key_point_size = [[] for i in range(len(points))]
    track_cameras = [[] for i in range(len(tracks))]
    track_projections = [[] for i in range(len(tracks))]

    for camera_id, camera in enumerate(cameras):
        if camera not in projections_per_camera:
            continue
        if not camera.transform:
            continue  # skip cameras that were not aligned

        projections = projections_per_camera[camera]

        for projection_id, projection in enumerate(projections):

            track_id = projection.track_id
            track_cameras[track_id].append(camera_id)
            track_projections[track_id].append(projection_id)

    for i, point in enumerate(points):
        if point.valid is False:  # only valid points enter the statistics below
            continue

        track_id = point.track_id

        for idx in range(len(track_cameras[track_id])):
            camera_id = track_cameras[track_id][idx]
            projection_id = track_projections[track_id][idx]
            camera = cameras[camera_id]
            projections = projections_per_camera[camera]
            projection = projections[projection_id]
            key_point_size = projection.size
            error = camera.error(point.coord, projection.coord) / key_point_size
            point_squared_errors[i].append(error.norm() ** 2)

    total_squared_error = sum([sum(el) for el in point_squared_errors])
    # total number of projections across all points
    total_errors = sum([len(el) for el in point_squared_errors])
    max_squared_error = max([max(el + [0])
                             for el in point_squared_errors])

    rms_reprojection_error = math.sqrt(total_squared_error/total_errors)
    max_reprojection_error = math.sqrt(max_squared_error)

    return rms_reprojection_error, max_reprojection_error
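
For completeness, this is how I call it on the active chunk (just a sketch):

Code: [Select]
# Example call: compute both statistics for the currently active chunk.
chunk = PhotoScan.app.document.chunk
rms, max_err = RMS_MAX_reprojection_error(chunk)
print("RMS: {:.4f} pix, max: {:.4f} pix".format(rms, max_err))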

Sometimes it fails to return the values at all, and sometimes it works. I haven't managed to figure out why. Can anyone help out?

3
Python and Java API / Markers XML to Text File
« on: February 14, 2019, 06:08:40 AM »
Hello!

Is there any script to export MetaShape XML markers into a text file with marker labels, photo labels, and the 2D coordinates of the marker projections in the photos? Output something like:

#marker_label  #photo_label  #img_x  #img_y

I've run into lots of examples of the opposite procedure...
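
Something along these lines is roughly what I imagine the export could look like (untested sketch; assumes the PhotoScan 1.x Python API, and the output file name is just an example):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Write one line per marker projection: marker label, photo label, image x, image y.
with open("markers_projections.txt", "w") as f:
    f.write("#marker_label\t#photo_label\t#img_x\t#img_y\n")
    for marker in chunk.markers:
        for camera in marker.projections.keys():
            coord = marker.projections[camera].coord  # 2D pixel coordinates
            f.write("{}\t{}\t{:.2f}\t{:.2f}\n".format(
                marker.label, camera.label, coord.x, coord.y))
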
Thanks in advance!

4
Python and Java API / Filtering Sparse Point Cloud
« on: May 16, 2018, 07:32:34 PM »
Hello! Is there a consensus on using Gradual Selection versus buildPoints() to filter the sparse point cloud by reprojection error?

I've checked the following topics, from which I extracted some quotes below:
http://www.agisoft.com/forum/index.php?topic=6287.0
http://www.agisoft.com/forum/index.php?topic=8140.0

Quote
GUI: Gradual Selection=X is not the same as buildPoints(error=X)
So they are not the same, and they are not equivalent for the filtering task. What's the difference?

For buildPoints() -- Build Points.
Code: [Select]
buildPoints(error=10[, min_image][, progress])
Rebuild point cloud for the chunk.
Parameters
• error (float) – Reprojection error threshold.
• min_image (int) – Minimum number of point projections.
• progress (Callable[[float], None]) – Progress callback.

Quote
The perk of using this [buildPoints], apart from that it's shorter, is that statistical 'outliers' that were removed in previous stages could be reinstated if their errors are reduced to within the threshold.

As for PointCloud.Filter() -- Gradual Selection:
Code: [Select]
threshold = 0.5
f = PhotoScan.PointCloud.Filter()
f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)
f.selectPoints(threshold)
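
In the script route I then complete the gradual-selection equivalent by removing the selected points and re-optimizing (untested sketch; I take this to be the analogue of Gradual Selection + Delete + Optimize Cameras in the GUI):

Code: [Select]
chunk.point_cloud.removeSelectedPoints()  # delete the tie points selected above
chunk.optimizeCameras()                   # re-run the optimization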

Quote
I understand that all matches, even filtered ones, are kept within the database, which means that chunk.buildPoints(error=X) {'coordinates applied...'} will 'resuscitate' some of them and is by no means equivalent to Gradual Selection: Reprojection Error

5
Python and Java API / Height Above Ground
« on: May 10, 2018, 11:26:59 PM »
Hello!

Is there a way to access the estimated (calculated) height above ground through Python (if it is already available after alignment)?
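
To illustrate what I mean, here is a rough, untested sketch of how I imagine approximating it from the sparse cloud after alignment (assumes a georeferenced chunk, i.e. chunk.crs is set):

Code: [Select]
import PhotoScan as PS

chunk = PS.app.document.chunk
M = chunk.transform.matrix

# Approximate the ground level as the mean altitude of the valid tie points.
altitudes = []
for p in chunk.point_cloud.points:
    if not p.valid:
        continue
    v = M * p.coord
    v.size = 3
    altitudes.append(chunk.crs.project(v).z)
ground_alt = sum(altitudes) / len(altitudes)

# Height above ground per aligned camera: camera altitude minus ground level.
for camera in chunk.cameras:
    if not camera.transform:
        continue  # camera was not aligned
    cam_alt = chunk.crs.project(M.mulp(camera.center)).z
    print("{}: {:.1f} above ground".format(camera.label, cam_alt - ground_alt))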

Thanks!
Luiz Fernando



6
Python and Java API / MetaData
« on: March 20, 2018, 12:48:27 AM »
I'm trying to retrieve the alignment duration from the sparse point cloud (point_cloud) metadata, but chunk.point_cloud.meta only contains the following keys:

{'match/duration', 'match/match_downscale', 'match/match_point_limit', 'match/match_preselection_generic', 'match/match_preselection_reference', 'match/match_tiepoint_limit'}
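
To check whether the duration is stored somewhere else, I also dump the other metadata dictionaries I can reach, roughly like this (sketch; key names vary between versions):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Print the chunk-level and sparse-cloud metadata to look for a key
# resembling an alignment duration.
print("chunk meta:      ", chunk.meta)
print("point_cloud meta:", chunk.point_cloud.meta)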

Does anyone know where I can find the alignment duration?

Thanks in advance!

7
General / PhotoScan Handling Different Flights
« on: December 15, 2017, 03:29:00 AM »
Hello!

I would like to know how PhotoScan handles two flights with different altitudes above ground. The two flights have good overlap between them, so I'm able to process them together. I also have GCPs for the imaged area, but they cannot be used when processing the flights separately (they give good coverage of the whole area, but not of each individual flight's coverage). So what does PhotoScan do:

1. In terms of GSD, which depends on the flight altitude, what happens when processing everything together?
2. When generating reports for the whole project (including both flights), does it output a mean value for all?

How can I assess whether the overlap or the different GSDs contribute to the error/accuracy of the point cloud?
Any comments, suggestions, ideas, or experiences?

Thanks!
L.Fernando

8
Python and Java API / Creating and Managing Multiple Chunks
« on: December 11, 2017, 03:46:28 PM »
Hello!

I'm having difficulty adding a large number of chunks in a loop.
 
Code: [Select]
import PhotoScan as PS

from_range = 1000
to_range = 351000
interval = 1000

list_labels = list(range(from_range, to_range, interval))
length = len(list_labels)
print("{} chunks will be created for this process".format(length))

doc = PS.app.document
for label in list_labels:
    chunk = doc.addChunk()    # create a new chunk
    chunk.label = str(label)  # name it after the corresponding label
 
UPDATED: managed to create multiple chunks...

9
General / Workflow and Errors
« on: November 28, 2017, 07:41:18 PM »
Hello!

I just made some tests with Photoscan when generating Sparse Cloud:

Workflow 1:
1. Add chunk
2. Add photos (155 photos)
3. Convert from WGS84 to my desired CRS (SIRGAS2000 UTM)
4. Align with Highest accuracy and the standard key/tie point limits (using 80,000/8,000) = gives 153 aligned cameras.
5. Import markers (Agisoft XML markers) [previously placed on the same photos, getting the lowest error possible]
6. Uncheck cameras and check the proper markers
7. Update & Optimize
8. Read the reference pane

This gives me a control point error of 0.006623 and a check point error of 0.031124.

Workflow 2:
1. Add chunk
2. Add photos (the same 155 photos)
3. Convert coordinates reference
4. Add markers (import)
5. Uncheck all photos in the reference pane
6. Align with the same parameters as before (Highest/80,000/8,000) = gives 151 aligned cameras.

7. Update & Optimize
8. Read the results in the reference pane

Workflow 2 gives a control point error of 0.006235 and a check point error of 0.030192.

So my questions are (despite the quite small differences):
A) Alignment: does it give a different alignment every time I run Align Photos?
B) Referencing: does the order of the referencing steps matter, or am I doing it wrong?
C) Shouldn't both workflows give the same number of aligned cameras and the same errors, even though the steps are ordered differently?

The non-aligned cameras (in both workflows) are those acquired over tall vegetation, where I was already expecting difficulty in the alignment.

Thanks in advance!

PS: My workflow was updated in the post below!
