

Messages - colin.spexigeo

1
General / Re: How is the total marker error (pix) being calculated
« on: February 07, 2020, 04:38:52 AM »
Thank you Paulo!

P.S. Off-hand, you wouldn't happen to know an easy way to calculate the "Error (m)" value, would you? I imagine you'd have to project the marker points/error offsets through the cameras (maybe projected onto a plane that's facing the sensor) and run an RMS on the cartesian-space error values. I'm trying to calculate this information and surface it to the user outside of the program.
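
For anyone following along, here is a rough sketch of what I have in mind (untested; using `CoordinateSystem.localframe` to get a metric frame at the reference point is my assumption about how Metashape computes it):

```python
import math

def rms(values):
    """Root-mean-square of a list of scalar errors."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def marker_errors_m(chunk):
    """Per-marker 3D error in metres (sketch, run inside Metashape).

    Differencing geographic (lat/lon) coordinates directly would not give
    metres, so the offset is expressed in a local Euclidean frame at the
    reference point.
    """
    import Metashape  # only available inside Metashape's Python environment
    T = chunk.transform.matrix
    errors = []
    for marker in chunk.markers:
        if marker.position is None or marker.reference.location is None:
            continue
        est = T.mulp(marker.position)                         # estimated, geocentric
        ref = chunk.crs.unproject(marker.reference.location)  # reference, geocentric
        local = chunk.crs.localframe(ref)                     # metric frame at ref
        errors.append(local.mulv(est - ref).norm())
    return errors
```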

2
General / How is the total marker error (pix) being calculated
« on: February 07, 2020, 01:11:31 AM »
Hi there,

I'm interested in how the "Total" value of marker errors is being calculated. I had assumed that it was an average (mean) of all camera projection errors, but this does not seem to be the case.

For example, Agisoft reports the following:

Errors

Marker        Error (pix)
-dYUxAmaeU    0.143
-YDOHdBJSz    0.458
-YOo6Otjv5    0.629
CYzKGPFuwI    0.437
f6n4TJCcSD    0.054
FqIO_ToeyF    0.397
Fy4qsbY3ft    0.661
ladYl1TVsF    0.497
nziQ02gyo5    0.322
pfZjrVEIQA    0.518
rFtTrwWuiN    1.332
tQGuj6cN-y    0.585
VcznBnOJaW    0.358
XDxBg3CLSh    0.135
Total         0.554 pix

However, the *average* (mean) of all these errors is actually 0.466.

Does anybody know how the "Total" value is being calculated?
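
For what it's worth, the numbers above can be checked quickly, and they are consistent with a root-mean-square rather than a plain mean (my own observation, not confirmed by Agisoft):

```python
import math

errors = [0.143, 0.458, 0.629, 0.437, 0.054, 0.397, 0.661,
          0.497, 0.322, 0.518, 1.332, 0.585, 0.358, 0.135]

mean = sum(errors) / len(errors)
rms = math.sqrt(sum(e * e for e in errors) / len(errors))

print(round(mean, 3))  # 0.466 -- the plain average
print(round(rms, 3))   # 0.554 -- matches the reported "Total"
```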

Thanks in advance,
Colin

3
Python and Java API / Applying geoid to DEM appears to have no effect
« on: June 28, 2019, 09:30:48 PM »
Hello again,

As one of the final steps of our processing pipeline, I need to apply a geoid to adjust the DEM elevations (current DEM values are based on WGS84 ellipsoidal elevation).

The geoid I'm using is CGG2013.

I'm attempting to programmatically accomplish the addition of a new CRS with geoid as laid out in this tutorial. Here's how I'm going about it:

Code: [Select]
geoid_path = os.path.join(geoids_directory, 'EPSG 4269', 'CGG2013.tif')
crs = Metashape.CoordinateSystem('EPSG::4269')  # NAD83
crs.addGeoid(geoid_path)
self.chunk.buildDem(projection=crs)

The DEM ends up in NAD83; however, the geoid appears to have had no effect on the resulting elevations. Am I missing something here?
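
To sanity-check whether the geoid is actually being applied, I was planning to sample the DEM at the same point before and after `buildDem` and compare; with CGG2013 the orthometric heights should differ from the ellipsoidal ones by roughly the local geoid undulation. A sketch (the `Elevation.altitude` call is my assumption about the API):

```python
def ellipsoidal_to_orthometric(h_ell, undulation):
    """H = h - N: orthometric height from ellipsoidal height and geoid undulation."""
    return h_ell - undulation

def dem_altitude_at(chunk, x, y):
    """Sample the chunk's DEM at a planar (x, y) point in the DEM CRS
    (sketch, run inside Metashape)."""
    import Metashape  # only available inside Metashape's Python environment
    return chunk.elevation.altitude(Metashape.Vector([x, y]))
```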

Thank you for your time,
Colin

4
Figured this out by myself. The trick is to run:

Code: [Select]
chunk.refineMarkers([marker])
after pinning at least one projection.


5
Hi there,

We are trying to add some interactivity to our GCP automation process, allowing the user to adjust the marker projections in photos in an outside program. We then want to load those adjustments into Metashape and have it re-project the unpinned marker projections.

I'm able to set `pinned` and `coord` via Python just fine, but it never re-projects the marker's other (unpinned) projections based on the changes I make via script. If I do the equivalent thing in the GUI (clicking the marker and dragging it to a new position), all the other photo projections update. Is there a way to trigger this via script?

This is roughly what I'm doing now:

Code: [Select]
from Metashape import Vector
offset = Vector((100, 200))
chunk = Metashape.app.document.chunks[0]
marker = chunk.markers[0]
camera, projection = list(marker.projections.items())[1]  # items() isn't subscriptable in Python 3
projection.coord += offset
projection.pinned = True
# I expect other un-pinned projections to be adjusted at this point!
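
Following up on my own reply above: the missing step turned out to be `chunk.refineMarkers`. A sketch of the full flow, with hypothetical index/coordinate inputs coming from our external UI (writing the projection back via `marker.projections[camera] = ...` is my assumption about the API):

```python
def apply_offset(coord, offset):
    """Shift a 2D pixel coordinate by an offset (pure helper)."""
    return (coord[0] + offset[0], coord[1] + offset[1])

def pin_and_refine(chunk, marker_index, camera_index, new_coord):
    """Pin one projection and let Metashape re-project the unpinned ones
    (sketch, run inside Metashape; indices come from our external UI)."""
    import Metashape  # only available inside Metashape's Python environment
    marker = chunk.markers[marker_index]
    camera = chunk.cameras[camera_index]
    # Write a pinned projection back for this camera (assumed API shape)
    marker.projections[camera] = Metashape.Marker.Projection(
        Metashape.Vector(new_coord), True)
    # This triggers re-projection of the remaining unpinned projections
    chunk.refineMarkers([marker])
```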

Thanks,
Colin

6
Python and Java API / Re: Markers are slightly off
« on: April 02, 2019, 11:04:11 PM »
Hi Alexey,

Yes, that's essentially what I'm trying to do. I am also running a script that adjusts the reference altitude of the cameras based on the DJI metadata prior to initial camera alignment (forgot to mention this):

Code: [Select]
        for camera in chunk.cameras:
            if 'DJI/RelativeAltitude' in camera.photo.meta.keys() and camera.reference.location:
                z = float(camera.photo.meta['DJI/RelativeAltitude'])
                camera.reference.location = (camera.reference.location.x, camera.reference.location.y, z)

However, it doesn't seem to be an altitude problem; otherwise the markers would simply be above/below the corresponding camera positions, unless I'm misunderstanding something fundamental here.

EDIT: Just ran another little test to verify I'm not going crazy here.

Code: [Select]
> campos = chunk.cameras[0].transform.translation()
> p = chunk.transform.matrix.mulp(campos)
> p = chunk.crs.project(p)
> p
Vector([-123.01777296919887, 49.14079153544101, 68.74203079824663])
> chunk.cameras[0].reference.location
Vector([-123.018551851397, 49.141292174004, 68.7])

Note that the projected values are significantly different. So this seems to confirm that the camera translations are *not* where the camera reference info says they are. Why would this be? Am I missing a step somewhere?
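
To quantify this, here's a sketch that tabulates the offset between each aligned camera's estimated position and its reference location (`vec_sub` is just a pure helper so the difference works on any 3-component sequence):

```python
def vec_sub(a, b):
    """Component-wise difference of two 3-component sequences."""
    return tuple(x - y for x, y in zip(a, b))

def camera_offsets(chunk):
    """Estimated-minus-reference position per aligned camera, expressed
    in the chunk CRS (sketch, run inside Metashape)."""
    T = chunk.transform.matrix
    offsets = {}
    for camera in chunk.cameras:
        if camera.transform is None or camera.reference.location is None:
            continue
        est = chunk.crs.project(T.mulp(camera.transform.translation()))
        offsets[camera.label] = vec_sub(est, camera.reference.location)
    return offsets
```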

7
Python and Java API / Markers are slightly off
« on: April 02, 2019, 09:16:47 PM »
Hi there,

I'm attempting to automatically add GCP markers via script. The user provides a list of GCP latitude/longitude/elevation values, and the script ingests it, adds markers, then sends the projections back to an API so the user can verify and modify them. I've hit a snag with the script that adds the markers, however: the markers I add from the coordinates end up slightly off (rotated).

To demonstrate this, I have this script that will attempt to add a marker at the exact spot that the cameras are according to their reference data.

Code: [Select]
chunk = Metashape.app.document.chunks[0]

# Take in WGS84 point and project it into the chunk
def add_marker(point):
    p = chunk.crs.unproject(point)
    p = chunk.transform.matrix.inv().mulp(p)
    chunk.addMarker(p)

for camera in chunk.cameras:
    add_marker(camera.reference.location)

If the script is working correctly, then the markers would be overlapping with the camera positions. However, they are not, as you can see in the attached photo. All I had done with this chunk previously was add and align the photos.

Any help with this is appreciated!

8
Python and Java API / Re: Can't save settings: Permisson denied
« on: April 02, 2019, 09:00:16 PM »
I eventually figured this out: the images I was trying to add were not actually images. I've also found a way to sidestep the permission issue for now.

9
Python and Java API / Can't save settings: Permisson denied
« on: March 25, 2019, 11:51:19 PM »
Hi there,

I am moving our company's pipeline scripts from PhotoScan to Metashape, but I'm running into an odd error when running scripts. The error is:

"Can't save settings: Permisson denied"

I have looked online and there doesn't seem to be anyone else running into this problem. I think that this is a related symptom of another problem I'm having where I'm unable to run `addPhotos`, only getting back: "RuntimeError: Can't load photos"

The invoking user has read/write permissions for both the Metashape executable and all of the photos being added, so I don't know where the permission error could come from.

Any help resolving this issue would be appreciated!

10
For posterity and for those interested, I finally put all the pieces together on the thing I was working on. This script does the following:

1. For a given polygon shape, create a convex hull.
2. From the convex hull, determine the minimum-area oriented bounding box (OBB).
3. Set the chunk region to this oriented bounding box (conform_region_to_polygon).

This was all needed because we want to use known GeoJSON shapes of property lines to shrink the processing region automatically.

NOTE: the script assumes that:
1. You've already run "Align Photos..." on a set of georeferenced photos
2. The chunk coordinate system is WGS 84.
3. The chunk has a single polygon shape with at least 3 vertices.

Code: [Select]
__author__ = 'Colin Basnett (https://github.com/cmbasnett)'

import math
from functools import cmp_to_key
from PhotoScan import *


# Three points are a counter-clockwise turn if ccw > 0, clockwise if
# ccw < 0, and collinear if ccw = 0, because ccw is a determinant that
# gives twice the signed area of the triangle formed by p1, p2 and p3.
def ccw(p1, p2, p3):
    return (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x)


def angle(p1, p2):
    x = (p2.x - p1.x)
    y = (p2.y - p1.y)
    a = math.atan2(y, x)
    return a


def points_cmp(a, b):
    if a.y == b.y:
        return a.x - b.x
    else:
        return a.y - b.y


# Removes duplicates from a sorted list (duplicate items are guaranteed to be adjacent)
def remove_duplicates(points):
    old = points[-1]
    for i in range(len(points) - 2, -1, -1):  # down to 0 so the first element is checked too
        if points[i] == old:
            points.pop(i)
        else:
            old = points[i]


# Graham Scan to get a convex hull from a set of points
def get_convex_hull(points):
    # Sort by y, then x
    points = sorted(points, key=cmp_to_key(points_cmp))
    # Remove duplicates from sorted list
    remove_duplicates(points)
    # Now sort by polar angle from the first point
    points[1:] = sorted(points[1:], key=lambda p: angle(points[0], p))
    # Remove adjacent collinear line segments.
    for i in range(len(points) - 2, 0, -1):
        a = angle(points[i - 1], points[i])
        b = angle(points[i], points[i + 1])
        if a == b:
            points.pop(i)
    stack = points[0:3]
    for i in range(3, len(points)):
        while len(stack) > 1 and ccw(stack[-2], stack[-1], points[i]) <= 0:
            stack.pop()
        stack.append(points[i])
    return stack


def distance(p1, p2):
    return math.hypot(p1.x - p2.x, p1.y - p2.y)


# Numerically stable version of Heron's formula
def area_of_triangle(points):
    assert len(points) == 3
    a, b, c = points
    a, b, c = sorted([distance(a, b), distance(b, c), distance(c, a)], reverse=True)
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))


# Area of polygon as a sum of triangle areas
def area_of_convex_hull(convex_hull):
    poly_area = 0
    for i in range(1, len(convex_hull) - 1, 1):
        points = [convex_hull[0], convex_hull[i], convex_hull[i + 1]]
        poly_area += area_of_triangle(points)
    return poly_area


def rotate_points(points, origin, angle):
    new_points = list(map(lambda x: x - origin, points))
    for p in new_points:
        s = math.sin(angle)
        c = math.cos(angle)
        px = (p.x * c) - (p.y * s)
        py = (p.x * s) + (p.y * c)
        p.x = px
        p.y = py
    return list(map(lambda x: x + origin, new_points))


def fit_obb(points):
    min_area = None
    obb = None
    for i in range(len(points) - 1):
        theta = -angle(points[i], points[i + 1])
        origin = points[i]
        rotated_polygon = rotate_points(points, origin, theta)
        aabb = aabb_from_points(rotated_polygon)
        area = aabb.area()
        if min_area is None or area < min_area:
            min_area = area
            size = aabb.size()
            rot = -theta
            obb_vertices = rotate_points(aabb.vertices(), origin, rot)
            center = (obb_vertices[0] + obb_vertices[2]) * 0.5
            obb = OBB(center, rot, size)
    return obb


class OBB(object):
    def __init__(self, center, rot, size):
        self.center = center
        self.rot = rot
        self.size = size

    def vertices(self):
        points = [
            Vector([-self.size.x / 2, -self.size.y / 2, 0]),
            Vector([ self.size.x / 2, -self.size.y / 2, 0]),
            Vector([ self.size.x / 2, self.size.y / 2, 0]),
            Vector([-self.size.x / 2, self.size.y / 2, 0]),
        ]
        points = rotate_points(points, Vector([0, 0, 0]), self.rot)
        return list(map(lambda x: x + self.center, points))


def aabb_from_points(points):
    min_x = min(points, key=lambda p: p.x).x
    min_y = min(points, key=lambda p: p.y).y
    max_x = max(points, key=lambda p: p.x).x
    max_y = max(points, key=lambda p: p.y).y
    return AABB(Vector([min_x, min_y, 0]), Vector([max_x, max_y, 0]))


class AABB(object):
    def __init__(self, min_, max_):
        self.min = min_
        self.max = max_

    def center(self):
        return (self.max + self.min) * 0.5

    def area(self):
        a = self.max - self.min
        return a.x * a.y

    def vertices(self):
        return [
            self.min,
            Vector([self.max.x, self.min.y, 0]),
            self.max,
            Vector([self.min.x, self.max.y, 0])
        ]

    def size(self):
        return self.max - self.min


def set_rot_of_matrix(transform, rot):
    transform[0, 0] = rot[0, 0]
    transform[0, 1] = rot[0, 1]
    transform[0, 2] = rot[0, 2]
    transform[1, 0] = rot[1, 0]
    transform[1, 1] = rot[1, 1]
    transform[1, 2] = rot[1, 2]
    transform[2, 0] = rot[2, 0]
    transform[2, 1] = rot[2, 1]
    transform[2, 2] = rot[2, 2]


def region_transform(chunk):
    transform = Matrix.Translation(chunk.region.center)
    set_rot_of_matrix(transform, chunk.region.rot)
    return transform


def local_to_region(chunk, points):
    transform = region_transform(chunk)
    points = [transform.inv().mulp(x) for x in points]
    return points


def convert_to_region(chunk, points):
    T = chunk.transform.matrix
    crs = chunk.shapes.crs
    points = [T.inv().mulp(crs.unproject(x)) for x in points]
    return local_to_region(chunk, points)


# Shrinks the region to a minimum bounding box around a polygon.
def conform_region_to_polygon(chunk, shape):
    assert chunk is not None
    assert shape is not None
    assert shape.type is Shape.Polygon
    assert len(shape.vertices) >= 3
    chunk.resetRegion()
    # Save the old region rotation as we will use it in the final region translation.
    old_region_rot = chunk.region.rot
    # Get the WGS84 vertices in region space
    vertices = convert_to_region(chunk, shape.vertices)
    # Generate a convex hull around the shape vertices
    convex_hull = get_convex_hull(vertices)
    # Fit a minimal bounding box around the convex hull
    obb = fit_obb(convex_hull)
    # Maintain the chunk region height
    obb.size.z = chunk.region.size.z
    # Resize the chunk to the size of the OBB
    chunk.region.size = obb.size
    # Rotate the region by the rotation of the OBB
    chunk.region.rot = chunk.region.rot * Utils.ypr2mat(Vector([math.degrees(obb.rot), 0, 0])).inv()
    # Translate the region center to the OBB's center
    transform = Matrix.Diag([1, 1, 1, 1])
    set_rot_of_matrix(transform, old_region_rot)
    chunk.region.center += transform.mulp(obb.center)


# Do the deed!
chunk = app.document.chunk  # `app` comes from the wildcard import above
conform_region_to_polygon(chunk, chunk.shapes[0])

11
Thanks Alexey, works great!

12
Python and Java API / Translating WGS84 coordinates to local coordinates
« on: October 05, 2018, 09:15:14 PM »
Hi there,

I'm trying to convert a polygon with WGS84 coordinates to local space so I can fit Agisoft's region box around a predefined geofence.

I have tried the following, but it gives me an "Unsupported datum transformation" exception.

Code: [Select]
chunk.crs
Out[372]: 2018-10-05 11:11:56 <CoordinateSystem 'WGS 84 (EPSG::4326)'>

chunk.shapes.crs
Out[373]: 2018-10-05 11:12:10 <CoordinateSystem 'WGS 84 (EPSG::4326)'>

chunk.shapes[0].vertices
Out[374]: 2018-10-05 11:12:17 Shape.Vertices([Vector([-123.06583, 49.02038, 0.0]), Vector([-123.065345, 49.020376, 0.0]), Vector([-123.06535, 49.020779, 0.0]), Vector([-123.065825, 49.020783, 0.0]), Vector([-123.06583, 49.02038, 0.0])])

localCrs = PhotoScan.CoordinateSystem('LOCAL')

localCrs
Out[376]: 2018-10-05 11:13:43 <CoordinateSystem 'Local Coordinates (m)'>

PhotoScan.CoordinateSystem.transform(chunk.shapes[0].vertices[0], chunk.shapes.crs, localCrs)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-377-d48b461d2fab> in <module>()
----> 1 PhotoScan.CoordinateSystem.transform(chunk.shapes[0].vertices[0], chunk.shapes.crs, localCrs)

RuntimeError: Unsupported datum transformation
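
The workaround I eventually settled on (the same approach my region-fitting script in another post uses) is to skip `CoordinateSystem.transform` entirely: unproject the WGS84 vertex to geocentric coordinates, then apply the inverse chunk transform. A sketch:

```python
def is_valid_lonlat(v):
    """Cheap sanity check that a vertex looks like (lon, lat, z)."""
    return -180.0 <= v[0] <= 180.0 and -90.0 <= v[1] <= 90.0

def wgs84_to_chunk(chunk, vertex):
    """WGS84 shape vertex -> chunk internal frame (sketch, run inside
    PhotoScan): unproject to geocentric, then inverse chunk transform."""
    assert is_valid_lonlat(vertex)
    geocentric = chunk.shapes.crs.unproject(vertex)
    return chunk.transform.matrix.inv().mulp(geocentric)
```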

Any help is appreciated! Thank you!

13
I'm trying to automate the generation of orthomosaics and models. There is currently only one problem: when I export the model, the origin point does not align with the "ground" plane I'd expect it to.

https://i.imgur.com/ICYrDxE.jpg

The photos are taken with a DJI drone, so there should be sufficient altitude data to rectify this issue.

My first plan was simply to get the camera's transform location in PhotoScan and offset the chunk by the difference from the altitude reported in the EXIF data. However, I believe some sort of coordinate system translation needs to happen first, as the translation vectors of the cameras are, for example:

Code: [Select]
cam.transform.translation()
Out[138]: 2018-08-16 11:45:05 Vector([-0.41432089205811645, 4.574950116471244, 0.02561866387081857])

Meanwhile, the chunk translation is:

Code: [Select]
chunk.transform.translation
Out[145]: 2018-08-16 11:47:10 Vector([-2238541.869777814, -3528167.781020327, 4742206.90104938])

I'm not sure in what units these are supposed to be or how to proceed. Any help is appreciated!

The camera reports its altitude as:

Code: [Select]
2018-08-16 11:49:33    drone-dji:AbsoluteAltitude="+64.49"
2018-08-16 11:49:33    drone-dji:RelativeAltitude="+51.20"

Thanks :)

P.S. Is it possible that it's using the AbsoluteAltitude to determine the ground plane? How could I attempt to translate the chunk "down" by a set distance in meters?
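
On the last point, one way I can think of to translate the chunk "down" by a set number of metres: the chunk transform is in geocentric coordinates, so local "down" at the chunk origin is approximately the direction toward the Earth's centre. A sketch (setting `chunk.transform.matrix` directly is my assumption):

```python
import math

def down_offset(origin, meters):
    """Vector of the given length pointing from a geocentric point
    toward the Earth's centre (pure helper)."""
    n = math.sqrt(sum(c * c for c in origin))
    return tuple(-meters * c / n for c in origin)

def lower_chunk(chunk, meters):
    """Translate the whole chunk down by `meters` (sketch, run inside
    PhotoScan)."""
    import PhotoScan  # only available inside PhotoScan's Python environment
    T = chunk.transform.matrix
    origin = T.mulp(PhotoScan.Vector([0, 0, 0]))
    offset = PhotoScan.Vector(down_offset(origin, meters))
    chunk.transform.matrix = PhotoScan.Matrix.Translation(offset) * T
```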
