Show Posts

Messages - kcm

1
So here are some shots of what I am seeing after building the mesh out with the "extrapolation" option to a bounding box (by hand in the UI). There are some pretty severe extrapolation artifacts at the edge of the dense cloud used to build the mesh; not usable for our purposes here, obviously. The site is a flat agriculture field where we are running accuracy experiments for our map-generation tool set. This is the most severe I have seen the issue at the edge of the dense cloud.

I wonder if those low-confidence, unfiltered points on the edges are what is dragging it down?
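
If so, one option might be to filter out the low-confidence points before building the mesh. A rough sketch of what I have in mind, assuming the dense cloud was generated with point confidence enabled and `chunk` is the active chunk (the exact filter calls here are my assumption, not verified):
Code: [Select]
# rough sketch: drop low-confidence dense cloud points before meshing
# assumes the dense cloud was built with "Calculate point confidence" enabled
chunk.dense_cloud.setConfidenceFilter(0, 1)       # show only points with confidence 0-1
chunk.dense_cloud.removePoints(list(range(128)))  # remove all currently visible (filtered) points
chunk.dense_cloud.resetFilters()                  # restore visibility of the remaining points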

K

2
Will do and report back...

3
Hi,

In our application we are using Metashape to project a variety of UAV-acquired image-space features into world space using the dense cloud results. (The heart of the code that performs the projections is discussed in this post: https://www.agisoft.com/forum/index.php?topic=12781.0). The approach works well, but we are working to trap a few boundary conditions. Among the features we are projecting are the image corners. For images that sit on the geographic boundary of the site and have a valid `camera.transform`, we can get degenerate points returned for one or more of the image corners, like this (in UTM):
Code: [Select]
"type": "Polygon",
      "coordinates": [
        [
          [
            0,
            0
          ],
          [
            0,
            0
          ],
          [
            572650.2037708747,
            5102280.396009785
          ],
          [
            0,
            0
          ],
          [
            0,
            0
          ]
        ]
      ]

Is there a way to extend the dense cloud boundary, or change other model creation parameters, so that we are not returning zeros from `chunk.dense_cloud.pickPoint()` for image points on the boundary? We are hoping that reliably extending the interpolation boundary may also result in fewer invalid `camera.transform`s for images on the boundary, for which we of course cannot project any points at all.
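
For reference, the kind of guard we are adding around the corner projection looks roughly like this (illustrative sketch; `camera` and `corner_px` stand for a boundary camera and a pixel coordinate such as Metashape.Vector([0, 0]), and I am assuming pickPoint returns None when the ray misses the surface):
Code: [Select]
# illustrative sketch: project one image corner and flag degenerate results
sensor = camera.sensor
ray_target = camera.transform.mulp(sensor.calibration.unproject(corner_px))
hit = chunk.dense_cloud.pickPoint(camera.center, ray_target)
if hit is None:
    corner_world = None  # ray missed the dense cloud; corner falls outside the reconstructed surface
else:
    corner_world = chunk.crs.project(chunk.transform.matrix.mulp(hit))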

KCM

4
Hey Alexey,

Nice concise change, thanks! Your last line needs this correction to convert the EPSG int to str:

Code: [Select]
def get_utm_epsg(lon, lat):
    # pick the WGS 84 / UTM EPSG code for the given longitude/latitude
    zone = int((lon + 180) / 6)
    if lat > 0:
        epsg = 32601 + zone  # northern hemisphere (EPSG 32601-32660)
    else:
        epsg = 32701 + zone  # southern hemisphere (EPSG 32701-32760)
    return Metashape.CoordinateSystem("EPSG::" + str(epsg))
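
For anyone finding this later, a minimal sketch of applying the returned CRS to a chunk, assuming `chunk` is the active chunk, the camera reference coordinates are currently stored in chunk.crs (EPSG::4326 here), and `lon`/`lat` come from the project centroid:
Code: [Select]
# sketch: switch the chunk to the computed UTM CRS and convert camera reference coordinates
new_crs = get_utm_epsg(lon, lat)
for cam in chunk.cameras:
    if cam.reference.location:
        cam.reference.location = Metashape.CoordinateSystem.transform(cam.reference.location, chunk.crs, new_crs)
# markers, if any, would need the same treatment
chunk.crs = new_crs
chunk.updateTransform()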

KCM

5
(I am assuming there has to be a Metashape method for this, since the UI offers the correct UTM code as one of the choices via getCoordinateSystem().)

6
Hey Alexey, thank you, but that is already what I am doing. Is there a Metashape method to find the UTM EPSG code for the centroid long/lat of the project, or do I need to implement it with a custom method? For example:
Code: [Select]
def getUtmEpsg(lon, lat):
    # simplistic: ignores the Norway/Svalbard zone exceptions
    zone = str(int((lon + 180) / 6) + 1).zfill(2)  # UTM zone number, zero-padded to two digits

    if 0 <= lat <= 84:
        epsg_code = '326' + zone  # WGS 84 / UTM north
    elif lat >= -80:
        epsg_code = '327' + zone  # WGS 84 / UTM south
    else:
        return None  # outside UTM coverage

    return 'EPSG::' + epsg_code
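
For the centroid itself, the simplest approach I can think of is averaging the camera reference coordinates (sketch; assumes `chunk` is the active chunk and the references are stored as lon/lat in EPSG::4326):
Code: [Select]
# sketch: approximate the project centroid from camera reference coordinates (assumed lon/lat)
locs = [cam.reference.location for cam in chunk.cameras if cam.reference.location]
lon = sum(v.x for v in locs) / len(locs)
lat = sum(v.y for v in locs) / len(locs)
utm_crs = Metashape.CoordinateSystem(getUtmEpsg(lon, lat))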

KCM

7
Hi,

This must be possible in the Python API, but I cannot figure out how to automatically obtain the UTM EPSG code to set the correct UTM CRS for a project that is currently in EPSG::4326. I can readily set it manually in the UI using:
Code: [Select]
UTMcrs = app.getCoordinateSystem()
But I cannot find a method to set it to the correct UTM zone automatically.

KCM

8
General / Re: Agisoft Metashape 1.7.0 pre-release
« on: December 02, 2020, 09:37:40 PM »
FYI: your 1.7 Python API doc still says "Python 3.5" on page 3, even though the 1.7 changelog states "Updated Python to version 3.8" (thank goodness ;-).

9
Alexey,

Thank you, exactly what I am after. I realized after my post that part of the answer was contained in an adjacent post (https://www.agisoft.com/forum/index.php?topic=12780.0). From that post I ended up here:
Code: [Select]
# for projecting a 2D point in image coordinates (image_point) onto the available 3D surface (sparse cloud in this case):
sensor0 = camera0.sensor
pc_pred = chunk.point_cloud.pickPoint(camera0.center, camera0.transform.mulp(sensor0.calibration.unproject(image_point)))
pc_pred.size = 4
pc_pred[3] = 1.
chunk.crs.project((chunk.transform.matrix*pc_pred)[0:3])

It's good to know the unproject using the sensor transform is identical in this case; both formulations generated essentially the same coordinates. Under what circumstances is the sensor transform needed to unproject, as opposed to unprojecting with the specific camera?
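
For reference, a minimal way to compare the two formulations (sketch; `image_point` is the 2D pixel coordinate used above):
Code: [Select]
# sketch: compare Camera.unproject with the sensor-calibration formulation
p_cam = camera0.unproject(image_point)  # point on the viewing ray, chunk internal coordinates
p_sen = camera0.transform.mulp(sensor0.calibration.unproject(image_point))
print((p_cam - p_sen).norm())  # essentially zero in my case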

Also, is the sparse cloud the source of geo-registration and reference for all subsequent products in an orthomosaic workflow (e.g., dense cloud, mesh, DEM, etc.)?

Kris


10
Hi,

I need to perform two-way operations: obtaining image coordinates of tie points, as well as the inverse, obtaining object-space coordinate estimates for arbitrary image points. To obtain the tie point projections onto an image, I am using an iterated version of the following in an existing project:
Code: [Select]
doc = Metashape.app.document
chunk = doc.chunk
camera0 = chunk.cameras[0]
projections = chunk.point_cloud.projections  # per-camera tie point projections
proj = projections[camera0][0]
print(proj.coord)  # 2D pixel coordinate of the first projection in this camera
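
The projections can be tied back to their 3D points through the shared track_id; a sketch of recovering the 3D tie point for that projection and re-projecting it as a cross-check (my assumption here is that Camera.project accepts the point's homogeneous coord directly):
Code: [Select]
# sketch: match the projection to its 3D tie point via track_id, then re-project it
for point in chunk.point_cloud.points:
    if point.track_id == proj.track_id:
        reprojected = camera0.project(point.coord)  # pixel coordinates, should be close to proj.coord
        print(proj.coord, reprojected)
        break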

I realize I can obtain the camera transform as:
Code: [Select]
camera0.transform
But I cannot figure out how to apply it to project an image pixel to a point cloud point. I have tried using Camera.unproject(Vector) but don't obtain the correct result (I am not clear on the directionality of the camera transform matrix and Camera.unproject).

Thank you for the help.

K

11
Thank you, exactly what I needed.

K

12
Hi, I am using the Python API to work with the geometry of a sparse point cloud generated from a set of UAV images. However, after generating the sparse cloud, I am not able to transform the point cloud vectors into geographic coordinates (long, lat, height, w) in the API. For example, the code snippet here:
Code: [Select]
for i, p in enumerate(chunk.point_cloud.points):
    print(i)
    print(p.coord)

generates coordinates in what appears to be a model-space CRS, like this:
Code: [Select]
...
12243
Vector([10.809051513671875, -2.2198383808135986, -9.173471450805664, 1.0])
12244
Vector([9.053605079650879, -2.364966869354248, -9.06147575378418, 1.0])
12245
Vector([8.126117706298828, -2.4715628623962402, -8.9992036819458, 1.0])
My CRS is set to WGS 84: <CoordinateSystem 'WGS 84 (EPSG::4326)'>

If I pull the project into the UI, I can manually output the points as long/lat/z/r/g/b, but I cannot produce the same units programmatically.
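
Ideally I would like something like the following sketch to work (my assumption being that chunk.transform.matrix maps internal coordinates into the geocentric frame that chunk.crs.project expects):
Code: [Select]
# sketch: transform internal tie point coordinates into the chunk CRS (EPSG::4326 here)
T = chunk.transform.matrix
for i, p in enumerate(chunk.point_cloud.points):
    if not p.valid:
        continue
    geo = chunk.crs.project(T.mulp(p.coord[0:3]))  # Vector([lon, lat, height])
    print(i, geo)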

Anyone know what I am missing?

K
