

Messages - Erik Holmlund

1
Feature Requests / Re: Allow custom resolution for DEM generation
« on: August 25, 2021, 03:03:50 PM »
Hi Alexey,
Thank you for the quick answer.

The Transform DEM tool naturally works well for resampling a DEM. However, this may yield sub-optimal results at times.

Imagine I have a noisy point cloud where the "Metashape DEM resolution" is high (say 0.5 m). This may work well, but will yield lots of nodata pixels. If Transform DEM is then used to resample to a target resolution (e.g. 5 m), each nodata pixel is propagated into the entire new, coarser pixel, which results in a poor DEM. A much better approach (which is easy with other tools, such as PDAL, GDAL or CloudCompare) is to specify a coarser gridding resolution (5 m) directly, which does not have the same nodata propagation issue.
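For illustration, the coarse gridding step looks roughly like this with PDAL's Python bindings (just a sketch; the file names are placeholders and assume the dense cloud has already been exported to LAS):

Code:
import json

import pdal

# Grid the exported dense cloud directly at the 5 m target resolution.
# Each cell is filled from the points that fall inside it, so sparse areas
# do not propagate nodata the way resampling an already-gridded fine DEM does.
pipeline = pdal.Pipeline(json.dumps([
    "dense_cloud.las",
    {
        "type": "writers.gdal",
        "filename": "dem_5m.tif",
        "resolution": 5.0,
        "output_type": "idw",
        "nodata": -9999,
    },
]))
pipeline.execute()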


Erik

2
Feature Requests / Allow custom resolution for DEM generation
« on: August 25, 2021, 02:46:41 PM »
Dear Agisoft team,

Metashape currently offers almost every feature one could ask for when deriving DEMs from images! One feature that I sorely miss, however, is specifying a custom grid (resolution and bounds) for the output DEM. Currently, my workflow requires me to export the dense cloud and grid it in PDAL – a process which is time consuming and unwieldy.
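For reference, the export step of that workflow looks roughly like this through the Python API (a sketch; exact parameter names vary between API versions, and the output path is a placeholder):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Export the dense cloud so that it can be gridded externally (e.g. in PDAL).
chunk.exportPoints(
    path="dense_cloud.las",
    source_data=Metashape.DenseCloudData,
    format=Metashape.PointsFormatLAS,
)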

I request that the resolution field be editable in the Build DEM step for dense clouds, exactly as is already possible when building DEMs from meshes.

Thank you in advance.

Kind regards,
Erik

3
Bug Reports / Re: Error when creating dense cloud in Metashape 1.7.0
« on: January 13, 2021, 05:56:37 PM »
Hi Alexey,
Thank you for the reply! Looking forward to seeing it working!

Best,
Erik

4
Bug Reports / Error when creating dense cloud in Metashape 1.7.0
« on: January 07, 2021, 01:58:59 PM »
Hi,
As of Metashape 1.7.0, I can no longer generate a dense cloud for a specific project. It fails with the error:

Code:
Error: Assertion 23910910009 failed at line 54!
Dense cloud generation works well on all other projects but this one. I have tried remaking the project from the same images, with the same result. Could it be the images?

I use four fiducial marks to align them. All images have the same height, but the width varies by two pixels, if that is relevant. However, another project with fiducials has differing resolutions and works well. The only difference between the two is that the failing one has near-horizontal images, while the working one has nadir images.

Thank you in advance.

Kind regards,
Erik

5
General / Change default CRS shift settings?
« on: July 16, 2019, 05:18:25 PM »
Hello,

The coordinates of the CRS I use are too large to export in a binary format, so I always apply a global shift to my exported dense clouds and models. This shift is constant for my work area, so I can conveniently keep the numbers in my head. I recently found the "Load Defaults" button next to the shift settings, which gives reasonable numbers, but they differ from the setting I normally use.

Is there any way of changing this setting? If not, it would be an incredibly useful feature!

Attached is a screenshot of it.
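For reference, this is roughly how the shifted export looks when scripted (just a sketch; I believe exportPoints accepts a shift argument, and the numbers and path below are placeholders):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Constant shift for my work area (placeholder numbers), applied so that the
# exported coordinates are small enough for binary formats.
shift = Metashape.Vector([-400000.0, -7500000.0, 0.0])

chunk.exportPoints(
    path="dense_cloud_shifted.las",
    source_data=Metashape.DenseCloudData,
    format=Metashape.PointsFormatLAS,
    shift=shift,
)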

Kind regards,

Erik

6
General / Re: Mavic Pro Survey - Exaggerated heights
« on: July 24, 2018, 07:31:26 PM »
Hello Millsy1,
I've had similar issues with analogue nadir air photos, where the focal length is poorly constrained because of the small changes in perspective. Have you tried adding oblique images to the survey? They greatly reduce errors that are otherwise inherent to the camera calibration estimation. See, for example, James et al. 2017 (DOI: 10.1002/esp.4125).

Also, are you shooting JPG or raw? I've read here that the lens corrections applied during JPG processing will mess up your camera model in PhotoScan, so they should be avoided.

Hope it might help.

7
Python and Java API / Re: Height Above Ground
« on: May 14, 2018, 01:01:13 PM »
Hi again,
I tried a more brute-force approach which seems to do the trick for me. The code below measures the distance from each camera to every point in the sparse cloud, and derives a relative height from the ten horizontally closest points. It takes forever to run, but it worked well when I tested it.

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk


# Function to transform coordinates into the chunk's CRS
def transformed(vector):
    return chunk.crs.project(chunk.transform.matrix.mulp(vector))


camera_heights = {}

for camera in chunk.cameras:
    # Skip cameras that were not aligned and therefore have no position.
    if camera.center is None:
        continue

    cam_pos = transformed(camera.center)

    # Iterate through every point in the sparse cloud and measure its distance to the camera.
    heights = {}
    for point in chunk.point_cloud.points:
        point_pos = transformed(point.coord[:3])

        distance = cam_pos - point_pos
        xy_distance = distance[:2].norm()  # To be used later for sorting

        heights[xy_distance] = distance[2]

    # Sort a list of keys (xy distances)
    keys = list(heights.keys())
    keys.sort()

    # Mean height difference of the ten points with the lowest xy distances.
    h_mean = sum([heights[k] for k in keys[:10]]) / 10

    camera_heights[camera.label] = h_mean

    # 'break' could be added here to test the loop.

for k, v in camera_heights.items():
    print(k, v)

A suggested addition would be to save the results to a CSV file or similar, since this is quite a lengthy process. Hope it helps!
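For example, something along these lines could be appended to the end of the script above (a minimal sketch; the file name is a placeholder):

Code:
import csv

# Write the camera heights computed above to a CSV file.
with open("camera_heights.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["camera", "height_above_ground"])
    for label, height in camera_heights.items():
        writer.writerow([label, height])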

8
Python and Java API / Re: Height Above Ground
« on: May 14, 2018, 12:04:21 PM »
Hi Alexey,
Might I chime in and say that this would only work on entirely vertical images, as oblique images can give tie points that are very far away from the camera.

This could be fixed quite easily, however, by adding something along these lines:

Code:
# 'distance' is the vector from the tie point to the camera, where distance[2] is the Z component.
# 'threshold' defines how horizontally distant the tie point can be.

if distance.norm() / distance[2] < threshold:
    # Include this distance in the averaging

This threshold could be figured out with some trial and error, or just maths and angles.
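For example (a sketch): for tie points below the camera, distance.norm() / distance[2] equals 1 / cos(angle from vertical), so a maximum accepted angle translates directly into a threshold:

Code:
import math

# Hypothetical maximum angle from vertical for a tie point to be included.
max_angle_deg = 30.0

# distance.norm() / distance[2] = 1 / cos(angle from vertical) for points below the camera.
threshold = 1.0 / math.cos(math.radians(max_angle_deg))  # about 1.15 for 30 degrees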

9
Hi Alexey,

What I imagined was that either the fiducial locations are specified, or all fiducials are placed in at least one image. With that, wouldn't just two fiducials in an additional image be enough for a (crude) estimate of the internal coordinate system, and thus make an approximate placement of the others possible to aid the user?

I often find myself working with 40-50 images with 8 fiducials each, which requires hundreds of manual placements. I've seen other photogrammetric software where such a feature already exists, and having it here would really speed things up!
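To illustrate the idea (a minimal sketch with made-up numbers, ignoring mirroring and sign conventions between the fiducial and image coordinate systems): two placed fiducials are enough to solve a 2D similarity transform from the calibrated fiducial coordinates to pixels, which can then predict roughly where the remaining fiducials are:

Code:
# Calibrated fiducial coordinates (mm) and the pixel positions of the two
# fiducials already placed in the image. All numbers are made up, and the
# two coordinate systems are assumed to have the same axis orientation.
fiducials_mm = {
    "f1": complex(-100.0, -100.0),
    "f2": complex(100.0, -100.0),
    "f3": complex(100.0, 100.0),
    "f4": complex(-100.0, 100.0),
}
placed_px = {
    "f1": complex(400.0, 400.0),
    "f2": complex(7600.0, 400.0),
}

# Solve pixel = a * mm + b (scale, rotation and translation) from the two placed fiducials.
name1, name2 = list(placed_px.keys())
a = (placed_px[name2] - placed_px[name1]) / (fiducials_mm[name2] - fiducials_mm[name1])
b = placed_px[name1] - a * fiducials_mm[name1]

# Predict approximate pixel positions for the fiducials that are not yet placed.
for name, coord_mm in fiducials_mm.items():
    if name not in placed_px:
        prediction = a * coord_mm + b
        print(name, round(prediction.real), round(prediction.imag))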

10
Hi Alexey,
That sounds like a really useful feature!

May I also suggest adding "gray fiducials" that appear in the approximate vicinity once more than two fiducials have been placed in an image? This would definitely speed up the workflow, as right-clicking and adding each fiducial through the menu is quite time-consuming.

11
Hi,
It's the "Calibrate Fiducials" that I was talking about.

I would also love some documentation, as I'm only by experience convinced that I'm doing everything correctly!

12
Hi jooles,
Are you sure you have the coordinate signs (plus/minus) pointing in the right directions? I had issues to begin with, where I set y as positive when it was supposed to be negative, and so on.

If you don't know which dimensions should be positive or negative, I'd suggest running an automatic calibration of the fiducials' positions once you've placed them correctly. The resulting numbers will likely be wrong, depending on which pixel size you use, but you can note where the positive/negative signs should be.

13
General / Re: Lens corrections in Camera RAW good or not?
« on: March 09, 2018, 12:30:42 PM »
Hi,
The camera calibration needs to be very exact in order not to adversely affect the results. I can't remember where I read it, but even thermal expansion of the sensor changes its characteristics. I also assume that mounting/unmounting a lens might shift e.g. the principal point by a tiny amount.

A kind of preprocessing can be done in PhotoScan, by just saving a camera calibration that you're certain of, but academia generally recommends against that.

14
Hi,
I tried placing markers and shapes on the cloud in the Model view, as well as placing markers (Add Marker) and shapes in the Photo view. The marker appears in the Reference pane, but with no estimated position.

There are 50 aligned cameras within the same region as the imported point cloud. I've also (semi-)successfully performed dense reconstructions with the aligned images, so there's nothing wrong with the alignment.

The result is not pretty though, as these images are from 1910, which is why I'm attempting to project features onto a modern DEM instead.

15
Hi,
I just tried to project markers and point shapes onto an imported point cloud, and saw that it didn't work. I also noticed that neither markers nor shapes can be projected in the regular Model view. Is this a hard fix?

The context for what I tried: I want to use old aligned images and digitise features from them onto a DEM (converted to a point cloud). Sadly that doesn't seem to work... Generating orthophotos and digitising in QGIS works, which is what I will do now, but doing it in PhotoScan would be much quicker!

Erik
