Messages - Erik Holmlund

1
General / Change default CRS shift settings?
« on: July 16, 2019, 05:18:25 PM »
Hello,

The coordinates of the CRS I use are too large to export in a binary format, so I always apply a global shift to my exported dense clouds and models. This shift is constant for my work area, so I can conveniently keep the number in my head. I recently found the "Load Defaults" button next to the shift settings, which gives reasonable numbers, but they differ from the ones I normally use.

Is there any way of changing this setting? If not, it would be an incredibly useful feature!

Attached is a screenshot of it.

Kind regards,

Erik

2
General / Re: Mavic Pro Survey - Exaggerated heights
« on: July 24, 2018, 07:31:26 PM »
Hello Millsy1,
I've had similar issues with analogue nadir air photos, due to the focal length being poorly constrained because of the little changes in perspective. Have you tried adding oblique images to the survey? These greatly reduce errors that are otherwise inherent to the camera calibration estimation. See James et al. 2017 (DOI: 10.1002/esp.4125) for example.

Also, are you shooting JPG or raw? I've read here that the lens corrections applied during in-camera JPG processing will mess up your camera model in PhotoScan, so they should be avoided.

Hope it might help.

3
Python Scripting / Re: Height Above Ground
« on: May 14, 2018, 01:01:13 PM »
Hi again,
I tried a more brute-force approach which seems to do the trick for me. This code measures the distance from each camera to every point in the sparse cloud, and derives a relative height from the ten horizontally closest points. It takes forever to run, but it worked well when I tested it.

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Transform internal coordinates to the chunk's CRS
def transformed(vector):
    return chunk.crs.project(chunk.transform.matrix.mulp(vector))

camera_heights = {}

for camera in chunk.cameras:
    if camera.center is None:  # Skip cameras that failed to align
        continue
    cam_pos = transformed(camera.center)

    # Measure the distance from the camera to every point in the sparse cloud
    heights = {}
    for point in chunk.point_cloud.points:
        point_pos = transformed(point.coord[:3])

        distance = cam_pos - point_pos
        xy_distance = distance[:2].norm()  # Used below for sorting

        heights[xy_distance] = distance[2]

    # Sort the xy distances in ascending order
    keys = sorted(heights.keys())

    # Mean height over the ten horizontally closest points
    h_mean = sum(heights[k] for k in keys[:10]) / 10

    camera_heights[camera.label] = h_mean

    # A 'break' could be added here to test the loop on a single camera

for label, height in camera_heights.items():
    print(label, height)

A suggested addition is to actually save the results to a CSV or similar, since this is quite a lengthy process.
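
Something like this could do it (a minimal sketch; the output path is just an example):

Code:
# Dump the 'camera_heights' dict from the script above to a CSV file
with open("camera_heights.csv", "w") as out_file:
    out_file.write("Label,Height\n")
    for label, height in camera_heights.items():
        out_file.write("{},{}\n".format(label, height))

Hope it helps!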

4
Python Scripting / Re: Height Above Ground
« on: May 14, 2018, 12:04:21 PM »
Hi Alexey,
Might I chime in and say that this would only work on strictly vertical (nadir) images, as oblique images can produce tie points that are very far from the camera.

This could be fixed quite easily, however, by adding something along these lines:

Code:
# 'distance' is the vector from the tie point to the camera, where distance[2] is the Z component.
# 'threshold' defines how horizontally distant the tie point is allowed to be.

if distance.norm() / abs(distance[2]) < threshold:
    # Include this distance in the averaging
This threshold could be figured out with some trial and error, or with some simple trigonometry.
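
For completeness, here is roughly how it could slot into a height-averaging loop (an untested sketch; 'chunk', 'cam_pos', 'transformed' and 'threshold' are assumed from my snippets in this thread):

Code:
# Average only the heights of tie points that pass the verticality filter
heights = []
for point in chunk.point_cloud.points:
    distance = cam_pos - transformed(point.coord[:3])
    if distance[2] != 0 and distance.norm() / abs(distance[2]) < threshold:
        heights.append(distance[2])

if heights:  # Avoid division by zero if no point passes the filter
    h_mean = sum(heights) / len(heights)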

5
Hi Alexey,

What I imagined was that if either the fiducial locations were specified, or all fiducials were placed in at least one image, then only two fiducials would need to be placed in an additional image for a (crude) estimate of the internal coordinate system, making an approximate placement of the others possible to aid the user.

I often find myself working with 40-50 images with 8 fiducials each, which requires hundreds of manual placements. I've seen other photogrammetric software where such a feature already exists, and having it here would really speed things up!
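
To illustrate the geometry in plain Python (all numbers here are hypothetical): given the known fiducial layout and two placed fiducials, a 2D similarity transform already predicts where the rest should appear. A mirrored film copy would additionally need a reflection, but the idea is the same:

Code:
# Hypothetical fiducial layout (mm) and two already-placed fiducials (px)
layout_mm = {"f1": (-100.0, 100.0), "f2": (100.0, 100.0),
             "f3": (100.0, -100.0), "f4": (-100.0, -100.0)}
placed_px = {"f1": (210.0, 195.0), "f2": (3890.0, 220.0)}

to_c = lambda v: complex(v[0], v[1])  # a complex number encodes scale + rotation

a_mm, b_mm = to_c(layout_mm["f1"]), to_c(layout_mm["f2"])
a_px, b_px = to_c(placed_px["f1"]), to_c(placed_px["f2"])

s = (b_px - a_px) / (b_mm - a_mm)  # combined scale and rotation
t = a_px - s * a_mm                # translation

# Predict the approximate pixel positions of the remaining fiducials
for name, xy in layout_mm.items():
    if name not in placed_px:
        guess = s * to_c(xy) + t
        print(name, round(guess.real, 1), round(guess.imag, 1))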

6
Hi Alexey,
That sounds like a really useful feature!

May I also suggest adding "gray fiducials" that appear in their approximate locations once more than two fiducials are placed in an image? This would definitely speed up the workflow, as right-clicking and adding each fiducial through the menu is quite time consuming.

7
Hi,
It's the "Calibrate Fiducials" that I was talking about.

I would also love some documentation, as at the moment only experience convinces me that I'm doing everything correctly!

8
Hi jooles,
Are you sure you have the signs (plus/minus) of the coordinates pointing in the right directions? I had issues to begin with, where I had set y as positive when it was supposed to be negative, and so on.

If you don't know which dimensions should be positive or negative, I'd suggest running an automatic calibration of the fiducial positions once you've placed them correctly. The numbers will likely be wrong, depending on which pixel size you use, but from them you can note where the positive and negative signs should be.

9
General / Re: Lens corrections in Camera RAW good or not?
« on: March 09, 2018, 12:30:42 PM »
Hi,
The camera calibration needs to be very exact in order to not adversely affect the results. I can't remember where I've read it, but even thermal expansion on the sensor changes its characteristics. I also assume that mounting/unmounting a lens might shift e.g. the principal point by a tiny amount.

A kind of preprocessing can be done in PhotoScan by simply saving a camera calibration that you're certain of, but academia generally recommends against that.

10
Hi,
I tried both placing markers and shapes on the cloud in the Model view, as well as placing markers (Add Marker) and shapes in the Photo view. The marker appears in the reference pane, but with no estimated position.

There are 50 aligned cameras within the same region as the imported point cloud. I've also (semi-)successfully performed dense reconstructions with the aligned images, so there's nothing wrong with the alignment.

The result is not pretty though, as these images are from 1910, which is why I'm attempting to project features onto a modern DEM instead.

11
Hi,
I just tried to project markers and point shapes on an imported point cloud, and saw that it didn't work. I also noticed that neither markers nor shapes can be projected in the regular Model view. Is this a hard fix?

The context of what I tried: using old aligned images to digitise features from them onto a DEM (converted to a point cloud). Sadly that doesn't seem to work... Generating orthophotos and digitising in QGIS works, which is what I will do now, but doing it in PhotoScan would be much quicker!

Erik

12
Python Scripting / Re: How to get marker errors
« on: February 11, 2018, 08:02:21 PM »
Hi again,
This concept turned out to be really useful for me, to evaluate marker quality in a particularly problematic dataset of mine. Therefore, I made some improvements with the code.

First off, it iterates through every possible pair of images to estimate the position from, instead of choosing pairs at random, and then saves the standard deviations of X, Y, Z and Total, respectively. The output CSV also notes how many iterations it handled, which should be close to the number of possible pairs, i.e. n(n-1)/2 for n projections (up to the maximum limit).

Code:
import PhotoScan
import numpy as np
import itertools
import random

doc = PhotoScan.app.document
chunk = doc.chunk

max_iterations = 200  # Max allowed iterations for one marker

result = []
for marker in chunk.markers:
    num_projections = len(marker.projections)

    positions = []
    if num_projections > 2 and marker.type == PhotoScan.Marker.Type.Regular:  # Marker needs more than two projections to evaluate error, and not be a fiducial
        cam_list = [cam for cam in marker.projections.keys() if cam.center]  # Every aligned camera with projections
        random.shuffle(cam_list)  # Needed if the max_iterations is exceeded
       
        count = 0
        for a, b in itertools.combinations(cam_list, 2):  # Testing pairs of every possible combination

            if a.group and b.group and a.group == b.group and a.group.type == PhotoScan.CameraGroup.Type.Station:  # Skip if the cameras share station group
                continue

            if count >= max_iterations:  # Break if it reaches the iteration limit
                break
            count += 1

            selected_cameras = [a, b]

            # Record each projection's pixel coordinates and whether it's pinned (green) or not (blue)
            px_coords = {camera: (marker.projections[camera].coord, marker.projections[camera].pinned) for camera in cam_list}

            # Unpinning the non-selected cameras
            for camera in cam_list:
                if camera not in selected_cameras:
                    marker.projections[camera] = None

            # Save the estimated position
            positions.append(list(chunk.crs.project(chunk.transform.matrix.mulp(marker.position))))

            # Revert pinned coordinates
            for camera in cam_list:
                coord, pinned = px_coords[camera]
                marker.projections[camera] = PhotoScan.Marker.Projection(coord)
                marker.projections[camera].pinned = pinned

        iterations = len(positions)  # Amount of tested positions
        positions = np.array(positions)
        std = np.std(positions, axis=0)  # Standard deviation
        rms = np.sqrt(np.mean(std**2))  # RMS of the standard deviations

        result.append((marker.label,) + tuple(std) + (rms, iterations))

# Write a CSV at desired position
file_name = PhotoScan.app.getSaveFileName("Save output file", filter="*.csv")
if file_name:  # If an input was given
    with open(file_name, "w") as file:
        file.write("Label, X, Y, Z, Total, Iterations\n")
        for line in result:

            entry = ""
            for value in line:
                entry += str(value).replace("'", "") + ","

            file.write(entry + "\n")


It turned out that some of my markers were really poorly placed, which this tool makes incredibly apparent. So thanks, in a way!

Regards,

Erik

EDIT: The script sometimes makes PhotoScan freeze for me, yet it works perfectly after a restart... I don't know what that's about.

13
General / Re: Texture mapping
« on: February 10, 2018, 02:52:19 PM »
Hi,
What you see is the result of the type of texture atlas that you've chosen, namely the Generic option. That option is great for saving space, but as you've encountered, it's not very easy to edit head-on. One possibility is to try the Adaptive Orthophoto option, which saves larger regions and is easier to edit, but that can also lead to weird artefacts after editing, due to these regions not always blending correctly after an edit.

An alternative is to use the 3D function in Photoshop. While it takes some time to understand, you can eventually edit even a Generic-mapped texture quite intuitively. I can't remember the workflow off the top of my head, but Google has the answer!

Another piece of software for this is Blender, which is free. It's quite a learning curve, but once past it you can do an incredible amount with it. There you can edit the texture directly in 3D with all kinds of brushes, and it works really well once you learn how. YouTube has tons of tutorial videos on this.

As for your other question, do you mean a texture map that looks normal, or an actual map of the surface normals? The latter you can do in Blender, but not in PhotoScan.

Kind regards,

Erik

14
General / Re: Timelapse for Orthomosaics
« on: February 10, 2018, 02:38:04 PM »
Hi,
Adding to SAV's reply, I think the better part of this process could be scripted. Setting the extent, resolution and path in a chunk.exportOrthomosaic() that loops over every chunk, or however your project is structured, would be pretty simple.

15
General / Re: Dead Spots in DENSE CLOUD, pls HELP!
« on: February 10, 2018, 02:29:03 PM »
Hi,
What are your camera reprojection errors? If they're high, you most likely still have an alignment issue which could be the cause of your incomplete dense cloud. You could try varying the key and tie point limits to try and fix it, as well as add manual tie points (markers) to improve the alignment.

There seems to be a lot of vegetation where you survey. I'd suggest a higher overlap (more images) next time, as vegetation is hard for the matching algorithms. Including oblique imagery has also anecdotally proven successful for me in vegetated areas.

I also often have issues with my region cropping the dense cloud output because it's too small. The horizontal extent seems fine in your case, but could the region be cropping it height-wise?
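
If so, resizing the region from the console should fix it (a sketch; the factor of two is arbitrary):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Double the region's height so the dense cloud isn't clipped vertically.
# The size is in internal chunk coordinates, so only the factor matters.
region = chunk.region
size = region.size
region.size = PhotoScan.Vector([size.x, size.y, size.z * 2])
chunk.region = region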

Hope it helps.
