Forum

Show Posts



Messages - Erik Holmlund

Pages: 1 [2] 3
16
Python and Java API / Re: How to get makers errors
« on: February 11, 2018, 08:02:21 PM »
Hi again,
This concept turned out to be really useful for me for evaluating marker quality in a particularly problematic dataset of mine. Therefore, I made some improvements to the code.

First off, it iterates through every possible pair of images for projecting the marker, instead of choosing pairs at random, and then saves the standard deviations of X, Y, Z and Total, respectively. The output CSV also notes how many iterations it handled, which should be close to the number of possible pairs, n*(n-1)/2 for n projections (up to the maximum limit).

Code: [Select]
import PhotoScan
import numpy as np
import itertools
import random

doc = PhotoScan.app.document
chunk = doc.chunk

max_iterations = 200 # Max allowed iterations for one marker

result = []
for marker in chunk.markers:
    num_projections = len(marker.projections)

    positions = []
    if num_projections > 2 and marker.type == PhotoScan.Marker.Type.Regular:  # Marker needs more than two projections to evaluate error, and not be a fiducial
        cam_list = [cam for cam in marker.projections.keys() if cam.center]  # Every aligned camera with projections
        random.shuffle(cam_list)  # Needed if the max_iterations is exceeded
       
        count = 0
        for a, b in itertools.combinations(cam_list, 2):  # Testing pairs of every possible combination

            if a.group and b.group and a.group == b.group and a.group.type == PhotoScan.CameraGroup.Type.Station:  # Skip if the cameras share station group
                continue

            if count >= max_iterations:  # Break if it reaches the iteration limit
                break
            count += 1

            selected_cameras = [a, b]

            # Note pinned pixel coordinates and if pinned or not (green or blue)
            px_coords = {camera: (marker.projections[camera].coord, marker.projections[camera].pinned) for camera in cam_list}

            # Unpinning the non-selected cameras
            for camera in cam_list:
                if camera not in selected_cameras:
                    marker.projections[camera] = None

            # Save the estimated position
            positions.append(list(chunk.crs.project(chunk.transform.matrix.mulp(marker.position))))

            # Revert pinned coordinates
            for camera in cam_list:
                coord, pinned = px_coords[camera]
                marker.projections[camera] = PhotoScan.Marker.Projection(coord)
                marker.projections[camera].pinned = pinned

        iterations = len(positions)  # Amount of tested positions
        positions = np.array(positions)
        std = np.std(positions, axis=0)  # Standard deviation
        rms = np.sqrt(np.mean(std**2))  # RMS of the per-axis standard deviations

        result.append((marker.label,) + tuple(std) + (rms, iterations))

# Write a CSV at desired position
file_name = PhotoScan.app.getSaveFileName("Save output file", filter="*.csv")
if file_name:  # If an input was given
    with open(file_name, "w") as file:
        file.write("Label, X, Y, Z, Total, Iterations\n")
        for line in result:

            entry = ""
            for value in line:
                entry += str(value).replace("'", "") + ","

            file.write(entry + "\n")
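
For anyone who wants to sanity-check the statistics without opening PhotoScan, here is a standalone sketch (the positions are made-up numbers, nothing from a real project) of the pair count and the std/RMS values that end up in the CSV:

```python
import itertools
import numpy as np

# With n projections there are n*(n-1)/2 camera pairs to test
n = 4
pairs = list(itertools.combinations(range(n), 2))
assert len(pairs) == n * (n - 1) // 2  # 6 pairs for 4 projections

# Synthetic stand-in for the estimated (X, Y, Z) position from each pair
positions = np.array([
    [100.02, 200.01, 50.00],
    [ 99.98, 199.99, 50.03],
    [100.01, 200.02, 49.98],
    [100.00, 199.98, 50.01],
])

std = np.std(positions, axis=0)   # Per-axis standard deviation (X, Y, Z columns)
rms = np.sqrt(np.mean(std ** 2))  # The single "Total" value written to the CSV
print(std, rms)
```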


It turned out that some of my markers were really poorly placed, which this tool makes incredibly apparent. So thanks, in a way!

Regards,

Erik

EDIT: The script makes PhotoScan freeze for me sometimes, yet it works perfectly after a restart... Don't know what that's about.

17
General / Re: Texture mapping
« on: February 10, 2018, 02:52:19 PM »
Hi,
What you see is the result of the type of texture atlas that you've chosen, namely the Generic option. That option is great for saving space, but as you've encountered, it's not very easy to edit head-on. One possibility is to try the Adaptive Orthophoto option, which saves larger regions and is easier to edit, but that can also lead to weird artefacts after editing, due to these regions not always blending correctly after an edit.

An alternative is to use the 3D function in Photoshop. While it takes some time to understand, you can eventually edit even a Generic-mapped texture quite intuitively. I can't remember the workflow off the top of my head, but Google has the answer!

Another piece of software to do this in is Blender, which is free. It's quite a learning curve, but once you learn it you can do an incredible amount with it. There you can edit the texture directly in 3D with all kinds of brushes, and it's really good once you get the hang of it. Youtube has tons of tutorial videos on this.

For your second/first question, do you mean a texture map that looks normal or an actual map of the surface normals? For the latter, you can do it in Blender but not in PhotoScan.

Kind regards,

Erik

18
General / Re: Timelapse for Orthomosaics
« on: February 10, 2018, 02:38:04 PM »
Hi,
Adding to SAV's reply, I think the better part of this process could be scripted. Setting the extent, resolution and path in a chunk.exportOrthomosaic() call inside a loop over every chunk, or however your project is structured, would be pretty simple.
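To sketch that idea (treat this as untested: `out_dir`, the `dx`/`dy` resolution arguments and the `chunk.orthomosaic` check are assumptions to verify against the API reference for your version):

```python
import os

def export_path(out_dir, chunk_label):
    # One output file per chunk, e.g. "May_2017.tif"
    return os.path.join(out_dir, chunk_label.replace(" ", "_") + ".tif")

def export_all_orthomosaics(out_dir, resolution=0.05):
    # Imported here so the path helper above can be tried outside PhotoScan
    import PhotoScan
    doc = PhotoScan.app.document
    for chunk in doc.chunks:
        if chunk.orthomosaic:  # Skip chunks where no orthomosaic has been built
            chunk.exportOrthomosaic(export_path(out_dir, chunk.label),
                                    dx=resolution, dy=resolution)
```

Passing the same region and projection to every export call should keep the frames registered for the timelapse.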

19
General / Re: Dead Spots in DENSE CLOUD, pls HELP!
« on: February 10, 2018, 02:29:03 PM »
Hi,
What are your camera reprojection errors? If they're high, you most likely still have an alignment issue which could be the cause of your incomplete dense cloud. You could try varying the key and tie point limits to try and fix it, as well as add manual tie points (markers) to improve the alignment.

There seems to be a lot of vegetation where you survey. I'd suggest a higher overlap (more images) next time, as vegetation is hard for matching algorithms. Including oblique imagery has also anecdotally worked well for me in vegetated areas.

I also often have issues with my region cropping the dense cloud output, due to it being too small. I see that the horizontal extent seems good, but could the region be cropping it height-wise?

Hope it helps.

20
General / Re: how much points are enough to generate the dense cloud?
« on: February 02, 2018, 04:28:26 PM »
But have you checked that the region (bounding box) is around your tie points (and thus the scene you want to reconstruct)?

You will only get points occurring within the region, so if it's off you won't get anything.
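If it helps, here is a rough way to check this from the console. The `inside_box` helper is my own hypothetical sketch, and it ignores the region's rotation (`region.rot`), so it's only an approximation:

```python
def inside_box(point, center, size):
    # Axis-aligned check: within half the box size of the centre on every axis?
    return all(abs(p - c) <= s / 2 for p, c, s in zip(point, center, size))

# In the PhotoScan console you could then try something along these lines
# (untested, and again ignoring the region's rotation):
#
# chunk = PhotoScan.app.document.chunk
# region = chunk.region
# n_out = sum(not inside_box(point.coord, region.center, region.size)
#             for point in chunk.point_cloud.points)
# print(n_out, "tie points outside the region")
#
# chunk.resetRegion() should recentre the region on the tie points if it drifted
```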

21
General / Re: how much points are enough to generate the dense cloud?
« on: January 31, 2018, 03:34:43 PM »
Hi,
Could it be that your region is misaligned, and thus won't cover your area of interest? This has happened to me multiple times.

22
Python and Java API / Re: How to get makers errors
« on: January 29, 2018, 07:34:44 PM »
Hi,
I modified an old script to do what I think you're after, since I'm quite interested in it myself. It only does half of the work, since I'm not sure that numpy works out of the box in 1.4.0 (any input on this would be lovely; I installed it manually, so I don't know whether it's 'supposed' to work or not).

This code takes two random images at a time and notes the resulting estimated coordinate of a marker, then takes another pair of images, and so on. It does this for every marker, for half as many iterations as there are marker projections, and does nothing if there are fewer than three projections. The output is a CSV table with the different estimated positions of each marker. The standard deviation of the values is a good measure of its precision.

With numpy this could be given as an output directly, e.g. through just np.std(marker_specific_result)

Code: [Select]
import PhotoScan
import random

doc = PhotoScan.app.document
chunk = doc.chunk


result = []
for marker in chunk.markers:
    num_projections = len(marker.projections)
    print(num_projections)

    if num_projections > 2:  # Marker needs more than two projections to evaluate error
        cam_list = list(marker.projections.keys())

        for x in range(int(round(num_projections / 2, 0))):  # Do half as many iterations as there are projections

            random.shuffle(cam_list)
            selected_cameras = cam_list[:2]  # Two random cameras

            # Note pinned pixel coordinates and whether each projection was pinned
            px_coords = {camera: (marker.projections[camera].coord, marker.projections[camera].pinned) for camera in cam_list}

            # Unpinning the non-selected cameras
            for camera in cam_list:
                if camera not in selected_cameras:
                    marker.projections[camera] = None

            # Save the estimated position and marker label
            output = (marker.label,) + tuple(chunk.crs.project(chunk.transform.matrix.mulp(marker.position)))
            result.append(output)

            # Revert the projections, restoring their original pinned state
            for camera in cam_list:
                coord, pinned = px_coords[camera]
                marker.projections[camera] = PhotoScan.Marker.Projection(coord)
                marker.projections[camera].pinned = pinned

# Write a CSV at desired position
file_name = PhotoScan.app.getSaveFileName("Save output file", filter="*.csv")
if file_name:  # Only write if a path was actually chosen
    with open(file_name, "w") as file:
        for line in result:

            entry = ""
            for value in line:
                entry += str(value).replace("'", "") + ","

            file.write(entry + "\n")


Regards

Erik

23
Bug Reports / Re: Error using Refine Mesh in the latest build
« on: January 25, 2018, 10:01:00 PM »
Hi,
Interesting, this just gave me a new error...

After disabling rolling shutter compensation and optimizing cameras, I got this as a message box:
Code: [Select]
invalid pitch argument (12) at line 284

Let me know if you'd be helped by a more in-depth error assessment on my part.

Erik

24
Bug Reports / Error using Refine Mesh in the latest build
« on: January 05, 2018, 04:55:42 PM »
Hi,
I just encountered a bug with the new Refine Mesh in version 1.4.0 build 5650, where I can't seem to use it on a higher quality than Low. I have a project with 121 images (12 Mpx, 8-bit, LZW-compressed TIFFs), and I'm running refinement on a mesh with 491 814 faces. Mesh refinement works on the Low setting, raising the face count to 638 675. If I try it on Medium, it shows the progress bar until it finishes at 100%, and then gives two error messages:

Code: [Select]
Analyzing mesh detalization...

2018-01-05 13:23:40 Warning: cudaStreamDestroy failed: unknown error (30)

2018-01-05 13:23:40 Finished processing in 112.946 sec (exit code 0)

2018-01-05 13:23:40 Error: Kernel failed: unknown error (30) at line 150


If I try on High, the error message comes up immediately. I've tried reloading the program several times and restarting my system.

I used the Refine Mesh tool the day before yesterday on the last experimental build (5585?) and it worked fine.

Attached is a log of the work flow: Load project, Refine Mesh (Low), Duplicate mesh, Refine Mesh (Medium) and a subsequent error.

I hope it helps, and that the error can be fixed.

25
Hi,
I just updated to build 5532 and it seems to be working in the two instances I tested! I'll update on any issues if they arise. But for now, thank you for these great new features!

26
Python and Java API / Re: Automize Error Reduction - Gradual Selection
« on: December 14, 2017, 05:44:44 PM »
For your first question, this list comprehension works:
Code: [Select]
pc = chunk.point_cloud
nselected = len([p for p in pc.points if p.selected])

Also, I'm pretty sure a neater way of accomplishing what you want is to use this:
Code: [Select]
chunk.buildPoints(error=threshold)
chunk.optimizeCameras()
And repeat until the result is good enough. The perk of this approach, apart from it being shorter, is that statistical 'outliers' that were removed in previous stages can be reinstated if their errors are reduced to within the threshold.
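A sketch of that repetition (the geometric threshold schedule and the helper names are my own arbitrary choices, not anything from the API):

```python
def threshold_schedule(start, factor, steps):
    # e.g. 1.0, 0.8, 0.64: a simple geometric tightening of the error threshold
    return [start * factor ** i for i in range(steps)]

def rebuild_and_optimize(chunk, start=1.0, factor=0.8, steps=3):
    # Rebuild the tie points at each threshold, then re-run the bundle adjustment
    for threshold in threshold_schedule(start, factor, steps):
        chunk.buildPoints(error=threshold)
        chunk.optimizeCameras()
```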

This however raises a question of mine: what do these iterations really accomplish? If my understanding of the bundle adjustment (Camera Optimization) is correct, a similar thresholding is applied internally, and only the points considered valid are used for deciding the orientation. Thus, by removing statistical outliers beforehand, we're just doing the bundle adjustment's job.

Or am I wrong about that? I would love a word from the Agisoft team!

EDIT: Fixed the first script

27
General / Re: smooth mesh in certain areas
« on: December 14, 2017, 05:12:37 PM »
Stunnline, the free software MeshLab supports both decimation and smoothing of selected faces. The selection process is, in my experience, a bit clunky however.

28
Nice to hear!

To clarify, marker-based chunk alignment works well. It's the chunk update and optimization that don't work for me.

29
Hello again,
I seem to have been a bit quick in saying it works perfectly, since I have quite the list of bugs.

  • When exporting markers and reimporting them, the fiducials show up in the markers panel, with image coordinates as reference coords. If they're removed, the fiducials on the image are removed as well.
  • If I add a new fiducial in an image, it doesn't show up in the camera calibration. The only way to then remove it is via the console (chunk.remove(chunk.markers[-1])).
  • If I have a good (subpixel) alignment on the 'fiducial chunk' and specify the projections of three GCPs, they show up correctly in the point cloud, with reasonable pixel errors. I can then properly align two chunks with the same GCPs using Align Chunks (with markers). If I however give the GCPs reference information and update the transform, the errors shoot up to thousands of meters. The whole terrain also ends up vertical. I know that the GCPs have correct coordinates, since I can align them with Align Chunks to a chunk with digital imagery.

    If I subsequently run Optimize Cameras (with marker reference data unchecked), the whole alignment goes haywire, with both marker and camera reprojection error ranging from 52-1483 pixels. If I run the optimization with reference checked, the errors are even higher at 1269-7553 pixels.
  • I also noticed something weird after using Align Chunks with markers on the fiducial chunk. If I look around in the model that I used to align, everything is normal. If I however switch to the fiducial chunk and rotate the camera, the whole model jumps so that everything is tilted by maybe 10 degrees. If I just move the camera, nothing happens.

The only thing I suspect I might do differently is to rotate the images in PhotoScan so they face north. Could that mess with the fiducials?

I'd love to help out with providing projects and such, if it might help.

30
Python and Java API / Re: How to get makers errors
« on: December 07, 2017, 11:01:19 AM »
If you want the total marker error:

Code: [Select]
doc = PhotoScan.app.document
chunk = doc.chunk

for marker in chunk.markers:
    est = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))  # Gets estimated marker coordinate
    ref = marker.reference.location

    if est and ref:
        error = (est - ref).norm()  # The .norm() method gives the total error. Removing it gives X/Y/Z error
        print(marker.label, error)

That code gives the same error values for me.
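To spell out what `.norm()` does there: the total error is just the Euclidean length of the per-axis error vector. The same thing with numpy and made-up coordinates:

```python
import numpy as np

est = np.array([100.0, 200.0, 50.0])  # Estimated marker coordinate (made-up)
ref = np.array([100.3, 199.6, 50.0])  # Reference coordinate (made-up)

error_xyz = est - ref                  # Per-axis (X, Y, Z) error
total = np.linalg.norm(error_xyz)      # sqrt(0.3**2 + 0.4**2 + 0**2), about 0.5
print(total)
```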
