

Messages - Erik Holmlund

Pages: 1 [2] 3
16
General / Re: how many points are enough to generate the dense cloud?
« on: February 02, 2018, 04:28:26 PM »
But have you checked that the region (bounding box) actually encloses your tie points (and thus the scene you want to reconstruct)?

You will only get points that fall within the region, so if it's misplaced you won't get anything.
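
If the region has drifted, a quick way to recover it (assuming your PhotoScan version exposes chunk.resetRegion(); otherwise the region can be adjusted by hand in the model view) is a couple of lines in the console:
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Reset the reconstruction region so it encloses the tie points again
chunk.resetRegion()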

17
General / Re: how many points are enough to generate the dense cloud?
« on: January 31, 2018, 03:34:43 PM »
Hi,
Could it be that your region is misaligned and thus doesn't cover your area of interest? This has happened to me multiple times.

18
Python Scripting / Re: How to get marker errors
« on: January 29, 2018, 07:34:44 PM »
Hi,
I modified an old script to do what I think you're after, since I'm quite interested in it myself. It only does half of the work, since I'm not sure numpy works out of the box in 1.4.0 (input on this would be lovely; I installed it manually, so I don't know whether it's 'supposed' to work or not).

This code takes two random images at a time, notes the resulting estimated coordinate of a marker, then takes another pair of images, and so on. It does this for all the markers, for half as many iterations as there are marker projections, and does nothing if there are fewer than three projections. The output is a CSV table with the different estimated positions of each marker. The standard deviation of those values is a good measure of each marker's precision.

With numpy this could be computed directly, e.g. through np.std(marker_specific_result); see the sketch after the code below.

Code: [Select]
import PhotoScan
import random

doc = PhotoScan.app.document
chunk = doc.chunk


result = []
for marker in chunk.markers:
    num_projections = len(marker.projections)
    print(num_projections)

    if num_projections > 2:  # Marker needs more than two projections to evaluate error
        cam_list = list(marker.projections.keys())

        for _ in range(int(round(num_projections / 2, 0))):  # Do half as many iterations as there are projections

            random.shuffle(cam_list)
            selected_cameras = cam_list[:2]  # Two random cameras

            # Note pinned pixel coordinates
            px_coords = {camera: marker.projections[camera].coord for camera in cam_list}

            # Unpinning the non-selected cameras
            for camera in cam_list:
                if camera not in selected_cameras:
                    marker.projections[camera] = None

            # Save the estimated position and marker label
            output = (marker.label,) + tuple(chunk.crs.project(chunk.transform.matrix.mulp(marker.position)))
            result.append(output)

            # Revert pinned coordinates
            for camera in cam_list:
                coord = PhotoScan.Marker.Projection(px_coords[camera])
                marker.projections[camera] = coord
                marker.projections[camera].pinned = True

# Write a CSV at desired position
file_name = PhotoScan.app.getSaveFileName("Save output file", filter="*.csv")
with open(file_name, "w") as file:
    for line in result:
        entry = ""
        for value in line:
            entry += str(value).replace("'", "") + ","
        file.write(entry + "\n")
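
If numpy does turn out to be available, the summary step could be done in the same script. A minimal sketch, assuming the result list built above and that numpy imports cleanly:
Code: [Select]
import numpy as np

# Group the estimated coordinates per marker label
per_marker = {}
for label, x, y, z in result:
    per_marker.setdefault(label, []).append((x, y, z))

# Standard deviation of the estimates along each axis, per marker
for label, coords in per_marker.items():
    print(label, np.std(np.array(coords), axis=0))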


Regards

Erik

19
Bug Reports / Re: Error using Refine Mesh in the latest build
« on: January 25, 2018, 10:01:00 PM »
Hi,
Interesting, this just gave me a new error.

After disabling rolling shutter compensation and optimizing cameras, I got this as a message box:
Code: [Select]
invalid pitch argument (12) at line 284

Let me know if you'd be helped by a more in-depth error assessment on my part.

Erik

20
Bug Reports / Error using Refine Mesh in the latest build
« on: January 05, 2018, 04:55:42 PM »
Hi,
I just encountered a bug with the new Refine Mesh tool in version 1.4.0 build 5650, where I can't seem to use it on any quality higher than Low. I have a project with 121 images (12 Mpx, 8-bit, LZW-compressed TIFFs), and I'm running refinement on a mesh with 491 814 faces. Mesh refinement works on the Low setting, raising the face count to 638 675 faces. If I try it on Medium, it shows the progress bar until it finishes at 100%, then gives two error messages:

Code: [Select]
Analyzing mesh detalization...

2018-01-05 13:23:40 Warning: cudaStreamDestroy failed: unknown error (30)

2018-01-05 13:23:40 Finished processing in 112.946 sec (exit code 0)

2018-01-05 13:23:40 Error: Kernel failed: unknown error (30) at line 150


If I try on High, the error message comes up immediately. I've tried reloading the program several times and restarting my system.

I used the Refine Mesh tool the day before yesterday on the last experimental build (5585?) and it worked fine.

Attached is a log of the workflow: Load project, Refine Mesh (Low), Duplicate mesh, Refine Mesh (Medium), and the subsequent error.

I hope it helps, and that the error can be fixed.

21
Hi,
I just updated to build 5532 and it seems to be working in the two instances I tested! I'll update on any issues if they arise. But for now, thank you for these great new features!

22
Python Scripting / Re: Automize Error Reduction - Gradual Selection
« on: December 14, 2017, 05:44:44 PM »
For your first question, this list comprehension works:
Code: [Select]
pc = chunk.point_cloud
nselected = len([p for p in pc.points if p.selected])

Also, I'm pretty sure a neater way of accomplishing what you want is to use this:
Code: [Select]
chunk.buildPoints(error=threshold)
chunk.optimizeCameras()
And repeat until the result is good enough (see the sketch below). The perk of this approach, apart from being shorter, is that statistical 'outliers' removed in earlier stages can be reinstated if their errors drop back within the threshold.
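
As a rough sketch of what that repetition could look like (the thresholds here are arbitrary examples, not recommendations):
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Example thresholds only; tune them to your own project
for threshold in [1.0, 0.8, 0.6]:
    chunk.buildPoints(error=threshold)  # rebuild tie points, filtering by reprojection error
    chunk.optimizeCameras()             # re-run the bundle adjustment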

This does raise a question of my own, though: what do these iterations really accomplish? If my understanding of the bundle adjustment (Camera Optimization) is correct, a similar thresholding is applied internally and only the points considered valid are used to solve the orientation. Thus, by removing statistical outliers beforehand, we're just doing the bundle adjustment's job for it.

Or am I wrong about that? I would love a word from the Agisoft team!

EDIT: Fixed the first script

23
General / Re: smooth mesh in certain areas
« on: December 14, 2017, 05:12:37 PM »
Stunnline, the free software MeshLab supports both decimation and smoothing of selected faces. In my experience, though, the selection process is a bit clunky.

24
Nice to hear!

To clarify, marker-based chunk alignment works well. It's the chunk update and optimization that don't work for me.

25
Hello again,
I seem to have been a bit quick in saying it works perfectly, since I have quite the list of bugs.

  • When exporting markers and reimporting them, the fiducials show up in the markers panel, with image coordinates as reference coords. If they're removed, the fiducials on the image are removed as well.
  • If I add a new fiducial in an image, it doesn't show up in the camera calibration. The only way to then remove it is using the console (chunk.remove(chunk.markers[-1])).
  • If I have a good (subpixel) alignment on the 'fiducial chunk' and specify the projections of three GCPs, they show up correctly in the point cloud, with reasonable pixel errors. I can then properly align two chunks with the same GCPs using Align Chunks (with markers). If I however give the GCPs reference information and update the transform, the errors shoot up to thousands of meters, and the whole terrain turns vertical. I know the GCPs have correct coordinates, since I can align them with Align Chunks to a chunk with digital imagery.

    If I subsequently run Optimize Cameras (with marker reference data unchecked), the whole alignment goes haywire, with both marker and camera reprojection errors ranging from 52 to 1483 pixels. If I run the optimization with the reference checked, the errors are even higher, at 1269 to 7553 pixels.
  • I also noticed something weird after using Align Chunks with markers on the fiducial chunk. If I look around in the model that I used to align, everything is normal. If I however switch to the fiducial chunk and rotate the camera, the whole model jumps so that everything is tilted by maybe 10 degrees. If I just move the camera, nothing happens.

The only thing I suspect I might do differently is to rotate the images in PhotoScan so they face north. Could that mess with the fiducials?

I'd be happy to provide projects and such, if that would help.

26
Python Scripting / Re: How to get marker errors
« on: December 07, 2017, 11:01:19 AM »
If you want the total marker error:

Code: [Select]
doc = PhotoScan.app.document
chunk = doc.chunk

for marker in chunk.markers:
    est = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))  # Gets estimated marker coordinate
    ref = marker.reference.location

    if est and ref:
        error = (est - ref).norm()  # The .norm() method gives the total error. Removing it gives X/Y/Z error
        print(marker.label, error)

That code gives the same error values for me.
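
For reference, a small variation of the loop above (reusing chunk from it) that prints the per-axis components instead of the norm; est - ref is a PhotoScan.Vector, so its .x, .y and .z attributes should be available:
Code: [Select]
for marker in chunk.markers:
    est = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))  # estimated coordinate
    ref = marker.reference.location  # reference (entered) coordinate

    if est and ref:
        diff = est - ref
        # Per-axis error components, followed by the total error
        print(marker.label, diff.x, diff.y, diff.z, diff.norm())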

27
You can solve it with a list comprehension, like you started on.

Code: [Select]
chunk = doc.chunk
points = chunk.point_cloud.points
nselected = len([p for p in points if p.selected])

or just
Code: [Select]
chunk = doc.chunk
nselected = len([p for p in chunk.point_cloud.points if p.selected])

It runs very quickly on my computer.

28
Hi Alexey,
I had completely missed the latest build! Now it works like a charm.

Thanks!

29
Hi,
I'm very interested in the new fiducial support, as it greatly reduces the effort of using analog imagery. However, I can't make it work completely, as marker placement seems non-functional.

If I use the images separately as non-analog, marker placement works (as badly as it does without fiducials). If I specify the fiducials and place a marker, it ends up on the other side of the solar system. Below is an example of an estimated coordinate I get from a point added through the sparse point cloud:

East: 8 144 283.304 m
North: 13 169 867.186 m
Altitude: 7 982 093 517.095 m

The pixel error is also massive, at 2895.922.

My suspicion is that the internal coordinate system produced by fiducial placement is wrong, creating invalid marker projections and therefore an invalid position. The fiducials I have in the same project are:

X, Y in millimeters, with a focal length of 152.83 mm:
Upper left: -106.073, 105.904
Upper right: 105.919, 106.098
Lower right: 106.06, -105.902
Lower left: -105.906, -106.099

I've tried it in several projects, and with different X and Y directions (positive X or negative X etc.). I've also tried the calibrate fiducials tool with no change.

Reproducing the problem is as easy as importing the images, placing fiducials, calibrating them, aligning the images and then placing a marker in the point cloud.

I would love to have the tool working, but evidently it's not!

30
General / Re: Agisoft PhotoScan 1.4.0 pre-release
« on: December 01, 2017, 07:10:24 PM »
I noticed a bug in the new Refine Mesh tool, where certain compression types, or certain numbers of bands, return an "Unsupported data type" error.

I used 8-bit B&W images with the color profile "Gray Gamma 2.2" and what I think was ZIP compression (the metadata doesn't say). After converting them all to 8-bit, 3-channel "Adobe RGB (1998)" with no compression, the tool worked fine. My suspicion is the ZIP compression, but I can't say so reliably, since I'm not entirely sure what the compression actually was.
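
For anyone hitting the same error, this is roughly what that conversion could look like outside PhotoScan. A minimal sketch, assuming Pillow is installed and the source images are 8-bit grayscale TIFFs (the library and paths are examples, not necessarily what I used):
Code: [Select]
import glob
import os

from PIL import Image

src_dir = "input_tiffs"       # example paths only
dst_dir = "converted_tiffs"
os.makedirs(dst_dir, exist_ok=True)

for path in glob.glob(os.path.join(src_dir, "*.tif")):
    rgb = Image.open(path).convert("RGB")  # grayscale -> 3-channel RGB
    # Pillow writes TIFF uncompressed by default, which sidesteps the compression issue
    rgb.save(os.path.join(dst_dir, os.path.basename(path)))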

Either way, the tool works with that simple workaround!
