Messages - LFSantosgeo

16
Hey mks_gis,

What do you think of the following code structure? The idea is to step through a range of filter levels and try to remove points whenever a condition is True, restarting the search through the same range each time something is found. I'm still working on it, but I thought I'd share the idea for implementing filtering in the USGS workflow (e.g. running reconstruction uncertainty more than once until no points are selected within the chosen range). Of course this would rapidly reduce the number of points in the sparse point cloud.

Code: [Select]
threshold = range(0, 11, 1)  # here would be the filter levels range
listx = []  # building a list just for the example
for i in threshold:
    listx.append(i)

restart = 0
restartLoop = True
while restartLoop:
    restartLoop = False
    for idx, i in enumerate(listx):
        print("do something, such as printing i:", i)

        if i > 5:  # if this condition holds: remove points and restart the search over the range
            print("found value for condition: ", i)
            del listx[idx]  # simulates removing points for a certain i value

            # optimization would happen here

            restartLoop = True
            print("RESTARTING LOOP\n")
            restart += 1
            break  # break the inner for loop; the outer while then restarts the search
    else:
        # the for loop finished without a break: nothing left to remove, so the while loop ends
        continue

print("restarts of the outer while loop:", restart)

Looking forward to seeing your new version of the code!

17
Any news about this feature in Metashape?

18
Just found out. Python tricks...
Code: [Select]
point_cloud = chunk.point_cloud
points = point_cloud.points
len(points)
Gives the total valid points.
Then Gradual selection is done.
And then:
Code: [Select]
chunk = doc.chunk
len(chunk.point_cloud.points)
Gives the number of valid tie points after gradual selection, and you can assign that to a new variable. You could reassign the points variable instead, but then you would lose the total point count from before gradual selection.
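
To keep both numbers, something like this minimal sketch should work (assuming the count refreshes after the selected points are actually removed, and with doc and chunk as usual):
Code: [Select]
chunk = doc.chunk
total_before = len(chunk.point_cloud.points)  # valid tie points before gradual selection

# ... run gradual selection and remove the selected points here ...

total_after = len(chunk.point_cloud.points)   # valid tie points afterwards
print("removed:", total_before - total_after)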

19
Hey mks_gis, nice coding.

    len(chunk.point_cloud.tracks) - counts valid and invalid tie points, right?
    len(chunk.point_cloud.points) - and this counts the initial number of valid tie points before gradual selection?

Have you managed to count tie points with Python after gradual selection? Every time I use len(chunk.point_cloud.points) I get the same initial value.

Quote
Currently needs to be run as one session, valid ties not accessible after gradual selection. To be fixed in next version using tracks.

    len(chunk.point_cloud.points) - valid ties
    len(chunk.point_cloud.tracks) - all ties
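
In the meantime, within a single session one workaround might be to count the selection itself right after selectPoints(), before removing anything. This is just a sketch and assumes Point objects expose .selected as in the 1.4 API:
Code: [Select]
point_cloud = chunk.point_cloud
selected = sum(1 for p in point_cloud.points if p.selected)  # points flagged by gradual selection
remaining = len(point_cloud.points) - selected               # what would stay once they are removed
print("selected:", selected, "remaining:", remaining)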

20
Hello!

How can I tell through Python whether rolling shutter compensation is enabled, or whether it has already been applied?

Thanks!

21
Try this!
Code: [Select]
camera.meta["Image/Quality"]
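
If the question is specifically whether rolling shutter compensation is switched on, the sensor settings might be the place to look. This is only a guess: the rolling_shutter attribute below is an assumption on my side, so check the API reference for your version (getattr keeps the snippet from failing if the attribute is missing):
Code: [Select]
for sensor in chunk.sensors:
    # rolling_shutter is assumed to be a boolean flag on Sensor; verify in your API version
    print(sensor.label, getattr(sensor, "rolling_shutter", "attribute not available"))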
Hope it helps!

22
Python and Java API / Re: Height Above Ground
« on: May 16, 2018, 07:33:27 PM »
Thank you Alexey and Erik! I'll give your suggestions a try... and I'll post the results here.

23
Python and Java API / Filtering Sparse Point Cloud
« on: May 16, 2018, 07:32:34 PM »
Hello! Is there a consensus on using gradual selection versus buildPoints() to filter the sparse point cloud by reprojection error?

I've checked the following topics, from which I extracted some quotes below:
http://www.agisoft.com/forum/index.php?topic=6287.0
http://www.agisoft.com/forum/index.php?topic=8140.0

Quote
GUI: Gradual Selection=X is not the same as buildPoints(error=X)
So they are not the same. And they are not equivalent for the filtering task. What's the difference?

For buildPoints() -- Build Points.
Code: [Select]
buildPoints(error=10[, min_image ][, progress])
Rebuild point cloud for the chunk.
Parameters
• error (float) – Reprojection error threshold.
• min_image (int) – Minimum number of point projections.
• progress (Callable[[float], None]) – Progress callback.

Quote
The perk of using this [buildPoints], apart from that it's shorter, is that statistical 'outliers' that were removed in previous stages could be reinstated if their errors are reduced to within the threshold.

As for the PointCloud.Filter() -- Gradual Selection:
Code: [Select]
threshold = 0.5
f = PhotoScan.PointCloud.Filter()
f.init(chunk, criterion = PhotoScan.PointCloud.Filter.ReprojectionError)
f.selectPoints(threshold)
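
For comparison, a minimal end-to-end sketch of the Gradual Selection route (selection, removal, then re-optimization) might look like this; removeSelectedPoints() and optimizeCameras() are assumed as in the 1.4 API:
Code: [Select]
threshold = 0.5
f = PhotoScan.PointCloud.Filter()
f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)
f.selectPoints(threshold)
chunk.point_cloud.removeSelectedPoints()  # unlike buildPoints(), removed matches are not brought back
chunk.optimizeCameras()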

Quote
I understand that all matches, even filtered ones, are kept within the database which means that chunk.buildPoints(error=X) {'coordinates applied...'} will 'resuscitate' some of them and is by no means equivalent to Gradual Selection:Reprojection Error

24
Python and Java API / Height Above Ground
« on: May 10, 2018, 11:26:59 PM »
Hello!

Is there a way to access the estimated (calculated) height above ground through Python (if it is already available after alignment)?

Thanks!
Luiz Fernando

25
Quote
This seems not to be true for 1.4.0. It is still camera.photo.meta["Image/Quality"] in that version. Maybe it changed in a more recent version. The API reference is not 100% clear about that.

I've experienced the same! Some changes didn't make it into 1.4.0 but may apply from 1.4.1 onward.

26
Python and Java API / Re: MetaData
« on: March 23, 2018, 05:43:59 AM »
Hello Alexey! Thank you for the reply!

I wonder what else I can retrieve from chunk.meta apart from the alignment duration. There's no mention in the API guide (1.4.0) of the available tags. How can I list the metadata from the chunk?
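
A quick way to see whichever tags are present is simply to print the MetaData objects; they appear to print like dictionaries (as the point_cloud.meta output suggests), but that is an assumption on my part:
Code: [Select]
print(chunk.meta)              # chunk-level metadata, e.g. the alignment duration tag
print(chunk.point_cloud.meta)  # sparse cloud metadata, e.g. 'match/duration'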

27
Python and Java API / Re: How to point cloud export json
« on: March 20, 2018, 10:51:13 PM »
From the PhotoScan API guide (PSv.1.4.0 from Dec 2017):

class PhotoScan.PointsFormat
Point cloud format in [PointsFormatNone, PointsFormatOBJ, PointsFormatPLY, PointsFormatXYZ, PointsFormatLAS, PointsFormatExpe, PointsFormatU3D, PointsFormatPDF, PointsFormatE57, PointsFormatOC3, PointsFormatPotree, PointsFormatLAZ, PointsFormatCL3, PointsFormatPTS, PointsFormatDXF, PointsFormatCesium]
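
As a hedged usage example (the exact keyword names should be checked against the 1.4 API reference), one of these constants would be passed to exportPoints(), e.g. for LAS:
Code: [Select]
chunk.exportPoints("sparse_cloud.las", format=PhotoScan.PointsFormatLAS)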

28
Python and Java API / MetaData
« on: March 20, 2018, 12:48:27 AM »
I'm trying to retrieve the alignment duration from the sparse point cloud (point_cloud) metadata, but chunk.point_cloud.meta only contains the following entries:

{'match/duration', 'match/match_downscale', 'match/match_point_limit', 'match/match_preselection_generic', 'match/match_preselection_reference', 'match/match_tiepoint_limit'}

Does anyone know where I can find the alignment duration?

Thanks in advance!

29
Python and Java API / Re: Creating and Managing Multiple Chunks
« on: March 06, 2018, 04:56:03 PM »
Just figured it out:

Code: [Select]
    # configuring the coordinate system for the project
    try:
        n_crs = PS.app.getCoordinateSystem("Select GCP Coordinate System...")
        print("GCP Coordinate System:\n{}".format(n_crs))
    except Exception:
        print("Error: unable to define GCP coordinate system")
        raise

    chunk = doc.chunk  # the chunk where the transform will happen
    crs = chunk.crs  # the chunk's current coordinate system

    # convert the coordinate system for the loaded images (cameras)
    for camera in chunk.cameras:
        if camera.reference.location is None:
            continue  # skip cameras without reference coordinates
        camera.reference.location = PS.CoordinateSystem.transform(
            camera.reference.location, crs, n_crs)

    # convert the coordinate system of the chunk itself
    chunk.crs = n_crs

    print("OLD COORDINATE SYSTEM:\n{}.".format(crs))
    print("\nNEW COORDINATE SYSTEM:\n{}\n".format(n_crs))

EDIT: code to fix the reference problem from previous post
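
If markers/GCPs are already loaded in the chunk, their references would presumably need the same conversion (same transform pattern as the camera loop above; this marker part is an assumption on my side and not tested):
Code: [Select]
    # convert the coordinate system for loaded markers (GCPs), if any
    for marker in chunk.markers:
        if marker.reference.location is None:
            continue
        marker.reference.location = PS.CoordinateSystem.transform(
            marker.reference.location, crs, n_crs)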

30
General / Re: PhotoScan Handling Different Flights
« on: March 06, 2018, 03:40:37 PM »
From the results in the last post I can draw some conclusions. Can anyone comment on them?

1. Comparing flights 1 and 2 from different altitudes: the reprojection error increases with the average flight altitude above ground. This is expected, since detailed terrain features become less recognizable in each aerial image, making image matching harder for the SIFT detector and descriptor (mean keypoint size). The higher altitude also means a coarser ground resolution (larger GSD). This is also reflected in the positioning error of the aerial images, with a relatively higher total error for the higher-altitude flight #2.

2. As expected, processing both flights together leads to an average GSD (3.04 cm/pix), mean keypoint size, and so on for the other parameters. But these values differ slightly from those obtained when merging the separate flights with the Merge Chunks... tool provided in PS. The parameters of the merged chunk are much closer to the arithmetic average of the flight 1 and flight 2 values than those of the chunk processed all together.

3. The RMS reprojection error and the total error in image positioning from the SIFT + SfM algorithms are slightly lower for the merged chunks from the individually processed flights, by 2.48% and 7.45% respectively. The number of tie points is also lower, by 1.71%, than for flights 1 + 2 processed together.

4. Aligning the chunks from flights 1 and 2 before merging them only affects the camera positioning errors. Camera locations were substantially degraded: the total error value increased by 50.57%!

Final:
As a first impression, there is not a big difference between processing the flights individually and merging them versus processing everything together. In my case I need to process them together because of the GCP distribution: I only have 7 GCPs and they are unevenly distributed across the two flights (for both flights together they are probably fine).

It seems the error differences between these two ways of processing the flights come from the different numbers of tie points identified. Aligning the chunks before merging them affects the camera locations, in this case degrading the positioning.

Am I missing something?
Is there a way in PS to calculate the overlap between flights?
