
Messages - UAV_project

Great it works now, thanks!

Here is a screenshot of the markers in the GUI after I ran the Python script to add them. As you can see, they lack Easting and Northing coordinates, although they should be there if my code is written correctly.

Python and Java API / Adding markers for georeferencing of an orthomosaic
« on: September 17, 2020, 05:30:04 PM »
I have looked through a number of threads and pieced together what I thought should work for importing markers and assigning them their image and real-world coordinates. However, after running the script, I fear something is wrong: when I check the Reference tab in the GUI, the markers do not show their real-world coordinates as assigned.

Here is the code I am using:

Code: [Select]
import Metashape as MS

# assumed to be defined earlier in the script:
# chunk      - the active Metashape chunk
# crs        - EPSG code of the marker coordinate system (int)
# separation - CSV field separator, e.g. ","
# accuracy   - [x, y, z] accuracy values for the marker coordinates

# set the coordinate reference system of the markers
chunk.marker_crs = MS.CoordinateSystem("EPSG::%d" % crs)
# use a large z accuracy value (i.e. low confidence) because
# we lack a good measure for the vertical component
chunk.marker_location_accuracy = [0.05, 0.05, 1000]

with open('./export.csv', 'rt') as file:  # input file
    lines = file.readlines()[1:]  # skip the header row

for line in lines:
    # split the line and strip stray quotation marks if they exist
    sp_line = [item.strip('"') for item in line.strip('\n').rsplit(separation, 6)]
    camera_name = sp_line[0]
    marker_name = sp_line[1]
    x = float(sp_line[2])   # x-coord in pixels
    y = float(sp_line[3])   # y-coord in pixels
    cx = float(sp_line[4])  # world x-coord of marker
    cy = float(sp_line[5])  # world y-coord of marker
    cz = float(sp_line[6])  # world z-coord of marker

    # create a pinned projection from the pixel coordinates
    pixel_coord = MS.Marker.Projection(MS.Vector([x, y]), True)
    # find the camera with the same name as in the current line
    for cam in chunk.cameras:
        if cam.label == camera_name:
            # search for an existing marker with the same label
            found = False
            for marker in chunk.markers:
                if marker.label == marker_name:
                    # assign the marker's pixel coords in the current cam
                    marker.projections[cam] = pixel_coord
                    found = True
            # if the marker does not yet exist in the chunk, create it
            if not found:
                marker = chunk.addMarker()
                marker.label = marker_name
                # assign the marker's pixel coords in the current cam
                marker.projections[cam] = pixel_coord
                # note: the attribute is lowercase 'reference' in the
                # Metashape API; 'marker.Reference' does not set the
                # real-world coordinates shown in the Reference pane
                marker.reference.location = MS.Vector([cx, cy, cz])
                marker.reference.accuracy = MS.Vector(accuracy)
                marker.reference.enabled = True

print("Import finished.")
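Since nothing in the loop above checks that each CSV row actually yields seven fields, a malformed row fails with an opaque IndexError or ValueError. A small parsing helper makes that explicit; this is only a sketch, and the separator and column order are assumptions taken from the script above:

```python
def parse_marker_line(line, sep=","):
    """Split one CSV row into camera label, marker label,
    pixel coordinates and world coordinates."""
    fields = [f.strip().strip('"') for f in line.rstrip("\n").split(sep)]
    if len(fields) != 7:
        raise ValueError("expected 7 fields, got %d" % len(fields))
    camera_name, marker_name = fields[0], fields[1]
    x, y = float(fields[2]), float(fields[3])     # pixel coords
    cx, cy, cz = (float(v) for v in fields[4:7])  # world coords
    return camera_name, marker_name, (x, y), (cx, cy, cz)
```

A row like `"IMG_001","GCP1",1024.5,768.0,500000.0,6400000.0,120.0` then comes back as clean, typed values, and bad rows fail loudly with a useful message instead of silently producing a marker with no coordinates.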

One additional question on this. If I add the markers like this, can I then later use the reference_preselection mode when matching the images? If so, would the following be correct (after having added the markers as above):

Code: [Select]
chunk.matchPhotos(generic_preselection=False, reference_preselection=True)
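For what it's worth, the call can be wrapped so the whole alignment step is reusable. This is only a sketch: the keyword names follow the Metashape 1.6 Python API, and whether reference preselection actually draws on marker coordinates (rather than the cameras' source coordinates) is exactly the open question here:

```python
def align_with_reference(chunk):
    """Match and align cameras using reference preselection only.
    'chunk' is expected to behave like a Metashape.Chunk."""
    chunk.matchPhotos(generic_preselection=False, reference_preselection=True)
    chunk.alignCameras()
```

Called as `align_with_reference(chunk)` after the marker import has run.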


General / Re: Only getting single band orthos when using multi-band imagery
« on: September 14, 2020, 10:39:35 AM »
Hi Alexey, attached is a screenshot showing the sensors.

General / Only getting single band orthos when using multi-band imagery
« on: September 11, 2020, 05:07:56 PM »
I am processing 6-band images into orthomosaics, and for some reason the output keeps coming out as a single greyscale band. I have processed many orthos using the same workflow before, but for this specific dataset I only get a single-band output. Below are some screenshots showing my workspace.

I have seen in the forum that there is a tweak that lets you change the depth_point_threshold. However, each post about it only explains how to set it in the GUI, and I am looking for a way to do this through Python. Is this possible, or am I forced to set this tweak manually in each new project?
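One possible route, sketched here as a small helper: advanced tweaks appear to be stored through the application settings object, which in Metashape would be `Metashape.app.settings`. The `main/` prefix and the exact key name are assumptions based on forum posts, not confirmed API documentation, so treat this as something to verify:

```python
def set_tweak(settings, name, value):
    """Write an advanced tweak through a settings object that
    exposes setValue(key, value), e.g. Metashape.app.settings.
    In a real script this would be called as:
        set_tweak(Metashape.app.settings, "depth_point_threshold", 10)
    """
    settings.setValue("main/%s" % name, str(value))
```

If the key name matches what the GUI tweak dialog writes, this would make the setting part of the automated workflow instead of a per-project manual step.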


I am using the Python API to generate orthomosaics, and I have noticed that very straight lines are being cut out of my orthos, which I thought was a problem of the bounding box being the wrong size (see attached). However, after re-sizing the bounding box manually and then re-processing the DEM and orthomosaic, the result has not changed. I am therefore wondering if this is caused by a failure to merge some of the image blocks together (again, see attached)? This is a big problem because these orthomosaics are missing a large amount of data; when I open the project, the point cloud appears to contain this data, yet it does not show up in my orthomosaic.

Is it just that a block of images could not be merged with the rest and is therefore left out no matter what I do, or is something else going on? I can gladly share the project file if necessary.

General / Re: How to manually re-size the bounding box in the GUI
« on: August 27, 2020, 11:21:05 AM »
Never mind the re-sizing part, I found it myself now, sorry. But my question remains: how come this is happening in the first place? Metashape has detected these points but does not include them in the bounding box region...

General / [SOLVED] How to manually re-size the bounding box in the GUI
« on: August 27, 2020, 11:01:57 AM »
Hey guys, I have a project where, for some reason, Metashape has been cutting off large portions of my orthomosaics because the bounding box was inappropriately sized during processing (see attached). I do all of the processing in Python, so I never noticed this before, but having checked manually I have found the issue.

For now, I want to know if I can manually drag/extend the bounding box to increase its dimensions so it covers all areas of the point cloud. In the long run, though, I would like to avoid this manual step, so I was wondering if anyone has insight into why this occurs in the first place and how I could avoid it in the future?
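A possible way to avoid the manual step, sketched against the Metashape API (where `chunk.region` is a `Metashape.Region` with center, rotation and size): scale the region up before building the DEM and orthomosaic. Whether a fixed factor is enough depends on how badly the box was sized, so the default here is only an illustration:

```python
def expand_region(chunk, factor=1.5):
    """Grow the chunk's bounding box by 'factor', keeping its
    center and rotation unchanged, and return the new size."""
    region = chunk.region
    region.size = region.size * factor
    chunk.region = region  # reassign so the chunk picks up the change
    return region.size
```

This only treats the symptom; it does not explain why the region is mis-sized in the first place.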


Hey guys and gals,

I am working on a project where we have two cameras attached to a UAV platform. These are meant to trigger simultaneously, so that for each triggering event we have 1 GPS location, and 2 images (one from each sensor).
[GPS1, GPS2, GPS3]

However, one of the cameras has trouble triggering, so for some triggers we only get the GPS and one image. We cannot efficiently determine which triggering events are associated with missed images, and so for the faulty camera we end up with a list of images shorter than the number of GPS positions.
[GPS1, GPS2, GPS3, GPS4, GPS5, GPS6]
[SONY1, SONY2, SONY3, SONY4]

So once a few images are missed, we no longer know which GPS triggers the Sony images match with, and so we cannot assign GPS coordinates to the Sony imagery; we end up with an orthomosaic for the Sony without any georeferencing information and are forced to georeference manually. I know that Metashape allows for markers, but we want to automate the entire process in Python, so this is something we want to avoid.

Is there some other way that Metashape can help me get GPS information into my images that lack proper reference data? I have tried loading the two image sets into different chunks: for the GPS chunk I add the images with the matching reference data and the proper CRS; for the non-GPS chunk I just add the images and leave the CRS as Local (m).

When I align the two chunks, is there maybe a way to check which images are most closely aligned with one another? For example, if images Sony5 and MCAW8 were aligned on top of each other, or shared a high amount of overlap (e.g. 95%), then I could take the GPS point from MCAW8 and assign it to Sony5:

Code: [Select]
# MCAW chunk
loc = chunk1.cameras[8].reference.location

# Sony chunk
chunk2.cameras[5].reference.location = loc
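The lookup itself is simple once both chunks are aligned into a common frame: for each unreferenced camera, find the referenced camera with the closest estimated position and copy its location, as in the snippet above. Here is a sketch of the distance search using plain label-to-position dicts; in Metashape these positions would come from each camera's estimated transform after chunk alignment, and the camera names are just the hypothetical ones from the example:

```python
def nearest_camera(target_pos, candidates):
    """Return the label of the candidate camera whose (x, y, z) position
    is closest to target_pos, along with the squared distance."""
    best_label, best_d2 = None, float("inf")
    for label, pos in candidates.items():
        d2 = sum((a - b) ** 2 for a, b in zip(pos, target_pos))
        if d2 < best_d2:
            best_label, best_d2 = label, d2
    return best_label, best_d2
```

A sanity threshold on the returned distance would guard against assigning a GPS point from a camera that merely happens to be the nearest of a bad set.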

I don't know if there is anything in Metashape that could help me perform something like this, but maybe someone here can give some advice?


General / Re: Coregistration of multiple orthomosaics
« on: July 30, 2020, 09:39:32 AM »
Ya sorry about the confusion.

But yes the two orthomosaics would be of the exact same area, just taken with different sensors.

I am wondering now if it is actually possible to just create two chunks, one with the GPS location data and the matching images, and one with the images that lack GPS data, and then try to align/merge the two chunks so that the images from both sensors end up with GPS information. But maybe this won't work because one chunk will use local coordinates and the other EPSG::4326.

I will give this a try, but if anyone has any further advice on how I could approach this problem I would be happy to hear it  :)

General / Re: Coregistration of multiple orthomosaics
« on: July 29, 2020, 12:32:08 PM »
@Probert, what screenshot would you like to see? I am actually doing it all using the Python API so there isn't really much to see.

@Bastiaan, I am trying to avoid the use of any manual processing steps as I am trying to develop an automated processing workflow, so manually adding in markers is something I am trying to avoid.

General / Coregistration of multiple orthomosaics
« on: July 21, 2020, 02:41:18 PM »

I was wondering if there is some way I could use Metashape to co-register two or more orthomosaics. The issue is that we only have GPS coordinates for one of the cameras on our UAV, and sometimes one of the other two cameras misses a trigger; we can't tell where the missed triggers occur, so we cannot match the other two sensors with the GPS coordinates from the one sensor.

So if I were to create one orthomosaic with the GPS coordinates, and then two without GPS, could I then re-load these files into Metashape and somehow combine them all into one georeferenced orthomosaic?



So I have exported the point cloud from Metashape and I have a set of planes with the names plane 1, plane 2, ...

I am wondering if these planes are in the same order as the sensors in the chunk.sensors list? Or does setting the master band to a different band have any effect on the ordering of the point cloud planes?
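If the export really does follow the `chunk.sensors` list, which is an assumption worth verifying on a small test dataset, the mapping from plane index to band can at least be made explicit and logged alongside the export:

```python
def plane_to_band(sensors):
    """Map point-cloud plane index to sensor label, assuming the
    exported planes follow the order of the sensors sequence
    (an assumption, not confirmed behavior)."""
    return {i: getattr(s, "label", str(s)) for i, s in enumerate(sensors)}
```

Writing this dict out next to the exported point cloud would remove any later guesswork about which plane corresponds to which band.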


Python and Java API / Re: Getting keypoints for a Camera/Photo
« on: May 19, 2020, 12:22:39 PM »
Wouldn't the tie points be just as good, if not better, than the key points? That is the assumption I have been working under in using the tie point coordinates for homography estimation. Alexey, would you say the tie points are of better quality than the key point pairs?
