Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - UAV_project

Pages: [1]
Python and Java API / Adding markers for georeferencing of an orthomosaic
« on: September 17, 2020, 05:30:04 PM »
I have looked through a number of threads and pieced together what I thought should work for importing markers and assigning them their image and real-world coordinates. However, after running the script I fear something is wrong: when I check the Reference tab in the GUI, it doesn't show that my markers have their real-world coordinates assigned.

Here is the code I am using:

Code: [Select]
import Metashape as MS

chunk = MS.app.document.chunk

# settings used below; adjust to your project
crs = 32633            # EPSG code of the marker coordinate system (example)
separation = ","       # column separator used in export.csv
accuracy = [0.05, 0.05, 1000.0]  # per-marker accuracy in CRS units

# set the coordinate ref system of the markers
chunk.marker_crs = MS.CoordinateSystem("EPSG::%d" % crs)
# set the z-accuracy very high because we don't have a good measure for it
chunk.marker_location_accuracy = MS.Vector([0.05, 0.05, 1000.0])

with open('./export.csv', 'rt') as file:  # input file
    lines = file.readlines()[1:]  # skip the header

for line in lines:
    # split the line and strip extra quotation marks if they exist
    sp_line = [item.strip('"') for item in line.strip('\n').rsplit(separation, 6)]
    camera_name = sp_line[0]
    marker_name = sp_line[1]
    x = float(sp_line[2])   # x-coord in pixels
    y = float(sp_line[3])   # y-coord in pixels
    cx = float(sp_line[4])  # world x-coord of marker
    cy = float(sp_line[5])  # world y-coord of marker
    cz = float(sp_line[6])  # world z-coord of marker

    # create a projection from the pixel coordinates
    pixel_coord = MS.Marker.Projection(MS.Vector([x, y]), True)
    # find the camera named in the current line
    for cam in chunk.cameras:
        if cam.label != camera_name:
            continue
        # search for a marker with the same label as in the file
        for marker in chunk.markers:
            if marker.label == marker_name:
                break
        else:
            # marker does not exist yet in the chunk, so create it
            marker = chunk.addMarker()
            marker.label = marker_name
            # add the world coordinates to the marker; note the attribute
            # is lowercase "reference" -- writing to "marker.Reference"
            # (the nested class) leaves the Reference pane empty
            marker.reference.location = MS.Vector([cx, cy, cz])
            # set the accuracy of the marker's coords and enable it
            marker.reference.accuracy = MS.Vector(accuracy)
            marker.reference.enabled = True
        # assign the marker's pixel coords in the current camera
        marker.projections[cam] = pixel_coord
        break  # camera found; move on to the next line of the file

print("Import finished.")

One additional question to this. If I add the markers like this, can I then later use the reference_preselection mode when I am trying to match the images? If so, would the following be correct (after having added the markers as above):

Code: [Select]
chunk.matchPhotos(generic_preselection=False, reference_preselection=True)


General / Only getting single band orthos when using multi-band imagery
« on: September 11, 2020, 05:07:56 PM »
I am processing 6-band images into orthomosaics and for some reason they keep coming out as a single greyscale band. I have processed many orthos using the same workflow before, but for this specific dataset I only get a single-band output. Below are some screenshots showing my workspace.

I have seen in the forum that there is a tweak that lets you change the depth_point_threshold. However, each post about this only explains how to perform this using the GUI, but I am looking for a way to do this through Python. Is this possible, or am I forced to manually set this tweak in each new project?
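In case it helps others, one route I am considering (an assumption on my part, since the posts I found only cover the GUI) is writing the tweak through Metashape's application settings store. The "main/" key prefix and the exact tweak name are assumptions; the pairing logic itself is plain Python:

```python
# Sketch: write GUI-style tweaks through the application settings store.
# The "main/" prefix and the tweak key name are assumptions; check what
# your GUI instance actually writes before relying on this.
def apply_tweaks(settings, tweaks):
    """Persist each tweak key/value pair into the settings object."""
    for key, value in tweaks.items():
        settings.setValue("main/" + key, str(value))

# Inside Metashape this would presumably be called as:
#   import Metashape
#   apply_tweaks(Metashape.app.settings, {"depth_point_threshold": "100"})
```

This would run at the top of each processing script, so no per-project GUI step would be needed if it works.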


I am using the Python API to generate orthomosaics and I have noticed that areas with very straight edges are being cut out of my orthos, which I thought was a problem of the bounding box being the wrong size (see attached). However, after re-sizing the bounding box manually and then re-processing the DEM and orthomosaic, the result has not changed. I am therefore wondering if this is caused by a failure to merge some of the image blocks together (again, see attached)? This is a big problem for us: in all of these orthomosaics we have lost a large amount of data, yet when I open the project the point cloud appears to include this data, while it does not appear in my orthomosaic.

Is it just that a block of images could not be merged to the rest and thus they are being left out no matter what I do, or is there something else going on? I can gladly share the project file if necessary.

General / [SOLVED] How to manually re-size the bounding box in the GUI
« on: August 27, 2020, 11:01:57 AM »
Hey guys, I have a project where, for some reason, Metashape has been cutting off large portions of my orthomosaics because the bounding box was inappropriately sized during processing (see attached). I do all of the processing in Python, so I never noticed this before, but having checked manually I have found the issue.

For now, I want to know if I can manually drag/extend the bounding box to increase its dimensions so it covers all areas of the point cloud. In the long run, though, I would like to avoid this manual step, so I was wondering if anyone has insight into why this is occurring in the first place and how I could avoid it in the future?


Hey guys and gals,

I am working on a project where we have two cameras attached to a UAV platform. These are meant to trigger simultaneously, so that for each triggering event we have 1 GPS location, and 2 images (one from each sensor).
[GPS1, GPS2, GPS3]

However, one of the cameras has trouble triggering and so for some triggers we only get the GPS and one image. We cannot efficiently determine which triggering events are associated with missed images, and so for the faulty camera we end up with a list of images that is less than the number of GPS positions.
[GPS1, GPS2, GPS3, GPS4, GPS5, GPS6]
[Sony1, Sony2, Sony3, Sony4]

So once a few images are missed we no longer know which GPS triggers the Sony images match with, we cannot assign GPS coordinates to the Sony imagery, and we end up with a Sony orthomosaic without any georeferencing information, forcing us to georeference manually. I know that Metashape allows for markers, but we want to automate the entire process in Python, so this is something we want to avoid.
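For completeness, the pairing we would ideally do looks like this in sketch form. It assumes both the trigger log and the images carry timestamps on roughly the same clock (not guaranteed with our setup), in which case each image could be matched to its nearest trigger and unmatched triggers dropped:

```python
# Sketch: pair each image timestamp with its nearest GPS trigger timestamp.
# Assumes both lists are in seconds on roughly the same clock; the
# tolerance guards against matching across a genuinely missed trigger.
def pair_by_time(trigger_times, image_times, tolerance=1.0):
    """Return {image_index: trigger_index} for images within tolerance."""
    pairs = {}
    for i, t_img in enumerate(image_times):
        j = min(range(len(trigger_times)),
                key=lambda j: abs(trigger_times[j] - t_img))
        if abs(trigger_times[j] - t_img) <= tolerance:
            pairs[i] = j
    return pairs
```

In practice the image side would come from EXIF capture times, which is exactly the data we are unsure about, hence the question below.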

Is there some other way that Metashape can help me to get GPS information into my images that don't have proper reference data? I have tried to load the two image sets into different chunks. So for the GPS chunk I add the images and matching reference data and give the proper CRS. For the non-GPS chunk I just add the images and leave the CRS in Local (m).

When I align the two chunks, is there maybe a way I can check which images are most closely aligned with one another? For example, if images Sony5 and MCAW8 were aligned on top of each other, or shared a high amount of overlap (e.g. 95%), then I could take the GPS point from MCAW8 and assign it to Sony5:

Code: [Select]
# MCAW chunk
loc = chunk1.cameras[8].reference.location

# Sony chunk
chunk2.cameras[5].reference.location = loc

I don't know if there is anything in Metashape that could help me perform something like this, but maybe someone here can give some advice?
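To show what I mean, here is a sketch of the matching step. In Metashape the positions would come from the estimated camera centers in the common aligned frame; here they are plain (x, y, z) tuples so the logic is self-contained, and the distance threshold is a made-up parameter:

```python
import math

# Sketch: after both chunks are aligned into a common frame, match each
# Sony camera to the closest MCAW camera by estimated camera position.
def closest_pairs(sony_centers, mcaw_centers, max_dist):
    """Return {sony_index: mcaw_index} for cameras within max_dist."""
    pairs = {}
    for i, s in enumerate(sony_centers):
        dists = [math.dist(s, m) for m in mcaw_centers]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] <= max_dist:
            pairs[i] = j
    return pairs
```

The resulting index pairs would then drive the reference-copying snippet above.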


General / Coregistration of multiple orthomosaics
« on: July 21, 2020, 02:41:18 PM »

I was wondering if there is some way that I could use Metashape to co-register two or more orthomosaics together. The issue is that we only have GPS coordinates for one of the cameras on our UAV, and sometimes one of the other two cameras will miss a trigger; we can't tell where the missed triggers occur, so we cannot match the other two sensors with the GPS coordinates from the one sensor.

So if I was to create one orthomosaic with the GPS coordinates, and then two without GPS, could I then re-load these files into Metashape and somehow get them all combined into one georeferenced orthomosaic?



So I have exported the point cloud from Metashape and I have a set of planes with the names plane 1, plane 2, ...

I am wondering if these planes are in the same order as the sensors in the chunk.sensors list? Or does the setting of a master band to a different band have any effect on the ordering of the point cloud planes?


Hey guys,

I am using Metashape for orthomosaics, but I was wondering if it could perhaps be used for another task in my project.
I have a set of MicaSense images where the bands are unaligned. What I want is to align all bands for a given image. I have tried to play around with it myself, using a 3x3 subset of the[0].transform as a homography matrix, but it doesn't seem to be working.

Is it possible for Metashape to help me with this task, or should I look elsewhere? I have also tried OpenCV (cv2), but it seems to have poor support for 16-bit imagery (which MicaSense is), and my warped images come back as 8-bit.
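For what it's worth, whatever the source of the 3x3 matrix, applying it to pixel coordinates needs the perspective divide, which is easy to drop when lifting a submatrix out of a 4x4 transform. A minimal pure-Python version of just that step:

```python
# Sketch: apply a 3x3 homography H (row-major nested lists) to a pixel.
# The divide by w is the perspective normalization; without it the
# warped bands drift for anything but a pure affine matrix.
def warp_point(H, x, y):
    """Map (x, y) through H and return the dehomogenized result."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v
```

On the OpenCV side, cv2.warpPerspective itself preserves 16-bit input; the 8-bit output I saw may simply be because cv2.imread converts to 8-bit unless cv2.IMREAD_UNCHANGED is passed.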


Python and Java API / Quality measure of keypoints/tiepoints
« on: April 09, 2020, 03:57:43 PM »
Hey guys,

I found in previous posts that I can use the projections of the tie points to find the pixels which match within different images. However, I am now wondering, is it possible for me to get some information on how confident the algorithm is that these points are matched? Something like a similarity score for a given projection track?


General / Calculation of pixel size by Metashape
« on: March 13, 2020, 12:21:05 PM »
I have noticed that Metashape will calculate the pixel_width, pixel_height, and pixel_size seemingly on its own (using the Python API) when no values are explicitly provided. The values are very close to what I have calculated myself, but there is still some discrepancy:

My calculation :  0.003916667
Metashape calc : 0.00403846

So I am wondering, how does Metashape come up with this value? For my own calculation I used the sensor width or height in mm divided by sensor width or height in pixels.

Also, what would the effect be of using one value over the other? Does using values that differ by such a small margin have any real effect on the output (DEM, orthomosaic)?
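For reference, here is my calculation, plus the alternative I suspect Metashape falls back on when physical sensor dimensions are missing from the metadata: estimating the sensor width from the EXIF 35 mm equivalent focal length. That fallback is purely my assumption, and the example numbers (23.5 mm / 6000 px) are hypothetical values that happen to reproduce my figure above:

```python
# My calculation: physical sensor width (mm) divided by width in pixels.
def pixel_size_from_sensor(sensor_width_mm, width_px):
    return sensor_width_mm / width_px

# Assumed alternative: estimate the sensor width from the crop factor
# implied by the EXIF 35 mm equivalent focal length (full-frame width
# is 36 mm). Whether Metashape does exactly this is my assumption.
def pixel_size_from_35mm_equiv(focal_mm, focal_35mm_equiv, width_px):
    est_sensor_width_mm = 36.0 * focal_mm / focal_35mm_equiv
    return est_sensor_width_mm / width_px

# pixel_size_from_sensor(23.5, 6000) gives ~0.0039167 mm,
# matching the first value quoted above.
```

A small difference between the two routes would come entirely from how accurate the 35 mm equivalent tag is, which could explain the discrepancy I am seeing.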

