Forum

Author Topic: Aligning Images from one camera using data from another camera python  (Read 5922 times)

christopher.giesige@sjsu.

Hey Metashape,

I have thermal aerial imagery of wildland fire taken with two different cameras: a mid-wave IR bandpass and a long-wave IR bandpass. After building the orthomosaics, the mosaics from the long-wave camera georeference nicely; however, the mid-wave mosaics can be shifted quite a bit from the long-wave mosaics. Ideally, the mosaics from both cameras captured during the same time frame would overlap precisely.

I know the mid-wave images can be tougher to work with, but does anyone have suggestions for ways I could use the data from the long-wave camera to bring the mid-wave images into alignment with it?

I am currently building the mosaics from both cameras in Metashape and then running an affine transformation to re-georeference the mid-wave mosaics to match the long-wave mosaics, but I thought there might be a way to make them match within Metashape during the georeferencing process.
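For context, the affine re-georeferencing step can be sketched like this: given a handful of matched control points between the two mosaics, a 2D affine transform can be fit by least squares (the function names here are illustrative, not my actual script):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit a 2D affine transform mapping src_pts -> dst_pts by least squares.

    src_pts, dst_pts: (N, 2) arrays of matched (x, y) control points, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Homogeneous source coordinates: [x, y, 1]
    ones = np.ones((src.shape[0], 1))
    G = np.hstack([src, ones])            # (N, 3)
    # Solve G @ A.T ~= dst for the 2x3 affine matrix A
    A_T, *_ = np.linalg.lstsq(G, dst, rcond=None)
    return A_T.T                          # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix A to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ A.T
```

The fitted matrix can then be composed with the mid-wave mosaic's geotransform (or used to warp the raster) to shift it onto the long-wave mosaic.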

Any suggestions would be helpful.
Thank you,
Chris     

christopher.giesige@sjsu.

Re: Aligning Images from one camera using data from another camera python
« Reply #1 on: August 29, 2024, 12:48:57 AM »
I am trying to solve this by importing the mid-wave IR images, the navigation and camera-orientation reference data for those images, and reference data from the long-wave IR camera to be used as markers. The idea is that, once the long-wave reference data is imported as markers, those markers can help align the mid-wave images to the geolocation points the long-wave imagery was georeferenced to.

Since the "Identification" of the long-wave images does not match that of the mid-wave images (different frame-numbering systems), I try to match each marker to a mid-wave camera by longitude and latitude: if a marker has (nearly) the same longitude and latitude as a camera, I attach it to that camera. However, it has not really helped much. Any thoughts or suggestions are well appreciated!
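The marker-to-camera lookup itself doesn't depend on Metashape, so the nearest-neighbour match with a tolerance can be sketched in plain Python (function names and the default tolerance are illustrative and would need tuning for the actual flight geometry):

```python
import math

def nearest_camera(marker_lonlat, camera_lonlats, tol_deg=1e-4):
    """Return the index of the camera closest to the marker, or None.

    marker_lonlat: (lon, lat) of the marker.
    camera_lonlats: list of (lon, lat) tuples, one per camera.
    tol_deg: maximum allowed separation in degrees (roughly 11 m per
             1e-4 deg of latitude; adjust for the image footprint).
    """
    mlon, mlat = marker_lonlat
    best_idx, best_dist = None, tol_deg
    for i, (clon, clat) in enumerate(camera_lonlats):
        # Scale longitude by cos(lat) so degree distances are roughly isotropic
        d = math.hypot((clon - mlon) * math.cos(math.radians(mlat)),
                       clat - mlat)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx
```

Comparing numeric distances this way avoids the pitfalls of matching coordinate strings digit-by-digit (sign handling, varying precision, near-misses at digit boundaries).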

Here is the code I have so far:
chunk = doc.addChunk()
# chunk label is derived from the reference file name (hard-coded slice into the path)
chunk.label = "pass_" + telops_reference_path[95:-4]

# add images to chunk
chunk.addPhotos(telops_images)

# import references for photos
chunk.crs = Metashape.CoordinateSystem("EPSG::4326")


chunk.importReference(path = telops_reference_path, format = Metashape.ReferenceFormatCSV, skip_rows=1,
                      columns = 'nxyzcbaXYZCBA', delimiter = "\t", crs = Metashape.CoordinateSystem('EPSG::4326'))

chunk.importReference(path = workswell_markers_path, format = Metashape.ReferenceFormatCSV, skip_rows=2, columns = 'nxyzabcXYZABC',
                        delimiter = "\t", items = Metashape.ReferenceItemsMarkers, create_markers = True)


for camera in chunk.cameras:
    camera.reference.enabled = True
    camera.reference.rotation_enabled = True

# tolerance in degrees for matching a marker to the nearest camera
# (assumption: tune for your flight spacing; 1e-4 deg is ~11 m of latitude)
MATCH_TOL_DEG = 1e-4

for marker in chunk.markers:
    marker.reference.enabled = True

    marker_lon = marker.reference.location.x
    marker_lat = marker.reference.location.y

    closest_camera = None
    closest_dist = MATCH_TOL_DEG

    for camera in chunk.cameras:
        if camera.reference.location is None:
            continue
        camera_lon = camera.reference.location.x
        camera_lat = camera.reference.location.y

        # compare actual distance in degrees instead of string prefixes,
        # so closest_camera really is the nearest camera within tolerance
        dist = ((camera_lon - marker_lon) ** 2 + (camera_lat - marker_lat) ** 2) ** 0.5
        if dist < closest_dist:
            closest_camera = camera
            closest_dist = dist

    if closest_camera is not None:
        # NB: an empty Projection carries no pixel coordinate, so it will not
        # constrain alignment by itself; to have an effect it needs image
        # coordinates, e.g. Metashape.Marker.Projection(Metashape.Vector([px, py]), True)
        marker.projections[closest_camera] = Metashape.Marker.Projection()
    else:
        print(f"Could not find a matching camera for marker {marker.label} within tolerance.")
           
           

chunk.matchPhotos(downscale=0, filter_stationary_points=False, generic_preselection = True,
                  reference_preselection=True, reference_preselection_mode = Metashape.ReferencePreselectionSequential,
                  keypoint_limit = 20000, tiepoint_limit = 5000)

chunk.alignCameras()

realign_list = list()
for camera in chunk.cameras:
    if not camera.transform:
        realign_list.append(camera)
if realign_list:
    chunk.alignCameras(cameras = realign_list)

chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True, fit_b1=False, fit_b2=False, fit_k1=True,
                       fit_k2=True, fit_k3=True, fit_k4=True, fit_p1=True, fit_p2=True, adaptive_fitting=True)

# doc.save()

chunk.importRaster(path = dem_path, raster_type = Metashape.DataSource.ElevationData,
                            crs = Metashape.CoordinateSystem('EPSG::4326'), nodata_value = -999999)

chunk.buildOrthomosaic(surface_data=Metashape.DataSource.ElevationData, blending_mode=Metashape.BlendingMode.MosaicBlending,
                                projection = Metashape.OrthoProjection(crs = Metashape.CoordinateSystem('EPSG::4326')))

chunk.exportRaster(path = output_mosaic_folder + str(chunk.label) + "_ww_to_telops.tif", format=Metashape.RasterFormatTiles,
                            image_format=Metashape.ImageFormat.ImageFormatTIFF, source_data = Metashape.OrthomosaicData,
                            white_background = False, save_alpha = True)

print(f"Mosaic exported for chunk: {chunk.label}")
doc.save()