
Author Topic: Connectivity groups  (Read 2754 times)

Elisa Normier

Connectivity groups
« on: June 19, 2023, 05:58:33 PM »
Hello,

I would like to know if it is possible to access the different camera groups from the "Connectivity" tab of the Survey Statistics dialog through the Metashape API (without using the GUI).

Thanks

tkwasnitschka

Re: Connectivity groups
« Reply #1 on: June 28, 2023, 05:19:56 AM »
We wrote a script for this. It actually visualizes the connectivity through shapes and sorts the cameras into groups by their connected components. The script even lets you save the components as chunks or as individual projects, to make setting markers easier. I have the impression that the results differ from what the components tell you. Also, this shows how often there is no loop closure even within components!
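The grouping step (collapsing the pairwise matches into connected components) can be tried out on its own, without Metashape. Here is a minimal sketch in plain Python; the camera labels are made up for illustration, and in the real script the pairs are Camera objects rather than strings:

```python
# Group pairwise matches into connected components.
# Labels are hypothetical; the real script works on Camera objects.

def connected_components(pairs):
    """Merge (a, b) pairs into connected groups."""
    group_of = {}  # element -> the list (group) it currently belongs to
    for a, b in pairs:
        ga, gb = group_of.get(a), group_of.get(b)
        if ga is None and gb is None:
            group_of[a] = group_of[b] = [a, b]       # start a new group
        elif ga is None:
            gb.append(a)                             # join b's group
            group_of[a] = gb
        elif gb is None:
            ga.append(b)                             # join a's group
            group_of[b] = ga
        elif ga is not gb:
            merged = ga + gb                         # merge two groups
            for el in merged:
                group_of[el] = merged
    # several keys share the same list object, so deduplicate
    return {tuple(g) for g in group_of.values()}

pairs = [("IMG_01", "IMG_02"), ("IMG_02", "IMG_03"), ("IMG_10", "IMG_11")]
print(sorted(connected_components(pairs)))
# two components: IMG_01..IMG_03 and IMG_10..IMG_11
```

This is the same merge-lists idea the script's connected_tuples() function uses; each component then becomes one camera group.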
This is the original thread:

https://www.agisoft.com/forum/index.php?topic=6989.msg34472#msg34472

Alexey wrote the original part of it and my PhD student expanded it to run in a multiprocessing environment, since the script is terribly slow on thousands of images when run from the GUI. You therefore need to set up a Python virtual environment and run the script from there, as described here:

https://git.geomar.de/arena/photogrammetry/metashape-scripts
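For reference, the windowing trick the script relies on (comparing cameras only within fixed-size slices of the photo list, so the i-j match matrix stays small) boils down to a simple slicing loop. A standalone sketch with invented labels, not the actual script:

```python
# Enumerate the camera pairs that would be tested for shared tie points
# when the photo list is processed in fixed-size, non-overlapping windows.
# Photo labels are invented for illustration.

def windowed_pairs(photos, window):
    pairs = []
    for start in range(0, len(photos), window):
        subset = photos[start:start + window]
        for i in range(len(subset)):
            for j in range(i + 1, len(subset)):
                pairs.append((subset[i], subset[j]))
    return pairs

photos = [f"IMG_{n:02d}" for n in range(6)]
for pair in windowed_pairs(photos, 3):
    print(pair)
# pairs across the window boundary (e.g. IMG_02/IMG_03) are never tested,
# which is why this shortcut only suits time-sequential image sets
```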

Code: [Select]
# plot valid matches as a connectogram
# This version looks at the input photo set in windows to keep the match
# matrix small; that is mainly useful for time-sequential sets and if you
# don't actually plot the connections. Controlled through a variable (window).
# It also builds the connection graph.

# clear all old variables with %reset -f in console

# Use the following command in cmd.exe (not powershell) to install numpy and tqdm
# "%programfiles%\Agisoft\Metashape Pro\python\python.exe" -m pip install numpy tqdm

import concurrent.futures
import datetime
import multiprocessing
import os
import random
import time
from typing import Tuple

import numpy as np
from Metashape import *
from numpy.typing import NDArray
from tqdm import tqdm

################################################


def random_color():
    """
    Generate a random color for the graph visualization.
    https://stackoverflow.com/questions/28999287/generate-random-colors-rgb/28999469
    """
    levels = range(32, 256, 32)
    return tuple(random.choice(levels) for _ in range(3))

#####################################################


def connected_tuples(matchlist):
    """
        find the connected graphs (components) in the project so they can be exported
        https://ideone.com/tz9t7m
        https://stackoverflow.com/questions/28980797/given-n-tuples-representing-pairs-return-a-list-with-connected-tuples
    """
    # for every element, we keep a reference to the list it belongs to
    lists_by_element = {}

    def make_new_list_for(x, y):
        lists_by_element[x] = lists_by_element[y] = [x, y]

    def add_element_to_list(lst, el):
        lst.append(el)
        lists_by_element[el] = lst

    def merge_lists(lst1, lst2):
        merged_list = lst1 + lst2
        for el in merged_list:
            lists_by_element[el] = merged_list

    for x, y in matchlist:
        xList = lists_by_element.get(x)
        yList = lists_by_element.get(y)

        if not xList and not yList:
            make_new_list_for(x, y)
        elif xList and not yList:
            add_element_to_list(xList, y)
        elif yList and not xList:
            add_element_to_list(yList, x)
        elif xList is not yList:
            merge_lists(xList, yList)

    # return the unique lists present in the dictionary
    return set(tuple(l) for l in lists_by_element.values())


def find_valid_points_for_photo(proj: Tuple[Camera, NDArray], photo_matches: dict[Camera, set[int]]):
    """
    Find valid points for a photo (proj[0]) and save them to photo_matches dict

    @param proj:
    @param photo_matches:
    """
    total = set()  # only valid
    point_index = 0

    for track_id in proj[1]:
        while point_index < npoints and points_track_ids[point_index] < track_id:
            point_index += 1
        if point_index < npoints and points_track_ids[point_index] == track_id:
            if points_valids[point_index]:
                total.add(point_index)

    photo_matches[proj[0]] = total


def process_camgroup(i: int):
    """
    Creates the lines between cameras
    """
    shapeGroup: ShapeGroup = chunk.shapes.addGroup()
    shapeGroup.label = str(i)
    shapeGroup.show_labels = False
    shapeGroup.color = random_color()

    cameraGroup: CameraGroup = chunk.addCameraGroup()
    cameraGroup.type = CameraGroup.Folder
    cameraGroup.label = str(i)

    camlist = connections[i]
    for camera in camlist:
        camera.group = cameraGroup
        if camera.label not in camera_lines_dict:
            continue
        for idx, coord_tuple in enumerate(camera_lines_dict[camera.label]):
            # only draw every n-th connection line
            # https://stackoverflow.com/questions/1403674/pythonic-way-to-return-list-of-every-nth-item-in-a-larger-list
            if idx % draw_only_nth_line != 0:
                continue

            shape: Shape = chunk.shapes.addShape()
            shape.label = str(camera.label)
            shape.attributes["Matches"] = str(coord_tuple)
            shape.group = shapeGroup
            shape.geometry = Geometry.LineString(coord_tuple)


if __name__ == "__main__":
    #########################################
    # declare variables and initialize necessary stuff

    doc: Document = Document()
    path = input("# Input project path:\n")
    doc.open(os.path.normpath(path))

    for i, chunk in enumerate(doc.chunks):
        print(i, chunk.label)

    chunk_number = input(
        f"# Select the chunk you want to work with (a number from 0 to {len(doc.chunks)-1}):\n")

    chunk: Chunk = doc.chunks[int(chunk_number)]

    point_cloud: TiePoints = chunk.tie_points
    points: TiePoints.Points = point_cloud.points

    # Separate track ids and valid status into their own numpy arrays for faster access
    points_track_ids: NDArray = np.array([point.track_id for point in points])
    points_valids: NDArray = np.array([point.valid for point in points])

    point_projections: TiePoints.Projections = point_cloud.projections
    npoints = len(points)

    photo_matches: dict[Camera, set[int]] = dict()

    # step = 3
    # window = 1199
    step = int(input("# Minimum number of valid tie points:\n"))
    window = int(input("# Window size for tiepoint matching:\n"))
    draw_only_nth_line = int(
        input("# Draw only each n-th connection line. Enter a number, e.g. 5 (1 means draw every line, 0 will lead to error):\n"))
    copy_chunks = input(
        "# Copy chunks and write them to their own project? Enter y for yes:\n")


    # make sure the chunk has a shape layer to write to
    if not chunk.shapes:
        chunk.shapes = Shapes()
        chunk.shapes.crs = chunk.crs

    print("# Starting script")
    t0 = time.time()

    print(f"# Time: {datetime.datetime.now().isoformat()}")

    ################################################
    # Select cameras to work on
    selected_photos: list[Camera] = list()

    # Choice a: use the photos selected in the GUI
    print("working with preselected photos")
    photo: Camera
    for photo in chunk.cameras:
        if photo.selected and photo.transform:
            selected_photos.append(photo)

    if not selected_photos:
        # raise Exception("You need to select images for this operation!")
        print("nothing selected!")
        print("working with all aligned photos")

        # Choice b: Automatically select all aligned photos:
        # http://www.agisoft.com/forum/index.php?topic=6029.0
        camera: Camera
        for camera in chunk.cameras:
            if camera.transform:
                selected_photos.append(camera)

    ############################################
    # Find image pairs that share valid matches:

    matchlist: list[Tuple[Camera, Camera]] = []
    camera_lines_dict: dict[str, list[Tuple[Vector, Vector]]] = {}

    # TODO: Work without subsets
    # https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
    for subset in range(0, len(selected_photos), window):
        print("processing subset:", subset, " to ", subset + window + 0)
        subset_cams = selected_photos[subset:subset + window + 0]

        # projections_iter is a list of (camera, track_id array) tuples:
        # for each camera, the numpy array holds the track_id of every
        # point in that camera's projections
        projections_iter: list[Tuple[Camera, NDArray]] = [
            (photo, np.array([point.track_id for point in point_projections[photo]]))
            for photo in subset_cams
        ]

        # Use a thread pool to find the valid points/matches for each camera
        with concurrent.futures.ThreadPoolExecutor(multiprocessing.cpu_count()) as executor1:
            executor1.map(lambda proj: find_valid_points_for_photo(
                proj, photo_matches), projections_iter)

        t1 = time.time() - t0
        print(f"# Time: {datetime.datetime.now().isoformat()}")
        print(f"# Creation of valid matches completed in {t1:.2f} seconds.")

        print("# Start processing photo matches (i-j-matching)")

        # Iterate through the cameras in nested for-loops
        # To find the cameras/photos which are connected and get their positions
        # (for the connection lines later on)
        for i in tqdm(range(len(subset_cams))):
            if subset_cams[i] not in photo_matches.keys():
                continue

            for j in range(i + 1, len(subset_cams)):
                if subset_cams[j] not in photo_matches.keys():
                    continue

                # & creates the intersection/overlap of the two match sets
                matches = photo_matches[subset_cams[i]] & photo_matches[subset_cams[j]]
                if len(matches) > step:
                    pos_i: Vector = chunk.crs.project(
                        chunk.transform.matrix.mulp(subset_cams[i].center))
                    pos_j: Vector = chunk.crs.project(
                        chunk.transform.matrix.mulp(subset_cams[j].center))
                    # remember the connected camera objects
                    matchlist.append((subset_cams[i], subset_cams[j]))

                    # camera_lines_dict maps a camera label to a list of
                    # coordinate tuples; each tuple represents one connection
                    # line between two photos/cameras
                    camera_lines_dict.setdefault(
                        subset_cams[i].label, []).append((pos_i, pos_j))

                    # TODO: find a way to show all the connections, or a deliberate subset
        # Metashape.app.update()

    t1 = time.time() - t0
    print(f"\n# Time: {datetime.datetime.now().isoformat()}")
    print(f"# Definition of valid matches completed after {t1:.2f} seconds (total from start of script).\n")

    ####################################
    # find the connected graphs:
    connections = list(connected_tuples(matchlist))

    #########################################
    # Create camera groups from list of connected graphs

    # http://www.agisoft.com/forum/index.php?topic=6383.0
    # http://www.agisoft.com/forum/index.php?topic=4076.0

    camgroups = range(len(connections))
    # See e.g. here for multiprocessing: https://github.com/agisoft-llc/metashape-scripts/blob/master/src/footprints_to_shapes.py
    with concurrent.futures.ThreadPoolExecutor(multiprocessing.cpu_count()) as executor:
        executor.map(process_camgroup, camgroups)

    t1 = time.time() - t0
    print(f"\n# Time: {datetime.datetime.now().isoformat()}")
    print(f"# Processing camgroups finished after {t1:.2f} seconds (total from start of script).\n")

    # create chunks from camera groups and export them as projects.
    # This block has problems with multiprocessing:
    # chunk.copy() assigns a key to newchunk, but if a chunk with that key
    # already exists the key is incremented; with multiprocessing this can
    # produce chunks with the same key because the copies happen simultaneously.
    if copy_chunks.lower() == "y":
        for i in tqdm(camgroups):
            newchunk: Chunk = chunk.copy()
            newchunk.label = chunk.label + "_group_" + str(i)

            # delete the other camera groups
            newchunk.remove(newchunk.camera_groups[i + 1:])
            newchunk.remove(newchunk.camera_groups[:i])
            # delete the shape groups and shapes
            newchunk.shapes.remove(newchunk.shapes.groups)
            newchunk.shapes.remove(newchunk.shapes)
            # delete all cameras that were not matched or not selected for analysis
            # http://www.agisoft.com/forum/index.php?topic=8159.0
            deletelist = [camera for camera in newchunk.cameras if not camera.group]
            newchunk.remove(deletelist)

            doc.save(path=doc.path[:-4] + "_group_" + str(i) + ".psx",
                     chunks=[newchunk])

    # report counter:
    # http://www.agisoft.com/forum/index.php?topic=2666.15
    t1 = time.time() - t0
    print(f"\n# Time: {datetime.datetime.now().isoformat()}")
    print(f"# Script finished in {t1:.2f} seconds.\n")
    # Metashape.app.update()
    doc.save()

    # -------------------
    # Notes

    # for test in chunk.shapes:
    #     if test.group == chunk.shapes.groups[0]:
    #         chunk.shapes.remove(test)

    # every nth item from a list: list[0::n]
    # start from the nth: list[n:]
« Last Edit: June 28, 2023, 05:24:18 AM by tkwasnitschka »