Messages - tkwasnitschka

1
General / Re: Alignment Components inaccessible to Automation
« on: November 28, 2023, 02:27:03 PM »
Hello Alexey,
my idea is to separate the components into chunks (or even project files) in order to work with them more easily (see the sketch below):
1. Duplicate the chunk as many times as there are components
2. In each copy, delete all components but one, and the images not contained in that component, e.g. for the second of five components, delete components 0, 2, 3 and 4.
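
In Python I imagine this looking roughly like the sketch below. It is untested, and "component_of" is a hypothetical helper, since I know of no documented way to query a camera's alignment component through the API - which is exactly the point of this thread:

Code: [Select]
# Untested sketch of the duplicate-and-prune workflow described above.
# "component_of" is hypothetical: I know of no documented API call that
# returns a camera's alignment component.
import Metashape

doc = Metashape.app.document
chunk = doc.chunk
n_components = 5  # known number of alignment components

for i in range(n_components):
    newchunk = chunk.copy()
    newchunk.label = chunk.label + "_component_" + str(i)
    # drop every camera that does not belong to component i
    doomed = [cam for cam in newchunk.cameras if component_of(cam) != i]
    newchunk.remove(doomed)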

May I suggest that it would be really helpful to see components side by side?

Would you be able to confirm or elaborate on the observation that referenced images align into a single component even when they are not connected by matches?
Would this be a way to force the creation of a single component, e.g. by giving all the images the same fake coordinate?

Cheers
Tom

2
General / Re: Alignment Components inaccessible to Automation
« on: November 24, 2023, 06:45:23 PM »
Thank you Alexey, this saves my day!
But how do I delete all components except the one I would like to retain? That step is needed to separate the components into chunks.

best greetings
Tom

3
General / Alignment Components inaccessible to Automation
« on: November 24, 2023, 05:03:10 PM »
I constantly encounter data sets that, after alignment without georeferencing, produce up to 50 alignment components. This makes sense: the images are a sequence along a single track with some interruptions.

BUT:
- There is no way to view alignment components side by side to set corresponding points efficiently
- I see no reference or possibility to access an alignment component through Python or any batch function
- There is no way to separate alignment components into image groups or, better, into chunks, which would make them accessible to established workflows
- There is no way to merge these components to get rid of them - they do not overlap!
- You may want to clarify the terminology here: these are alignment components, not mesh components.

My only option is to duplicate the chunk once per component and incrementally erase all components but one in each copy - manually!!

PLEASE! Clarify and/or suggest a workaround!
The one thing I notice is that including camera poses helps to create fewer components, but to what extent?

Many thanks
Tom

4
Python and Java API / Re: Connectivity groups
« on: June 28, 2023, 05:19:56 AM »
We wrote a script for this that visualizes the connectivity through shapes and sorts the cameras into groups by their components. The script even allows you to save the components as separate chunks or as individual projects, to make setting markers easier. I have the impression that the results differ from what the components tell you. Also, this shows that very often there is no loop closure even within components!!!
This is the original thread:

https://www.agisoft.com/forum/index.php?topic=6989.msg34472#msg34472

Alexey wrote the original part of it and my PhD student expanded it to run in a multiprocessing environment - the script is terribly slow on thousands of images when run from the GUI. You therefore need to set up a Python virtual environment and run it from there, as described here:

https://git.geomar.de/arena/photogrammetry/metashape-scripts

Code: [Select]
# Plot valid matches as a connectogram.
# This version looks at the input photo set in windows to keep the matrix small,
# which is only interesting for time-sequential sets and if you don't actually
# plot the connections. Controlled through a variable (window).
# It also includes the graph.

# clear all old variables with %reset -f in console

# Use the following command in cmd.exe (not powershell) to install numpy
# "%programfiles%\Agisoft\Metashape Pro\python\python.exe" -m pip install numpy

import concurrent.futures
import datetime
import multiprocessing
import os
import random
import time
from typing import Tuple

# import Metashape
import numpy as np
from Metashape import *
from numpy.typing import NDArray
from tqdm import tqdm

################################################


def random_color():
    """
        generate a random color for the graph visualization
        https://stackoverflow.com/questions/28999287/generate-random-colors-rgb/28999469
    """
    levels = range(32, 256, 32)
    return tuple(random.choice(levels) for _ in range(3))

#####################################################


def connected_tuples(matchlist):
    """
        find the connected graphs (components) in the project so they can be exported
        https://ideone.com/tz9t7m
        https://stackoverflow.com/questions/28980797/given-n-tuples-representing-pairs-return-a-list-with-connected-tuples
    """
    # for every element, we keep a reference to the list it belongs to
    lists_by_element = {}

    def make_new_list_for(x, y):
        lists_by_element[x] = lists_by_element[y] = [x, y]

    def add_element_to_list(lst, el):
        lst.append(el)
        lists_by_element[el] = lst

    def merge_lists(lst1, lst2):
        merged_list = lst1 + lst2
        for el in merged_list:
            lists_by_element[el] = merged_list

    for x, y in matchlist:
        xList = lists_by_element.get(x)
        yList = lists_by_element.get(y)

        if not xList and not yList:
            make_new_list_for(x, y)

        if xList and not yList:
            add_element_to_list(xList, y)

        if yList and not xList:
            add_element_to_list(yList, x)

        if xList and yList and xList != yList:
            merge_lists(xList, yList)

    # return the unique lists present in the dictionary
    return set(tuple(l) for l in lists_by_element.values())


def find_valid_points_for_photo(proj: Tuple[Camera, NDArray], photo_matches: dict[Camera, set[int]]):
    """
    Find valid points for a photo (proj[0]) and save them to photo_matches dict

    @param proj:
    @param photo_matches:
    """
    total = set()  # indices of valid tie points seen by this photo
    point_index = 0

    # linear merge: proj[1] and points_track_ids are both sorted by track id
    for track_id in proj[1]:
        while point_index < npoints and points_track_ids[point_index] < track_id:
            point_index += 1
        if point_index < npoints and points_track_ids[point_index] == track_id:
            if points_valids[point_index]:
                total.add(point_index)

    photo_matches[proj[0]] = total


def process_camgroup(i: int):
    """
    Creates the lines between cameras
    """
    shapeGroup: ShapeGroup = chunk.shapes.addGroup()
    shapeGroup.label = str([i])
    shapeGroup.show_labels = False
    shapeGroup.color = random_color()

    cameraGroup: CameraGroup = chunk.addCameraGroup()
    cameraGroup.type = CameraGroup.Folder
    cameraGroup.label = str([i])

    camlist = connections[i]
    for camera in camlist:
        camera.group = cameraGroup
        if camera.label not in camera_lines_dict:
            continue
        for idx, coord_tuple in enumerate(camera_lines_dict[camera.label]):
            # only draw every n-th connection line
            # https://stackoverflow.com/questions/1403674/pythonic-way-to-return-list-of-every-nth-item-in-a-larger-list
            if idx % draw_only_nth_line != 0:
                continue

            shape: Shape = chunk.shapes.addShape()
            shape.label = str(camera.label)
            shape.attributes["Matches"] = str(coord_tuple)
            shape.group = shapeGroup
            shape.geometry = Geometry.LineString(coord_tuple)


if __name__ == "__main__":
    #########################################
    # declare variables and initialize necessary stuff

    doc: Document = Document()
    path = input("# Input project path:\n")
    doc.open(os.path.normpath(path))

    for i, chunk in enumerate(doc.chunks):
        print(i, chunk.label)

    chunk_number = input(
        f"# Select the chunk you want to work with (a number from 0 to {len(doc.chunks)-1}):\n")

    chunk: Chunk = doc.chunks[int(chunk_number)]

    point_cloud: TiePoints = chunk.tie_points
    points: TiePoints.Points = point_cloud.points

    # Separate track ids and valid status into their own numpy arrays for faster access
    points_track_ids: NDArray = np.array([point.track_id for point in points])
    points_valids: NDArray = np.array([point.valid for point in points])

    point_projections: TiePoints.Projections = point_cloud.projections
    npoints = len(points)

    photo_matches: dict[Camera, set[int]] = dict()

    # typical values: step = 3, window = 1199
    step = int(input("# Minimum number of shared valid tie points for a connection:\n"))
    window = int(input("# Window size for tie point matching:\n"))
    draw_only_nth_line = max(1, int(
        input("# Draw only each n-th connection line. Enter a number, e.g. 5 (1 means draw every line):\n")))
    copy_chunks = input(
        "# Copy chunks and write them to their own projects? Enter y for yes:\n")

    # create a shape group to write to
    if not chunk.shapes:
        chunk.shapes = Shapes()
        chunk.shapes.crs = chunk.crs

    print("# Starting script")
    t0 = time.time()

    print(f"# Time: {datetime.datetime.now().isoformat()}")

    ################################################
    # Select cameras to work on
    selected_photos: list[Camera] = list()

    # Choice a: photos selected in the GUI
    print("working with preselected photos")
    photo: Camera
    for photo in chunk.cameras:
        if photo.selected and photo.transform:
            selected_photos.append(photo)

    if not selected_photos:
        # raise Exception("You need to select images for this operation!")
        print("nothing selected!")
        print("working with all aligned photos")

        # Choice b: Automatically select all aligned photos:
        # http://www.agisoft.com/forum/index.php?topic=6029.0
        camera: Camera
        for camera in chunk.cameras:
            if camera.transform:
                selected_photos.append(camera)

    ############################################
    # Find image pairs that share valid matches:

    matchlist: list[Tuple[Camera, Camera]] = []
    camera_lines_dict: dict[str, list[Tuple[Vector, Vector]]] = {}

    # TODO: Work without subsets
    # https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
    for subset in range(0, len(selected_photos), window):
        print("processing subset:", subset, " to ", subset + window)
        subset_cams = selected_photos[subset:subset + window]

        # projections_iter is a list of tuples
        # Each tuple contains at the first index the Camera
        # and on the second index a numpy array
        # This numpy array contains the track_id number for each point in the Projection of the Camera
        projections_iter: list[Tuple[Camera, NDArray]] = [(photo,
                                                           np.array(
                                                               [point.track_id for point in point_projections[photo]]
                                                           ))
                                                          for photo in subset_cams
                                                          ]

        # Use multiprocessing to find the valid points/matches for each Camera in projections_iter
        with concurrent.futures.ThreadPoolExecutor(multiprocessing.cpu_count()) as executor1:
            executor1.map(lambda photo: find_valid_points_for_photo(
                photo, photo_matches), projections_iter)

        # for photo in subset_cams:
        #     process_subset_cam_photos(photo, photo_matches)
        elapsed = time.time() - t0
        print(f"# Time: {datetime.datetime.now().isoformat()}")
        print("# Creation of valid matches completed in " +
              "{:.2f}".format(elapsed) + " seconds.")

        print("# Start processing photo matches (i-j-matching)")

        # Iterate through the cameras in nested for-loops
        # To find the cameras/photos which are connected and get their positions
        # (for the connection lines later on)
        for i in tqdm(range(len(subset_cams))):
            if subset_cams[i] not in photo_matches.keys():
                continue

            for j in range(i + 1, len(subset_cams)):

                if subset_cams[j] not in photo_matches.keys():
                    continue

                # set intersection: tie points seen by both photos
                matches = photo_matches[subset_cams[i]] & photo_matches[subset_cams[j]]
                if len(matches) > step:
                    pos_i: Vector = chunk.crs.project(
                        chunk.transform.matrix.mulp(subset_cams[i].center))
                    pos_j: Vector = chunk.crs.project(
                        chunk.transform.matrix.mulp(subset_cams[j].center))
                    # the camera objects
                    matchlist.append((subset_cams[i], subset_cams[j]))

                    # to export a list of the camera labels that match:
                    # matchlist.append((chunk.cameras[i].label, chunk.cameras[j].label))
                    # file.write("%s %s %s %s\n" % (chunk.cameras[i].label, pos_i, chunk.cameras[j].label, pos_j))  # formerly needed for GIS export
                    # print("matching:", subset_cams[i].label, " & ", subset_cams[j].label)

                    # camera_lines_dict points from a Camera to a list of Coordinate tuples
                    # i.e each coordinate tuple represents a line between two photos/cameras
                    if not subset_cams[i].label in camera_lines_dict:
                        camera_lines_dict[subset_cams[i].label] = [
                            (pos_i, pos_j)]
                    else:
                        camera_lines_dict[subset_cams[i].label].append(
                            (pos_i, pos_j))

                    # This part notes the connection lines in the graph:
                    # TODO: Find a way to show all the connections, or a deliberate subset
                    # This only produces one line per camera! Switch to i for a connection to any cam, j for next cam in line
                    # graphlist[subset_cams[i].label] = [pos_i, pos_j]
        # Metashape.app.update()

    # file.close()
    elapsed = time.time() - t0
    print(f"\n# Time: {datetime.datetime.now().isoformat()}")
    print("# Definition of valid matches completed after " +
          "{:.2f}".format(elapsed) + " seconds (total from start of script).\n")

    ####################################
    # find the connected graphs:
    connections = list(connected_tuples(matchlist))

    #########################################
    # Create camera groups from list of connected graphs

    # http://www.agisoft.com/forum/index.php?topic=6383.0
    # http://www.agisoft.com/forum/index.php?topic=4076.0

    camgroups = range(len(connections))
    # See e.g. here for multiprocessing: https://github.com/agisoft-llc/metashape-scripts/blob/master/src/footprints_to_shapes.py
    with concurrent.futures.ThreadPoolExecutor(multiprocessing.cpu_count()) as executor:
        executor.map(lambda index: process_camgroup(index), camgroups)

    elapsed = time.time() - t0
    print(f"\n# Time: {datetime.datetime.now().isoformat()}")
    print("Processing camgroups finished after " +
          "{:.2f}".format(elapsed) + " seconds (total from start of script).\n")

    # create chunks from camera groups and export them as projects:
    # This block has problems with multiprocessing.
    # chunk.copy() assigns a key to newchunk but if a chunk with that key already exists it increments that key
    # for multiprocessing this may result in chunks with the same key because it happens simultaneously
    if copy_chunks.lower() == "y":
        for i in tqdm(camgroups):
            newchunk: Chunk = chunk.copy()
            newchunk.label = chunk.label + "_group_" + str(i)

            # delete the other camera groups
            newchunk.remove(newchunk.camera_groups[i+1:])
            newchunk.remove(newchunk.camera_groups[:i])
            # delete the shape groups and shapes
            newchunk.shapes.remove(newchunk.shapes.groups)
            newchunk.shapes.remove(newchunk.shapes)
            # delete all cameras that have not been matched or were not selected for analysis
            # http://www.agisoft.com/forum/index.php?topic=8159.0
            deletelist = [camera for camera in newchunk.cameras if not camera.group]
            newchunk.remove(deletelist)

            doc.save(path=doc.path[:-4]+"_group_" +
                     str(i)+".psx", chunks=[newchunk])

    # report counter:
    # http://www.agisoft.com/forum/index.php?topic=2666.15
    elapsed = time.time() - t0
    print(f"\n# Time: {datetime.datetime.now().isoformat()}")
    print("# Script finished in " + "{:.2f}".format(elapsed) + " seconds.\n")
    # Metashape.app.update()
    doc.save()

    # -------------------
    # Notes

    # for test in chunk.shapes:
    #     if test.group == chunk.shapes.groups[0]:
    #         chunk.shapes.remove(test)

    # every nth item from a list: list[0::n]
    # start from nth: list[n:]

5
Python and Java API / Re: Extract the tie point positions (u and v)
« on: June 28, 2023, 05:01:59 AM »
So how do I modify the above script to copy the INVALID tie points shared between two images back into the project as markers? Ideally with the full length of their tracks, across the other photos they were detected in?

I found that
Code: [Select]
npoints = len(points)
makes the script fail whenever there are only invalid points, since npoints = 0. This is exactly the case I need to fix so often!
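
For now I guard against that case explicitly; a minimal sketch:

Code: [Select]
npoints = len(points)
if npoints == 0:
    raise RuntimeError("no valid tie points in this chunk - nothing to process")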

On the other hand, commenting out conditions like:
Code: [Select]
if not points[point_index].valid:
continue
actually did not change the output of many of the examples I looked at (presumably because the loop over npoints never reaches those points?)

I have seen Paulo write invalid tie points to file, though - but how?

Not being able to select in the photo view, validate, and copy over tie points from photo to photo has been a BIG problem for us, for many years...
Please, Alexey, improve this part! :)
Cheers
Tom

6
Feature Requests / Re: Import Video - Drone location metadata
« on: January 19, 2023, 06:15:26 PM »
Could Agisoft support please give a comprehensive reference as to which geospatial video metadata standards are currently supported?

There is no such reference in the manual, and in the forum I read about SRT and some other standard tags that may be supported, but they are not named.

STANAG KLV metadata support would be a great feature and would work well together with the adaptive frame extraction feature.
Many thanks!
Tom

7
General / Components and parts of them
« on: March 17, 2021, 06:55:12 PM »
Hi,
I have a big 10k-image project that falls apart into one component with two parts.
Part 2 is clearly disconnected from part 1 and has 22 images; part 1 contains all the others, which are clearly internally connected by matches - yet part 1 shows me a complex pyramid of smaller parts.
Why is that?
What is the difference between a part and a component in the first place?
Are only "level one parts" disconnected?

I should add that I processed the data set on a cluster of 7 machines. Could this be the reason?

Please explain here, or expand the reference documentation!
thanks
Tom

8
Python and Java API / Re: Merge identical cameras
« on: December 13, 2019, 02:07:29 PM »
Allow me to simplify my question:
How can I merge identical cameras of two chunks and their projections the same way I can merge markers?
Thanks!

9
Python and Java API / KeyError for cams in thinned sparse point cloud
« on: December 13, 2019, 02:04:37 PM »
Hi,
I want to export the per-camera uv coordinates of a heavily thinned sparse point cloud with the script provided in this thread https://www.agisoft.com/forum/index.php?topic=10730.0.

As soon as the script hits a camera that does not contain any projections due to cloud thinning, I get a key error:

Code: [Select]
projections[chunk.cameras[2]]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-31-d4c1239ec97f> in <module>()
----> 1 projections[chunk.cameras[2]]

KeyError: <Camera '20160325_154525_IMG_102768.JPG'>

How can I let the loop ignore those cameras? I don't understand how to grab the cameras' points with anything other than "projections".
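
The only workaround I have found so far is to wrap the lookup; a minimal sketch:

Code: [Select]
for camera in chunk.cameras:
    try:
        proj = projections[camera]
    except KeyError:
        # this camera lost all its projections during thinning - skip it
        continue
    # ... process proj as before ...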

EDIT:
two more observations:
1. the GUI Reference pane still lists projections even if there are no points at all left on the image
2. Decimating by quality makes sense, but it creates the situation where some images have no projections left at all. How do I decimate the sparse point cloud by spatial subsampling instead? I actually just want to subsample "a point per area" of the cloud.

10
Python and Java API / Merge identical cameras
« on: December 03, 2019, 05:45:27 PM »
I run very large projects with bad camera calibration that I split into overlapping chunks, align separately, and then merge back together. Then they need to be optimized to improve on my calibration and vague referencing, so I need matches shared among the components of the former chunks. This leaves me with the following simplified situation:

Chunk A (Cam1, Cam2, Cam3, Cam4) + Chunk B (Cam3, Cam4, Cam5, Cam6) =

Merged Chunk (Cam1, Cam2, Cam3, Cam4,
              Cam3, Cam4, Cam5, Cam6)

For reasons I don't understand, Metashape does not match (i.e., re-align after reset) the identical cameras unless I do a full alignment from scratch, which is not an option.

Linking them with control points would mean many hundreds of points, slowing down the GUI considerably. To be fair, this is what Alexey recommended in the past: https://www.agisoft.com/forum/index.php?topic=10097.msg46129#msg46129

But as he points out this is not a perfect merging solution. I know the cameras are identical, so alignment isn't actually necessary.

I want to be able to merge those identical cameras the same way I can merge markers!

I thought this could be done in Python, and indeed you can transfer all projections from one camera to another with the following code:

Code: [Select]
projections = doc.chunk.point_cloud.projections
camera_3A = doc.chunks[0].cameras[2]
camera_3B = doc.chunks[1].cameras[0]
projections[camera_3B] = projections[camera_3A] # replaces projections even if target is empty

But I want to append the projections, not replace them! How can this be done? Apparently there are no operators or write functions for projections or their dependencies:
Code: [Select]
projections[camera_3B] = projections[camera_3A].append(projections[camera_3B])     # Doesn't work, but this is what I want!
# then reduce number of tiepoints
# then delete duplicates
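
For the record, the control-point workaround can at least be scripted. A rough sketch (the Marker.Projection signature is from memory and may differ between versions; camera indices and pixel coordinates are just examples):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk  # the merged chunk
camera_3A = chunk.cameras[2]  # first copy of the image
camera_3B = chunk.cameras[6]  # second copy of the same image

# pin one marker at the same pixel position in both copies,
# so optimization can tie the two former chunks together
marker = chunk.addMarker()
coord = Metashape.Vector([1000, 800])  # example pixel coordinates of a shared feature
marker.projections[camera_3A] = Metashape.Marker.Projection(coord, True)
marker.projections[camera_3B] = Metashape.Marker.Projection(coord, True)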

By the way, could someone once and for all clarify the relationship of
  • cameras
  • camera keys
  • keypoints
  • tiepoints
  • matches (deprecated??)
  • projections
  • tracks
  • track ids
  • points
  • sparse cloud
This is so central that there should be a document, preferably with Python code showing the relationships.
Thanks!

Tom

11
General / Re: Optimize overlapping chunks
« on: December 17, 2018, 02:06:24 PM »
Bump...
Am I really the only one who needs to optimize several very large chunks relative to each other?

12
General / Optimize overlapping chunks
« on: December 13, 2018, 07:28:02 PM »
I have 25 chunks in a grid, each overlapping its neighbors, i.e. they partly contain the same cameras. Even though I ran them all with the same intrinsic parameters, overlapping areas don't match perfectly, since the calibration is imperfect and cannot be improved any further. Thus, the residual misfit was pushed into the extrinsics.

So, how do I
- optimize chunks relative to each other so that overlapping areas actually overlap
- merge chunks so that there are no duplicate images (-> do I really have to pick the cameras manually, or can I find them with a script? See the sketch below.)

-> I cannot just run all chunks in one optimization step as each chunk already has 10k images, and there are 25 chunks.
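
At least finding the duplicate cameras should be scriptable. A minimal sketch that matches cameras across two chunks by label (untested):

Code: [Select]
import PhotoScan

doc = PhotoScan.app.document
chunk_a = doc.chunks[0]
chunk_b = doc.chunks[1]

# cameras present in both chunks, identified by label
labels_a = {camera.label for camera in chunk_a.cameras}
duplicates = [camera for camera in chunk_b.cameras if camera.label in labels_a]
print(len(duplicates), "cameras appear in both chunks")

# highlight them in the GUI for inspection
for camera in duplicates:
    camera.selected = True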
Thanks!
Tom

13
Python and Java API / Re: Numpy array to mask/image?
« on: September 14, 2018, 03:16:30 PM »
This is my updated script. It creates a mask contained in the alpha channel, but I fail to load it back into PhotoScan.
The mask always returns the full image area and disregards what I saved in the A channel. Saving the PhotoScan image, I see the mask is contained in the image. What am I doing wrong?
Code: [Select]
import PhotoScan
import numpy as np

print("start")
chunk = PhotoScan.app.document.chunk
scale = chunk.transform.scale
camera = chunk.cameras[0]
image = camera.photo.image()
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)  # unscaled depth
depth_map = np.frombuffer(depth.tostring(), dtype=np.float32)

# scale array:
map_scaled = depth_map * scale

# apply threshold:
threshold = 4
mask = ((map_scaled > threshold) * 255).astype("uint8")

# write back (channel must be 'K', see the update below):
mask_img = PhotoScan.Image.fromstring(mask, image.width, image.height, 'K', datatype='U8')
camera.mask = PhotoScan.Mask()
camera.mask.setImage(mask_img)

UPDATE: Found the error. The channel must be K, not A. Updated the code above.

14
Python and Java API / Re: Numpy array to mask/image?
« on: September 13, 2018, 03:11:26 PM »
Alexey, I have seen that post but I don't get it.
Please, how do I convert a numpy.ndarray back to a PhotoScan image?
Thanks so much
Tom

15
Bug Reports / Re: Function createDifferenceMask not working correctly?
« on: September 13, 2018, 03:04:45 PM »
Maybe this is really stupid (though not to me):
could you please show "the other way around", i.e. how to write a numpy array back to a PhotoScan image? I just don't get it.
Thanks!
Tom
