Agisoft Metashape

Agisoft Metashape => Python and Java API => Topic started by: mwbm07 on April 09, 2019, 12:17:28 AM

Title: Extract the tie point positions (u and v)
Post by: mwbm07 on April 09, 2019, 12:17:28 AM
How can I extract the tie point positions (u and v, in pixels)? Is it possible with Python? What is the process for accessing this information?
Title: Re: Extract the tie point positions (u and v)
Post by: Alexey Pasumansky on April 10, 2019, 03:40:25 PM
Hello mwbm07,

Please check whether the following code gives you an idea of how to get the coordinates of the tie point projections on the source images:


Code: [Select]
#compatibility Metashape Pro 1.5.2

from PySide2 import QtGui, QtCore, QtWidgets
import Metashape, time


def var0():
    t0 = time.time()

    point_cloud = chunk.point_cloud
    points = point_cloud.points
    npoints = len(points)
    projections = chunk.point_cloud.projections
    point_proj = [] #list of strings
    max_count = 0

    app.processEvents()
    print("\nScript started ...")
    app.processEvents()

    point_ids = [-1] * len(point_cloud.tracks)

    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id

    for camera in chunk.cameras:
        if not camera.type == Metashape.Camera.Type.Regular: #skipping camera track keyframes
            continue
        if not camera.transform: #skipping NA cameras
            continue

        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]
            if point_id < 0:
                continue
            if not points[point_id].valid:
                continue

            line = "{:s},{:d},{:.2f},{:.2f}".format(camera.label, point_id, proj.coord.x, proj.coord.y)
            point_proj.append(line) #or write the line to file
            #print(line)

    t2 = time.time()
    t2 -= t0
    t2 = float(t2)

    app.processEvents()
    print("Script finished in " + "{:.2f}".format(t2) + " seconds.")
    app.processEvents()
    return point_proj


####
global app, chunk
app = QtWidgets.QApplication.instance()

chunk = Metashape.app.document.chunk
result0 = var0()
print(len(result0))
Title: Re: Extract the tie point positions (u and v)
Post by: darkl1ght on July 02, 2021, 03:25:28 PM
Hi Alexey,

Along the same lines, is it possible to find the "from" and "to" of those track_id(s)? That is, in which camera (image) a particular track starts, in which camera it ends, and the positions (x and y coordinates) on the source and destination cameras (images), just like what we are able to visualize with View Matches. I have attached an image showing this.

Is there a code to get that?

Thanks,
Ayush

Title: Re: Extract the tie point positions (u and v)
Post by: Alexey Pasumansky on July 03, 2021, 03:34:50 PM
Hello Ayush,

Do you mean getting the list of common tie points (with their respective 2D coordinates) for a user-defined pair of cameras?
Title: Re: Extract the tie point positions (u and v)
Post by: darkl1ght on July 03, 2021, 04:59:34 PM
Hi Alexey,

Yes, that is exactly what I am looking for. Currently, it just prints the 2D coordinates on one of the images. But I need it for a user-defined pair of cameras, or for whatever images have been matched by Agisoft (those that get aligned in the Align Photos and Match Photos steps).

Regards,
Ayush
Title: Re: Extract the tie point positions (u and v)
Post by: Alexey Pasumansky on July 17, 2021, 01:19:38 AM
Hello Ayush,

Please check the following script, which should return a list of 2D vector tuples representing the projections of the common points for the camera pair passed to the function:

Code: [Select]
import Metashape


def get_tie_points(camera1, camera2, chunk):

    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0

    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)
    point_ids = [-1] * len(chunk.point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    camera_matches_valid = dict()

    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()

    for camera in [camera1, camera2]:
        T = camera.transform.inv()
        calib = camera.sensor.calibration
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]

            if point_id < 0:
                continue
            if not points[point_id].valid: #skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord

        camera_matches_valid[camera] = valid_matches
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])

    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output

chunk = Metashape.app.document.chunk
common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)
Title: Re: Extract the tie point positions (u and v)
Post by: xabierr on April 05, 2023, 02:48:48 AM
Hi Alexey,

I get the following error: unindent does not match any outer indentation level (line 47).

I tried modifying the indentation, but still no success.

cheers,

Javier
Title: Re: Extract the tie point positions (u and v)
Post by: Paulo on April 05, 2023, 09:28:26 PM
Hi xabierr,

Try this code, where the indentation has been fixed:
Code: [Select]
import Metashape


def get_tie_points(camera1, camera2, chunk):

    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0

    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)
    point_ids = [-1] * len(chunk.point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    camera_matches_valid = dict()

    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()

    for camera in [camera1, camera2]:
        T = camera.transform.inv()
        calib = camera.sensor.calibration
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]

            if point_id < 0:
                continue
            if not points[point_id].valid: #skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord

        camera_matches_valid[camera] = valid_matches
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])

    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output

chunk = Metashape.app.document.chunk
common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)

Note that this code is valid for version 1.8 or lower...
Title: Re: Extract the tie point positions (u and v)
Post by: xabierr on April 05, 2023, 11:24:23 PM
Thanks so much. Should I be selecting a pair of cameras, or should the script ask for one?
Title: Re: Extract the tie point positions (u and v)
Post by: Paulo on April 06, 2023, 01:16:46 AM
Right now it uses the first two cameras in the project, so you would need to adapt it if you want to use two selected cameras, for example along the lines of the sketch below.
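A minimal, untested sketch (assuming get_tie_points from the script above has already been defined in the console, and that the two cameras of interest are selected in the Workspace pane):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# collect the cameras currently selected in the Workspace pane
selected = [camera for camera in chunk.cameras if camera.selected]

if len(selected) != 2:
    Metashape.app.messageBox("Please select exactly two cameras.")
else:
    common_tiepoints = get_tie_points(selected[0], selected[1], chunk)
    print(len(common_tiepoints) if common_tiepoints else 0)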
Title: Re: Extract the tie point positions (u and v)
Post by: xabierr on April 11, 2023, 01:41:19 AM
Thanks, that works great.

I have modified the code to write the tie points to a CSV file, including the camera labels and date/time:

Code: [Select]
import Metashape
import os

def get_tie_points(camera1, camera2, chunk):

    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
       
    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)
    point_ids = [-1] * len(chunk.point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    camera_matches_valid = dict()
   
    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()
       
    for camera in [camera1, camera2]:
        T = camera.transform.inv()
        calib = camera.sensor.calibration
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]

            if point_id < 0:
                continue
            if not points[point_id].valid: #skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord
           
        camera_matches_valid[camera] = valid_matches
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])
   
    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output

chunk = Metashape.app.document.chunk
project_path = Metashape.app.document.path
common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)

def save_tiepoints_to_csv(tiepoints):
    camera1 = chunk.cameras[0]
    camera2 = chunk.cameras[1]
    filename = f"{camera1.label}_{camera2.label}.csv"
    parent_dir = os.path.dirname(project_path)
    filepath = os.path.join(parent_dir, filename)
   
    if not filepath:
        return

    with open(filepath, 'w') as f:
        # write the tiepoints to the CSV file
        f.write('Camera1, DateTime1, x1, y1, Camera2, DateTime2, x2, y2\n')
        for tiepoint in tiepoints:
            x1, y1 = tiepoint[0]
            x2, y2 = tiepoint[1]
            camera1_datetime = chunk.cameras[0].photo.meta['Exif/DateTime']
            camera2_datetime = chunk.cameras[1].photo.meta['Exif/DateTime']
            f.write(f'{chunk.cameras[0].label},{camera1_datetime},{x1},{y1},{chunk.cameras[1].label},{camera2_datetime},{x2},{y2}\n')
                 

if common_tiepoints:
    save_tiepoints_to_csv(common_tiepoints)
else:
    print("No common tiepoints found.")

I would also like to include the point ids and raw digital number/color for the grayscale band. Is that possible?
Title: Re: Extract the tie point positions (u and v)
Post by: Paulo on April 11, 2023, 06:58:22 PM
xabier,

if you want the point_id, you can just add it to the tie_point_output tuple. You can also add the point color (an RGB 3-tuple) using tracks[points[point_id].track_id].color, as in:
Code: [Select]
tie_point_output.append((point_id,tracks[points[point_id].track_id].color,tie_points[camera1][point_id], tie_points[camera2][point_id]))
and you would get the following info in each tie_point_output entry:
tie_point_output[-1] = (2047, (149, 170, 185), Vector([1587.5377197265625, 2149.742919921875]), Vector([4565.5849609375, 2531.593505859375]))
                     = (PointId, Color(R, G, B), Projected coordinates Camera1 Vector, Projected coordinates Camera2 Vector)
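If you then want to write those extended tuples to the CSV, the writer from your script could be adapted along these lines (just a sketch, assuming the four-element tuple layout above, with the camera pair and output path passed in explicitly):

Code: [Select]
def save_tiepoints_to_csv(tiepoints, camera1, camera2, filepath):
    # tiepoints is assumed to hold (point_id, (r, g, b), coord1, coord2) tuples,
    # as produced by the modified append above
    with open(filepath, 'w') as f:
        f.write('PointId,R,G,B,Camera1,x1,y1,Camera2,x2,y2\n')
        for point_id, color, coord1, coord2 in tiepoints:
            r, g, b = color
            f.write(f'{point_id},{r},{g},{b},{camera1.label},{coord1.x},{coord1.y},{camera2.label},{coord2.x},{coord2.y}\n')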
Hope this can help
Title: Re: Extract the tie point positions (u and v)
Post by: Alexey Pasumansky on April 14, 2023, 08:22:48 PM
Hello xabierr,

Thank you for sharing the sample data.

It appears that access to non-8-bit RGB colors for the tie points is currently limited in the Python API. We'll try to resolve this issue with changes in the next Python API versions; meanwhile, I will try to suggest a workaround for your task.
Title: Re: Extract the tie point positions (u and v)
Post by: Alexey Pasumansky on May 03, 2023, 03:17:36 PM
Hello xabierr,

In the pre-release of 2.0.2 (build 16220), the issue with the non-8-bit RGB colors of tie points has been fixed:
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_2_0_2_x64.msi
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_2_0_2.dmg
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_2_0_2_amd64.tar.gz

Please check if your script works as expected now.

Note that to adapt the code from 1.8.5 to 2.0.2, you need to change chunk.point_cloud to chunk.tie_points in your script, for example as in the sketch below.
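A minimal sketch of the affected lines (the rest of the logic stays the same):

Code: [Select]
# Metashape 2.0.x: the tie point container is chunk.tie_points instead of chunk.point_cloud
projections = chunk.tie_points.projections
points = chunk.tie_points.points
tracks = chunk.tie_points.tracks
point_ids = [-1] * len(chunk.tie_points.tracks)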