
Author Topic: Extract the tie point positions (u and v)

mwbm07

  • Newbie
  • Posts: 7
Extract the tie point positions (u and v)
« on: April 09, 2019, 12:17:28 AM »
How can I extract the tie point positions (u and v, in pixels)? Is it possible with Python? What is the process for accessing this information?

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 13812
Re: Extract the tie point positions (u and v)
« Reply #1 on: April 10, 2019, 03:40:25 PM »
Hello mwbm07,

Please check if the following code gives you an idea of how to get the coordinates of the tie point projections on the source images:


Code:
#compatibility Metashape Pro 1.5.2

from PySide2 import QtGui, QtCore, QtWidgets
import Metashape, time


def var0():
    t0 = time.time()

    point_cloud = chunk.point_cloud  # sparse point cloud (tie points)
    points = point_cloud.points
    npoints = len(points)
    projections = point_cloud.projections
    point_proj = []  # list of output strings

    app.processEvents()
    print("\nScript started ...")
    app.processEvents()

    # map track_id -> point index in the sparse cloud (-1 if the track has no point)
    point_ids = [-1] * len(point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id

    for camera in chunk.cameras:
        if not camera.type == Metashape.Camera.Type.Regular:  # skipping camera track keyframes
            continue
        if not camera.transform:  # skipping not aligned (NA) cameras
            continue

        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]
            if point_id < 0:
                continue
            if not points[point_id].valid:  # skipping invalid points
                continue

            # camera label, point index, projection coordinates in pixels (u, v)
            line = "{:s},{:d},{:.2f},{:.2f}".format(camera.label, point_id, proj.coord.x, proj.coord.y)
            point_proj.append(line)  # or write the line to file
            #print(line)

    t2 = time.time() - t0

    app.processEvents()
    print("Script finished in " + "{:.2f}".format(t2) + " seconds.")
    app.processEvents()
    return point_proj


####
global app, chunk
app = QtWidgets.QApplication.instance()

chunk = Metashape.app.document.chunk
result0 = var0()
print(len(result0))
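
For example, the returned lines could then be saved to a CSV file. This is only a minimal sketch; the output filename below is a placeholder and not part of the original script:

Code:
# Minimal sketch: write the projections returned by var0() to a CSV file.
# "tie_point_projections.csv" is a placeholder path - adjust it to your environment.
output_path = "tie_point_projections.csv"

with open(output_path, "w") as file:
    file.write("camera_label,point_id,u_px,v_px\n")
    for line in result0:
        file.write(line + "\n")

print("Saved {:d} projections to {:s}".format(len(result0), output_path))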
Best regards,
Alexey Pasumansky,
Agisoft LLC

darkl1ght

  • Newbie
  • Posts: 15
Re: Extract the tie point positions (u and v)
« Reply #2 on: July 02, 2021, 03:25:28 PM »
Hi Alexey,

Along the same lines, is it possible to find the "from" and "to" of those track_id(s)? That is, on which camera (image) a particular track started, on which camera it ended, and the positions (x and y coordinates) on the source and destination cameras (images), just like we are able to visualize with View Matches. I have attached an image showing this.

Is there a code to get that?

Thanks,
Ayush


Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 13812
Re: Extract the tie point positions (u and v)
« Reply #3 on: July 03, 2021, 03:34:50 PM »
Hello Ayush,

Do you mean getting the list of common tie points (with their respective 2D coordinates) for a user-defined pair of cameras?
Best regards,
Alexey Pasumansky,
Agisoft LLC

darkl1ght

  • Newbie
  • Posts: 15
Re: Extract the tie point positions (u and v)
« Reply #4 on: July 03, 2021, 04:59:34 PM »
Hi Alexey,

Yes, that is exactly what I am looking for. Currently, the script just prints the 2D coordinates on one of the images, but I need them for a user-defined pair of cameras OR for whatever images have been matched by Agisoft (those that get aligned in the Align Photos / Match Photos steps).

Regards,
Ayush

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 13812
Re: Extract the tie point positions (u and v)
« Reply #5 on: July 17, 2021, 01:19:38 AM »
Hello Ayush,

Please check the following script, which should return a list of tuples of 2D vectors representing the projections of the common points on the camera pair passed to the function:

Code:
import Metashape


def get_tie_points(camera1, camera2, chunk):
    # Returns a list of (projection on camera1, projection on camera2) tuples in pixels,
    # 0 if one of the cameras is not aligned, or None if there are no common points.

    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0

    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)

    # map track_id -> point index in the sparse cloud (-1 if the track has no point)
    point_ids = [-1] * len(tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id

    camera_matches_valid = dict()
    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()

    for camera in [camera1, camera2]:
        T = camera.transform.inv()         # unused below
        calib = camera.sensor.calibration  # unused below
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]

            if point_id < 0:
                continue
            if not points[point_id].valid:  # skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord  # projection in pixels (u, v)

        camera_matches_valid[camera] = valid_matches

    # points observed by both cameras
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])

    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output


chunk = Metashape.app.document.chunk
common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)
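
As a usage sketch (assuming the function above has already been run in the console), the same call could be repeated for every aligned camera pair in the chunk. The nested loop below is only an illustration and may be slow for large projects:

Code:
# Usage sketch: report the number of common tie points for every aligned camera pair.
# Note: the nested loop is O(n^2) in the number of cameras, so it may be slow for big chunks.
import Metashape

chunk = Metashape.app.document.chunk
aligned = [camera for camera in chunk.cameras if camera.transform]

for i in range(len(aligned)):
    for j in range(i + 1, len(aligned)):
        pairs = get_tie_points(aligned[i], aligned[j], chunk)
        if not pairs:  # covers both None and an empty result
            continue
        # each element of pairs is (projection on first image, projection on second image) in pixels
        print(aligned[i].label, aligned[j].label, len(pairs), "common tie points")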
Best regards,
Alexey Pasumansky,
Agisoft LLC