Agisoft Metashape
Agisoft Metashape => Python and Java API => Topic started by: mwbm07 on April 09, 2019, 12:17:28 AM
-
How can I extract the tie point positions (u and v, in pixels)? Is this possible via Python? What is the process for accessing this information?
-
Hello mwbm07,
Please check if the following code gives you an idea of how to get the coordinates of the tie point projections on the source images:
#compatibility Metashape Pro 1.5.2
from PySide2 import QtGui, QtCore, QtWidgets
import Metashape, time
def var0():
    t0 = time.time()
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    npoints = len(points)
    projections = chunk.point_cloud.projections
    point_proj = [] # list of strings
    app.processEvents()
    print("\nScript started ...")
    app.processEvents()
    point_ids = [-1] * len(point_cloud.tracks) # track_id -> point_id lookup table
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    for camera in chunk.cameras:
        if not camera.type == Metashape.Camera.Type.Regular: # skipping camera track keyframes
            continue
        if not camera.transform: # skipping not aligned cameras
            continue
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]
            if point_id < 0:
                continue
            if not points[point_id].valid: # skipping invalid points
                continue
            line = "{:s},{:d},{:.2f},{:.2f}".format(camera.label, point_id, proj.coord.x, proj.coord.y)
            point_proj.append(line) # or write the line to file
            #print(line)
    t1 = time.time() - t0
    app.processEvents()
    print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
    app.processEvents()
    return point_proj

####
global app, chunk
app = QtWidgets.QApplication.instance()
chunk = Metashape.app.document.chunk
result0 = var0()
print(len(result0))
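The key trick in the script above is the inverse lookup table from track_id to point index: projections only carry track ids, so the reverse index is built once instead of searching the point list for every projection. A minimal pure-Python illustration of that idea (no Metashape required; the point/track data here is invented):

```python
# Illustration of the track_id -> point_id mapping used in the script.
# Hypothetical stand-in data: position = point_id, value = track_id.
track_ids_of_points = [5, 0, 3]   # point 0 came from track 5, point 1 from track 0, ...
n_tracks = 7

point_ids = [-1] * n_tracks       # -1 marks tracks that produced no valid point
for point_id, track_id in enumerate(track_ids_of_points):
    point_ids[track_id] = point_id

# A projection referencing track 3 resolves to point 2;
# track 6 was never triangulated and resolves to -1.
print(point_ids[3], point_ids[6])
```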
-
Hi Alexey,
Along the same lines, is it possible to find the start and end of those track_id(s)? That is, on which camera (image) a particular track started, on which camera it ended, and the positions (x and y coordinates) on the source and destination images, just like what we can visualize with View Matches. An image of this is attached.
Is there a code to get that?
Thanks,
Ayush
-
Hello Ayush,
Do you mean to get the list of common tie points (with their respective 2D coordinates) for a user-defined pair of cameras?
-
Hi Alexey,
Yes, that is exactly what I am looking for. Currently it just prints the 2D coordinates on one of the images, but I need it for a user-defined pair of cameras, or for whatever image pairs Agisoft has matched (those that get aligned in the Align Photos / Match Photos steps).
Regards,
Ayush
-
Hello Ayush,
Please check the following script, which should return a list of 2D vector tuples representing the projections of the common points for the camera pair passed to the function:
import Metashape
def get_tie_points(camera1, camera2, chunk):
    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)
    point_ids = [-1] * len(chunk.point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    camera_matches_valid = dict()
    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()
    for camera in [camera1, camera2]:
        T = camera.transform.inv()
        calib = camera.sensor.calibration
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]
            if point_id < 0:
                continue
            if not points[point_id].valid: # skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord
        camera_matches_valid[camera] = valid_matches
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])
    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output

chunk = Metashape.app.document.chunk
common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)
-
Hi Alexey,
I get the following error: an unindent does not match any outer indentation level (line 47).
I tried modifying the indentation, but still no success.
cheers,
Javier
-
Hi xabierr,
try this code, where the indentation has been fixed:
import Metashape
def get_tie_points(camera1, camera2, chunk):
    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)
    point_ids = [-1] * len(chunk.point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    camera_matches_valid = dict()
    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()
    for camera in [camera1, camera2]:
        T = camera.transform.inv()
        calib = camera.sensor.calibration
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]
            if point_id < 0:
                continue
            if not points[point_id].valid: # skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord
        camera_matches_valid[camera] = valid_matches
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])
    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output

chunk = Metashape.app.document.chunk
common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)
Note this code is valid for versions 1.8 or lower...
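The core of the function is actually Metashape-independent: each camera collects the set of point ids it observes, and the common tie points are simply the set intersection, keyed back into each camera's projection dictionary. A stripped-down pure-Python sketch of that logic (camera names and coordinates here are invented):

```python
# Common tie points as a set intersection, mirroring get_tie_points.
# tie_points maps camera -> {point_id: (x, y) projection}; data is made up.
tie_points = {
    "cam1": {10: (100.0, 200.0), 11: (150.0, 210.0), 12: (300.0, 400.0)},
    "cam2": {11: (148.5, 205.0), 12: (295.0, 398.0), 13: (10.0, 20.0)},
}

# Point ids 11 and 12 are seen by both cameras; 10 and 13 are not.
common = set(tie_points["cam1"]) & set(tie_points["cam2"])
pairs = [(tie_points["cam1"][pid], tie_points["cam2"][pid]) for pid in sorted(common)]
print(pairs)
```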
-
Thanks so much. Should I select a pair of cameras first, or will I be asked for one?
-
Right now it uses the first two cameras in the project, so you would need to adapt it if you want to use two selected cameras.
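One possible adaptation (an untested sketch; it assumes the Camera.selected flag exposed by the Metashape Python API) is to filter the chunk's camera list for the selected ones before calling get_tie_points. The demo below uses stand-in objects so it runs without Metashape:

```python
from types import SimpleNamespace

def pick_selected_pair(cameras):
    """Return the first two cameras whose .selected flag is set, else None."""
    selected = [cam for cam in cameras if cam.selected]
    if len(selected) < 2:
        return None
    return selected[0], selected[1]

# Stand-in objects; in Metashape you would pass chunk.cameras instead.
cams = [SimpleNamespace(label="IMG_1", selected=False),
        SimpleNamespace(label="IMG_2", selected=True),
        SimpleNamespace(label="IMG_3", selected=True)]
pair = pick_selected_pair(cams)
print(pair[0].label, pair[1].label)  # IMG_2 IMG_3
```

In the script itself this would replace the hard-coded pair, e.g. `camera1, camera2 = pick_selected_pair(chunk.cameras)` followed by `get_tie_points(camera1, camera2, chunk)`.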
-
Thanks, that works great.
I have modified the code to write the tie points to a CSV file, including the camera labels and date/time:
import Metashape
import os

def get_tie_points(camera1, camera2, chunk):
    if not camera1.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    if not camera2.transform:
        Metashape.app.messageBox("Not aligned camera selected.")
        return 0
    projections = chunk.point_cloud.projections
    points = chunk.point_cloud.points
    tracks = chunk.point_cloud.tracks
    npoints = len(points)
    point_ids = [-1] * len(chunk.point_cloud.tracks)
    for point_id in range(0, npoints):
        point_ids[points[point_id].track_id] = point_id
    camera_matches_valid = dict()
    tie_points = dict()
    tie_points[camera1] = dict()
    tie_points[camera2] = dict()
    for camera in [camera1, camera2]:
        valid_matches = set()
        for proj in projections[camera]:
            track_id = proj.track_id
            point_id = point_ids[track_id]
            if point_id < 0:
                continue
            if not points[point_id].valid: # skipping invalid points
                continue
            valid_matches.add(point_id)
            tie_points[camera][point_id] = proj.coord
        camera_matches_valid[camera] = valid_matches
    valid = camera_matches_valid[camera1].intersection(camera_matches_valid[camera2])
    tie_point_output = list()
    for point_id in valid:
        tie_point_output.append((tie_points[camera1][point_id], tie_points[camera2][point_id]))
    if not len(tie_point_output):
        return None
    return tie_point_output

chunk = Metashape.app.document.chunk
project_path = Metashape.app.document.path

def save_tiepoints_to_csv(tiepoints):
    camera1 = chunk.cameras[0]
    camera2 = chunk.cameras[1]
    # CSV goes next to the project file, named after the camera pair
    filename = f"{camera1.label}_{camera2.label}.csv"
    filepath = os.path.join(os.path.dirname(project_path), filename)
    camera1_datetime = camera1.photo.meta['Exif/DateTime']
    camera2_datetime = camera2.photo.meta['Exif/DateTime']
    with open(filepath, 'w') as f:
        f.write('Camera1,DateTime1,x1,y1,Camera2,DateTime2,x2,y2\n')
        for tiepoint in tiepoints:
            x1, y1 = tiepoint[0]
            x2, y2 = tiepoint[1]
            f.write(f'{camera1.label},{camera1_datetime},{x1},{y1},{camera2.label},{camera2_datetime},{x2},{y2}\n')

common_tiepoints = get_tie_points(chunk.cameras[0], chunk.cameras[1], chunk)
if common_tiepoints:
    save_tiepoints_to_csv(common_tiepoints)
else:
    print("No common tiepoints found.")
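One caveat with writing the rows by hand: if a label or timestamp ever contains a comma, the file breaks. Python's csv module handles quoting automatically; a minimal sketch with invented rows (writing to an in-memory buffer so it runs anywhere):

```python
import csv
import io

# Invented rows in the same column layout as the script above.
rows = [
    ("IMG_1.JPG", "2023:01:01 10:00:00", 1587.54, 2149.74,
     "IMG_2.JPG", "2023:01:01 10:00:05", 4565.58, 2531.59),
]

buf = io.StringIO()  # swap in open(filepath, "w", newline="") for a real file
writer = csv.writer(buf)
writer.writerow(["Camera1", "DateTime1", "x1", "y1",
                 "Camera2", "DateTime2", "x2", "y2"])
writer.writerows(rows)
print(buf.getvalue())
```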
I would also like to include the point ids and raw digital number/color for the grayscale band. Is that possible?
-
xabier,
if you want the point_id, you can just add it to the tuple appended to tie_point_output. You can also add the point color (an RGB 3-tuple) using tracks[points[point_id].track_id].color, as in:
tie_point_output.append((point_id, tracks[points[point_id].track_id].color, tie_points[camera1][point_id], tie_points[camera2][point_id]))
With that change, each entry of tie_point_output would look like:
tie_point_output[-1] = (2047, (149, 170, 185), Vector([1587.5377197265625, 2149.742919921875]), Vector([4565.5849609375, 2531.593505859375]))
= (PointId, Color(R, G, B), Projected coordinates Camera1 Vector, Projected coordinates Camera2 Vector)
Hope this helps.
-
Hello xabierr,
Thank you for sharing the sample data.
It appears that access to non-8-bit RGB colors for the tie points is currently limited via the Python API. We'll try to resolve this with changes in an upcoming Python API version; meanwhile, I will try to suggest a workaround for your task.
-
Hello xabierr,
In pre-release of 2.0.2 (build 16220) the issue with the non-RGB colors of tie points has been fixed:
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_2_0_2_x64.msi
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_2_0_2.dmg
https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_2_0_2_amd64.tar.gz
Please check, if your script works as expected now.
Note that to adapt the code from 1.8.5 to 2.0.2 you need to change chunk.point_cloud to chunk.tie_points in your script.
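If a script has to run under both API generations, the rename can be handled with a small version-agnostic helper (an untested sketch; it only assumes that 2.x chunks expose tie_points while 1.x chunks expose the sparse cloud as point_cloud). The demo uses stand-in objects so it runs without Metashape:

```python
from types import SimpleNamespace

def tie_point_container(chunk):
    """Return chunk.tie_points (Metashape >= 2.0) or chunk.point_cloud (<= 1.8)."""
    # Check tie_points first: in 2.x, point_cloud refers to a different object.
    container = getattr(chunk, "tie_points", None)
    if container is None:
        container = getattr(chunk, "point_cloud", None)
    return container

# Stand-ins for an old-API chunk and a new-API chunk:
old_chunk = SimpleNamespace(point_cloud="sparse point cloud")
new_chunk = SimpleNamespace(tie_points="tie points", point_cloud="dense point cloud")
print(tie_point_container(old_chunk), tie_point_container(new_chunk))
```

In the scripts above, lines like `projections = chunk.point_cloud.projections` would then become `projections = tie_point_container(chunk).projections`.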
-
So how do I modify the above script to copy the INVALID tie points between two images back into the project as markers? Ideally with the full length of their tracks across all the other photos they were detected in.
I found that
npoints = len(points)
leads to an error whenever there are only invalid points, since npoints = 0. This is exactly the case I need to fix so often!
On the other hand, commenting out conditions like:
if not points[point_index].valid:
continue
actually did not change the output of many of the examples I looked at (presumably because the loop over npoints never reaches those points?).
I have seen Paulo write invalid tie points to file, though - but how?
Not being able to select tie points in the photo view, validate them, and copy them over from photo to photo has been a BIG problem for us for many years...
Please, Alexey, improve this part! :)
Cheers
Tom