Hi all,
I'm using Agisoft Metashape as part of my workflow studying animal behaviour. I used the software to construct an orthomosaic and subsequently obtained drone videos of animals moving in this area. I then used computer vision algorithms to obtain coordinates of animals in these videos. To then convert these x-y coordinates from pixel space to geographical coordinates, I used the following code:
import Metashape
import pandas as pd
# Load the Metashape document and access the first chunk and its surface model
doc = Metashape.app.document
chunk = doc.chunks[0]
surface = chunk.model
# Switch to a different chunk for processing
chunk = doc.chunks[2]
print(f"Processing Chunk: {chunk.label}")
# Read the CSV file into a DataFrame
csv_path = '/Users/vivekhsridhar/Library/Mobile Documents/com~apple~CloudDocs/Documents/Metashape/TalChhapar/output/test_uv.csv'
df = pd.read_csv(csv_path)
# Initialize an empty list to store the data
data = []
# Iterate through cameras in the chunk
for camera in chunk.cameras:
    # Filter the DataFrame to only include rows related to this camera
    df_filtered = df[df['Camera'] == camera.label]
    # Iterate through the filtered DataFrame and process each point
    for index, row in df_filtered.iterrows():
        idx = row['idx']
        camera_label = row['Camera']
        u = row['u']
        v = row['v']
        # Create a 2D vector from the pixel coordinates
        coords_2D = Metashape.Vector([u, v])
        # Pick a point on the model surface by casting a ray from the camera
        # center through the unprojected 2D coordinates
        point_internal = surface.pickPoint(camera.center, camera.unproject(coords_2D))
        # Transform the internal 3D point to world coordinates
        point3D_world = chunk.crs.project(chunk.transform.matrix.mulp(point_internal))
        # Append the data to the list
        data.append({
            'idx': idx,
            'Camera': camera_label,
            'u': u,
            'v': v,
            'x': point3D_world.x,
            'y': point3D_world.y,
            'z': point3D_world.z
        })
# Convert the list to a DataFrame
df_output = pd.DataFrame(data)
# Save the DataFrame to a CSV file
output_csv_path = '/Users/vivekhsridhar/Library/Mobile Documents/com~apple~CloudDocs/Documents/Metashape/TalChhapar/output/test_3D_world.csv'
df_output.to_csv(output_csv_path, index=False)
print(f"3D world coordinates saved to {output_csv_path}")
I know the code works because I've tested it with data from a single frame. However, when I run it on entire videos, it fails with the following error message:
2024-07-17 10:21:31 Traceback (most recent call last):
2024-07-17 10:21:31 File "/Users/vivekhsridhar/Library/Mobile Documents/com~apple~CloudDocs/Documents/Metashape/TalChhapar/scripts/test.py", line 40, in <module>
2024-07-17 10:21:31 point3D_world = chunk.crs.project(chunk.transform.matrix.mulp(point_internal))
2024-07-17 10:21:31 TypeError: argument 1 must be Metashape.Vector, not None
2024-07-17 10:21:31 Error: argument 1 must be Metashape.Vector, not None
I understand this occurs because my variable point_internal is None. However, this shouldn't happen. I manually examined one specific case where pickPoint returns None for the pixel coordinates (5025.5, 1802.5). Yet even a tiny change in the coordinates, to (5025.50001, 1802.5) or (5025.49999, 1802.5), works fine, and the software returns reasonable values for point_internal.
Since I'm not looking for that level of precision, I will work around this by jittering the coordinates by a small amount whenever the software returns None, which will solve my problem. I wanted to report it, however, in case it is a bug in the software. If it isn't a bug, I would of course also like to understand what's causing it.
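For reference, the workaround I have in mind looks roughly like this. It is only a minimal sketch: the helper name, the jitter magnitude eps, and the retry count are arbitrary choices on my part, and pick stands in for the surface.pickPoint / camera.unproject call in the loop above.

```python
import random

def pick_point_with_jitter(pick, u, v, eps=1e-4, max_tries=5):
    """Call pick(u, v); if it returns None, retry with slightly
    jittered pixel coordinates until a result is found or the
    retry budget is exhausted. Returns None if all tries fail."""
    result = pick(u, v)
    tries = 0
    while result is None and tries < max_tries:
        result = pick(u + random.uniform(-eps, eps),
                      v + random.uniform(-eps, eps))
        tries += 1
    return result
```

In my loop, pick would be something like lambda u, v: surface.pickPoint(camera.center, camera.unproject(Metashape.Vector([u, v]))), and I would skip the row (or log it) if the helper still returns None.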
Best,
Vivek