Agisoft Metashape
Agisoft Metashape => Python and Java API => Topic started by: spatialdigger on July 30, 2021, 10:38:07 AM
-
I have an external scripted workflow which does the usual processing (adds photos, aligns, imports target coords, through to dense point cloud; code included below for reference).
I would like to access each image/photo name and the calculated camera XYZ location, plus any other orientation information that exists, so this can be brought into a pandas DataFrame.
Any idea where to start?
doc = Metashape.Document()
# set up a chunk
chunk = doc.addChunk()
# add photos
chunk.addPhotos(filenames=photo_list)
print(str(len(photo_list)) + " Added")
# detect markers
chunk.detectMarkers()
# assign crs
chunk.crs = Metashape.CoordinateSystem("EPSG::27700")
count_photos = len(photo_list)
# import target coords
chunk.importReference(path=targets_path, format=Metashape.ReferenceFormatCSV,
columns='nxyz', delimiter=',',create_markers=True, skip_rows=1)
chunk.matchPhotos(downscale=2, generic_preselection=True, reference_preselection=True, keypoint_limit=80000,
tiepoint_limit=3000) # , filter_mask=False, mask_tiepoints=True, keypoint_limit=40000, tiepoint_limit=4000
chunk.alignCameras(adaptive_fitting=True)
chunk.buildDepthMaps(downscale=4)
chunk.buildDenseCloud()
chunk.buildModel(source_data=Metashape.DataSource.DenseCloudData, surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation, face_count=Metashape.MediumFaceCount)
doc.save(filepath)
chunk = doc.chunk
epsgCode = 27700
localCRS = Metashape.CoordinateSystem("EPSG::" + str(epsgCode))
proj = Metashape.OrthoProjection()
proj.crs = localCRS
chunk.buildOrthomosaic(surface_data=Metashape.DataSource.ModelData, blending_mode=Metashape.MosaicBlending, projection=proj)
chunk.exportRaster(path=ortho_path, image_format=Metashape.ImageFormatJPEG, save_world=True, projection=proj)
docu_path = os.path.join(docu_path, "report.pdf")
chunk.exportReport(path=docu_path, title=job + selected_job)
doc.save(filepath)
-
Hi spatialdigger,
take a look at the following script from the Agisoft GitHub: https://github.com/agisoft-llc/metashape-scripts/blob/master/src/save_estimated_reference.py. It saves to a text file, for each camera, its reference source location, estimated location, error and sigma, as well as the reference source rotation, estimated rotation, error and sigma....
as in the following (where XYZ is location info and YPR is rotation info in yaw, pitch, roll format):
IMG_6256.JPG
XYZ source: 350457.283156 2852834.941277 760.147899
XYZ error: -0.501244 0.393616 0.685989
XYZ estimated: 350456.782032 2852835.334798 760.833885
XYZ sigma: 0.014008 0.014108 0.019027
YPR source: 77.144 6.770 0.814
YPR error: 5.127 1.539 -2.952
YPR estimated: 82.271 8.309 -2.138
YPR sigma: 0.001 0.004 0.004
IMG_6257.JPG
XYZ source: 350505.734175 2852832.854006 759.132261
XYZ error: 0.582248 -0.318101 1.874715
XYZ estimated: 350506.316283 2852832.535981 761.006984
XYZ sigma: 0.013379 0.013864 0.018723
YPR source: 96.045 4.840 -7.320
YPR error: -0.441 2.846 7.053
YPR estimated: 95.604 7.686 -0.267
YPR sigma: 0.001 0.004 0.005
...
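If you go the text-file route, a small parser can pull that layout straight into pandas. This is just a sketch that assumes the exact format shown above (one label line followed by eight "XYZ/YPR ..." lines per camera); the sample string below reuses the first camera from the output quoted earlier:

```python
import pandas as pd

SAMPLE = """IMG_6256.JPG
XYZ source: 350457.283156 2852834.941277 760.147899
XYZ error: -0.501244 0.393616 0.685989
XYZ estimated: 350456.782032 2852835.334798 760.833885
XYZ sigma: 0.014008 0.014108 0.019027
YPR source: 77.144 6.770 0.814
YPR error: 5.127 1.539 -2.952
YPR estimated: 82.271 8.309 -2.138
YPR sigma: 0.001 0.004 0.004
"""

def parse_reference_dump(text):
    """Parse the per-camera blocks into one DataFrame row per camera."""
    rows, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(("XYZ", "YPR")):
            kind, rest = line.split(":", 1)
            system, tag = kind.split()          # e.g. ("XYZ", "estimated")
            values = [float(v) for v in rest.split()]
            axes = ("x", "y", "z") if system == "XYZ" else ("yaw", "pitch", "roll")
            for axis, val in zip(axes, values):
                current[f"{tag}_{system}_{axis}"] = val
        else:
            # Any non-XYZ/YPR line starts a new camera block
            current = {"label": line}
            rows.append(current)
    return pd.DataFrame(rows)

df = parse_reference_dump(SAMPLE)
```

This yields one row per camera with 25 columns (label plus 3 values for each of the 8 source/error/estimated/sigma lines).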
This should give you a good start.... of course this is a more elaborate script, developed to help understand how the different items are calculated...
If you just want to export the camera id, estimated location and rotation to some text file, just use chunk.exportReference(...) as in the attached screen copy.... consult the API reference manual for details on the parameters used...
-
Thanks Paulo,
I'm trying to build it up bit by bit so I understand.
I can access the cameras via chunk.cameras
But when I run the following I get:
TypeError: argument 1 must be Metashape.Metashape.Vector, not None
for camera in chunk.cameras:
    error = chunk.transform.matrix.mulp(camera.center) - chunk.crs.unproject(camera.reference.location)
-
OK, this kind of does the job. I'd rather have it in a pandas DataFrame, but I can read it in from the CSV:
chunk.exportReference(path=archive_path + '\\cameras.csv', format=Metashape.ReferenceFormatCSV, items=Metashape.ReferenceItemsCameras, columns='nuvwdef', delimiter=',')
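Reading that CSV back into pandas is then a one-liner with pd.read_csv. A sketch with a synthetic stand-in for the exported file; the column meanings assumed here for 'nuvwdef' (label, estimated x/y/z, error x/y/z) and the '#'-prefixed header line should be checked against the actual export and the API reference for your version:

```python
import io
import pandas as pd

# Synthetic stand-in for the file written by chunk.exportReference(...)
csv_text = """#Label,X_est,Y_est,Z_est,X_err,Y_err,Z_err
IMG_6256.JPG,350456.782032,2852835.334798,760.833885,-0.501244,0.393616,0.685989
IMG_6257.JPG,350506.316283,2852832.535981,761.006984,0.582248,-0.318101,1.874715
"""

cameras = pd.read_csv(
    io.StringIO(csv_text),   # replace with the real path, e.g. archive_path + '\\cameras.csv'
    comment="#",             # skip the '#'-prefixed header line
    header=None,
    names=["label", "x_est", "y_est", "z_est", "x_err", "y_err", "z_err"],
)
```

Numeric columns come back as float64, so no string-to-number conversion is needed afterwards.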
-
Good to hear the CSV export does the job.
As for the previous error: it means one of your chunk's cameras is either not aligned (camera.center is None) or has no reference location (camera.reference.location is None).... it could also be that you have some camera keyframes (part of an animation or camera track) in your project, as those camera types have no reference location....
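An explicit guard is usually cleaner than a broad try/except: skip any camera missing either value before doing the arithmetic. A minimal sketch of the pattern, with plain tuples and a stand-in class instead of the Metashape camera objects so it runs anywhere (the real attributes are camera.center and camera.reference.location):

```python
# Stand-in for a Metashape camera: .center is None for unaligned cameras,
# .reference_location is None when no source reference exists.
class FakeCamera:
    def __init__(self, label, center, reference_location):
        self.label = label
        self.center = center
        self.reference_location = reference_location

cameras = [
    FakeCamera("IMG_0001.JPG", (10.0, 20.0, 5.0), (10.5, 19.5, 5.2)),
    FakeCamera("slate.JPG", None, None),  # e.g. a project slate that never aligns
]

errors = {}
for cam in cameras:
    # Explicit guard: skip instead of letting the subtraction raise TypeError
    if cam.center is None or cam.reference_location is None:
        continue
    errors[cam.label] = tuple(
        c - r for c, r in zip(cam.center, cam.reference_location)
    )
```

The slate image is silently skipped, and every entry in `errors` is guaranteed to come from an aligned, referenced camera.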
-
We have a slate with the project details in shot, so obviously that won't align, and we wouldn't want it to. I've added a try/except clause, but I welcome recommendations, including a way to avoid writing to CSV and keep everything in a pandas DataFrame.
-
Hi again,
if you want to write a pandas DataFrame with ImageId, estimated position (3) and estimated rotation (3), i.e. 7 columns, the following code will do the trick:
import numpy as np
import pandas as pd
import Metashape

def getAntennaTransform(sensor):
    location = sensor.antenna.location
    if location is None:
        location = sensor.antenna.location_ref
    rotation = sensor.antenna.rotation
    if rotation is None:
        rotation = sensor.antenna.rotation_ref
    return Metashape.Matrix.Diag((1, -1, -1, 1)) * Metashape.Matrix.Translation(location) * Metashape.Matrix.Rotation(Metashape.Utils.ypr2mat(rotation))

def getcolumnsName(euler_angles):
    if euler_angles == Metashape.EulerAnglesOPK:
        return ['Id', 'X_Est', 'Y_Est', 'Z_Est', 'Omega_Est', 'Phi_Est', 'Kappa_Est']
    if euler_angles == Metashape.EulerAnglesPOK:
        return ['Id', 'X_Est', 'Y_Est', 'Z_Est', 'Phi_Est', 'Omega_Est', 'Kappa_Est']
    if euler_angles == Metashape.EulerAnglesYPR:
        return ['Id', 'X_Est', 'Y_Est', 'Z_Est', 'Yaw_Est', 'Pitch_Est', 'Roll_Est']
    if euler_angles == Metashape.EulerAnglesANK:
        return ['Id', 'X_Est', 'Y_Est', 'Z_Est', 'Alpha_Est', 'Nu_Est', 'Kappa_Est']

chunk = Metashape.app.document.chunk
est = list()
for camera in chunk.cameras:
    if not camera.transform:
        continue
    transform = chunk.transform.matrix
    crs = chunk.crs
    if chunk.camera_crs:
        transform = Metashape.CoordinateSystem.datumTransform(crs, chunk.camera_crs) * transform
        crs = chunk.camera_crs
    ecef_crs = crs.geoccs
    camera_transform = transform * camera.transform
    antenna_transform = getAntennaTransform(camera.sensor)
    location_ecef = camera_transform.translation() + camera_transform.rotation() * antenna_transform.translation()
    rotation_ecef = camera_transform.rotation() * antenna_transform.rotation()
    est_loc = Metashape.CoordinateSystem.transform(location_ecef, ecef_crs, crs)
    if chunk.euler_angles == Metashape.EulerAnglesOPK or chunk.euler_angles == Metashape.EulerAnglesPOK:
        localframe = crs.localframe(location_ecef)
    else:
        localframe = ecef_crs.localframe(location_ecef)
    est_rot = Metashape.Utils.mat2euler(localframe.rotation() * rotation_ecef, chunk.euler_angles)
    est.append([camera.label, est_loc.x, est_loc.y, est_loc.z, est_rot.x, est_rot.y, est_rot.z])
pd.DataFrame(np.array(est), columns=getcolumnsName(chunk.euler_angles))
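One detail worth noting about the last line: passing the rows through np.array() coerces every cell to a string, because NumPy picks a single dtype for the whole array and the label column forces it to be text. Building the DataFrame straight from the list keeps per-column dtypes. A sketch with synthetic rows in the same shape as `est`:

```python
import numpy as np
import pandas as pd

# Synthetic rows shaped like `est`: [label, X, Y, Z, yaw, pitch, roll]
est = [
    ["IMG_6256.JPG", 350456.78, 2852835.33, 760.83, 82.271, 8.309, -2.138],
    ["IMG_6257.JPG", 350506.32, 2852832.54, 761.01, 95.604, 7.686, -0.267],
]
cols = ["Id", "X_Est", "Y_Est", "Z_Est", "Yaw_Est", "Pitch_Est", "Roll_Est"]

# np.array(est) makes every cell a string (one dtype for the whole array)...
as_array = pd.DataFrame(np.array(est), columns=cols)
# ...so build the frame from the list directly to keep numeric columns numeric.
df = pd.DataFrame(est, columns=cols)
```

With the direct construction you can do arithmetic on the coordinate columns immediately, without an extra astype(float) pass.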
and the result would be like the following for a 3-image data set....