Messages - jo

1
General / Re: Altitude used in Processing UAV (DJI Phantom 4 Pro)
« on: April 26, 2018, 11:59:37 PM »
I believe both the embedded GPS and GCPs are important for calibrating the non-metric cameras we use in UAS mapping.

Incoherence between the embedded GPS positions and the GCP positions is unfortunately easily balanced by changing either the calibration or the exterior orientation during the optimization process (provided that both are used in the self-calibrating bundle block adjustment, of course).

I have observed drift in the altitude information from Phantom 4 Pro images (fw 1.06) that affects the accuracy of photogrammetric results, as well as incoherence between the altitudes of different flights. I do not know yet how to handle this.

Jojo

2
Python and Java API / Re: Export Alignment result to another software
« on: December 22, 2015, 01:35:58 PM »
Hi everyone,

Here is a current version of the script; it is not perfect, but it works well on PS 1.2.1.

The coordinate system must be defined as a projected coordinate system in PS (Reference settings), otherwise the script may not work.
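As a quick sanity check before running the script, one way to tell a projected system from a geographic one is to inspect the CRS definition in WKT form (whether the PhotoScan API exposes this, e.g. as `chunk.crs.wkt`, depends on the version; treat that attribute name as an assumption). A minimal standalone sketch on plain WKT strings:

```python
def is_projected(wkt):
    """Return True if a WKT CRS definition describes a projected system.

    Projected systems start with the PROJCS keyword, geographic ones with
    GEOGCS. (Sketch only; assumes WKT1-style definitions.)
    """
    return wkt.strip().upper().startswith("PROJCS")

# hypothetical, truncated WKT strings for illustration
utm = 'PROJCS["WGS 84 / UTM zone 31N"]'
wgs84 = 'GEOGCS["WGS 84"]'
```

A geographic system such as plain WGS 84 would fail this check and should be replaced by a projected one in the Reference settings before export.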

Code: [Select]
# compatibility: PhotoScan Pro 1.2.1
# saves multiple files with tie points per photo pair -- Alexey Pasumansky (AgiSoft LLC) (great thanks Alexey!), Jonathan Lisein and Samuel Quevauviller (ULg), 12/2015
# The aim of the script is to use the alignment result (tie points, camera orientation) of PhotoScan in the photogrammetric suite MicMac.
# The script exports valid tie points and exterior orientation, but no proper conversion of the camera calibration is performed.
# The command: Tapas RadialExtended ".*.JPG" ExpTxt=1 Out=PS2MM InOri=PS FrozenPoses=".*"
# will compute a proper camera calibration without changing the exterior orientation.

import os
import re  # regular expressions, for string manipulation
import time

import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk
Prof = 100

print("Export of tie points and camera orientation from PhotoScan 1.2.1 to the MicMac software: started...")

Ori = PhotoScan.app.getString("How do you want to name the orientation database? ex: PSmax1000TP", "PS")
path = PhotoScan.app.getExistingDirectory("Specify the output folder for the Homol and orientation database:")  # opens a save-to dialog box
myint = PhotoScan.app.getInt(label="Do you want to export tie points? (or just ori). 0=no export of tie points", value=1)
ExportTP = (myint != 0)
pathHomol = os.path.join(path, "Homol")
if not os.path.exists(pathHomol):
    os.mkdir(pathHomol)
t0 = time.time()

OriBD = "Ori-" + Ori
pathOri = os.path.join(path, OriBD)
if not os.path.exists(pathOri):
    os.mkdir(pathOri)

point_cloud = chunk.point_cloud
point_proj = point_cloud.projections

# loop on every camera of the image block
for i in range(len(chunk.cameras)):

    photo1 = chunk.cameras[i]
    # camera.transform holds the rotation matrix + translation (= camera center);
    # testing it is a way to check whether the camera is aligned
    if not photo1.transform:
        continue

    # --------------------------- export of the camera exterior orientation
    T = chunk.transform.matrix
    # Note (Alexey): the camera transformation matrix would normally be multiplied by a 180-degree
    # rotation matrix, since the direction of the Z axis in the camera system is inverted compared
    # to the world coordinate system; but MicMac uses the same convention, so no flip is needed.
    C = photo1.transform
    # compute the camera center (photo.transform itself transforms pixel locations into chunk locations)
    cen_p = photo1.center
    cen_t = T.mulp(cen_p)
    cen_t_prj = chunk.crs.project(cen_t)
    # chunk.transform returns the camera position/orientation in a geocentric coordinate system;
    # crs.localframe() is also needed to estimate the orientation angles at the camera center.
    m = chunk.crs.localframe(cen_t)
    R = m * T * C
    # the rows of the rotation matrix must be normalized
    r1 = PhotoScan.Vector([R[0, 0], R[0, 1], R[0, 2]])
    r2 = PhotoScan.Vector([R[1, 0], R[1, 1], R[1, 2]])
    r3 = PhotoScan.Vector([R[2, 0], R[2, 1], R[2, 2]])
    rot = PhotoScan.Matrix([r1.normalized(), r2.normalized(), r3.normalized()])
    # for computing angles: PhotoScan.utils.mat2opk(rot)

    # create the orientation file for the camera (MicMac format)
    oriFile = pathOri + "/Orientation-" + photo1.label + ".xml"
    file = open(oriFile, "w")
    file.write("<?xml version=\"1.0\" ?>\n")
    file.write("<ExportAPERO>\n")
    file.write(" <OrientationConique>\n")
    file.write(" <OrIntImaM2C>\n")
    # these tags are useless in this situation but are still mandatory for compliance with the MicMac convention
    file.write(" <I00>0 0</I00>\n")
    file.write(" <V10>1 0</V10>\n")
    file.write(" <V01>0 1</V01>\n")
    file.write(" </OrIntImaM2C>\n")
    file.write(" <TypeProj>eProjStenope</TypeProj>\n")
    # Determine the calibration name. In MicMac, the calibration file is named
    # AutoCal_Foc-<focal length in 1/1000 mm>_Cam-<camera model>.xml
    foc = int(1000 * photo1.sensor.focal_length)
    model = photo1.sensor.label.split()[0]
    model = re.sub('[-!@#$]', '', model)
    CalibName = OriBD + "/AutoCal_Foc-" + str(foc) + "_Cam-" + model + ".xml"
    file.write(" <FileInterne>" + CalibName + "</FileInterne>\n")
    file.write(" <RelativeNameFI>true</RelativeNameFI>\n")
    file.write(" <Externe>\n")
    # AltiSol (flight altitude) and Profondeur (depth) are redundant information which does not need to be accurate here.
    # Some images do not seem to have a point with the track_id identified below; as for the focal length,
    # a single value, computed from image number 1, is used for the whole image set.
    if i == 1:
        tiep_proj = point_cloud.projections[photo1][0]
        XYZchunk = point_cloud.points[tiep_proj.track_id].coord  # 4 components instead of 3
        XYZchunk = PhotoScan.Vector([XYZchunk[0], XYZchunk[1], XYZchunk[2]])
        XYZground = T.mulp(XYZchunk)
        XYZground = chunk.crs.project(XYZground)
        Prof = cen_t_prj[2] - XYZground[2]
    file.write(" <AltiSol>" + str(Prof) + "</AltiSol>\n")
    file.write(" <Profondeur>" + str(Prof) + " </Profondeur>\n")
    file.write(" <Time>-1.00000000000000002e+30</Time>\n")
    file.write(" <KnownConv>eConvApero_DistM2C</KnownConv>\n")
    # camera position center
    file.write(" <Centre>" + str(cen_t_prj[0]) + " " + str(cen_t_prj[1]) + " " + str(cen_t_prj[2]) + "</Centre>\n")
    file.write(" <ParamRotation>\n")
    file.write(" <CodageMatr>\n")
    # rotation matrix
    file.write(" <L1>" + str(rot[0, 0]) + " " + str(rot[0, 1]) + " " + str(rot[0, 2]) + "</L1>\n")
    file.write(" <L2>" + str(rot[1, 0]) + " " + str(rot[1, 1]) + " " + str(rot[1, 2]) + "</L2>\n")
    file.write(" <L3>" + str(rot[2, 0]) + " " + str(rot[2, 1]) + " " + str(rot[2, 2]) + "</L3>\n")
    file.write(" </CodageMatr>\n")
    file.write(" </ParamRotation>\n")
    file.write(" </Externe>\n")
    file.write(" <ConvOri>\n")
    file.write(" <KnownConv>eConvApero_DistM2C</KnownConv>\n")
    file.write(" </ConvOri>\n")
    file.write("</OrientationConique>\n")
    file.write("</ExportAPERO>\n")
    file.close()
    print("End of orientation export for camera " + str(photo1.label) + " (index in loop: " + str(i) + ")")

    if ExportTP:
        # create the folder storing the set of tie points for this image
        pathPastis1 = os.path.join(pathHomol, "Pastis" + photo1.label)
        if not os.path.exists(pathPastis1):
            os.mkdir(pathPastis1)

        # loop on all the remaining cameras of the image block (= chunk)
        for j in range(i + 1, len(chunk.cameras)):

            photo2 = chunk.cameras[j]
            # is the camera aligned?
            if not photo2.transform:
                continue

            pathPastis2 = os.path.join(pathHomol, "Pastis" + photo2.label)
            if not os.path.exists(pathPastis2):
                os.mkdir(pathPastis2)

            # associative arrays = the dictionary class in Python
            matches1 = dict()
            matches2 = dict()

            for proj in point_proj[photo1]:
                # the dictionary key for this projection (u, v) is the track index of the tie point (X, Y, Z)
                matches1[proj.track_id] = proj.coord
            for proj in point_proj[photo2]:
                matches2[proj.track_id] = proj.coord

            # check whether there are tie points shared by the camera pair
            if len(set(matches1.keys()).intersection(set(matches2.keys()))):

                # the tie points of the camera pair are stored in two (redundant) text files:
                fileHomol12 = open(os.path.join(pathPastis1, photo2.label + ".txt"), "wt")
                fileHomol21 = open(os.path.join(pathPastis2, photo1.label + ".txt"), "wt")

                # In PhotoScan there are a few duplicated tie points in the sparse point cloud (same model
                # coordinates X, Y, Z but different track_id). Duplicates are not allowed in MicMac, and since
                # dictionaries cannot contain duplicate keys, the following code removes them.
                # (The inner index is named k, not i, so that the outer camera index is not clobbered.)
                tp_list = []
                for key in matches1.keys():
                    u, v = matches1[key]
                    tp_list.append(str(u) + ";" + str(v) + "#" + str(key))
                tp_list.sort()
                matches11 = dict()
                for k in range(len(tp_list) - 1):
                    ss1 = tp_list[k].split("#")
                    ss2 = tp_list[k + 1].split("#")
                    if ss1[0] != ss2[0]:
                        ss3 = ss1[0].split(";")
                        matches11[int(ss1[1])] = [ss3[0], ss3[1]]
                # same removal of duplicates, but on the second matches list
                tp_list = []
                for key in matches2.keys():
                    u, v = matches2[key]
                    tp_list.append(str(u) + ";" + str(v) + "#" + str(key))
                tp_list.sort()
                matches21 = dict()
                for k in range(len(tp_list) - 1):
                    ss1 = tp_list[k].split("#")
                    ss2 = tp_list[k + 1].split("#")
                    if ss1[0] != ss2[0]:
                        ss3 = ss1[0].split(";")
                        matches21[int(ss1[1])] = [ss3[0], ss3[1]]

                # loop on every tie point shared by the two cameras
                for tiepoint in set(matches11.keys()).intersection(set(matches21.keys())):

                    u1, v1 = matches1[tiepoint]
                    u2, v2 = matches2[tiepoint]

                    # u1, v1, u2, v2
                    fileHomol12.write("{:.2f} {:.2f} {:.2f} {:.2f}\n".format(u1, v1, u2, v2))
                    # u2, v2, u1, v1
                    fileHomol21.write("{:.2f} {:.2f} {:.2f} {:.2f}\n".format(u2, v2, u1, v1))

                fileHomol12.close()
                fileHomol21.close()

for calib in chunk.sensors:

    # generation of the calibration file / distortion model "RadialBasic"
    foc = int(1000 * calib.focal_length)
    model = calib.label.split()[0]
    model = re.sub('[-!@#$]', '', model)
    CalibName = pathOri + "/AutoCal_Foc-" + str(foc) + "_Cam-" + model + ".xml"
    file = open(CalibName, "w")
    file.write("<?xml version=\"1.0\" ?>\n")
    file.write("<ExportAPERO>\n")
    file.write(" <CalibrationInternConique>\n")
    file.write(" <KnownConv>eConvApero_DistM2C</KnownConv>\n")
    file.write(" <PP>" + str(calib.calibration.cx) + "  " + str(calib.calibration.cy) + "</PP>\n")
    file.write(" <F>" + str(calib.calibration.fx) + "</F>\n")
    file.write(" <SzIm>" + str(calib.calibration.width) + "  " + str(calib.calibration.height) + "</SzIm>\n")
    file.write(" <CalibDistortion>\n")
    file.write(" <ModRad>\n")
    file.write(" <CDist>" + str(calib.calibration.cx) + "  " + str(calib.calibration.cy) + "</CDist>\n")
    # set the additional parameters of the calibration to 0; the camera calibration must be refined
    # in MicMac on the basis of the tie points and the exterior orientation
    file.write(" <CoeffDist>0</CoeffDist>\n")
    file.write(" <CoeffDist>0</CoeffDist>\n")
    file.write(" <PPaEqPPs>true</PPaEqPPs>\n")
    file.write(" </ModRad>\n")
    file.write(" </CalibDistortion>\n")
    file.write(" </CalibrationInternConique>\n")
    file.write("</ExportAPERO>\n")
    file.close()

t1 = time.time() - t0

print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
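The duplicate-removal step in the script above (sorting "u;v#track_id" strings and comparing neighbours) can also be expressed directly with a dictionary keyed on the image coordinates. This is a standalone sketch of the same idea (keeping one arbitrary representative per coordinate), for illustration only, not part of the original script:

```python
def remove_duplicate_projections(matches):
    """Keep at most one track_id per (u, v) image coordinate.

    `matches` maps track_id -> (u, v). PhotoScan's sparse cloud may contain
    several track_ids with identical image coordinates, which MicMac rejects,
    so only one representative per coordinate is kept (the smallest track_id).
    """
    seen = {}
    for track_id in sorted(matches):
        uv = tuple(matches[track_id])
        if uv not in seen:
            seen[uv] = track_id
    return {tid: list(uv) for uv, tid in seen.items()}

# example: track ids 10 and 11 project to the same pixel, so one of them is dropped
deduped = remove_duplicate_projections({11: (100.0, 200.0),
                                        10: (100.0, 200.0),
                                        12: (50.0, 60.0)})
```

The same filtering has to be applied independently to both cameras of a pair before intersecting the surviving track ids, exactly as the script does with `matches11` and `matches21`.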



3
Python and Java API / Export Alignment result to another software
« on: December 11, 2014, 04:13:42 PM »
Hi Everyone,

I use two photogrammetric (structure-from-motion) software packages for my scientific research: MicMac (command-line tools) and PhotoScan. I like both, and I find it very interesting to compare them in order to improve my understanding of the fascinating computer vision algorithms involved. I recently exported the alignment result of PhotoScan into MicMac by means of the following script, and I would like to share it in case someone else is willing to use it. In addition, I thank Alexey for his help.

Jo

Code: [Select]
# compatibility: PhotoScan Pro 1.1.0
# saves multiple files with tie points per photo pair -- Alexey Pasumansky (AgiSoft LLC) (great thanks Alexey!), Jonathan Lisein and Samuel Quevauviller (ULg), 12/2014
# The aim of the script is to use the alignment result (tie points, camera orientation and calibration)
# of PhotoScan in the photogrammetric suite MicMac.

import os
import time

import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

print("Export of tie points and camera orientation from PhotoScan 1.1.0 to the MicMac software: started...")
path = PhotoScan.app.getExistingDirectory("Specify the output folder for the Homol and orientation database:")  # opens a save-to dialog box
Ori = PhotoScan.app.getString("How do you want to name the orientation database? ex: PSmax1000TP", "PS")

t0 = time.time()

pathHomol = os.path.join(path, "Homol")
if not os.path.exists(pathHomol):
    os.mkdir(pathHomol)

OriBD = "Ori-" + Ori
pathOri = os.path.join(path, OriBD)
if not os.path.exists(pathOri):
    os.mkdir(pathOri)

point_cloud = chunk.point_cloud
point_proj = point_cloud.projections

# loop on every camera of the image block
for i in range(len(chunk.cameras)):

    photo1 = chunk.cameras[i]
    # camera.transform holds the rotation matrix + translation (= camera center);
    # testing it is a way to check whether the camera is aligned
    if not photo1.transform:
        continue

    # --------------------------- export of the camera exterior orientation
    # rotation matrix
    rot = photo1.transform
    # create the orientation file for the camera (MicMac format)
    oriFile = pathOri + "/Orientation-" + photo1.label + ".xml"
    file = open(oriFile, "w")
    file.write("<?xml version=\"1.0\" ?>\n")
    file.write("<ExportAPERO>\n")
    file.write(" <OrientationConique>\n")
    file.write(" <OrIntImaM2C>\n")
    # these tags are useless in this situation but are still mandatory for compliance with the MicMac convention
    file.write(" <I00>0 0</I00>\n")
    file.write(" <V10>1 0</V10>\n")
    file.write(" <V01>0 1</V01>\n")
    file.write(" </OrIntImaM2C>\n")
    file.write(" <TypeProj>eProjStenope</TypeProj>\n")
    # get the focal length of the camera; in MicMac, the calibration file is named AutoCal + the focal length in 1/10 mm
    foc = 10 * int(photo1.photo.meta["Exif/FocalLength"])
    file.write(" <FileInterne>" + OriBD + "/AutoCal" + str(foc) + ".xml</FileInterne>\n")
    file.write(" <RelativeNameFI>true</RelativeNameFI>\n")
    file.write(" <Externe>\n")
    # AltiSol (flight altitude) and Profondeur (depth) are redundant information which does not need to be accurate here
    tiep_proj = point_cloud.projections[photo1][0]
    XYZ = point_cloud.points[tiep_proj.track_id].coord
    file.write(" <AltiSol>" + str(photo1.center[2] - XYZ[2]) + "</AltiSol>\n")
    file.write(" <Profondeur>" + str(photo1.center[2] - XYZ[2]) + " </Profondeur>\n")
    file.write(" <Time>-1.00000000000000002e+30</Time>\n")
    file.write(" <KnownConv>eConvApero_DistM2C</KnownConv>\n")
    # camera position center
    center = photo1.center
    file.write(" <Centre>" + str(center[0]) + " " + str(center[1]) + " " + str(center[2]) + "</Centre>\n")
    file.write(" <ParamRotation>\n")
    file.write(" <CodageMatr>\n")
    # rotation matrix
    file.write(" <L1>" + str(rot[0, 0]) + " " + str(rot[0, 1]) + " " + str(rot[0, 2]) + "</L1>\n")
    file.write(" <L2>" + str(rot[1, 0]) + " " + str(rot[1, 1]) + " " + str(rot[1, 2]) + "</L2>\n")
    file.write(" <L3>" + str(rot[2, 0]) + " " + str(rot[2, 1]) + " " + str(rot[2, 2]) + "</L3>\n")
    file.write(" </CodageMatr>\n")
    file.write(" </ParamRotation>\n")
    file.write(" </Externe>\n")
    file.write(" <Verif>\n")
    file.write(" <Tol>0.00100000000000000002</Tol>\n")
    file.write(" <ShowMes>true</ShowMes>\n")

    # add 3 tie points for verification purposes, as required by MicMac
    # (the loop variable must not be named i, or it would clobber the outer camera index)
    for k in (0, 1, 2):
        tiep_proj = point_cloud.projections[photo1][k]
        XYZ = point_cloud.points[tiep_proj.track_id].coord
        file.write(" <Appuis>\n")
        file.write(" <Num>" + str(k) + "</Num>\n")
        # image measurement of the tie point: u, v
        file.write(" <Im>" + str(tiep_proj.coord[0]) + " " + str(tiep_proj.coord[1]) + "</Im>\n")
        # model measurement of the tie point: X, Y, Z
        file.write(" <Ter>" + str(XYZ[0]) + " " + str(XYZ[1]) + " " + str(XYZ[2]) + "</Ter>\n")
        file.write(" </Appuis>\n")
    file.write(" </Verif>\n")
    file.write(" <ConvOri>\n")
    file.write(" <KnownConv>eConvApero_DistM2C</KnownConv>\n")
    file.write(" </ConvOri>\n")
    file.write("</OrientationConique>\n")
    file.write("</ExportAPERO>\n")
    file.close()
    print("End of orientation export for camera " + str(photo1.label))

    # create the folder storing the set of tie points for this image
    pathPastis1 = os.path.join(pathHomol, "Pastis" + photo1.label)
    if not os.path.exists(pathPastis1):
        os.mkdir(pathPastis1)

    # loop on all the remaining cameras of the image block (= chunk)
    for j in range(i + 1, len(chunk.cameras)):

        photo2 = chunk.cameras[j]
        # is the camera aligned?
        if not photo2.transform:
            continue

        pathPastis2 = os.path.join(pathHomol, "Pastis" + photo2.label)
        if not os.path.exists(pathPastis2):
            os.mkdir(pathPastis2)

        # associative arrays = the dictionary class in Python
        matches1 = dict()
        matches2 = dict()

        for proj in point_proj[photo1]:
            # the dictionary key for this projection (u, v) is the track index of the tie point (X, Y, Z)
            matches1[proj.track_id] = proj.coord
        for proj in point_proj[photo2]:
            matches2[proj.track_id] = proj.coord

        # check whether there are tie points shared by the camera pair
        if len(set(matches1.keys()).intersection(set(matches2.keys()))):

            # the tie points of the camera pair are stored in two (redundant) text files:
            fileHomol12 = open(os.path.join(pathPastis1, photo2.label + ".txt"), "wt")
            fileHomol21 = open(os.path.join(pathPastis2, photo1.label + ".txt"), "wt")

            # In PhotoScan there are a few duplicated tie points in the sparse point cloud (same model
            # coordinates X, Y, Z but different track_id). Duplicates are not allowed in MicMac, and since
            # dictionaries cannot contain duplicate keys, the following code removes them.
            tp_list = []
            for key in matches1.keys():
                u, v = matches1[key]
                tp_list.append(str(u) + ";" + str(v) + "#" + str(key))
            tp_list.sort()
            matches11 = dict()
            for k in range(len(tp_list) - 1):
                ss1 = tp_list[k].split("#")
                ss2 = tp_list[k + 1].split("#")
                if ss1[0] != ss2[0]:
                    ss3 = ss1[0].split(";")
                    matches11[int(ss1[1])] = [ss3[0], ss3[1]]
            # same removal of duplicates, but on the second matches list
            tp_list = []
            for key in matches2.keys():
                u, v = matches2[key]
                tp_list.append(str(u) + ";" + str(v) + "#" + str(key))
            tp_list.sort()
            matches21 = dict()
            for k in range(len(tp_list) - 1):
                ss1 = tp_list[k].split("#")
                ss2 = tp_list[k + 1].split("#")
                if ss1[0] != ss2[0]:
                    ss3 = ss1[0].split(";")
                    matches21[int(ss1[1])] = [ss3[0], ss3[1]]

            # loop on every tie point shared by the two cameras
            for tiepoint in set(matches11.keys()).intersection(set(matches21.keys())):

                u1, v1 = matches1[tiepoint]
                u2, v2 = matches2[tiepoint]

                # u1, v1, u2, v2
                fileHomol12.write("{:.2f} {:.2f} {:.2f} {:.2f}\n".format(u1, v1, u2, v2))
                # u2, v2, u1, v1
                fileHomol21.write("{:.2f} {:.2f} {:.2f} {:.2f}\n".format(u2, v2, u1, v1))

            fileHomol12.close()
            fileHomol21.close()

for calib in chunk.sensors:

    # generation of the calibration file / distortion model "RadialBasic"
    oriFile = pathOri + "/AutoCal" + str(int(10 * calib.focal_length)) + ".xml"
    file = open(oriFile, "w")
    file.write("<?xml version=\"1.0\" ?>\n")
    file.write("<ExportAPERO>\n")
    file.write(" <CalibrationInternConique>\n")
    file.write(" <KnownConv>eConvApero_DistM2C</KnownConv>\n")
    file.write(" <PP>" + str(calib.calibration.cx) + "  " + str(calib.calibration.cy) + "</PP>\n")
    file.write(" <F>" + str(calib.calibration.fx) + "</F>\n")
    file.write(" <SzIm>" + str(calib.calibration.width) + "  " + str(calib.calibration.height) + "</SzIm>\n")
    file.write(" <CalibDistortion>\n")
    file.write(" <ModRad>\n")
    file.write(" <CDist>" + str(calib.calibration.cx) + "  " + str(calib.calibration.cy) + "</CDist>\n")
    # set the additional parameters of the calibration to 0, because the MicMac conventions
    # are different and the distortion model was not converted
    file.write(" <CoeffDist>0</CoeffDist>\n")
    file.write(" <CoeffDist>0</CoeffDist>\n")
    file.write(" <PPaEqPPs>true</PPaEqPPs>\n")
    file.write(" </ModRad>\n")
    file.write(" </CalibDistortion>\n")
    file.write(" </CalibrationInternConique>\n")
    file.write("</ExportAPERO>\n")
    file.close()

t1 = time.time() - t0

print("Script finished in " + "{:.2f}".format(t1) + " seconds.")
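The Homol files written above are plain text with one shared tie point per line, image coordinates rounded to two decimals, and each camera pair stored twice (once per direction). A small standalone helper, written for illustration only, makes the symmetric format explicit:

```python
def homol_lines(shared):
    """Format shared tie points as MicMac Homol file contents.

    `shared` is a list of (u1, v1, u2, v2) tuples. Returns the two symmetric
    file bodies: photo1 -> photo2 order, and photo2 -> photo1 order.
    """
    fwd = "".join("{:.2f} {:.2f} {:.2f} {:.2f}\n".format(u1, v1, u2, v2)
                  for u1, v1, u2, v2 in shared)
    rev = "".join("{:.2f} {:.2f} {:.2f} {:.2f}\n".format(u2, v2, u1, v1)
                  for u1, v1, u2, v2 in shared)
    return fwd, rev

# one shared tie point: pixel (1.234, 5.678) in photo1, (9.0, 10.5) in photo2
fwd, rev = homol_lines([(1.234, 5.678, 9.0, 10.5)])
```

Storing the pair twice is redundant, as the script's comments note, but matches the Homol/Pastis layout that MicMac's Tapas expects.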



4
General / Re: decreasing the number of tie points: how is it done?
« on: December 08, 2014, 01:20:17 PM »
Hi Bruno,

Exactly like you, I try to get the best sparse point cloud in order to optimise the camera alignment, and I am very interested in understanding how the great new feature of PhotoScan 1.1 for selecting the "best" tie points works. But I think there is currently no agreement on what the best set of tie points for optimization would be, apart from the fact that it is better to have a limited number of tie points, well distributed over every image and viewed by more than two cameras, than a large number of tie points seen by only two cameras and localised in one specific part of the images.

Thanks to Alexey Pasumansky's help, I exported the tie points of PhotoScan 1.1 (generated with a tie-point limit of 1000 per camera) into another software package (MicMac, from IGN France) in order to compute the residual of each tie point. What I found is that the tie-point residuals, which are often used as a quality measurement, are worse than for tie points generated in PhotoScan 1.0.4. On the other hand, about 80 percent of the tie points computed with PhotoScan 1.1 were shared by more than two cameras.

In summary, I think that "gradual selection" to suppress outliers in the sparse point cloud is (or was) a bad approach, because the tie-point residual is not a fair measurement of its quality. The quality of the camera alignment should instead be checked with external measures (ground control points or embedded GPS camera locations).

Jo
