Agisoft Metashape > Feature Requests

Export georeferenced view / option in Orthomosaic export


Seboon:
Hello,

For my work, I often bring orthomosaics generated in PhotoScan into QGIS to draw contours over them.
Sometimes I find it easier to draw over the shaded or textured mesh instead, as it gives me more detail and contrast than the orthomosaic.
But the only way I've found to get this shaded view into the right position in QGIS is to take a screen capture of the model and then georeference it in QGIS.
I've attached two screenshots for a better understanding :-)

Do you think it would be possible to export the captured view directly as a GeoTIFF, or even better, to have an option in the orthomosaic export command that also allows choosing the shaded or textured mesh as the source for the "orthomosaic" generation?

Best regards!

Yoann Courtois:
Hi Seboon,

I won't be able to help much, but I can share some ideas I had while reading your post.

To get more from your orthomosaic:
Orthomosaics are composed only of pieces of the pictures you have taken, sometimes slightly transformed during the orthomosaic generation. You may be able to tune the generation parameters to get a better orthomosaic.

If you really need your shaded mesh view:
This mesh view looks like an extrapolation of the real shading of your object. But if it is useful to you, here is an idea for exporting a georeferenced view of the shaded model. In the model view, it is possible to look through the pictures you have taken. So if you set the view mode to shaded mesh, you will see your shaded model from the exact position of a taken picture, which gives you the georeferencing of that view. BUT this view is obviously never exactly vertical... If you then press 7, the view is rotated to vertical, normally without changing the position. Verticality plus the picture's coordinates: it looks like you can get what you want!

Seboon:
Thanks Yoann for your suggestion!

But in fact it doesn't really fit my needs.
I finally succeeded with a script.
It works for the top view, and it needs the corresponding georeferenced orthomosaic, with its associated worldfile, inside the chunk.

--- Code: ---
import io
import math
import os

import PhotoScan
from PIL import Image
from PySide2 import QtWidgets, QtCore  # PhotoScan 1.4+ ships PySide2; older builds use PyQt5


class Georef_Snapshot(QtWidgets.QDialog):

    def __init__(self, parent):
        QtWidgets.QDialog.__init__(self, parent)

        self.setWindowTitle("Warning!")

        self.Txt = QtWidgets.QLabel()
        self.Txt.setText("The script requires the georeferenced orthomosaic and its worldfile to be present.")
        self.Txt.setFixedSize(310, 25)

        self.Quit = QtWidgets.QPushButton("Cancel")
        self.Quit.setFixedSize(50, 20)

        self.OK = QtWidgets.QPushButton("Continue")
        self.OK.setFixedSize(60, 20)

        layout = QtWidgets.QGridLayout()
        layout.setSpacing(10)
        layout.addWidget(self.Txt, 0, 0)
        layout.addWidget(self.OK, 1, 0)
        layout.addWidget(self.Quit, 1, 1)
        self.setLayout(layout)

        self.OK.clicked.connect(self.procExport)
        self.Quit.clicked.connect(self.reject)

        self.exec()

    def procExport(self):

        self.OK.setDisabled(True)
        self.Quit.setDisabled(True)

        doc = PhotoScan.app.document
        chunk = doc.chunk
        crs = chunk.crs
        region = chunk.region
        T = chunk.transform.matrix

        m = PhotoScan.Vector([10E+10, 10E+10, 10E+10])
        M = -m

        # Reset the region
        chunk.resetRegion()

        # Bounding box of the model in projected coordinates
        for point in chunk.model.vertices:
            coord = crs.project(T.mulp(point.coord))
            for i in range(3):
                m[i] = min(m[i], coord[i])
                M[i] = max(M[i], coord[i])

        center = (M + m) / 2.
        side1g = crs.unproject(M) - crs.unproject(PhotoScan.Vector([m.x, M.y, M.z]))
        side2g = crs.unproject(M) - crs.unproject(PhotoScan.Vector([M.x, m.y, M.z]))
        side3g = crs.unproject(M) - crs.unproject(PhotoScan.Vector([M.x, M.y, m.z]))
        size = PhotoScan.Vector([side1g.norm(), side2g.norm(), side3g.norm()])

        region.center = T.inv().mulp(crs.unproject(center))
        region.size = size * (1 / T.scale())

        v_t = T.mulp(region.center)
        R = crs.localframe(v_t) * T
        region.rot = R.rotation().t()

        chunk.region = region

        viewpoint = PhotoScan.app.viewpoint
        cx = viewpoint.width
        cy = viewpoint.height

        r_center = region.center
        r_rotate = region.rot
        r_size = region.size
        r_vert = list()

        # Bounding box corners:
        for i in range(8):
            r_vert.append(PhotoScan.Vector([0.5 * r_size[0] * ((i & 2) - 1),
                                            r_size[1] * ((i & 1) - 0.5),
                                            0.25 * r_size[2] * ((i & 4) - 2)]))
            r_vert[i] = r_center + r_rotate * r_vert[i]

        height = T.mulv(r_vert[1] - r_vert[0]).norm()
        width = T.mulv(r_vert[2] - r_vert[0]).norm()

        # Fit the region into the viewport
        if width / cx > height / cy:
            scale = cx / width
        else:
            scale = cy / height

        viewpoint.coo = T.mulp(chunk.region.center)
        viewpoint.mag = scale
        vue_Top = PhotoScan.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
        viewpoint.rot = chunk.transform.rotation * r_rotate * vue_Top
        PhotoScan.app.viewpoint = viewpoint

        # Region dimensions in pixels:
        if width / cx > height / cy:
            scale = cx / width
            H_Bbox_px = scale * float(height)
            Ecart_pxy = (cy - scale * float(height)) / 2
        else:
            scale = cy / height
            L_Bbox_px = scale * float(width)
            Ecart_pxx = (cx - scale * float(width)) / 2

        # Capture the view:
        Capture = PhotoScan.app.captureModelView(width=cx, height=cy,
                                                 transparent=True, hide_items=True,
                                                 source=PhotoScan.DataSource.ModelData,
                                                 mode=PhotoScan.ModelViewMode.ShadedModelView)

        # Export the view:
        path = PhotoScan.app.getSaveFileName("Save the view as (png):")
        Capture.save(path)

        # Crop away the empty margins around the bounding box:
        img = Image.open(path)
        if width / cx > height / cy:
            left = 0
            top = Ecart_pxy
            right = cx
            bottom = Ecart_pxy + H_Bbox_px
        else:
            left = Ecart_pxx
            top = 0
            right = Ecart_pxx + L_Bbox_px
            bottom = cy
        result = img.crop((left, top, right, bottom))

        # Resample to the orthomosaic pixel grid:
        ratio = chunk.orthomosaic.width / result.size[0]
        Rx = int(result.size[0] * ratio)
        Ry = int(result.size[1] * ratio)
        result2 = result.resize((Rx, Ry), Image.BICUBIC)
        buffer = io.BytesIO()
        result2.save(buffer, format="png")
        with open(path, "wb") as f:
            f.write(buffer.getvalue())

        # Create worldfile for the cropped capture

        # Image dimensions:
        rasterx = result2.size[0]
        rastery = result2.size[1]

        # Orthomosaic size:
        Res_px = chunk.orthomosaic.resolution
        largeur_ortho = chunk.orthomosaic.width
        hauteur_ortho = chunk.orthomosaic.height
        taille_orthox = Res_px * largeur_ortho
        taille_orthoy = Res_px * hauteur_ortho
        print("Orthomosaic size: " + str(round(taille_orthox, 3)) + " * " + str(round(taille_orthoy, 3)) + " m")

        # Orthomosaic pixel size:
        print("Orthomosaic pixel size: " + str(Res_px) + " m")

        # Pixel size of the cropped image:
        ppx = taille_orthox / rasterx
        ppy = taille_orthoy / rastery
        print("Cropped image pixel size: " + str(ppx) + " m")

        # Size of the cropped capture:
        Taille_x = ppx * rasterx
        Taille_y = ppy * rastery
        print("Cropped image size: " + str(round(Taille_x, 3)) + " * " + str(round(Taille_y, 3)) + " m")

        path = PhotoScan.app.getOpenFileName("Open the worldfile of the matching orthomosaic:")
        path2 = PhotoScan.app.getSaveFileName("Save the worldfile of the capture (pngw):")

        # Copy the worldfile:
        with open(path2, "w") as f:
            with open(path, "r") as f2:
                f.write(f2.read())

        print("Script finished")

        self.OK.setDisabled(False)
        self.Quit.setDisabled(False)


def main():
    app = QtWidgets.QApplication.instance()
    parent = app.activeWindow()
    dlg = Georef_Snapshot(parent)


PhotoScan.app.addMenuItem("Georeferenced Model Top View", main)
--- End code ---
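One note on the worldfile step: copying the orthomosaic's worldfile appears valid here only because the capture is cropped to the model's bounding box and resampled to the orthomosaic's pixel grid, so both images share extent and resolution. If the capture covered a different extent, a fresh worldfile would be needed; the format is just six plain-text lines. A minimal writer (hypothetical helper, not part of the script above; assumes square pixels and no rotation):

```python
def write_worldfile(path, pixel_size, top_left_x, top_left_y):
    """Write a worldfile (.pgw for a .png): six lines describing the affine
    transform from pixel to map coordinates. Lines 5-6 are the map
    coordinates of the *center* of the top-left pixel, hence the
    half-pixel shift from the corner coordinates passed in."""
    lines = [
        pixel_size,                     # pixel size in x
        0.0,                            # rotation term (row)
        0.0,                            # rotation term (column)
        -pixel_size,                    # pixel size in y (negative: north-up)
        top_left_x + pixel_size / 2.0,  # x of center of top-left pixel
        top_left_y - pixel_size / 2.0,  # y of center of top-left pixel
    ]
    with open(path, "w") as f:
        f.write("\n".join("%.10f" % v for v in lines) + "\n")
```

Here `top_left_x`/`top_left_y` are the map coordinates of the top-left corner of the image, which you would take from the projected bounding box computed earlier in the script.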

Yoann Courtois:
Hi Seboon !

What a script! If it fits your needs, that's good!

But what is the mean face size of your model?
Have you tried comparing a contour extraction from the shaded model with one from the orthomosaic?
I still think the most accurate way is to work with the orthomosaic. You can produce a mosaic with the same resolution as your original images and get a more accurate contour extraction.

I don't know QGIS well, but many software packages let you adjust the brightness and contrast of inserted ortho-images.
We produce a lot from orthomosaics, and such adjustments (in AutoCAD or Mensura) let us extract features in very dark (or very bright) or low-contrast areas!

Regards

JRM:
Seboon > have you tried generating a hillshade and overlaying it on the orthomosaic? You can do so using the DEM and gdaldem (combined or multidirectional), or by styling the DEM layer in QGIS with the real-time hillshade renderer. You can even go further by using an ambient occlusion map.

This method has been used for years in our archaeological department and seems to fit the need; with the right blending modes/styles applied, it can even replace the detailed drawing without getting uproar from CTRA.
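JRM's workflow can be scripted: with GDAL installed, `gdaldem hillshade dem.tif shade.tif -combined` (or `-multidirectional`) produces the overlay directly from the exported DEM. For readers curious what that computes, a single-direction hillshade combines Horn's slope/aspect estimate with a light direction. A minimal pure-Python sketch of the idea (illustrative only; `gdaldem` handles nodata, edges, and scaling more carefully):

```python
import math

def hillshade(dem, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
    """Single-direction hillshade in the style of `gdaldem hillshade`.
    `dem` is a list of rows of elevations; returns a same-shaped grid of
    0-255 grey values. Edge cells, where the 3x3 window is incomplete,
    are left at 0."""
    zenith = math.radians(90.0 - altitude_deg)
    azimuth = math.radians(360.0 - azimuth_deg + 90.0)  # compass -> math angle
    rows, cols = len(dem), len(dem[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Horn's 3x3 finite differences for the surface gradient
            dzdx = ((dem[r-1][c+1] + 2*dem[r][c+1] + dem[r+1][c+1])
                  - (dem[r-1][c-1] + 2*dem[r][c-1] + dem[r+1][c-1])) / (8*cellsize)
            dzdy = ((dem[r+1][c-1] + 2*dem[r+1][c] + dem[r+1][c+1])
                  - (dem[r-1][c-1] + 2*dem[r-1][c] + dem[r-1][c+1])) / (8*cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.cos(zenith) * math.cos(slope)
                   + math.sin(zenith) * math.sin(slope) * math.cos(azimuth - aspect))
            out[r][c] = max(0, int(round(255 * shade)))
    return out

# Flat terrain shades uniformly to 255 * cos(45 deg):
flat = [[100.0] * 4 for _ in range(4)]
print(hillshade(flat, cellsize=1.0)[1][1])  # → 180
```

The "combined" and "multidirectional" variants that JRM mentions blend several of these single-direction shadings to avoid flat-looking slopes that face directly toward or away from the light.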
