Forum

Author Topic: Export georeferenced view / option in Orthomosaic export  (Read 8252 times)

Seboon

  • Jr. Member
  • Posts: 72
Export georeferenced view / option in Orthomosaic export
« on: April 15, 2017, 03:52:02 PM »
Hello,

For my work, I often use orthomosaics generated in PhotoScan in QGIS, to draw contours over them.
Sometimes I find it easier to draw over the shaded or textured mesh alone, as I get more detail and contrast than with the orthomosaic.
But the only way I've found to get this shaded "pattern" into the right position in QGIS is to take a capture of the model view and then georeference it in QGIS.
I've attached two screenshots for a better understanding :-)

In your opinion, would it be possible to get a GeoTIFF of the captured view directly, or better yet, to have an option in the orthomosaic export command that also allows choosing between the shaded and the textured mesh for the "orthomosaic" generation?

Best regards!
« Last Edit: April 16, 2017, 09:40:33 AM by Seboon »
S.Poudroux
Archaeologist - Topographer - Drone remote pilot

Yoann Courtois

  • Sr. Member
  • Posts: 316
  • Engineer in Geodesy, Cartography and Surveying
Re: Export georeferenced view / option in Orthomosaic export
« Reply #1 on: April 18, 2017, 09:42:49 AM »
Hi Seboon,

I won't be able to help much, but I can share a few ideas I had while reading your post.

To get more out of your orthomosaic:
Orthomosaics are composed only of pieces of the pictures you have taken, sometimes slightly transformed during the orthomosaic generation. You may be able to tune the generation parameters to get a better orthomosaic.

If you really need your shaded mesh view:
This mesh view looks like an extrapolation of the real shading of your object. But if it is useful to you, here is an idea for exporting a georeferenced view of the shaded model. It is possible to look through the pictures you have taken in the model view. If you set the view to shaded mesh, you will see your shaded model from the exact position of a taken picture, so you have the georeferencing of that view. BUT this view is obviously never exactly vertical... If you then press 7, the view is oriented to the vertical, normally without changing the position. Verticality + picture coordinates: it looks like you can get what you want!
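For illustration, a minimal sketch of this idea from the Python console (assuming the PhotoScan 1.3/1.4 API: PhotoScan.app.viewpoint with coo/rot, chunk.transform and camera.center, the same viewpoint attributes Seboon's script below relies on):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
camera = chunk.cameras[0]            # any aligned camera
T = chunk.transform.matrix

pos = T.mulp(camera.center)          # camera position in world coordinates
R = chunk.crs.localframe(pos) * T    # local east-north-up frame at that position

viewpoint = PhotoScan.app.viewpoint
viewpoint.coo = pos
viewpoint.rot = chunk.transform.rotation * R.rotation().t()  # nadir orientation, like pressing "7"
PhotoScan.app.viewpoint = viewpoint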
--
Yoann COURTOIS
R&D Engineer in photogrammetric process and mobile application
Lyon, FRANCE
--

Seboon

  • Jr. Member
  • Posts: 72
Re: Export georeferenced view / option in Orthomosaic export
« Reply #2 on: January 07, 2018, 01:57:57 PM »
Thanks Yoann for your suggestion!

But in fact it doesn't really fit my needs.
I finally managed it with a script.
It works for a top view, and the script needs the corresponding georeferenced orthomosaic (with its associated world file) in the chunk.
Code:
import os
import io
import math

import PhotoScan
from PIL import Image
# Qt bindings bundled with PhotoScan (PySide2 in 1.3/1.4 builds)
from PySide2 import QtCore, QtWidgets


class Georef_Snapshot(QtWidgets.QDialog):

    def __init__(self, parent):
        QtWidgets.QDialog.__init__(self, parent)

        self.setWindowTitle("Warning!")

        self.Txt = QtWidgets.QLabel()
        self.Txt.setText("The script requires the georeferenced orthomosaic and its world file.")
        self.Txt.setFixedSize(310, 25)

        self.Quit = QtWidgets.QPushButton("Cancel")
        self.Quit.setFixedSize(50, 20)

        self.OK = QtWidgets.QPushButton("Continue")
        self.OK.setFixedSize(60, 20)

        layout = QtWidgets.QGridLayout()
        layout.setSpacing(10)

        layout.addWidget(self.Txt, 0, 0)
        layout.addWidget(self.OK, 1, 0)
        layout.addWidget(self.Quit, 1, 1)

        self.setLayout(layout)

        self.OK.clicked.connect(self.procExport)
        self.Quit.clicked.connect(self.reject)

        self.exec_()

    def procExport(self):

        self.OK.setDisabled(True)
        self.Quit.setDisabled(True)

        doc = PhotoScan.app.document
        chunk = doc.chunk
        crs = chunk.crs
        region = chunk.region
        T = chunk.transform.matrix

        m = PhotoScan.Vector([10E+10, 10E+10, 10E+10])
        M = -m

        # Reset the region
        chunk.resetRegion()

        # Bounding box of the model in projected coordinates
        for point in chunk.model.vertices:
            coord = T.mulp(point.coord)
            coord = chunk.crs.project(coord)
            for i in range(3):
                m[i] = min(m[i], coord[i])
                M[i] = max(M[i], coord[i])

        center = (M + m) / 2.
        side1g = crs.unproject(M) - crs.unproject(PhotoScan.Vector([m.x, M.y, M.z]))
        side2g = crs.unproject(M) - crs.unproject(PhotoScan.Vector([M.x, m.y, M.z]))
        side3g = crs.unproject(M) - crs.unproject(PhotoScan.Vector([M.x, M.y, m.z]))
        size = PhotoScan.Vector([side1g.norm(), side2g.norm(), side3g.norm()])

        # Fit the region to the model, aligned with the local frame
        region.center = T.inv().mulp(crs.unproject(center))
        region.size = size * (1 / T.scale())

        v_t = T.mulp(region.center)
        R = crs.localframe(v_t) * T
        region.rot = R.rotation().t()

        chunk.region = region

        viewpoint = PhotoScan.app.viewpoint
        cx = viewpoint.width
        cy = viewpoint.height

        r_center = region.center
        r_rotate = region.rot
        r_size = region.size
        r_vert = list()

        # Bounding box corners:
        for i in range(8):
            r_vert.append(PhotoScan.Vector([0.5 * r_size[0] * ((i & 2) - 1), r_size[1] * ((i & 1) - 0.5), 0.25 * r_size[2] * ((i & 4) - 2)]))
            r_vert[i] = r_center + r_rotate * r_vert[i]

        height = T.mulv(r_vert[1] - r_vert[0]).norm()
        width = T.mulv(r_vert[2] - r_vert[0]).norm()

        if width / cx > height / cy:
            scale = cx / width
        else:
            scale = cy / height

        # Place the model view above the region centre, looking straight down
        viewpoint.coo = T.mulp(chunk.region.center)
        viewpoint.mag = scale
        vue_Top = PhotoScan.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
        viewpoint.rot = chunk.transform.rotation * r_rotate * vue_Top
        PhotoScan.app.viewpoint = viewpoint

        # Region dimensions in pixels:
        if width / cx > height / cy:
            scale = cx / width
            H_Bbox_px = scale * float(height)
            Ecart_pxy = (cy - scale * float(height)) / 2
        else:
            scale = cy / height
            L_Bbox_px = scale * float(width)
            Ecart_pxx = (cx - scale * float(width)) / 2

        # Capture the view:
        Capture = PhotoScan.app.captureModelView(width=cx, height=cy, transparent=True, hide_items=True,
                                                 source=PhotoScan.DataSource.ModelData,
                                                 mode=PhotoScan.ModelViewMode.ShadedModelView)

        # Export the view:
        path = PhotoScan.app.getSaveFileName("Save the view as (PNG):")
        Capture.save(path)

        # Crop the capture to the bounding box
        img = Image.open(path)
        if width / cx > height / cy:
            left = 0
            top = Ecart_pxy
            right = cx
            bottom = Ecart_pxy + H_Bbox_px
        else:
            left = Ecart_pxx
            top = 0
            right = Ecart_pxx + L_Bbox_px
            bottom = cy
        result = img.crop((left, top, right, bottom))

        # Resample the crop to the orthomosaic width
        ratio = chunk.orthomosaic.width / result.size[0]
        Rx = int(result.size[0] * ratio)
        Ry = int(result.size[1] * ratio)
        result2 = result.resize((Rx, Ry), Image.BICUBIC)
        buffer = io.BytesIO()
        result2.save(buffer, format="png")
        with open(path, "wb") as out:
            out.write(buffer.getvalue())

        # Create a world file for the cropped capture

        # Image dimensions:
        rasterx = result2.size[0]
        rastery = result2.size[1]

        # Orthomosaic size
        Res_px = chunk.orthomosaic.resolution
        largeur_ortho = chunk.orthomosaic.width
        hauteur_ortho = chunk.orthomosaic.height
        taille_orthox = Res_px * largeur_ortho
        taille_orthoy = Res_px * hauteur_ortho
        print("Orthomosaic size: " + str(round(taille_orthox, 3)) + " * " + str(round(taille_orthoy, 3)) + " m")

        # Orthomosaic pixel size:
        print("Orthomosaic pixel size: " + str(Res_px) + " m")

        # Pixel size of the cropped image:
        ppx = taille_orthox / rasterx
        ppy = taille_orthoy / rastery
        print("Cropped image pixel size: " + str(ppx) + " m")

        # Size of the cropped capture:
        Taille_x = ppx * rasterx
        Taille_y = ppx * rastery
        print("Cropped image size: " + str(round(Taille_x, 3)) + " * " + str(round(Taille_y, 3)) + " m")

        path = PhotoScan.app.getOpenFileName("Open the world file of the matching orthomosaic:")
        path2 = PhotoScan.app.getSaveFileName("Save the world file of the capture as (.pngw):")

        # Copy the world file:
        with open(path, "r") as f2, open(path2, "w") as f:
            for line in f2.readlines():
                f.write(line)

        print("Script finished")

        self.OK.setDisabled(False)
        self.Quit.setDisabled(False)


def main():
    global doc
    doc = PhotoScan.app.document

    app = QtWidgets.QApplication.instance()
    parent = app.activeWindow()

    dlg = Georef_Snapshot(parent)


PhotoScan.app.addMenuItem("Georeferenced Top View", main)
S.Poudroux
Archaeologist - Topographer - Drone remote pilot

Yoann Courtois

  • Sr. Member
  • Posts: 316
  • Engineer in Geodesy, Cartography and Surveying
Re: Export georeferenced view / option in Orthomosaic export
« Reply #3 on: January 08, 2018, 10:52:54 AM »
Hi Seboon!

What a script! If it fits your needs, that's great!

But what is the mean face size of your model?
Have you tried comparing a contour extraction from the shaded model with one from the orthomosaic?
I still think the most accurate way is to work with the orthomosaic: you can produce a mosaic with the same resolution as your original images and get a more accurate contour extraction.
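As a hedged illustration (PhotoScan 1.3/1.4 API assumed; check the exportOrthomosaic signature in your version, and the 2 cm value is only a placeholder), exporting the orthomosaic at a chosen ground resolution together with a world file for QGIS could look like this:

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
gsd = 0.02  # target pixel size in metres, ideally close to the GSD of the original images

chunk.exportOrthomosaic("ortho_2cm.tif",
                        projection=chunk.crs,   # keep the chunk coordinate system
                        dx=gsd, dy=gsd,         # output resolution in X and Y
                        write_world=True)       # also write a world file for GIS use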

I don't know QGIS well, but many software packages let you adjust the lightness and contrast of the inserted ortho-images.
We produce a lot from orthomosaics, and such adjustments (in AutoCAD or Mensura) let us extract features even in very dark (or very bright) or low-contrast areas!

Regards
--
Yoann COURTOIS
R&D Engineer in photogrammetric process and mobile application
Lyon, FRANCE
--

JRM

  • Jr. Member
  • Posts: 81
Re: Export georeferenced view / option in Orthomosaic export
« Reply #4 on: January 09, 2018, 08:17:08 PM »
Seboon > have you tried generating a hillshade and overlaying it on the orthomosaic? You can do so using the DEM and gdaldem (combined or multidirectional), or by styling the DEM layer in QGIS with the real-time hillshade renderer. You can even go further by using an ambient occlusion map.

This method has been used for years in our archaeological department and seems to fit the need; with the right blending modes/styles applied, it can even replace the detailed drawing without getting an uproar from the CTRA.
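A possible sketch of the gdaldem route through the GDAL Python bindings (gdal.DEMProcessing wraps the gdaldem utility; "dem.tif" and the output names are placeholders, and the multidirectional mode needs GDAL 2.2 or newer):

Code:
from osgeo import gdal

# Multidirectional hillshade from the exported DEM
gdal.DEMProcessing("hillshade_multi.tif", "dem.tif", "hillshade",
                   multiDirectional=True, computeEdges=True)

# Or the "combined" (oblique shading + slope) variant
gdal.DEMProcessing("hillshade_combined.tif", "dem.tif", "hillshade",
                   combined=True, computeEdges=True)

# Overlay the result on the orthomosaic in QGIS, e.g. with a "multiply" blending mode.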


Seboon

  • Jr. Member
  • Posts: 72
Re: Export georeferenced view / option in Orthomosaic export
« Reply #5 on: January 16, 2018, 07:34:52 PM »
Quote from: Yoann Courtois on January 08, 2018, 10:52:54 AM

Hi Seboon!

What a script! If it fits your needs, that's great!

But what is the mean face size of your model?
Have you tried comparing a contour extraction from the shaded model with one from the orthomosaic?
I still think the most accurate way is to work with the orthomosaic: you can produce a mosaic with the same resolution as your original images and get a more accurate contour extraction.

I don't know QGIS well, but many software packages let you adjust the lightness and contrast of the inserted ortho-images.
We produce a lot from orthomosaics, and such adjustments (in AutoCAD or Mensura) let us extract features even in very dark (or very bright) or low-contrast areas!

Regards

Hello Yoann,

When you say contour extraction, do you mean using a DEM? Or a filter applied directly to the raster, like a Sobel filter, or something based on pixel values?

I usually draw by hand over the raster; that's the point of this script, it's just a capture. But of course, if you have any advice or know of a good way to extract precise contours (say, of the blocks of a wall), I'll take it!!
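For instance, I mean something like this kind of Sobel filter on the raster (just a sketch; numpy, scipy and Pillow assumed available, and "ortho.png" is only a placeholder name):

Code:
import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open("ortho.png").convert("L"), dtype=float)  # greyscale raster
sx = ndimage.sobel(img, axis=1)   # horizontal gradient
sy = ndimage.sobel(img, axis=0)   # vertical gradient
edges = np.hypot(sx, sy)          # gradient magnitude = edge strength
edges = (255 * edges / edges.max()).astype(np.uint8)
Image.fromarray(edges).save("ortho_edges.png")  # reuse the ortho's world file to overlay it in QGIS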

Thanks and see you maybe one day in Lyon :-)
S.Poudroux
Archaeologist - Topographer - Drone remote pilot

Seboon

  • Jr. Member
  • Posts: 72
Re: Export georeferenced view / option in Orthomosaic export
« Reply #6 on: January 16, 2018, 07:42:55 PM »
Quote from: JRM on January 09, 2018, 08:17:08 PM

Seboon > have you tried generating a hillshade and overlaying it on the orthomosaic? You can do so using the DEM and gdaldem (combined or multidirectional), or by styling the DEM layer in QGIS with the real-time hillshade renderer. You can even go further by using an ambient occlusion map.

This method has been used for years in our archaeological department and seems to fit the need; with the right blending modes/styles applied, it can even replace the detailed drawing without getting an uproar from the CTRA.

Hi JRM, thanks for your feedback,

Yes, I usually do that with the DEM and it works well. However, I will run a test with ambient occlusion; it may be a very good idea!
You're pointing at exactly the right place: "uproar" :-). You are lucky if you can skip the tedious and fastidious drawing step; it seems difficult to change old habits (good ones, of course) in some places :-)

Regards!
S.Poudroux
Archaeologist - Topographer - Drone remote pilot

Yoann Courtois

  • Sr. Member
  • Posts: 316
  • Engineer in Geodesy, Cartography and Surveying
Re: Export georeferenced view / option in Orthomosaic export
« Reply #7 on: January 16, 2018, 07:51:49 PM »
Hello Seboon !

Automatic feature extraction is the near future of our field, but that is not what I had in mind in my previous post.

What I suggested was to compare rock edges, manually drawn on images:
- on one hand, from the orthomosaic,
- on the other, from the capture obtained with your script.

To my mind, you will get better resolution and better accuracy, with a smaller project size, by working with the orthomosaic rather than with the 3D mesh.

Well, why not! See you!
--
Yoann COURTOIS
R&D Engineer in photogrammetric process and mobile application
Lyon, FRANCE
--

Seboon

  • Jr. Member
  • Posts: 72
Re: Export georeferenced view / option in Orthomosaic export
« Reply #8 on: January 16, 2018, 08:30:46 PM »
Yeah, for sure you are right, the orthomosaic is much better than a capture!
But in some cases, when my photos are not good enough for the mosaicking, it helps.
It's a kind of fallback for the drawing :-)
S.Poudroux
Archaeologist - Topographer - Drone remote pilot