Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - jmsuomal

Pages: [1]
1
I am not sure if this is a bug or a "feature", but it's troublesome either way.

When importing Python-generated JPEG images, Metashape loads the camera Make and Model tags from the EXIF data. This allows Metashape to automatically separate images taken with different cameras into different camera calibrations. However, this feature does not seem to work when loading Python-generated (16-bit) TIFF images, and doing the separation manually is a frustrating process. The GPS coordinates and focal length are still loaded from the TIFF files, so some reading of EXIF data is definitely happening with TIFFs, but not all relevant fields are processed.

How I found the problem:
I am using a multi-camera setup with a processing chain in which calibrated images are exported from Python as 16-bit TIFF files:

Code: [Select]
# OutputFile is the target path; ImgBitmap is a NumPy float image;
# TIFF is the TIFF I/O module (e.g. tifffile imported as TIFF)
TIFF.imsave(OutputFile, (10*ImgBitmap).astype(np.uint16), planarconfig='contig')

As the Python TIFF module does not seem to properly support writing EXIF tags, those are written afterwards with Phil Harvey's exiftool.exe, using commands similar to this:

exiftool.exe -overwrite_original -XMP:Make="MyOrganizationName" -XMP:Model="RGB1" -XMP:ExposureTime=0.003946 -XMP:FocalLength=16.0 -XMP:FocalPlaneResolutionUnit=4 -XMP:FocalPlaneXResolution=289.855 -XMP:FocalPlaneYResolution=289.855 "D:\Test\Img_00000.tiff"

E.g. the Windows image file properties dialog shows that the files have these tags written correctly, but Metashape cannot read them from the TIFF files. Running the very same exiftool command on a JPEG file results in a JPEG that loads into Metashape with the correct Make and Model metadata. This makes me believe that the Metashape TIFF import does not read the EXIF data the same way the JPEG import does.

While waiting for an update, is there a workaround? Which EXIF tags does Metashape read from TIFF files that could be used to differentiate the cameras? E.g. the XMP:SerialNumber ("BodySerialNumber") tag does not seem to work either...
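One possible workaround to test (a sketch only; it assumes Metashape reads the baseline TIFF IFD Make and Model tags, codes 271 and 272, rather than the XMP packet, which is where exiftool's -XMP:Make group writes): Pillow can write those IFD tags directly when saving the 16-bit TIFF, avoiding the exiftool round-trip entirely.

```python
import numpy as np
from PIL import Image

# Hypothetical stand-in for a calibrated reflectance image, scaled to uint16
arr = (10_000 * 0.25 * np.ones((64, 64))).astype(np.uint16)
img = Image.fromarray(arr)  # 16-bit image, mode "I;16"

# 271 = Make, 272 = Model: baseline TIFF IFD tags (not the XMP packet)
img.save("Img_00000.tiff", tiffinfo={271: "MyOrganizationName", 272: "RGB1"})

with Image.open("Img_00000.tiff") as im:
    print(im.tag_v2[271], im.tag_v2[272])  # MyOrganizationName RGB1
```

Whether Metashape actually picks these tags up from a TIFF is exactly the open question above, so this needs verifying on a real project.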

The 16-bit example TIFF files are rather large, so I will not attach them to this post unless they are especially needed for troubleshooting.

2
Feature Requests / Improve visualization of 16bit TIFF images
« on: November 24, 2016, 06:24:56 PM »
Background:
We are using PhotoScan Pro to georeference our drone-based remote sensing data. We do our own radiometric calibration from Canon DSLR raw images to reflectance factor images prior to georeferencing them in PhotoScan.

In agricultural fields the reflectance factors are typically between 0.00 and 0.40. Our data contains more information than an 8-bit image can hold, so we must use something other than standard 8-bit images to store it. We could of course store the reflectance factor images as floating point numbers in TIFF files, but those take 32 bits per pixel, which is not preferable. Instead, in our standard processing we output the images as 16-bit unsigned integer TIFF files with a scaling factor of 10 000. This way the file size is only 50% of the floating point version while we still keep practically the full radiometric resolution of the data. For example, in a typical scene with reflectance factors ranging between 0.00 and 0.40, the image has uint16 DN values ranging between 0 and 4000.
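The encoding round-trip is simple; a small sketch of the scaling (values taken from the text above):

```python
import numpy as np

SCALE = 10_000  # reflectance factor -> uint16 DN

refl = np.array([0.00, 0.1234, 0.40])           # reflectance factors
dn = np.round(refl * SCALE).astype(np.uint16)   # stored DN values
print(dn.tolist())            # [0, 1234, 4000]
print((dn / SCALE).tolist())  # decoded back: [0.0, 0.1234, 0.4]
```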

What is nice is that PhotoScan is capable of processing these images fine, and we can produce georeferenced outputs in basic processing scenarios without any problems.

Problem:
The problem is that PhotoScan fails to visualize the 16-bit images properly. PhotoScan shows the images, but visualizes only the top 8 bits of the 16-bit data. Effectively this means that it divides the 16-bit DN values by 256 and then shows the image. So the example image mentioned above ends up displayed with 8-bit DNs in the range of about 0 to 15. That is basically black and you cannot see anything useful there.
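The arithmetic, using the example values above (reflectance 0.40 stored as DN 4000), compared against a simple min/max stretch:

```python
dn16 = 4000                  # uint16 DN for reflectance 0.40
dn8 = dn16 // 256            # top 8 bits only, as the viewer appears to do
print(dn8)                   # 15 out of 255 -> nearly black

# a min/max stretch over the occupied range instead:
lo, hi = 0, 4000
stretched = round((dn16 - lo) * 255 / (hi - lo))
print(stretched)             # 255 -> full brightness
```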

This is a serious problem in cases where GPS-based georeferencing of the images is not enough and we need to use ground control markers. The markers cannot be seen or pointed out in the black images.

Solution:
Could you add automated min/max scaling to the loading of the visualization images when the file is a 16-bit TIFF? I believe you have already implemented something like this for floating point images.

If you want to test the scaling/problem, you can download an example of our 16-bit reflectance factor images via this link. I will keep the link alive for at least a couple of weeks.





3
We have taken UAV-based images of an agricultural field. We are trying to use these images for some advanced processing, for which we need to georectify each picture separately. To smooth out variations within the vegetation canopy, in the georectification step we resample the images from the 7 cm ground sampling distance (GSD) to 5 m georeferenced pixels.

We have the workflow working now, but we noticed a strange effect happening to the pixel intensities. Please see the attached image.

(1.) On the top row, you can see the original 7 cm GSD image and its histogram. On the meter scale the field is pretty homogeneous, but with the small pixels we get a large variation in the intensities. In the histogram you can see that we have (reflectance factor) values approximately between 0.2 and 0.55.

(2.) On the second row, you can see what should happen to the histogram when the pixel size gets larger. Outside of PhotoScan, we applied a 5 m spatial smoothing to the original image; this narrows the histogram down to only the 0.34-0.42 range while the average intensity remains the same. This is what we expect to see with 5 m pixels.

(3.) On the third row, you can see what PhotoScan outputs when we use the 5 m resolution both to build the orthomosaic and to export it. The histogram did not narrow down! We also got a lot of unexplained noise (and possibly stripes/"oscillations") in the image. The large 5 m pixels, which should have smoothed out to values around 0.4, were now randomly showing values approximately within the histogram range of the original 7 cm image. Because of such crazy noise, we lost all the remote sensing signal we had in the data.

(4.) The last row shows how we got around this. In PhotoScan we generated the orthomosaic model at the original 7 cm resolution and then used the 5 m resolution only when exporting the orthomosaic. With this workflow things seem to work fine and the histogram matches the expected one sufficiently.

So, we got around this problem, but it shows that there is something strange happening in the "build orthomosaic" step. If this is not a bug but a feature, then it seems to me that no one should ever build orthomosaics at a pixel size coarser than the GSD. Thus it would be a good idea to remove that option, or at least attach big warning labels to it.
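The expected behaviour in (2) versus the observed behaviour in (3) can be sketched with synthetic data (assumed numbers; a 64:1 resampling ratio stands in for 7 cm to roughly 5 m): block-averaging collapses the spread, while picking one fine pixel per coarse pixel keeps it.

```python
import numpy as np

rng = np.random.default_rng(0)
img = 0.375 + 0.05 * rng.standard_normal((512, 512))  # synthetic fine-GSD image

# what coarse pixels *should* do: average each 64x64 block
block = img.reshape(8, 64, 8, 64).mean(axis=(1, 3))
# what the output in (3) resembles: pick one fine pixel per coarse pixel
picked = img[::64, ::64]

print(round(float(img.std()), 3))     # ~0.05   (wide histogram)
print(round(float(block.std()), 4))   # ~0.001  (narrowed, as expected)
print(round(float(picked.std()), 3))  # ~0.05   (spread survives)
```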

For this workflow we used Python scripting with the following key commands:
Code: [Select]
Resolution = 5
doc.chunk.buildOrthomosaic(blending=PhotoScan.BlendingMode.DisabledBlending, color_correction=False, dx=Resolution, dy=Resolution)
doc.chunk.exportOrthophotos(FileName, projection=CoordinateSystem, tiff_compression="none", dx=Resolution, dy=Resolution)

Such a strange effect also raises some worries about how well PhotoScan respects the intensities we have in the images. We saw this happen with the 5 m pixels, but is a similar corruption of the data also happening at smaller pixel sizes? Can we trust the values of individual georectified pixels if we do the whole process at the 7 cm resolution? Basically, what we need is that if we put in images with calibrated reflectance factor values, we also get out the same values without them being randomly mutated. Of course there are reflectance anisotropy effects, and some sort of fusion of the multiple observations must happen, so the input and output can never match perfectly, but for our use this process must be predictable and not add such random changes!

4
Bug Reports / getFloat dialog does not show the label text
« on: July 12, 2016, 04:46:52 PM »
In PhotoScan 1.2.5 build 2614 (64bit) on Windows 7 the getFloat dialog does not show the label text at all. Example:

Code: [Select]
answer = PhotoScan.app.getString(label='I can see this text in dialog', value="Hello")
answer = PhotoScan.app.getInt(label='I can also see this text in dialog', value=5)
answer = PhotoScan.app.getFloat(label='But this text is nowhere to be seen', value=0.5)


5
Python and Java API / Setting DEM no-data value in python API
« on: January 13, 2014, 06:38:03 PM »
Is there any way to set the DEM no-data value in the Python API? The Python API reference does not seem to describe any parameter for that, but you can set it in the normal user interface...

6
Python and Java API / Setting mapping region
« on: January 09, 2014, 06:19:32 PM »
I would like to set the mapping region with Python scripting, but I am having problems because I don't understand the chunk.region object.

Here is what I have managed to do so far:
Code: [Select]
# Define: Region [CenterX, CenterY, SizeX, SizeY, ZRotationDeg]
MapRegionVec = [304331.4, 5740607.3, 173.8, 143.6, -25.0]

# Define: Coordinate system
CoordinateSystemEPSG = "EPSG::32632"

# create chunk...

# Set the coordinate system
CoordinateSystem = PhotoScan.CoordinateSystem()
CoordinateSystem.init(CoordinateSystemEPSG)
chunk.crs = CoordinateSystem
chunk.projection = CoordinateSystem

# (load camera calib...)
# (load photos...)
# (load initial camera orientations...)
# (matchPhotos...)
# (alignPhotos...)

# Set region center:
newregion = PhotoScan.Region()
centerUTM = PhotoScan.Vector([MapRegionVec[0], MapRegionVec[1], chunk.region.center[2]])
centerGEO = chunk.crs.unproject(centerUTM)
centerLocal = chunk.crs.localframe(centerGEO)
# localframe returns a 4x4 matrix instead of an XYZ vector!
# I can't convert that to XYZ. I have no idea what to do, so I use the original center point for now...
newregion.center = chunk.region.center

# Set region size (this one works fine)
newregion.size = PhotoScan.Vector([MapRegionVec[2], MapRegionVec[3], chunk.region.size[2]])

# Set region rotation
import math
SinRotZ= math.sin(math.radians(MapRegionVec[4]))
CosRotZ= math.cos(math.radians(MapRegionVec[4]))
newregion.rot = PhotoScan.Matrix( [[SinRotZ,-CosRotZ,0], [CosRotZ,SinRotZ,0], [0,0,1]] )
# This works technically but does not do what I want!

# put newregion to chunk
chunk.region = newregion

As it stands, I cannot get chunk.region.center or chunk.region.rot configured correctly. Thus I have a couple of open questions.

- What are the units/coordinate system of the chunk.region.center values? My chunk coordinate system is WGS84 / UTM zone 32. Here is how the current center numbers look:

Code: [Select]
>>> chunk.region.center
Vector([-4.059450202416892, -0.8626608305524442, 12.09349905182462])
Z could be a quite sensible value for altitude, but what are those X and Y units? They are not UTM coordinates, for sure. How do I convert my UTM coordinates to these?
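A sketch of the conversion I suspect is needed (an unverified assumption: that region coordinates live in the chunk's internal frame, and that chunk.transform.matrix maps that internal frame to geocentric coordinates):

```python
# Not runnable outside PhotoScan; API names here are assumptions to verify.
T = chunk.transform.matrix                   # internal -> geocentric (assumed)
geocentric = chunk.crs.unproject(centerUTM)  # projected UTM -> geocentric
newregion.center = T.inv().mulp(geocentric)  # geocentric -> internal frame
```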

- What is this chunk.region.rot object? A rotation matrix? Using a standard z-rotation matrix of:

[sin(RotAngleRad),-cos(RotAngleRad),0],
[cos(RotAngleRad),sin(RotAngleRad),0],
[0,0,1]

does not seem to do what I would assume it does. For example, running:

Code: [Select]
region = chunk.region
region.rot = PhotoScan.Matrix([[1,0,0],[0,1,0],[0,0,1]])
chunk.region = region

...does not align the region with the XY axes, but rotates it to some angle (that might be based on the orientation of the first photo?). How do I set the region to a certain kappa angle?
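As a side check, the textbook Z-rotation matrix puts cosines on the diagonal; the matrix used above has sines on the diagonal, which is actually a rotation by 90° − θ rather than by θ:

```python
import math
import numpy as np

def z_rotation(angle_deg: float) -> np.ndarray:
    """Right-handed (counter-clockwise) rotation about the Z axis."""
    a = math.radians(angle_deg)
    return np.array([[math.cos(a), -math.sin(a), 0.0],
                     [math.sin(a),  math.cos(a), 0.0],
                     [0.0,          0.0,         1.0]])

# 0 degrees is the identity; 90 degrees sends the X axis to the Y axis
print(np.round(z_rotation(90) @ np.array([1.0, 0.0, 0.0]), 6))
```

That alone does not answer why the identity matrix still leaves the region rotated, but it is worth fixing before testing further.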

Any hints, or even an example script showing how to do this?

7
Python and Java API / Example code for full processing chain
« on: January 08, 2014, 04:00:08 PM »
Hi,

I am integrating PhotoScan into a larger processing chain that is built around a Matlab processing engine. The engine already produces/organizes all the PhotoScan input files and consumes the output files. Until now, processing in PhotoScan has required a lot of manual clicking, with risks of user errors and lots of variation in the outputs.

My goal was to generate, along with the other PhotoScan input files, a ready-made script file so that the user would need to do only minimal (if any) clicking in PhotoScan. I have just spent the better part of two working days learning Python and how to do all that scripting in PhotoScan. Now I am happy to report that I got it to work.  :)

The only thing that bugged me was that I could not find good examples of the complete pipeline; I had to combine it from multiple small code snippets on this forum and in the Python Reference Manual, and still add a LOT of my own head scratching. For one thing, I would have hugely appreciated it if the official Python reference manual contained a complete "example script" appendix showing how to do all this in practice. As I found this lack of complete examples annoying and very time consuming, I now want to share my script with the rest of the community:

Code: [Select]
# Python script for automated PhotoScan processing
# PhotoScan version 1.0.0

# DEFINE PROCESSING SETTINGS
print("---Defining processing settings...")

# Define: Home directory (later this is referred as ".")
HomeDirectory = "D:\\Temp\\PhotoScanScriptTest"

# Define: Camera calib file
CalibFile = "D:\\Temp\\AgisoftLensCameraCalib.xml"

# Define: PhotoScan Project File
PhotoScanProjectFile = ".\\PhotoScan_Project.psz"

# Define: Source images
AerialImagesPattern = ".\\Aerial_Images_8bit_tif\\*.tif"

# Define: Orientation file
OrientationFile = ".\\CameraOrientations_IMU.txt"

# Define: Cameras output file
CamerasOPKFile = ".\\CameraOrientations_PhotoScan.opk"

# Define: Coordinate system
CoordinateSystemEPSG = "EPSG::32632"

# Define: DSMResolutions (in meters),
# 0 == GSD resolution
DSMResolutions = [1.00, 0]

# Define: OrthoImageResolutions (in meters)
# 0 == GSD resolution
OrthoImageResolutions = [1.00, 0]

# Define: AlignPhotosAccuracy ["high", "medium", "low"]
AlignPhotosAccuracy = "medium"

# Define: BuildGeometryQuality ["lowest", "low", "medium", "high", "ultra"]
BuildGeometryQuality = "medium"

# Define: BuildGeometryFaces ["low", "medium", "high"] or an exact number.
BuildGeometryFaces = "medium"



# START ACTUAL SCRIPT (DO NOT EDIT ANYTHING BELOW THIS!)

# INIT ENVIRONMENT
# import stuff
import os
import glob
import PhotoScan
import math

# Set home folder
os.chdir(HomeDirectory)
print("Home directory: " + HomeDirectory )

# get main app objects
doc = PhotoScan.app.document
app = PhotoScan.Application()

# create chunk
chunk = PhotoScan.Chunk()
chunk.label = "New_Chunk"
doc.chunks.add(chunk)

# SET COORDINATE SYSTEM
print("---Setting coordinate system...")
# init coordinate system object
CoordinateSystem = PhotoScan.CoordinateSystem()
if not(CoordinateSystem.init(CoordinateSystemEPSG)):
   app.messageBox("Coordinate system EPSG code not recognized!")
# define coordinate system in chunk
chunk.crs = CoordinateSystem
chunk.projection = CoordinateSystem

# FIND ALL PHOTOS IN PATH
AerialImageFiles = glob.glob(AerialImagesPattern)

# LOAD CAMERA CALIBRATION
print("---Loading camera calibration...")
# we need to open first image to define image size
cam = PhotoScan.Camera()
cam.open(AerialImageFiles[0])
# Load sensor calib
user_calib = PhotoScan.Calibration()
if not(user_calib.load(CalibFile)):
   app.messageBox("Loading of calibration file failed!")
# build sensor object
sensor = PhotoScan.Sensor()
sensor.label = "MyCamera"
sensor.width = cam.width
sensor.height = cam.height
sensor.fixed = False
sensor.user_calib = user_calib
chunk.sensors.add(sensor)

# LOAD AERIAL IMAGES
print("---Loading images...")
# load each image
for FileName in AerialImageFiles:
   print("File: " + FileName)
   cam = PhotoScan.Camera()
   if not(cam.open(FileName)):
      app.messageBox("Loading of image failed: " + FileName)
   cam.label = cam.path.rsplit("/",1)[1]
   cam.sensor = sensor
   chunk.cameras.add(cam)

# LOAD CAMERA ORIENTATIONS
print("---Loading initial photo orientations...")
print("File: " + OrientationFile)
# build ground control
chunk.ground_control.projection = CoordinateSystem
chunk.ground_control.crs = CoordinateSystem
if not(chunk.ground_control.load(OrientationFile,"csv")):
   app.messageBox("Loading of orientation file failed!")
chunk.ground_control.apply()

# SAVE PROJECT
print("---Saving project...")
print("File: " + PhotoScanProjectFile)
if not(doc.save(PhotoScanProjectFile)):
   app.messageBox("Saving project failed!")

# All init done! Ready to start processing.

# ALIGN PHOTOS
print("---Aligning photos ...")
print("Accuracy: " + AlignPhotosAccuracy)
chunk.matchPhotos(accuracy=AlignPhotosAccuracy, preselection="ground control")
chunk.alignPhotos()

# SAVE PROJECT
print("---Saving project...")
print("File: " + PhotoScanProjectFile)
if not(doc.save(PhotoScanProjectFile)):
   app.messageBox("Saving project failed!")

# If ground control markers are used, code should stop here and user should click them in now

# BUILD GEOMETRY
print("---Building Geometry...")
print("Quality: " + BuildGeometryQuality)
print("Faces: " + str(BuildGeometryFaces))
# THESE ROWS WERE ALTERED IN v1.0.0 UPDATE
# chunk.buildDepth(quality=BuildGeometryQuality)
# chunk.buildModel(object="height field", geometry="smooth", faces=BuildGeometryFaces)
if not(chunk.buildDenseCloud(quality=BuildGeometryQuality)):
   app.messageBox("Build Dense Cloud failed!")

if not(chunk.buildModel(surface="height field", source="dense", interpolation="enabled", faces=BuildGeometryFaces)):
   app.messageBox("Build mesh model failed!")

# SAVE PROJECT
print("---Saving project...")
print("File: " + PhotoScanProjectFile)
if not(doc.save(PhotoScanProjectFile)):
   app.messageBox("Saving project failed!")

# Actual processing done
# EXPORT DATA:

# EXPORT CAMERAS
print("---Exporting camera positions...")
if not(chunk.exportCameras(CamerasOPKFile, "opk", chunk.projection)):
   app.messageBox("Exporting Cameras OPK failed!")

# CALCULATE THE BEST SENSIBLE PIXEL SIZES
Resolution = 0
#if Resolution == 0:
# Get average camera altitude and focal length
N_cameras = len(chunk.cameras)
CamAlt = 0.0
focal_length = 0.0
for i in range(N_cameras):
   CamAlt += chunk.ground_control.locations[chunk.cameras[i]].coord[2]
   focal_length += (chunk.cameras[i].sensor.calibration.fx + chunk.cameras[i].sensor.calibration.fy) / 2

CamAlt /= N_cameras
focal_length /= N_cameras
# Get average ground  altitude
N_points = len(chunk.point_cloud.points)
GroundAlt = 0.0
for i in range(N_points):
   GroundAlt += chunk.point_cloud.points[i].coord.z

GroundAlt = GroundAlt / N_points
# calculate Ground sampling distance
GroundSamplingDistance = math.fabs(CamAlt-GroundAlt) / focal_length
# round it to next millimeter
GroundSamplingDistance = math.ceil(GroundSamplingDistance * 1000)/1000

# EXPORT DSM(S)
print("---Exporting Digital Surface Model(s)...")
for Resolution in DSMResolutions:
   if Resolution == 0:
      Resolution = GroundSamplingDistance
   
   FileName = ".\\DSM_PixelSize=" + str(Resolution) + "m.tif"
   print("File: " + FileName)
   if not(chunk.exportDem(FileName, format="tif", dx=Resolution, dy=Resolution, projection=chunk.projection)):
      app.messageBox("Exporting DSM failed: " + FileName)
   

# EXPORT ORTHOIMAGE(S)
print("---Exporting Orthoimage(s)...")
for Resolution in OrthoImageResolutions:
   if Resolution == 0:
      Resolution = GroundSamplingDistance
   
   FileName = ".\\OrthoImage_PixelSize=" + str(Resolution) + "m.tif"
   print("File: " + FileName)
   if not(chunk.exportOrthophoto(FileName, format="tif", projection=chunk.projection, blending="mosaic", dx=Resolution, dy=Resolution)):
      app.messageBox("Exporting orthoimage failed: " + FileName)
   
# SAVE PROJECT
print("---Saving project...")
print("File: " + PhotoScanProjectFile)
if not(doc.save(PhotoScanProjectFile)):
   app.messageBox("Saving project failed!")

# Close photoscan
# app.quit()
Instructions for fellow Python-virgins: copy-paste the script into Notepad, edit the files/folders to your own, save the file as "PythonScript.py" (or anything you want), and run it in PhotoScan via "Tools > Run Script..."

And here is an example of the CameraOrientations_IMU.txt file:
Code: [Select]
#CoordinateSystem: WGS84_UTM_Zone_32N(EPSG:32632)
#FileName EastingX NorthingY Altitude
P1440733_8bit.tif 304323.409 5740634.830 79.841
P1440734_8bit.tif 304323.758 5740635.029 79.597
P1440735_8bit.tif 304322.964 5740633.880 78.496
P1440736_8bit.tif 304322.164 5740632.638 79.890
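The GSD estimate near the end of the script reduces to flying height above ground divided by the focal length in pixels, rounded up to the next millimeter. A worked example with made-up numbers:

```python
import math

# hypothetical values: cameras ~80 m above ground, calibrated fx/fy ~4000 px
cam_alt, ground_alt, focal_px = 80.0, 0.0, 4000.0

gsd = math.fabs(cam_alt - ground_alt) / focal_px  # meters per pixel
gsd = math.ceil(gsd * 1000) / 1000                # round up to the next mm
print(gsd)  # 0.02
```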

Alexey:
(1) If you want, feel free to add any parts of this script to the Python Reference manual.
(2) Also, as already requested in an earlier topic, I would love to be able to run this script from the OS command line, for example with the syntax:

   > photoscan.exe -RunScript "D:\Temp\PhotoScanScriptTest\PythonScript.py"

...and preferably even with an option to disable/hide the whole user interface (so that there is no chance the user disturbs the running script, for example by pressing Cancel in some dialog):

   > photoscan.exe -RunScriptHidden "D:\Temp\PhotoScanScriptTest\PythonScript.py"

8
General / Setting location of temporary bitmap file
« on: February 05, 2013, 04:43:45 PM »
I noticed from the Windows Resource Monitor that PhotoScan Pro apparently uses a temporary bitmap file on the hard disk in some processing phases. In my case the file was located at "E:\$Bitmap". In my configuration the E-drive is a RAID-5 HDD array containing the bulk of the images, while the D-drive is a fast SSD especially dedicated as a working disk. If there is any significant data flow to the temporary bitmap file, it would certainly be beneficial to place it on the SSD instead of the HDD.

Firstly, is there a setting somewhere in PhotoScan that would allow me to select the disk on which to place the temporary file(s)? I could not find one.

Secondly, if such a setting is not currently available, do you think adding it would speed anything up in practice?

