Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - jeremyeastwood

1
Hi,

I am having an issue exporting my orthomosaic in web mercator projection (EPSG:3857): a blank line (missing pixels) 2 pixels wide appears as a stripe through my output file.  This does not happen when I export in WGS84 (the project coordinate system), or any other projection I have tried.

Some details - I am using version 1.2.6 and exporting with:
 - projection: epsg:3857
 - single tile output (no pixel limits)
 - default boundaries
 - tiff compression: none
 - big tiff: no

I have tried various compression settings, but there is no difference.  One interesting thing: the stripe appears exactly 4096 pixels into my output orthomosaic (the ortho I'm testing right now is 4274x3835 pixels, so the missing pixels appear as a vertical line near the right-hand edge); when I set the max tile size to 4096 pixels, the stripe sits at the edge of each tile (so it's difficult to see).

Right now my workaround is to export the ortho in WGS84 (EPSG:4326), then convert to EPSG:3857 afterwards using GDAL; however, this is slower, so not ideal.  Any ideas how I can eliminate the artifacts and export directly in EPSG:3857?
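For reference, the conversion step in my workaround looks roughly like this (using the GDAL Python bindings; the file names are just for illustration, and gdalwarp on the command line does the same job):

Code: [Select]
from osgeo import gdal

# reproject the WGS84 export to web mercator
gdal.Warp('ortho_3857.tif', 'ortho_4326.tif',
          srcSRS='EPSG:4326', dstSRS='EPSG:3857')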

Thanks in advance for your help

2
Hi there,

I'm using the following to calculate the orthomosaic resolution for my map:

Code: [Select]
ortho_res = float(chunk.model.meta['model/resolution']) / int(chunk.model.meta["model/depth_downscale"]) * chunk.transform.scale
This works fine for my normal workflow; however, sometimes I generate a quick orthomosaic using just the sparse point cloud, and this breaks the snippet above, as chunk.model.meta["model/depth_downscale"] no longer exists, resulting in:

Code: [Select]
int() argument must be a string or a number, not 'NoneType'
How should I calculate my orthomosaic resolution when I have not generated a dense point cloud?
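For now I'm working around it with a None check, though I'm not sure defaulting the downscale to 1 is actually correct for sparse-only models:

Code: [Select]
raw = chunk.model.meta["model/depth_downscale"]  # None when no dense cloud was built
downscale = int(raw) if raw is not None else 1   # assumption: no depth downscaling applies
ortho_res = float(chunk.model.meta["model/resolution"]) / downscale * chunk.transform.scale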

Thanks

3
I'm having some trouble with very high memory spikes during the buildOrthomosaic processing step, which is actually crashing PhotoScan for some projects (this doesn't happen for all projects).  I'm running version 1.2.2, and don't seem to have the same issue when running an older version of the software (1.1.6).

For example, for a relatively small project (99 images / 500MB) with version 1.2.2, I'm seeing a large memory spike up to about 50GB, but when running version 1.1.6 the memory usage never rises above about 14GB.  Here's a link to the input images if you want to replicate: https://www.dropbox.com/s/ysmjsuahw5nsagt/sample_images.zip?dl=0

My workflow for 1.2.2 is as follows:

Code: [Select]
import PhotoScan

app = PhotoScan.Application()
doc = PhotoScan.Document()

chunk = doc.addChunk()
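# image_list (the paths to the input photos) is defined earlier in my script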
chunk.addPhotos(image_list)
chunk.loadReferenceExif()
coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.ReferencePreselection, tiepoint_limit=10000)
chunk.alignCameras()
chunk.optimizeCameras()

chunk.buildDenseCloud(quality=PhotoScan.MediumQuality, filter=PhotoScan.AggressiveFiltering)
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DenseCloudData, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)
chunk.model.closeHoles()
chunk.model.fixTopology()
chunk.buildUV(mapping=PhotoScan.GenericMapping, count=16)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save(path=new_project_path, chunks=[chunk])
doc = PhotoScan.Document()
doc.open(new_project_path)
chunk = doc.chunk
chunk.buildOrthomosaic()
chunk.exportOrthomosaic(ortho_path, projection=chunk.crs)
chunk.buildDem(source=PhotoScan.DenseCloudData)
chunk.exportDem(dem_path, projection=chunk.crs)

doc.save(path=project_path, chunks=[chunk])
app.quit()

My workflow for 1.1.6 is as follows:

Code: [Select]
import PhotoScan

app = PhotoScan.Application()
doc = PhotoScan.Document()

chunk = PhotoScan.Chunk()
doc.addChunk(chunk)
chunk.addPhotos(image_list)
chunk.loadReferenceExif()
coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.ReferencePreselection, tiepoint_limit=10000)
chunk.alignCameras()
chunk.optimizeCameras()

chunk.buildDenseCloud(quality=PhotoScan.MediumQuality, filter=PhotoScan.AggressiveFiltering)
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DensePoints, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)
chunk.model.closeHoles()
chunk.model.fixTopology()
chunk.buildUV(mapping=PhotoScan.GenericMapping, count=16)  # try different mapping modes
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

chunk.exportOrthophoto(ortho_path, color_correction=False, blending=PhotoScan.MosaicBlending, projection=chunk.crs)
chunk.exportDem(dem_path, projection=chunk.crs)

doc.save(project_path)
app.quit()

Is this a known issue, and can I change my 1.2.2 workflow to avoid such high memory usage?

Any help greatly appreciated.

4
General / Can't rotate model to align with z-axis (button grayed out)
« on: January 28, 2016, 12:26:06 PM »
I have a model in version 1.2.3 which looks great, but it is rotated about 45° about the y-axis:

https://www.dropbox.com/s/0tskd8rmcuspwy5/Screenshot%202016-01-28%2001.23.02.png?dl=0

I've rotated the bounding box to align with the model, but I want to align the model so that the z-axis is vertical.  I have done this before with previous projects (maybe on earlier versions of PhotoScan), but this time the rotate model button is grayed out, so I can't use it (see screenshot).

Is there a way to enable the rotate model button (maybe an extra processing step or a setting somewhere?), or alternatively is there a way I can do this through the API (along the lines of the sketch below)?
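If it has to be the API, something like this is what I had in mind (untested sketch; I'm assuming that left-multiplying chunk.transform.matrix rotates the model in world coordinates, and the 45° angle is just from eyeballing the screenshot):

Code: [Select]
import math
import PhotoScan

chunk = PhotoScan.app.document.chunk

a = math.radians(45)  # hypothetical correction angle
# rotation about the y axis
R = PhotoScan.Matrix([[math.cos(a), 0, math.sin(a), 0],
                      [0, 1, 0, 0],
                      [-math.sin(a), 0, math.cos(a), 0],
                      [0, 0, 0, 1]])
chunk.transform.matrix = R * chunk.transform.matrix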

Thanks in advance.

5
I had some difficulty replicating my 1.1.6 workflow with the new 1.2.x API, but managed to get it working by saving the project in .psx format and then re-loading it before calling the new chunk.buildOrthomosaic() method.  Is this the correct way to go about ortho export, or is there a cleaner method?  Will this be changing with future releases?

My workflow is summarised below:

Code: [Select]
import PhotoScan
import glob

images = glob.glob('/path/to/project/directory/images/*.jpg')
output_path = '/path/to/project/directory/'


app = PhotoScan.Application()
doc = PhotoScan.Document()
chunk = doc.addChunk()

chunk.addPhotos(images)
chunk.loadReferenceExif()

coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system

chunk.matchPhotos(accuracy=PhotoScan.LowAccuracy, preselection=PhotoScan.ReferencePreselection, tiepoint_limit=10000)
chunk.alignCameras()

chunk.optimizeCameras()

chunk.buildDenseCloud(quality=PhotoScan.LowQuality, filter=PhotoScan.AggressiveFiltering)

chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DenseCloudData, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)

chunk.model.closeHoles()
chunk.model.fixTopology()

chunk.buildUV(mapping=PhotoScan.GenericMapping, count=4)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save(path=output_path + 'project.psx', chunks=[chunk])

# re-open the saved .psx before building the orthomosaic (the workaround described above)
doc = PhotoScan.Document()
doc.open(output_path + 'project.psx')
chunk = doc.chunk

chunk.buildOrthomosaic()
chunk.buildDem(source=PhotoScan.DenseCloudData)

chunk.exportOrthomosaic(output_path + 'ortho.tif')

chunk.exportDem(output_path + 'dem.tif')

chunk.exportPoints(output_path + 'points.las', source=PhotoScan.DenseCloudData, format='las')

chunk.exportModel(output_path + 'model.obj', texture_format='jpg', texture=True, format='obj')

app.quit()

Any suggestions on how to improve it based on the new API are also very welcome.

6
Hi there,

I'd like to link each point in the sparse point cloud (and later the dense cloud as well) to the cameras which "see" it (the cameras in which that point in space appears).  Alternatively, I'd like a list of the points seen by each camera.

Can I do this through the python API directly, i.e. is there a property of a point in chunk.point_cloud.points (or similar) which gives me the cameras (see the sketch below for what I mean), or do I have to calculate each camera's footprint and find the point locations which fall within those bounds?
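Something like this is the mapping I'm after (rough sketch based on my reading of the docs; I'm assuming projections are keyed by camera and can be matched to sparse points via track_id):

Code: [Select]
from collections import defaultdict
import PhotoScan

chunk = PhotoScan.app.document.chunk
point_cloud = chunk.point_cloud

# assumption: track_id links a projection to a sparse point
point_by_track = {p.track_id: i for i, p in enumerate(point_cloud.points)}

cameras_by_point = defaultdict(list)  # point index -> cameras that see it
for camera in chunk.cameras:
    if camera.transform is None:  # skip cameras that failed to align
        continue
    for proj in point_cloud.projections[camera]:
        if proj.track_id in point_by_track:
            cameras_by_point[point_by_track[proj.track_id]].append(camera)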

Thanks

7
General / low accuracy align produces better results than high accuracy
« on: September 15, 2015, 02:25:56 AM »
Hi there,

I aligned a large set (450) of photos using the low-accuracy setting (reference pair preselection, 0 tie point limit, 100,000 key point limit), producing a nice alignment and ultimately a good orthomosaic:

https://www.dropbox.com/s/a9893zd1roww822/Screenshot%202015-09-14%2016.09.31.png?dl=0
https://www.dropbox.com/s/c4cu5cmny295is6/Screenshot%202015-09-14%2016.12.23.png?dl=0

However, when re-running using the high-accuracy setting for alignment (other settings the same), some of the photos aren't stitched, leaving large holes in the map:

https://www.dropbox.com/s/2b1lzwg7jb8fxvx/Screenshot%202015-09-14%2016.23.48.png?dl=0
https://www.dropbox.com/s/rn45kzdly9ntvfu/Screenshot%202015-09-14%2016.24.17.png?dl=0

Any ideas what's going on, or what settings I might be able to use to improve the results of the high-accuracy alignment (or should I just use low-accuracy alignment as my default instead)?

Thanks

8
Hi,

I am using a python script to reset my model bounding box's x and y location and dimensions, as well as its z-rotation, while keeping the bounding box's z-extent and center the same.  The x and y shift, scaling and rotation all work well, but the bounding box seems to change its z-location relative to the model (even though a console printout of the box center z value is the same before and after the transformation).

For clarity, I have attached screenshots from before and after applying the transformation, showing the unwanted shift in the bounding box's z-location (which moves the bounding box away from the model, making further processing fail); you can also see the console output showing that the region center z value remains the same.

My code is below:

Code: [Select]
import PhotoScan
import math

bbox_center = [-119.43639088933767, 36.57570587655139]  # deg - wgs84
bbox_size = [193.71339323337548, 99.76327818514577]  # m
bbox_rot = 180.58462719482992  # deg

doc = PhotoScan.app.document
chunk = doc.chunk
reg = chunk.region
trans = chunk.transform.matrix

print("region center: {}".format(reg.center))
print("region size: {}".format(reg.size))
print("region rot: {}".format(reg.rot))

newregion = PhotoScan.Region()
center_geo = PhotoScan.Vector([bbox_center[0], bbox_center[1], 0.])  # uses existing region height

v_temp = chunk.crs.unproject(center_geo)
v_temp.size = 4
v_temp.w = 1
centerLocal = chunk.transform.matrix.inv() * v_temp
centerLocal.size = 3
newregion.center = PhotoScan.Vector([centerLocal[0], centerLocal[1], reg.center[2]])  # uses existing region height
print("newregion center: {}".format(newregion.center))

# Set region size:
# generate scale factor
rot_untransformed = PhotoScan.Matrix().diag([1, 1, 1, 1])
rot_temp = trans * rot_untransformed
s = math.sqrt(rot_temp[0, 0] ** 2 + rot_temp[0, 1] ** 2 + rot_temp[0, 2] ** 2)

# scale desired size in metres to chunk internal coordinate system
geo_size = PhotoScan.Vector([bbox_size[0], bbox_size[1], 0])  # uses original chunk region z size
inter_size = geo_size / s
newregion.size = PhotoScan.Vector([inter_size[0], inter_size[1], reg.size[2]])
print("newregion size: {}".format(newregion.size))

# Set region rotation
# build z-axis rotation matrix
SinRotZ = math.sin(math.radians(bbox_rot))
CosRotZ = math.cos(math.radians(bbox_rot))
RotMat = PhotoScan.Matrix([[CosRotZ, -SinRotZ, 0, 0], [SinRotZ, CosRotZ, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

#  rotate region bounding box
v = PhotoScan.Vector([0, 0, 0, 1])
v_t = trans * v
v_t.size = 3
m = chunk.crs.localframe(v_t)
m = m * trans
m = RotMat*m
s = math.sqrt(m[0, 0]**2 + m[0, 1]**2 + m[0, 2]**2)  # scale factor
R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]], [m[1, 0], m[1, 1], m[1, 2]], [m[2, 0], m[2, 1], m[2, 2]]])
R = R * (1. / s)
newregion.rot = R.t()
print("newregion rot: {}".format(newregion.rot))

# put newregion to chunk
chunk.region = newregion

Any ideas what I'm doing wrong and how to fix it are greatly appreciated.

Thanks

9
General / aerial images placed beneath model after alignment
« on: March 23, 2015, 10:18:01 PM »
I have a set of georeferenced drone images (https://www.dropbox.com/s/0om9ixa32z94mv0/images.zip?dl=0) that stitch together nicely; however, after alignment the dense cloud (and model etc.) is placed above the images rather than below:

https://www.dropbox.com/s/g0wnl68xorhyzf4/Screenshot%202015-03-23%2012.05.34.png?dl=0

which leads to the georeferenced orthophoto having the wrong scale and not aligning properly with satellite imagery after export.

Is there a way I can force PhotoScan to always have the images pointing downwards, or to flip the dense cloud / model if it appears above them?

Thanks

10
General / failure to align images from fisheye lens camera
« on: March 13, 2015, 10:12:54 PM »
I'm failing on the align step for this set of images (https://www.dropbox.com/s/0om9ixa32z94mv0/images.zip?dl=0) from a fisheye-lens camera (a DJI Phantom).  I'm using high-accuracy alignment, the fisheye camera type and the latest PhotoScan version (1.1.3).  This is odd, as I've had success with similar image sets, and there's clearly enough overlap between the images.

Any suggestions very much appreciated,

Thanks

11
General / recommended value for reprojection error limit
« on: February 03, 2015, 10:11:04 PM »
I would like to remove all points with high reprojection error after the align stage to improve my model quality - any recommendations for a sensible limit I should use?

I had been using a limit of 1.0 in PhotoScan 1.0.4, however in 1.1.0 this value now seems to be too aggressive (cutting out most of my data), so I was wondering if there is a specific value indicating very inaccurate data that I could use for all my models?
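For reference, this is how I've been applying the cut-off in my scripts (the same approach I used in 1.0.4):

Code: [Select]
# rebuild the tie points, dropping those above the reprojection error limit
chunk.buildPoints(error=1.0)  # this limit now removes most of my points in 1.1.0
chunk.optimizeCameras()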

Thanks

12
Python and Java API / project not saving properly
« on: January 23, 2015, 10:46:09 PM »
I've just upgraded my script to 1.1.0 and it seems to run fine and produce all the required outputs; however, it isn't saving the project properly (it produces a blank project.psz).  I've included my full processing script below - any ideas greatly appreciated.

Code: [Select]
import math
import PhotoScan

# note: image_list, the bbox_* values and the various output paths are defined earlier in my script

app = PhotoScan.Application()
doc = PhotoScan.Document()
chunk = PhotoScan.Chunk()
chunk.addPhotos(image_list)

with open(quality_path, mode='w') as fd:
    chunk.estimateImageQuality()
    for i, image in enumerate(image_list):
        quality = chunk.cameras[i].frames[0].meta["Image/Quality"]
        fd.write('{image} {quality}\n'.format(image=image.split('/')[-1], quality=quality))

chunk.loadReferenceExif()
coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system
accuracy = PhotoScan.HighAccuracy
chunk.matchPhotos(accuracy=accuracy, preselection=PhotoScan.ReferencePreselection)
chunk.alignCameras()

# transform bounding box to desired mapping region
reg = chunk.region
trans = chunk.transform.matrix
newregion = PhotoScan.Region()

# Set region center:
center_geo = PhotoScan.Vector([bbox_center[0], bbox_center[1], 0.])  # uses existing region height
v_temp = chunk.crs.unproject(center_geo)
v_temp.size = 4
v_temp.w = 1
centerLocal = chunk.transform.matrix.inv() * v_temp
centerLocal.size = 3
newregion.center = PhotoScan.Vector([centerLocal[0], centerLocal[1], reg.center[2]])  # uses existing region height

# Set region size:
# generate scale factor
rot_untransformed = PhotoScan.Matrix().diag([1, 1, 1, 1])
rot_temp = trans * rot_untransformed
s = math.sqrt(rot_temp[0, 0] ** 2 + rot_temp[0, 1] ** 2 + rot_temp[0, 2] ** 2)

# scale desired size in metres to chunk internal coordinate system
geo_size = PhotoScan.Vector([bbox_size[0], bbox_size[1], 0])  # uses original chunk region z size
inter_size = geo_size / s
newregion.size = PhotoScan.Vector([inter_size[0], inter_size[1], reg.size[2]])

#  rotate region bounding box
SinRotZ = math.sin(math.radians(bbox_rot))
CosRotZ = math.cos(math.radians(bbox_rot))
RotMat = PhotoScan.Matrix([[CosRotZ, -SinRotZ, 0, 0], [SinRotZ, CosRotZ, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
v = PhotoScan.Vector([0, 0, 0, 1])
v_t = trans * v
v_t.size = 3
m = chunk.crs.localframe(v_t)
m = m * trans
m = RotMat*m
s = math.sqrt(m[0, 0]**2 + m[0, 1]**2 + m[0, 2]**2)  # scale factor
R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]], [m[1, 0], m[1, 1], m[1, 2]], [m[2, 0], m[2, 1], m[2, 2]]])
R = R * (1. / s)
newregion.rot = R.t()
chunk.region = newregion

chunk.buildPoints(error=1)
chunk.optimizeCameras()
chunk.saveReference(gc_path, "csv")
chunk.buildDenseCloud(quality=PhotoScan.HighQuality, filter=PhotoScan.MildFiltering)  # mild filter as default to improve trees / buildings
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DensePoints, face_count=PhotoScan.HighFaceCount)
chunk.model.closeHoles()
chunk.model.fixTopology()
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
chunk.exportReport(report_path)
chunk.exportOrthophoto(orthophoto_path, blockw=blockw, blockh=blockh, color_correction=False, blending=PhotoScan.MosaicBlending, write_kml=True, write_world=True, projection=chunk.crs)
chunk.exportDem(dem_path, dx=required_dx, dy=required_dy, blockw=blockw, blockh=blockh, write_kml=True, write_world=True, projection=chunk.crs)
open(complete_file, mode='w').write('saving project')
doc.save(project_path)
coord_system = PhotoScan.CoordinateSystem('LOCAL_CS["Local CS",LOCAL_DATUM["Local Datum",0],UNIT["metre",1]]')
chunk.exportPoints(points_path, source=PhotoScan.DensePoints, format='las', projection=coord_system)
chunk.decimateModel(2000000)  # TODO: check what size model comes out as (200MB limit from sketchfab atm)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)
chunk.exportModel(model_path, texture_format='tif', format='obj', projection=coord_system)

app.quit()

13
Ideally I'd like to define a series of points (lat, lng) and calculate the volume of the region inside - is this possible through python scripting?  (If not, my fallback plan is sketched below.)
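The fallback would be post-processing an exported DEM along these lines (rough sketch; the file name and base elevation are placeholders, the polygon mask is left as a TODO, and it assumes a DEM in a projected CRS with metre units):

Code: [Select]
import numpy as np
from osgeo import gdal

ds = gdal.Open('dem.tif')  # hypothetical DEM exported from PhotoScan
band = ds.GetRasterBand(1)
z = band.ReadAsArray().astype(float)
z[z == band.GetNoDataValue()] = np.nan

gt = ds.GetGeoTransform()
pixel_area = abs(gt[1] * gt[5])  # m^2 per pixel in a projected CRS

base = 100.0  # placeholder base elevation; TODO: mask to the (lat, lng) polygon
volume = np.nansum(z - base) * pixel_area  # m^3 above the base plane
print(volume)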

Thanks,

Jeremy

14
I'd like to emulate the "Crop invalid DEM" option for DEM export that you can select through the GUI, but through python scripting - is there any way to do this?

Thanks

15
General / Difficulty stitching corn field
« on: October 03, 2014, 11:07:40 PM »
I'm running into difficulties stitching large corn fields from aerial imagery, as shown here:

https://www.dropbox.com/s/fazxwnvjxwqvwun/Screenshot%202014-10-03%2012.35.30.png?dl=0

PhotoScan doesn't seem to be able to align the photos in the center of the field (see https://www.dropbox.com/s/oaw9qdza4l646ya/Screenshot%202014-10-03%2012.36.19.png?dl=0), and the parts it does align come out poorly (e.g. broken / non-straight corn rows), so I was hoping I could get some advice on ways to improve the stitch quality.

The images were taken with a Sony NEX5 (although they were downsampled to 2MP) and have 60% frontlap and sidelap, and the overall flight path was flown in a crosshatch pattern, such that the entire area was covered twice with a 90° rotation between each full coverage (to ensure at least one flight path is not aligned with the corn rows).  Each image was tagged with GPS info when captured, from a GPS device with +/-3m accuracy.  A sample image can be viewed / downloaded here:

https://www.dropbox.com/s/pbdud431ouldyfq/pict20140923_160305_0.jpg?dl=0

The PhotoScan workflow used high-accuracy settings at each stage, camera alignment was optimized after the align step, reprojection error was limited to 1, and the bounding box was fitted to the exact area of interest.  The report can be downloaded here:

https://www.dropbox.com/s/qohjnrsx8r2rbvg/report.pdf?dl=0

I have experimented with different aerial capture approaches - higher overlap, single flight paths (rather than the crosshatch pattern), higher altitudes, etc. - and have had some improvements, but I still run into the same issues in the center of large homogeneous areas such as corn fields, so I was hoping there might be improvements I could make to my PhotoScan workflow?

Any advice / suggestions would be greatly appreciated, and let me know if you want more details on any aspect of the image acquisition / PhotoScan workflow I've used.

Thanks
