Messages - jeremyeastwood

1
Thanks for the update Alexey - will try it out again in 1.3 when it's released!

2
Hi Alexey,

I only see the line on the exported orthomosaic (and only when it is exported in web mercator).

Thanks for your help

3
Hi,

I am having an issue exporting my orthomosaic in web mercator projection (EPSG:3857): a blank line (missing pixels) two pixels wide appears as a stripe through my output file.  This does not happen when I export in WGS84 (the project coordinate system), or any other projection I have tried.

Some details - I am using version 1.2.6 and exporting with:
 - projection: epsg:3857
 - single tile output (no pixel limits)
 - default boundaries
 - tiff compression: none
 - big tiff: no

I have tried various compression settings, but it makes no difference.  One interesting thing: the stripe appears exactly 4096 pixels into the output orthomosaic (the ortho I'm testing right now is 4274x3835 pixels, so the missing pixels appear as a vertical line near the right-hand edge); when I set the max pixel size to 4096, the stripe falls at the edge of each tile (so it's difficult to see).

Right now my workaround is to export the ortho in WGS84 (EPSG:4326) and then convert it to EPSG:3857 afterwards using gdal; however, this is slower, so not ideal.  Any ideas how I can eliminate the artifacts and export directly in EPSG:3857?
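For reference, the conversion step in my workaround looks roughly like this (a minimal sketch using GDAL's Python bindings, available in GDAL 2.1+; the file names are placeholders):

Code: [Select]
# Sketch of my current workaround: reproject the WGS84 export to
# web mercator with GDAL's Python bindings (file names are placeholders)
from osgeo import gdal

gdal.Warp('ortho_3857.tif', 'ortho_4326.tif', dstSRS='EPSG:3857')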

Thanks in advance for your help

4
Hi there,

I'm using the following to calculate the orthomosaic resolution for my map:

Code: [Select]
ortho_res = float(chunk.model.meta['model/resolution']) / int(chunk.model.meta["model/depth_downscale"]) * chunk.transform.scale
This works fine for my normal workflow; however, sometimes I generate a quick orthomosaic using just the sparse point cloud, and this breaks the snippet above, as chunk.model.meta["model/depth_downscale"] no longer exists (indexing it returns None), resulting in:
Code: [Select]
int() argument must be a string or a number, not 'NoneType'
How should I calculate my orthomosaic resolution when I have not generated a dense point cloud?
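One option I'm considering is to guard the lookup (a sketch; it assumes a missing depth_downscale entry simply means no depth-map downscaling was applied, so no division is needed - please correct me if that assumption is wrong):

Code: [Select]
# Sketch: fall back gracefully when the model was built without a dense
# cloud and "model/depth_downscale" is absent (indexing returns None)
ortho_res = float(chunk.model.meta['model/resolution']) * chunk.transform.scale
downscale = chunk.model.meta['model/depth_downscale']
if downscale is not None:
    ortho_res /= int(downscale)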

Thanks

5
Hi Alexey,

I use the maximum face count for the polygonal model (not sure how many faces for this particular case, but it can be several million for larger projects):

Code: [Select]
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DenseCloudData, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)
Would you suggest building the DEM first and using that as the surface for the ortho as a sensible approach?  Would it affect orthomosaic quality at all (especially for things like vertical surfaces)?
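If so, I assume the calls would look something like this (a sketch; I'm assuming ElevationData is accepted as a surface source for buildOrthomosaic in 1.2.x - please correct me if not):

Code: [Select]
# Sketch: build a DEM from the dense cloud, then project the orthomosaic
# onto it (assumes ElevationData is a valid surface source here)
chunk.buildDem(source=PhotoScan.DenseCloudData)
chunk.buildOrthomosaic(surface=PhotoScan.ElevationData)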

Thanks again for your help

6
Thanks for the quick response as always Alexey!

I've actually just re-processed the project on 1.2.2 and this time it processed successfully (but again with a large memory spike during the buildOrthomosaic stage), so I'm not sure if you'll be able to reproduce the crash I experienced previously.

Would you expect a large memory spike during the exportOrthomosaic stage (at least 2x the max memory used in the other stages)? Would this be different in 1.2.4? This is not something I came across with the earlier version, so I'm just wondering if I should be doing something different to avoid this behaviour with the newer versions of the software.

Thanks for your help,

Jeremy

7
I'm having some trouble with very high memory spikes during the buildOrthomosaic processing step, which is actually crashing PhotoScan for some projects (this doesn't happen for all projects).  I'm running version 1.2.2, and don't see the same issue when running an older version of the software (1.1.6).

For example, for a relatively small project (99 images / 500MB) with version 1.2.2, I'm seeing a large memory spike up to about 50GB, whereas with version 1.1.6 the memory usage never rose above about 14GB.  Here's a link to the input images if you want to reproduce: https://www.dropbox.com/s/ysmjsuahw5nsagt/sample_images.zip?dl=0

My workflow for 1.2.2 is as follows:

Code: [Select]
import PhotoScan

app = PhotoScan.Application()
doc = PhotoScan.Document()

chunk = doc.addChunk()
chunk.addPhotos(image_list)
chunk.loadReferenceExif()
coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.ReferencePreselection, tiepoint_limit=10000)
chunk.alignCameras()
chunk.optimizeCameras()

chunk.buildDenseCloud(quality=PhotoScan.MediumQuality, filter=PhotoScan.AggressiveFiltering)
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DenseCloudData, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)
chunk.model.closeHoles()
chunk.model.fixTopology()
chunk.buildUV(mapping=PhotoScan.GenericMapping, count=16)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# buildOrthomosaic seems to require a project saved in the new .psx
# format, so save and reopen the project before calling it
doc.save(path=new_project_path, chunks=[chunk])
doc = PhotoScan.Document()
doc.open(new_project_path)
chunk = doc.chunk
chunk.buildOrthomosaic()
chunk.exportOrthomosaic(ortho_path, projection=chunk.crs)
chunk.buildDem(source=PhotoScan.DenseCloudData)
chunk.exportDem(dem_path, projection=chunk.crs)

doc.save(path=project_path, chunks=[chunk])
app.quit()

My workflow for 1.1.6 is as follows:

Code: [Select]
import PhotoScan

app = PhotoScan.Application()
doc = PhotoScan.Document()

chunk = PhotoScan.Chunk()
doc.addChunk(chunk)
chunk.addPhotos(image_list)
chunk.loadReferenceExif()
coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.ReferencePreselection, tiepoint_limit=10000)
chunk.alignCameras()
chunk.optimizeCameras()

chunk.buildDenseCloud(quality=PhotoScan.MediumQuality, filter=PhotoScan.AggressiveFiltering)
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DensePoints, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)
chunk.model.closeHoles()
chunk.model.fixTopology()
chunk.buildUV(mapping=PhotoScan.GenericMapping, count=16)  # try different mapping modes
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

chunk.exportOrthophoto(ortho_path, color_correction=False, blending=PhotoScan.MosaicBlending, projection=chunk.crs)
chunk.exportDem(dem_path, projection=chunk.crs)

doc.save(project_path)
app.quit()

Is this a known issue / can I change my 1.2.2 workflow to avoid such high memory usage?

Any help greatly appreciated.

8
General / Re: Can't rotate model to align with z-axis (button grayed out)
« on: January 28, 2016, 09:15:14 PM »
Great, thanks for the info Alexey.  Will try both methods.

9
General / Can't rotate model to align with z-axis (button grayed out)
« on: January 28, 2016, 12:26:06 PM »
I have a model in version 1.2.3 which looks great, but is rotated about 45deg around the y-axis:

https://www.dropbox.com/s/0tskd8rmcuspwy5/Screenshot%202016-01-28%2001.23.02.png?dl=0

I've rotated the bounding box to align with the model, but now want to rotate the model itself so that the z-axis is vertical.  I have done this before with previous projects (maybe on earlier versions of PhotoScan), but this time the rotate model button is grayed out, so I can't use it (see screenshot).

Is there a way to enable the rotate model button (maybe an extra processing step or a setting somewhere?), or alternatively is there a way I can do this through the API?

Thanks in advance.

10
I had some difficulty replicating my 1.1.6 workflow with the new 1.2.x API, but managed to get it working by saving the project in .psx format and then re-loading it before calling the new chunk.buildOrthomosaic() method.  Is this the correct way to go about ortho export, or is there a cleaner method?  Will this change with future releases?

My workflow is summarised below:

Code: [Select]
import PhotoScan
import glob

images = glob.glob('/path/to/project/directory/images/*.jpg')
output_path = '/path/to/project/directory/'


app = PhotoScan.Application()
doc = PhotoScan.Document()
chunk = doc.addChunk()

chunk.addPhotos(images)
chunk.loadReferenceExif()

coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system

chunk.matchPhotos(accuracy=PhotoScan.LowAccuracy, preselection=PhotoScan.ReferencePreselection, tiepoint_limit=10000)
chunk.alignCameras()

chunk.optimizeCameras()

chunk.buildDenseCloud(quality=PhotoScan.LowQuality, filter=PhotoScan.AggressiveFiltering)

chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DenseCloudData, face_count=PhotoScan.HighFaceCount, interpolation=PhotoScan.EnabledInterpolation)

chunk.model.closeHoles()
chunk.model.fixTopology()

chunk.buildUV(mapping=PhotoScan.GenericMapping, count=4)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# save in .psx format and reopen - buildOrthomosaic seems to require this
doc.save(path=output_path + 'project.psx', chunks=[chunk])

doc = PhotoScan.Document()
doc.open(output_path + 'project.psx')
chunk = doc.chunk

chunk.buildOrthomosaic()
chunk.buildDem(source=PhotoScan.DenseCloudData)

chunk.exportOrthomosaic(output_path + 'ortho.tif')

chunk.exportDem(output_path + 'dem.tif')

chunk.exportPoints(output_path + 'points.las', source=PhotoScan.DenseCloudData, format='las')

chunk.exportModel(output_path + 'model.obj', texture_format='jpg', texture=True, format='obj')

app.quit()

Any suggestions on how to improve it based on the new API are also very welcome.

11
Thanks so much for writing these scripts Alexey - really helpful!

12
Hi Alexey,

Thanks for the quick response.  I've now got what looks like camera-local coords for each tie point seen in each camera (e.g. chunk.point_cloud.projections[chunk.cameras[0]][0].coord), so what I need to do is link this one specific point to one of the points in the sparse point cloud, i.e. a specific index in chunk.point_cloud.points.  Is there a way to do this?

What I'm trying to do is link each point in the sparse point cloud to the cameras used to locate it, so that I can generate a lookup where I can select some points and see the relevant cameras, or select some cameras and see the points they generated.
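In case it helps, this is the kind of lookup I'm after (a sketch; it assumes each projection carries a track_id matching the track_id on the sparse points - please correct me if that's not how the API works):

Code: [Select]
# Sketch: build point <-> camera lookups, assuming projections and sparse
# points share a matching track_id attribute
point_cloud = chunk.point_cloud
index_by_track = {p.track_id: i for i, p in enumerate(point_cloud.points)}

cameras_by_point = {}  # point index -> list of cameras seeing it
points_by_camera = {}  # camera -> list of point indices it sees
for camera in chunk.cameras:
    points_by_camera[camera] = []
    for proj in point_cloud.projections[camera]:
        idx = index_by_track.get(proj.track_id)
        if idx is None:
            continue  # projection without a corresponding valid point
        cameras_by_point.setdefault(idx, []).append(camera)
        points_by_camera[camera].append(idx)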

Thanks again for your help

13
Hi there,

I'd like to link each point in the sparse point cloud (and also the dense cloud later on) to the cameras which "see" it (the cameras in which that point in space appears).  Alternatively, I'd like a list of the points seen by each camera.

Can I do this through the Python API directly, i.e. is there a property of a point in chunk.point_cloud.points (or similar) which gives me the cameras, or do I have to calculate each camera's footprint and find the point locations which fall within those bounds?

Thanks

14
General / Re: low accuracy align produces better results than high accuracy
« on: September 15, 2015, 11:23:04 PM »
Thanks for the responses guys - very helpful.

@dcm39 that's very interesting - I believe the medium / low accuracy settings downscale the images before alignment, so that would eliminate some of the smaller features.  An interesting approach for homogeneous imagery with small feature scales, although after running a lot of different jobs through low- and high-accuracy alignment I've more often found the low accuracy settings to produce worse alignments (more holes).  This could be a backup approach for when high-accuracy alignment fails, so I will keep trying it in the future.

@wishgranter great recommendations - using a 40k key point limit and a 10k tie point limit worked a charm:
https://www.dropbox.com/s/qm4urtq58sleovb/Screenshot%202015-09-15%2013.05.54.png?dl=0
https://www.dropbox.com/s/rw6az0a81ib70ho/Screenshot%202015-09-15%2013.06.07.png?dl=0
Are these settings appropriate across all camera models / scenery types?
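For anyone scripting this, the alignment call I'm now using looks roughly like the following (a sketch; keypoint_limit / tiepoint_limit as in the Python API):

Code: [Select]
# Sketch: alignment with wishgranter's suggested limits
# (40k key points, 10k tie points)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.ReferencePreselection,
                  keypoint_limit=40000,
                  tiepoint_limit=10000)
chunk.alignCameras()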

Thanks again guys

15
General / low accuracy align produces better results than high accuracy
« on: September 15, 2015, 02:25:56 AM »
Hi there,

I aligned a large set of photos (450) using the low accuracy setting (reference pair preselection, 0 tie point limit, 100,000 key point limit), which produced a nice alignment and ultimately a good orthomosaic:

https://www.dropbox.com/s/a9893zd1roww822/Screenshot%202015-09-14%2016.09.31.png?dl=0
https://www.dropbox.com/s/c4cu5cmny295is6/Screenshot%202015-09-14%2016.12.23.png?dl=0

However, when re-running with the high accuracy setting for alignment (other settings the same), some of the photos aren't stitched, leaving large holes in the map:

https://www.dropbox.com/s/2b1lzwg7jb8fxvx/Screenshot%202015-09-14%2016.23.48.png?dl=0
https://www.dropbox.com/s/rn45kzdly9ntvfu/Screenshot%202015-09-14%2016.24.17.png?dl=0

Any ideas what's going on, or what settings I might use to improve the results of the high-accuracy alignment (or should I just use low-accuracy alignment as my default instead)?

Thanks
