Show Posts


Topics - StevenF

Pages: [1]
Feature Requests / Export Seamlines
« on: January 15, 2016, 03:16:45 AM »
Is it possible to export the seamlines generated by PhotoScan?
Is there any way to access these seamlines through the python API?

I'd like to have a shapefile of where different images are used in the orthomosaic, and then possibly edit this shapefile and re-import it as 'shapes' to have greater control over orthomosaic generation.

General / LAS intensity?
« on: June 17, 2015, 03:16:01 AM »
What is the source of the Intensity value for points exported in LAS format?

My images are four channels (R-G-B-NIR) and I performed dense cloud generation with the master channel set to default. I'd like to calculate point intensity metrics over an area (min, max, mean, etc of intensity) so I need to know where this intensity info is coming from. It would be really nice if there were a setting to choose a particular channel for the intensity value since I'd like to use the NIR channel.
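For what it's worth, once the intensity source is known, the per-area metrics mentioned above (min, max, mean) are straightforward to compute. Below is a minimal pure-Python sketch; the points and bounding box are made up for illustration, and in a real workflow they would come from the exported LAS file (e.g. read with a library such as laspy):

```python
# Sketch: summarizing LAS-style intensity values inside a bounding box.
# The points here are plain (x, y, intensity) tuples for illustration;
# in practice they would be read from the exported LAS file.

def intensity_stats(points, xmin, ymin, xmax, ymax):
    """Return (min, max, mean) intensity for points inside the box, or None."""
    vals = [i for x, y, i in points
            if xmin <= x <= xmax and ymin <= y <= ymax]
    if not vals:
        return None
    return min(vals), max(vals), sum(vals) / len(vals)

pts = [(0.5, 0.5, 100), (1.5, 0.5, 300), (5.0, 5.0, 900)]
print(intensity_stats(pts, 0, 0, 2, 2))  # → (100, 300, 200.0)
```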

Feature Requests / Brightness and Contrast
« on: December 24, 2014, 08:11:06 AM »
I think it would be useful to have simple brightness and contrast adjustments for viewing photos.

I'm working with a set of 16-bit images that only have values around 0-1000 (out of 65535) so they show up as black in the viewer. I had to rescale and apply a 2 percent stretch to all the photos in other software so I could see them when placing ground control and checking the sparse cloud and model.

I thought it would be fine to use the rescaled 8-bit images for the remaining steps in the workflow. However, my dense clouds were plagued with large gaps, so I tried using the 16-bit images in a few test areas. The difference was night and day: dense clouds generated from the original 16-bit images almost completely filled the gaps that existed in the dense clouds based on the 8-bit images. So a simple brightness and contrast slider could be very helpful to avoid this back-and-forth between 8-bit and 16-bit images.
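For anyone facing the same viewing problem, the 2 percent stretch mentioned above can be sketched as follows. This is a simplified pure-Python illustration with made-up pixel values; real 16-bit imagery would be processed as arrays (e.g. with numpy) rather than lists:

```python
# Sketch of a 2 % linear stretch: clip at the 2nd and 98th percentiles,
# then rescale the clipped range to 0-255 for display.

def percent_stretch(values, lo_pct=2.0, hi_pct=98.0):
    """Map values to 0-255, clipping below/above the given percentiles."""
    s = sorted(values)
    lo = s[int(len(s) * lo_pct / 100)]
    hi = s[min(int(len(s) * hi_pct / 100), len(s) - 1)]
    span = max(hi - lo, 1)
    return [max(0, min(255, round((v - lo) * 255 / span))) for v in values]

# hypothetical 16-bit values clustered near 0-1000, as described above
raw = [0, 50, 200, 500, 800, 1000]
print(percent_stretch(raw))
```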

General / Dense Cloud Not Using All Cameras?
« on: November 21, 2014, 09:11:11 PM »
When I click on a point and select "Filter Photos by Point" I get a list of 17 photos, but when I run Build Dense Cloud over the same area the console readout says: "Selected 7 cameras."
Why is PhotoScan not including all cameras that overlap the region when building the dense cloud?

General / Build Depth Maps Without Creating Dense Cloud?
« on: November 18, 2014, 02:02:32 AM »
Is it possible to create and save depth maps from the Build Dense Cloud process without going on to construct a dense cloud from the depth maps?
Another post on this forum references a "Build Depth Map" option in the photo pane context menu, but I can't seem to find this option. Has it been removed?

The reason I ask is that my machine seems to be capable of "Reconstructing Geometry" (aka building depth maps) during the Build Dense Cloud process, but then it runs out of memory when it actually goes to make a point cloud from the depth maps. I'm running on Linux, so the OOM killer kills the process when "accumulating data..." for too many cameras.

If I could build the depth maps separately, I could later construct dense clouds in smaller tiles to avoid running out of memory. This would be much faster than tiling the whole Build Dense Cloud process (i.e. reconstructing geometry + constructing the point cloud). I tested this out.

Tiling the whole Build Dense Cloud process with a small data set on medium quality took about 30 minutes. If instead I start the Build Dense Cloud process for the entire project area and cancel it after the depth maps have been created, I can reuse those depth maps to tile just the dense cloud construction step. This method only takes about 7 minutes, but I have to cancel the Build Dense Cloud process right after it finishes constructing depth maps or I risk crashing PhotoScan and losing everything. That could be days' worth of processing for large projects!

General / Tiled Processing by Changing Region Extent
« on: November 10, 2014, 08:56:01 AM »
I'm attempting to generate a dense cloud with 1000 large photos (~112 MP each) from a large format metric camera (Z/I DMC-1). My machine has 48 GB of RAM, so it's not possible to generate a dense cloud on High or Ultra High with this many images as a single large chunk or even a handful of smaller chunks. I would probably have to create more than 50 chunks to process this many large images, so I'm considering an alternative tiling process.

This process would involve the following steps:
1. Decrease the size of the region bounding box to cover an area that could be handled with my available memory (maybe 5 km^2)
2. Identify and align this small region to the geographic northwest corner of the bounding box for the whole project
3. Generate a dense cloud with Quality = High or Ultra High for this small region
4. Export the dense cloud to LAS
5. Move the region over to the next tile ( ~5km east) and repeat the process until all tiles have an exported LAS
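The tile loop in steps 1-5 could be sketched in plain Python like this. The geometry below just walks a window across a hypothetical project extent; the PhotoScan-specific parts (setting chunk.region, Build Dense Cloud, LAS export) are left as comments, since the exact API calls are an assumption I haven't verified:

```python
# Sketch of the tiling idea above: walk a tile_size x tile_size window
# west-to-east, north-to-south across the project extent, starting at
# the NW corner. Extents and tile size are hypothetical.

def tile_extents(xmin, ymin, xmax, ymax, tile_size):
    """Yield (txmin, tymin, txmax, tymax) tiles covering the extent."""
    y = ymax
    while y > ymin:                      # start at the NW corner, move south
        x = xmin
        while x < xmax:                  # move east across the row
            yield (x, max(y - tile_size, ymin),
                   min(x + tile_size, xmax), y)
            x += tile_size
        y -= tile_size

# e.g. a 12 km x 7 km project split into ~5 km tiles
tiles = list(tile_extents(0, 0, 12000, 7000, 5000))
for t in tiles:
    # hypothetical per-tile processing:
    #   resize/move chunk.region to cover t, build the dense cloud,
    #   export the points to LAS, then continue to the next tile
    pass
print(len(tiles))  # 3 columns x 2 rows
```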

I think the two primary benefits of this approach over chunking would be (1) eliminating the need to manually break the project into chunks and (2) eliminating the duplicate points created in the overlapping areas when using multiple overlapping chunks. The obvious downside is that I couldn't generate a complete mesh or orthomosaic within PhotoScan, but I have external software that I could use for those steps.

So my questions are:
Are there any other drawbacks to this approach that I'm overlooking?
Is chunking a better approach if all I want is a dense cloud?
Has anyone tried this process or implemented it in a python script that they'd be willing to share?

Collecting ground control points from georeferenced images of known accuracy directly within PhotoScan could greatly reduce the time and difficulty of collecting ground control for images.

It's often the case that people don't have the time or resources (i.e. an expensive GPS) to collect good ground control, but they may have access to existing orthophotos or lidar intensity images of high accuracy that would be sufficient for ground control. Control points can be collected from this imagery in existing GIS software and then imported into PhotoScan. But why not incorporate this functionality directly in PhotoScan?

It would provide two primary advantages over a GIS+PhotoScan workflow.

1. Simplification of adjustments to control point locations: If ground control points are collected in a GIS program and then imported into PhotoScan, they need to be re-imported every time you want to adjust the location of a control point on the control image. If collection of ground control from existing imagery were implemented in PhotoScan, it would greatly increase the speed and ease of this process.

2. Automated control point collection: Feature point detection is presumably already incorporated into PhotoScan as part of the SfM process. Why not use the existing algorithm to auto-detect potential ground control point locations in the control/reference image and in the images you want to georeference? It's baffling to me that all photogrammetry programs have automatic tie point generation but none have automatic control point generation. PhotoScan could be the first to implement this feature, which would speed up the ground control collection process immensely.

PhotoScan is amazing at simplifying the photogrammetric process with its self-calibrating bundle adjustment procedure, but collecting control points is STILL the most time-consuming and difficult part of the photogrammetric workflow. I think PhotoScan has the potential to make this part of the process significantly easier, which would make it a very attractive alternative to other photogrammetry programs.

Hi All,
I'm trying to figure out a good workflow for aligning and generating orthomosaics of aerial photographs taken in the 1930s and 1940s. The photos were scanned at 14 microns using a photogrammetric scanner, but some potential problems with these photos include:
1. A lack of true fiducials - They have 4 corner marks but they don't look like true fiducials to me.
2. Poor film condition - some noise, scratches, variations in illumination and possible distortions.
3. Varying orientation - the corner marks don't line up at the same pixel locations in each photo, so the location of the principal point is likely very different in each scan.

I've seen that a few other people on this forum have experience working with historic aerial images so I'm hoping to get advice on a few points given the issues with these images:
1. Can these photos be treated as coming from a single metric camera and calibrated as a group, or would I be better off splitting the groups so each image is calibrated separately?

2. If I decide to split the images, would I need to provide a better initial estimate of the principal point for each image? For example, by using the intersection of the lines connecting opposite corner marks.

3. Would a decent estimate of the principal point and focal length be sufficient for good alignment or should I do additional image pre-processing?

4. During "Optimize Alignment" (after ground control) should skew and k4 be fit? I know skew is commonly 0 for images from modern digital cameras and k4 is probably negligible, but I'm not sure whether those assumptions are appropriate for the images I'm working with.

I've attached a reduced resolution image of one scan to give a sense of what I'm working with. My goal is to generate orthomosaics of these photos that match a more recent orthomosaic (2011) with less than 5m RMSE. Any advice is appreciated.
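To illustrate the principal point estimate from question 2: the two diagonals through opposite corner marks can be intersected with basic line geometry. A minimal sketch, using hypothetical pixel coordinates for the four corner marks:

```python
# Sketch: estimate the principal point as the intersection of the two
# diagonals joining opposite corner marks. Corner coordinates below are
# hypothetical pixel positions measured in one scan.

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1,p2 with the line through p3,p4
    (assumes the lines are not parallel)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# corner marks: upper-left, lower-right, upper-right, lower-left
ul, lr, ur, ll = (10, 12), (7990, 8010), (8005, 9), (8, 8002)
print(line_intersection(ul, lr, ur, ll))  # principal point estimate (px)
```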

