Show Posts


Messages - StevenF

So I finally got around to doing my calibration and pre-processing test. I've attached the results with a short write-up.

In short, the different calibration settings and the pre-processing didn't influence the orthomosaic accuracy by that much. This suggests that aligning fiducials and estimating initial camera calibration may be unnecessary for accurate alignment of historic aerial photos. However, I suspect that part of the reason for the high accuracy is that these scans are actually pretty good with the fiducials at fairly similar locations (~20 pixels diff) in all the photos.

Collecting ground control points from georeferenced images of known accuracy directly within Photoscan could greatly reduce the time and difficulty of collecting ground control for images.

It's often the case that people don't have the time or resources (e.g. an expensive GPS) to collect good ground control, but they may have access to existing orthophotos or lidar intensity images of high accuracy that would be sufficient for ground control. Control points can be collected from this imagery in existing GIS software and then imported into Photoscan. But why not incorporate this functionality directly in Photoscan?

It would provide two primary advantages over the current GIS+Photoscan workflow.

1. Simplified adjustment of control point locations: If ground control points are collected in a GIS program and then imported into Photoscan, they need to be re-imported every time you want to adjust the location of a control point on the control image. If collection of ground control from existing imagery were implemented in Photoscan, it would greatly increase the speed and ease of this process.

2. Automated control point collection: Feature point detection should already be incorporated into Photoscan as part of the SfM process. Why not use the existing algorithm to auto-detect potential ground control point locations in the control/reference image and the images you want to georeference? It's baffling to me that all photogrammetry programs have automatic tie point generation but none have automatic control point generation. Photoscan could be the first to implement this feature, which would speed up the ground control collection process immensely.

Photoscan is amazing at simplifying the photogrammetric process with its self-calibrating bundle adjustment procedure, but collecting control points is STILL the most time-consuming and difficult part of the photogrammetric workflow. I think Photoscan has the potential to make this part of the process significantly easier, which would make it a very attractive alternative to other photogrammetry programs.
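To sketch the GIS-side half of the workflow I'm describing: picks made on a georeferenced reference ortho can be converted from pixel to map coordinates with the standard six-parameter geotransform and written out as a CSV that Photoscan's Reference pane can import. The geotransform values, point labels, and column order below are just illustrative assumptions, not anything Photoscan-specific.

```python
# Convert control-point pixel picks from a georeferenced reference ortho
# into map coordinates for import as marker reference values.
# Uses the GDAL-style six-parameter geotransform convention:
#   X = gt[0] + col*gt[1] + row*gt[2]
#   Y = gt[3] + col*gt[4] + row*gt[5]

def pixel_to_map(col, row, gt):
    """Map a pixel position (col, row) to projected (X, Y) using geotransform gt."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical 1 m resolution ortho, north-up, origin at (500000, 4200000).
gt = (500000.0, 1.0, 0.0, 4200000.0, 0.0, -1.0)

# Points picked on the reference ortho: (label, col, row, elevation).
picks = [("gcp1", 120.5, 340.5, 1512.3),
         ("gcp2", 980.0, 75.25, 1498.7)]

lines = []
for label, col, row, z in picks:
    x, y = pixel_to_map(col, row, gt)
    lines.append("%s,%.3f,%.3f,%.3f" % (label, x, y, z))

print("\n".join(lines))
```

Point-and-click collection of these picks inside Photoscan would just remove the round trip through this kind of script.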

Feature Requests / Re: Aerial Footprints
« on: September 20, 2014, 11:55:54 PM »

Hi tenboair,
I just saw this same problem in orthomosaics I'm generating from a set of historical aerial photographs, and I think the issue can also occur in areas with "relatively" smooth meshes/models.

In my case I think what is happening is that the photos have poor overlap, and the sharp breaks occur on mesh polygon faces near the center of overlap between two adjacent photos. It appears that during the mosaic process Photoscan chooses which image to map onto each polygon face based on which image's nadir point is closest to the polygon. So in areas near the center of overlap, Photoscan may alternate between two or more images for adjacent polygon faces, which causes a blocky appearance in the resulting orthomosaic.

This method of mosaicking is unusual for photogrammetry programs, and I would guess this is why you can't obtain seamlines from Photoscan. Many photogrammetry programs draw a single seamline between two overlapping photos during the mosaic process, so you may end up with an edge along the seamline, but you don't get these odd blocky polygon artifacts.

Of course I'm not positive that this is what photoscan is doing behind the scenes. I'm only guessing based on observations I've made so maybe one of the developers can confirm this hypothesis.
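To make the guess concrete, here's a toy sketch of the selection rule I'm hypothesizing (this is my speculation, not PhotoScan's actual code): each face is textured from the photo whose ground nadir point is nearest to the face centroid, so faces straddling the midline between two photo centers flip back and forth between images.

```python
import math

# Toy model of the hypothesized per-face image selection: texture each mesh
# face from the photo whose nadir point is closest to the face centroid.
# Near the midline between two photo centers, small centroid differences
# flip the choice, which would produce the blocky artifacts.

def nearest_camera(centroid, nadirs):
    """Index of the camera whose ground nadir point is closest to the centroid."""
    return min(range(len(nadirs)),
               key=lambda i: math.dist(centroid, nadirs[i]))

# Two hypothetical photo nadir points 100 m apart; midline at x = 50.
nadirs = [(0.0, 0.0), (100.0, 0.0)]

# Face centroids straddling the overlap midline.
faces = [(49.0, 0.0), (50.5, 0.0), (49.8, 0.0), (51.0, 0.0)]
choices = [nearest_camera(c, nadirs) for c in faces]
print(choices)  # alternates between image 0 and image 1 near the midline
```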

Anyway, one way around this problem is to export individual orthophotos and do the mosaicing in another program like ArcMap.  You can export individual orthos by disabling all but one camera and then exporting an ortho.

I think the images are 9"x7". I've seen reference to this film size before but I can't find out what cameras used it. If I can get good results then I guess it doesn't matter what the camera is, but I would like to know just out of curiosity.

The obvious-looking mark in the upper left corner is one of the corner marks. The other corners have similar marks, but they aren't very visible. I'm going to try using the intersection of the lines connecting opposite corners to get an initial estimate of the principal point, because it's probably closer than the image center.
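The diagonal-intersection estimate is simple to script. Here's a minimal sketch; the corner-mark pixel coordinates are made up for illustration, not measurements from my scans.

```python
# Estimate the principal point as the intersection of the two diagonals
# connecting opposite corner marks.

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4 (2-D)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("lines are parallel")
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Hypothetical corner-mark pixel coordinates from one scan
# (upper-left, upper-right, lower-right, lower-left).
ul = (102.0, 95.0)
ur = (16312.0, 110.0)
lr = (16300.0, 12705.0)
ll = (90.0, 12690.0)

pp = line_intersection(ul, lr, ur, ll)  # diagonals UL-LR and UR-LL
print("principal point estimate: (%.1f, %.1f)" % pp)
```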

My plan is to test whether preprocessing and splitting/grouping cameras really matters. I'll compare the RMSE of checkpoints in the resulting orthomosaics for each method using a small set of 4 images. I'll post up the results when I'm done.

No, unfortunately I don't have any info about the cameras, but I've been trying to dig some up. The images were taken by the Soil Conservation Service, and I'm guessing they used a Fairchild camera with an 8-1/4" focal length lens. You can see in the posted image that they used "Eastman Topographic Safety" film.

The goal is to look at changes in land cover. I'm going to try generating a DSM but I don't have high hopes for anything of good quality, and I don't have any field data from that time period to verify against. A good orthomosaic is all I really need.

I'm now thinking of doing a run with splitting the cameras and no pre-processing. If check points show poor accuracy then I'll go back and try adding some steps that might improve the results.

I didn't have that problem when working with my current data set but you might want to try looking at the point classification to see the results. You can do that by clicking the "Dense Cloud Classes" icon at the top.

As far as I've found there's no legend or even an explanation of colors for classes anywhere, but you can find out the colors yourself by selecting a group of points, going to Tools > Assign Class..., and seeing what the color changes to for different classes. It looks like ground points are brown, so if you have a good distribution of ground points then you should be able to generate a mesh (DTM) from them. It's possible the ground classification didn't work well enough to generate a mesh from, but Alexey or one of the more experienced users might know the cause of your error.

I've only recently tried doing point classification in Photoscan so I'm probably not the best person to ask with regard to appropriate settings, but you might want to check out Agisoft's tutorial on the subject: Dense Cloud Classification & DTM Generation with Agisoft PhotoScan Professional

It looks like their ground filtering algorithm is pretty similar to progressive TIN densification, which I think was originally described in this paper by Axelsson: DEM Generation from Laser Scanner data using Adaptive TIN models

The settings you choose will probably depend on the 2-D size of the objects you want to remove, as well as their height and slope relative to the ground. You can probably find what settings work best for your scenario by working with a subset of the data and trying different parameters. I'd also consider searching this forum and the internet in general for recommended settings. I think lasground actually uses a similar algorithm, so you might want to search the LAStools forum too. Good luck.
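To show how those two kinds of parameters interact, here's a deliberately over-simplified toy in the spirit of progressive-TIN filters (the real algorithms densify an actual TIN; this just seeds each coarse cell with its lowest point and keeps points within a height threshold of the local seed):

```python
from collections import defaultdict

# Toy ground filter: seed with the lowest point per coarse cell (cell size
# should exceed the footprint of the largest object you want removed), then
# classify points within a height threshold of their cell's seed as ground.
# Points are (x, y, z) in metres.

def classify_ground(points, cell=20.0, dz=0.5):
    """Return a per-point list of booleans: True if classified as ground."""
    seeds = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        seeds[key] = min(seeds[key], z)
    return [(z - seeds[(int(x // cell), int(y // cell))]) <= dz
            for x, y, z in points]

pts = [(1, 1, 100.0), (5, 3, 100.2),    # ground
       (8, 2, 112.0),                   # tree canopy, 12 m up
       (25, 4, 101.0), (28, 6, 101.3)]  # ground in the next cell
print(classify_ground(pts))
```

If the cell is smaller than a tree crown, the seed lands in the canopy and the crown gets classified as ground, which is exactly the failure mode to tune against.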

Hi All,
I'm trying to figure out a good workflow for aligning and generating orthomosaics of aerial photographs taken in the 1930s and 1940s. The photos were scanned at 14 microns using a photogrammetric scanner, but some potential problems with these photos include:
1. A lack of true fiducials - They have 4 corner marks but they don't look like true fiducials to me.
2. Poor film condition - some noise, scratches, variations in illumination and possible distortions.
3. Varying orientation - the corner marks don't line up at the same pixel locations in each photo, so the location of the principal point is likely very different in each photo.

I've seen that a few other people on this forum have experience working with historic aerial images so I'm hoping to get advice on a few points given the issues with these images:
1. Can these photos be treated as from a metric camera and calibrated as a group or would I be better off splitting the groups so each image is calibrated separately?

2. If I decide to split the images, would I need to provide a better initial estimate of the principal point for each image? Like using the intersection of lines connecting opposite corner marks.

3. Would a decent estimate of the principal point and focal length be sufficient for good alignment or should I do additional image pre-processing?

4. During "Optimize Alignment" (after ground control) should skew and k4 be fit? I know skew is commonly 0 with images from modern digital cameras and k4 is probably negligible, but I'm not sure if the parameters would be appropriate for the images I'm working with.
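On question 4, a quick back-of-envelope check of why I suspect k4 is negligible. As I understand it, the Brown radial model scales image points by 1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8 (r in normalized focal-plane units); the coefficients below are made up but typical in magnitude:

```python
# Contribution of the k4 term in the Brown radial distortion model.
# Coefficients here are hypothetical, chosen only to show scale.

def radial_scale(r, k1, k2, k3, k4=0.0):
    """Radial scale factor 1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8."""
    r2 = r * r
    return 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3 + k4 * r2 ** 4

k1, k2, k3, k4 = -0.1, 0.02, -0.002, 0.0003
r = 0.7  # near the image corner in normalized units

with_k4 = radial_scale(r, k1, k2, k3, k4)
without = radial_scale(r, k1, k2, k3)
print("k4 changes the radial scale by %.2e" % (with_k4 - without))
```

Even at the corner the k4 term is orders of magnitude below the lower-order terms, so fitting it mostly risks overfitting noisy scans.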

I've attached a reduced resolution image of one scan to give a sense of what I'm working with. My goal is to generate orthomosaics of these photos that match a more recent orthomosaic (2011) with less than 5m RMSE. Any advice is appreciated.


I'm new to Photoscan, but I've done some other photogrammetry work and I've seen the same problem before. Orthomosaics are commonly made from a bare-earth elevation model (DTM), not a surface model (DSM), which is the initial result of building a mesh from the dense cloud without filtering. If you use the mesh created from the whole dense point cloud, then you're more likely to get sharp breaks at the edges of trees. A sparse point cloud has less detail and is less likely to have abrupt vertical changes at the edge of a tree, so you won't see as many sharp breaks in the ortho.

The real solution is to ground-classify your dense point cloud and use a mesh built from the ground points when generating your orthomosaic. You can ground-classify points in Photoscan using Tools > Dense Cloud > Classify Ground Points. Then build your mesh using only the ground points with Build Mesh > Advanced > Point Classes > Select... and uncheck everything but "Ground".

I've also exported points from Photoscan in LAS format, and then used LASground from LAStools to ground classify the points with some success. You can then generate a mesh in OBJ format from the ground points with las2tin which can be imported back into Photoscan. 

The result will be that only the ground surface and not the trees are ortho-corrected, so you'll see more tree lean. To take care of both tree lean and sharp breaks you would need to be able to create what's called a "True Ortho" which back-fills gaps where the breaks occur using pixels from another image. I'm not sure Photoscan is capable of doing that though.

Also, automated ground filtering of photogrammetric point clouds is a difficult problem. It's not nearly as easy as filtering a lidar point cloud, since lidar is capable of penetrating the tree canopy. However, you can get good results in sparse canopies where objects can be distinctly separated, i.e. the algorithm isn't tricked into thinking the tree canopy is a raised ground surface.
