Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - StevenF

Pages: [1] 2 3
Feature Requests / Export Seamlines
« on: January 15, 2016, 03:16:45 AM »
Is it possible to export the seamlines generated by PhotoScan?
Is there any way to access these seamlines through the python API?

I'd like to have a shapefile of where different images are used in the orthomosaic, and then possibly edit this shapefile and re-import it as 'shapes' to have greater control over orthomosaic generation.

Hi jmos,
There may be some compatibility issues between how PhotoScan writes Inpho project files and how Imagine reads them. This is the case with another photogrammetry package, SURE, as noted at the top of page 6 of this manual. I haven't tried using Inpho project files and no longer have access to StereoAnalyst, so I can't say for certain whether the Inpho files work correctly.

You may want to do a couple quick checks to see if the imported parameters are correct. One way to do this would be to import cameras by the method I specified above, then compare measurements and locations made with both methods. Another way would be to compare locations and measurements made with StereoAnalyst to GPS field measurements or measurements made in another orthomosaic covering the same area.

General / Re: LAS intensity?
« on: June 18, 2015, 04:32:27 PM »
Thanks Alexey. When the master channel is set to default, the intensity value isn't consistently the same as any of the LAS R-G-B values for the point, and it doesn't seem to be a combination of them. So I'm still a bit confused about what 'default' means, but I can reprocess with a particular channel to get what I want.

General / LAS intensity?
« on: June 17, 2015, 03:16:01 AM »
What is the source of the Intensity value for points exported in LAS format?

My images are four channels (R-G-B-NIR) and I performed dense cloud generation with the master channel set to default. I'd like to calculate point intensity metrics over an area (min, max, mean, etc of intensity) so I need to know where this intensity info is coming from. It would be really nice if there were a setting to choose a particular channel for the intensity value since I'd like to use the NIR channel.
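For the area metrics I mentioned (min, max, mean, etc.), the statistics themselves are simple once the intensity values are extracted from the LAS points. A minimal pure-Python sketch — the `sample` values are hypothetical, and in practice you'd read intensities from the LAS file with a library of your choice:

```python
# Summary statistics (min, max, mean) for LAS point intensity values.
# The sample list below is hypothetical data for illustration; real
# intensities would come from the exported LAS file.

def intensity_metrics(intensities):
    """Return min, max, and mean of a sequence of intensity values."""
    return {
        "min": min(intensities),
        "max": max(intensities),
        "mean": sum(intensities) / len(intensities),
    }

sample = [120, 340, 260, 500, 180]
print(intensity_metrics(sample))
```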

Hi Bigben,
You're using PTGui to align the fiducials of images taken with the same camera and lens, correct? Could you provide a brief overview of your workflow for this step? I haven't used PTGui before, but I might try to implement a similar step in my own work with historical images.

Since you're working with non-photogrammetric scanners/cameras, you might want to consider calibrating the scanner so you can remove its distortions before correcting for the camera lens distortion in PhotoScan. It might also save you some time to develop an auto-advance mechanism if you're working with rolls of film. Someone I'm working with is implementing these steps into their workflow for processing historical imagery.

General / Re: Point cloud density over trees
« on: May 22, 2015, 01:03:28 AM »
I get the same effect. I doubt it's because of any precision requirement in the software. The quality setting specifies the image resolution used during dense matching: Ultra = original resolution, High = 1/4 of the original pixel count, Medium = 1/16, etc. At a fine scale, features may appear very different in two images, but when you coarsen the image those features are likely to appear more similar in the separate images because you've reduced the local variability.
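That quality-to-resolution relationship can be sketched in a few lines. This assumes the levels map to downscale factors as described above (each step down divides the pixel count by 4, i.e. halves the linear resolution, so the effective ground sampling distance doubles per level); the level names follow the PhotoScan dialog:

```python
# Effective ground sampling distance (GSD) per dense cloud quality
# setting, assuming each step down halves the linear image resolution
# (Ultra = original, High = 1/4 pixel count, Medium = 1/16, ...).

QUALITY_LEVELS = {"Ultra": 0, "High": 1, "Medium": 2, "Low": 3, "Lowest": 4}

def effective_gsd(base_gsd, quality):
    """Linear resolution halves per level, so GSD doubles per level."""
    return base_gsd * 2 ** QUALITY_LEVELS[quality]

# e.g. 10 cm imagery matched at Medium quality behaves like 40 cm imagery
print(effective_gsd(10.0, "Medium"))  # 40.0
```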

The result of using a lower quality setting on rough surfaces like tree canopies is a lower point density (because there are fewer pixels to match), but you often get greater coverage of points (because it's easier to find matches). Higher quality settings are likely to yield more accurate point locations, which can be important if you want accurate height measurements when working with coarse imagery. For UAV applications where the approximate ground sampling distance (GSD) is less than 10 cm this is probably less important, but when working with imagery that has a GSD >30 cm I've found my height measurements for trees are significantly more accurate with Ultra quality.

What still baffles me is that I often get better coverage with Aggressive filtering. I wish someone would explain the filtering settings to me more clearly.

General / Re: Generate pointcloud of 266 Ultracam X photos
« on: April 24, 2015, 04:26:58 PM »
Alexey provided me with a tiling script that helped me generate a dense cloud with 1000 images and only 48 GB of RAM, but it took several weeks. The script is available in one or two places on this forum, but I have an updated version with a few useful modifications. Message me if you're interested.
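The core idea of the tiling approach is to split the chunk's bounding region into a grid and process one sub-region at a time. Here's my own minimal illustration of that geometry, not Alexey's script — in PhotoScan you would assign each tile to `chunk.region` (a `PhotoScan.Region` with `.center` and `.size`) before building the dense cloud for that tile:

```python
# Split a region (center and size as x, y tuples) into an n x n grid of
# sub-regions. Pure-Python geometry only; inside PhotoScan each tile
# would be written into chunk.region before buildDenseCloud.

def tile_regions(center, size, n):
    """Return n*n tiles covering the region, each with center and size."""
    cx, cy = center
    sx, sy = size
    tx, ty = sx / n, sy / n            # tile dimensions
    x0, y0 = cx - sx / 2, cy - sy / 2  # lower-left corner of full region
    tiles = []
    for i in range(n):
        for j in range(n):
            tiles.append({
                "center": (x0 + (i + 0.5) * tx, y0 + (j + 0.5) * ty),
                "size": (tx, ty),
            })
    return tiles

print(len(tile_regions((0.0, 0.0), (100.0, 100.0), 4)))  # 16 tiles
```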

General / Re: Satellite Image WV3 - Working with PS?
« on: April 17, 2015, 06:24:54 PM »
WV3 is a line scanning / pushbroom sensor. The imaging geometry is different from a frame sensor such as a digital camera. 

I don't think PhotoScan would be able to solve for sensor location because the sensor is constantly moving as it collects each line of data. The lens geometry is also completely different from a frame camera's. Usually you need to work with software that can handle Rational Polynomial Coefficients (RPCs), which are delivered with the images and provide a simple approximation of the satellite ephemeris and optics. Or you need software with a rigorous sensor model that can handle the actual sensor parameters rather than an equation that approximates them.
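To illustrate what an RPC model looks like: image coordinates are expressed as ratios of polynomials in normalized ground coordinates. Real RPCs use 20-term cubic polynomials per numerator and denominator; the sketch below truncates these to first order, with made-up coefficients, purely to show the structure:

```python
# Structure of the Rational Polynomial Coefficient (RPC) sensor model:
# normalized image row (or column) = P_num(X, Y, Z) / P_den(X, Y, Z).
# Real RPCs use 20-term cubic polynomials; this uses first-order
# polynomials and hypothetical coefficients for illustration only.

def poly(coef, X, Y, Z):
    """Truncated first-order RPC polynomial: a0 + a1*X + a2*Y + a3*Z."""
    a0, a1, a2, a3 = coef
    return a0 + a1 * X + a2 * Y + a3 * Z

def rpc_row(num, den, X, Y, Z):
    """Normalized image row as a ratio of two polynomials."""
    return poly(num, X, Y, Z) / poly(den, X, Y, Z)

# Hypothetical coefficients and a normalized ground point
row = rpc_row((0.1, 0.5, -0.2, 0.05), (1.0, 0.01, 0.0, 0.0), 0.2, -0.3, 0.1)
print(row)
```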

I would be extremely surprised if WV3 worked in PhotoScan which is only designed to handle frame images, but if you can get some sample images then by all means give it a shot.

General / Re: Export Map of Image Overlap
« on: March 04, 2015, 06:42:43 AM »
I haven't tried it myself, but I believe this forum topic has a script which will export image footprints:

There are a few things that people implement with scripts which I think should just be standard features of the PhotoScan GUI. This is one of them.

General / Re: Historic Aerial Imagery Advise
« on: February 03, 2015, 10:25:36 AM »
Hi Rossta,
I've also been working with many (hundreds to thousands of) historic aerial images that have some of the same problems as your images (e.g. different sizes, no fiducials, writing, etc.). I can make a few suggestions, but I'm also still trying to figure out the best approach.

With regard to the artifacts in your ortho, I'd recommend looking at your mesh and point cloud for sharp breaks and discontinuities that could be manually edited to be smoother. Also, look for large gaps in the point cloud. A common problem I've had with poor quality imagery is a lack of completeness in the point cloud, particularly when using High or Ultra quality settings. I tend to get the most complete coverage with Low quality and Aggressive filtering in dense cloud generation, but these settings will produce lower point densities overall and probably lower accuracy.

If the mesh and point cloud look OK, then you may want to consider mosaicking the individual ortho frames in another program that allows seamline editing. I've seen these artifacts before even in areas where the mesh is smooth, and I think it has to do with low overlap and the way PhotoScan selects an image to map onto the mesh. You can export individual orthos by disabling every camera but one before exporting. If you have many photos you'll want to implement this in the python API. Some other GIS and remote sensing programs like ArcGIS are capable of mosaicking images with seamlines that can be edited.
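The per-camera export loop might look something like this. The API names (`camera.enabled`, `chunk.exportOrthophoto`) are from the PhotoScan 1.x python reference as I recall — check them against your version's API docs before relying on this:

```python
# Sketch of exporting one orthophoto per camera by disabling all other
# cameras before each export. PhotoScan API names are from memory of
# the 1.x reference; verify against your version.

def ortho_path(out_dir, label):
    """Output filename for a single-camera ortho (pure helper)."""
    return "%s/ortho_%s.tif" % (out_dir, label)

def export_individual_orthos(chunk, out_dir):
    # imported here so the helper above works outside PhotoScan
    import PhotoScan
    enabled = {c: c.enabled for c in chunk.cameras}  # remember states
    for camera in chunk.cameras:
        for c in chunk.cameras:
            c.enabled = (c is camera)                # keep only this camera
        chunk.exportOrthophoto(ortho_path(out_dir, camera.label))
    for c, state in enabled.items():                 # restore states
        c.enabled = state

print(ortho_path("/project/orthos", "img_001"))
```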

Here are a few other suggestions for changes to your workflow:
1. Use ground control points: Instead of estimating the coordinates of the image center points for referencing, collect ground control points (GCPs) at identifiable locations in the photos. You can get the most accurate GCPs with a survey-grade GPS, but getting coordinates from existing orthomosaics (like Google Earth) can yield accuracies that might be sufficient depending upon your purpose. Your existing ortho was off by around 500 m in some places, but you commented on the artifacts instead, so I'm guessing positional accuracy isn't that important to you.

2. Optimize alignment: Once your ground control errors are reasonable, you can often improve accuracy by optimizing alignment, which uses the GCPs and tie points (i.e. the sparse cloud) to refine the individual camera locations and camera calibration parameters. Without 'optimizing', I think ground control is merely used to translate, rotate, and scale the whole set of cameras together.

3. Alignment via referencing: If you ARE going to collect image centers for referencing then you might as well use them during alignment. Load the reference data for each image before alignment and use the Reference pair preselection setting. It should significantly speed up alignment when working with a large number of images since PhotoScan will know to only look for tie points between adjacent photos. You may find that using reference data during alignment is necessary when you step up to big projects with poor quality images.

4. Camera Calibration: If you know camera calibration info like the focal length, pixel size, or a rough guess of the principal point location, then enter it as initial information in the Camera Calibration dialog.
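Suggestions 2 and 3 can be scripted too. Here's a rough sketch of loading reference data, aligning with Reference pair preselection, and then optimizing; the API names (`loadReference`, `matchPhotos`, `alignCameras`, `optimizeCameras`) are from the PhotoScan 1.x reference as I recall, so verify them and the `loadReference` format argument against your version's docs:

```python
# Sketch of suggestions 2 and 3: load approximate camera coordinates,
# align with Reference pair preselection, then optimize against GCPs.
# PhotoScan API names are from memory of the 1.x reference; verify them.

def align_with_reference(chunk, ref_path):
    import PhotoScan  # only available inside PhotoScan's python console
    # load approximate camera coordinates (e.g. scanned image centers)
    chunk.loadReference(ref_path, "csv")
    # restrict tie point search to photos with nearby reference locations
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.ReferencePreselection)
    chunk.alignCameras()
    # refine camera locations and calibration using GCPs and tie points
    chunk.optimizeCameras()
```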

I'm also still trying to figure out the best approach for big projects with historic imagery so please share your results if you find a better approach, specific settings, image prep, etc. Good luck.

If your check point RMSE in PhotoScan is sufficiently low I wouldn't expect much improvement in LPS.
I get worse results for auto tie points in LPS even with a good starting orientation, so it requires more work to check the points and remove or fix erroneous ones. It's also more difficult to place GCPs in LPS. However, there are some more options for how it arrives at a solution: you can adjust the relative accuracy of different parameters and choose different solution methods, but I haven't experimented with these options much.

I've imported orientation from PhotoScan into LPS to use StereoAnalyst and haven't had any problems. However, I was using a camera without distortion or principal point (pp) offset parameters, which simplifies the process. I'll run through my process, but it doesn't seem much different from what you've tried so far.
1. In PhotoScan > Export Cameras as Omega Phi Kappa. Then edit the file to contain the full image path and column with a unique ID number.
2. Camera Calibration > Adjusted Tab> Save as Australis
3. Open Imagine and create a new Photogrammetric project
4. Model Setup> Model Category = Frame, Geometric Model = Digital Camera (or Frame if using scanned film)
5. Block Property Setup > Reference Coord = Same as the exported OPK file
6. Frame Specific Info > Rotation system = OPK, Angle = Degrees, Photo Direction = Z-axis, Flying height = rough average
7. New Camera... >
8. Camera Info > My camera didn't have pp offset or distortion parameters so I didn't find it necessary to enter this info or use the Australis model, but in your case here's what I'd try:
     a. Check use extended camera model and click edit extended parameters.
     b. Enter parameters from exported Australis calibration.
     c. Save the camera model
9. Import Exterior Orientation info from the edited OPK file. I actually had a lot of problems with LPS crashing during this step, but eventually figured out a sequence that wouldn't crash it.
10. Wait for image loading and building pyramids.
11. Click Interior Orientation.
     a. Interior Orientation tab > set pixel size in microns and apply to all frames
     b. Exterior Orientation tab > set status to Fixed for all and apply status to all active frames
12. Open StereoAnalyst and view a pair of images
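For step 1's file edit, a small script saves a lot of tedium when there are many frames. This is a sketch under the assumption that the exported OPK file has the label as its first whitespace-separated field (label X Y Z omega phi kappa) — adjust the column layout to match your actual export:

```python
# Sketch of step 1: prepend a unique ID and expand the image label to a
# full path in each line of the PhotoScan OPK export. Assumes the label
# is the first whitespace-separated field; adjust to your file's layout.

def edit_opk_line(line, image_id, image_dir):
    """Return the OPK line with 'ID full_path' in front of the pose data."""
    fields = line.split()
    label, rest = fields[0], fields[1:]
    full_path = "%s/%s" % (image_dir, label)
    return " ".join([str(image_id), full_path] + rest)

line = "img_001.tif 500000.0 4500000.0 1200.0 0.5 -0.3 90.1"
print(edit_opk_line(line, 1, "C:/project/images"))
```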

If that process doesn't work, then there could be some difference between how PhotoScan exports the camera calibration parameters and how they're interpreted by LPS. You may want to avoid using the extended parameters and just use the principal point offsets and the k1, k2, and k3 coefficients on the radial distortion tab. Considering my project worked, I doubt there's a difference in how the programs interpret the rotation angles (OPK).

General / Re: Agisoft Vs Competitors
« on: January 05, 2015, 06:13:18 PM »
I recently tried Pix4D mapper with a few small test sets. It failed to align my images every time even with the high quality sets that had initial exterior orientation and camera calibration. That was an instant deal killer for me, so I can't comment on later steps of their process such as dense cloud or ortho generation.

I also have some experience with Imagine Photogrammetry and SocetGXP. Triangulation is much, much faster and easier in PhotoScan than in either of these programs. With regard to dense point clouds, Imagine EATE will occasionally produce very noisy dense point clouds for some image pairs, but the SGM extension is probably much better. SocetGXP produces excellent DSMs and DTMs, but I personally prefer point cloud output, which it doesn't offer. I actually prefer ortho generation in both of these programs to PhotoScan, since you can choose to use an imported DTM, they have some options for radiometric correction, and they both have good seamline editing tools.

Overall I prefer PhotoScan to any other photogrammetry program I've used. My main gripe with the program is the memory limitation issues that come up when dealing with large sets of images, which you may run into for survey work. Other software packages will work with the available memory to get the job done, but in PhotoScan YOU have to structure your project into chunks or tiles to fit within your available memory. If they fixed this design flaw and added some ortho generation options I'd be happy as a clam, but it's still the best out there in my opinion.

Feature Requests / Brightness and Contrast
« on: December 24, 2014, 08:11:06 AM »
I think it would be useful to have simple brightness and contrast adjustments for viewing photos.

I'm working with a set of 16-bit images that only have values around 0-1000 (out of 65535) so they show up as black in the viewer. I had to rescale and apply a 2 percent stretch to all the photos in other software so I could see them when placing ground control and checking the sparse cloud and model.

I thought it would be fine to use the rescaled 8-bit images for the remaining steps in the workflow. However, my dense clouds were plagued with large gaps, so I tried using the 16-bit images in a few test areas. The difference was night and day. Dense clouds generated from the original 16-bit images almost completely filled the gaps that existed in the dense clouds based on the 8-bit images. So a simple brightness and contrast slider could be very helpful to avoid this back-and-forth between 8-bit and 16-bit images.
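For anyone wanting to reproduce the stretch I applied: clip at the 2nd and 98th percentiles, then map linearly to 0-255. A pure-Python sketch on a flat list of pixel values (with real imagery you'd use numpy or your raster package of choice):

```python
# 2-percent linear stretch to 8-bit: clip values at the 2nd and 98th
# percentiles, then rescale the remaining range to 0-255. Operates on a
# flat list of pixel values for illustration.

def percent_stretch(values, pct=2.0):
    s = sorted(values)
    n = len(s)
    lo = s[int(n * pct / 100.0)]                      # lower percentile
    hi = s[min(n - 1, int(n * (100.0 - pct) / 100.0))]  # upper percentile
    span = max(hi - lo, 1)
    out = []
    for v in values:
        v = min(max(v, lo), hi)                       # clip to stretch range
        out.append(int(round((v - lo) * 255.0 / span)))
    return out

# 16-bit values clustered around 0-1000 become full-range 8-bit
print(percent_stretch([0, 100, 500, 900, 1000]))
```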

General / Re: dense point clouds and depth filtering
« on: December 22, 2014, 02:15:46 AM »
I'm having the exact same problem using images taken with a Z/I DMC-1. I don't notice any difference in image quality or texture between areas that generate quality points and areas that have large gaps. What really baffles me is that 'aggressive' filtering produces more points with fewer gaps, while 'mild' filtering produces fewer points with larger gaps. In the beta version (1.1) you can set depth filtering to disabled, but it produces far too much noise to deal with.

My current solution to this problem is to fill the gaps by merging points from different quality settings. I generate dense point clouds with different quality settings and export each one to LAS. Then I merge the LAS files and thin with a grid which selects the highest point in a grid cell of a given size. I select the highest point because I'm interested in tree heights and the lower quality settings tend to smooth over tree tops.
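The grid thinning step I described can be sketched as follows: bin the merged points into square cells and keep only the highest point in each cell, which preserves the tree tops that lower quality settings tend to smooth over. A minimal pure-Python version (real workflows would read and write LAS with a point cloud tool):

```python
# Grid thinning for a merged point cloud: keep only the highest point
# (max z) within each square cell of a given size. Preserves tree tops
# when merging clouds generated at different quality settings.

def thin_highest(points, cell):
    """points: iterable of (x, y, z); returns the highest point per cell."""
    best = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))        # cell index
        if key not in best or z > best[key][2]:
            best[key] = (x, y, z)
    return list(best.values())

pts = [(0.2, 0.3, 5.0), (0.8, 0.1, 9.5), (3.5, 0.4, 2.0)]
print(thin_highest(pts, 1.0))
```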

Some other software such as Imagine Photogrammetry and SocetGXP perform hierarchical image matching and have the ability to keep points matched at lower pyramid levels. It would be nice if PhotoScan implemented a similar functionality. The point clouds generated by PhotoScan are pretty impressive for how fast it works though.
