

Topics - andyroo

121
General / Question about optimizing alignment
« on: March 02, 2014, 03:21:47 AM »
I was wondering if running multiple optimization steps while entering GCPs is a bad idea. With a ~30 km-long strip, the "smile" artifact from the initial alignment makes GCPs fall progressively farther from their projected locations as I move from one end of the dataset to the other, even after I initially update the georeferencing by defining a few points as I discuss here.

So far I have resisted running the optimization step until after I register all GCPs, since I figured doing it multiple times would degrade the alignment. Not knowing the intricacies of the bundle adjustment mathematics, I would love some insight into this. If I could run one optimization after entering a few GCPs along the model's length, then another when complete (possibly a third), it would speed up GCP placement a lot (I have about 200 GCPs and prefer to align them manually on each image, which works out to roughly 1000 GCP placement steps across ~1000 images).
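
For what it's worth, here is the incremental workflow I'm describing in (hypothetical, untested) script form - the optimizeCameras name is assumed from the Python API reference and may differ in this build:

Code:
# Hypothetical, untested sketch of the incremental idea, against the Python
# API (optimizeCameras naming assumed; it may differ in this build).
# After hand-placing each batch of ~25 GCPs in the GUI, run from the console:
import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.optimizeCameras()   # intermediate bundle adjustment over GCPs placed so far
# ...place the next batch of markers, optimize again, and so on;
# then one final optimizeCameras() once all ~200 GCPs are in.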

122
General / Running two different versions of Photoscan on same system
« on: February 13, 2014, 11:32:26 PM »
I thought I remembered reading a post from someone during the beta testing of 1.0 where they figured out how to run two different versions of PhotoScan on the same computer, but I searched the forums to no avail. Does anyone know if it's possible?

Oh. I just found it. Linking here to make it easier for others to find in the future (note that I haven't tried it yet):
You can install 0.9.1, copy it to a folder like PhotoScan_0.91, then uninstall it (this removes it from the PhotoScan folder) and install 1.0 in the default PhotoScan folder. You end up with two versions in two folders.

Admir

123
General / Thoughts on "static feature" matching
« on: January 30, 2014, 03:14:15 AM »
I almost posted this in feature requests.

Would it be possible to use georeferenced pointcloud sub-regions (little clouds around houses/intersections) to do a similar kind of alignment optimization that can be done after GCPs are entered? I recognize GCPs would still be needed, but here's what I am thinking:

I have large areas of my project where there are not many easy-to-identify features in the aerial imagery, and where there are, I can't get good XYZ coordinates because either there is not much sky view for GPS or I would get run over, etc. These areas have only a few GCPs in a ~5 km stretch - very bad ground control for my needs.

But there is LiDAR coverage (three flights over the last 5 years, actually), and I have identified many areas along the route with good "static features" like buildings and road intersections, where very little changes over time.

I have clipped those static areas from the LiDAR point clouds and compared them, and they match to within <5 cm vertically on roofs and roads (and I think better horizontally). I can use something like CloudCompare to do a rigid transformation to fit my cloud to the LiDAR, but it would be FANTASTIC if I could use little clips of LiDAR data as control points in this area.

For example, if there is a clearing (not that great) or a house (better), then the LiDAR first-return and PS point clouds should have roughly the same shape. Some kind of least-squares (bundle?) adjustment that holds the "good" dataset clip around the house/clearing static and optimizes camera positions to fit the less-good dataset to it would let me produce sediment-volume calculations for the active river channel. Right now I have no way to get vertical control there except briefly during low flows, until the river washes my control points away.
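
To be clearer about the kind of fit I mean, here is a minimal numpy sketch of the least-squares rigid (Kabsch) step, assuming matched point pairs are already in hand - the LiDAR clip is held fixed and the PS points are transformed onto it. The hard parts (finding correspondences and pushing the result back into camera positions) are exactly what I can't do outside of PS:

Code:
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of matched points; dst (the LiDAR clip) stays fixed.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # cross-covariance of centered points
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# usage: align PS points onto the static LiDAR clip
# R, t = rigid_fit(ps_points, lidar_points)
# ps_aligned = ps_points @ R.T + t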

It seems like this would have to be a step after GCPs, and maybe using a sparse (or sparser) cloud. Maybe even a separate tool, but I don't know how I would do it outside of PS and be able to do anything with camera positions.

Any thoughts? I was thinking of cross-posting in the VisualSFM or CMPMVS Google groups to see if anyone else has ideas, but I am not sure whether camera positions are exportable/importable between the different packages (and my ATI cards are much better than my NVidia ones).

124
Feature Requests / Region editing/clipping/masking for processing
« on: January 29, 2014, 11:50:43 PM »
Wish List item:

I really like the addition of the free-form selection tool in 0.9x, but it would be very nice (for aerial orthos and DEMs) to be able to import shapes (KMZ or SHP) to constrain processing and/or output.

Ideally I would like to be able to constrain both point cloud processing/output and mesh generation with a shapefile, but KMZ would be a good second choice. I don't know how much processing time this would save, but for me it would make it a lot easier to remove areas of dense, complex vegetation that eat up my polygon count. Clipping the dense cloud manually isn't practical because the 400-800 million points bog down my system too much.
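
For now my best workaround is clipping the exported rasters after the fact with GDAL's cutline support - a sketch below ("boundary.shp" is a stand-in for whatever clip polygon you have) - but that only constrains output, not processing:

Code:
# Post-hoc clipping of an exported orthophoto/DEM to a polygon with gdalwarp.
# This only trims the output; it saves no PhotoScan processing time.
import subprocess

subprocess.check_call([
    "gdalwarp",
    "-cutline", "boundary.shp",   # stand-in name for the clip polygon
    "-crop_to_cutline",
    "ortho.tif", "ortho_clipped.tif",
])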

Andy

125
My projects now take >1 hr to save, with four chunks having dense clouds of 400-800 million points. This can add 8-12 hours of processing time across the "Build Dense Cloud" and "Build Mesh" steps.

If there were a "Save" job (so I could put it between dense clouds and build mesh, or at the end) and/or a "Save project after batch completion" checkbox, it would give me more control over how frequently I save the project.

I like the security of the "save after each step" option, but I also like (and trust) the stability of PhotoScan enough that I am willing to take a few risks if it can potentially save me 12 hrs.
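
In the meantime, something like this from the console might approximate the behavior I want (untested sketch; the buildDenseCloud/buildModel/save names are assumed from the Python API reference and may differ by build):

Code:
# Untested sketch of the workaround I'm imagining: run the heavy steps from
# the console and save once per stage instead of after every step.
import PhotoScan

doc = PhotoScan.app.document
for chunk in doc.chunks:
    chunk.buildDenseCloud()   # dense cloud settings would go here
doc.save()                    # one save after all dense clouds
for chunk in doc.chunks:
    chunk.buildModel()        # mesh settings would go here
doc.save()                    # and one at the end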

126
General / dense/moderate model more faceted than dense aggressive?!
« on: January 25, 2014, 12:21:38 AM »
My latest model run produced unsatisfactory results (too faceted a surface) with dense/moderate settings, which I thought were the same as the default/recommended settings in 0.8.5 or 0.9.1 (I forget the details of each version, sorry).

I re-ran the model with aggressive filtering, but it clipped out or overly smoothed areas of the model that I don't want clipped.

I've attached two images showing close-up and further out views that I hope illustrate the issues I am having.

I am confused about dense cloud generation and depth filtering - especially about why the moderate filter produces a more faceted surface than aggressive (is it because I "used up" my 40 million faces elsewhere in the model in the "moderate" run?).

I continue to struggle to produce the same quality of DEM that I did in 0.8.5, and I find the settings less intuitive and less exposed for tinkering, so it's harder to optimize them for flight conditions (mainly exposure quality, which depends primarily on sun angle and cloudiness).

Also, I don't understand why the depth filtering step is applied during dense cloud reconstruction and not during meshing. Are the points generated but flagged with a filter value, or are they just not generated at all (where I see the clipped areas in the dense/aggressive mesh)?

127
I am confused. I just processed a flight with better flying conditions and a second camera attached, but ended up with a worse product than my previous flight. It looks like it's because I selected "moderate" instead of "aggressive" depth filtering at the dense cloud stage, but I am not sure.

I used the same settings except for moderate vs. aggressive filtering - I was trying to avoid large holes in the water surface with big "flares" at their margins. It appears that I ended up with a more strongly smoothed surface with the moderate filter than with aggressive. Below I attach an image showing the results of the two model runs: the top is hillshaded topography (0.5 m cell size), and the bottom is the two orthophotos from the two flights (a month apart - Dec 19 and Jan 15).

As an aside, with the large point cloud from my most recent flight (~2 billion points across all chunks), it now takes about an hour to save the file. Optimizations to save speed would be much appreciated :)


128
Feature Requests / show raster dimensions in raster creation dialog box
« on: January 23, 2014, 07:17:52 AM »
I realized as I was writing my bug report post a few minutes ago that there is an easy fix that would help me avoid wasting time (since all the math is already in the dialog window).

If the XY dimensions of DEMs and orthophotos were shown as a line at the bottom of the "estimate region boundary" box in the orthophoto/DEM creation dialog, I would know whether my JPEG was going to exceed 65535 pixels in one dimension or the other. I could do the math myself, but it's an added step, and the scientific notation in the WGS84 cell size obscures significant digits and makes it a bit more of a pain.
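
The check itself is trivial - a quick sketch, with a made-up extent that mirrors my 25055x75862 case (PhotoScan's exact rounding may differ):

Code:
# The check I end up doing by hand: will the export exceed the JPEG limit?
JPEG_MAX = 65535  # maximum JPEG dimension in pixels

def raster_dims(xmin, ymin, xmax, ymax, cell):
    # round-to-nearest is close enough; PhotoScan's exact rounding may differ
    return round((xmax - xmin) / cell), round((ymax - ymin) / cell)

w, h = raster_dims(480000.0, 5280000.0, 483006.6, 5289103.44, 0.12)
print(w, h, "ok" if max(w, h) <= JPEG_MAX else "too big for JPEG")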

Just a thought.

Andy

129
I just found out that if I hit Cancel right after the size of the raster to be created is displayed, the color correction calculation has already started and I have to wait tens of minutes until it finishes - or maybe until the raster builds. Either way, I hit Cancel 20 minutes ago and it's still chugging away as I type this.

This is sort of an "additive" bug: it wouldn't be a problem except that I already know the JPEG is going to fail, since the max dimension is greater than 65535. Before color correction came around, I could cancel raster creation as soon as I saw the grid was too big. Now color correction starts right away, and apparently I have to wait until that's done. Ugh.

--edit - I posted the logfile below to illustrate my issue. Bold underlined is what told me to hit cancel (the max dimension was 75862, which will cause a .jpg raster build to fail after going through all the motions). Bold is what I saw sitting on the screen after I hit cancel (the last thing showing was "collecting control points...").

Non-bold text is what showed up at the end, after another 22 minutes or so...

initializing renderer... tessellating mesh...done (39891889 -> 39891889 faces)
done in 92.613 sec
Raster size: 25055x75862
Estimating mosaic occupancy...
1 blocks used from 1x1 (100%)
Calculating color correction...
collecting control points...
collected 11412195 points in 280.025 sec
analyzing 395 images... *Error: aborted by user
*Error: aborted by user
*Error: aborted by user
*Error: aborted by user
*Error: aborted by user
*Error: aborted by user
*Error: aborted by user
*Error: aborted by user
Error: aborted by user
Finished processing in 1241.76 sec (exit code -1)

>>>

130
General / Question about project extent and processing times
« on: January 22, 2014, 11:01:16 PM »
I mention in another post my best strategy for processing "large" (~1500-image) aerial projects.

I think I may have come across an adjustment that makes a BIG difference in processing time (and I should probably bold the "adjust model extent" text in my other post if that's true), but I wanted to check with the experts first.

So my question: if my bounding box is way bigger than the chunk being processed, will it slow down processing? Also (tangential question) - does it matter at all if the cameras are outside of the bounding box for surface reconstruction? Here's why I ask:

In my most recent flight, alignment time seemed fine (<2 hr), but image processing after alignment went from ~24-36 hours to >96 hours (still going). I realized a few minutes ago that I forgot to adjust the bounding boxes after duplicating the aligned sparse model and trimming it for each chunk I process (four or five total, depending on flight configuration). I chop the whole project (25-30 km length) into smaller chunks to get higher-resolution DSMs without a ridiculous number of polys. Usually I adjust the box for each chunk by "resetting" it, then trimming the height/depth to just over the treetops.
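
(Side thought: that per-chunk box trim could probably be scripted so I can't forget it. An untested sketch below - the region attribute names are assumed from the Python API reference, and region.size is in the chunk's internal coordinates, so the clamp value is illustrative, not meters:)

Code:
# Untested sketch: shrink each chunk's region height (Z of the bounding box)
# after duplicating/trimming chunks, instead of doing it by hand in the GUI.
import PhotoScan

doc = PhotoScan.app.document
for chunk in doc.chunks:
    region = chunk.region
    size = region.size
    size.z = min(size.z, 0.5)   # clamp box height to just over the treetops
    region.size = size
    chunk.region = region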

It seemed like most of the processing time over the last few days consisted of >56000 lines of the "selected x cameras" to "points: x (x.xx MB)" octree processing loop - about 5154 iterations of that code block, many of them "selected 0 cameras", doing whatever it's doing. Here's an example from the beginning and end of the "generating dense cloud" phase:

Beginning:
Code:
finished depth reconstruction in 20830.4 seconds
Device 1 performance: 73.6024 million samples/sec (CPU)
Device 2 performance: 349.252 million samples/sec (GeForce GTX 560 Ti)
Device 3 performance: 367.072 million samples/sec (Tahiti)
Total performance: 789.927 million samples/sec
Generating dense point cloud...
selected 278 cameras in 31.374 sec
working volume: 275845x44112x29289
tiles: 67x11x7
selected 0 cameras
preloading data... done in 0 sec
filtering depth maps... done in 0 sec
accumulating data... done in 0 sec
accumulator: 0 MB
octree constructed in 0 sec
nodes: 1 (4.8e-005 MB)
points: 0 (0 MB)
setupOffsets: branch without children
nodes: 0 (0 MB)
points: 0 (0 MB)


and the end...
Code:
selected 0 cameras
preloading data... done in 0 sec
filtering depth maps... done in 0 sec
accumulating data... done in 0 sec
accumulator: 0 MB
octree constructed in 0 sec
nodes: 1 (4.8e-005 MB)
points: 0 (0 MB)
setupOffsets: branch without children
nodes: 394589 (18.9403 MB)
points: 412151947 (4121.52 MB)
412151947 points extracted
Saving project...
saved project in 2611.65 sec
Generating mesh...
generating 257905x41243 grid (0.00383479 resolution)

That last line once mesh processing started - "generating 257905x41243 grid" - jumped out at me, because I am pretty sure that's freakin' huge compared to my previous runs.


131
Feature Requests / Expose options for TIFF/JPEG export formats
« on: December 31, 2013, 03:28:12 AM »
In trying to optimize imagery for access in GIS, I have found that my preferred format (JPEG) doesn't redraw as rapidly at full zoom as TIFF, but TIFF files are huge compared to JPEG.

For my case (ArcGIS), I have found that tiled TIFF with YCbCr JPEG compression appears to work best; it's also supported by GDAL, GRASS, and other GIS packages. It would be nice to have this option in PhotoScan, and also to be able to adjust compression settings, etc.

I was wondering if Agisoft could add options to the export dialog for more control over the exported file format. As an example, here's what I do to turn an Agisoft TIFF into a fast-redrawing, small TIFF:

gdal_translate infile.tif outfile.tif -co TILED=YES -co BLOCKYSIZE=256 -co BLOCKXSIZE=256 -co COMPRESS=JPEG -co PHOTOMETRIC=YCBCR -co JPEG_QUALITY=90 -co TFW=YES -b 1 -b 2 -b 3

Andy

132
Feature Requests / allow fine scale region rotation
« on: December 21, 2013, 04:54:46 AM »
It would be nice to have the ability to rotate the region at a fine scale. When trying to align the region's ground plane with the model's ground plane after optimization, it's hard to get the plane right because the region moves too much with each mouse movement. Either a keyboard option (up/down/left/right arrows while the rotate mouse button is held) or a modifier (press Shift to slow rotation) would be nice.
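
A console workaround might also be possible in the meantime - an untested sketch, with the Matrix/region API assumed from the Python reference:

Code:
# Untested sketch: multiply a small-angle rotation about Z into the region's
# rotation matrix, for a finer nudge than the mouse allows.
import math
import PhotoScan

a = math.radians(0.1)   # a 0.1-degree step
Rz = PhotoScan.Matrix([[math.cos(a), -math.sin(a), 0],
                       [math.sin(a),  math.cos(a), 0],
                       [0,            0,           1]])
chunk = PhotoScan.app.document.chunk
region = chunk.region
region.rot = Rz * region.rot
chunk.region = region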

133
Feature Requests / add linear and vertical unit geokeys to .las export?
« on: December 14, 2013, 04:47:43 AM »
To import .las files produced with PhotoScan (1.0 preview build 1780) into ArcMap, I have to strip and re-add the VLR GeoKeys, because the PS GeoKeys don't provide linear and vertical units. If I don't, I get this error (in CheckLas) and the .las won't import:

WARNING 1: Failed to import spatial reference
           Failed to read LAS linear unit geo-key.


The GeoKeys before and after look like this:

Before:

variable length header record 1 of 2:
  reserved             0
  user ID              'LASF_Projection'
  record ID            34735
  length after header  40
  description          ''
    GeoKeyDirectoryTag version 1.1.0 number of keys 4
      key 1024 tiff_tag_location 0 count 1 value_offset 1 - GTModelTypeGeoKey: ModelTypeProjected
      key 1025 tiff_tag_location 0 count 1 value_offset 1 - GTRasterTypeGeoKey: RasterPixelIsArea
      key 3072 tiff_tag_location 0 count 1 value_offset 26910 - ProjectedCSTypeGeoKey: PCS_NAD83_UTM_zone_10N
      key 3073 tiff_tag_location 34737 count 21 value_offset 0 - PCSCitationGeoKey: NAD83 / UTM zone 10N
variable length header record 2 of 2:
  reserved             0
  user ID              'LASF_Projection'
  record ID            34737
  length after header  21
  description          ''
    GeoAsciiParamsTag (number of characters 21)
      NAD83 / UTM zone 10N|


After:

variable length header record 1 of 1:
  reserved             43707
  user ID              'LASF_Projection'
  record ID            34735
  length after header  40
  description          'by LAStools of Martin Isenburg'
    GeoKeyDirectoryTag version 1.1.0 number of keys 4
      key 1024 tiff_tag_location 0 count 1 value_offset 1 - GTModelTypeGeoKey: ModelTypeProjected
      key 3072 tiff_tag_location 0 count 1 value_offset 32610 - ProjectedCSTypeGeoKey: PCS_WGS84_UTM_zone_10N
      key 3076 tiff_tag_location 0 count 1 value_offset 9001 - ProjLinearUnitsGeoKey: Linear_Meter
      key 4099 tiff_tag_location 0 count 1 value_offset 9001 - VerticalUnitsGeoKey: Linear_Meter


My workaround is to use las2las as follows (for UTM 10):

las2las -i infile.las -remove_extra -remove_all_vlrs -o outfile.las -utm 10N -meter -elevation_meter

134
I am interested in what other people are doing for a 1.0 workflow with "large" aerial projects.

[Edited 12 Mar 2014 to add the camera calibration step, since it helps sooo much with GCP placement - thank you Porly for your suggestion]

First, I want to say thank you and cheers to the Agisoft team. I am really enjoying the changes that are taking shape in 1.0 and I love how involved with the user community you are on this forum and in PMs. It's a pleasure to work with folks who do such a great job of supporting and developing their product while keeping in touch with the user community.

I collect aerial imagery from a Cessna with a wing-mounted 12 MP camera at ~600 m elevation over ~25 linear km, shooting images every 3 seconds in four overlapping passes (about a 1 km-wide swath). I am using a Canon D10, but am about to do a flight with an EOS M (22mm EF-M lens and a big, beautiful sensor - for my low budget, anyway). I am doing repeat flights at least once a month. Ground pixel resolution is 10-15 cm depending on elevation, generally about 12 cm.

With the changes in processing in 1.0, and my ongoing learning, I find that I get best results for the overall dataset as follows:

0) (new step) If you have used your camera(s) on another project, export the adjusted camera calibration and import it into the new project to improve initial alignment and reduce/eliminate the bowl effect - this was a HUGE timesaver for me, since the bowl effect made placing GCPs a hunting effort.

1) Align all images with Accuracy=High, Pair pre-selection=generic, point limit=40,000.

2) Trim flyers and sinkers - I usually just trim the obvious stuff, though I have experimented with gradual selection. I would be interested in other folks' experiences with the 1.0 workflow, especially using gradual selection.

3) Set the coordinate system, import GCPs, and manually place 3 to 4 GCPs that are well distributed over the flight area (2 images each). Then update georeferencing. This generally gets all the other calculated point locations somewhat close to their real locations.

4) Starting from one end of the project, I sort GCPs by lat or long (depending on the orientation of the project) and work my way through all GCPs by filtering photos by marker, then placing each GCP on all images where it is visible. Generally I update georeferencing after doing this for each GCP.

5) After all GCPs are placed, I optimize alignment, unchecking fit aspect, fit skew, and fit k4 - this is based on earlier forum postings, and I am especially interested in feedback on this step. It seems like skew might be useful for folks working with rolling-shutter cameras, but I'm not sure...

6) Copy the optimized model into multiple chunks and clip each chunk to about a fifth of the model (roughly 5 km sections). Trim GCPs and cameras and adjust the model extent. Generally I overlap each chunk about 500 m (2 or 3 GCPs) with the adjacent chunk. Note that placing the GCPs before splitting saves a lot of GCP placement time and seems to provide better continuity.

7) Generate the dense point cloud with ultra-high quality and moderate depth filtering (I wish there were more control over depth filtering - I'm still dealing with bad noise in water). This gives me about 400 million points per chunk.

8) Build the mesh with height field / dense cloud / interpolation enabled / custom face count = 40 million faces. This gives me good enough quality to produce a DSM with 0.5 m XY resolution, in which I can resolve features with Z relief of about the same magnitude, like logs on the ground and slope breaks from coarse to fine sediment in scarps.

9) Export the color-corrected RGB-average orthophoto and the DSM.

Notes:

Steps 8 and 9 are batch-processed. I would love to develop a Python script to always export DSMs and models with the same coordinate system and extents (and resolution for the DSM), but I haven't had time to sit down and play with the API yet.
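
Something like the following is what I have in mind (untested; the exportDem/exportOrthophoto names and arguments and the CoordinateSystem constructor are assumed from the Python API reference and may differ in this build):

Code:
# Untested sketch of the export script I have in mind: the same projection
# and cell size for every chunk's DSM and orthophoto.
import PhotoScan

doc = PhotoScan.app.document
proj = PhotoScan.CoordinateSystem("EPSG::26910")   # NAD83 / UTM zone 10N

for i, chunk in enumerate(doc.chunks):
    chunk.exportDem("chunk_%02d_dsm.tif" % i,
                    projection=proj, dx=0.5, dy=0.5)   # fixed 0.5 m cells
    chunk.exportOrthophoto("chunk_%02d_ortho.tif" % i,
                           projection=proj)            # RGB average as in step 9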

Up until recently (maybe build 1684?) I generally got good orthophoto results (RGB average) by simply constructing a mesh for each chunk from the sparse cloud; I constructed a dense mesh (custom, 40-50 million faces) only for the DSM. I liked the sparse-cloud orthos because they didn't cause so many artifacts in forested areas - the trees looked more natural in the orthophoto. Now it seems like there are blending or projection issues with the sparse cloud that are resolved by using the dense cloud for orthos, but that takes much longer (about 30 hours to build the dense cloud and mesh in my case).

I use RGB averaging because I find that it increases detail from my relatively noisy sensor - it essentially works like image stacking to increase the signal-to-noise ratio.
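
(The stacking intuition in toy numpy form - averaging N independent noisy views of the same scene cuts the noise standard deviation by about sqrt(N):)

Code:
# Toy illustration of why averaging overlapping views boosts SNR.
import numpy as np

truth = np.full(100000, 128.0)                             # flat "scene" brightness
views = truth + np.random.normal(0, 10, (9, truth.size))   # 9 noisy views

print(np.std(views[0] - truth))             # ~10   (single image)
print(np.std(views.mean(axis=0) - truth))   # ~3.3  (average of 9 = 10/sqrt(9))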

Hardware:

My system is a Dell T7500 with dual Xeon X5647s @ 2.93 GHz and 192 GB RAM, with either one NVidia GeForce 560 Ti and one ATI HD 7970, or two ATI 7970s, depending on how successful I am at making everything play nicely together. I just ordered an R9 290, but man, those things are hard to get your hands on!

135
Feature Requests / Cross-section-based pointcloud editing
« on: November 10, 2013, 12:44:29 PM »
I know PhotoScan is primarily a point cloud/surface production tool and not an editing tool, but for producing DSMs from aerial imagery it would be great to have some more powerful dense point cloud editing tools to clean up data before producing a mesh.

I know it's probably too much to ask for surface-finding filters and conditional selection tools, but I wonder if a simple 2D editing mode might be possible.

One feature I love in CARIS for cleaning up seafloor mapping data before producing DEMs is the cross-section editing tool in the swath editor: I can choose a window width along a survey track, then edit data on a two-axis cross-section plot and step through each survey line (attached image, taken from page 11 of this PDF).

If PS could draw an adjustable-width slice through the ground plane (or any plane) and let me edit those data in "cross-section" mode with the existing editing tools, I could much more easily clean up noise in my data. The other point cloud editors I have found that can handle >300-million-point clouds cost $5,000-$10,000+, and still don't offer tools that work well with SfM data.
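
The core selection is geometrically simple - here's a numpy sketch of what I mean: keep the points within an adjustable-width vertical slab along the track, then hand back (along-track, elevation) coordinates for two-axis editing:

Code:
import numpy as np

def slab_section(points, origin, track_dir, width):
    """Cross-section slice: points within +/- width/2 of a vertical plane.

    points: (N, 3) cloud; origin: (3,) point on the section line;
    track_dir: (3,) horizontal direction along the survey track.
    Returns (along-track, elevation) coords of the slab points, plus the mask.
    """
    d = np.asarray(track_dir, dtype=float)
    d /= np.linalg.norm(d)
    normal = np.array([-d[1], d[0], 0.0])   # horizontal normal to the track
    rel = np.asarray(points) - np.asarray(origin)
    dist = rel @ normal                     # signed distance from the slab plane
    mask = np.abs(dist) <= width / 2.0
    along = rel[mask] @ d                   # position along the track
    return np.column_stack([along, np.asarray(points)[mask, 2]]), mask

# usage: section, mask = slab_section(cloud, cloud.mean(axis=0), (1, 0, 0), 2.0)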

Also, I am glad to see the changes that have happened since I started using PhotoScan. Thanks for all of your hard work, and your accessibility to the users. You make a great product. I love PhotoScan :)
