Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - andyroo

Pages: 1 ... 24 25 [26] 27 28 ... 30
376
General / Re: Agisoft PhotoScan 0.9.1 pre-release
« on: May 20, 2013, 07:19:30 PM »
Depth filtering in 0.9.1 (1676) seems to be worse at removing noise and worse at preserving real detail than the default settings in 0.8.5. In my first project done completely in 0.9.1, I tried the "mild" and "aggressive" depth filtering settings, and both times the results were noisier than in 0.8.5 over water areas and lost real detail (logs on gravel bars) that 0.8.5 preserved.

Granted, the lighting was different and the flight lines were different, but I have 25 flights processed in various conditions in 0.8.5, and subjectively they all seem less noisy over the water than 0.9.1, even in aggressive filter mode.

Is it possible to make the depth filter parameters more customizable than "mild"/"moderate"/"aggressive"? I would like to see an option that gives more control - maybe an "Advanced" checkbox that exposes the underlying parameters adjusted by the mild/moderate/aggressive presets, so we can see and tune them.

Also, in the region where the interpolation is happening, is the only way to avoid it to use the "sharp" reconstruction mode? From what I understand, that mode is not recommended for aerial imagery.

377
General / Re: Agisoft PhotoScan 0.9.1 pre-release
« on: May 18, 2013, 02:56:22 AM »
Looks like a bug in the Ground Control\Import dialogue: the import start row is incremented by one from what I select. I am guessing that row 1 of the text file = row 0 in the dialogue window.

Anyway, if I export points from one PhotoScan project and import them into another, the default "start import" row is row 3 (unless I selected that; I forget), with row 1 being the coordinate system and row 2 being the column headers. But if I select row 3 to start the import, it skips my first point and starts importing at row 4. I have to select row 2 in order to import all of my points.
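The off-by-one reads like a 0-based vs. 1-based row-numbering mismatch. A minimal sketch of what I mean (the parser and the file layout here are hypothetical, just to illustrate the bug, not PhotoScan's actual code):

```python
# Illustration of the off-by-one described above: if the dialog counts rows
# from 0 but the file is written/read thinking in 1-based rows, asking to
# "start at row 3" actually skips the first data point.

def parse_markers(lines, start_row_1based):
    """Return (label, x, y, z) tuples, skipping rows before start_row_1based."""
    points = []
    for row_num, line in enumerate(lines, start=1):  # 1-based, matching the file
        if row_num < start_row_1based:
            continue  # coordinate-system / header rows
        label, x, y, z = line.split(",")
        points.append((label, float(x), float(y), float(z)))
    return points

lines = [
    "WGS 84 / UTM zone 10N",             # row 1: coordinate system
    "label,x,y,z",                       # row 2: column headers
    "GCP01,455000.0,5315000.0,12.3",     # row 3: first data point
    "GCP02,455100.0,5315050.0,11.8",     # row 4: second data point
]

# Correct behaviour: starting at row 3 keeps both points.
assert len(parse_markers(lines, 3)) == 2
# The buggy dialog behaves as if we had asked for start_row + 1:
assert len(parse_markers(lines, 3 + 1)) == 1   # first point lost
```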

Andy


378
General / Re: Agisoft PhotoScan 0.9.1 pre-release
« on: May 16, 2013, 07:53:52 PM »
I notice that GCPs take longer to show up after I create them in 0.9.1 (build 1676) than in 0.8.5. I didn't use 0.9.0, so I don't know whether the difference was also present in that version.

It seems to take an average of about 37 seconds for me in a project with four chunks, each with about 250 cameras (12 MP images). The model is 50 000 000 faces; the system is a dual Xeon X5647 2.93 GHz with 48 GB RAM and dual NVIDIA GTX 560 Ti cards.

With about a hundred GCPs entered, that means I spend an hour just waiting for the coordinates to pop up after I click "Create".

(By the way, I love that GCPs past #9 now stay in numeric order when sorted by name/label. But I notice that in the right-click "place point" dialog they don't.)

I also love the progress bar when saving a project. Little touch, but much appreciated.

Andy




379
I was just clicking through photos looking for a GCP and was disappointed that, once a photo is highlighted in the thumbnail view, I have to hit <arrow><enter> to switch to an adjacent image in the image view window, rather than just <arrow>.

It seems like making an <arrow> keypress select the next photo, with no need to hit <return>, would be a 50% improvement, making it a lot easier to quickly scan through images to find one with a control point.

Also, I have started importing GCPs from previous projects and placing them (thank you, JMR, for that tip), and it seems like it would be a lot faster if I could leave my mouse in the center of the screen and cycle to the next photo (especially when "filter by markers" is selected) using a key like <pageup>/<pagedown>. Maybe these two changes could be combined? The auto-center-on-marker feature is a real time saver. Thank you!
Andy

380
General / Re: Exporting Orthophoto from large dataset
« on: May 15, 2013, 09:03:28 PM »
When I have a project/chunk that's too big for a single orthophoto, I manually enter the bounding coordinates and generally allow a few pixels of overlap between discrete blocks. You don't need to rebuild geometry.
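The block bookkeeping is easy to automate if you're scripting the exports outside PhotoScan. A rough sketch (the function name, block size, and overlap are made up; adjust to whatever limit your exporter has):

```python
def split_extent(xmin, ymin, xmax, ymax, cell, max_px, overlap_px=2):
    """Split a region into export blocks no larger than max_px pixels on a
    side, each block overlapping its neighbour by overlap_px pixels."""
    blocks = []
    step = (max_px - overlap_px) * cell       # ground distance advanced per block
    block_size = max_px * cell
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            blocks.append((x, y,
                           min(x + block_size, xmax),
                           min(y + block_size, ymax)))
            x += step
        y += step
    return blocks

# 1 km x 1 km region at 0.1 m/px (10 000 px per side), <= 4096 px blocks:
blocks = split_extent(455000.0, 5315000.0, 456000.0, 5316000.0, 0.1, 4096)
# yields a 3 x 3 grid of blocks, each sharing a 2-pixel strip with its neighbour
```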

381
I generate DEMs and orthophotos multiple times for the same areas, with one goal being DEM comparison. If I forget to crop the region boundaries to integer values, comparison gets complicated because the raster grids don't line up.

If I could save/load "region profiles" with a defined projection, cell size, nodata value, and region boundary, that would be pretty nice.

Also, if I could make the estimated boundaries default to a given precision (like integer values for a UTM projection) so that rasters with the same cell size share the same grid, that would be one less thing for me to forget/mess up.
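Until something like that exists in the dialog, the snapping itself is trivial to do by hand before typing in the boundaries. A sketch, assuming projected coordinates in meters:

```python
import math

def snap_extent(xmin, ymin, xmax, ymax, cell):
    """Snap a bounding box outward so every edge sits on an integer multiple
    of the cell size; two rasters snapped this way share the same pixel grid."""
    return (math.floor(xmin / cell) * cell,
            math.floor(ymin / cell) * cell,
            math.ceil(xmax / cell) * cell,
            math.ceil(ymax / cell) * cell)

# Two surveys with slightly different estimated extents, 0.5 m cells:
a = snap_extent(455000.37, 5315000.81, 455999.62, 5315999.10, 0.5)
b = snap_extent(455000.12, 5315000.55, 455999.91, 5315999.45, 0.5)
# Both snapped extents land on the same 0.5 m lattice, so cells line up
# and cell-by-cell DEM differencing works without resampling.
```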

382
General / Suggestions for processing 360° horizontal imagery?
« on: April 26, 2013, 11:07:16 PM »
I just collected ~5 km of river data from a raft, using a 1-second intervalometer with a bunch of horizontal-looking cameras on a pole, oriented for about 30% overlap between FOVs to provide 360° coverage.

Now I have about 30 000 images to work with.  ;D  I am processing them in chunks of fewer than 5 000 images, and my first one is aligning now (it says about 40 hours to go).

If anyone has done anything remotely similar, I am looking for recommendations on PhotoScan settings. I've done pretty much everything else with downward-looking imagery, so I am not really sure what's going to happen.

My goals are:

1. See what happens when I try to tie the imagery together.
2. If I can align and generate geometry, make a DSM of the river bars and banks, and see what happens when I try to export an orthophoto generated from near-horizontal imagery.

3. Figure out how to generate a textured model, and then figure out how to either render a flythrough or let people navigate it.

4. (Kind of like 3.) I think I might need to do video instead, since my cameras weren't synced, but it would be nice to figure out how to do something Google Street View-like.

Thanks for any suggestions/ideas.

Andy

383
General / Thoughts on how to combine repeat aerial surveys..
« on: April 23, 2013, 06:05:06 AM »
I am conducting repeat aerial surveys of ~20 km of river to track changes during a dam removal, and I am becoming tired of re-entering GCPs for each survey. I think I may be about halfway through the project; I just finished my 26th survey, and each one has about 100 GCPs...

So, I guess 2 600 points is my threshold for "becoming tired" of entering GCPs :)

I am wishing for a way to add the photos from a later survey and align them so that GCPs are "automatically" carried over from an earlier survey that has been fully georeferenced. Of course that is a challenge because parts of the river have changed, some dramatically, but many areas of roads/trees/fields/houses/etc. have not changed much. Also, there are two LiDAR flights which, when combined, generate a very dense first-return point cloud that I wish I could "fit" my data to (but not the part that's changing).

I keep coming back to the thought that it would be nice to "snap" one model to another, and to use the GCPs and other common features shared by the images from all of the flights to better refine the camera locations for ALL of the images. For that matter, I wish I could align the PS model to an existing LiDAR first-return point cloud OR DSM using a least-squares fit, or something like that.

I think this would be difficult to do computationally with SfM because (1) the river changes between flights, even though many features do not, and (2) I now have around 30 000 images total, which seems like it might be too much for any type of SfM processing in PhotoScan.

I wonder if anyone using PS has had an opportunity to work with this many repeat surveys of a single area, and if anyone knows of a way I can use the volume of data to my advantage.

What I would love to do is:

(1) generate unreferenced/referenced models for each flight (done)

(2) tie those models together (ideally using SfM algorithms) so that unchanging features (GCPs, roads, buildings) share the same points/faces, while changed features (consistent across all photos from a single flight, but inconsistent between flights) remain unique to each model.

(3) ideally, deform the composite model from (2) using least squares or another model-fitting algorithm to fit a first-return LiDAR point cloud.

I think that if I were able to align the unchanging points of all of these models with each other and with the LiDAR point cloud, perhaps applying less weight to the lowest points/center area of the model (where the river is), then I would end up with a very accurately registered surface for each flight - much better than I can generate using GCPs alone.
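For the least-squares fit itself, the standard closed-form solution for matched 3D point sets is the Kabsch/Procrustes algorithm. A sketch, assuming you already have correspondences between "unchanging" points in the two datasets (which is the hard part; down-weighting the river area, as suggested above, would need a weighted variant):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch/Procrustes). src, dst: (N, 3) arrays of matched points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known shift applied to stable "unchanging" points.
stable = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
shifted = stable + np.array([2.0, -1.0, 0.5])
R, t = rigid_fit(shifted, stable)
# applying (R, t) to `shifted` lands back on `stable`
```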

I think this is probably a dream right now, and I am getting pretty good results with just the GCP method, considering my tools. But the idea interests me, and I was curious whether anyone has input that would inspire me to try something I haven't thought of.

Thank you for reading,

Andy

384
Feature Requests / Re: Marker Refinement
« on: March 30, 2013, 05:02:20 AM »
I was just going to post something similar. I was thinking of a pretty simple-looking marker refinement algorithm used in the Riegl TLS systems I've used: when you fine-scan a target, the TLS algorithm pulls the bright returns, assumes those are the target, and, given a shape you specify, determines the center.

In the case of PhotoScan and SfM, IF you were using targets with bilateral or radial symmetry, you could probably refine just by isolating the highest-intensity pixels around the hand-picked target area and looking for similar high(er)-contrast pixels in the other photos as well.
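The intensity-weighted-centroid idea might look something like this (a toy sketch, not the Riegl algorithm; the window size and brightness threshold are guesses and would be scene-dependent):

```python
import numpy as np

def refine_marker(image, row, col, win=7, thresh=0.8):
    """Refine a hand-picked marker position to the intensity-weighted
    centroid of the brightest pixels in a (2*win+1) square window.
    Assumes a bright, roughly symmetric target on a darker background."""
    patch = image[row - win:row + win + 1, col - win:col + win + 1].astype(float)
    mask = patch >= thresh * patch.max()          # keep only the bright pixels
    rr, cc = np.nonzero(mask)
    w = patch[rr, cc]                             # weights = pixel intensity
    dr = (rr * w).sum() / w.sum() - win           # offset from window centre
    dc = (cc * w).sum() / w.sum() - win
    return row + dr, col + dc

# Toy check: a bright 3x3 blob centred at (20, 31), hand-picked one pixel off.
img = np.zeros((40, 60))
img[19:22, 30:33] = 1.0
r, c = refine_marker(img, 21, 32)
# the refined position moves to the blob centre, (20.0, 31.0)
```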

For me this would only work for some GCPs. I use other "targets of opportunity", like the corners of stop stripes or the centers of the ends of parking stripes; those would still require manual refinement, I think.

385
Very interesting paper. It was unclear to me from reading it whether you would need different light angles on the same subject or only different viewing angles.

Also unclear how well it would work on running water, which is changing from image to image. But if it did work, it would be cool - it could be the "SAUS" (structure acquisition using specularities) that makes SfM even tastier!

I am working right now on post-processing artifacts from specular surfaces in a DSM, and so far ruggedness and curvature look like they'll be big winners in helping to mask and reprocess the specular-surface artifacts.
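For the ruggedness part, one common formulation is the Terrain Ruggedness Index. A sketch of the idea (this variant uses the mean absolute neighbour difference; the published TRI uses a root-sum-of-squares, so treat this as an illustration, not my exact workflow):

```python
import numpy as np

def ruggedness(dem):
    """Mean absolute elevation difference between each interior cell and its
    8 neighbours. Noisy specular-artifact areas score high; smooth terrain low."""
    tri = np.zeros_like(dem, dtype=float)
    core = dem[1:-1, 1:-1]
    acc = np.zeros_like(core, dtype=float)
    rows, cols = dem.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            acc += np.abs(dem[1 + dr:rows - 1 + dr, 1 + dc:cols - 1 + dc] - core)
    tri[1:-1, 1:-1] = acc / 8.0      # border cells left at 0 (no full window)
    return tri

# Toy check: a single noisy spike on a flat surface.
dem = np.zeros((5, 5))
dem[2, 2] = 1.0
tri = ruggedness(dem)
# the spike cell scores 1.0, its neighbours 0.125, the rest 0
```

Thresholding the result gives a candidate mask for the reprocessing step.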

386
Feature Requests / Batch options for DEM/orthophoto export
« on: March 22, 2013, 03:23:01 AM »
In 0.9.x the bounding box around the model is better constrained than in 0.8.x. For me, this means that the orthophoto and DSM extents are fine immediately after rebuilding geometry following GCP input.

I would like an option to batch export orthoimagery and DSMs. It would save me a lot of time if I could do this for all of my project chunks in one batch.

If the batch dialog had options to automatically calculate the extent, define the pixel size (defaulting to the ground sampling resolution), and automatically split the output into blocks if the X or Y dimension is too big, that would be nice. But pre-specified x/y boundaries would be OK too. The output directory could be specified in the batch settings, and the filename could default to the chunk name (plus a suffix if there are multiple blocks, auto-incremented if the file already exists).
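The naming rule in that last sentence is simple enough to pin down as code. A sketch (the helper is hypothetical, just to make the requested behaviour unambiguous):

```python
def export_name(chunk_name, existing, ext=".tif"):
    """Default export filename: the chunk name, auto-incremented with a
    numeric suffix whenever that name is already taken."""
    name = chunk_name + ext
    n = 1
    while name in existing:
        name = "{}_{}{}".format(chunk_name, n, ext)
        n += 1
    return name

taken = {"chunk1.tif", "chunk1_1.tif"}
assert export_name("chunk1", taken) == "chunk1_2.tif"   # skips the taken names
assert export_name("chunk2", taken) == "chunk2.tif"     # free name used as-is
```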

Right now, I spend a lot of time waiting for orthos and RGB-averaged images to generate from each chunk of my models. In some ways this takes even longer than the GCP entry, and I often waste time because I don't immediately start generating the next orthophoto or DSM when the previous one completes.

My workflow:
Day 1
1. Create chunks and import photos (0.5-1 hr)
2. Run batch process to align photos and build model geometry (12 hr)
3. Manually enter GCPs for each chunk (3-4 hr)
Day 2
4. Optimize alignment
5. Run batch process to rebuild geometry (~6-8 hr)
Day 3
6. Manually export imagery and DSMs for each chunk (a half day of clicking and waiting)

387
General / Re: Agisoft PhotoScan 0.9.1 pre-release
« on: March 22, 2013, 02:33:42 AM »
Hello Andy,

Could you please provide the log file for the successful export and for the export with the memory problem?
And just in case, please also check that you are using the 64-bit version and not the 32-bit.
I generated about a dozen new averaged images and have not been able to duplicate the problem. I think I must have mistyped the bounding coordinates. Sorry for the false report.

Right after I wrote the above, I managed to crash PhotoScan while exporting an RGB average image. The crash also occurred immediately after relaunching PhotoScan and trying to perform the same export, and when I switched to exporting a mosaic for the same chunk (instead of an RGB average), so this may be a different problem. An error report was generated for each crash, but I am attaching the logfile (as of the first crash) to this reply anyway. I had started a new logfile immediately before beginning to export these orthoimages.

This is reproducible with this project file (at least on my machine). I can upload the project file (and imagery if needed) for the whole project, or just this chunk, if that would help.

(I also tried with build 1647, and it still crashes. The crash reporter appears to have sent the dump.)

Andy

388
General / Re: Agisoft PhotoScan 0.9.1 pre-release
« on: March 20, 2013, 04:35:55 AM »
Currently a single orthophoto file in TIFF format is limited to 2 GB. If it is not possible to generate a smaller file using the current settings (export resolution), a "Not enough memory" message will be displayed during orthophoto export.

I was able to generate mosaic orthophotos with exactly the same dimensions as the RGB average orthos that gave me an out-of-memory error. (Retracted - I think I mistyped the bounding boxes. Sorry, I should have tried to duplicate the error first.)

389
Finally got one uploaded that shows the pits/lines in 0.9.1 but not in 0.8.5.

Andy

390
This is half-question/half-request, because I only somewhat grasp the process of structure-from-motion.

Most of my work is around water, and specular reflections from water always require time-consuming editing or masking. In LiDAR data, the dramatic difference in return intensity over water allows easy semi-automated masking. I was wondering whether pixel intensities (and uniformity of color) could be used to automatically detect specular surfaces and mask them from processing in SfM - producing a sort of "intensity map" from the orthoimagery.

I don't know whether it would make more sense to mask it during image processing or in a pre-processing step (like manual masking), but it would be nice to be able to exclude returns from the water surface from surface processing while keeping those pixels in the orthoimagery.
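To make the intensity/uniformity idea concrete, here is a toy sketch of a brightness-plus-low-saturation mask. The thresholds are guesses that would need per-scene tuning, and this is not anything PhotoScan actually does:

```python
import numpy as np

def water_mask(rgb, bright=0.7, max_sat=0.1):
    """Flag pixels that are both bright and colour-uniform (low channel
    spread), a crude proxy for specular water glare in an orthoimage.
    rgb: (H, W, 3) array with values in [0, 1]."""
    intensity = rgb.mean(axis=2)
    saturation = rgb.max(axis=2) - rgb.min(axis=2)   # spread across channels
    return (intensity > bright) & (saturation < max_sat)

# Toy scene: dark textured background, a bright near-grey glare patch, and a
# bright but strongly coloured patch (e.g. sand) that should NOT be flagged.
img = np.zeros((4, 4, 3))
img[:2, :2] = [0.9, 0.9, 0.88]    # specular glare: bright and near-grey
img[2:, 2:] = [0.9, 0.8, 0.5]     # bright but coloured, so rejected
mask = water_mask(img)
# only the 2x2 glare patch is flagged
```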

Of course, an even better solution would be to detect and model the specular surface rather than masking it. I've been looking for work on that topic, and this is the best reference I've found:

http://www.cs.columbia.edu/CAVE/projects/spec_stereo/

Of course, I recognize that water is not just a specular reflector but also, at times, a transparent refractor, and like I said, there's a lot about SfM I don't understand. But I would appreciate any insight into the problem of mapping water areas, and of course I would REALLY appreciate any tools developed to deal with the issue (or anyone's insight into how they handle it).

Andy
