Forum

Show Posts



Topics - andyroo

Pages: 1 ... 9 10 [11] 12
151
General / Suggestions for processing 360° horizontal imagery?
« on: April 26, 2013, 11:07:16 PM »
I just collected ~5 km of river data from a raft, using a 1-second intervalometer with a bunch of horizontal-looking cameras on a pole, oriented to have about 30% overlap in the FOV and provide 360° coverage.

Now I have about 30,000 images to work with.  ;D  I am processing them in <5,000-image chunks, and my first one is aligning now (it says it has about 40 hours to go).

If anyone has done anything remotely similar, I am looking for recommendations on PhotoScan settings. I've done pretty much everything else with downward-looking imagery, so I am not really sure what's going to happen.

My goals are:

1. See what happens when I try to tie the imagery together.
2. If I can align and generate geometry, make a DSM of the river bars and banks, and see what happens when I try to export an orthophoto generated from near-horizontal imagery.
3. Generate a textured model, and figure out how to either render a flythrough or let people navigate it.
4. (Kind of like 3.) I think I might need to do video instead, since my cameras weren't synced, but it would be nice to figure out how to do something Google Street View-like.

Thanks for any suggestions/ideas.

Andy

152
General / Thoughts on how to combine repeat aerial surveys..
« on: April 23, 2013, 06:05:06 AM »
I am conducting repeat aerial surveys of ~20 km of river to track changes during a dam removal, and I am becoming tired of re-entering GCPs for each survey. I think I may be about halfway through the project; I just finished my 26th survey, and each one has about 100 GCPs...

So, I guess 2,600 points is my threshold for "becoming tired" of entering GCPs :)

I am wishing for a way to add in the photos from a later survey and align them to "automatically" add GCPs from an earlier survey that has been fully georeferenced. Of course that is a challenge because parts of the river have changed, some dramatically, but many areas of roads/trees/fields/houses/etc. have not changed so much. There are also two LiDAR flights which, when combined, generate a very dense first-return point cloud that I wish I could "fit" my data to (but not the part that's changing).

I keep coming back to the thought that it would be nice to be able to "snap" one model to another, and to use the GCPs and other common features shared by all of the images from all of the flights to better refine the camera locations for ALL of the images. For that matter, I wish I could use an existing LiDAR first return point cloud OR DSM to align the PS model using a least squares fit, or something like that.

I think this would be difficult to do computationally using SfM because (1) the river changes between each flight, even though many features do not, and (2) I now have around 30,000 images total, which seems like it might be too much for any type of SfM processing using PhotoScan.

I wonder if anyone using PS has had an opportunity to work with this many repeat surveys of a single area, and if anyone knows of a way I can use the volume of data to my advantage.

What I would love to do is:

(1) generate unreferenced/referenced models for each flight (done)

(2) tie those models together (ideally using SfM algorithms) so that unchanging features (GCPs, roads, buildings) share the same points/faces, but changed features (consistent with all photos from a single flight, but inconsistent with photos between flights) are unique to each model.

(3) ideally, I would like to be able to deform the composite model in (2) using some kind of least squares or other model-fitting algorithm to fit a first-return LiDAR point cloud.

I think that if I were able to align the unchanging points of all of these models with each other and with the LiDAR point cloud, maybe applying less weight to the lowest points/center area of the model (where the river is), then I would end up with a very accurately registered surface for each flight - much better than I can generate using GCPs.
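To make (3) a bit more concrete, here is a rough sketch (plain NumPy, not anything PhotoScan does) of the kind of weighted least-squares similarity fit I have in mind, assuming matched point pairs on stable surfaces have already been pulled out of my model and the LiDAR cloud:

[code]
# Rough sketch, not PhotoScan functionality: weighted least-squares similarity
# transform (scale, rotation, translation) fitting model points onto matched
# LiDAR points, following Umeyama (1991). Assumes the N matched points on
# stable surfaces (roads, buildings, GCPs) have already been extracted.
import numpy as np

def fit_similarity(model_pts, lidar_pts, weights=None):
    """model_pts, lidar_pts: (N, 3) arrays of corresponding points."""
    w = np.ones(len(model_pts)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mu_m = (w[:, None] * model_pts).sum(axis=0)      # weighted centroids
    mu_l = (w[:, None] * lidar_pts).sum(axis=0)
    A = model_pts - mu_m
    B = lidar_pts - mu_l
    cov = (w[:, None] * B).T @ A                     # weighted cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                    # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt                                   # best-fit rotation
    var_m = (w * (A ** 2).sum(axis=1)).sum()
    s = np.trace(np.diag(S) @ D) / var_m             # best-fit scale
    t = mu_l - s * (R @ mu_m)                        # best-fit translation
    return s, R, t

# model points in LiDAR coordinates:  (s * (R @ model_pts.T)).T + t
[/code]

The weights are where I would down-weight the lowest points/center area near the river channel.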

I think this is probably dreaming right now, and I am getting pretty good results with just the GCP method, considering my tools. But this is an interesting idea to me, and I was curious whether anyone has input that would inspire me to try something I haven't thought of.

Thank you for reading,

Andy

153
This is half-question/half-request, because I only somewhat grasp the process of structure-from-motion.

Most of my work is around water, and specular reflections from water always require time-consuming editing or masking. In LiDAR data, the dramatic difference in return intensity over water allows easy semi-automated masking. I was wondering whether pixel intensities (and uniformity of color) could be used to automate specular-surface detection and mask it from processing in SfM - producing a sort of "intensity map" from the orthoimagery.

I don't know whether it would make more sense to mask it in the image-processing step or in a pre-processing step (like manual masking), but it would be nice to be able to exclude the water surface from surface processing while keeping those pixels in the orthoimagery.
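For the pre-processing route, here is a rough sketch of the kind of intensity/uniformity test I'm imagining (OpenCV/NumPy; this is not an existing PhotoScan feature, and all the thresholds are guesses that would need tuning per site):

[code]
# Sketch only, not a PhotoScan feature: flag bright, low-saturation, locally
# uniform pixels as likely specular water and write a binary mask image
# (black = area to exclude). Thresholds are guesses.
import cv2
import numpy as np

def water_mask(image_path, mask_path, sat_max=40, val_min=170, texture_max=6.0):
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1]
    v = hsv[:, :, 2]

    # local texture: standard deviation of brightness in a 15x15 window
    v32 = v.astype(np.float32)
    mean = cv2.blur(v32, (15, 15))
    sq_mean = cv2.blur(v32 * v32, (15, 15))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))

    water = (s < sat_max) & (v > val_min) & (std < texture_max)
    mask = np.where(water, 0, 255).astype(np.uint8)   # black where water
    mask = cv2.medianBlur(mask, 9)                    # clean up speckle
    cv2.imwrite(mask_path, mask)
[/code]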

Of course an even better solution would be to detect and model the specular surface, rather than masking it. I've been looking for work done on that topic, and this is the best reference I've found:

http://www.cs.columbia.edu/CAVE/projects/spec_stereo/

Of course, I recognize that water is not just a specular reflector but also, at times, a transparent refractor, and like I said there's a lot about SfM I don't understand, but I would appreciate any insight into the problem of mapping water areas. And of course I would REALLY appreciate any tools developed to deal with the issue (or anyone's insight into how they handle it).

Andy

154
I wanted to share a learning experience I had today. I have been working on a long, narrow stretch of coastline where we have 30-50 m bluffs, with GCPs at the top and bottom of the bluffs, plus a long spit. The area was flown from 500 m with about 500 images.

With the unreferenced model, there was a noticeable "frown" over the length of the shoreline (about 15 km). After changing to the WGS84 coordinate system and inputting control points, I had a big problem: no matter which photos I disabled, if I tried to generate a model with the entire coastline it would fold like a pretzel, and I would get errors of thousands of meters relative to the GCPs.

I even went out and collected more GCPs in areas where the model folded to try to fix the problem, but the solution was simple! I changed the GCP accuracy from 0 m to 0.01 m, and the alignment became perfect.

I just wanted to share in case other people had the same problem.

155
I originally posted about this in the General forum, and another user has reported the same issues, so I thought I would add a note here since it looks like it might be a bug.

When reconstructing in smooth/high (50M polygons for ~250 12 MP photos in my case), photo-alignment artifacts show up in areas of high overlap, and overall alignment with GCPs appears to be worse for the same photo set in 0.9 than in 0.8.5.

Original post has several image examples:

http://www.agisoft.ru/forum/index.php?topic=766.0

156
I have a 50M-face model built from 221 aerial photos covering about 5 sq km, with 17 GCPs. Due to issues I encountered with the surface generated by 0.9.0, I downgraded to 0.8.5 and regenerated the project using the same parameters.

The general method was: align photos, generate geometry, select control points, optimize, regenerate geometry, then export the orthoimage, average image, and DSM. The project datum and control points were entered in WGS84; surfaces were exported in NAD83 UTM Zone 10.

I generated a surface with 50M polygons in each version, using the same base project (photos and alignment). Then I compared them to each other and to a LiDAR flight flown in April. The results were not good for the DSM generated with 0.9.0. Here's what I found:

The DSM comparison with LiDAR was much better with 0.8.5: unchanged surfaces differed by < 0.25 m on average. With 0.9.0, elevation values on unchanged surfaces were 8-20 meters off and warped into a frown. See the first attached image (sorry - values are in U.S. feet).

The DSM comparison between 0.8.5 and 0.9.0 showed a warp in the 0.9.0 surface; the reason is unknown. What I especially don't understand is why the control points looked OK in the project but not in the DSM. Possibly export-format issues?

Another thing I noticed is that the quality of the DSM is worse in 0.9.0 (see the second attached image). I've highlighted two features that I noticed: pits and lines.

I am not sure whether the warping issue is because of the UTM Zone 10 export, because the project started as a 0.8.5 project, or because of some inherent issue in 0.9.0. But I am pretty sure that the lines and pits are exclusive to 0.9.0.
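For anyone who wants to run a similar check, here's a rough sketch of how the DSM-minus-LiDAR difference statistics can be computed (it assumes both rasters have already been resampled to the same grid, CRS, and vertical units; the file names are placeholders):

[code]
# Sketch: difference statistics between two DSM rasters that are already on
# the same grid, CRS, and vertical units. File names are placeholders.
import numpy as np
import rasterio

def dsm_diff_stats(dsm_path, lidar_path):
    with rasterio.open(dsm_path) as a, rasterio.open(lidar_path) as b:
        da = a.read(1).astype(float)
        db = b.read(1).astype(float)
        valid = np.ones(da.shape, dtype=bool)
        if a.nodata is not None:
            valid &= da != a.nodata
        if b.nodata is not None:
            valid &= db != b.nodata
    diff = da[valid] - db[valid]
    return {"mean": diff.mean(),
            "median": np.median(diff),
            "rmse": np.sqrt((diff ** 2).mean()),
            "p95_abs": np.percentile(np.abs(diff), 95)}

# e.g. dsm_diff_stats("dsm_0_9_0.tif", "lidar_dsm.tif")
[/code]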

Andy

157
General / Question about face count in 0.8.5 vs 0.9.0
« on: October 18, 2012, 01:50:57 AM »
I am generating orthoimages regularly of a draining lake in a forested watershed. I have been using a face count of 10M to guide DSM creation. I can do up to around 20M at least, but I was happy with the product before at 10M. From what I understood, the full number of faces was generated but then decimated before the final product, which allowed my medium-end machine (48 GB RAM and two 560 Ti cards) to handle them.

In 0.9.0 the quality of low-relief surfaces is much worse, and it seems like the quality of trees is better. Is 0.9.0 targeting surface complexity when it creates faces, whereas 0.8.5 averaged more evenly when it decimated?

If not, what are other potential causes of this? If so, can we have an option to change it?

The only other change I have made is a new camera mount that improved image clarity - I previously had blur on about 20% of images from engine vibration on the aircraft. Otherwise I think it has to be because of changes from 0.8.5 to 0.9.0.

[edit - added demo images - names correspond to PS version. Top is 0.8.5. The lines in the top surface are logs.]
[edit 2 - just found depth filtering in the new version. Regenerating with mild instead of aggressive to see if this is a factor.]

158
Feature Requests / Add "intelligent scissors" to model editing tools
« on: August 14, 2012, 03:44:16 AM »
I was just deleting a noisy waterbody from a model I made with a couple hundred aerial photos, and getting frustrated by having to start with a circle or rectangle when the waterbody is very much not either of those. So I figured I'd voice this request:

Could we get an "intelligent scissors"-type tool (free-form polygon) in the model window to build a polygon to start from when we grow/shrink our selection to delete using the <PageUp>/<PageDown> keys? That would be cool for me.

159
Feature Requests / Constrain ground plane or image angle/position?
« on: July 23, 2012, 08:56:54 PM »
I am collecting repeat orthoimagery, and the pilots are sometimes unable to follow tight flight lines, but I always have good (>60%) along-path overlap. Where the sidelap decreases to <30%, or where I am in heavily forested areas, the alignment sometimes "blows up" - even though the images are taken on a linear flight path, I can't constrain the solution.

(I tried to post an image, but got an "UPLOAD FOLDER IS FULL" error with a 99 KB image.)

It would be nice if I could constrain the vertical angle of images to within X° of the others, and even better if I could constrain the distance from image X to image Y for subsets as I add them.

The other thing that happens when the image alignment blows up is that the ground plane generally is off by 90° or 180°. If I could specify that this is an orthophoto flight and have the ground plane be normal to the alignment of the majority of the images, that would be ideal.

I know forests are hard for automagical point-finding, and most of the time this issue never comes up, but it sure would be neat to constrain this.

Also, I know it would be better if I had GPS locations for the cameras, but I can't do that at the moment. I am using GCPs for orthoimage production.

160
I just noticed that if all of the numbers are deleted out of any field, all coordinates for that marker are deleted. In other words, if I start to enter Z, then decide I just want to use X,Y coordinates instead, delete Z, and press <Enter>, I have to re-enter X and Y.

I don't know if this is a bug, but for me it is an undesired feature.

161
First, I am not complaining. I <3 Photoscan  ;D

Second, a caveat that this is for a specific use - orthos from aerial imagery.

Third, I've been clicking away for hours so this might just be a subconscious excuse to take a break and exercise my brain... But:

I would like an efficient way to go through my images and identify/disable blurry ones for orthoimage generation. I don't always know which images are around the one I am thinking about disabling. If I could display the photo and model panes side by side, quickly scroll through the images with the thumbnail strip, see where each one sits on the model, and zoom in and out on the photo, it would make it WAAY less tedious to go through hundreds of images disabling blurry ones that are surrounded by good ones.

Here's what I suggest in a nutshell:

- allow the image and model windows to be displayed at the same time
- allow the scroll wheel to zoom in and out of the image and model windows based on focus
- allow the scroll wheel (and/or arrow keys) to select the previous/next image in the thumbnail window when a SINGLE image is selected
- allow left-click-release in the model window, with the arrow tool selected, to select an image when the mouse is on a camera square and cameras are shown
- allow left-click to bring up "disable image" in the photo tab if the arrow is the selected tool

I also really like what Changchang Wu has done with VisualSFM. He displays thumbnails instead of the blue squares you see in PhotoScan, and when you select a thumbnail/image, the border and the corresponding points in the point cloud turn red, so you can see where the camera is pointing (he also uses the GPU for bundle adjustment and sparse point cloud generation).
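In the meantime, one way to shortlist candidates to disable would be to pre-score sharpness outside PhotoScan - a rough sketch using OpenCV's variance-of-Laplacian measure (the folder path is a placeholder and any cutoff would need tuning to the imagery):

[code]
# Sketch: rank images by a simple sharpness score (variance of the Laplacian).
# Low scores suggest blur; the folder and any cutoff are placeholders.
import glob
import cv2

def sharpness(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

scores = sorted((sharpness(p), p) for p in glob.glob("survey_photos/*.JPG"))
for score, path in scores[:50]:   # the 50 blurriest candidates to review
    print("%8.1f  %s" % (score, path))
[/code]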

Andy

162
I go into another piece of software to choose my region boundaries from GIS when generating orthoimages and DSMs. Once I have the surface optimized with GCPs, I export both the orthoimage and the DSM - sometimes I even export a few orthoimages, disabling blurry images over areas of interest. It would save me significant time if I could define the region once and have it stay there.

163
General / Preferred citation for PhotoScan?
« on: June 30, 2012, 12:25:32 AM »
Is there a preferred citation format for citing PhotoScan in academic papers?

164
Bug Reports / Marker ordering during entry
« on: June 30, 2012, 12:02:23 AM »
When markers are auto-named with auto-incrementing numbers and sorted by name, point 10 appears between points 1 and 2 in the list rather than after point 9. This is a minor annoyance, but it seems incorrect, even though I understand it's an artifact of alphanumeric sorting.
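A quick illustration of the behavior (plain lexicographic sort versus the numeric-aware sort I'd expect; the marker names are just examples):

[code]
# Illustration: lexicographic vs. numeric-aware ordering of marker names.
import re

names = ["point 1", "point 2", "point 9", "point 10", "point 11"]

print(sorted(names))
# ['point 1', 'point 10', 'point 11', 'point 2', 'point 9']   <- current ordering

print(sorted(names, key=lambda s: int(re.search(r"\d+", s).group())))
# ['point 1', 'point 2', 'point 9', 'point 10', 'point 11']   <- expected ordering
[/code]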

165
Does PS go directly from a local coordinate system to a geographic coordinate system with the Helmert transformation, or is there an intermediate step?

I've been thinking a bit about my GCP distribution and about transformations that account for the Earth's curvature. I think that as long as I have good GCP distribution I'm probably OK, but in situations where I don't have good ground control in the middle of the project, it seems like I could pretty rapidly introduce significant error due to the curvature of the Earth.

Another SfM package I've been playing with (VisualSFM) deals with this by using an Earth-centered, Earth-fixed coordinate system as an intermediate step, but it does this on camera coordinates, which I don't have. I started thinking about this a lot more when I read this post from the Ecosynth blog.

Is this a potential issue with PS, or is the Earth's curvature accounted for in the transformation? Each of my study reaches is about 5 km long, so I could potentially be introducing significant error just in the coordinate transformation if I don't have dense enough control (if I understand the math correctly).
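For scale, here is a back-of-envelope check of how far the Earth's surface drops below a local tangent plane over these distances (roughly the error a purely planar fit could introduce, if I have the geometry right):

[code]
# Back-of-envelope: drop of the Earth's surface below a local tangent plane at
# horizontal distance d is approximately d^2 / (2R). Not PhotoScan-specific.
R = 6371000.0                       # mean Earth radius, m

for d in (1000.0, 2500.0, 5000.0):  # distances in m
    print("%.1f km -> %.2f m" % (d / 1000.0, d ** 2 / (2.0 * R)))

# 1.0 km -> 0.08 m
# 2.5 km -> 0.49 m
# 5.0 km -> 1.96 m
[/code]

So on the order of a couple of meters over a 5 km reach, which is large enough that I care about it.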

Andy
