
Show Posts

This section allows you to view all posts made by this member.


Topics - nadar

1
General / Cost of airborne LiDAR ?
« on: February 09, 2017, 09:09:38 PM »
I'm currently writing a report to justify the use of photogrammetry vs. LiDAR survey. I can't find any reference for how much an airborne LiDAR system, to be installed in a dedicated aircraft, costs:

- LiDAR + recording system, software, etc.
- combined RGB camera
- inertial station
- dGPS

Quite surprisingly, I can't find this information on the Internet or on providers' websites; only the cost per hectare is quoted, not the cost of the equipment.
Based on a market analysis done a few years ago, I have in mind that the entry ticket was about 1 million US$,
but I guess prices are considerably lower now.
Does anybody have a recent estimate (with a reference, if possible)?
(I'm not talking about low-precision systems to be installed on UAVs, but about high-performance systems.)
Thanks for your help.

2
General / Point clouds in AutoCAD 2016
« on: November 01, 2015, 01:34:54 PM »
I'm always looking for an efficient solution to draw and measure architectural features (windows, doors, etc.) from point clouds of buildings, obtained either with a UAV or with a pole.
I tried automatic feature recognition (e.g. PointFuse), but as the cloud is noisy, the extracted features are not accurate enough and/or split into too many faces.
I think manual extraction will better suit my needs: I want to draw the architectural features directly on the cloud to obtain topologically "clean" polygons such as rectangles, volumes, etc.

The latest versions of AutoCAD 2016 and Civil 3D 2016 have very interesting functionalities for managing point clouds in this way:

https://www.youtube.com/watch?v=rCR95F0Fo88

The general workflow is to prepare the original cloud (I'm usually working with .LAS files) in an external application, ReCap, which organises the different files and merges them into an .RCP file that can be attached to an AutoCAD drawing. According to the demo, during this preparation phase the software will "structure" the cloud, i.e. recognise planes in it. ReCap has a very simple interface, and the parameters for this segmentation don't seem to be accessible to the user.

When the cloud is attached to an AutoCAD (or Civil 3D) drawing, I can access most functionalities such as slicing, automatic recognition of features in slices, etc. Snapping to cloud points works well.
Unfortunately, the possibility of modifying the local coordinate system ("Dynamic UCS") by snapping to planes identified in the "structured" cloud doesn't work for me. Apparently, no structure was detected during the ReCap preparation.
I tried with many different PhotoScan-generated clouds, and even with "pure" LiDAR samples, but the result is always the same.
This functionality of Dynamic UCS would be very useful.
Has anybody already tried this? Is it a problem with the nature of the data generated by PhotoScan? I'm using .LAS; would another format be more adequate? Do I have to clean or reorganise the data in another piece of software?
Is the problem related to some setting in AutoCAD?
Any suggestion will be more than welcome, because I'm sure this possibility of combining large point clouds with 3D CAD software widely used by architects would be a great asset.


3
Agisoft Viewer / Free viewer with measuring function ?
« on: October 03, 2015, 11:36:12 AM »
Is there a simple free viewer capable of displaying large point clouds and/or textured meshes that allows reading (real-world) coordinates and measuring distances and surfaces?
It could work on local data or on data stored in the cloud.

4
General / Export / Import of dense clouds
« on: August 24, 2015, 11:31:36 AM »
Is there a way to export a dense cloud (.LAS or .PLY) to an external point cloud editor, edit it, and then re-import the modified cloud into PhotoScan?
(I'm still looking for a solution to clean noisy clouds and to interactively remove unwanted parts.)
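For the external editing step, something along these lines is what I have in mind (purely a hypothetical sketch using laspy and numpy on the exported .LAS file; the re-import into PhotoScan is the part I'm unsure about):

Code:
import laspy
import numpy as np

# Read the dense cloud exported from PhotoScan as .LAS
las = laspy.read("dense_cloud.las")

# Example edit: drop obviously noisy points below some elevation threshold
z = np.asarray(las.z)
las.points = las.points[z > 95.0]   # 95.0 is just an example value

# Write the cleaned file, to be re-imported afterwards
las.write("dense_cloud_cleaned.las")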

5
General / Filtering noise in classified cloud
« on: June 28, 2015, 07:24:22 PM »
I finally found good settings to separate ground points from buildings and trees in a cloud generated from vertical photos.
Unfortunately, the result is a little bit noisy: within a building, a few points are classified as ground. This is annoying because these misclassified points generate "spikes" in the model.

This is not very different from the noise you get from supervised classification of satellite images. To remove this effect, we usually apply a smoothing filter adapted to a classified image (discrete values, as opposed to the continuous values found in an ordinary image): assign to each pixel the most frequent class (mode) found in a moving window of 3x3 or 5x5 pixels. Another solution is to apply a sequence of dilations / erosions of each class.

I was wondering whether a similar function exists to clean a classified point cloud: analyse the 3D neighbourhood of each point (or cell) and reclassify it into the dominant class.

I found similar functions in MeshLab (Ball Pivoting and Poisson Disk), but MeshLab is really not friendly with large datasets, and these seem more oriented towards cleaning meshes, whereas I want to clean a point cloud.

Alternatively, a similar functionality would be great for smoothing continuous values: give each point the dominant colour of its surroundings.
And let me dream a little further: smoothing not only the class or point colour, but also the point elevation?

I'm convinced some Python magician has already worked on this!
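To make the idea concrete, here is the kind of 3D mode filter I have in mind, as a rough sketch only (it assumes the classified cloud has been exported to .LAS and uses laspy, numpy and scipy; the file name and radius are just examples, and the plain Python loop would need chunking or vectorising for clouds with millions of points):

Code:
import laspy
import numpy as np
from scipy.spatial import cKDTree

las = laspy.read("classified.las")
xyz = np.vstack((las.x, las.y, las.z)).T
classes = np.asarray(las.classification)

tree = cKDTree(xyz)
new_classes = classes.copy()

radius = 1.0  # neighbourhood radius in model units, to be tuned
for i, neighbours in enumerate(tree.query_ball_point(xyz, r=radius)):
    # Reassign each point to the dominant (modal) class of its 3D neighbourhood
    new_classes[i] = np.bincount(classes[neighbours]).argmax()

las.classification = new_classes
las.write("classified_smoothed.las")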

Thanks for your help

6
General / How to edit elevations in dense point cloud ?
« on: May 18, 2015, 03:36:39 PM »
Is there a way to modify the Z value (elevation) of selected points in a dense cloud?
I know how to classify points and assign a new class, but I want to select an area and either force the elevation of all points to a given value or (better) add a constant to the existing Z values (to increase or decrease the height of the object).
I'm trying to simulate changes in buildings and trees.
I know how to do this on a mesh, but I want to keep all the fine details of my cloud.
I'm pretty sure a Python script exists to do this, but I can't find it. If not, can you recommend another piece of software?
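Something along these lines, run on an exported .LAS file, is the kind of script I'm thinking of (a hypothetical sketch using laspy and numpy; the rectangular selection and the offset value are just examples):

Code:
import laspy
import numpy as np

las = laspy.read("dense_cloud.las")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

# Selection: a simple axis-aligned rectangle in ground coordinates (example values)
inside = (x >= 152300.0) & (x <= 152340.0) & (y >= 121150.0) & (y <= 121180.0)

# Either add a constant to the existing Z values...
las.z = np.where(inside, z + 3.5, z)
# ...or force them to a fixed elevation instead:
# las.z = np.where(inside, 104.0, z)

las.write("dense_cloud_edited.las")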
Thanks

7
General / Problems with orthophoto of buildings
« on: March 10, 2015, 11:31:19 AM »
I'm trying to produce a clean orthophoto of a built-up area:

About 50% of the area is covered with residential houses and industrial hangars.
Photos were acquired in vertical view from a drone flying at 75 m; ground resolution is about 1.5 cm.
(Sorry for double posting: I first added this request for help in an existing topic, but did not get any answer...)

I generate a high-quality dense cloud. It looks fine (not too noisy).
If I generate the ortho using the mesh generated from the whole cloud, I get jagged roof borders, etc.
If I generate a mesh from a cloud filtered to keep only ground points, I get large triangles in the building areas, and this creates discontinuities in the roofs.

I'm now trying to export the classified cloud to Global Mapper, select the ground points and calculate a raster DEM from these points. That seems OK.
Then I generate a TIN model and export it as DXF.
When I import this DXF into PhotoScan (using Tools / Import Mesh), the TIN imports fine, but seems upside down:
when the model is viewed from the top, the mesh is displayed in dark grey.
If I rotate the whole model 180°, the mesh is correct (various hues of blue), but applying textures doesn't work.
Does anybody have a suggestion for either flipping the DXF model before importing it, or rotating the mesh inside PhotoScan?

More generally: how do you deal with this type of imagery?
Thanks

8
I would like to use the camera positions and orientations estimated in PhotoScan (alignment + optimisation) in the ERDAS IMAGINE Photogrammetry suite (aka LPS), in order to digitize features in a stereoscopic environment (Stereo Analyst).
Inputs for LPS are the image name (including the complete path), X, Y, Z position values and omega, phi, kappa orientation values.
I tried using the estimated values.
Importing XYZ is fine, but the orientation doesn't work. I also tried going through the Export Cameras tool (type omega-phi-kappa), but also unsuccessfully.

I interpret roll as omega, pitch as phi and yaw as kappa (please confirm whether this is correct), but the convention used for these angles seems to be different.
I suppose all angles coming from PhotoScan are in degrees (?), but LPS can also use gons (grads) and radians.
The convention used in LPS is the right-hand rule (see http://en.wikipedia.org/wiki/Right-hand_rule): according to this rule, if the plane is inclined to the right side, roll will be positive, etc.
A major concern is yaw. LPS measures kappa (yaw) as positive when turning to the left and negative to the right. I assume the origin (kappa = 0) is the top of the photo (= no rotation). Is PhotoScan using the same convention, or a "geographic" orientation where yaw is defined as the bearing (yaw = 0 when the top of the image points north)?
Even more confusing for me: in which order are these rotations applied?
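To make the question concrete, this is how I currently picture the conversion. It is only a sketch: both rotation orders below are my assumptions, not documented behaviour of PhotoScan or LPS.

Code:
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def ypr_to_opk(yaw, pitch, roll):
    # Assumption 1: yaw/pitch/roll (degrees) compose as R = Rz(yaw) * Ry(pitch) * Rx(roll)
    yaw, pitch, roll = np.radians([yaw, pitch, roll])
    R = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
    # Assumption 2: omega/phi/kappa are recovered from R = Rx(omega) * Ry(phi) * Rz(kappa)
    phi = np.arcsin(R[0, 2])
    omega = np.arctan2(-R[1, 2], R[2, 2])
    kappa = np.arctan2(-R[0, 1], R[0, 0])
    return np.degrees([omega, phi, kappa])

If either package actually uses a different composition order or different axis directions, the same machinery applies, but the matrices have to be composed and decomposed accordingly.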

Then, I need to introduce the estimated camera calibration parameters.
Here also, LPS is quite confusing:
I need the focal length (that one is easy) and the x & y principal point offsets in mm, while PhotoScan reports these in pixels. I assume the offset is given by the difference between the actual centre and the theoretical centre (= Xsize / 2). It's easy to transform by multiplying the offset in pixels by the pixel size, but I'm not sure about the direction (sign) of this parameter. Could somebody confirm that [Cx - (Xresolution / 2)] * pixelsize corresponds to the X offset of the principal point, as requested by LPS?
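As a sanity check, this is the conversion I have in mind, with example values only; the sign handling (especially for Y, since image rows count downwards) is exactly what I would like to have confirmed:

Code:
# All values below are examples, not actual calibration results
pixel_size = 0.00519            # mm per pixel
width, height = 4608, 3456      # image size in pixels
cx, cy = 2310.4, 1742.7         # principal point from PhotoScan, in pixels

x_offset_mm = (cx - width / 2) * pixel_size
y_offset_mm = (cy - height / 2) * pixel_size   # sign may need flipping for LPS
print(x_offset_mm, y_offset_mm)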

And now, the cherry on the cake:
I need "extended parameters", and LPS proposes 3 templates: ORIMA, Australis and SMAC
Australis seems the closest to what photoscan provides in calibration report (k1, k2, k3, p1, p2), but when I enter the estimated values, I only get garbage (works fine if I set all parameters to zero). I assume these are coefficients and don't use units and hence no transformation from pixels to microns or whatever.
LPS can also calibrate the camera using a set of radial distorsions values, but I don't see this type of info in photoscan reporting. Am I wrong ?

If somebody uses both PhotoScan and LPS, I would really appreciate understanding how to share information between these two packages, which are potentially very complementary.

Thanks


9
General / Software to generate flyover MP4s from textured models
« on: November 03, 2014, 06:15:45 PM »
How do you record nice-looking flyovers from textured models?

10
General / Structure from forward motion ?
« on: October 31, 2014, 11:17:09 AM »
Is it possible to reconstruct a 3D model from images acquired with the line of sight parallel to the direction of movement?
I'm trying to build a model of a street using a single camera placed on a vehicle, aimed forward and more or less horizontally.
I'm interested in the road surface, but also in the buildings located on the sides.

Is this geometry compatible with the basic requirements of photogrammetry (e.g. epipolarity), or do SfM algorithms circumvent this?

11
General / Optimization generates heavy noise in sparse cloud
« on: August 17, 2014, 07:14:44 PM »
I have 12 vertical aerial photos of a castle (acquired with a 28 mm lens from about 800 m flying height) and 150 oblique views acquired with a 135 mm lens from different angles (2 x 360°) at flying heights varying between 300 and 150 m. All photos are well exposed and sharp. They are all georeferenced (GPS data stored in the EXIF).

I import all photos into the same chunk. They all align fine (using either the "ground control" or the "disabled" option).
I have a few (4) ground control points, not very accurate (obtained from Google Earth). The alignment error is about 5 m. I added 20 markers using the guided approach; on most photos the proposed placement is fairly good, and when needed I adjusted the placement manually.
The resulting sparse cloud is not too bad (noisy, but geometrically coherent).

If I optimize this sparse cloud, whether using a fixed camera calibration or not, the "optimized" cloud becomes very noisy, with many spikes and wrongly oriented sub-parts. As you may expect, the derived dense cloud is not good...

Any suggestion will be welcome.

12
General / How to relate cameras to wrong parts of a cloud?
« on: August 15, 2014, 02:41:23 PM »
I'm trying to produce models of urban sites using both vertical and oblique photos acquired from a helicopter.
Sometimes I get relatively good results, but some parts of the low-density cloud are obviously wrong:
a sub-cloud stretches outside the model, roughly oriented along an oblique plane.
This suggests a wrongly computed camera orientation.
I have enough cameras that I can afford to remove a few badly oriented ones, but finding the culprit(s) is another story!
The only solution I have found so far is to remove cameras one by one and check the result in the model window.
As I often work with a few hundred cameras, this procedure can be very tedious, especially if there are several wrongly oriented parts of the cloud.

Is there a solution to:
a) select one camera (or group of cameras) and highlight the corresponding parts of the cloud,
or
b) select a part of the cloud and identify which camera(s) were involved in its construction?

(Option b would of course be easier for my specific problem.)
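For option (b), this is roughly the script I am hoping already exists somewhere: a sketch only, assuming the PhotoScan Python API in which sparse-cloud points and per-camera projections share a track_id.

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
points = chunk.point_cloud.points
projections = chunk.point_cloud.projections

# track_ids of the sparse-cloud points currently selected in the model view
selected_tracks = {p.track_id for p in points if p.valid and p.selected}

hits = {}
for camera in chunk.cameras:
    if camera.transform is None:          # skip cameras that are not aligned
        continue
    cam_tracks = {proj.track_id for proj in projections[camera]}
    hits[camera.label] = len(selected_tracks & cam_tracks)

# Cameras contributing the most selected points are the most likely culprits
for label, n in sorted(hits.items(), key=lambda item: -item[1]):
    print(label, n)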

Pages: [1]