Show Posts


Messages - photogrammetrix

Pages: [1] 2 3
1
Hi Williv, Hi Andy,


maybe this is helpful for you:

http://www.isprs.org/proceedings/XXXVIII/1_4_7-W5/paper/REDWEIK-119.pdf

kind regards

2
Dear Williv, Dear Andy,

I understand that the large number of scanned images from different cameras will make a manual photo-by-photo approach very time-consuming and ... boring.

As Nadar mentioned already, Agisoft PS is not well suited to your data, because it is made to process photographic images from which the software can calculate / estimate the inner orientation of the camera, using algorithms for camera self-calibration following the concepts of Brown.

Unfortunately, the inner orientation will in most cases be lost during the scanning process, for the reasons I mentioned already. And perhaps there are more, e.g. the photo was not exactly aligned with the scanner frame, etc.

Some time ago I carried out several test runs with scanned paper prints of WW2 recon images from the German Luftwaffe. The drawback with these images was that the bottom fiducial had been cut off and each scan had different dimensions (pixel rows / columns). On the other hand, I was lucky, because the camera type, film dimensions and focal length were known, so I was able to do a kind of geometric "reconstruction" for each single image. After that procedure, some images with sufficient overlap could be aligned, but there were always some that could not.

If you do not necessarily need "orthorectified" images with very precise positioning in your target coordinate system, trying out some kind of image-stitching software may help you out.

But you have to keep in mind that this kind of software will not necessarily take photographic image distortion into account, which is still inherent in the scans, although it cannot be parameterized any more.

Some time ago I tried out Regeemy, an automated image-registration and stitching package developed by INPE / Brazil, which I found quite good for satellite images. Please look here:

http://wiki.dpi.inpe.br/doku.php?id=wiki:regeemy

Good luck for your project.
kind regards




3
Hi Williv,

me again   :D

maybe this is helpful for you:

http://sourceforge.net/projects/e-foto/

http://www.uni-koeln.de/~al001/airdown.html

kind regards

4
General / Re: Effects of mosaicing on vegetation index values
« on: January 06, 2016, 02:14:26 PM »
Hi Andreas,

I would be interested to take a closer look into this paper. Is there a link where I can download this?

From my point of view, deriving reliable values for vegetation indices from NIR-converted consumer-grade cameras is not nearly as easy a task as is often suggested.

The concept of vegetation indices was originally developed for multispectral sensors (for example the Landsat Multispectral Scanner MSS or the Landsat Thematic Mapper TM) with well-defined imaging parameters and spectral characteristics. Furthermore, additional information is recorded during imaging, which enables us to do a radiometric calibration and, if necessary, also to correct for atmospheric effects. Without this, the image data and derived products such as VIs will hardly be comparable between different acquisition dates and between different sensors.

The question I have in mind is: how much effort is necessary to get radiometrically calibrated measurements for the wavelengths of interest from consumer-grade cameras, in order to derive values for vegetation indices that are comparable over time and comparable between different sensors / consumer-grade cameras?
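To illustrate the comparability problem with a toy example (the numbers below are purely made up, not from any real sensor): the NDVI formula itself is trivial, but applied to raw digital numbers instead of calibrated reflectances, the same scene yields different index values under different exposure / sensor-response conditions:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    For the result to be comparable across dates and sensors, nir and red
    should be calibrated reflectances, not raw digital numbers.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# The same target recorded by an uncalibrated camera under two
# exposure / response conditions (hypothetical gain and offset):
red_a, nir_a = np.array([50.0]), np.array([150.0])   # condition A
red_b, nir_b = red_a * 1.6 + 10, nir_a * 1.4 + 10    # condition B

print(ndvi(nir_a, red_a))  # [0.5]
print(ndvi(nir_b, red_b))  # a different value -> not comparable
```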


kind regards

5
Hi,

I forgot one important aspect:

The images must have sufficient overlap for the modelling in PS. The standard overlap for classical stereophotogrammetry is around 60% in flight direction with around 30% sidelap from flight path to flight path.

My experience is that this is not sufficient for PS to do a proper 3D modelling. Mosaicking may still work with acceptable results for historical images, when you can find a sufficient number of ground control points.
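As a quick back-of-the-envelope sketch (with assumed example numbers, e.g. a 23 cm film frame at an image scale of 1:6000, giving roughly 1380 m of ground coverage per frame), the exposure base and flight-line spacing follow directly from the overlap fractions:

```python
def photo_spacing(ground_coverage_m, overlap):
    """Distance between exposures (or flight lines) for a given overlap fraction."""
    return ground_coverage_m * (1.0 - overlap)

coverage = 1380.0  # assumed: 23 cm frame at 1:6000 scale -> ~1380 m on the ground

print(photo_spacing(coverage, 0.60))  # base between exposures at 60% forward overlap
print(photo_spacing(coverage, 0.30))  # spacing between flight lines at 30% sidelap
print(photo_spacing(coverage, 0.80))  # the denser 80% overlap often used for SfM
```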

kind regards

6
Hi Williv2,

just a few questions to better understand what you are working on:
- FSA stands for Farm Service Agency (FSA) of the U.S. Department of Agriculture (USDA)?
- are the aerial images that you have scanned paper-prints or the original film-roll (positive/negative)?
- was it a complete scan, including all markings that are displayed on the image frame (fiducials, focal length, time, altitude, image counter, clock, etc.)?

I am asking this because the most important thing is that the scans have to preserve the original parameters of the camera's inner orientation. Paper prints or even the original film rolls may have changed their original dimensions slightly, depending on the environmental conditions (temperature / humidity) under which they were stored over all the years.

Furthermore, the scanner may not be best suited for this task, even if it is a high-end one. There are special photogrammetric scanners, designed specifically for photogrammetrically "correct" scans of aerial images, that do preserve the inner orientation of the camera. Please look here:

http://e-collection.library.ethz.ch/eserv/eth:25230/eth-25230-01.pdf

The problem with standard flatbed scanners is often that they introduce additional distortions into the imagery, e.g. because of a non-linear movement of the scanning head.

You can try to eliminate or minimize that "bad influence" if the fiducials are imaged on your scans and the original film format and focal length are known. Use the fiducials as control points and perform a geometric correction of your scans using standard image-processing software.
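A minimal sketch of such a correction, using plain NumPy (all coordinates below are made up for illustration; the nominal fiducial positions would come from the camera's calibration certificate, here expressed on a 10 px/mm target grid for a 230 mm x 230 mm frame):

```python
import numpy as np

# Hypothetical example: four fiducial positions measured in the scan
# (pixels) and their nominal calibrated positions on the target grid.
scan = np.array([[102.0, 95.0], [2388.0, 110.0], [2375.0, 2401.0], [88.0, 2390.0]])
nominal = np.array([[0.0, 0.0], [2300.0, 0.0], [2300.0, 2300.0], [0.0, 2300.0]])

# Fit a 6-parameter affine transform (scale, rotation, shear, shift)
# by least squares: nominal ~= [x, y, 1] @ A
X = np.hstack([scan, np.ones((4, 1))])          # 4x3 design matrix
A, _, _, _ = np.linalg.lstsq(X, nominal, rcond=None)

corrected = X @ A                                # fiducials after correction
rmse = np.sqrt(np.mean((corrected - nominal) ** 2))
print("residual RMSE at the fiducials (px):", rmse)

# The same A is then applied to every pixel coordinate of the scan;
# any image-processing package with an affine-warp function will do.
```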

kind regards





7
General / The future has just begun ...
« on: September 15, 2015, 07:31:02 PM »
Hi everybody,

just recently stumbled across this:

https://www.youtube.com/watch?v=i1eZekcc_lM

...wow, in realtime!


Cheers

8
General / Quo vadis, PhotoScan?
« on: October 02, 2014, 11:40:51 PM »
Hi all,

thanks to the Agisoft team for putting a lot of effort into developing this good piece of software. Please forgive the pathos in the title of this posting.

I was reading through some of the latest feature requests and looked around on the freshly designed website. PhotoScan is meanwhile used for a broad spectrum of applications, reaching from gaming, multimedia and art to geoscience, GIS, mapping and surveying. This is undeniable proof of PhotoScan's outstanding versatility.

Naturally all these different fields of interest have their legitimate wishes and needs for new functionality in the future.

For the moment, PhotoScan presents itself to the user as a more or less monolithic block of software covering everything "under one hood". I am wondering about the way and strategy by which the increasing number of features and functions will be managed in the future - e.g. plugins, industry-specific sub-versions, anything else?

In other words: "Quo vadis, Photoscan?"

Cheers

9
Feature Requests / 3D-Cursor Coordinate display and coordinate/point picking
« on: September 28, 2014, 10:48:14 AM »
Dear Agisoft Team,

what I am really, really missing in PhotoScan is a realtime 3D cursor coordinate display and picking tool in model view, and a 2D cursor coordinate display in photo view, both independent of creating markers. Best would be a combo with a fast measuring tool.

The coordinates should be displayed as local coordinates and, if a map projection is defined, as geocentric and map-projected coordinates too; in photo view, as image coordinates (row, col, RGB) together with the associated 3D coordinates. That would be really great.

If the Python API becomes more flexible and exposes the necessary interfaces for accessing the model and photo view graphics to do such things, I would try it on my own.

So this is another great wish: a more extensive and flexible Python API that allows access to the graphics, e.g. in a broader sense, like ESRI does with the Python interface in ArcGIS.

And here comes the last one for today:
a crosshair marker style, so that the position can be checked more easily without having to zoom in as far as with the dot / flag symbols.

Cheers and thanks!


10
General / Re: ground vs. non-ground points classification
« on: September 25, 2014, 01:33:16 PM »
Hi Alexey,

ah, sorry, I forgot to mention that: height field

Thanks and cheers

11
General / ground vs. non-ground points classification
« on: September 25, 2014, 11:29:59 AM »
Hi all,

taking a look at the dense point cloud classification functions and performing just a first run, I found the following:

When creating a mesh from the ground point class, it seems that the resulting mesh does not represent exactly the same area as depicted by the red ground-points mask, although interpolation / extrapolation was switched off. So unwanted objects seem to remain in the ground mesh.

Please see the attached screenshot.

Tested on Lubuntu 12.04 LTS with PS 1.0.4 build 1874 and 1.1.0 build 1976.

Cheers

12
Hi Steven,

many thanks for sharing your experiences and publishing your results. Great inspiration and guideline for handling such things.

Thanks!

Cheers

13
General / Re: Filtering water, vegetation etc.
« on: September 11, 2014, 04:30:15 PM »
Hi Julien,

I have not tested CANUPO in depth. As far as I dug into it, it works well with CC but tends to take a long time when dealing with larger point clouds.

Is there a chance of pre-classifying the images and using e.g. a non-shore mask? You may take a look at the WEKA trainable segmentation classifier in FIJI / ImageJ, or the SIOX segmentation tool in GIMP or FIJI. Batch processing should be possible via scripting.

Another idea is to exploit the RGB and normal values of the point cloud and create a kind of conditional filter with the help of the Python API.
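A minimal sketch of what such a conditional filter could look like, using plain NumPy on exported per-point attributes (this is not PhotoScan API code; the arrays and thresholds below are made-up examples):

```python
import numpy as np

# Per-point colour and unit normal, e.g. exported from the dense cloud.
rgb = np.array([[30, 90, 40], [180, 175, 170], [20, 60, 200]], float)
normals = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.71], [0.0, 0.0, 1.0]])

# Condition 1: green-dominated points -> likely vegetation.
green_excess = rgb[:, 1] - 0.5 * (rgb[:, 0] + rgb[:, 2])
is_vegetation = green_excess > 20

# Condition 2: near-vertical normals -> likely flat ground / water surface.
is_flat = normals[:, 2] > 0.95

# Combine conditions: keep flat, non-vegetation points.
keep = ~is_vegetation & is_flat
print(keep)  # boolean mask to select the wanted subset of the cloud
```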

Cheers


14
Hi Steven,

you mentioned the corner markings in the photos.

Commonly, in an aerial mapping or surveying camera, the fiducials and all the additional information we normally find on the black image frame (such as altimeter, image counter, focal-length information, time stamp, vertical gauge, etc.) are exposed onto the film while the image is taken. So all these things are "fixed" parts within the camera body.

With this in mind, I was wondering about the corner markers in your images, which have no constructive connection to the image frame. My conclusion is that these corner markers may have been exposed onto the film by some other means, but I think they are not markings like the fiducials from the camera frame in metric mapping cameras. I assume that these markers will therefore not be usable for any kind of calibration.

Just some thoughts about it ...

Cheers


15
General / Re: limit of accuracy by pixelpitch
« on: September 11, 2014, 10:50:33 AM »
Hi all,
Pixel pitch is commonly defined as the center-to-center distance of the photosensitive detector elements on your imaging sensor. It is one of the factors that influence the performance of your imaging system, but there are many more. I also wondered about all the factors influencing the process of imaging with digital sensors some time ago and dug a bit into it. Other important aspects are:

- numerical aperture
- aperture / f-stop
- ratio of wavelength to aperture diameter
- light diffraction / Airy disks
- Rayleigh criterion
- modulation transfer function (MTF), image contrast, object contrast
- signal-to-noise ratio of your sensor
- distortion characteristics of your lens
- etc.

Here is one rule of thumb to approximate the separation distance at which two objects can still be resolved by a common camera:

    x = 1.22 * L * (f / D)        with        f / D = 1 / (2 * NA) = f-number

    x  = distance between two objects
    L  = wavelength
    f  = focal length
    D  = aperture diameter
    NA = numerical aperture
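Plugging illustrative numbers into this rule of thumb (green light at 550 nm, two typical f-numbers) gives a feel for the scale of the diffraction limit:

```python
def min_resolved_distance(wavelength_m, f_number):
    """Rule of thumb: x = 1.22 * L * (f / D) = 1.22 * L * f-number."""
    return 1.22 * wavelength_m * f_number

L = 550e-9  # green light, 550 nm
for N in (2.0, 8.0):
    x = min_resolved_distance(L, N)
    # ~1.34 micrometres at f/2, ~5.37 micrometres at f/8
    print(f"f/{N:g}: ~{x * 1e6:.2f} micrometres")
```

Compare these values with the pixel pitch of your sensor: once the diffraction spot grows beyond the pitch, extra megapixels no longer add resolution.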

Some conclusions from all that stuff to consider:

- diffraction sets a fundamental resolution limit, independent of the number of megapixels
- this limit depends only on the f-number of the lens and on the wavelength of the light being imaged
- diffraction reduces small-scale contrast by causing Airy disks to partially overlap
- the tiny pixel sizes of high-megapixel point-and-shoot cameras can only be exploited with high-quality lenses and f-numbers < 2

Or in other words: the sensor in your camera body is one thing to consider, but there is another one in front of it, the lens with its aperture, which is not of minor importance. Without a good lens, even the best sensor will only yield average image quality.

There is a saying amongst photographers: "Invest in glass!" :-)

Cheers and have fun


