Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Arie

Pages: 1 ... 3 4 [5] 6 7 ... 9
General / Re: Using Global (ASTERDEM) as ground control points?
« on: August 10, 2016, 07:19:38 PM »
Quick answer: No, it won't be valid. :P

This is mainly due to the accuracy and resolution of AsterDEM (and probably WorldDEM too). According to the specs, AsterDEM has an accuracy in the range of several meters.

And if you take a critical look at the accuracy assessment, they compare it to SRTM. These datasets come nowhere near the resolution and accuracy of aerial imagery.

You should rather try to find out why your DEM is concave/convex. 10 minutes of googling will help you out ;)

Now that DNG support has been added, I was wondering where exactly the benefits are. I'd guess processing utilises the full bit depth of the input images, so it should also be possible to export textures as 16-bit imagery.

But I've noticed that no external corrections, such as vignetting removal, noise reduction, white balance etc., are applied when viewing the images in PS. Might it be possible to support DNG output (similar to Lightroom's DNG output after panorama stitching) so noise reduction, WB etc. can be done in post?

Any other advantages of using DNG in comparison to 16-bit TIFF, except file size?

General / Re: Images acquired from high distance
« on: June 26, 2016, 01:57:44 PM »
I'd think a baseline of 10-15 m for shooting at a distance of 1 km is way too small. Additionally, 180 mm on a 1/2.3" sensor equals roughly a 1008 mm focal length on a full-frame sensor.

I'm pretty sure you will not get decent results regardless of the software you use.
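To put rough numbers on that (the 5.6x crop factor for a 1/2.3" sensor and the 1:10 base-to-distance rule of thumb are approximations, not exact specs):

```python
# Rough photogrammetry sanity check for the setup above.
# Assumptions: ~5.6x crop factor for a 1/2.3" sensor, and the commonly
# cited rule of thumb that the base-to-distance ratio should be ~1:10.
focal_mm = 180          # actual focal length
crop_factor = 5.6       # approximate for a 1/2.3" sensor
equiv_mm = focal_mm * crop_factor
print(f"35mm-equivalent focal length: {equiv_mm:.0f} mm")   # ~1008 mm

baseline_m = 15         # camera separation
distance_m = 1000       # distance to the object
print(f"base-to-distance ratio: 1:{distance_m / baseline_m:.0f}")
```

A 1:67 ratio at the best case of 15 m baseline is far below the 1:10 rule of thumb, which is why the depth measurements degenerate.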

General / Re: Spurious holes in mesh
« on: May 28, 2016, 01:49:44 PM »
You could also give CloudCompare a try. Its Poisson reconstruction plugin is quite powerful, but finding the right parameters is important. Export the point cloud as PLY with normals, import it into CloudCompare, mesh it, and reimport the mesh to Agisoft for texturing.

Regarding the parameters for the Poisson reconstruction, setting the octree depth to 13 or 14 yields fairly high-resolution meshes. The samples-per-node value should be set higher than the default for noisy point clouds.

Good luck.

General / Re: Some general questions about Photoscan
« on: April 27, 2016, 01:34:05 PM »
1. PDF files are not really optimized for displaying large amounts of points/polygons, so a reduction is usually necessary. The red color is an indicator of a selection; when your mesh/points turn red, it means you have selected them.

2. The accuracy of the point cloud depends on quite a few different factors. In lectures, I usually divide them into internal and external influences, internal being everything involving the camera. To name a few: sensor noise (somewhat related to pixel pitch), lens distortion, image sharpness (AA filter, Bayer pattern vs. monochrome, lens quality), depth of field vs. diffraction etc.
External factors would be things like image overlap, the type of surface reflectance (diffuse, specular etc.), texture and structure of the surface, ground sampling distance etc.
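The depth-of-field vs. diffraction point can be made concrete with a little arithmetic. The numbers below (green light, a ~4.9 µm pixel pitch, and the rule of thumb that diffraction becomes visible once the Airy disk spans about two pixels) are illustrative assumptions, not hard limits:

```python
# Stopping down increases depth of field but also the Airy disk
# diameter (~2.44 * wavelength * f-number). Once the disk spans
# roughly two pixels, fine detail starts to blur.
wavelength_um = 0.55        # green light, micrometers
pixel_pitch_um = 4.9        # e.g. a typical 36 MP full-frame sensor

for f_number in (4, 8, 11, 16):
    airy_um = 2.44 * wavelength_um * f_number
    flag = "diffraction visible" if airy_um > 2 * pixel_pitch_um else "ok"
    print(f"f/{f_number}: Airy disk ~ {airy_um:.1f} um -> {flag}")
```

So on a high-resolution sensor the sweet spot is often only a stop or two from wide open, which constrains how much depth of field you can buy by stopping down.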

3. Not quite sure what you mean by viewing point coordinates. AFAIK you can't pick a single point and view its coordinates. You can do that with free software such as CloudCompare.

4. Well, you can decide the geometric resolution of the orthoimage, so if you know a pixel in your original image has a GSD of 1 cm, just set 1x1 cm in the output tab.
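For reference, the GSD itself follows directly from pixel size, focal length and distance; a quick sketch (all numbers are made up for illustration):

```python
# Ground sampling distance: the ground footprint of a single pixel.
# GSD = pixel_size / focal_length * distance (consistent units).
pixel_size_mm = 0.005     # 5 um pixel pitch (illustrative)
focal_mm = 50             # lens focal length
height_m = 100            # flying height / object distance

gsd_m = pixel_size_mm / focal_mm * height_m
print(f"GSD ~ {gsd_m * 100:.1f} cm per pixel")   # 1.0 cm
```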

Hope that helped.

General / Re: New FF camera, but have some questions..
« on: April 04, 2016, 05:59:45 PM »
The amount of megapixels determines how much detail can be captured in a single shot. Of course you can get the same amount of detail with a lower-resolution camera, but you would have to take more pictures while being closer to the object of interest.
Furthermore, with more megapixels (smaller pixel pitch), lens defects show more clearly; this means better lens quality is needed to make optimal use of the increased resolution. For example, unsharp corners might not show on a 10 MP sensor while being clearly visible on a 36 MP sensor.
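The trade-off in the first paragraph can be quantified: linear detail scales with the square root of the megapixel count, so matching a 36 MP camera with a 10 MP one means shooting closer and taking more photos. A rough sketch (sensor sizes assumed equal):

```python
import math

# To match the per-shot detail (GSD) of a high-res camera with a
# lower-res one, move closer by the square root of the megapixel
# ratio; the photo count to cover the same area grows by the full
# ratio. Illustrative numbers only.
mp_high, mp_low = 36, 10
linear_ratio = math.sqrt(mp_high / mp_low)
print(f"shoot {linear_ratio:.1f}x closer")            # ~1.9x
print(f"take roughly {mp_high / mp_low:.1f}x more photos")
```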

In general, I would recommend spending at least as much on the lenses as on the camera body. The value of a camera body decreases far more rapidly than that of lenses.

The 35mm f2.8 is a lovely, tiny lens with very good image quality. Also, what stihl said.

General / Re: 4K video instead of image files?
« on: April 04, 2016, 05:48:26 PM »
AFAIK, one cannot import video files into Agisoft. There is a bunch of software out there which can extract single frames at a defined time interval (for example, ffmpeg).
In general, video footage is not the best option due to the rolling shutter effect, which most cameras exhibit. Since this type of distortion cannot be corrected, it leads to worse results.
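With ffmpeg, the frame-extraction step can be scripted. A sketch that just assembles the command line ("input.mp4" is a placeholder; the `fps` filter and `-qscale:v` are standard ffmpeg options):

```python
import shlex

# Build the ffmpeg call that extracts one frame every two seconds
# (fps=0.5). Run it with subprocess.run(cmd) once ffmpeg is installed.
cmd = [
    "ffmpeg",
    "-i", "input.mp4",       # placeholder video file
    "-vf", "fps=0.5",        # 0.5 frames per second = 1 frame / 2 s
    "-qscale:v", "2",        # high JPEG quality
    "frame_%05d.jpg",        # ffmpeg's sequential naming pattern
]
print(shlex.join(cmd))
```

Pick the interval so that consecutive frames still have plenty of overlap at your camera speed; a too-sparse extraction hurts alignment just as much as rolling shutter does.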


General / Re: FF camera questions: a7Rii vs. D750/810
« on: March 30, 2016, 03:49:30 PM »
It is possible to deactivate image stabilisation, which matters because stabilisation can have a negative impact on image sharpness when shooting from a tripod. Here's a test from Jim Kasson:

General / Re: FF camera questions: a7Rii vs. D750/810
« on: March 30, 2016, 12:58:08 PM »
While I have not used the D800 / Sony A7 II, I do have an A7R as well as a Nikon D600. To be frank, I am currently selling off most of my Nikon gear.
As you mentioned, using a bulky DSLR with good lenses, such as the Sigma Art 35mm f1.4, can be quite cumbersome when using it all day (though it is a free workout). I personally really love the EVF of the Sony A7R, which is easier to use than an OVF since you see what you get, making precise focusing, for example, much easier. One of the other main reasons for switching is that I can use the A7R airborne as well as terrestrially; the Nikon is just too heavy for that (and I don't need a mirror when using it with my UAV).
Of course, there are some drawbacks to Sony's new Alpha series; since it's a comparatively new system, the available lenses are fairly expensive. Sony/Zeiss has been on a roll releasing wonderful lenses, such as the Batis (25mm, 85mm), the Loxias (21mm, 35mm, 50mm) or "Zony" lenses such as the small 35mm f2.8 or the incredibly sharp 55mm f1.8.
Furthermore, there are hundreds of lenses which can be adapted, though most adapted wide-angle lenses do not work as well as native ones (corner smearing, color shift).
But of course Nikon can hold its own regarding lenses; Sigma Art and Zeiss Milvus/Otus are some of the best lenses out there. And one has a wide range of legacy lenses, which can be quite cheap (compared to the others mentioned).

Regarding the comparison of ISO, dynamic range and resolution between the D810 and the Sony A7RII, you should just check the usual photographic sites such as DxOMark, DPReview etc. to decide whether the difference between the two is negligible for you (spoiler alert: the difference between the two is not huuuge!).

At the end of the day, it would seem to me that choosing one system over another is largely dependent on your personal preferences. Both mentioned cameras are top of the line regarding full-frame image quality. Sony has not quite matured yet (in my opinion) but is on a roll regarding technological innovation.


General / Re: Advice for Workflow on big area in city needed
« on: March 29, 2016, 12:56:39 PM »
Hi Luciano,
sorry for the late reply.

Are you talking about registering a dataset without common GCPs?

General / Re: Room scan - Getting rough uneven walls on mesh
« on: February 27, 2016, 02:02:36 PM »
Shouldn't be a problem, as long as you keep the orientation and scale of the two datasets aligned; sometimes the units or the axes differ between software packages.

If your clean model has a new UV map, just make sure to check "keep uv" during the texturing stage (under Mapping mode).

General / Re: Quality of mesh question
« on: February 27, 2016, 01:57:02 PM »
You should check out the "Face and Body Scanning" section on this forum for more tips on faces.

The noise is most likely the result of using just a single camera. The person would have to hold perfectly still, which is practically impossible.

Edit: you might want to try aggressive filtering in the dense reconstruction step. It might smooth things out a little more. Additionally, you could use the "smooth mesh" command.

General / Re: Advice for Workflow on big area in city needed
« on: February 10, 2016, 06:27:01 PM »
You can use Agisoft to texture laserscan data.

After aligning and referencing the images, just import the referenced mesh of the lidar data and texture it.
Just make sure that the referencing of the lidar data and the images is accurate (using DGPS or a total station for GCPs). If the alignment between the datasets is not good enough, there will be errors in the texture map.

In case the alignment based on reference markers isn't precise enough, you can export a mesh of the dense reconstruction and align it using an ICP approach such as the one implemented in CloudCompare.

I do this regularly and it works great.
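ICP alternates between matching closest points and solving a rigid alignment for the current matches. The inner rigid-fit step (the Kabsch/Procrustes solution) can be sketched in a few lines of numpy; this is not CloudCompare's implementation, just an illustration that assumes correspondences are already known:

```python
import numpy as np

def rigid_fit(src, dst):
    """Best rigid transform (R, t) mapping src onto dst, assuming
    row i of src corresponds to row i of dst (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Demo with made-up points: rotate 30 deg about z, shift, then recover.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.default_rng(0).random((50, 3))
dst = src @ R_true.T + np.array([1.0, 2.0, 0.5])

R, t = rigid_fit(src, dst)
print(np.allclose(R, R_true))  # True
```

Full ICP wraps this in a loop with nearest-neighbour matching, which is why a reasonable initial alignment (from the reference markers) helps it converge.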

General / Re: Best strategy to scan thin objects on turn table
« on: January 27, 2016, 11:19:53 PM »
The results you posted clearly show some limitations of photogrammetry. With these types of objects, I would guess even structured-light scanners would struggle (depending on the accuracy requirements and preparation work).
Two things that immediately come to mind are the uniform surface color of the glasses frame and the shininess of the surface. Each alone is bad, combined they are even worse, and I don't think you will be able to achieve any kind of good results.

My suggestion would be to coat the frame with some diffusely reflecting substance (spray paint, for instance) and additionally try to get some texture onto it (splashing some acrylic paint or similar).

For thin objects there has to be a fairly high amount of overlapping images, since the surface changes rapidly when the rotation changes only slightly (especially in the vertical direction). It is a good idea to have a well-textured background, as long as it stays in the same position relative to the glasses.

Also try optimizing the framing of the glasses in the image; they should be covered by as many pixels as possible.

Good luck! Quite a challenging subject you have there.

Feature Requests / Re: Refining generic alignment
« on: January 27, 2016, 07:02:52 PM »
So, I've tried to replicate the situation and stumbled across some issues.

Since the original dataset was too large, I just ran several alignments on a subset of the imagery in the area where larger errors occurred. Interestingly enough, I was suddenly not able to align all the images using the "generic" option.

Attached are some screenshots; the original "generic" alignment was done with a version before 1.2. Here one can see the error in the dense reconstruction that occurs between two rows that do not have a lot of overlap. Next to that screenshot is the dense reconstruction based on the "disabled" option. The errors are gone.

With 1.2.3 I can't get the two rows to align at all using the generic option. The "disabled" option, on the other hand, aligns them without a problem, and the dense reconstruction shows no errors.

The difference time-wise is 7 h 28 min for disabled vs. 50 min for generic, which brings me back to my original question: any chance of exposing the generic parameters (for example, as advanced settings)?
