

Topics - BobvdMeij

1
General / Export initial camera positions PRIOR to alignment
« on: August 16, 2019, 10:53:30 AM »
I reckon this may be somewhat of a silly question, but is there a way to export the initial camera positions/orientations (as stored and imported by Agisoft from the images' metadata) prior to alignment? I understand there's an 'Export Cameras' option for exporting the calculated positions, but it is only available after alignment and only when a georeferenced chunk exists. I'm not interested in the computed positions, though. All I want is to import my images into Agisoft, use the built-in converter to convert the WGS84 information to another CRS, and subsequently export the resulting converted coordinates to a text/CSV file.

Thanks in advance!
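There is no GUI button for this as far as I know, but the Python console can dump the source (pre-alignment) reference positions. Below is a minimal sketch assuming the Metashape/PhotoScan scripting API; the target EPSG code and output filename are placeholders, and the conversion callable is injected so the writing logic works outside Metashape too:

```python
import csv

try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

def write_positions(rows, out_path, transform):
    """Write (label, (x, y, z)) rows to CSV, converting each point with
    `transform` (a callable), e.g. a WGS84 -> local CRS conversion."""
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["label", "x", "y", "z"])
        for label, loc in rows:
            if loc is None:
                continue  # image carried no GPS metadata
            x, y, z = transform(loc)
            w.writerow([label, x, y, z])

if Metashape is not None:
    chunk = Metashape.app.document.chunk
    dst = Metashape.CoordinateSystem("EPSG::28992")  # hypothetical target CRS
    conv = lambda v: tuple(Metashape.CoordinateSystem.transform(v, chunk.crs, dst))
    write_positions(((c.label, c.reference.location) for c in chunk.cameras),
                    "initial_positions.csv", conv)
```

As far as I understand, `Metashape.CoordinateSystem.transform` is the same converter the GUI uses, so the exported values should match what the Reference pane would show after switching CRS.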

2
General / Resize Bounding Box: Fit-to-shape
« on: April 08, 2019, 03:39:29 PM »
Dear all,

I'm sure we can all agree that resizing the Bounding Box correctly can be a cumbersome process. While this might be relatively straightforward for smaller datasets, it can be a real pain to size it correctly when working with larger ones. On the one hand you have to zoom in far enough to pinpoint the exact location of each corner with respect to the model; at the same time you need to be zoomed out completely to keep track of the other three corners moving as one is relocated. The fact that the blue dots and white lines (indicating the corners and connecting edges respectively) can be really tricky to discern, especially with a sparse/dense cloud displayed in the background, makes this process even harder. Not to mention it can be rather troublesome to rotate the box to align correctly with linear elements in the scene.

This got me thinking there must be an easier (automated) way. More specifically, would it be possible to resize the bounding box to the minimum bounding rectangle of the spatial extent or envelope of one or more selected features, such as markers or shapes? See the attached figure for a simple demonstration of what such automated resizing could look like. Such 'Minimum Bounding Geometry' operations are quite common in most GIS packages. This would also ensure the correct rotation of the bounding box with respect to the area one wishes to process further.
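For what it's worth, the envelope fit can already be approximated from the Python console. The sketch below assumes the Metashape/PhotoScan API, where marker positions live in the chunk's internal coordinate frame; it fits an axis-aligned box around the markers and leaves rotation fitting out:

```python
try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

def bounding_box(points, margin=1.1):
    """Axis-aligned bounding box of 3D points: returns (center, size),
    with the size inflated by `margin` so nothing sits exactly on an edge."""
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    center = tuple((a + b) / 2 for a, b in zip(lo, hi))
    size = tuple((b - a) * margin for a, b in zip(lo, hi))
    return center, size

if Metashape is not None:
    chunk = Metashape.app.document.chunk
    pts = [tuple(m.position) for m in chunk.markers if m.position is not None]
    center, size = bounding_box(pts)
    region = chunk.region
    region.center = Metashape.Vector(center)
    region.size = Metashape.Vector(size)
    # region.rot could additionally be derived from the chunk transform to
    # align the box with the georeferenced axes; left unchanged here.
    chunk.region = region
```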

3
General / Import CSV file - Camera orientation data is lost
« on: March 01, 2019, 01:59:43 PM »
Dear all,

We are dealing with an issue that results in the columns with camera orientation being lost upon importing a new CSV file containing adjusted camera position and accuracy data.

In short this is what we do:
- Import cameras + EXIF in Photoscan, including position (lon, lat, GPS height), accuracy (for lon, lat, GPS height) and orientation (yaw, pitch, roll) information;
- Using external software we transform the original image position data stored in the EXIF/metadata to a local coordinate system (easting, northing, altitude);
- The latter yields a CSV file with 7 columns (ID, easting, northing, altitude, easting accuracy, northing accuracy, altitude accuracy);
- We then import the resultant CSV into Photoscan.

As the camera IDs in the CSV match the camera IDs already present in Agisoft, the imported data correctly overwrites the Long/Lat/GPS height columns in the Reference pane. The same goes for the columns storing the camera accuracy information. However, the data in the yaw/pitch/roll columns that WAS still present prior to importing the CSV is lost once the import completes. See the attached screendump for clarification.

Question: how can we preserve the original yaw/pitch/roll camera orientation data when importing a CSV file? We've already unchecked the 'Rotation' box in the Import CSV window, hoping it would leave the orientation data as is, but the orientation is discarded nonetheless.

Thanks in advance!
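Until the 'Rotation' checkbox behaves as hoped, a workaround is to snapshot the rotations before the import and write them back afterwards. A sketch, assuming the Metashape/PhotoScan scripting API; the `importReference` call, its column string and the CSV filename are illustrative only (older PhotoScan versions named the method `loadReference`):

```python
try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

def snapshot_rotations(cameras):
    """Remember each camera's reference yaw/pitch/roll, keyed by label."""
    return {c.label: c.reference.rotation
            for c in cameras if c.reference.rotation is not None}

def restore_rotations(cameras, saved):
    """Put the remembered yaw/pitch/roll back after the CSV import."""
    for c in cameras:
        if c.label in saved:
            c.reference.rotation = saved[c.label]

if Metashape is not None:
    chunk = Metashape.app.document.chunk
    saved = snapshot_rotations(chunk.cameras)
    chunk.importReference("positions.csv", format=Metashape.ReferenceFormatCSV,
                          columns="nxyzXYZ", delimiter=",")  # hypothetical layout
    restore_rotations(chunk.cameras, saved)
```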

4
General / Extraction of XMP/EXIF data for editing outside Metashape
« on: January 04, 2019, 10:34:50 PM »
Dear all,

We've found that the default transformation conversions implemented in Photoscan/Metashape lack some features and therefore do not meet our needs. Hence we are considering performing the transformation outside of Metashape using dedicated third-party software and subsequently overwriting the original WGS84 image coordinates in Metashape by reimporting the transformed coordinates.

The transformation itself isn't expected to be an issue, but we're still seeking a straightforward method to extract the values from the desired fields stored in our Phantom 4 RTK's XMP/EXIF data. We're curious to learn what options are out there.
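Since DJI embeds its per-image fields as an XMP packet inside the JPEG, a plain byte scan is often enough, with no EXIF library needed. The sketch below rests on that assumption; the `drone-dji:` tag names you pass in are hypothetical and should be checked against your own files (exiftool will list the real ones):

```python
import re

def read_xmp(path):
    """Return the raw XMP packet embedded in a JPEG, or None.
    DJI stores its per-image RTK fields in an <x:xmpmeta> block."""
    data = open(path, "rb").read()
    m = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    return m.group(0).decode("utf-8", "replace") if m else None

def dji_fields(xmp, names):
    """Pull drone-dji:* values out of an XMP packet; handles both the
    attribute form (drone-dji:Tag="v") and the element form."""
    out = {}
    for name in names:
        m = (re.search(rf'drone-dji:{name}="([^"]*)"', xmp) or
             re.search(rf"<drone-dji:{name}>([^<]*)</drone-dji:{name}>", xmp))
        if m:
            out[name] = m.group(1)
    return out
```

Usage would look like `dji_fields(read_xmp("DJI_0001.JPG"), ["GpsLatitude", "GpsLongitude", "AbsoluteAltitude", "RtkStdLon", "RtkStdLat", "RtkStdHgt"])`, again with the field names to be verified per firmware version.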

5
General / Filtering Ground Points on sloped surfaces
« on: August 10, 2018, 12:06:25 PM »
How does one go about filtering non-ground points on sloped surfaces, such as small lumps of vegetation on water barriers (dikes)? The Classify Ground Points tool works quite okay on horizontal surfaces, but its methodology seems to fail as surfaces become increasingly angled.

Which makes sense, as the tool supposedly splits the point cloud into cells of a user-defined size and then detects the lowest point in each cell to arrive at an approximated DTM. All other points within each cell are then validated against the user-specified distance and angle thresholds (relative to the approximated DTM).

The first of these two steps, however, becomes flawed when the surface is a slope. The lowest detected point within each cell will always lie at the downhill end of the slope. This point can be far from representative of ground level. In fact, the lowest point may even be vegetation, while bare soil on the uphill side of the cell sits higher still.
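That failure mode is easy to reproduce numerically. The toy example below (plain Python, with a hypothetical 10 m cell on a 20% slope) shows how the per-cell lowest point ends up at the downhill edge, so any distance threshold loose enough to keep genuine uphill ground also keeps a 0.3 m shrub:

```python
def lowest_per_cell(points, cell_size):
    """Group (x, y, z) points into square XY cells and return, per cell,
    the point with the lowest z -- the seed the classifier starts from."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in cells or z < cells[key][2]:
            cells[key] = (x, y, z)
    return cells

# A bare 20% slope sampled every metre, plus one 0.3 m shrub at x = 9.
ground = [(x, 0.0, 0.2 * x) for x in range(10)]
shrub = (9.0, 0.0, 0.2 * 9 + 0.3)
points = ground + [shrub]

seed = lowest_per_cell(points, cell_size=10.0)[(0, 0)]
# The seed is the downhill corner (z = 0). The shrub top sits about 2.1 m
# above it, yet only 0.3 m above the actual ground at x = 9 -- so a max
# distance large enough to keep real uphill ground also keeps the shrub.
```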

I have literally run countless iterations, each time varying one or more of the three user-defined criteria. I have tried cell sizes between 0.1 m and 20 m, angles between 2 and 40 degrees, and distance values as low as 0.01 m or as high as 1 m. Strangely enough, the output is hardly any different no matter what I try. Furthermore, lumps of vegetation that are easily discernible as non-ground by the naked eye are still classified as ground points regardless of the parameters set.

Attached is a screendump of the point cloud displaying a subsection of the water barrier; please note how some sections of the vegetated cover clearly stand out from the surrounding area. This contrast is particularly well discernible when the point cloud is visualized by classes, as this enhances the shaded relief. As you can see, however, the vast majority of the clearly protruding vegetation is still classified as ground (brown colour) and NOT as non-ground (white).

6
General / Your opinion on USGS Agisoft Processing Workflow
« on: August 06, 2018, 01:40:18 PM »
Dear all,

While randomly scouting the internet in search of clarification of certain terminology used in Agisoft, I came across this (also see the attachment below) seemingly well-structured Agisoft Photoscan workflow formulated by the USGS (United States Geological Survey) in March 2017. The USGS being a globally renowned organization, I like to believe considerable thought and extensive testing and validation went into this document.

I personally very much like the column named 'Function', which describes what each step supposedly does and how it affects the output, especially because such information is often lacking in the rather technical and somewhat limited explanations in Agisoft's official user manual. This is particularly true for the various and seemingly important Gradual Selection stages. I believe these rank among the most frequently discussed themes on this forum, although a comprehensive, understandable explanation of what Gradual Selection exactly does and how it should be applied is still missing.

I'm eager to learn what you all think of this USGS workflow, how it relates to your own and if you could perhaps comment on why certain steps are executed in this order using particular settings. I’m particularly intrigued by the presented order of the Marking of GCPs, Camera Optimization and the subsequent Optimization Parameters.

I personally always use the following methodology:

1.   Align Photos
2.   Mark Ground Control and Checkpoints
3.   Optimize Cameras (checking all parameters except p3 and p4)
4.   Gradual Selection: Reprojection Error at 0.5 > delete points > Optimize Cameras (check all except p3/p4)
5.   Gradual Selection: Reconstruction Uncertainty at 10 > delete points > Optimize Cameras (check all except p3/p4)
6.   Gradual Selection: Projection Accuracy at 2-3 > delete points > Optimize Cameras (check all except p3/p4)
7.   Dense Cloud > DEM > Orthomosaic
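For reference, steps 4-6 of the list above can be scripted so that each trial run is at least reproducible. A sketch assuming the PhotoScan/Metashape scripting API (in Metashape 1.7+ the `PointCloud` class was renamed `TiePoints`); the thresholds are the ones from my list, not universal constants:

```python
try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

# (criterion attribute name, threshold) in the order listed above;
# the values are project-dependent starting points.
GRADUAL_STEPS = [
    ("ReprojectionError", 0.5),
    ("ReconstructionUncertainty", 10.0),
    ("ProjectionAccuracy", 3.0),
]

def run_gradual_selection(chunk, steps=GRADUAL_STEPS):
    for name, threshold in steps:
        f = Metashape.PointCloud.Filter()
        f.init(chunk, criterion=getattr(Metashape.PointCloud.Filter, name))
        f.selectPoints(threshold)
        chunk.point_cloud.removeSelectedPoints()
        # all parameters except p3/p4, as in the list above
        chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                              fit_k1=True, fit_k2=True, fit_k3=True,
                              fit_p1=True, fit_p2=True)

if Metashape is not None:
    run_gradual_selection(Metashape.app.document.chunk)
```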

The USGS workflow, however, employs a much more complex procedure. Rather than running ALL Gradual Selection stages AFTER marking the GCPs (as I do), the USGS workflow applies Reconstruction Uncertainty and Projection Accuracy BEFORE any GCPs are marked. Only the Reprojection Error step is executed AFTER GCPs are included. Also note that the USGS workflow suggests changing the Tie Point Accuracy setting in the Reference Settings from 1.0 to 0.1 along the way. The workflow furthermore suggests checking/unchecking different camera parameters for distinct Gradual Selection stages, rather than keeping them the same across the board as I (and I believe many others) do.

Again, the workflow seems well thought out, but I still cannot wrap my head around certain details. I'm hoping some of you, and Agisoft's developers in particular, can reflect on the matter!

Thanks in advance.

Bob

7
General / Export DEM (.TIF) for importing in Civil 3D
« on: August 01, 2018, 03:20:18 PM »
Dear all,

I have been trying to import several DEM files, created in Agisoft and exported as .TIF files, into our Civil 3D environment. Unfortunately I haven't been very successful: Civil 3D does not seem to recognize the file at all.

Importing in Civil 3D using the MAPIINSERT command produces an error message stating that <filename>.tif was not found or is not valid. Considering the DEM file is well over 3 GB, and knowing Civil 3D dislikes such large files, I exported the DEM at a coarser resolution to produce a considerably smaller file of only 300 MB. Unfortunately, importing the smaller TIF produces an identical error message.

Searching the internet showed that importing .TIF files produced in Agisoft (or P4D for that matter) is often problematic; the issues described above are apparently well known to many others who have tried this before. Following some suggestions, I imported the .TIF file into QGIS (QuantumGIS) and exported the DEM to a new .TIF file at the same resolution. Again, however, this does not resolve the problem and importing into Civil 3D remains impossible (the same error message appears).

It may be worth noting that importing the 300 MB .TIF file into QGIS and then saving/exporting it again to a new .TIF file (at the very same resolution) suddenly produces a 2.4 GB .TIF file.

Any help is much appreciated, thanks in advance!
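One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): a DEM well over 3 GB may have been written as BigTIFF, since classic TIFF cannot exceed 4 GB, and older MAPIINSERT builds reportedly cannot read BigTIFF; a re-export can silently inherit that flavor. The first four header bytes tell the two apart:

```python
import struct

def tiff_flavor(path):
    """Classify a file by its TIFF header: 'classic', 'bigtiff', or None.
    Classic TIFF stores version number 42, BigTIFF stores 43."""
    with open(path, "rb") as f:
        head = f.read(4)
    if len(head) < 4:
        return None
    if head[:2] == b"II":          # little-endian byte order
        version = struct.unpack("<H", head[2:4])[0]
    elif head[:2] == b"MM":        # big-endian byte order
        version = struct.unpack(">H", head[2:4])[0]
    else:
        return None
    return {42: "classic", 43: "bigtiff"}.get(version)
```

If this reports 'bigtiff', re-exporting below the 4 GB classic-TIFF limit, ideally with compression enabled (a lost compression setting would also explain the 300 MB to 2.4 GB growth after the QGIS round-trip), is worth a try.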

8
General / Pointcloud: Tackling Noise
« on: May 08, 2018, 12:43:28 PM »
Dear all,

We've recently started testing the processing of image sets with more complex geometries, i.e. containing both nadir and oblique shots at different camera angles. Unfortunately we're somewhat disappointed thus far with respect to both the processing time AND the resulting (dense) point cloud. The disappointment grows when we compare both the processing time and the output with another photogrammetry suite, which I will not name here for obvious reasons.

Attached you'll find two screendumps of a point cloud depicting a bridge, the first processed using Agisoft and the other using the different software. The differences are obvious: the Agisoft point cloud shows significant amounts of noise around the beams, whereas the point cloud created with the other software shows rather sharp edges with little or no noise. This likely follows from background content in the original images (ground and/or sky behind the structure) that Agisoft, for some reason, unsuccessfully tries to match. Furthermore, Agisoft is hardly capable of reconstructing the metal pipes on either side of the bridge, whereas the other package shows no such trouble at all!

How can I get rid of this in an efficient manner? I'm aware of the option to mask images in Agisoft, but given the rather complex geometry of the object under study and an image set containing well over 1,000 images this is hardly an option, as it would likely take several weeks (not to say months)! It is striking that the other software suite seems very capable of removing/filtering such background noise automatically, as no image masking was done to produce the output in the second image.

It's also worth mentioning that we were forced to downsize the original image set (1,100 images) to about 800 images for processing in Agisoft, as the full set would not process at all OR would take well over a week (despite our decent PC setup). The other software package, however, had no problem aligning all the images and even did so in less than 24 hours!
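Short of masking, one generic option is to export the cloud and run a statistical outlier filter over it, the same idea CloudCompare and PCL call SOR. A plain-Python sketch (O(n²), so purely illustrative; a KD-tree version via scipy or Open3D is needed for real clouds):

```python
import math

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the cloud-wide mean of that
    quantity by more than std_ratio standard deviations."""
    means = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        means.append(sum(ds[:k]) / k)
    mu = sum(means) / len(means)
    sigma = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    cutoff = mu + std_ratio * sigma
    return [p for p, m in zip(points, means) if m <= cutoff]
```

Newer Metashape releases also gained a per-point confidence filter for the dense cloud, which targets exactly this kind of background noise, though I have not verified in which version it appeared.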

9
General / Exporting: ESTIMATED Checkpoint Marker positions
« on: November 29, 2017, 05:07:05 PM »
Hello there,

Is there any way to export the estimated coordinates of Checkpoint markers? Or, put differently, the estimated positions of those markers that were left unchecked during Camera Optimization.

I found it is perfectly possible to export the in-situ measured/source coordinates of all markers, Control and Check alike, using the Export Markers option from the Tools menu or the Export option in the Reference pane. However, when exporting marker coordinates via the Reference pane's Export option, the 'estimated' fields for Checkpoint markers are left empty.

This makes some sense considering that the fields for these coordinates are also left empty in the Reference pane (when switched to View Estimated instead of View Source at the top), but I still believe these coordinates must be stored somewhere. How else would Agisoft be able to calculate a Total Error for the Checkpoints? The estimated coordinates must be stored somewhere, right?
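They are indeed stored: each triangulated marker has an internal `position` that only needs the chunk transform and a CRS projection to become an 'estimated' coordinate. A sketch assuming the Metashape/PhotoScan scripting API; the pure helper mirrors what `Matrix.mulp()` does to a point so the arithmetic can be checked standalone:

```python
try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

def apply_transform(matrix4, point3):
    """Apply a 4x4 homogeneous transform (row-major nested lists) to a
    3D point -- the operation chunk.transform.matrix.mulp() performs."""
    x, y, z = point3
    res = [row[0] * x + row[1] * y + row[2] * z + row[3] for row in matrix4]
    return (res[0] / res[3], res[1] / res[3], res[2] / res[3])

if Metashape is not None:
    chunk = Metashape.app.document.chunk
    T = chunk.transform.matrix
    for marker in chunk.markers:
        if marker.position is None:
            continue  # marker was never triangulated
        # internal coords -> geocentric -> chunk CRS
        est = chunk.crs.project(T.mulp(marker.position))
        print(marker.label, est.x, est.y, est.z)
```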

10
General / Georeferencing using GCPs: Optimize -vs- Update tool
« on: November 27, 2017, 06:02:30 PM »
Dear all,

In a quest to optimize our workflow I have lately been 'playing' around with the Optimize Cameras tool, to evaluate how fitting different parameters (or not) influences our model and the associated accuracies. This was partially motivated by one of my colleagues asking me what type of transformation Agisoft applies when optimizing the model based on markers/GCPs, and I found myself not knowing the answer.

This made me dive into the User Manual and forums a bit more and fiddle around with a multitude of settings in the software. It also led me to this forum post, wherein it is stated that the Update tool applies an affine (or linear) transformation (translation, scale, rotation) to update the model based on markers. Strikingly enough, I had never really located, understood or used the Update tool before. Instead, I always 'optimized' my models based on markers using the Optimize Cameras tool. Under 'Optimization of camera alignment' the user manual states the following:

Quote
During georeferencing the model is linearly transformed using 7 parameter similarity transformation (3
parameters for translation, 3 for rotation and 1 for scaling). Such transformation can compensate only a
linear model misalignment. The non-linear component can not be removed with this approach. This is
usually the main reason for georeferencing errors.

Possible non-linear deformations of the model can be removed by optimizing the estimated point cloud
and camera parameters based on the known reference coordinates. During this optimization PhotoScan
adjusts estimated point coordinates and camera parameters minimizing the sum of reprojection error and
reference coordinate misalignment error.

I am not entirely sure whether the former paragraph refers to the Alignment step, the Update tool and/or some other step (any elaboration is appreciated!), but I find the latter paragraph particularly interesting. Although it is not explicitly mentioned, it seems to refer to the Optimize Cameras tool. It furthermore suggests that this tool applies a non-linear transformation, making it a fundamentally different tool from the Update tool.

I subsequently set up a test dataset. After importing all (257) images, running Alignment and marking a total of 17 Ground Control Points and 13 Check Points, I duplicated the chunk to ensure the base dataset for all subsequent trials was identical. I then applied the Update tool (once) and a variety of Optimize Cameras runs (fitting different parameters each time), each on a separate chunk.

As expected, the Update tool hardly altered my model. Apart from changing the initial tie point coordinates from WGS84 (from the cameras' EXIF data) to the GCPs' coordinate system, nothing striking happened. Relative camera positions seemingly remained unchanged, as did the relative distribution of tie points and the overall height gradient across the model. The implications of the different Optimize Cameras runs were noticeably greater. Not only was the relative position and orientation of each camera slightly altered, the sparse cloud showed similarly varied transformations across the iterations. Although it is very hard to pinpoint with absolute certainty from a visual inspection of the sparse cloud and camera positions alone, the model indeed seems to be transformed beyond simple scaling/rotation/translation.

The total Check Point error also indicates that Optimize Cameras allows for a more complex, non-linear fitting of the tie point model to the markers. Whereas the Update tool produced a massive total Check Point error of 21.1 cm, the Check Point error varied between 1.3 cm and 3.5 cm across the different Optimize Cameras iterations. The RMS reprojection error, on the other hand, was similar for both the Update and Optimize Cameras tools (0.67 pix).

Although the total error quite clearly speaks in favour of applying Optimize Cameras over the Update tool, I am still not convinced. From a traditional photogrammetry perspective I am admittedly somewhat anxious, not to say reluctant, to apply non-linear transformations. Not least because this may invite overfitting while producing unrealistic model accuracies that vary across the model as GCP density varies. More specifically, although the model accuracy measured very locally at the Check Point positions suggests very high accuracy, I am not so sure I should straightforwardly believe this also holds for locations further away from those Check Points.

On the other hand, aerial photogrammetry based on nadir imagery alone tends to produce the so-called 'doming effect': models that bulge or sag towards the centre. This effect is easily removed by applying GCPs and then fitting the model to them with Optimize Cameras. Obviously, this notorious error is preserved when one only fits the model to the GCPs in a linear/affine fashion using the Update tool.

In short then, I'M LOST. What is the right way to go? What does each of these tools precisely do? What transformations are applied? Which tool should one use in which instance?
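For anyone comparing the two routes, this is roughly how both look from the Python console (assuming the Metashape/PhotoScan scripting API; the parameter set matches my usual all-except-p3/p4 choice). The helper reproduces the total-error figure as I understand it to be computed:

```python
import math

try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

def rms_3d(errors):
    """Root-mean-square of 3D error vectors -- my reading of the
    'Total Error' figure in the Reference pane."""
    return math.sqrt(sum(ex * ex + ey * ey + ez * ez
                         for ex, ey, ez in errors) / len(errors))

if Metashape is not None:
    chunk = Metashape.app.document.chunk

    # Linear route: 7-parameter similarity fit to the markers only.
    chunk.updateTransform()

    # Non-linear route: re-runs the bundle adjustment with the marker
    # coordinates as observations, so cameras and tie points move too.
    chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                          fit_k1=True, fit_k2=True, fit_k3=True,
                          fit_p1=True, fit_p2=True)
```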

11
General / HOW TO question: Share GeoTIFF/Pointcloud data with Clients
« on: November 13, 2017, 01:37:01 PM »
Hello all,

I am curious to find out how everyone copes with sharing/ distributing their GeoTIFF (Orthomosaic/DEM) and Pointcloud data acquired by UAVs with their clients.

Obviously, websites such as WeTransfer/WeSendIt are suitable for sharing the raw end products, provided the data volume does not exceed the size limit. However, viewing the output data then requires adequate software on the client's side and sufficient knowledge to operate it. Sometimes a client does not possess such software or knowledge, let alone sufficient computing power, or simply wishes to view a product prior to delivery.

Consequently, we are now looking for a simple online viewer/GIS solution to which we can upload our end products and which allows a client to have a look at them without having to download the raw data. Ideally there is also some limited functionality such as measuring distances/surface areas, although this is not a must-have. Such a viewer could also be used to demonstrate the possibilities of UAV products to potential clients without having to carry substantial amounts of locally stored data around.

So, what does everyone use?

12
General / Orthoview: DEM & Ortho are gone?
« on: October 23, 2017, 02:47:27 PM »
As of recently I am encountering problems when visualizing DEMs and Orthomosaics in Photoscan.

More specifically, when opening either of them in the Ortho view the screen remains empty, as does the toolbar at the top of the screen (see attached image). The scale bar shows up, as does the gradient bar depicting the DEM's elevation, but the models themselves remain absent.

This had been working fine until recently. Anyone else encountering the same issues?

13
General / Decimate (Dense) Point Cloud
« on: October 19, 2017, 07:59:38 PM »
Is there any way to decimate the dense cloud in Photoscan itself prior to export, or is one dependent on other software for this? I understand an alternative workaround would be to generate the dense cloud at a lower quality/resolution, but I'd prefer to generate and keep the higher-quality cloud, then export a decimated version for importing into our client's software.

I found a similar question was posted two years ago, it was then stated this was not (yet) possible within Photoscan. I am curious to find out if things have changed!
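Until decimation lands in Photoscan itself, a simple voxel-grid pass over an exported XYZ cloud does the job. A plain-Python sketch (one centroid per cubic voxel; for large clouds a numpy or Open3D version is advisable):

```python
def voxel_decimate(points, voxel):
    """Keep one representative point (the centroid) per cubic voxel --
    a simple uniform decimation for an exported (x, y, z) cloud."""
    cells = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # voxel index per axis
        cells.setdefault(key, []).append(p)
    out = []
    for bucket in cells.values():
        n = len(bucket)
        out.append(tuple(sum(c[i] for c in bucket) / n for i in range(3)))
    return out
```

The voxel size directly sets the resulting point spacing, so the client-side software's density limit translates straight into a parameter value.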

14
General / Migrating Markers between Chunks
« on: September 25, 2017, 11:44:35 AM »
Dear all,

We are currently processing an 800-image project, based on a series of diversified UAV flights. Besides processing the full project, we are also processing several chunks that borrow only a selection (400-600 or so) of the total image batch, to see how different flight patterns influence the output.

Marking all markers in each chunk is a very time-consuming activity. Besides, we would ideally use the exact same marker locations throughout the chunks to keep as many parameters as possible equal. Hence I am wondering whether it would be possible to migrate the marked markers from the full project to the smaller chunks, based on the location (i.e. pixel) of each marker in each image and the unique image identifier.

Any clues on whether this is possible? Or do we have no other choice but to manually mark all markers in all chunks?
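This should be scriptable, precisely because marker projections are keyed by camera and cameras share labels across chunks. A sketch assuming the Metashape/PhotoScan scripting API; whether `Metashape.Marker.Projection(coord, True)` is the exact constructor in your version should be double-checked, which is why it is injected as a callable here:

```python
try:
    import Metashape  # present only inside Metashape/PhotoScan's Python console
except ImportError:
    Metashape = None

def copy_markers(src_chunk, dst_chunk, make_projection):
    """Recreate every marker from src_chunk in dst_chunk, matching the
    per-image pixel projections through camera labels (the shared
    image identifier)."""
    dst_cams = {c.label: c for c in dst_chunk.cameras}
    for marker in src_chunk.markers:
        new = dst_chunk.addMarker()
        new.label = marker.label
        new.reference.location = marker.reference.location
        for cam, proj in marker.projections.items():
            target = dst_cams.get(cam.label)
            if target is not None:          # image also present in dst chunk
                new.projections[target] = make_projection(proj.coord)

if Metashape is not None:
    doc = Metashape.app.document
    copy_markers(doc.chunks[0], doc.chunks[1],  # chunk indices are placeholders
                 lambda coord: Metashape.Marker.Projection(coord, True))
```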

Cheers!

15
General / *HELP* Poor alignment, massive GCP errors, useless endproducts
« on: September 07, 2017, 04:28:29 PM »
Dear all,

Last week we mapped two water barrier structures at two separate locations as a pilot project for one of our clients; this week was all about processing the images we captured. Normally, given the small area of each site (only 2.5 and 5 hectares respectively), this shouldn't take longer than half a day of work. Unfortunately, however, we are facing serious trouble throughout the process, starting with image alignment, let alone when distilling the desired products. In the past days we have been running an endless number of processing runs, each time tweaking some parameters slightly, for well over 90 hours continuously and simultaneously.

Without any success, however. We have seen everything so far, ranging from alignment failing miserably or altogether, up to GCP and CP accuracies indicating errors of several meters in all three (XYZ) directions. For what it's worth, these are the specs for the smaller of the two areas (see also the attached screenshot, WSZZ_StudyArea).

Site specs
Area = approx. 2.5 ha (450 m in length, 70 m at its widest)
Surface = 50% low-standing grass / 50% bare soil
Elevation difference = less than 3 m
Ground control: 11 markers as GCPs, 7 as independent checkpoints. The markers were positioned and measured using RTK GPS before the flights and measured again prior to removal after the flights. The XYZ coordinates were then averaged before being imported into PS.

The data was captured using the following:

Device specs
UAV: DJI Inspire Pro
Camera: Zenmuse X5
Altitude: 40m AGL (GSD = approx. 1cm)
Flight speed: 4.3m/s
Number of flights: 2 (total of 321 images)
Overlap (F/S): 75% - 75%. The overlap between the two flights was approximately 200%, just to be sure.
GCPs: Leica GS08 Plus

Initially the following processing workflow was used, in accordance with settings we have used across most of our other projects: alignment accuracy set to High, Generic + Reference preselection turned ON, and key and tie point limits set to 40,000 and 4,000 respectively. After alignment we manually mark the GCPs on all images they are visible in (> 9), as well as the checkpoints, but keep the latter checked OFF in the Reference pane. Then cameras are optimized (checking all boxes except P3 and P4) and the resulting errors for both Control and Check points are studied in the Reference pane.

Immediately after initial alignment, however, a large black area can be distinguished in the scene where the two flights met. No tie points were found here whatsoever, despite overlap being well over 200% here and the underlying scene being no different from elsewhere (see WSZZ_AlignmentBlack). Including GCPs and re-optimizing cameras did not bring improvements. Neither did re-running alignment using 40k/10k, 120k/40k, 160k/40k or 160k/120k key/tie point limits, or turning Generic and Reference Pair Preselection OFF or ON. Although allowing a higher number of tie points did make the bare spot smaller, the sparse cloud remained notably different from other locations in the scene.

To make matters even more interesting, I decided to process each of the two flights in two separate chunks, mark the GCPs in each, and merge both chunks afterwards using the GCP markers. This time, the area that remained fully black in the previous attempts was actually populated with a decent number of tie points. It remained somewhat more sparsely populated than other parts of the scene, but it at least demonstrated that the images of the scene are fine and DO allow tie points to be found. Merging the two chunks, however, failed miserably and left a large vertical gap between the two flights, regardless of each being georeferenced using RTK GPS GCPs and the merge being based on markers (see WSZZ_ChunksMisaligned).
 
Also please have a look at the attached screendump (WSZZ_ImageShift) displaying the water barrier (sparse point cloud) underneath with the 'aligned' images on top. There appears to be a massive horizontal shift between the scene on the one hand and the grid flown overhead on the other. This remains even when GCPs are added and MARKED in the scene! We are absolutely sure that all images were captured at nadir (gimbal angle between -88 and -90 degrees) and the blue plane of each image after alignment confirms this as well, but the shift remains.

More important still is the fact that the total error (of both GCPs and CPs) throughout each of the above attempts was appalling. The total error averaged around 2 meters, although individual GCP errors reached even larger values. This remained largely unchanged throughout the different attempts, regardless of the number of key and tie points, processing the flights separately and/or changing any other alignment parameters. Even when I decided to incorporate ALL markers (including checkpoints) as Ground Control Points for camera optimization, resulting in 18 GCPs in a scene as small as 2.5 hectares (!), the error remained in the order of meters. I have never witnessed anything like this, as we have easily produced and (externally) validated both horizontal and vertical accuracies below 5 cm during prior projects.

Without going into too much detail, it should be mentioned that we ran into similar issues processing the images from the other pilot location. That location is slightly bigger (4-5 ha) and contains more images (571) and approximately 30 GCPs/CPs. The output, however, is almost identical. Alignment either fails miserably or produces weird artifacts that cannot be explained. Just as important, model accuracy is appalling, at several meters (XYZ), even after including an ever growing number of GCPs. Likewise, processing each of the flights separately and then merging them results in two vertically transposed models. After many hours I have managed to bring the two closer together, but GCP accuracy is still well over 30 cm, absolutely not matching either expectations or the specifications.

In short, what is going on? We are absolutely clueless, having processed a multitude of somewhat similar (and even larger) projects in the past without any problems and with decent accuracies. What might explain this strange model behavior? More importantly, are there any other things we might consider in an attempt to successfully produce the desired outputs after all?
