Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - n@sk

Pages: [1] 2 3 ... 6
1
General / Re: YPR to omega,phi,kappa
« on: April 14, 2018, 06:46:25 PM »
I assume that you are following
https://support.pix4d.com/hc/en-us/articles/205678146-How-to-convert-Yaw-Pitch-Roll-to-Omega-Phi-Kappa-

As you are considering a UAV and not transatlantic flights, you should convert all the geographic coordinates to 3D Cartesian coordinates with respect to a plane tangent close to the centre of the block.
(forget about the projection)

read the companion notes in the link, and the tops of pages 5 and 7 in the paper.
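For the coordinate conversion itself, here is a minimal Python sketch (function names are my own; WGS84 constants, no map projection involved): geodetic to ECEF, then ECEF to a local east-north-up frame tangent near the block centre.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                # semi-major axis [m]
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geodetic coordinates (degrees, metres) to Earth-centred XYZ."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(x, y, z, lat0_deg, lon0_deg, h0):
    """Express an ECEF point in the east-north-up frame tangent at the origin."""
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    x0, y0, z0 = geodetic_to_ecef(lat0_deg, lon0_deg, h0)
    dx, dy, dz = x - x0, y - y0, z - z0
    e = -math.sin(lon0) * dx + math.cos(lon0) * dy
    n = (-math.sin(lat0) * math.cos(lon0) * dx
         - math.sin(lat0) * math.sin(lon0) * dy
         + math.cos(lat0) * dz)
    u = (math.cos(lat0) * math.cos(lon0) * dx
         + math.cos(lat0) * math.sin(lon0) * dy
         + math.sin(lat0) * dz)
    return e, n, u
```

Pick the tangent point near the centre of the block and run every camera position through both functions.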

in any case, I suppose that this is not a PhotoScan question, nor even a Python one, but search the forum for similar questions; SAV and Alexey have made some informative comments on how useful these angles might be, or not.

regards

2
Hi GPC,

If you have highly accurate camera locations (= 'air control points') then you probably get more accurate results compared to using ground control points, as shown in a publication by James et al 2017.

Quote
Survey precision under direct georeferencing could be 2–3 times better than from GCP-control.

Regards,
SAV

This is a misinterpretation of that statement. Precision is not accuracy.
Chances are that you cannot get "highly accurate camera locations" and the higher you fly the less accurate the ground coordinates will be.
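A toy simulation of the distinction (all numbers are arbitrary):

```python
import random
random.seed(0)

TRUE_VALUE = 100.0  # metres; arbitrary "ground truth"

# Precise but biased: tight spread around the wrong value
precise_biased = [TRUE_VALUE + 0.50 + random.gauss(0.0, 0.01) for _ in range(1000)]
# Accurate but less precise: wide spread around the right value
accurate_noisy = [TRUE_VALUE + random.gauss(0.0, 0.10) for _ in range(1000)]

def spread(xs):
    """Precision: standard deviation about the sample mean."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def bias(xs):
    """Accuracy: offset of the sample mean from the true value."""
    return sum(xs) / len(xs) - TRUE_VALUE

# The first set is ~10x more precise, yet sits ~50 cm away from the truth.
```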

regards

3
General / Re: Markers with fewer than three coordinates (x,y,z)
« on: April 14, 2018, 03:56:34 AM »
Bear in mind though that even if the corresponding residuals are ignored, the input coordinates will be used to initialise the bundle adjustment; if you use many such points and the elevation is unreliable, they will dominate the process, which will converge to a useless solution.
In any case this will not work for the elevation equality constraint.
I do not think that this is how the other "specialized photogrammetric programs" go about doing this.
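A toy example of the domination effect, with made-up numbers (a single elevation estimated from two groups of observations):

```python
# A few reliable observations of one elevation versus many unreliable ones.
# If every observation gets the same weight, the unreliable majority wins.
reliable = [10.00, 10.02, 9.98]   # accurate elevations [m]
unreliable = [12.0] * 30          # many points carrying a ~2 m bias

def weighted_mean(values, weights):
    """Weighted least-squares estimate of a single unknown."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

obs = reliable + unreliable
equal = weighted_mean(obs, [1.0] * len(obs))         # ~11.8 m: useless
down = weighted_mean(obs, [1.0] * 3 + [0.01] * 30)   # ~10.2 m: closer to truth
```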

Just a reminder of a feature request:
http://www.agisoft.com/forum/index.php?topic=7718.0

regards

4
General / Re: sample volume measurement
« on: April 14, 2018, 03:41:34 AM »
I mean that if you use 3 targets to align each before/after chunk pair, and the coordinates of the rest of the targets agree, you will have an estimation of the internal accuracy of the model. If they do not agree then you cannot expect the external accuracy to be any better.
This is a more meaningful way of presenting the quality of the result in terms of 3D coords instead of the average scalebar error.
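A sketch of that check in Python (a standard Kabsch rigid fit via numpy's SVD; function names are mine):

```python
import numpy as np

def rigid_fit(src, dst):
    """Best-fit rotation + translation (Kabsch) mapping src points to dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cd - r @ cs
    return r, t

def check_residuals(r, t, src, dst):
    """3D residuals at targets NOT used in the fit (the check targets)."""
    pred = (r @ np.asarray(src, float).T).T + t
    return np.linalg.norm(pred - np.asarray(dst, float), axis=1)
```

Fit on the 3 alignment targets, then look at the residuals at all the remaining targets; those residuals are your internal accuracy estimate.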

You do not know how the optimisation works, but it was used during the alignment phase anyway; you now need to fine-tune it.
There are some points flying around which I do not suppose correspond to any object; thus they are false matches and should not be taken into account.
I can only see the low resolution thumbnails of the images but the reflections of the lights on the floor are not helping at all...

You do not know which parameters should be checked, but some of them were checked and they were initialised to zero;
you should not leave this to chance and at least try to be consistent.
Even if the values change, the same parameter set should be used, as you are using the same camera.
Since you are fixing the focus, why don't you calibrate the camera so that you have a better initial guess for the parameter values, and use it for all chunks?
Normally f, cx, cy, k1-k3, p1, p2 should be fine for a digital camera.
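For reference, this is the generic Brown-Conrady form of those parameters, with the caveat that the exact convention (signs of p1/p2, whether cx/cy are offsets from the image centre) varies between packages, so check PhotoScan's appendix before reusing values:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1-k3) and tangential (p1, p2) distortion to
    normalised image coordinates. Generic textbook form; the sign and
    ordering conventions differ between software packages."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return xd, yd

def to_pixels(xd, yd, f, cx, cy, w, h):
    """Project distorted normalised coords to pixels, assuming cx/cy are
    offsets from the image centre (an assumption; older versions used
    absolute principal point coordinates)."""
    return w * 0.5 + cx + xd * f, h * 0.5 + cy + yd * f
```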

There's nothing you can deduce from the calibration values about the metric accuracy of the 3D model.

The normal case is well studied and convenient when the surface can be assumed to be planar, in which case the accuracy can be assumed to be homogeneous and the images can be easily orthorectified; it is, however, the worst choice with respect to the uncertainty in depth estimation, especially with an uncalibrated camera.
I would also suggest adding images with varying roll angles.
The results should be improved but you shouldn't take my word for it.
Try measuring a planar surface or a sphere and see for yourself.
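The standard normal-case error formula shows why depth is the weak direction (a first-order approximation, assuming a calibrated camera; the numbers in the example are made up):

```python
def depth_precision(z, base, focal_px, sigma_px):
    """Standard error of depth in the normal (parallel-axes) case:
    sigma_Z = Z^2 / (c * B) * sigma_p, with distance z and base in metres,
    focal length in pixels, and parallax precision sigma_px in pixels."""
    return (z * z) / (focal_px * base) * sigma_px

# Example: 0.5 m object distance, 5 cm base, 3000 px focal length,
# 0.5 px parallax noise -> ~0.8 mm depth error, growing with Z squared.
```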

Having said all that, it depends on the accuracy requirements of your project; if this was gold you were measuring I would not buy from you...or maybe I would :)
Perhaps you already have what you need but judging from your original post you are aware of the fact that a third party might not be convinced, so you should at least be aware of all the caveats.

5
Bug Reports / Re: calibration p1 p2
« on: April 13, 2018, 08:30:04 AM »
Hi hagorms,

this reminds me of a similar thread I started a couple of years ago
http://www.agisoft.com/forum/index.php?topic=5827.msg28767#msg28767
The conclusion then was to use PhotoScan in order to convert the values, assuming that you are using Australis, PhotoModeler etc. for the calibration.
The tag in the GUI and the appendix is the correct one, but you should NOT manually edit the value with the same tag in an XML which was created with the latest versions.
I suppose that Alexey would be happy to implement a new conversion if you are using another application but this is indeed a mess...

Alexey,
Is backwards compatibility still a valid issue?
If yes, then you should consider flipping this behaviour or making it more transparent.
I would expect a notification similar to the bundler.out export.

regards

6
Feature Requests / User-defined feature request priority
« on: April 13, 2018, 07:43:10 AM »
A voting system so that the priority of each feature request is defined by the community.
Alternatively, each feature request should only be submitted as a new poll so that more than one user can support it or not.

7
Feature Requests / Re: Improvement of the Python API documentation
« on: April 13, 2018, 07:25:02 AM »
Well said, Gall.

8
Feature Requests / Re: Keeping Key Points in Duplicate Chunk dialogue
« on: April 13, 2018, 07:22:43 AM »
Thanks Alexey,

I'll try to keep up with all the updates even though they make it hard to maintain a workflow and thoroughly test it.

regards

9
General / Re: Image Stabilization and Photoscan?
« on: April 13, 2018, 07:09:38 AM »
Hi kirk

I agree with SAV
there is no simple answer here, as the question is "what is less bad for my unique project in some unknown lighting conditions in combination with whatever it is that happens to my lens or sensor during image stabilisation".

PhotoScan does what you described. Note, however, that it will do it even if the images were not "stabilised", which means that the corresponding camera model, instead of the camera position/rotation, will be forced to variably absorb the errors.

"Perfectly ok" sounds fine to me, and might indeed be the case, but means nothing to a customer :)

regards

10
General / Re: sample volume measurement
« on: April 13, 2018, 06:45:38 AM »
Hi Mina,

I suppose that you used this imaging configuration because there is a practical rule of thumb on what you can expect in terms of horizontal and vertical accuracy, but the fact that you are not using a pre-calibrated camera complicates things.
My (useless) answer is that it depends on the accuracy requirements.
In general, the model's accuracy could only be assessed if a reference model or target coordinates of 5-10x higher accuracy, which could be considered as ground truth, were available.
I do not know how exactly you referenced the chunks but, assuming that the before/after targets were not moved, you should use their coordinates to estimate the registration accuracy, which may be considered/assumed to be indicative of the overall accuracy.
Note, however, that there is no guarantee that they reflect what's happening in the middle of the model.
The scale bar errors are low but they can be significantly improved (at least a 50% improvement for all chunks except for the first one) if you optimise the alignment, even if you do not remove any potential outliers from the sparse point cloud.
I do not see any reason why one would not click on a button that says "optimise" unless it is red or it is April fools' day :)


With regard to the calibration
the interior orientation parameter sets are different because you selected Adaptive Camera Model Fitting;
for example, B2 was estimated for the last chunk (set this to zero and then optimise, unless you have reason to believe that it is essential in modelling your camera; I don't).
The value of each parameter is different in each chunk because their estimation is affected by random errors, just like the camera and point coordinates, and was based on different object points; in addition, the images were not taken from exactly the same positions, and the image planes are approximately parallel and coplanar.
If you had added convergent images or had accurate coordinates for some of the targets, I would say that you do not have to worry about it but if you plan on reporting height or volume differences you should think about how many decimal digits you will present.

I hope this helps.

11
General / Re: Work Flow for GCP
« on: April 13, 2018, 04:47:33 AM »
Hi geomaticist,

Here is something that might be of interest for you:
 
Quote
Survey precision under direct georeferencing could be 2–3 times better than from GCP-control.
James et al 2017 (EARTH SURFACE PROCESSES AND LANDFORMS)

James et al have done some extensive research on marker precision. Their work suggests that photogrammetric models based on 'air control points' with cm accuracy are more accurate than models that have been processed using traditional GCPs at cm accuracy.


Hi SAV

I apologise for the partial deconstruction of some of your posts, but I read them all in one go and I felt that I had to make some comments, so I am just leaving this here for future reference (even though the post diverged slightly off topic and was hijacked by Marija).

I think that replacing the term "precision" with "accuracy" is misleading.
James et al. reiterate the notion that a free bundle adjustment minimises the trace of the a posteriori covariance matrix (precision).
(Also, don't forget that their results are primarily based on computer simulations.)
The accuracy of the result will be unavoidably affected by the accuracy of the GCP coords, even if minimal constraints are used;
however, X cm on the ground are approximately X cm on the ground, whereas propagating X cm in the air to the ground will not only depend on the flying height and the imaging configuration but also on the quality of the interior orientation parameters.
As we are talking about non-metric cameras within a self-calibrating bundle adjustment, projective compensation should be expected and can potentially affect the "internal consistency" of the result even if one is not interested in absolute georeferencing.
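As a rough first-order illustration of the height dependence (flat terrain assumed, attitude error only; the numbers are arbitrary):

```python
import math

def ground_shift(height, angle_error_deg):
    """Ground displacement caused by a small angular (e.g. omega) error
    at a given flying height; a first-order, flat-terrain approximation."""
    return height * math.tan(math.radians(angle_error_deg))

# The same 0.05 deg attitude error: ~4.4 cm at 50 m, ~8.7 cm at 100 m,
# so the same airborne error budget degrades with the flying height.
```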

Hi geomaticist,

Using cm accuracy air control points allows you to achieve the same accuracy compared to cm accuracy ground control points. Theoretically it might be even more accurate because you generally have many more air control points (= number of images) than you would have ground control points.

I've done some tests where I first estimated the accurate location of geotagged UAV imagery based on cm accuracy ground control points. Then I used the estimated geolocation of the pictures (cm accuracy) and removed all ground control points from the project and tried to reconstruct the scene without them. The resulting model had basically the same cm accuracy as the one that used GCPs. You can do the same test yourself  ;)

If you are on a limited budget, you might be better off sticking to the 'traditional' workflow using cm-accuracy ground control points.

Regards,
SAV

I do not doubt the fact that you got the same results, as you explained that you re-ran the project from scratch,
but I would argue that the only reason you got these results is that you already had the GCP values and were able to initialise the bundle adjustment with reliable approximate values in order for it to happily converge within a few iterations.

At some stage, however, it becomes a 'philosophical' question, IMO. The Earth is a quite dynamic system that changes over time. For example Australia is moving NE by about 7cm each year (!!!). One should be aware of such 'natural error' that needs to be considered as well.

I think in the end it boils down to the project requirements. For some, the absolute accuracy is not crucial. For example, if I simply want to calculate a volume of an object (e.g, stockpile), I don't really care if the whole survey is off to the E/N/S or W by a few meters. As long as it is properly scaled and internally consistent, it will deliver the correct measurements/results.

ad 2) Note that your before/after point clouds will only be accurately aligned if you surveyed your ground control points at high accuracy (i.e., using and RTK GPS). If that's not the case, you could align your point clouds in CloudCompare using ICP before computing the cut/fill volumes. Details here: http://www.cloudcompare.org/doc/wiki/index.php?title=ICP

The "philosophical" question that you describe is taken care of by the realisation of the terrestrial reference frames, unless of course the drone continuously records images all year long! :-)
Based on your most recent post, I understand that you appreciate the importance of GCPs and that the ICP is the last resort, especially in the dynamically changing environment of a quarry.
Unfortunately, ground control and check points for almost normal blocks of almost flat areas are not a "tradition" but a necessity, if metric accuracy is required.

regards

12
General / Re: Roll pitch yaw conversion to Omega Phi Kappa?
« on: April 13, 2018, 02:21:30 AM »
I am not sure where exactly the IMU comes into play but
why don't you export directly to ω,φ,κ?
File>Export>Export Cameras>OmegaPhiKappa.txt or BINGO.dat?
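If you ever need the decomposition yourself, it is only a few lines (assuming the R = Rx(omega) * Ry(phi) * Rz(kappa) convention; conventions vary, so verify the signs against a known camera, and note there is no gimbal-lock handling):

```python
import math

def opk_from_matrix(r):
    """Extract omega, phi, kappa (degrees) from a 3x3 rotation matrix r,
    assuming R = Rx(omega) @ Ry(phi) @ Rz(kappa). No gimbal-lock handling."""
    phi = math.asin(r[0][2])
    omega = math.atan2(-r[1][2], r[2][2])
    kappa = math.atan2(-r[0][1], r[0][0])
    return tuple(math.degrees(a) for a in (omega, phi, kappa))
```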

13
General / Re: 3d Models from images splitting in two parts
« on: April 13, 2018, 02:10:14 AM »
The front and back side of the eagle-thing are identical.

You need to mask the eagles in the images of the back (or front) side and check the "Constrain features by mask" option during the alignment process.

I hope this helps.

14
General / Re: Creating region / bounding box / LiDAR import
« on: April 13, 2018, 01:41:32 AM »
Have you tried exporting the cameras to xml, adding the region, and reimporting?

it should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<document version="1.4.0">
  <chunk label="chunk " enabled="1">
    <sensors next_id="1">
   ...
   ...
   ...
    </sensors>
    <cameras>
   ...
   ...
   ...
    </cameras>
    <reference>...</reference>
    <region>
      <center>-1.8167784182654151e+002 -3.4825539726895009e+001 -1.8053705293184427e+002</center>
      <size>7.7704579467773442e+002 6.5211002807617183e+002 1.5348713684082031e+002</size>
      <R>-8.5212594090191968e-002 1.2303440753837614e-002 -9.9628682574549643e-001 -9.6148600055342970e-001 -2.6326040662700245e-001 7.8984992513488270e-002 -2.6131108768705286e-001 9.6464635159637757e-001 3.4262688213379036e-002</R>
    </region>

    <settings>
     ...
    </settings>
  </chunk>
</document>
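If you script this often, the region can be patched with the standard library (element names taken from the snippet above; the file paths are placeholders, and the element is appended at the end of the chunk, which the importer tolerated in my tests):

```python
import xml.etree.ElementTree as ET

def set_region(xml_in, xml_out, center, size, rotation):
    """Insert or overwrite the <region> element in an exported camera XML.
    center/size are 3 floats, rotation is the 9 row-major floats of R."""
    tree = ET.parse(xml_in)
    chunk = tree.getroot().find("chunk")
    region = chunk.find("region")
    if region is None:
        region = ET.SubElement(chunk, "region")
    for tag, values in (("center", center), ("size", size), ("R", rotation)):
        el = region.find(tag)
        if el is None:
            el = ET.SubElement(region, tag)
        el.text = " ".join("%.16e" % v for v in values)
    tree.write(xml_out, xml_declaration=True, encoding="UTF-8")
```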

15
General / Re: align selected cameras
« on: April 13, 2018, 01:27:06 AM »
Chances are that it tried at some point to align these 30 images, individually or in small subsets, with the rest, but it failed.
Bear in mind that feature detection is not a deterministic process and that the quality setting also implicitly affects the acceptance thresholds for feature correspondence.
Once you have the sparse point cloud, it's probably easier to confirm that some of the detected points are indeed reliable matches, which is why half of them were successfully realigned.
For the rest of the images, which were not aligned, an incremental alignment is required before the dense matching.
Depending on the quality of the images within the overlapping areas, you might have to repeat this process so that enough features, which can be matched, are detected.
In some cases you might have to incrementally realign some of the images that were aligned successfully in order to generate features that will be detected in the problematic images.
In any case, you will have to run the dense matching and mesh generation again because the alignment/optimisation only estimates the exterior orientation and does not update the depth maps.
