

Topics - mauroB

1. General / image error while marker placing
« on: September 01, 2023, 08:01:45 PM »
Dear All,

I'm working on a project that must be georeferenced. For that purpose, during image capture we placed non-coded cross-type targets within the study area.

I performed a preliminary alignment using a fixed reference calibration (I planned to release the camera parameters only at the end of the workflow, during the final block optimization). Then I used the tool for automatic marker detection and matching. At the end of the computation, all detected markers were displayed in the Reference pane along with an error (pix) different from zero...

Considering that after applying the detection tool I did not perform any least-squares adjustment or manual refinement of the (automatically) identified projections, I was wondering what the errors actually represent. If anything, I would expect an image error equal to 0, since the projections are those identified by the software, and in any case "an error metric not in the least-squares sense"...

After that, I performed the same test using the guided approach for marker placing (i.e., the "add marker" command). In this case too, after manually adding markers on aligned photos, I get projection errors different from zero...

Still not convinced by the results, I finally added only one marker projection through the manual approach (i.e., the "place marker" command). Even with only one projection placed, the software returned a nonzero image error.

This makes no sense to me...

Any idea?
Does the software automatically perform a least-squares-based triangulation (i.e., forward intersection) after markers are placed on the images?
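For what it's worth, a nonzero error would be consistent with the software triangulating a 3D point from the marker projections and then reporting the reprojection residuals. A minimal two-view sketch (all camera values and "detected" coordinates below are made up for illustration, not taken from Metashape) shows why such residuals are generically nonzero even when the point is built from the very projections being checked:

```python
import math

# Toy two-camera setup (all values assumed for illustration):
# identical pinhole cameras looking along +Z, second camera shifted 1 m along X.
F, CX, CY = 1000.0, 500.0, 500.0   # focal length and principal point [pix]

def project(P, cam_x):
    """Pinhole projection of 3D point P for a camera at (cam_x, 0, 0)."""
    x, y, z = P[0] - cam_x, P[1], P[2]
    return (F * x / z + CX, F * y / z + CY)

def triangulate(obs1, obs2, base):
    """Midpoint of the common perpendicular between the two viewing rays."""
    d1 = ((obs1[0] - CX) / F, (obs1[1] - CY) / F, 1.0)
    d2 = ((obs2[0] - CX) / F, (obs2[1] - CY) / F, 1.0)
    w0 = (-base, 0.0, 0.0)                       # origin1 - origin2
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    s = (b * e - c * d) / (a * c - b * b)        # parameter on ray 1
    t = (a * e - b * d) / (a * c - b * b)        # parameter on ray 2
    p1 = tuple(s * di for di in d1)
    p2 = (base + t * d2[0], t * d2[1], t * d2[2])
    return tuple((u + v) / 2.0 for u, v in zip(p1, p2))

# "Detected" projections with sub-pixel measurement noise baked in.
obs1, obs2 = (558.2, 541.7), (369.9, 541.3)
P_est = triangulate(obs1, obs2, base=1.0)

# Reprojection residuals: nonzero even though the point was intersected
# from these very projections -- the two noisy rays do not meet exactly.
err1 = math.dist(project(P_est, 0.0), obs1)
err2 = math.dist(project(P_est, 1.0), obs2)
print(err1, err2)
```

With noisy measurements the two viewing rays are skew, so no 3D point can reproject exactly onto both observations; the midpoint solution leaves small sub-pixel residuals in each image.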

P.S. I'm working with Metashape v1.5.1.

Regards,
Mauro


2. General / distortion profiles computation
« on: August 10, 2023, 11:03:11 AM »
Dear All,

I'm wondering how Metashape computes the distortion profiles in the camera calibration section.

I tried to implement code to compute the distortion profiles along the (upper-right) semi-diagonal of the frame, starting from the frame centre.
However, I noticed some discrepancies between the tangential distortion profiles (see attachments for image plots and code).

For the parameter value conversion I used the following formulas (from Luhmann et al., 2019), where f is the focal length:

k1 (pixel units) = k1 (focal units) / f^2
k2 (pixel units) = k2 (focal units) / f^4
k3 (pixel units) = k3 (focal units) / f^6
p1 (pixel units) = p1 (focal units) / f
p2 (pixel units) = -p2 (focal units) / f

For the distortion computation I used the formulas in Appendix C of the user manual.
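For reference, here is a minimal sketch of the kind of profile computation I have in mind, working directly in normalized (focal-plane) units with the Brown-type formulas of Appendix C (ignoring the higher-order P3/P4 terms); all calibration values below are made up for illustration:

```python
import math

# Example calibration in Metashape's normalized units (values are made up):
f = 2400.0                        # focal length [pix]
k1, k2, k3 = -0.12, 0.09, -0.02   # radial distortion coefficients
p1, p2 = 1.2e-4, -0.8e-4          # tangential distortion coefficients
width, height = 6000.0, 4000.0    # frame size [pix]

def distortion_at(x, y):
    """Radial and tangential displacement magnitudes [pix] at normalized
    coordinates (x, y), following the Brown-type model of Appendix C."""
    r2 = x * x + y * y
    radial_scale = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dxr, dyr = x * radial_scale, y * radial_scale        # radial part
    dxt = p1 * (r2 + 2 * x * x) + 2 * p2 * x * y         # tangential part
    dyt = p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return (f * math.hypot(dxr, dyr), f * math.hypot(dxt, dyt))

# Profile along the upper-right semi-diagonal, from the frame centre outward.
half_diag = math.hypot(width / 2, height / 2)
profile = []
for i in range(11):
    frac = i / 10.0
    # pixel offsets from the centre, converted to focal (normalized) units
    x = frac * (width / 2) / f
    y = frac * (height / 2) / f
    profile.append((frac * half_diag, *distortion_at(x, y)))

for radius_pix, radial_pix, tangential_pix in profile:
    print(f"{radius_pix:8.1f}  {radial_pix:9.3f}  {tangential_pix:9.3f}")
```

The sampling along the semi-diagonal and the coefficient values are my assumptions; only the model structure follows the manual.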

Any feedback/suggestions on this?
Thanks in advance,
Mauro

3. Bug Reports / average tie point multiplicity
« on: October 27, 2022, 07:38:31 PM »
Dear all,
I was wondering how the average tie point multiplicity statistic (chunk info) is calculated.
Is it based on the valid correspondences (i.e., those remaining after outlier detection) or on all of the found ones?
I noticed a discrepancy between the reported statistic and the average number of valid projections per tie point that I computed myself.
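To make the two candidate definitions concrete, here is a toy sketch (the match table is invented, not Metashape data) comparing the average over all found projections with the average over valid projections only:

```python
# Hypothetical match table: for each tie point, one flag per projection
# (True = kept after outlier filtering, False = flagged as invalid).
tie_points = [
    [True, True, True, False],        # seen in 4 images, 3 valid
    [True, True],                     # seen in 2 images, 2 valid
    [True, False, True, True, True],  # seen in 5 images, 4 valid
]

# Candidate statistic A: average multiplicity over ALL found projections.
avg_all = sum(len(p) for p in tie_points) / len(tie_points)

# Candidate statistic B: average multiplicity over VALID projections only.
avg_valid = sum(sum(p) for p in tie_points) / len(tie_points)

print(avg_all, avg_valid)   # the two definitions generally differ
```

A discrepancy of exactly this kind would appear if the chunk statistic uses definition A while a manual recount uses definition B.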
Regards,
MB

4. General / question about camera model variance-covariance matrix
« on: October 25, 2022, 10:06:10 AM »
Dear All,
I'm experiencing some counterintuitive results from a Metashape run, which are worth noting (at least given my experience and knowledge).
In detail, I attached shots of two Metashape "variance-covariance" matrices: the first relates to a BA in free-network mode (i.e., without any external constraint) of a theoretically strong photogrammetric network (orthogonal roll angles, convergence, redundancy, 3D scene, etc.); the second relates to a BA in extended mode (i.e., with external constraints in the form of ground control points) of a theoretically weak photogrammetric network (parallel flight lines, low redundancy, no convergence even though we are in high-relief conditions).
As you can see, the results are somewhat counterintuitive...
In the first case I would expect high precision and low correlations in the camera model parameters, whereas in the second case I would expect low precision and high correlations. However, the results indicate high precision and high correlations (first case), and low precision and low correlations (second case).
I have noticed this behaviour (i.e., strong networks typically reaching high precision but very high correlations) in a number of other networks.
Have I missed some indication about the meaning or computation of the variance-covariance matrix that led me to misinterpret the results, or is something else going on?

SOME DETAILS FOR THE FIRST CASE:
"camera" functional model: f, cx, cy, k1, k2, k3, p1, p2
stochastic model: tie point accuracy set to 0.10 pix (sigma0 = 1, average key point size of about 3)
RMS image residual: about 0.50 pix (without filtering of false matching points)
redundancy: on average more than 9 projections per point

SOME DETAILS FOR THE SECOND CASE:
"camera" functional model: f, cx, cy, k1, k2, k3, k4, p1, p2, b1, b2
stochastic model: not specified
RMS image residual: about 1.6 pix (filtering details not provided)
redundancy: on average fewer than 3.5 projections per point
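As an aside, in case it helps others reading the attached matrices: whatever the software displays, the correlations follow from a covariance matrix by normalizing with the parameter standard deviations. A minimal sketch with a made-up 3x3 covariance (the values are illustrative, not from Metashape):

```python
import math

# Made-up covariance matrix for three calibration parameters (e.g. f, cx, cy).
cov = [
    [ 4.0, 1.2, -0.6],
    [ 1.2, 2.5,  0.3],
    [-0.6, 0.3,  1.0],
]

# Correlation: rho_ij = cov_ij / (sigma_i * sigma_j), with sigma_i = sqrt(cov_ii).
n = len(cov)
sigmas = [math.sqrt(cov[i][i]) for i in range(n)]
corr = [[cov[i][j] / (sigmas[i] * sigmas[j]) for j in range(n)]
        for i in range(n)]

for row in corr:
    print(["%.3f" % v for v in row])
```

High parameter precision (small diagonal variances) and high correlations (off-diagonal entries near ±1) are independent properties, which is why both combinations in the two cases above are at least numerically possible.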

5. General / BA-derived correlations between intrinsic and extrinsic
« on: August 22, 2022, 11:13:48 AM »
Dear All,
I was wondering whether, through the Python API, there is a way to retrieve the BA-estimated correlations between intrinsic and extrinsic parameters.
I looked through the API documentation without finding any useful information.
Regards,
MB

6. General / SfM implementation
« on: August 01, 2022, 12:02:27 PM »
Dear all,
I'm looking for a "very simple" clarification (i.e., without algorithm details, if these are not freely available) about the implementation of the SfM workflow within Agisoft Metashape.
Assuming there are no geotags in the images' EXIF data, my overall idea is that during the alignment step a sequential process based on analytical relative orientation is used to create the photogrammetric block and to estimate initial guesses for some of the functional model unknowns (extrinsic parameters and relative 3D coordinates of tie points). At the end of the sequential process, a FREE-NET bundle adjustment is carried out.
On the other hand, in the subsequent camera optimization step, a further EXTENDED bundle adjustment is carried out.
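To state the hypothesis concretely, here is the pipeline I have in mind, written as an executable outline. Metashape's internals are not public, so every step name below is my assumption (a generic incremental SfM scheme), not the actual implementation:

```python
# Generic incremental SfM outline -- a hypothesis, not Metashape's real code.
def incremental_sfm(images, ground_control=None):
    log = []
    log.append("detect + match features across image pairs")
    log.append("pick a seed pair, solve analytical relative orientation")
    log.append("incrementally register remaining images (resection)")
    log.append("triangulate new tie points as images are added")
    log.append("free-network bundle adjustment (inner constraints only)")
    if ground_control:
        # the 'optimize cameras' stage: extended / constrained adjustment
        log.append("extended bundle adjustment with GCP observations")
    return log

steps = incremental_sfm(["img%03d.jpg" % i for i in range(5)],
                        ground_control=["GCP1", "GCP2"])
print("\n".join(steps))
```

The point of the sketch is just the ordering: a free-net adjustment closes the alignment step, and the extended adjustment only enters once external constraints are supplied.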
Am I right or does anyone have different ideas?
Regards,
MB
