Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Ryuseiken

Pages: [1]
1
In the Reference pane of Metashape, the estimated marker 3D coordinates, as well as the Error (m) and Error (pix) values, change simply by checking/unchecking markers (without re-running georeferencing or bundle adjustment).

This issue has been discussed in the following topics:
https://www.agisoft.com/forum/index.php?topic=11329.0
https://www.agisoft.com/forum/index.php?topic=11655.0

From those discussions, my understanding is as follows.

1. When a marker is unchecked, its estimated 3D position is the one calculated by triangulation, in other words by minimizing the RMS reprojection error. The minimized value is shown as Error (pix). Error (m) shows the difference between the estimated (triangulated) position and the "source" position provided by the user.

2. When a marker is checked, its estimated 3D position is the one obtained by BA (i.e., by minimizing something like a weighted sum of squared reprojection errors and squared differences between the adjusted and "source" coordinates). Error (pix) and Error (m) are then calculated from the 3D coordinates obtained by BA.
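If my understanding above is right, the two cases can be sketched numerically. Everything below is a toy illustration with made-up numbers, not Metashape's actual code:

```python
import math

# Toy sketch of the two objectives described above. A marker is observed in
# several images; each observation yields a 2D reprojection residual for a
# candidate 3D position. All values here are hypothetical.

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

# Unchecked case: position X_tri comes from pure triangulation.
resid_tri = [0.31, 0.28, 0.35]            # per-image residuals (pix) at X_tri
error_pix_unchecked = rms(resid_tri)      # shown as Error (pix) when unchecked

# Error (m) when unchecked: distance between X_tri and the user's source coords.
X_tri, X_src = (10.2, 5.1, 100.3), (10.0, 5.0, 100.0)
error_m_unchecked = math.dist(X_tri, X_src)

# Checked case: BA minimizes something like
#   sum(w_pix * r_pix**2) + w_src * |X_ba - X_src|**2
# so the estimate X_ba is pulled toward X_src, and both Error (pix) and
# Error (m) are recomputed from X_ba instead of X_tri.
```

The point of the sketch is that the checked case pulls the estimate toward the source coordinates, so both error values differ from the unchecked (pure triangulation) case.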

However, I still have a question. Error (m), Error (pix), and the estimated position change simply by checking/unchecking markers even when the markers were imported after BA. Explanation #2 above cannot account for this behavior.

So I'd like to ask again: what happens to the estimated position when a newly checked marker was not involved in the bundle adjustment (run via the Align Photos or Optimize Camera Alignment command)?


2
Feature Requests / Tie point limit per image pair
« on: December 15, 2022, 12:31:50 PM »
Hello.
Some UAV-based flight missions take groups of photos with several different orientations.
When the "Align Photos" command is applied to such a photo set, in-group matches (between photos of the same orientation) can dominate, and inter-group matches (between photos of different orientations) can be relatively few.

This is bad for camera parameter estimation because some parameters (such as the intrinsic parameter f) cannot be estimated from in-group matches alone. If in-group matches dominate, the inter-group matches may not have enough influence (assuming least-squares optimization).

Therefore, it would be great if the user could specify a tie point limit per image pair, namely the maximum number of feature matches allowed between any two photos. This is different from the currently implemented per-image "tie point limit."
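A rough sketch of what I mean, as a hypothetical post-processing step (the function name, data layout, and scores are purely illustrative):

```python
# Hypothetical sketch of the requested "tie point limit per image pair":
# keep at most `limit` matches for every photo pair, preferring the
# strongest ones (ranked here by a made-up match score).

def cap_matches_per_pair(matches, limit):
    """matches: dict mapping (img_a, img_b) -> list of (score, match_id)."""
    capped = {}
    for pair, ms in matches.items():
        # Keep the `limit` best-scoring matches for this pair.
        capped[pair] = sorted(ms, key=lambda m: m[0], reverse=True)[:limit]
    return capped

# Toy data: pair (A, B) has three matches, pair (A, C) has one.
matches = {("A", "B"): [(0.9, 1), (0.5, 2), (0.8, 3)],
           ("A", "C"): [(0.7, 4)]}
capped = cap_matches_per_pair(matches, limit=2)
```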

3
Bug Reports / Incremental image alignment decreases the original matches
« on: December 10, 2022, 05:44:41 AM »
Hello. I have tested "Incremental image alignment" in Metashape Pro 1.8.4, according to the user manual.

First, after confirming that "Keep key points" was checked in the Preferences dialog, I added 5 images to a blank chunk and ran "Align Photos." As a result, image C3_2 got 3097 total and 3032 valid matches with image C3_4, as shown in one of the attached images.

Then I added another 5 images, so the chunk consisted of 10 images: 5 aligned and 5 non-aligned.

After that, I ran "Align Photos" again without modifying the alignment settings. As a result, the numbers of total and valid matches between images C3_2 and C3_4 decreased to 1231 and 1171, respectively, as shown in another attached image.

So it seems that matching is reset and retried among the originally existing images too.
Although a past post mentions that "Incremental image alignment" does not decrease the original tie points, I suspect the behavior described above can decrease the original tie points as a consequence of decreasing the original matches.

4
Hello.
In Metashape versions 1.5.4 and 1.5.5 in my environment (Windows 10 64-bit), marker reprojection error values, displayed in the rightmost column "Error (pix)" of the Reference pane, often change simply by checking/unchecking the marker, even though no camera parameter changes.

This phenomenon is not observed in 1.5.2 or 1.5.3, so I suspect it is a bug.



5
General / The criterion used in Key Point Limit
« on: November 12, 2019, 03:21:02 PM »
I have recently recognized that a very small "Key point limit" coupled with image downscaling (low alignment accuracy setting) sometimes greatly improves SfM accuracy.

But I can't find any information about what criterion Metashape uses to select features up to the "Key point limit."
For example, the open-source SfM software COLMAP selects the largest-scale features up to its "max_num_features" setting.
Is Metashape using a similar criterion or combining multiple criteria including scale and distinctness?
If it is not a company secret, I'd like to know.
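For reference, a COLMAP-style scale-based selection can be sketched as follows (this only illustrates that criterion; whether Metashape does anything similar is exactly my question, and all feature values below are made up):

```python
# Sketch of scale-based keypoint selection: keep the largest-scale features
# up to the limit. Each feature is a (scale, x, y) tuple with toy values.

def select_keypoints(features, limit):
    # Sort by scale, descending, and keep the top `limit` features.
    return sorted(features, key=lambda f: f[0], reverse=True)[:limit]

features = [(1.2, 10, 20), (4.5, 30, 40), (2.8, 50, 60), (0.9, 70, 80)]
kept = select_keypoints(features, limit=2)   # the two largest-scale features
```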

The situation in which I recognized the importance of "Key point limit" is as follows.
For a project of 664 Phantom 4 RTK images (nadir + oblique) from 110 m altitude,  I got the following total RMS error values for 23 check points:

Case 1. Alignment accuracy "High" and Key point limit 50000: 0.8514 [m]
Case 2. Alignment accuracy "High" and Key point limit 1000:  0.4446 [m]
Case 3. Alignment accuracy "Low"  and Key point limit 1000:  0.0247 [m]
Case 4. Alignment accuracy "Low"  and Key point limit 3125:  0.4056 [m]

These are just typical examples selected from the 300+ combinations of settings I have tested. The large errors in Cases 1, 2 and 4 are due to large overestimation of the ground altitude, associated with underestimation of the intrinsic parameter f. The results indicate that using large-scale, "selected" features sometimes improves SfM accuracy.

6
General / Tie points are not well masked in vegetation and water regions
« on: November 01, 2019, 03:57:22 PM »
Hello.
I am working with UAV photos over an area partially covered with trees and water.
I'd like to mask them out during alignment because they sometimes degrade the camera parameter estimation.
The problem is that the "Mask tie points" option doesn't work well in my case.

Specifically, I still get some tie points on regions with tall vegetation or water, even when I mask those regions on at least one photo and enable the "Mask tie points" option during alignment. Those tie points are not displayed on the masked images (indicating that they are not tied to the masked images), but it is clear that they fall inside the masked area when projected onto the masked image.
I do not observe this phenomenon when I apply masks to flat regions of the ground in the same image set.
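The check I am doing by hand can be sketched like this (the mask representation and the projected coordinates are made up; a real check would project through the estimated camera model):

```python
# Toy sketch: test whether a tie point, projected into the masked image,
# lands inside the masked region. The mask is represented here as a set of
# coarse pixel cells; the projected coordinates are hypothetical.

def falls_in_mask(point_px, mask_cells, cell=16):
    """True if the projected point lands in a masked cell of the image."""
    x, y = point_px
    return (int(x) // cell, int(y) // cell) in mask_cells

mask_cells = {(0, 0), (0, 1), (1, 0)}                # hypothetical masked cells
inside = falls_in_mask((5.0, 20.0), mask_cells)      # lands in masked cell (0, 1)
outside = falls_in_mask((300.0, 300.0), mask_cells)  # lands in unmasked cell (18, 18)
```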

Could someone tell me why this happens?

Alexey explains the "Mask tie points" function as follows:
Quote
This feature will not create any new masks, it is just meant to avoid any tie points being created beyond the masked areas.
on this page:
https://www.agisoft.com/forum/index.php?topic=11021.msg49743#msg49743
but I don't quite understand what "beyond" means here.

Another problem I have is that the tie points in regions surrounding the masked regions are reduced (thinned) by the mask. This also happens for flat regions.

I am using Metashape version 1.5.5 build 9097.
Thank you for your help.


7
General / Why do the key points change every time I run Align Photos?
« on: August 03, 2018, 01:53:21 PM »
Hello.

I have been wondering why the result of the "Align Photos" command (e.g., the number and positions of points in the sparse point cloud) is different every time I run the command, even if I restart PhotoScan each time and use exactly the same settings.

Then I noticed that the number of key points (feature points) detected in each image changes every time I run the "Align Photos" command. This means the process is stochastic in its first stage: feature detection.

I could understand if the optimization (e.g., bundle adjustment) process were stochastic, because a stochastic strategy is sometimes useful for avoiding local minima. However, I have no idea why the feature detection process should be stochastic rather than deterministic.

Of course the detailed algorithm must be secret, but I would be happy just to understand why it should be stochastic.


8
Hello.
I'd like to ask why the number of matches (valid + invalid) between a pair of images depends on the presence of other overlapping images.

Suppose I have 100 images taken from a UAV with an overlap ratio of 80%.
The number of total matches (valid + invalid) between the 50th and 51st images, obtained by the "Align Photos" command, decreases significantly if I do not include any of the other 98 images in the chunk.

The number of total matches between any pair of images seems to be large when the chunk contains enough images of the surrounding areas.

I am asking why this happens, because in principle it should not.

In my alignment, both the key point and tie point limits are disabled. The result is the same if I use the default value of 50000 for the key point limit.

Thanks for your help.


9
In PhotoScan 1.3.1, I have witnessed an increase in the "RMS reprojection error" (in tie point scale) after running "Optimize Cameras," even though no markers or camera extrinsic parameters were provided.

Could you tell me why this happens?

The detailed situation is as follows.

After loading the images, I ran "Align Photos" with adaptive camera model fitting, and the resulting "RMS reprojection error" (in tie point scale) was 0.0437678. No camera intrinsic parameter changed from its initial value at this stage. This is not strange, because in the adaptive mode PhotoScan can judge that estimating any intrinsic parameter would be unstable.

Then I ran "Optimize Cameras" allowing only "f" to change. The resulting "RMS reprojection error" was 0.0519948, larger than the value before the optimization. According to the user manual, the target function of this optimization command is "RMS reprojection error" + "reference coordinate misalignment."
In this case, no markers and no camera position/rotation information were provided, so simply the "RMS reprojection error" should have been minimized.

10
In the current version of PhotoScan (1.3.1), the intrinsic parameters included in the camera model must be selected by the user in the "Optimize Camera Alignment" dialog. If we aim for the best results, this selection is very difficult because no statistically valid criterion is available for it. The "Adaptive camera model fitting" option in the previous stage ("Align Photos") may give a hint, but it often removes too many intrinsic parameters.

The RMS reprojection error for the "Control points" and "Check points" (shown in the Reference pane), as well as for the sparse point cloud (checked via the Show Info command for the chunk), is not a valid criterion, because the pixel coordinates of all of these points are used in the optimization. The RMS estimation error of the real-world coordinates of the "Control points," also shown in the Reference pane, is not a valid criterion for a similar reason.

In statistical terms, these points are "training data" to which the model is fitted. For a valid evaluation of model quality, we need independent "test data." Otherwise, the model will suffer from overfitting: a good fit to the training data but poor reprojection of the test data.

Therefore, I propose a command as follows:

1. Randomly split the tie points (sparse point cloud) into training and test points (in a ratio of about 90%:10%).
2. Optimize (minimize the RMS reprojection error) using the training points only.
3. Evaluate the RMS reprojection error for the test points, based on the estimated camera intrinsic/extrinsic parameters and point 3D coordinates.
4. Repeat the above steps 10 times or more to form a cross-validation.

Ideally, the command would automatically repeat this procedure for various combinations of the intrinsic parameters, to find the best combination.

This is a statistically valid selection procedure for the intrinsic parameters, similar to the selection of explanatory variables in linear regression. Because trying all possible combinations of intrinsic parameters would be computationally expensive, I think "forward selection," starting from only "f" and adding parameters one by one, is practical.
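Steps 1-4 above can be sketched as follows (`fit` and `reproj_error` stand in for the real bundle adjustment and error computation, which are internal to Metashape; the toy usage only demonstrates the splitting and scoring logic):

```python
import math
import random

# Sketch of the proposed procedure: repeated random 90/10 split of the tie
# points, refit on the training part, score on the held-out test part.

def cross_validate(points, fit, reproj_error, folds=10, test_frac=0.1, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(folds):
        pts = points[:]
        rng.shuffle(pts)                          # step 1: random split
        n_test = max(1, int(len(pts) * test_frac))
        test, train = pts[:n_test], pts[n_test:]
        model = fit(train)                        # step 2: optimize on training points
        scores.append(reproj_error(model, test))  # step 3: evaluate on test points
    return sum(scores) / len(scores)              # step 4: average over repetitions

# Toy usage: the "model" is just the mean of the training values, and the
# "reprojection error" is the RMS deviation of the test values from it.
pts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
fit_mean = lambda train: sum(train) / len(train)
rms_dev = lambda m, test: math.sqrt(sum((t - m) ** 2 for t in test) / len(test))
cv_score = cross_validate(pts, fit_mean, rms_dev)
```

The same loop could then be wrapped in a forward-selection search over intrinsic parameter combinations, keeping whichever combination minimizes the cross-validated score.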

I would be happy if Agisoft considered implementing a command like the above.


11
I'm using PhotoScan Pro 1.3.1.

When we select the Show Info command from the context menu of a chunk in the Workspace pane, two point counts for the sparse cloud are shown in the format "XX of YY."

For example, in the attached image, they appear as:
1,447 of 1,561

I'd like to know what these values mean. My guess is that the number on the left is the number of valid (not removed) tie points, and the number on the right is the number of tie points originally produced by the feature matching process in Align Photos.

However, even immediately after Align Photos, the number on the left is already smaller than the one on the right. This indicates that some quality control is applied to the original matching points during Align Photos. I would also like to know what kind of selection is performed there.

Thank you.

