Show Posts


Messages - vik748

Pages: [1]
General / Re: How to lower keypoint threshold limit
« on: August 08, 2020, 05:05:26 PM »
Thanks for that information. I stand corrected: from the logs I see that Metashape is indeed detecting the required number of points.

Are there any other tips / tweaks that could be used to improve the matching process?  I have uploaded a few test images to

As you can see from the images, we can easily track between them visually; however, Metashape is not able to match the keypoints well. Are there any settings / tweaks that could help Metashape with the matching?

Thanks for all the help,

General / Re: How to lower keypoint threshold limit
« on: August 08, 2020, 03:31:18 PM »
I am only thinking about the detected keypoints, not tie points, at this time. The white points are all the detected keypoints, and as you can see there are far fewer than 60K.
Most feature detectors, like ORB and SIFT, let you choose a minimum score for a point to be called a keypoint. Is there any way to reduce that kind of threshold below the default values in Metashape?
Since I am looking at ice, my images are always like this, and I can't do much about it while capturing them.

General / Re: How to lower keypoint threshold limit
« on: August 08, 2020, 03:09:45 PM »
The settings are:
Accuracy: Highest
Key point limit: 60,000
Tie point limit: 100,000

This particular image is from a drone and is 5472 x 3078. I also use similar images from a GoPro camera, which are 4000 x 3000.

General / Re: How to lower keypoint threshold limit
« on: August 07, 2020, 04:55:11 PM »
Here is an example of the keypoints detected, see attachment.

General / How to lower keypoint threshold limit
« on: August 07, 2020, 02:18:42 PM »
I am working with a set of images of ice which are low contrast and low texture. When I use the default key point limit of 40,000, Metashape seems to detect only a few thousand.
Is there a way to lower the keypoint detection threshold so that I can get more keypoints? Possibly by adding something under Tools -> Preferences -> Advanced -> Tweaks?
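For intuition only (a toy detector, not Metashape's internals, and the threshold numbers are made up): on low-contrast imagery the fraction of pixels clearing a fixed contrast threshold collapses, which is why lowering the threshold recovers so many detections. A numpy sketch:

```python
import numpy as np

def detect_keypoints(image, threshold):
    """Return (row, col) of pixels whose local gradient magnitude
    exceeds `threshold`, a stand-in for a FAST/ORB-style score."""
    gy, gx = np.gradient(image.astype(float))
    score = np.hypot(gx, gy)
    return np.argwhere(score > threshold)

rng = np.random.default_rng(0)
# Low-contrast "ice" frame: weak texture around a flat mean intensity.
ice = 128 + 2.0 * rng.standard_normal((100, 100))

strict = detect_keypoints(ice, threshold=10.0)   # default-like threshold
relaxed = detect_keypoints(ice, threshold=1.0)   # lowered threshold
print(len(strict), len(relaxed))  # far more detections at the lower threshold
```

Real detectors score a corner response (FAST/Harris) rather than raw gradient magnitude, but the threshold behaviour is the same.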

Thanks in advance.

General / Dense cloud coordinates after rotating Camera positions
« on: April 02, 2020, 11:43:14 PM »
I have a set of images without any reference, so when I align them in Metashape I end up with cameras in an arbitrary coordinate system. I know that these cameras lie on a flat plane (the XY plane), so I fit a plane to the calculated camera positions and then rotate them. I can then use the triangulate points function to generate the tie points / sparse cloud. When I export the sparse cloud, it is in the expected coordinate frame and everything makes sense. However, when I generate the dense cloud and export it, it is still in the original arbitrary coordinate system. What am I doing incorrectly, and how can I fix this?
Here are the steps to reproduce:
1. Load a set of images without any reference data.
2. Align images and get an accurate alignment solution.
3. Fit a plane P to the aligned images
4. Rotate the plane to make it parallel to XY plane
5. Calculate new camera poses T_new
6. Set the new poses on the cameras using cam.transform = Metashape.Matrix(T_new)
7. Calculate all the tie points using Metashape.app.document.chunk.triangulatePoints(max_error = 1.0)
8. Export the sparse cloud and check in CloudCompare -> everything makes sense.
9. Build the dense cloud.
10. Export the dense cloud and check in CloudCompare -> incorrect coordinate system.
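Steps 3 to 5 can be sketched with plain numpy (illustrative code, not the Metashape API; `centers` stands in for the aligned camera positions). Separately, and worth verifying against your API version: applying the leveling transform to `chunk.transform.matrix` rather than rewriting each `cam.transform` may be what keeps the dense cloud consistent, since the dense cloud is built in the chunk's internal frame and exports are mapped through the chunk transform.

```python
import numpy as np

def plane_rotation(centers):
    """Return a 3x3 rotation mapping the best-fit plane normal of
    `centers` (N x 3 camera positions) onto the +Z axis."""
    centered = centers - centers.mean(axis=0)
    # Smallest right-singular vector of the centered points = plane normal.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    z = np.array([0.0, 0.0, 1.0])
    if normal @ z < 0:                     # pick the upward-facing normal
        normal = -normal
    v = np.cross(normal, z)                # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), normal @ z   # sin and cos of the tilt angle
    if s < 1e-12:
        return np.eye(3)                   # already parallel to XY
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)   # Rodrigues' formula

rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(20, 2))
tilted = np.c_[xy, 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + 2.0]  # cameras on a tilted plane
R = plane_rotation(tilted)
leveled = (tilted - tilted.mean(axis=0)) @ R.T
print(np.ptp(leveled[:, 2]))  # ~0: the camera plane is now parallel to XY
```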

How do I get the dense cloud onto the new coordinate system?

Thanks in advance.

General / Feature Detector / Descriptors in Metashape
« on: September 16, 2019, 03:36:34 PM »
I have been studying and comparing various Feature Detectors and was wondering if anyone here knows what kind of Feature Detector / Descriptor is being used in Metashape?

Thanks in advance. Cheers.

Python and Java API / Animation to replicate image camera positions
« on: September 02, 2019, 06:12:35 AM »
I would like to create an animation looking at the 3D model from the image camera positions, i.e. the positions we get when we right-click an image and choose "Look Through".
When I tried to add these positions to the animation and check the transform, I see some difference in the numbers. Could someone suggest how to generate the animation camera track file containing all the image camera positions? A script would be nice, but if you can point out the math required, I can come up with the Python script.
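One guess at the source of the numeric difference, stated as an assumption rather than confirmed Metashape behaviour: `camera.transform` is a camera-to-world pose in the chunk's internal frame, so a track keyframe may need it composed with the chunk transform, or inverted into a view matrix. The 4x4 bookkeeping looks like this in numpy (the matrices here are hypothetical; the real chunk transform also carries scale):

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 camera-to-world pose from rotation R and center t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_pose(T):
    """Invert a rigid 4x4 pose analytically (world-to-camera view matrix)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical chunk-to-world transform and a camera pose in the chunk frame:
S = make_pose(np.eye(3), [10.0, 0.0, 0.0])
cam = make_pose(np.eye(3), [1.0, 2.0, 3.0])
world_pose = S @ cam              # the same pose expressed in world coordinates
print(world_pose[:3, 3])          # camera center shifts by the chunk translation
```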

Thanks in advance. Cheers!

Actually, this is on a fresh install of Metashape with no custom modules added to Python.

Is there a way to change the python version in Metashape say to 3.4 or something like that?


Bug Reports / Python Kernel Crashes with "ERROR: execution aborted"
« on: May 14, 2019, 12:15:16 AM »
I am running the latest version of Metashape on Ubuntu 16.04. I have observed that in the Python console, once I run any command that throws an error, the kernel crashes and doesn't respond to any further commands. The only message you then get is "ERROR: execution aborted".

Looks like this has been identified at

Any ideas how I can fix this issue in Metashape?


General / Re: Force alignment between certain images during optimization
« on: February 07, 2019, 02:05:58 AM »
Mark, answers to your questions:
1. Yes multiple times
2. Yes, this is how I can ensure I get cross matches between the images in the beginning and the end.  Generic pre-selection is able to pick it up though.
3. I use adaptive histogram equalization, which enhances features, so the matching is actually quite good. Here is the matching between the two consecutive images which are being mis-aligned.
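For reference, plain (global) histogram equalization can be sketched in a few lines of numpy; the adaptive variant (CLAHE) additionally tiles the image and clips each tile's histogram, and is typically applied with an image library before the images go into Metashape. The image below is synthetic:

```python
import numpy as np

def equalize(image):
    """Spread an 8-bit image's intensities over the full 0-255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map the cumulative distribution linearly onto 0..255.
    lut = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return lut.astype(np.uint8)[image]

rng = np.random.default_rng(2)
# Low-contrast frame: values squeezed into a narrow band, as on flat ice.
flat = rng.integers(118, 138, size=(64, 64), dtype=np.uint8)
enhanced = equalize(flat)
print(np.ptp(flat), np.ptp(enhanced))  # contrast expands toward the full range
```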

General / Force alignment between certain images during optimization
« on: February 07, 2019, 01:01:44 AM »
I am trying to run an alignment on a set of 4000 images taken while going around a large iceberg. If I leave the project to align automatically, it fails miserably. However, if I start from one end I am able to align and get the general shape of the path. The challenge is that when the loop is closed, the overlapping images don't seem to register that they are looking at the same thing and form a separate path.
I have been able to minimize this effect by starting from the overlapping area, which lets both runs of that area align well, and then sequentially aligning from both sides until they meet at the opposite end. However, there is still a small mis-alignment at the end which I am unable to get rid of. Here is what I have tried:
1. Multiple runs of optimize cameras while changing / holding different parameters.
2. Filtering the point cloud to remove 'bad' points or those with high uncertainty.

Please see attached screenshots for an idea on what is going on.

So, in the area where I have mis-alignment, the images are taken right next to each other and have considerable overlap.  Is there a way to force Metashape to keep them correctly aligned and adjust the others accordingly?


General / Export matching points
« on: December 05, 2018, 06:59:04 AM »
I see from comments in the forum that an old version had an option to export the image matching points. I could not find this option in the current version, 1.4. Can someone advise how to export the matching points in the current version for processing in different software?

