

Topics - Mariusz_M

1
Feature Requests / Hide all info layers option for Capture View
« on: January 30, 2025, 02:27:34 AM »
Sometimes I just want to create a screenshot of a model using Capture View without any info layers like points, region, XYZ axes, grid, trackball and everything else that is normally under Model -> Show/Hide Items.

So my request is to add one more option to the Capture View window: a "hide all info layers" checkbox, so that by ticking one box I can hide everything apart from the model in the actual screenshot, without the need to manually switch these layers off beforehand and then switch them back on after capturing the view I want.

Also, some of the options under Model -> Show/Hide Items already have icons, but they are not on the toolbar. A good idea would be to add them to the main toolbar, or to a sub-toolbar that could be switched on when needed.

2
Feature Requests / Open Magic Wand Tolerance Slider with Magic Wand
« on: January 30, 2025, 02:15:47 AM »
A simple request: open the Magic Wand tolerance slider every time the Magic Wand tool is selected, so I do not have to switch from the tool to its tolerance slider and back.

3
There is a bug in 2.2.0 that makes all deleted tie points reappear when I align one camera, even if this camera was not previously aligned.

Steps to recreate:

1. Align cameras on a photoset where you know not everything will align.
2. Make a note of the total number of tie points.
3. Select and delete a lot of tie points, like half of them, so you can clearly see what is missing.
4. Right-click a camera that is not aligned and choose "Align Selected Cameras". If it manages to align, it will show not only the points belonging to this camera, but also all the deleted points. The total number of tie points will equal what you noted in Step 2.

Plan B, in case the camera does not want to align:

5. Make a note of how many points you have after Step 3, since you could not complete Step 4.
6. Reset one of the cameras that is already aligned; once it is reset, the points belonging to this camera will disappear.
7. Re-align the camera using Right Click / Align Selected Cameras. You expect to bring back only the points belonging to this camera, and this is how it has always worked. But with version 2.2.0 you get all the points back from all the cameras, even the half deleted in Step 3, so the total number of points is again what you noted in Step 2.

It looks like all the tie points from the initial cloud are saved, and when a camera is aligned, instead of bringing back only the points that belong to it, the software brings back all the stored points of the initial cloud. This makes optimizing cameras difficult, because the points deleted manually or with gradual selection are back and are still taken into consideration for re-optimization.
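For anyone who wants to check this from the Python console, here is a minimal sketch of the Plan B steps, assuming the 2.x API where tie points live under chunk.tie_points (picking the first camera is just for illustration):

Code:
import Metashape

chunk = Metashape.app.document.chunk

def valid_tie_points(chunk):
    # Count tie points that have not been deleted.
    return sum(1 for p in chunk.tie_points.points if p.valid)

before = valid_tie_points(chunk)

camera = chunk.cameras[0]   # pick an already-aligned camera here
camera.transform = None     # reset its alignment
chunk.alignCameras(cameras=[camera], reset_alignment=False)

after = valid_tie_points(chunk)
print("tie points before: %d, after: %d" % (before, after))
# Expected: "after" grows only by this camera's points;
# in 2.2.0 it jumps back to the full initial count.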

4
Bug Reports / Look through shows the right view, but zoomed in
« on: December 08, 2024, 03:09:31 PM »
I have noticed that the Look Through option shows the correct point of view, but each time it is zoomed in a lot.

For a GoPro 11 the FOV is around 60 degrees. So when I set Perspective to 60 degrees and select the "Look through" option, I am shown the correct angle of view, but the viewport camera is too close. I have to use the mouse scroll to move the viewport camera away to actually match the original image, and then everything looks good. This happens for all photosets, on land or under water.
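One thing worth checking here, purely my own guess and not anything confirmed: published GoPro FOV figures are usually horizontal or diagonal, while a viewport Perspective value may be interpreted as vertical FOV, and a mismatch between the two conventions would show up exactly as an apparent zoom offset. Converting a horizontal FOV to vertical for a 4:3 frame:

Code:
import math

def vertical_fov(horizontal_fov_deg, width, height):
    # Convert horizontal FOV to vertical FOV for a given aspect ratio.
    h = math.radians(horizontal_fov_deg)
    return math.degrees(2 * math.atan(math.tan(h / 2) * height / width))

print(vertical_fov(60, 4, 3))  # ~46.8 degrees vertical for 60 degrees horizontal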

5
I use a 4K laptop screen, so it is set to high-DPI mode. When I put the computer to sleep and wake it up again, I get these warnings:

2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.
2024-11-17 13:26:29 monitorData: Unable to obtain handle for monitor '\\.\DISPLAY1', defaulting to 96 DPI.

The computer was not even processing anything; Metashape was just open in the background when this happened.

However, when I then set up batch processing, it runs for some time and finishes, and some time afterwards the computer goes into sleep mode; then I have another problem. When I wake the computer up, the Metashape window is a quarter of its size and not fully refreshed. There are some black parts, and the only thing I can do is kill the Metashape process and re-open it.

At the moment I have the newest Nvidia Studio drivers for the dedicated GPU and the newest Intel drivers for the integrated GPU.

6
Feature Requests / Number of enabled/disabled cameras in the chunk tree
« on: November 16, 2024, 03:59:01 PM »
At the moment in the chunk tree I can only see the number of cameras/images and how many are aligned. It would be useful to also have the number of enabled cameras there, especially after the "reduce overlap" process. At the moment I only see which ones are enabled/disabled, but I do not know how many.

Furthermore, the same two numbers would be useful for every camera group in the chunk tree. So if I have 10 camera groups, I can easily see how many cameras are disabled in each.
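Until something like this appears in the UI, the counting can be done from the Python console; a small sketch (the per-group bucketing is my own illustration):

Code:
import Metashape
from collections import Counter

chunk = Metashape.app.document.chunk

enabled = Counter()
total = Counter()
for camera in chunk.cameras:
    group = camera.group.label if camera.group else "(no group)"
    total[group] += 1
    if camera.enabled:
        enabled[group] += 1

for group in total:
    print("%s: %d of %d enabled" % (group, enabled[group], total[group]))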

7
Hi,

I put 200 images into Metashape Pro and tried to align them with preselection, using Medium accuracy and default settings. On my computer this should take up to 5 minutes, but it stopped at 0% during camera estimation and kept working on some "bruteforce estimator:"... for ages. I actually tried to cancel the whole thing, but it kept cancelling for 25 minutes and was still going, so I had to kill the process and restart Metashape.

At first I thought it might have had something to do with a few masks I put on the images in the first attempt. So the second time around I used the same 200 images, put one small rectangular mask on one image, and ran the alignment again. Same outcome... bruteforce estimator for ages...

The third time around I switched the photoset and used something similar, also 200 images, but different images and no masks at all... Still the same... It has already been running for over 25 minutes and is still at 0% of camera estimation.

I have used these photosets before with previous Metashape versions, so there is nothing wrong with the pictures, but something is clearly wrong with the software. Is there any tweak to switch off this "bruteforce estimator:" and keep using the current version as if it were an old one?

This is what I see in the console:

pair 3 and 38: 784 robust from 4943
pair 1 and 41: 1931 robust from 4954
pair 4 and 9: 1025 robust from 4928
additional pairs considered in 3.596 sec.
optimizing initial pair...
****************************************************************************************************
default pair score: n_aligned: 3, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 0.935702, accuracy: 0.00583939, n_points: 5757, ids: [1, 2]
optimal pair score: n_aligned: 3, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 0.683379, accuracy: 0.00306118, n_points: 3648, ids: [36, 38]
initial pair still not stable
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.91284, accuracy: 0.00280702, n_points: 6811, ids: [36, 38]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.90761, accuracy: 0.00280855, n_points: 6802, ids: [36, 38]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.90067, accuracy: 0.00280848, n_points: 6803, ids: [36, 38]
bruteforce estimator: n skipped: 5 / 40, tm_total = 63.3, tm0 = 0.004, tm1 = 0, tm2 = 61.67
initial pair 1/20 : t_calculate: 63.315, t_evaluate: 3.107
new best bruteforce evaluated score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.89912, accuracy: 0.00280921, n_points: 6804, ids: [36, 38]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.93716, accuracy: 0.00279238, n_points: 6822, ids: [36, 37]
bruteforce estimator: n skipped: 12 / 34, tm_total = 14.777, tm0 = 0.004, tm1 = 0, tm2 = 13.103
initial pair 2/20 : t_calculate: 14.796, t_evaluate: 3.02
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.26831, accuracy: 0.00420025, n_points: 7414, ids: [4, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.25748, accuracy: 0.00420639, n_points: 7404, ids: [4, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.25487, accuracy: 0.00420779, n_points: 7403, ids: [4, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.25487, accuracy: 0.00420779, n_points: 7403, ids: [4, 37]
bruteforce estimator: n skipped: 4 / 88, tm_total = 172.921, tm0 = 0.009, tm1 = 0, tm2 = 169.768
initial pair 3/20 : t_calculate: 172.937, t_evaluate: 5.848
new best bruteforce evaluated score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.25487, accuracy: 0.00420779, n_points: 7403, ids: [4, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.23821, accuracy: 0.0114749, n_points: 8571, ids: [1, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.22273, accuracy: 0.0114948, n_points: 8570, ids: [1, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.20827, accuracy: 0.0115137, n_points: 8568, ids: [1, 2]
bruteforce estimator: n skipped: 6 / 38, tm_total = 79.193, tm0 = 0.007, tm1 = 0, tm2 = 76.577
initial pair 4/20 : t_calculate: 79.218, t_evaluate: 6.22
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.2351, accuracy: 0.0114733, n_points: 8568, ids: [0, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.22412, accuracy: 0.0115737, n_points: 8569, ids: [0, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.21902, accuracy: 0.0115979, n_points: 8569, ids: [0, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.2172, accuracy: 0.011533, n_points: 8568, ids: [0, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.20202, accuracy: 0.011615, n_points: 8565, ids: [0, 2]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.12348, accuracy: 0.0115042, n_points: 8534, ids: [0, 2]
bruteforce estimator: n skipped: 4 / 90, tm_total = 142.041, tm0 = 0.013, tm1 = 0, tm2 = 138.578
initial pair 5/20 : t_calculate: 142.055, t_evaluate: 5.147
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.77631, accuracy: 0.0023308, n_points: 7353, ids: [5, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.73912, accuracy: 0.00233812, n_points: 7340, ids: [5, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.27191, accuracy: 0.00419602, n_points: 7416, ids: [5, 37]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.26893, accuracy: 0.00420003, n_points: 7420, ids: [5, 37]
bruteforce estimator: n skipped: 1 / 58, tm_total = 113.435, tm0 = 0.008, tm1 = 0, tm2 = 111.154
initial pair 6/20 : t_calculate: 113.451, t_evaluate: 5.2
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.93273, accuracy: 0.00279949, n_points: 6823, ids: [6, 36]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.90957, accuracy: 0.00280206, n_points: 6814, ids: [6, 36]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.90535, accuracy: 0.00280512, n_points: 6807, ids: [6, 36]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.48033, accuracy: 0.00231413, n_points: 7099, ids: [6, 36]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.4642, accuracy: 0.00231315, n_points: 7094, ids: [6, 36]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.40826, accuracy: 0.00231871, n_points: 7064, ids: [6, 36]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.37616, accuracy: 0.00232043, n_points: 7046, ids: [6, 36]
bruteforce estimator: n skipped: 10 / 100, tm_total = 150.656, tm0 = 0.012, tm1 = 0, tm2 = 146.891
initial pair 7/20 : t_calculate: 150.674, t_evaluate: 4.134
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.24525, accuracy: 0.011531, n_points: 8574, ids: [0, 1]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 2.20229, accuracy: 0.0115619, n_points: 8566, ids: [0, 1]
bruteforce estimator: n skipped: 7 / 44, tm_total = 83.581, tm0 = 0.007, tm1 = 0, tm2 = 80.927
initial pair 8/20 : t_calculate: 83.602, t_evaluate: 5.109
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.741, accuracy: 0.00980469, n_points: 7595, ids: [71, 72]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.741, accuracy: 0.00980469, n_points: 7595, ids: [71, 72]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.741, accuracy: 0.00980469, n_points: 7595, ids: [71, 72]
bruteforce estimator: n skipped: 5 / 34, tm_total = 55.448, tm0 = 0.003, tm1 = 0, tm2 = 54.28
initial pair 9/20 : t_calculate: 55.46, t_evaluate: 5.101
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.71491, accuracy: 0.0170769, n_points: 7128, ids: [21, 22]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.65335, accuracy: 0.0172729, n_points: 7088, ids: [21, 22]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.64756, accuracy: 0.0172456, n_points: 7090, ids: [21, 22]
bruteforce estimator: n skipped: 10 / 96, tm_total = 113.377, tm0 = 0.011, tm1 = 0, tm2 = 109.557
initial pair 10/20 : t_calculate: 113.399, t_evaluate: 4.322
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.81521, accuracy: 0.0116906, n_points: 8408, ids: [2, 4]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.78542, accuracy: 0.011712, n_points: 8389, ids: [2, 4]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.76972, accuracy: 0.0117169, n_points: 8384, ids: [2, 4]
bruteforce estimator: new best score: n_aligned: 5, n_points_tier: 1, accuracy_tier: 1, reprojection_error: 1.76804, accuracy: 0.0117205, n_points: 8382, ids: [2, 4]


8
I have been placing markers on a model made of around 8000 photos, which is not that big compared to other models I work with. As long as I place a new or an existing marker on a photo (when everything is already aligned), it works well. But when I accidentally press "Add Marker" instead of "Place Marker", it goes through every single photo, and that takes ages. I would simply like to be able to abort adding a marker by pressing "Esc" when it takes a long time.
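As a side note, a marker pinned to a single photo can also be created from the Python console without triggering that search across all photos; a minimal sketch (the camera index and pixel coordinates are made up for illustration):

Code:
import Metashape

chunk = Metashape.app.document.chunk
camera = chunk.cameras[0]    # the photo the marker should sit on

marker = chunk.addMarker()   # empty marker, no projections yet
# Pin the marker at pixel (x, y) on this one photo only.
marker.projections[camera] = Metashape.Marker.Projection(
    Metashape.Vector([1024.0, 768.0]), True)  # True = pinned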

9
I am working on a model made of 8000 photos. The model has 50M triangles.

Now each time I want to process 10 textures it takes ages, because it first wants to do it on the CPU instead of the GPU. Even if I click processing in the background and it closes the model, for some reason it does not have enough memory to do it anyway. The part that takes ages is blending textures, after the UVs are already done, and I do not understand why the size of the model matters at this stage. Creating 10 textures on a 3-5M triangle model takes only a few minutes.

A huge part of the process is "Estimating Quality". Is it the same as the Estimate Image Quality option? I ran Estimate Image Quality on all images just before texture generation and it took only around 35 minutes. It did not help, though. Now at the blending textures stage it is estimating quality again; it has been running for the last 6 hours and is so far 72% done.

10
Feature Requests / Second layer of masks
« on: May 03, 2024, 02:06:30 AM »
In underwater photogrammetry I quite often need to mask either divers or bigger fish. These masks must stay on during model creation, so that no diver or big fish affects the depth maps. Because of that, most of the time I am unable to use volumetric masks at the same time. Volumetric masks are quite useful for fixing some small problems with shipwreck parts.

The solution is quite simple: please introduce a second layer of masks. Then I would be able to draw separate masks for actual masking, and another, independent layer of masks for volumetric masking. During model creation I would then be able to select: depth maps - layer 1 masks or both; volumetric masks - layer 2 masks or both. Quite a simple solution, and I guess not very difficult from the programming side either.
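Until then, one workaround is to keep the two mask sets as image files on disk and swap them from the Python console between processing steps; a rough sketch (the folder layout and file name pattern are my own assumptions):

Code:
import Metashape

chunk = Metashape.app.document.chunk

def load_masks(folder):
    # Replace the current masks with the set stored in the given folder,
    # one mask file per photo, named after the image.
    chunk.generateMasks(path=folder + "/{filename}_mask.png",
                        masking_mode=Metashape.MaskingModeFile)

load_masks("masks_layer1")  # divers and fish, for depth maps
# ... build depth maps and the model here ...
load_masks("masks_layer2")  # masks intended for volumetric masking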

11
Hi.

There are several cases where prioritizing the closest photos for texture generation would be preferred, and also cases where it would be preferred to select a specific camera group and prioritize it in texture generation. So please implement these options if possible. Below are just two scenarios.

Case 1.

Flying a drone around a building: first flying far away, to get the overview and information about what is around the building, then flying much closer to focus only on the details of the walls. At the moment I have no control over which photos will be used for generating the textures on the walls, and in some cases the ones from far away will be chosen, although there are closer photos that better show the details I care about.

Case 2.

I have underwater photosets where I scanned a wreck. Most of the photos were taken with a 20 MP GoPro, and they help build the model well. But there are also some photos from a higher-resolution, higher-quality DSLR camera showing only close-up details in those parts of the wreck that viewers will most likely zoom into and expect detail. At the moment I have no control over which photos will be used for textures. In this case I would prefer to select the DSLR camera group and set "prioritize in texture generation".
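A blunt stopgap for now, since texture building only uses enabled cameras: temporarily disable everything outside the preferred group before texturing, then restore. A sketch (the group label is hypothetical):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Remember the current enabled state so it can be restored afterwards.
saved = {camera: camera.enabled for camera in chunk.cameras}

# Enable only the DSLR group for texturing (label made up for this example).
for camera in chunk.cameras:
    camera.enabled = bool(camera.group and camera.group.label == "DSLR close-ups")

# ... run Build Texture here ...

for camera, state in saved.items():
    camera.enabled = state

This only works where the preferred group actually covers the surface, so it is a workaround rather than real prioritization.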

12
Recently a new option was added to split models into blocks. There is not much about it in the manual. What would be the use of it, and how much does it differ from building a tiled model?

13
Hi.

I have a photoset recorded underwater, without GPS coordinates, using an autonomous boat. The same boat has a sonar with GPS and saves a track. I would like to use the CSV file from the sonar, which contains the track (trajectory) of the boat, to preselect and reference the photos. At the moment this does not seem possible, as the "import trajectory" function is meant for laser scanners.

So Metashape can already load the boat trajectory and display points with GPS coordinates and time. Now all that is needed is for Metashape to also read the timestamp of each photo during the alignment process and place it on the same trajectory based on the time the photo was taken. This could be another option next to the existing Reference preselection, like Reference Preselection: Timestamp.

This simple addition could help underwater photogrammetry a lot, since nowadays more and more underwater navigation devices can output a track which sometimes has lower accuracy than similar devices above water, but could easily be used for preselection of big datasets and rough referencing of the whole model.
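In the meantime the same idea can be scripted: interpolate each photo's position from the track using its Exif timestamp and write it into the camera reference, after which the normal Reference preselection becomes usable. A sketch, where the CSV columns and time formats are my own assumptions:

Code:
import csv, datetime
import Metashape

chunk = Metashape.app.document.chunk

# Load the sonar track; "time", "lon", "lat" columns are assumed.
track = []
with open("track.csv") as f:
    for row in csv.DictReader(f):
        t = datetime.datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S")
        track.append((t, float(row["lon"]), float(row["lat"])))
track.sort()

def interpolate(t):
    # Linear interpolation between the two nearest track points.
    for (t0, lon0, lat0), (t1, lon1, lat1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            k = (t - t0).total_seconds() / max((t1 - t0).total_seconds(), 1e-9)
            return lon0 + k * (lon1 - lon0), lat0 + k * (lat1 - lat0)
    return None

for camera in chunk.cameras:
    stamp = camera.photo.meta["Exif/DateTimeOriginal"]  # e.g. "2024:05:03 14:22:31"
    t = datetime.datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    pos = interpolate(t)
    if pos:
        camera.reference.location = Metashape.Vector([pos[0], pos[1], 0])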

14
Feature Requests / Pause/play button for background processing
« on: December 15, 2022, 04:23:07 PM »
I often process big projects and they run in the background for a few days. Every time I want to pause processing and use my computer for something else, I first need to re-open the project to pause it. It would be so much easier if I could just pause/resume background processing with one click, without loading the project back into memory.

15
Hi,

It has actually been happening for the last few versions and is sometimes quite annoying. I have an integrated Intel UHD and an Nvidia RTX 3070, so obviously I only want to use the Nvidia adapter, and it is the only one switched on in the preferences. However, from time to time, quite randomly, I see in the console that Metashape is using both GPUs. Then I go to the preferences and see both GPUs ticked. So I untick one and work on the project for several days, and then it happens again.

Also, even when the Intel GPU is switched off, it still seems to be used during depth map generation along with the NVIDIA GPU. Is that normal?
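For what it's worth, the device selection can also be inspected and pinned from the Python console via the GPU bitmask; a small sketch (which bit corresponds to which adapter depends on the machine, so check the printed list first):

Code:
import Metashape

# List the devices Metashape sees, in bitmask order.
for i, device in enumerate(Metashape.app.enumGPUDevices()):
    print(i, device["name"])

Metashape.app.gpu_mask = 1  # bit 0 only, e.g. the NVIDIA adapter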
