
Show Posts



Messages - Mariusz_M

1
I have been placing markers on a model made of around 8000 photos, which is not that big compared to other models I work with. As long as I place a new or an existing marker on a photo (when everything is already aligned), it works well. But when I accidentally press "Add Marker" instead of "Place Marker", it goes through every single photo and takes ages. I would simply like to be able to abort adding a marker by pressing "Esc" when it takes this long.
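As a stopgap, the same thing can be scripted. Below is a minimal sketch for the Metashape Python console that creates an empty marker and pins a projection on a single photo without scanning the whole photo set; the camera index and pixel coordinates are placeholders, not a confirmed workflow.

import Metashape  # run from the Metashape Python console

chunk = Metashape.app.document.chunk

# Placeholder choices: pick the photo and the pixel position of the feature yourself.
camera = chunk.cameras[0]
px, py = 1234.0, 2345.0

marker = chunk.addMarker()  # creates an empty marker without scanning all photos
marker.projections[camera] = Metashape.Marker.Projection(Metashape.Vector([px, py]), True)  # True = pinned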

2
I am working on a model made of 8000 photos. The model has 50M triangles.

Now each time I want to process 10 textures it takes ages, because it first wants to do it on the CPU instead of the GPU. Even if I start processing in the background and it closes the model, for some reason it does not have enough memory to do it anyway. The part that takes ages is blending textures, i.e. after the UVs are done, and I do not understand why the size of the model matters at this stage. Creating 10 textures on a 3-5M triangle model takes only a few minutes.

A huge part of the process is "Estimating quality". Is it the same as the Estimate Image Quality option? I ran Estimate Image Quality on all images just before texture generation and it took only around 35 minutes. It did not help, though. Now at the blending textures stage it is estimating quality again; it has been running for the last 6 hours and is so far 72% done.
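For what it's worth, the GUI's Estimate Image Quality can also be run and inspected from the Python console, which makes it easy to check that the values are already stored. This is only a sketch, assuming the analyzePhotos() call and the "Image/Quality" meta key of recent API versions:

import Metashape  # sketch for the Metashape Python console

chunk = Metashape.app.document.chunk

# Same metric as the GUI's Estimate Image Quality
# (older builds exposed it as chunk.analyzeImages()).
chunk.analyzePhotos(chunk.cameras)

for camera in chunk.cameras:
    # Assumption: the result is stored under this meta key after the call.
    print(camera.label, camera.meta["Image/Quality"])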

3
Feature Requests / Second layer of masks
« on: May 03, 2024, 02:06:30 AM »
In underwater photogrammetry I quite often need to mask out divers or bigger fish. Those masks must stay on during model creation, so that no diver or big fish affects the depth maps. Because of that, I am unable to use volumetric masks at the same time, which is most of the time. Volumetric masks are quite useful for fixing small problems with shipwreck parts.

The solution is quite simple: please introduce a second layer of masks. Then I would be able to draw one set of masks for actual masking and an independent layer of masks for volumetric masking. During model creation I could then select: depth maps - layer 1 masks or both; volumetric masks - layer 2 masks or both. A quite simple solution, and I guess not very difficult from the programming side either.

4
Hi.

There are several cases where prioritizing the closest photos for texture generation would be preferred. There are also cases where it would be preferable to select a specific camera group and prioritize it in texture generation. So please implement these options if possible. Below are just two scenarios.

Case 1.

Flying a drone around a building: first flying far away, to get an overview and information about what is around the building, then flying much closer to focus only on the details of the walls. At the moment I have no control over which photos will be used to generate the textures on the walls, and in some cases the ones from far away are chosen, although there are closer photos that better show the details I care about.

Case 2

I have underwater photosets where I scanned a wreck. Most of the photos are taken with a 20 Mpix GoPro and they work well for building the model. But there are also some photos from a higher-resolution, higher-quality DSLR camera showing only close-up details of those parts of the wreck where viewers will most likely zoom in and expect detail. At the moment I have no control over which photos will be used for textures. In this case I would prefer to select the camera group with the DSLR photos and set "prioritize in texture generation".
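Until such an option exists, a partial workaround for Case 2 can be scripted: temporarily disable everything except the detail camera group before building the texture. A rough sketch for the Python console follows; the group label is hypothetical, and areas not covered by that group would stay untextured, so this is not true prioritization.

import Metashape  # workaround sketch, not a built-in prioritization feature

chunk = Metashape.app.document.chunk

# "DSLR close-ups" is a placeholder; use the label of your own camera group.
detail_group = next(g for g in chunk.camera_groups if g.label == "DSLR close-ups")

saved = {camera: camera.enabled for camera in chunk.cameras}
for camera in chunk.cameras:
    camera.enabled = (camera.group == detail_group)   # texture only from the detail group

chunk.buildTexture(blending_mode=Metashape.MosaicBlending)

for camera, enabled in saved.items():                 # restore the original flags
    camera.enabled = enabled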

5
Recently a new option was added to split models into blocks. There is not much about it in the manual. What is the use of it, and how does it differ from building a tiled model?

6
It is a simple CSV file.

Latitude           Longitude          Depth(m)   Time(UTC)
46.5109524881009   6.6092179741136    0.00       2024-04-22T13:22:57.127
46.510946284954    6.60920896074062   0.00       2024-04-22T13:22:59.647
46.5109524881009   6.60920896074062   0.00       2024-04-22T13:22:59.942
46.510946284954    6.60920896074062   0.00       2024-04-22T13:23:00.239
46.510946284954    6.60920896074062   0.00       2024-04-22T13:23:01.964
46.5109400818063   6.60919994736765   0.00       2024-04-22T13:23:03.085
46.5109524881009   6.60920896074062   0.00       2024-04-22T13:23:15.794
46.5109524881009   6.6092179741136    0.00       2024-04-22T13:23:19.425
46.510946284954    6.6092179741136    0.00       2024-04-22T13:23:20.895
46.5109400818063   6.6092179741136    0.00       2024-04-22T13:23:23.162
46.5109338786579   6.6092179741136    0.00       2024-04-22T13:23:27.696
46.5109400818063   6.6092179741136    0.00       2024-04-22T13:23:31.141
46.5109400818063   6.60922698748657   0.00       2024-04-22T13:23:33.478

Result of the command:

{'Exif/ApertureValue': '2.5', 'Exif/DateTime': '2024:04:22 22:03:39', 'Exif/DateTimeOriginal': '2024:04:22 22:03:39', 'Exif/ExposureTime': '0.00162866', 'Exif/FNumber': '2.5', 'Exif/FocalLength': '2.71', 'Exif/FocalLengthIn35mmFilm': '15', 'Exif/ISOSpeedRatings': '184', 'Exif/Make': 'GoPro', 'Exif/Model': 'HERO10 Black', 'Exif/Orientation': '1', 'Exif/ShutterSpeedValue': '9', 'Exif/Software': 'H21.01.01.46.00', 'File/ImageHeight': '4176', 'File/ImageWidth': '5568', 'System/FileModifyDate': '2024:04:22 22:03:38', 'System/FileSize': '5137827'}

As you can see, the sonar on the boat creates a set of points, each of which represents GPS co-ordinates and time (UTC). So it is a line with known GPS co-ordinates every 1-2 seconds.

Each photo, shot every second, has a capture date, and once the internal GoPro clock is synchronised (it isn't at the moment), the times will correspond.

So it should be no problem to place these photos on the reference line (trajectory) created by the sonar by exact capture time, i.e. at or between the GPS/time points. This should allow both preselection based on capture time and referencing by the sonar track.

The sonar also saves depth, which is nothing other than negative height and can also be used in the calculations. In my photoset the depth is always 0, as the pictures are taken from the surface. But including depth in the calculations could make this new feature useful with ROVs and underwater navigation systems.
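While waiting for such a feature, the matching can be approximated with a script: read the sonar CSV, interpolate latitude/longitude/depth at each photo's Exif/DateTimeOriginal, and write the result into the camera reference. The sketch below is only an illustration; the file path, the tab delimiter, the WGS84 chunk coordinate system and the zero clock offset are all assumptions.

import csv
import bisect
from datetime import datetime, timedelta
import Metashape  # sketch for the Metashape Python console

TRACK_CSV = "sonar_track.csv"          # placeholder path to the sonar export
FMT_TRACK = "%Y-%m-%dT%H:%M:%S.%f"     # e.g. 2024-04-22T13:22:57.127
FMT_EXIF = "%Y:%m:%d %H:%M:%S"         # format of Exif/DateTimeOriginal
CLOCK_OFFSET = timedelta(seconds=0)    # adjust once the GoPro clock offset is known

# Load the track as (time, lat, lon, depth), skipping the header row.
track = []
with open(TRACK_CSV) as f:
    for row in csv.reader(f, delimiter="\t"):          # delimiter is an assumption
        try:
            lat, lon, depth, t = row
            track.append((datetime.strptime(t, FMT_TRACK), float(lat), float(lon), float(depth)))
        except ValueError:
            continue                                    # header or malformed line
track.sort()
times = [r[0] for r in track]

def interpolate(t):
    # Linear interpolation of lat/lon/depth between the two nearest track points.
    i = bisect.bisect_left(times, t)
    if i == 0:
        return track[0][1:]
    if i == len(track):
        return track[-1][1:]
    (t0, la0, lo0, d0), (t1, la1, lo1, d1) = track[i - 1], track[i]
    k = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return (la0 + k * (la1 - la0), lo0 + k * (lo1 - lo0), d0 + k * (d1 - d0))

chunk = Metashape.app.document.chunk   # assumes chunk.crs is WGS84 (lon, lat, height)
for camera in chunk.cameras:
    stamp = camera.photo.meta["Exif/DateTimeOriginal"]            # key shown in the dump above
    lat, lon, depth = interpolate(datetime.strptime(stamp, FMT_EXIF) + CLOCK_OFFSET)
    camera.reference.location = Metashape.Vector([lon, lat, -depth])   # depth = negative height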

7
Hi.

I have an underwater photoset without GPS co-ordinates, recorded with an autonomous boat. The same boat has a sonar with GPS and saves a track. I would like to use the CSV file from the sonar, which contains the track (trajectory) of the boat, to preselect and reference the photos. At the moment it does not seem possible, as the "import trajectory" function is for laser scanners.

So Metashape can already load the boat trajectory and display points with GPS co-ordinates and time. All that is needed now is for Metashape to also read the timestamp of each photo during the alignment process and place it on the same trajectory based on the time the photo was taken. There could be another option next to the existing Reference preselection, like "Reference preselection: Timestamp".

This simple addition could help underwater photogrammetry a lot, since nowadays more and more underwater navigation devices can output a track. Such a track sometimes has lower accuracy than that of similar devices above water, but it could easily be used for preselection in big datasets and for a rough reference of the whole model.

8
See below a part of the log. It mentions GPU 1 and GPU 2. I guess one of them is the Intel UHD?


2022-12-25 11:55:24 BuildModel: quality = Low, depth filtering = Mild, PM version, source data = Depth maps, surface type = Arbitrary, face count = Medium, volumetric masking = 0, OOC version, interpolation = Enabled, vertex colors = 0
2022-12-25 11:55:25 Generating depth maps...
2022-12-25 11:55:25 Preparing 41030 cameras info...
2022-12-25 11:55:42 91 cameras skipped (due to <=2 tie points)
2022-12-25 11:55:42 40939/41030 cameras prepaired
2022-12-25 11:55:42 cameras data loaded in 16.697 s
2022-12-25 12:01:24 cameras graph built in 342.142 s
2022-12-25 12:01:24 filtering neighbors with too low common points, threshold=50...
2022-12-25 12:01:24 Camera 1875 has no neighbors
2022-12-25 12:01:24 Camera 1876 has no neighbors
2022-12-25 12:01:24 Camera 2716 has no neighbors
2022-12-25 12:01:24 Camera 2994 has no neighbors
2022-12-25 12:01:24 Camera 3027 has no neighbors
2022-12-25 12:01:24 Camera 4501 has no neighbors
2022-12-25 12:01:24 Camera 4502 has no neighbors
2022-12-25 12:01:24 Camera 4503 has no neighbors
2022-12-25 12:01:24 Camera 4500 has no neighbors
2022-12-25 12:01:24 Camera 4885 has no neighbors
2022-12-25 12:01:24 Camera 4886 has no neighbors
2022-12-25 12:01:24 Camera 5626 has no neighbors
2022-12-25 12:01:24 Camera 5849 has no neighbors
2022-12-25 12:01:24 Camera 6058 has no neighbors
2022-12-25 12:01:24 Camera 6254 has no neighbors
2022-12-25 12:01:24 Camera 6349 has no neighbors
2022-12-25 12:01:24 Camera 5791 has no neighbors
2022-12-25 12:01:24 Camera 5876 has no neighbors
2022-12-25 12:01:24 Camera 5687 has no neighbors
2022-12-25 12:01:24 Camera 5943 has no neighbors
2022-12-25 12:01:24 Camera 5957 has no neighbors
2022-12-25 12:01:24 Camera 6070 has no neighbors
2022-12-25 12:01:24 Camera 9498 has no neighbors
2022-12-25 12:01:24 Camera 10118 has no neighbors
2022-12-25 12:01:24 Camera 5544 has no neighbors
2022-12-25 12:01:24 Camera 11388 has no neighbors
2022-12-25 12:01:24 Camera 11391 has no neighbors
2022-12-25 12:01:24 Camera 9499 has no neighbors
2022-12-25 12:01:24 Camera 11449 has no neighbors
2022-12-25 12:01:24 Camera 9500 has no neighbors
2022-12-25 12:01:24 Camera 9845 has no neighbors
2022-12-25 12:01:24 Camera 5050 has no neighbors
2022-12-25 12:01:24 Camera 9927 has no neighbors
2022-12-25 12:01:24 Camera 12370 has no neighbors
2022-12-25 12:01:24 Camera 10138 has no neighbors
2022-12-25 12:01:24 Camera 6140 has no neighbors
2022-12-25 12:01:24 Camera 10240 has no neighbors
2022-12-25 12:01:24 Camera 10557 has no neighbors
2022-12-25 12:01:24 Camera 11129 has no neighbors
2022-12-25 12:01:24 Camera 11302 has no neighbors
2022-12-25 12:01:24 Camera 11309 has no neighbors
2022-12-25 12:01:24 Camera 11358 has no neighbors
2022-12-25 12:01:24 Camera 6706 has no neighbors
2022-12-25 12:01:24 Camera 11661 has no neighbors
2022-12-25 12:01:24 Camera 11647 has no neighbors
2022-12-25 12:01:24 Camera 12210 has no neighbors
2022-12-25 12:01:24 Camera 9958 has no neighbors
2022-12-25 12:01:24 Camera 14024 has no neighbors
2022-12-25 12:01:24 Camera 15537 has no neighbors
2022-12-25 12:01:24 Camera 15643 has no neighbors
2022-12-25 12:01:24 Camera 15685 has no neighbors
2022-12-25 12:01:24 Camera 15686 has no neighbors
2022-12-25 12:01:24 Camera 15682 has no neighbors
2022-12-25 12:01:24 Camera 15681 has no neighbors
2022-12-25 12:01:24 Camera 13204 has no neighbors
2022-12-25 12:01:24 Camera 15540 has no neighbors
2022-12-25 12:01:24 Camera 15728 has no neighbors
2022-12-25 12:01:24 Camera 15726 has no neighbors
2022-12-25 12:01:24 Camera 15683 has no neighbors
2022-12-25 12:01:24 Camera 16298 has no neighbors
2022-12-25 12:01:24 Camera 16355 has no neighbors
2022-12-25 12:01:24 Camera 16306 has no neighbors
2022-12-25 12:01:24 Camera 15684 has no neighbors
2022-12-25 12:01:24 Camera 16603 has no neighbors
2022-12-25 12:01:24 Camera 17239 has no neighbors
2022-12-25 12:01:24 Camera 13188 has no neighbors
2022-12-25 12:01:24 Camera 17549 has no neighbors
2022-12-25 12:01:24 Camera 17597 has no neighbors
2022-12-25 12:01:24 Camera 21220 has no neighbors
2022-12-25 12:01:24 Camera 21237 has no neighbors
2022-12-25 12:01:24 Camera 16566 has no neighbors
2022-12-25 12:01:24 Camera 21333 has no neighbors
2022-12-25 12:01:24 Camera 16624 has no neighbors
2022-12-25 12:01:24 Camera 16372 has no neighbors
2022-12-25 12:01:24 Camera 17255 has no neighbors
2022-12-25 12:01:24 Camera 17315 has no neighbors
2022-12-25 12:01:24 Camera 21709 has no neighbors
2022-12-25 12:01:24 Camera 17461 has no neighbors
2022-12-25 12:01:24 Camera 16369 has no neighbors
2022-12-25 12:01:24 Camera 17622 has no neighbors
2022-12-25 12:01:24 Camera 16370 has no neighbors
2022-12-25 12:01:24 Camera 16373 has no neighbors
2022-12-25 12:01:24 Camera 16622 has no neighbors
2022-12-25 12:01:24 Camera 22263 has no neighbors
2022-12-25 12:01:24 Camera 21710 has no neighbors
2022-12-25 12:01:24 Camera 17451 has no neighbors
2022-12-25 12:01:24 Camera 24007 has no neighbors
2022-12-25 12:01:24 Camera 27797 has no neighbors
2022-12-25 12:01:24 Camera 29401 has no neighbors
2022-12-25 12:01:24 Camera 29923 has no neighbors
2022-12-25 12:01:24 Camera 21666 has no neighbors
2022-12-25 12:01:24 Camera 30698 has no neighbors
2022-12-25 12:01:24 Camera 32788 has no neighbors
2022-12-25 12:01:24 Camera 33471 has no neighbors
2022-12-25 12:01:24 Camera 33999 has no neighbors
2022-12-25 12:01:24 Camera 22217 has no neighbors
2022-12-25 12:01:24 Camera 33998 has no neighbors
2022-12-25 12:01:24 Camera 34944 has no neighbors
2022-12-25 12:01:24 Camera 35274 has no neighbors
2022-12-25 12:01:24 Camera 35500 has no neighbors
2022-12-25 12:01:24 Camera 35501 has no neighbors
2022-12-25 12:01:24 Camera 37080 has no neighbors
2022-12-25 12:01:24 Camera 37558 has no neighbors
2022-12-25 12:01:24 Camera 37627 has no neighbors
2022-12-25 12:01:24 Camera 29922 has no neighbors
2022-12-25 12:01:24 Camera 38007 has no neighbors
2022-12-25 12:01:24 Camera 38664 has no neighbors
2022-12-25 12:01:24 Camera 38701 has no neighbors
2022-12-25 12:01:24 Camera 38677 has no neighbors
2022-12-25 12:01:24 Camera 37569 has no neighbors
2022-12-25 12:01:24 Camera 39919 has no neighbors
2022-12-25 12:01:24 Camera 39967 has no neighbors
2022-12-25 12:01:24 Camera 40027 has no neighbors
2022-12-25 12:01:24 Camera 40594 has no neighbors
2022-12-25 12:01:24 Camera 41694 has no neighbors
2022-12-25 12:01:24 Camera 43127 has no neighbors
2022-12-25 12:01:24 Camera 44011 has no neighbors
2022-12-25 12:01:24 Camera 44065 has no neighbors
2022-12-25 12:01:24 Camera 38659 has no neighbors
2022-12-25 12:01:24 Camera 45612 has no neighbors
2022-12-25 12:01:24 Camera 46399 has no neighbors
2022-12-25 12:01:24 Camera 46601 has no neighbors
2022-12-25 12:01:24 Camera 38709 has no neighbors
2022-12-25 12:01:24 Camera 46791 has no neighbors
2022-12-25 12:01:24 Camera 46611 has no neighbors
2022-12-25 12:01:24 Camera 38708 has no neighbors
2022-12-25 12:01:24 Camera 46728 has no neighbors
2022-12-25 12:01:24 Camera 48103 has no neighbors
2022-12-25 12:01:24 Camera 48702 has no neighbors
2022-12-25 12:01:24 Camera 48780 has no neighbors
2022-12-25 12:01:24 Camera 46800 has no neighbors
2022-12-25 12:01:24 Camera 48797 has no neighbors
2022-12-25 12:01:24 Camera 49476 has no neighbors
2022-12-25 12:01:24 Camera 50189 has no neighbors
2022-12-25 12:01:24 Camera 50788 has no neighbors
2022-12-25 12:01:24 Camera 50848 has no neighbors
2022-12-25 12:01:24 Camera 50825 has no neighbors
2022-12-25 12:01:24 Camera 51074 has no neighbors
2022-12-25 12:01:24 Camera 51147 has no neighbors
2022-12-25 12:01:24 Camera 51229 has no neighbors
2022-12-25 12:01:24 Camera 46602 has no neighbors
2022-12-25 12:01:24 Camera 51575 has no neighbors
2022-12-25 12:01:24 Camera 51581 has no neighbors
2022-12-25 12:01:24 Camera 51099 has no neighbors
2022-12-25 12:01:24 Camera 51589 has no neighbors
2022-12-25 12:01:24 Camera 51685 has no neighbors
2022-12-25 12:01:24 Camera 51688 has no neighbors
2022-12-25 12:01:24 Camera 51712 has no neighbors
2022-12-25 12:01:24 avg neighbors before -> after filtering: 55.0101 -> 9.9775 (82% filtered out)
2022-12-25 12:01:24 limiting neighbors to 16 best...
2022-12-25 12:04:01 avg neighbors before -> after filtering: 9.98116 -> 2.08355 (8% filtered out)
2022-12-25 12:04:01 neighbors number min/1%/10%/median/90%/99%/max: 0, 1, 4, median=9, 16, 16, 16
2022-12-25 12:04:01 cameras info prepared in 516.736 s
2022-12-25 12:04:16 saved cameras info in 14.289
2022-12-25 12:04:16 Partitioning 40939 cameras...
2022-12-25 12:04:16 number of mini clusters: 3150
2022-12-25 12:04:16 1050 groups: avg_ref=38.9895 avg_neighb=55.9429 total_io=243%
2022-12-25 12:04:16 max_ref=39 max_neighb=126 max_total=165
2022-12-25 12:04:16 cameras partitioned in 0.098 s
2022-12-25 12:04:16 saved depth map partition in 0.009 sec
2022-12-25 12:04:19 loaded cameras info in 0.778
2022-12-25 12:04:19 loaded depth map partition in 0.001 sec
2022-12-25 12:04:19 already partitioned (38<=50 ref cameras, 36<=200 neighb cameras)
2022-12-25 12:04:19 group 1/1: preparing 74 cameras images...
2022-12-25 12:04:19 point cloud loaded in 0.18 s
2022-12-25 12:04:21 Found 2 GPUs in 0.009 sec (CUDA: 0.002 sec, OpenCL: 0.007 sec)
2022-12-25 12:04:22 Using device: NVIDIA GeForce RTX 3070 Laptop GPU, 40 compute units, free memory: 6815/8191 MB, compute capability 8.6
2022-12-25 12:04:22   driver/runtime CUDA: 12000/10010
2022-12-25 12:04:22   max work group size 1024
2022-12-25 12:04:22   max work item sizes [1024, 1024, 64]
2022-12-25 12:04:30 group 1/1: cameras images prepared in 10.547 s
2022-12-25 12:04:30 group 1/1: 74 x frame
2022-12-25 12:04:30 group 1/1: 74 x uint8
2022-12-25 12:04:30 group 1/1: expected peak VRAM usage: 88 MB (48 MB max alloc, 1000x1125 mipmap texture, 12 max neighbors)
2022-12-25 12:04:30 Found 2 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2022-12-25 12:04:30 Using device: NVIDIA GeForce RTX 3070 Laptop GPU, 40 compute units, free memory: 6815/8191 MB, compute capability 8.6
2022-12-25 12:04:30   driver/runtime CUDA: 12000/10010
2022-12-25 12:04:30   max work group size 1024
2022-12-25 12:04:30   max work item sizes [1024, 1024, 64]
2022-12-25 12:04:30 Using device 'NVIDIA GeForce RTX 3070 Laptop GPU' in concurrent. (2 times)
2022-12-25 12:04:30 Camera 51575 skipped (no neighbors)
2022-12-25 12:04:30 Camera 51589 skipped (no neighbors)
2022-12-25 12:04:30 [GPU 1] group 1/1: estimating depth map for 1/36 camera 48718 (9 neighbs)...
2022-12-25 12:04:30 [GPU 2] group 1/1: estimating depth map for 2/36 camera 48719 (5 neighbs)...
2022-12-25 12:04:30 [GPU 1] Camera 48718 samples after final filtering: 51% (2.05146 avg inliers) = 100% - 3% (not matched) - 16% (bad matched) - 2% (no neighbors) - 9% (no cost neighbors) - 13% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 7% (speckles filtering)
2022-12-25 12:04:30 [GPU 1] Camera 48718: level #3/3 (x8 downscale: 500x375, image blowup: 1000x750) done in 0.063 s = 24% propagation + 33% refinement + 25% filtering + 0% smoothing
2022-12-25 12:04:30 Peak VRAM usage updated: Camera 48718 (9 neihbs): 73 MB = 36 MB gpu_tmp_hypo_ni_cost (49%) + 12 MB gpu_tmp_normal (16%) + 8 MB gpu_neighbImages (12%) + 4 MB gpu_tmp_depth (5%) + 4 MB gpu_tmp_avg_cost (5%) + 2 MB gpu_tmp_cost_ni_inliers_masks (3%) + 1 MB gpu_mipmapNeighbImage (1%) + 0 MB gpu_neighbMasks (1%) + 0 MB gpu_refImage (1%) + 0 MB gpu_depth_map (1%)
2022-12-25 12:04:31 [GPU 1] group 1/1: estimating depth map for 3/36 camera 48720 (4 neighbs)...
2022-12-25 12:04:31 [GPU 2] Camera 48719 samples after final filtering: 54% (1.7128 avg inliers) = 100% - 3% (not matched) - 16% (bad matched) - 2% (no neighbors) - 7% (no cost neighbors) - 12% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 6% (speckles filtering)
2022-12-25 12:04:31 [GPU 2] Camera 48719: level #3/3 (x8 downscale: 500x375, image blowup: 1000x750) done in 0.046 s = 26% propagation + 24% refinement + 20% filtering + 0% smoothing
2022-12-25 12:04:31 [GPU 2] group 1/1: estimating depth map for 4/36 camera 48721 (6 neighbs)...
2022-12-25 12:04:31 [GPU 1] Camera 48720 samples after final filtering: 47% (1.7595 avg inliers) = 100% - 2% (not matched) - 22% (bad matched) - 3% (no neighbors) - 8% (no cost neighbors) - 11% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 7% (speckles filtering)
2022-12-25 12:04:31 [GPU 1] Camera 48720: level #3/3 (x8 downscale: 500x375, image blowup: 1000x750) done in 0.057 s = 32% propagation + 26% refinement + 21% filtering + 0% smoothing
2022-12-25 12:04:31 [GPU 1] group 1/1: estimating depth map for 5/36 camera 48722 (8 neighbs)...
2022-12-25 12:04:31 [GPU 2] Camera 48721 samples after final filtering: 52% (2.20245 avg inliers) = 100% - 2% (not matched) - 19% (bad matched) - 2% (no neighbors) - 7% (no cost neighbors) - 11% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 6% (speckles filtering)
2022-12-25 12:04:31 [GPU 2] Camera 48721: level #3/3 (x8 downscale: 500x375, image blowup: 1000x750) done in 0.043 s = 23% propagation + 26% refinement + 16% filtering + 0% smoothing
2022-12-25 12:04:31 [GPU 2] group 1/1: estimating depth map for 6/36 camera 48777 (4 neighbs)...
2022-12-25 12:04:31 [GPU 1] Camera 48722 samples after final filtering: 54% (2.33461 avg inliers) = 100% - 3% (not matched) - 15% (bad matched) - 1% (no neighbors) - 7% (no cost neighbors) - 13% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 7% (speckles filtering)

9
Feature Requests / Pause/play button for background processing
« on: December 15, 2022, 04:23:07 PM »
I often process big projects and they run in the background for a few days. Every time I want to pause processing and use my computer for something else, I first need to re-open the project to pause it. It would be so much easier if I could just pause/resume background processing with one click, without loading the project back into memory.

10
Hi,

It has actually been happening for the last few versions and is sometimes quite annoying. I have an integrated Intel UHD and an NVIDIA RTX 3070. Obviously I only want to use the NVIDIA adapter, and it is the only one switched on in the preferences. However, from time to time, quite randomly, I see in the console that Metashape uses both GPUs. Then I go to the preferences and see both GPUs ticked. So I untick one, work on the project for several days, and then it happens again.

Also, even when the Intel GPU is switched on, it still seems to be used during depth map generation along with the NVIDIA GPU. Is that normal?
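In case it helps to verify what is actually enabled, the GPU selection can be checked and forced from the Python console. A small sketch, assuming enumGPUDevices() reports a "name" field and that the device order matches the gpu_mask bits:

import Metashape  # sketch for the Metashape Python console

devices = Metashape.app.enumGPUDevices()
for i, device in enumerate(devices):
    print(i, device["name"])               # assumption: each entry carries a "name" field

# gpu_mask is a bitmask over that list: bit i enables device i.
mask = 0
for i, device in enumerate(devices):
    if "NVIDIA" in device["name"]:         # keep only the discrete card
        mask |= 1 << i
Metashape.app.gpu_mask = mask
Metashape.app.cpu_enable = False           # usually recommended when a discrete GPU does the work
print("gpu_mask =", bin(Metashape.app.gpu_mask))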

11
Feature Requests / Add UNDO support for accidently deselected selections
« on: February 01, 2022, 09:22:39 PM »
Sometimes I use the selection tools on a mesh, creating a quite complicated selection using the SHIFT and CTRL keys. But if I accidentally fail to press either of the keys, the whole selection is lost and I need to start again. UNDO does not work in this case. Can you please change that? Then the next time I accidentally deselect what I have spent 20 minutes selecting, I can just undo the deselection and continue selecting.

12
I use the intelligent scissors a lot and I have a high-DPI screen, so the points created with the tool are very small. It does not affect the process much, but when I want to close the polygon, I actually need to zoom in a few times and it is still difficult to click exactly where the starting point is. If the points were bigger, or if it were possible to click near the point to auto-close, it would be so much easier.

Also, an auto-close option should be available in the right-click menu. Every time I wanted to close a selection created with the tool, I would just use the auto-close option and Metashape would connect the last point to the first with a straight line, even if they were far apart. So much easier.


13
I use the magic wand tool for masking a lot. And since the beginning, to change the tolerance of the tool, I have had to go to a separate tool/window, set the new value, and come back to using the magic wand. Can you please make the tolerance setting available whenever the wand is used, so I can easily readjust the tolerance each time I need to?

14
I had 1.8.0 very briefly. I started processing one of my bigger models and it kept crashing. It never got to the texturing stage. So I downgraded to 1.7.6 to be able to work on it.

15
Quite often, when processing textures, I end up with some low-quality patches. It happens only because Metashape chooses a more distant photograph for that particular spot. Could you please add a tickbox to the texture generation dialog that would prioritise the closest, and therefore most detailed, photos? It would save a lot of the manual work I currently do disabling the farthest photos at the texture generation stage.
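Until then, that manual part can at least be automated from the Python console: disable cameras whose projection centre is farther from the region centre than some threshold, build the texture, then re-enable everything. A rough sketch follows; the threshold is arbitrary, distances are in the chunk's internal units, and it assumes all cameras start out enabled.

import Metashape  # workaround sketch, not a built-in "prioritize closest" option

chunk = Metashape.app.document.chunk
center = chunk.region.center               # stand-in for "the part being textured"

MAX_DIST = 10.0                            # placeholder threshold, in chunk units
for camera in chunk.cameras:
    if camera.center is None:              # skip not-aligned cameras
        continue
    camera.enabled = (camera.center - center).norm() <= MAX_DIST

chunk.buildTexture(blending_mode=Metashape.MosaicBlending)

for camera in chunk.cameras:               # re-enable everything afterwards
    camera.enabled = True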
