
Show Posts



Topics - andyroo

16
I would love, both in the GUI and in the Python API, to be able to copy a selected area of a chunk into a new chunk, similar to this post. At the moment I have to duplicate the whole chunk and then prune it to a certain area. With tens of thousands of images and hundreds of millions of tie points this is quite tedious. If I could select tie points and markers by area (lat/lon) or within a shapefile boundary, then select photos by tie points, then copy the selection to a new chunk (and/or do the same via manual selection in the GUI), it would make me even happier than I already am :-)
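
For reference, the workaround I'm scripting in the meantime looks roughly like this (an untested sketch against the 1.6 Python API; the lon/lat bounds are placeholders and I'm assuming the chunk CRS is geographic):

Code: [Select]
import Metashape
doc = Metashape.app.document
chunk = doc.chunk
# placeholder sub-area of interest, in the chunk CRS (assumed lon/lat here)
lon_min, lon_max = -122.50, -122.40
lat_min, lat_max = 37.70, 37.80
new_chunk = chunk.copy()                    # duplicate the whole chunk first...
T = new_chunk.transform.matrix
outside = []
for camera in new_chunk.cameras:
    if camera.transform is None:            # skip unaligned cameras
        continue
    pos = new_chunk.crs.project(T.mulp(camera.center))   # estimated position in chunk CRS
    if not (lon_min <= pos.x <= lon_max and lat_min <= pos.y <= lat_max):
        outside.append(camera)
new_chunk.remove(outside)                   # ...then prune everything outside the box
new_chunk.label = chunk.label + "_subarea"
doc.save()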

17
I'm running gradual selection on a 386-million-point tie point cloud (from 82,000 cameras), and the initial "Analyzing point cloud" step - after I select which parameter to use and before I choose the gradual selection threshold - is taking quite a while. It looks like it will be about 5-6 hours on my workstation (80% done at 4 h), and it will take > 8 hours on an HPC login node with slower RAM. Wondering if there's anything I can do to speed this up (headless? 1.7.x?).

On the workstation (Threadripper 3960x with 256 GB 3000 MHz RAM and 2x RTX 2080 Super GPUs) it appears to be using minimal CPU resources (maybe a single core; Resource Monitor says ~4% CPU), and it's using about 60 GB of the 256 GB of RAM (total system usage is about 72 GB).
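
For what it's worth, the headless route I'm considering looks roughly like this (untested sketch against the 1.6 Python API; the project path, criterion, and threshold below are placeholders):

Code: [Select]
import Metashape
doc = Metashape.Document()
doc.open("project.psx")                     # hypothetical project path
chunk = doc.chunk
f = Metashape.PointCloud.Filter()
# this init() call is where the "Analyzing point cloud" work happens
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReconstructionUncertainty)
f.selectPoints(15.0)                        # placeholder threshold
chunk.point_cloud.removeSelectedPoints()
doc.save()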

18
I am wondering about the AlignCameras.finalize step in network processing. I see that it is limited to one node, and that if the node dies it has to restart the finalize step. Is ~24 h for the finalize step reasonable on Metashape 1.6.5 with ~82,000 cameras (36 MP)? I notice that this project has some "merging failed" messages, and a similar run (same project with different camera accuracies and no GCPs) seemed to finish much faster - although I did have to restart this node ~20 h in because the job expired. I also have two copies of the chunk in the same project, but I'm only operating on one of them.

Node is dual 18-core CPUs @ 2.3GHz with 376GB RAM

Andy

19
This is a 2-part feature request:

1) Add the ability to output Cloud-Optimized GeoTIFFs (COGs) in Export Orthomosaic (TIF/JPG/PNG...) and Export DEM (TIF/BIL/XYZ...).
2) Add/combine tiled export options so that one set of tiles (plus the associated template HTML and KML) can be used with multiple viewers, including KML superoverlay, Leaflet, OpenLayers, Google Maps, and other tile map services.

Currently I do this with GDAL, but it requires multiple passes and is MUCH slower than Metashape's efficient raster outputs. We are shifting to the Cloud-Optimized GeoTIFF format for DEMs and orthos and starting to use tiled datasets more for serving ortho products to end users, so exporting a temporary raster or raster tiles that I post-process into something else with slow tools is becoming a significantly inefficient part of my workflow.

Right now I export TIFFs from Metashape either as a single TIFF or as tiles, then use gdal_translate -of COG to generate COGs (slow), gdalbuildvrt and gdalwarp to make VRTs in EPSG:4326 (fast), and gdal_translate -of KMLSUPEROVERLAY -co FORMAT=AUTO (very slow) or gdal2tiles -k --processes=[NumCores] (fast, but sometimes buggy, with gdalwarp generating a LOT of empty tiles) to make the KML superoverlay.
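
Scripted, that post-processing looks roughly like the sketch below (the paths, compression options, and core count are placeholders, and I'm assuming exportRaster with source_data is the right call on the Metashape side):

Code: [Select]
import subprocess
import Metashape
chunk = Metashape.app.document.chunk
tif = "ortho_temp.tif"                      # hypothetical temporary path
# 1) export the orthomosaic from Metashape as a plain GeoTIFF (fast)
chunk.exportRaster(tif, source_data=Metashape.OrthomosaicData)
# 2) repack it as a Cloud-Optimized GeoTIFF with GDAL (the slow extra pass)
subprocess.run(["gdal_translate", "-of", "COG", "-co", "COMPRESS=DEFLATE",
                tif, "ortho_cog.tif"], check=True)
# 3) build the KML superoverlay / web tiles from the same source
subprocess.run(["gdal2tiles.py", "-k", "--processes=16", tif, "tiles/"], check=True)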

gdal2tiles is nice because it automagically creates viewers for Leaflet, OpenLayers, Google Maps, and KML (with the -k option). It also uses all cores when building the KML superoverlay, which gdal_translate doesn't, although gdal_translate supports hybrid JPG/PNG tiles (for edge transparency). If I could do all of this within Metashape, I would jump for joy.

I would love to be able to do these things within Metashape in the export dialog/API - especially being able to create the KML superoverlay not as one giant zip but as a folder hierarchy, and being able to use the same tiles for KML/Google Maps/OpenLayers/Leaflet - there's a lot of flexibility there. If that also generated a VRT, or something I could generate a VRT from (something GDAL recognizes as a raster), then I could use those tiles for my COG even if that feature wasn't implemented, and it would be much more streamlined.

[edit] - the "-co FORMAT=AUTO" option with gdal_translate is nice for optimizing the size of KML layers, but I'm not sure how it would work when sharing the same tiles with tile services - probably not well, so maybe that wouldn't make sense to do.

Also, offering the KML superoverlay with network links as an option instead of a KMZ would be nice, because then the whole tile hierarchy could be more easily moved to online services.

Andy


20
Feature Requests / crop raster to nodata extent?
« on: December 10, 2020, 04:55:51 AM »
For some reason I was under the impression that rasters were built only to the data extent. Is it actually the bounding box?

If so, it would be nice to have a "crop to data extent" option. I have a project with three sets of images that overlap but cover different sub-parts of the total aligned project. I iterated through the aligned sets, disabling all but one to generate three different dense clouds, then built DEMs for each dense cloud. The DEMs all have roughly the same extent (the extent of the total cloud) even though one of them only occupies about 10% of the area. That makes the raster size and export time much larger/longer than if they were cropped to the data extent, and now I have to trim to the data extent in external software. Doing this with a shapefile is complicated because I am trying to maintain integer bounding boxes, and it would be nice to be able to do it procedurally (python script) or in batch.
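
The closest scripted workaround I can think of is passing an explicit region to the export, something like the sketch below (untested; whether exportRaster's region argument behaves this way for my case is an assumption on my part, and the bounds are placeholders):

Code: [Select]
import Metashape
chunk = Metashape.app.document.chunk
# integer-aligned bounding box of the sub-area, in the export CRS (placeholder values)
bbox = Metashape.BBox(Metashape.Vector([500000, 4180000]),
                      Metashape.Vector([502000, 4183000]))
chunk.exportRaster("dem_subarea.tif",
                   source_data=Metashape.ElevationData,
                   region=bbox)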

21
I attempted to save out four approximately equal subset regions from a large (~80,000 image) project as separate chunks, to process them more efficiently. The original project was 467 GB, and most of that was saved key points (437.14 GB). After trimming to sub-regions and running the Tools/Preferences/Advanced/Clean up project... task (no junk files found), the smaller chunks were still around 440 GB - with exactly the same size key point file (437.14 GB).

To trim the project extent, I selected the region(s) to delete, then right-click/Remove Cameras, reset the region, and saved out the new chunk.

Is there a way to delete the key points that are no longer referenced by any photo in the project?

Andy

22
I am trying to figure out how - or if - I can write a script to select or de-select only tie points/matches that are shared between camera groups (camera folders) in a chunk. I want to treat tie points shared between groups differently from tie points within a single group while performing gradual selection filtering, so after I select points with a certain threshold, I want to be able to deselect points whose cameras either do or don't span multiple groups (I want to do both at different times). If I knew how to do one I could do the other, but I'm confused about where to start.

Is this possible to do? I'm not sure if I should start with chunk.point_cloud.projections or chunk.point_cloud.points or chunk.point_cloud.cameras. It looks like I can access chunk.point_cloud.projections by camera, but there's not a straightforward way to get a list of cameras for a given projection (tie point?).

Then I guess I'd have to parse that list and see if it had more than one camera group?

My brain wants to iterate through points (for point in chunk.point_cloud.points:) but maybe I have to iterate through cameras?
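
To make the question concrete, this is roughly the direction I've sketched so far (untested, and I don't know if going through projections like this is the right or most efficient approach):

Code: [Select]
import Metashape
from collections import defaultdict
chunk = Metashape.app.document.chunk
point_cloud = chunk.point_cloud
# map each track_id to the set of camera groups (folders) that observe it
groups_per_track = defaultdict(set)
for camera in chunk.cameras:
    if camera.group is None or camera.transform is None:
        continue
    for proj in point_cloud.projections[camera]:
        groups_per_track[proj.track_id].add(camera.group)
# example: deselect already-selected points that are NOT shared between groups
for point in point_cloud.points:
    if point.selected and len(groups_per_track[point.track_id]) < 2:
        point.selected = False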

Any hints would be super-appreciated.

Andy

23
Feature Requests / Reuse depth maps - generate if not exist
« on: December 04, 2020, 05:29:37 AM »
Not sure about how other folks use this, but I often disable a subset of aligned images, generate a dense cloud, then enable some/disable some images, and generate another dense cloud.

"reuse depth maps" defaults to "checked" in the dialog box (advanced settings, hidden by default) (probably defaults to checked because of my "keep depth maps" setting), but if depth maps don't exist I get a "zero resolution" error, rather than (1) depth maps being created for images that don't have them, or (2) a prompt saying they don't exist.

I would prefer that when I have "keep depth maps" selected in advanced preferences, "reuse depth maps" does not regenerate existing depth maps but DOES generate the depth maps that don't exist.
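
In scripts I currently only guard against the all-or-nothing case, roughly like below (a sketch; the parameters are placeholders), which is part of why I'd like the per-image behavior described above:

Code: [Select]
import Metashape
chunk = Metashape.app.document.chunk
if chunk.depth_maps is None:
    # nothing saved yet: build depth maps from scratch
    chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
else:
    # depth maps exist: reuse them (doesn't help when only some images have them)
    chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering, reuse_depth=True)
chunk.buildDenseCloud()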

Also a quick note - Thank you Agisoft developers for being so responsive to user questions/requests/comments. It makes for a very nice user experience, and I think that it generally shows in the helpfulness and enthusiasm of the user community.

24
Bug Reports / Network Project completed, but won't open?
« on: December 03, 2020, 03:04:00 AM »
I finished and closed a project yesterday in network processing and tried to open it today in Metashape, and it's stuck on "processing..." and "untitled" (I tried to open it a good half-hour ago). The task is #9, visible in the monitoring console in the screenshot.

The storage is high-speed storage on an HPC optimized for machine learning, so file access speed shouldn't be a problem.

[edit] - I tried this with an idle node in case there was cleanup work needing to be done, but the log shows that cleanup completed.

25
General / Network vs Non-network alignment performance
« on: December 02, 2020, 10:28:00 AM »
I was comparing alignment time on a relatively large project, network vs non-network, and was surprised that the non-network machine seems to be going much faster (~4x), especially in alignment finalization. One thing I noticed is that the node is only processing 7 images at a time, while the workstation is processing ~20. The workstation (Threadripper 3960x/256GB RAM/2x RTX 2080 Super) takes 5-6 minutes to adjust points for each 20-image batch, while the network node (2x 18-core 2.3GHz Skylake CPUs/384GB RAM/4x NVidia V100) takes 7-8 minutes to adjust points for each 7-image batch. I understand that the 3960x is higher frequency, but not why the machine with more RAM/cores takes fewer images at a time. The project is the same, just copied to the network with the image paths changed, and the network nodes have faster disk/network access than my workstation.

log excerpts below:

Code: [Select]
...
2020-12-01 22:52:00 adding camera 75068 (77523 of 80065), 1128 of 1132 used
2020-12-01 22:52:00 adding camera 76188 (77524 of 80065), 1056 of 1056 used
2020-12-01 22:52:43 adding 116073 points, 45 far (12.272 threshold), 311 inaccurate, 346 invisible, 129 weak
2020-12-01 22:53:48 adjusting: xxxxxxxxxx 0.268141 -> 0.267689
2020-12-01 22:59:16 adding 964 points, 413 far (12.272 threshold), 322 inaccurate, 352 invisible, 131 weak
2020-12-01 22:59:16 optimized in 393.425 seconds
2020-12-01 22:59:36 adding camera 77283 (77525 of 80065), 7480 of 7500 used
2020-12-01 22:59:36 adding camera 78352 (77526 of 80065), 5854 of 5858 used
2020-12-01 22:59:36 adding camera 76647 (77527 of 80065), 4961 of 4964 used
2020-12-01 22:59:36 adding camera 77607 (77528 of 80065), 4405 of 4415 used
2020-12-01 22:59:36 adding camera 77284 (77529 of 80065), 3959 of 3986 used
2020-12-01 22:59:36 adding camera 77323 (77530 of 80065), 3047 of 3059 used
2020-12-01 22:59:36 adding camera 76345 (77531 of 80065), 2832 of 2832 used
2020-12-01 22:59:36 adding camera 77427 (77532 of 80065), 2829 of 2833 used
2020-12-01 22:59:36 adding camera 76648 (77533 of 80065), 2721 of 2727 used
2020-12-01 22:59:36 adding camera 78353 (77534 of 80065), 2613 of 2620 used
2020-12-01 22:59:36 adding camera 78543 (77535 of 80065), 2494 of 2504 used
2020-12-01 22:59:36 adding camera 76189 (77536 of 80065), 2260 of 2260 used
2020-12-01 22:59:36 adding camera 77608 (77537 of 80065), 2020 of 2030 used
2020-12-01 22:59:36 adding camera 77285 (77538 of 80065), 1845 of 1859 used
2020-12-01 22:59:36 adding camera 76344 (77539 of 80065), 1408 of 1408 used
2020-12-01 22:59:36 adding camera 76649 (77540 of 80065), 1401 of 1413 used
2020-12-01 22:59:36 adding camera 75067 (77541 of 80065), 1349 of 1349 used
2020-12-01 22:59:36 adding camera 77322 (77542 of 80065), 1299 of 1310 used
2020-12-01 22:59:36 adding camera 77426 (77543 of 80065), 1145 of 1145 used
2020-12-01 22:59:36 adding camera 76190 (77544 of 80065), 1016 of 1016 used
2020-12-01 23:00:18 adding 101126 points, 36 far (12.272 threshold), 314 inaccurate, 352 invisible, 133 weak
2020-12-01 23:01:24 adjusting: xxxxxxxxxx 0.26763 -> 0.267341
2020-12-01 23:06:53 adding 916 points, 422 far (12.272 threshold), 324 inaccurate, 351 invisible, 134 weak
2020-12-01 23:06:53 optimized in 395.194 seconds
2020-12-01 23:07:13 adding camera 77286 (77545 of 80065), 7435 of 7447 used
2020-12-01 23:07:13 adding camera 77609 (77546 of 80065), 6284 of 6292 used
2020-12-01 23:07:13 adding camera 77321 (77547 of 80065), 6004 of 6017 used
...

and the network version:

Code: [Select]
...
2020-12-02 01:09:59 adding camera 76868 (78524 of 80065), 2417 of 2423 used
2020-12-02 01:09:59 adding camera 76869 (78525 of 80065), 1439 of 1449 used
2020-12-02 01:10:19 adding 32960 points, 8 far (12.272 threshold), 314 inaccurate, 364 invisible, 153 weak
2020-12-02 01:11:11 adjusting: xxxxxxxxxx 0.254763 -> 0.25469
2020-12-02 01:18:11 adding 832 points, 303 far (12.272 threshold), 323 inaccurate, 364 invisible, 153 weak
2020-12-02 01:18:11 optimized in 472.695 seconds
2020-12-02 01:18:32 adding camera 76870 (78526 of 80065), 4766 of 4783 used
2020-12-02 01:18:32 adding camera 77833 (78527 of 80065), 4613 of 4626 used
2020-12-02 01:18:32 adding camera 76871 (78528 of 80065), 2639 of 2651 used
2020-12-02 01:18:32 adding camera 77834 (78529 of 80065), 2339 of 2347 used
2020-12-02 01:18:32 adding camera 76872 (78530 of 80065), 1394 of 1406 used
2020-12-02 01:18:52 adding 32871 points, 18 far (12.272 threshold), 314 inaccurate, 364 invisible, 152 weak
2020-12-02 01:19:44 adjusting: xxxxxxxxxx 0.254728 -> 0.254651
2020-12-02 01:26:29 adding 830 points, 304 far (12.272 threshold), 323 inaccurate, 364 invisible, 152 weak
2020-12-02 01:26:29 optimized in 457.407 seconds
2020-12-02 01:26:50 adding camera 77835 (78531 of 80065), 8946 of 8949 used
2020-12-02 01:26:50 adding camera 76873 (78532 of 80065), 4739 of 4748 used
...

[edit] 15 hours in on the workstation, and it's within 5% of the network run, which has been going for about 60 h - the workstation is at 83% complete, the network at 88%. I see that the workstation also takes smaller bites sometimes (like now, where it's at about the same stage as the network alignment), but finishes them much more quickly - around 1 minute instead of 7:

Code: [Select]
2020-12-02 09:05:59 adding camera 76833 (78428 of 80065), 1185 of 1188 used
2020-12-02 09:05:59 adding camera 76211 (78429 of 80065), 1010 of 1012 used
2020-12-02 09:06:42 adding 39366 points, 18 far (12.272 threshold), 319 inaccurate, 362 invisible, 134 weak
2020-12-02 09:07:49 adjusting: xxxxxxxxxx 0.252005 -> 0.251927
2020-12-02 09:13:54 adding 815 points, 317 far (12.272 threshold), 324 inaccurate, 362 invisible, 134 weak
2020-12-02 09:13:54 optimized in 432.302 seconds
2020-12-02 09:14:13 adding camera 77799 (78430 of 80065), 10228 of 10235 used
2020-12-02 09:14:13 adding camera 77800 (78431 of 80065), 4966 of 4976 used
2020-12-02 09:14:13 adding camera 76834 (78432 of 80065), 3857 of 3859 used
2020-12-02 09:14:13 adding camera 76210 (78433 of 80065), 2683 of 2686 used
2020-12-02 09:14:13 adding camera 77801 (78434 of 80065), 2278 of 2290 used
2020-12-02 09:14:13 adding camera 76835 (78435 of 80065), 2180 of 2182 used
2020-12-02 09:14:13 adding camera 76209 (78436 of 80065), 1538 of 1542 used
2020-12-02 09:14:13 adding camera 74924 (78437 of 80065), 1521 of 1521 used
2020-12-02 09:14:13 adding camera 76836 (78438 of 80065), 1321 of 1325 used
2020-12-02 09:14:55 adding 49143 points, 9 far (12.272 threshold), 319 inaccurate, 362 invisible, 132 weak
2020-12-02 09:16:03 adjusting: xxxxxxxxxx 0.252002 -> 0.251906
2020-12-02 09:22:48 adding 813 points, 317 far (12.272 threshold), 323 inaccurate, 362 invisible, 132 weak
2020-12-02 09:22:48 optimized in 472.756 seconds
2020-12-02 09:23:07 adding camera 77802 (78439 of 80065), 10655 of 10665 used
2020-12-02 09:23:07 adding camera 77803 (78440 of 80065), 5295 of 5308 used

26
Trying to figure out what's going on in this step in the network process - it looks like it's 30-40x slower than the equivalent on my workstation. Each network node is 2x 18-core 2.3 GHz Skylake CPUs with 384GB RAM, with (gpu node) or without (cpu node) 4x NVidia V100s. The workstation is a Threadripper 3960x with 256GB RAM and 2x 2080 Super GPUs. Screenshot of node activity from the monitor attached.

Could this be related to CUDA version? Orthomosaic built with averaging mode.

Network node times:

Code: [Select]
2020-12-01 11:04:10 Updating orthomosaic...
2020-12-01 11:05:54 62 images blended in 104.078 sec
2020-12-01 11:10:24 153 images blended in 269.605 sec
2020-12-01 11:17:42 239 images blended in 436.92 sec
2020-12-01 11:26:53 296 images blended in 550.747 sec
2020-12-01 11:37:10 309 images blended in 617.283 sec

Workstation times:

Code: [Select]
2020-09-11 23:59:29 Updating orthomosaic...
2020-09-11 23:59:30 28 images blended in 1.47 sec
2020-09-11 23:59:34 74 images blended in 3.563 sec
2020-09-11 23:59:40 133 images blended in 5.989 sec
2020-09-11 23:59:47 170 images blended in 6.683 sec
2020-09-11 23:59:55 174 images blended in 7.564 sec
2020-09-12 00:00:02 165 images blended in 6.912 sec
2020-09-12 00:00:10 151 images blended in 7.498 sec
2020-09-12 00:00:22 167 images blended in 12.318 sec
2020-09-12 00:00:31 139 images blended in 8.179 sec
2020-09-12 00:00:33 43 images blended in 1.625 sec
2020-09-12 00:00:33 orthomosaic updated in 64.348 sec

Andy

27
Metashape 1.6.5 network processing on HPC.

I'm trying to understand the worker/processing node logfile in the network monitor during the MatchPhotos.prematch part of alignment.

It looks like ~1/2 of the time is being taken initializing the GPUs in each iteration of the task, but I don't know if there is more going on behind the scenes.

In the logfile excerpt below, each task (iteration of MatchPhotos.prematch) takes ~140-160 s, and for each of the 4 GPUs "free memory" takes ~18 s - I'm not sure if that's querying free memory, freeing memory from the last run, or something else, but by my math each iteration spends about 72 s on the "free memory" portion of the task. Is that normal?

Log excerpt below going through a few iterations starting from the end of point detection:

Code: [Select]
2020-11-29 23:47:18 [GPU] photo 82059: 70000 points
2020-11-29 23:47:18 points detected in 150.208 sec
2020-11-29 23:47:23 processing finished in 162.678 sec
2020-11-29 23:50:02 MatchPhotos.prematch (4/100): accuracy = High, preselection = generic, reference, keypoint limit = 70000, tiepoint limit = 0, apply masks = 0, filter tie points = 0
2020-11-29 23:50:10 loaded matching partition in 0.025084 sec
2020-11-29 23:50:10 loaded keypoint partition in 0.018421 sec
2020-11-29 23:50:54 loaded keypoints in 37.2838 sec
2020-11-29 23:50:54 loaded matching data in 0.000232 sec
2020-11-29 23:50:54 Selecting pairs...
2020-11-29 23:50:54 Found 4 GPUs in 0.261444 sec (CUDA: 0.217975 sec, OpenCL: 0.043425 sec)
2020-11-29 23:51:12 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:51:12   driver/runtime CUDA: 10010/6050
2020-11-29 23:51:12   max work group size 1024
2020-11-29 23:51:12   max work item sizes [1024, 1024, 64]
2020-11-29 23:51:12   got device properties in 0.001026 sec, free memory in 18.1027 sec
2020-11-29 23:51:30 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:51:30   driver/runtime CUDA: 10010/6050
2020-11-29 23:51:30   max work group size 1024
2020-11-29 23:51:30   max work item sizes [1024, 1024, 64]
2020-11-29 23:51:30   got device properties in 0.000813 sec, free memory in 18.3504 sec
2020-11-29 23:51:49 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:51:49   driver/runtime CUDA: 10010/6050
2020-11-29 23:51:49   max work group size 1024
2020-11-29 23:51:49   max work item sizes [1024, 1024, 64]
2020-11-29 23:51:49   got device properties in 0.000794 sec, free memory in 18.2349 sec
2020-11-29 23:52:06 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:52:06   driver/runtime CUDA: 10010/6050
2020-11-29 23:52:06   max work group size 1024
2020-11-29 23:52:06   max work item sizes [1024, 1024, 64]
2020-11-29 23:52:06   got device properties in 0.002548 sec, free memory in 17.4495 sec
2020-11-29 23:53:18 977304 matches found in 144.231 sec
2020-11-29 23:53:18 matches combined in 0.133228 sec
2020-11-29 23:53:19 filtered 84711 out of 579510 matches (14.6177%) in 0.116332 sec
2020-11-29 23:53:19 saved matches in 0.006697 sec
2020-11-29 23:53:19 processing finished in 197.588 sec
2020-11-29 23:53:22 MatchPhotos.prematch (11/100): accuracy = High, preselection = generic, reference, keypoint limit = 70000, tiepoint limit = 0, apply masks = 0, filter tie points = 0
2020-11-29 23:53:29 loaded matching partition in 0.010066 sec
2020-11-29 23:53:29 loaded keypoint partition in 0.004592 sec
2020-11-29 23:53:38 loaded keypoints in 7.61736 sec
2020-11-29 23:53:38 loaded matching data in 0.000374 sec
2020-11-29 23:53:38 Selecting pairs...
2020-11-29 23:53:38 Found 4 GPUs in 0.23591 sec (CUDA: 0.196422 sec, OpenCL: 0.039459 sec)
2020-11-29 23:53:56 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:53:56   driver/runtime CUDA: 10010/6050
2020-11-29 23:53:56   max work group size 1024
2020-11-29 23:53:56   max work item sizes [1024, 1024, 64]
2020-11-29 23:53:56   got device properties in 0.000824 sec, free memory in 18.0572 sec
2020-11-29 23:54:14 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:54:14   driver/runtime CUDA: 10010/6050
2020-11-29 23:54:14   max work group size 1024
2020-11-29 23:54:14   max work item sizes [1024, 1024, 64]
2020-11-29 23:54:14   got device properties in 0.000802 sec, free memory in 17.8316 sec
2020-11-29 23:54:32 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:54:32   driver/runtime CUDA: 10010/6050
2020-11-29 23:54:32   max work group size 1024
2020-11-29 23:54:32   max work item sizes [1024, 1024, 64]
2020-11-29 23:54:32   got device properties in 0.000784 sec, free memory in 17.5461 sec
2020-11-29 23:54:49 Using device: Tesla V100-SXM2-16GB, 80 compute units, free memory: 15806/16130 MB, compute capability 7.0
2020-11-29 23:54:49   driver/runtime CUDA: 10010/6050
2020-11-29 23:54:49   max work group size 1024
2020-11-29 23:54:49   max work item sizes [1024, 1024, 64]
2020-11-29 23:54:49   got device properties in 0.000805 sec, free memory in 17.486 sec
2020-11-29 23:56:01 269548 matches found in 142.999 sec
2020-11-29 23:56:01 matches combined in 0.062111 sec
2020-11-29 23:56:02 filtered 30416 out of 151205 matches (20.1157%) in 0.045621 sec
2020-11-29 23:56:02 saved matches in 0.006546 sec
2020-11-29 23:56:02 processing finished in 160.668 sec

Thanks for any insight. Happy to read more about network processing if I'm missing an informative reference/resource.

[edit] - a quick additional question/comment: I notice in the code snippet below that between "setting point indices..." and "generated XXX tie points" there is a long period (almost an hour) with almost no CPU activity (I couldn't tell if the GPUs were active on this node). Reviewing my workstation log, this long period occurs in non-network processing too - is this a GPU or CPU step? I guess it slows down with many tie points?

Code: [Select]
2020-11-30 01:46:24 MatchPhotos.finalize (1/1): accuracy = High, preselection = generic, reference, keypoint limit = 70000, tiepoint limit = 0, apply masks = 0, filter tie points = 0
2020-11-30 01:46:30 loaded matching data in 0.016731 sec
2020-11-30 01:46:31 loaded matching partition in 0.113757 sec
2020-11-30 01:46:31 loaded keypoint partition in 0.000596 sec
2020-11-30 01:46:44 loaded matches in 8.68071 sec
2020-11-30 01:46:44 setting point indices... 63857782 done in 25.3933 sec
2020-11-30 02:37:58 generated 63857782 tie points, 3.26963 average projections
2020-11-30 02:38:12 removed 344871 multiple indices
2020-11-30 02:38:13 removed 21032 track

28
I was wondering if the dense cloud filtering step that starts immediately after the depth reconstruction step finishes is skipped if filtering is set to none.
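
For reference, the way I set filtering to "none" from a script is roughly this (a sketch; assuming the 1.6 enum and method names):

Code: [Select]
import Metashape
chunk = Metashape.app.document.chunk
# build depth maps with depth filtering disabled, then the dense cloud
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.NoFiltering)
chunk.buildDenseCloud()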

I'm trying to figure out whether exploring external ways to filter the cloud would be worth my time on larger projects. The current project is 11,779 36 MP images; the depth reconstruction phase took about 30 hours, and filtering started ~6 h ago and is projected to take 2.25 more days.

I have one project that is still in the alignment stage that has about 10x more images than this. Also wondering, since this step appears to be CPU-bound, if speed would scale mostly linearly with #cores in network processing.

Log excerpt for context:
Code: [Select]
2020-11-29 05:14:48 [GPU] estimating 1574x2838x192 disparity using 787x1472x8u tiles
2020-11-29 05:14:49 timings: rectify: 0.021 disparity: 0.174 borders: 0.275 filter: 0.066 fill: 0
2020-11-29 05:14:49 timings: rectify: 0.028 disparity: 0.325 borders: 0.26 filter: 0.093 fill: 0
2020-11-29 05:14:49 timings: rectify: 0.036 disparity: 0.352 borders: 0.256 filter: 0.078 fill: 0
2020-11-29 05:14:49 timings: rectify: 0.048 disparity: 1.608 borders: 0.224 filter: 0.056 fill: 0
2020-11-29 05:14:49 timings: rectify: 0.022 disparity: 0.241 borders: 0.258 filter: 0.075 fill: 0
2020-11-29 05:14:51
2020-11-29 05:14:51 Depth reconstruction devices performance:
2020-11-29 05:14:51  - 2% done by CPU
2020-11-29 05:14:51  - 49% done by GeForce RTX 2080 SUPER
2020-11-29 05:14:51  - 48% done by GeForce RTX 2080 SUPER
2020-11-29 05:14:51 Total time: 929.507 seconds
2020-11-29 05:14:51
2020-11-29 05:14:51 118 depth maps generated in 962.028 sec
2020-11-29 05:14:53 saved depth maps block in 2.155 sec
2020-11-29 05:15:01 loaded depth map partition in 7.002 sec
2020-11-29 05:15:02 Generating dense point cloud...
2020-11-29 05:15:02 Generating dense point cloud...
2020-11-29 05:15:02 initializing...
2020-11-29 05:23:52 selected 11745 cameras in 529.964 sec
2020-11-29 05:23:52 working volume: 70081x58202x12191
2020-11-29 05:23:52 tiles: 17x14x3
2020-11-29 05:23:52 saved dense cloud data in 0.011 sec
2020-11-29 05:23:52 saved regions in 0.026 sec
2020-11-29 05:23:52 saved camera partition in 0.005 sec
2020-11-29 05:23:52 scheduled 100 depth filtering groups
2020-11-29 05:23:52 scheduled 348 dense cloud regions
2020-11-29 05:23:52 loaded camera partition in 0.001 sec
2020-11-29 05:23:52 loaded dense cloud data in 0.02 sec
2020-11-29 05:23:55 filtering 117 depth maps...
2020-11-29 05:23:55 using 356 depth maps including neighbors (25.7405 GB)
2020-11-29 05:23:55 preloading data... done in 16.374 sec
2020-11-29 05:24:12 filtering CAM001_20201104133441_80 using 100 neighbors... done in 16.655 sec (load: 0.754, render: 2.115 filter: 13.786)
2020-11-29 05:24:29 filtering CAM001_20201104133442_80 using 100 neighbors... done in 16.279 sec (load: 1.337, render: 2.027 filter: 12.915)
2020-11-29 05:24:47 filtering CAM001_20201104133443_80 using 100 neighbors... done in 14.456 sec (load: 0.477, render: 1.923 filter: 12.056)
2020-11-29 05:25:02 filtering CAM001_20201104133444_80 using 100 neighbors... done in 13.109 sec (load: 0.487, render: 1.834 filter: 10.788)

29
Bug Reports / jumpy marker placement (right-click, 1.6.5)
« on: November 24, 2020, 11:03:46 PM »
I am having an issue where, when I <pg-down> through markers to place them (changing the white flag to the green flag) by adjusting (left-click/drag) or accepting (right-click/Place Marker), they jump to a different position when I right-click/place. A few versions ago I was happy to see that right-click/place allowed me to place markers without adjusting where Metashape "thinks" the marker is, but now that seems broken.

Screenshot of the behavior attached. The panels from left to right show (1) the marker where Metashape calculates its position, (2) the context menu when I right-click to place the marker, and (3) the marker where it ends up.

Note that when I right-click/place, my cursor is right on the marker, but when I'm done the marker is about 1.5 px below that (in this instance - the distance and direction vary).

Project is NAD83 HARN, cameras unreferenced, aligned, markers imported with coordinates, then manually placed. Transform updated (no optimization) periodically during marker placement.

30
Is the "estimating camera locations" step supposed to be limited to a single core?

During this prolonged period of single core usage, my log is recording "pair XXXXX and YYYYY: 0 robust from ZZZZZ" and only processing an average of about 1.2 pairs/second.

Screenshot of processing and Task Manager attached.

I'm running a project with ~16,000 cameras, 2,500 with precise positions and the rest with no positions, generic and reference preselection enabled, and six separate camera models (all default). RAM is only about half used; the workstation is an AMD Threadripper 3960x with 256GB RAM and 2x RTX 2080 Super GPUs.

The most recent bits of the log for this step, along with the end of the previous step, are below:

Code: [Select]
...
2020-11-22 10:08:47 Found 2 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2020-11-22 10:08:47 Using device: GeForce RTX 2080 SUPER, 48 compute units, free memory: 7081/8192 MB, compute capability 7.5
2020-11-22 10:08:47   driver/runtime CUDA: 11010/8000
2020-11-22 10:08:47   max work group size 1024
2020-11-22 10:08:47   max work item sizes [1024, 1024, 64]
2020-11-22 10:08:47 Using device: GeForce RTX 2080 SUPER, 48 compute units, free memory: 7129/8192 MB, compute capability 7.5
2020-11-22 10:08:47   driver/runtime CUDA: 11010/8000
2020-11-22 10:08:47   max work group size 1024
2020-11-22 10:08:47   max work item sizes [1024, 1024, 64]
2020-11-22 10:11:20 9888655 matches found in 152.423 sec
2020-11-22 10:11:20 matches combined in 0.879 sec
2020-11-22 10:11:22 filtered 130916 out of 5044254 matches (2.59535%) in 1.428 sec
2020-11-22 10:11:22 saved matches in 0.06 sec
2020-11-22 10:11:23 loaded matching data in 0.001 sec
2020-11-22 10:11:23 loaded matching partition in 0.004 sec
2020-11-22 10:11:23 loaded keypoint partition in 0.001 sec
2020-11-22 10:11:24 loaded matches in 1.318 sec
2020-11-22 10:11:24 setting point indices... 46118303 done in 10.224 sec
2020-11-22 10:14:40 generated 46118303 tie points, 3.50232 average projections
2020-11-22 10:14:52 removed 482049 multiple indices
2020-11-22 10:14:52 removed 21268 tracks
2020-11-22 10:15:29 loaded keypoint partition in 0 sec
2020-11-22 10:15:29 loaded matching partition in 0.005 sec
2020-11-22 10:15:29 loaded matching partition in 0.719 sec
2020-11-22 10:15:30 Estimating camera locations...
2020-11-22 10:15:30 processing matches... done in 132.972 sec
2020-11-22 10:17:43 selecting camera groups... done in 3.817 sec
2020-11-22 10:17:47 scheduled 588 alignment groups
2020-11-22 10:17:47 saved camera partition in 0.013 sec
2020-11-22 10:17:47 loaded camera partition in 0.001 sec
2020-11-22 10:17:49 processing block: 250 photos
2020-11-22 10:17:53 pair 13255 and 13259: 0 robust from 38522
2020-11-22 10:17:54 pair 13242 and 13245: 0 robust from 38499
2020-11-22 10:17:55 pair 13320 and 13334: 0 robust from 38247
...
2020-11-22 12:44:39 pair 13354 and 13385: 0 robust from 30544
2020-11-22 12:44:40 pair 13315 and 13453: 0 robust from 30544
2020-11-22 12:44:40 pair 13312 and 13384: 0 robust from 30544
2020-11-22 12:44:40 pair 13297 and 13427: 0 robust from 30544
2020-11-22 12:44:41 pair 13374 and 13455: 0 robust from 30543
...

