Topics - andyroo

1
When I draw a polygon in QGIS, export it as a GeoPackage (or shapefile) in EPSG:6318, and then import it into Metashape with outerBoundary set as the boundary type, the shape imports successfully but the outer boundary is not set.

I can set the outer boundary after import, either manually or with a Python script.
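The script version of the workaround is only a few lines - a minimal sketch, assuming the shapes API behaves as documented (the chunk's Shapes collection is iterable and each shape exposes boundary_type):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk  # active chunk, run from the GUI console

# blunt workaround: mark every imported shape as an outer boundary
for shape in chunk.shapes:
    shape.boundary_type = Metashape.Shape.BoundaryType.OuterBoundary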

I'm pretty sure I also tried it as a shapefile and tried a manual import; in all cases I had to set the boundary type after import.

A sample GPKG and a JPG are attached.

The JPG shows the log file with "boundary_type = OuterBoundary" and the context menu with Set Boundary Type / None checked (the project state immediately after import).

2
Hi Agisoft folks. I'm running version 2.1.0.17526 on host and workers, on Windows 10 (host and some workers) and Windows 11 (a few workers).

TL;DR: I'm having problems during align.finalize with workers freezing/disconnecting, and after an overnight freeze without a disconnect, I was able to restart worker progress today by pressing the <enter> key in the console window?!

Additional details are below. I filed a Freshdesk ticket (#194391) a couple of days ago and have been updating it; some information is repeated here in case it helps anyone or anyone has insight...

On my first network align of ~26,000 images, everything went fine until the align.finalize step; then all of the workers were periodically disconnected, about 275 times in total, with messages like this on the monitor (on the host machine):

2024-01-06 14:40:04 192.168.88.201:58729] recv: An established connection was aborted by the software in your host machine (10053)
2024-01-06 14:40:04 [192.168.88.201:58729] failed #0 AlignCameras.finalize (1/1): Connection closed
2024-01-06 14:40:04 [192.168.88.201:58729] send: An existing connection was forcibly closed by the remote host (10054)
2024-01-06 14:40:04 [192.168.88.201:58729] worker removed

...

After about 250 disconnects, I disconnected all nodes, saved the batch, updated the network card drivers, restarted the host and a single node, and tried again. That was the day before yesterday. Overnight the worker disconnected another 25 or so times, then finally finished the first step (?!)

So that's weird - why would it fail 275 times and then suddenly work? But it gets weirder. I noticed yesterday around 5 pm that the worker node appeared frozen - there was no activity on the worker's details/progress graph in the monitor on the host, and the last update was at 2024-01-10 14:59:30. I left it alone until a few moments ago and was surprised to see that the worker node had never disconnected. The host showed this:
2024-01-10 14:59:30 adjusting: !xx

while the worker showed this:
2024-01-10 14:59:30 adjusting: !x

Out of frustration, or desperation, or I don't know why, I hit <enter> in the console (cmd) window of the worker, and then it showed this!
2024-01-10 14:59:30 adjusting: !xxx

AND weirdest of all (to me) the host graph started updating again, and the worker appears to be running normally...

The current worker log looks like this (with around 25 more disconnects above it that I didn't paste in):

...
x2024-01-10 08:18:51 Error: Aborted by user
2024-01-10 08:18:51 processing failed in 1950.93 sec
disconnected from server
connected to 192.168.88.205:5840
registration accepted
2024-01-10 08:19:48 AlignCameras.finalize (1/1): subtask = finalize, adaptive_fitting = off, level = 6, cache_path = //SFM-HOST/Network_SfM/psx/SBC_master_all_images_guided_5k.files/0/align.1
2024-01-10 08:20:26 3 blocks: 20308 146 2
2024-01-10 08:34:38 block: 14 sensors, 20456 cameras, 381612284 points, 1593718574 projections
2024-01-10 08:34:38 block_sensors: 0.0118561 MB (0.0127029 MB allocated)
2024-01-10 08:34:38 block_cameras: 7.95941 MB (11.8527 MB allocated)
2024-01-10 08:34:38 block_points: 17468.8 MB (21044.8 MB allocated)
2024-01-10 08:34:38 block_tracks: 1455.74 MB (1455.74 MB allocated)
2024-01-10 08:34:38 block_obs: 72954.6 MB (72954.6 MB allocated)
2024-01-10 08:34:38 block_ofs: 2911.47 MB (2911.47 MB allocated)
2024-01-10 08:34:38 block_fre: 0 MB (0 MB allocated)
2024-01-10 08:37:58 adding 353608275 points, 0 far, 1612701 inaccurate, 14316 invisible, 179 weak
2024-01-10 08:40:59 adjusting: !x[192.168.88.205:5840] recv: An existing connection was forcibly closed by the remote host (10054)
x2024-01-10 08:52:17 Error: Aborted by user
2024-01-10 08:52:17 processing failed in 1949.04 sec
disconnected from server
connected to 192.168.88.205:5840
registration accepted
2024-01-10 08:53:15 AlignCameras.finalize (1/1): subtask = finalize, adaptive_fitting = off, level = 6, cache_path = //SFM-HOST/Network_SfM/psx/SBC_master_all_images_guided_5k.files/0/align.1
2024-01-10 08:53:53 3 blocks: 20308 146 2
2024-01-10 09:08:08 block: 14 sensors, 20456 cameras, 381612284 points, 1593718574 projections
2024-01-10 09:08:08 block_sensors: 0.0118561 MB (0.0127029 MB allocated)
2024-01-10 09:08:08 block_cameras: 7.95941 MB (11.8527 MB allocated)
2024-01-10 09:08:08 block_points: 17468.8 MB (21044.8 MB allocated)
2024-01-10 09:08:08 block_tracks: 1455.74 MB (1455.74 MB allocated)
2024-01-10 09:08:08 block_obs: 72954.6 MB (72954.6 MB allocated)
2024-01-10 09:08:08 block_ofs: 2911.47 MB (2911.47 MB allocated)
2024-01-10 09:08:08 block_fre: 0 MB (0 MB allocated)
2024-01-10 09:11:28 adding 353608275 points, 0 far, 1612701 inaccurate, 14316 invisible, 179 weak
2024-01-10 09:14:34 adjusting: !xxxxxxxxxxxxx!x!x!x 0.823228 -> 0.75973
2024-01-10 10:40:05 disabled 1 points
2024-01-10 10:43:18 adding 1641944 points, 117372 far, 1628257 inaccurate, 14315 invisible, 180 weak
2024-01-10 10:43:18 optimized in 5510.16 seconds
2024-01-10 10:43:18 f 8429.71, cx 27.8211, cy 29.5716, k1 -0.118365, k2 0.125958, k3 0.0315367
2024-01-10 10:43:18 f 8430.09, cx 25.449, cy 29.4083, k1 -0.118596, k2 0.127053, k3 0.030009
2024-01-10 10:43:18 f 8428.33, cx -0.407983, cy 16.2905, k1 -0.11752, k2 0.1239, k3 0.0364546
2024-01-10 10:43:18 f 8433.91, cx -2.50267, cy 15.1906, k1 -0.117851, k2 0.125403, k3 0.0334202
2024-01-10 10:43:18 f 8426.79, cx 6.52533, cy 14.7927, k1 -0.118174, k2 0.12028, k3 0.0464695
2024-01-10 10:43:18 f 8427.1, cx 10.2056, cy 13.7934, k1 -0.11839, k2 0.123723, k3 0.0398974
2024-01-10 10:43:18 f 8427.91, cx 39.1095, cy 31.2033, k1 -0.118537, k2 0.128099, k3 0.0267767
2024-01-10 10:43:18 f 8429.72, cx 20.7913, cy 33.5567, k1 -0.117826, k2 0.123211, k3 0.0376948
2024-01-10 10:43:18 f 7366.84, cx -13.8719, cy 18.8519, k1 -0.0897347, k2 0.117177, k3 -0.0388326
2024-01-10 10:43:18 f 8428.96, cx 22.1275, cy 34.2011, k1 -0.11777, k2 0.12464, k3 0.0328493
2024-01-10 10:43:18 f 8748.63, cx 4.73646, cy -35.1598, k1 -0.114165, k2 0.15207, k3 0.0610519
2024-01-10 10:43:18 f 8193.61, cx 0, cy 0, k1 0, k2 0, k3 0
2024-01-10 10:43:18 f 10242, cx 0, cy 0, k1 0, k2 0, k3 0
2024-01-10 10:43:18 f 8193.61, cx 0, cy 0, k1 0, k2 0, k3 0
2024-01-10 10:46:12 adjusting: !xxxxxxxxxxxxx!x!x!x 0.74288 -> 0.742355
2024-01-10 12:13:53 final block size: 20456
2024-01-10 12:17:13 adding 353617053 points, 0 far, 1613259 inaccurate, 14284 invisible, 179 weak
2024-01-10 12:17:13 (3 px, 2 3d) sigma filtering...
2024-01-10 12:20:19 adjusting: !xxxxxxxxxxxxx!x!x!x 0.822517 -> 0.759586
2024-01-10 13:47:34 point variance: 0.841952 px, threshold: 2.52586 px
2024-01-10 13:50:44 adding 1449532 points, 9624987 far (2.52586 px threshold), 1323685 inaccurate, 2899 invisible, 91 weak
2024-01-10 13:51:13 removed 4 cameras: 20395, 20397, 20398, 20399
2024-01-10 13:51:13 removed 4 stations
2024-01-10 13:53:46 adjusting: !xxxxxxxxxxxxx!x 0.278704 -> 0.27798
2024-01-10 14:54:00 point variance: 0.306791 px, threshold: 0.920373 px
2024-01-10 14:57:03 adding 1254004 points, 14764293 far (0.920373 px threshold), 1111041 inaccurate, 1055 invisible, 137 weak
2024-01-10 14:59:30 adjusting: !xxxxxxxx


[edit]
Update - the host killed the process when it was about 87% complete. It looks like I might have to restart from scratch and process on a local machine. I'm going to try to copy everything to the worker node and run host/worker/monitor all on one machine so I don't have to restart everything, but I need to modify paths... not sure how that's going to work...
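If I end up moving everything, the one part I think I can script is repointing the image paths - a rough sketch only, assuming camera.photo.path is writable as it has been in earlier versions; NEW_ROOT below is a hypothetical local copy of the share:

Code: [Select]
import Metashape

doc = Metashape.app.document
OLD_ROOT = "//SFM-HOST/Network_SfM"   # current network share (from the cache_path above)
NEW_ROOT = "D:/Network_SfM"           # hypothetical local copy on the worker node

for chunk in doc.chunks:
    for camera in chunk.cameras:
        if camera.photo and camera.photo.path.startswith(OLD_ROOT):
            camera.photo.path = NEW_ROOT + camera.photo.path[len(OLD_ROOT):]

doc.save()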


3
Hi Metashape folks!

I'm running an alignment, and a node doing AlignCameras.update got disconnected from the server (three other nodes show, but not that one). Another node took over the operation, but the disconnected node is still going through the motions. Is this unusual? What can I do to keep it from happening? Is there any way to reconnect the disconnected node, since it's still chugging away? I noticed that other nodes showed disconnect messages too but are still there...

Attached screenshots of the monitor showing the disconnection and the disconnected node still processing 22 minutes later...

4
Howdy Agisoft folks!

I'm digging into the guts of network processing on a 4-node cluster I built, and I see that the AlignCameras.update portion is limited to a single node; when I dive into that node, it's working on a single core but using 80% of system RAM (256 GB). I'm wondering whether I could process this faster on a node with higher RAM and CPU frequencies, even though that node has less RAM (64 GB). The AlignCameras.update step appears to be a significant bottleneck for this workflow (processing multiple overlapping PPK flights together - about 26,000 images).

Thanks for making awesome software! :-)

5
Hi Metashape folks. I recently configured a (windoze) network with 4 nodes, and I'm wondering whether I need to manually set the GPU mask to 3 to use both GPUs, or whether the "default GPU mask" already covers the number of detected GPUs (and whether there's a way to see what the default is for a given node).
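For what it's worth, this is how I've been poking at it from the Python console - a sketch that assumes enumGPUDevices() returns the usual list of device dicts and that gpu_mask is the bitmask described in the API reference (bit i enables GPU i, so 3 = 0b11 enables the first two):

Code: [Select]
import Metashape

gpus = Metashape.app.enumGPUDevices()
print("detected GPUs:", [gpu["name"] for gpu in gpus])
print("current mask :", bin(Metashape.app.gpu_mask))

# enable every detected GPU: n devices -> mask with the low n bits set
Metashape.app.gpu_mask = (1 << len(gpus)) - 1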

Thanks!

6
I have a chunk with 78218 cameras, 7372 of which are disabled and have no estimated positions. When I export all values (all checkboxes) from the GUI (12 decimals of precision) in tab-delimited format, all of the "enabled" flags are set to 1 regardless of whether the image is enabled or disabled.

-EDIT- When I calculate this flag as 1 or 0 based on whether the "estimated lat" value is present or absent, all images still import as enabled regardless of the flag value.
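For now I'm setting the flag from the Python console after import instead of trusting the reference file - a minimal sketch that just disables any camera without an estimated pose (camera.enabled and camera.transform are the standard API attributes):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# mirror the exported state: cameras with no estimated pose get disabled
for camera in chunk.cameras:
    camera.enabled = camera.transform is not None

print("enabled: ", sum(c.enabled for c in chunk.cameras))
print("disabled:", sum(not c.enabled for c in chunk.cameras))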

7
Can anyone explain for the chunk.matchPhotos method:
  • what are the workitem_size_cameras, workitem_size_pairs, and max_workgroup_size attributes?
  • how do they affect matching results?
  • (if applicable) what are the limits/guidelines for these values based on image mpx and GPU RAM?

I'm wondering how best I can use this method plus chunk.triangulateTiePoints for "densifying" tie points after initial alignment and before optimization, to increase intra-camera-group matches (before generating the dense cloud).

For example, I have chunks with tens of camera groups of thousands of images each that are coaligned (incrementally) using generic preselection, a 60k key point limit, and a 0 tie point limit.

Experimenting with a single camera group after optimization, by copying the chunk and deleting the other groups, I was able to increase the number of intra-group tiepoints by running chunk.matchPhotos then chunk.triangulateTiePoints with either the same matching parameters as before, or by switching to guided matching, for example:

Code: [Select]
chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=False, keypoint_limit_per_mpx=4000, tiepoint_limit=0, guided_matching=True, reset_matches=True)
chunk.triangulateTiePoints()

Then, if I re-filter with the same gradual selection parameters that I used before optimization, I end up with anywhere from 2x to 10x the original number of tie points without changing intrinsics or extrinsics. This is nice because some of my early image collections didn't previously have enough projections to reconstruct well (many images missing from the dense cloud), and this method dramatically increases the projections on many of the trouble images without a huge increase in processing time. But I'm wondering whether there are downsides to this that I'm not recognizing, and whether there are even more useful ways to take advantage of it by playing with the workitem_size_cameras, workitem_size_pairs, and max_workgroup_size attributes.

I'm also wondering whether I can use this technique to increase inter-group matches by combining specific groups of cameras and passing that list to the 'cameras' argument of the matchPhotos method, and what the difference is between reset_matches=True and False ("True" seems to re-run the entire matching process on every image, but it generated 360k matches vs 24k). Is that because a tie point threshold was limiting new tie points (many of the original tie points were with images from other groups)?
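For the inter-group case, what I have in mind is passing just the cameras from two groups - a sketch only, assuming the cameras argument accepts a list of Camera objects; the group labels here are made up:

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# hypothetical group labels - pick the two flights to densify matches between
groups = [g for g in chunk.camera_groups if g.label in ("flight_A", "flight_B")]
subset = [c for c in chunk.cameras if c.group in groups]

chunk.matchPhotos(cameras=subset, downscale=1, generic_preselection=True,
                  reference_preselection=False, keypoint_limit_per_mpx=4000,
                  tiepoint_limit=0, guided_matching=True, reset_matches=False)
chunk.triangulateTiePoints()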

If anyone has some deeper knowledge of the inner workings of this method, I'd be most grateful for your insight.

8
Build Tie Points is a useful tool for densifying the tie point population before gradual selection or dense cloud building. It would be nice to be able to direct it to densify only certain chunks of a project from the batch dialog, rather than juggling chunk names and loops in Python. I tend to use a combination of scripts that check for the existence of a product before they run, and that's difficult to do with this algorithm, but if I had it in the batch dialog it would be grand. As I understand it, this tool is the same as running chunk.matchPhotos and then chunk.triangulateTiePoints in the Python API?
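For reference, the loop I use now looks roughly like this - a sketch that checks for an existing dense point cloud before densifying tie points, assuming the 2.x attribute names (tie_points, point_clouds):

Code: [Select]
import Metashape

doc = Metashape.app.document

for chunk in doc.chunks:
    # only densify chunks that are aligned but have no dense point cloud yet
    if chunk.tie_points and not chunk.point_clouds:
        chunk.matchPhotos(keypoint_limit_per_mpx=4000, tiepoint_limit=0,
                          guided_matching=True, reset_matches=False)
        chunk.triangulateTiePoints()
        doc.save()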

9
I am getting these errors trying to build a dense cloud for only one set of images, and I'm wondering if my alignment is somehow corrupt - this is in a PSX with multiple chunks, and dense clouds generated fine from the other chunks (all derivatives of a "master" chunk aligned with multiple photo sets).

Initially I thought I had corrupt images, but they read fine in other software. When I re-synced them from an archive with checksum verification, nothing was overwritten, and if I force-sync them one at a time, dense cloud building still exits with an error (very quickly, in 1-2 s) referring to various images:

Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4607.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4902.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4900.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4904.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4902.JPG

Anything I can do to potentially fix this?
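One thing I plan to try is comparing the pixel dimensions of the files on disk against what the project thinks each camera's sensor should be - a rough sketch, assuming Pillow is available to Metashape's Python and that sensor.width/height hold the calibrated image size:

Code: [Select]
import Metashape
from PIL import Image  # assumes Pillow is installed for Metashape's Python

chunk = Metashape.app.document.chunk

for camera in chunk.cameras:
    if not camera.photo:
        continue
    expected = (camera.sensor.width, camera.sensor.height)
    with Image.open(camera.photo.path) as img:
        actual = img.size
    if actual != expected:
        print(camera.label, "expected", expected, "got", actual)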

10
I'm finding more than 100 GB of files that look like they were "orphaned" after crashes in 2.0.2 while building dense clouds on an AMD Ryzen 9 7950X with dual RX 7900 XTX GPUs running Windows 11.

I am filling out the crash reporter and reporting the crashes with AMD's bug report tool, but I just want to make sure these are files I should delete, and to ask: what's the best way to delete them? Should I leave an empty dir or replace the dir? Is there any way I can make Metashape reuse these files (e.g., by running in network mode with host/client/monitor all on this machine)?

I see that in projects where I later successfully generated the dense cloud, the "leftover" files are in /depth_maps and the completed files are in /depth_maps.1 - can I just delete /depth_maps if it has the *unfiltered* and *inliers* files?

Below is an excerpt of a dir listing for my latest crash:

Code: [Select]
07/03/2023  10:26 AM    <DIR>          .
07/03/2023  08:30 AM    <DIR>          ..
07/03/2023  10:23 AM       288,485,718 data0.zip
07/03/2023  10:24 AM       564,512,732 data1.zip
07/03/2023  10:25 AM       501,141,795 data2.zip
07/03/2023  10:26 AM       220,401,350 data3.zip
07/03/2023  08:32 AM       314,578,131 data_unfiltered0.zip
07/03/2023  08:34 AM       581,660,929 data_unfiltered1.zip
...
07/03/2023  10:22 AM       223,559,401 data_unfiltered65.zip
07/03/2023  08:44 AM       307,905,838 data_unfiltered7.zip
07/03/2023  08:46 AM       461,265,241 data_unfiltered8.zip
07/03/2023  08:47 AM       606,145,517 data_unfiltered9.zip
07/03/2023  08:32 AM       180,486,450 inliers0.zip
07/03/2023  08:34 AM       379,902,421 inliers1.zip
...
07/03/2023  10:22 AM        80,761,792 inliers65.zip
07/03/2023  08:44 AM       102,940,373 inliers7.zip
07/03/2023  08:46 AM       262,037,387 inliers8.zip
07/03/2023  08:47 AM       387,124,437 inliers9.zip
07/03/2023  08:31 AM       150,178,344 pm_cameras_info.data
07/03/2023  08:31 AM            26,992 pm_cameras_partitioning.grp
             138 File(s) 60,588,842,406 bytes
               2 Dir(s)  5,667,775,660,032 bytes free
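In the meantime I've been sizing up the candidates with a short script instead of deleting anything - a sketch only; the .files path is hypothetical, and it just flags depth_maps folders that contain the *unfiltered*/*inliers* zips:

Code: [Select]
import os, glob

PROJECT_FILES = r"D:\projects\my_project.files"  # hypothetical project .files folder

for path in glob.glob(os.path.join(PROJECT_FILES, "**", "depth_maps"), recursive=True):
    files = os.listdir(path)
    leftover = any(f.startswith(("data_unfiltered", "inliers")) for f in files)
    size_gb = sum(os.path.getsize(os.path.join(path, f)) for f in files) / 1e9
    print(f"{path}: {size_gb:.1f} GB{' (leftover?)' if leftover else ''}")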

11
I've been running incremental alignment on a new machine with dual AMD RX 7900 XTX GPUs and have aligned 20 photo sets so far, running two at a time (two cameras on the aircraft). Matching just failed while trying to align sets 21 and 22 and threw this error:

Error: ciErrNum: CL_OUT_OF_HOST_MEMORY (-6) at line 212

This is the first time I have come across this error after doing incremental alignment on 20 other photo sets in the same project on this machine. I've been doing the same thing on an HPC and another workstation with NVIDIA CUDA GPUs for several weeks without encountering anything like this. A log excerpt from before the error is shown below, and the whole log for this run is attached as a zip. I'm going to reboot in case there's a GPU memory leak or something, and I'll report back if it happens again.

Other details:

Driver Version
22.40.57.05-230523a-392837C-AMD-Software-Adrenalin-Edition

OS    Microsoft Windows 11 Pro N
Version   10.0.22621 Build 22621
Processor   AMD Ryzen 9 7950X
BaseBoard   ProArt X670E-CREATOR WIFI
Installed RAM   64.0 GB

Log excerpt:
Code: [Select]
2023-06-22 17:13:14 filtered 3261150 out of 3307520 matches (98.598%) in 0.448 sec
2023-06-22 17:13:16 saved matches in 0.007 sec
2023-06-22 17:13:18 loaded matching partition in 0.002 sec
2023-06-22 17:13:18 loaded keypoint partition in 0.001 sec
2023-06-22 17:13:46 loaded keypoints in 27.724 sec
2023-06-22 17:13:46 loaded matching data in 0.001 sec
2023-06-22 17:13:46 Matching points...
2023-06-22 17:13:48 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:13:48 AMD Radeon(TM) Graphics (gfx1036): no SPIR support
2023-06-22 17:13:48 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:13:48 Found 3 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2023-06-22 17:13:48 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:13:48   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:13:48   max work group size 256
2023-06-22 17:13:48   max work item sizes [1024, 1024, 1024]
2023-06-22 17:13:48   max mem alloc size 20876 MB
2023-06-22 17:13:48   wavefront width 32
2023-06-22 17:13:48 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:13:48   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:13:48   max work group size 256
2023-06-22 17:13:48   max work item sizes [1024, 1024, 1024]
2023-06-22 17:13:48   max mem alloc size 20876 MB
2023-06-22 17:13:48   wavefront width 32
2023-06-22 17:13:48 Loading kernels for AMD Radeon RX 7900 XTX (gfx1100)...
2023-06-22 17:13:48 Kernel loaded in 0.016 seconds
2023-06-22 17:13:49 Loading kernels for AMD Radeon RX 7900 XTX (gfx1100)...
2023-06-22 17:13:49 Kernel loaded in 0.017 seconds
2023-06-22 17:16:22 4156962 matches found in 154.515 sec
2023-06-22 17:16:23 matches combined in 0.35 sec
2023-06-22 17:16:23 filtered 3525814 out of 3579945 matches (98.4879%) in 0.487 sec
2023-06-22 17:16:25 saved matches in 0.006 sec
2023-06-22 17:16:27 loaded matching partition in 0.001 sec
2023-06-22 17:16:27 loaded keypoint partition in 0 sec
2023-06-22 17:16:56 loaded keypoints in 28.814 sec
2023-06-22 17:16:56 loaded matching data in 0 sec
2023-06-22 17:16:56 Matching points...
2023-06-22 17:16:59 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:16:59 AMD Radeon(TM) Graphics (gfx1036): no SPIR support
2023-06-22 17:16:59 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:16:59 Found 3 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2023-06-22 17:16:59 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:16:59   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:16:59   max work group size 256
2023-06-22 17:16:59   max work item sizes [1024, 1024, 1024]
2023-06-22 17:16:59   max mem alloc size 20876 MB
2023-06-22 17:16:59   wavefront width 32
2023-06-22 17:16:59 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:16:59   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:16:59   max work group size 256
2023-06-22 17:16:59   max work item sizes [1024, 1024, 1024]
2023-06-22 17:16:59   max mem alloc size 20876 MB
2023-06-22 17:16:59   wavefront width 32
2023-06-22 17:16:59 Loading kernels for AMD Radeon RX 7900 XTX (gfx1100)...
2023-06-22 17:16:59 Kernel loaded in 0.016 seconds
2023-06-22 17:17:04 loaded keypoint partition in 0 sec
2023-06-22 17:17:04 loaded matching partition in 0.033 sec
2023-06-22 17:17:05 loaded matching partition in 0.773 sec
2023-06-22 17:17:05 Error: ciErrNum: CL_OUT_OF_HOST_MEMORY (-6) at line 212
2023-06-22 17:17:05 Saving project...
2023-06-22 17:17:05 saved project in 0.111 sec
2023-06-22 17:17:05 Finished batch processing in 35851.5 sec (exit code 1)

12
I reset an aligned project (~16k images) after correcting an error in image positions by selecting "reset alignment", zeroing out the adjusted camera calibration parameters, and resetting the transform (trying to be thorough so that the camera models weren't contaminated by the previous alignment).

I was (pleasantly) surprised when I realigned and matching completed very quickly (minutes instead of hours). It was clear that key points had been saved rather than regenerated, and I'm curious how key point behavior has changed since this discussion:

Hello andyroo,

Currently it is not possible to split key point detection and image matching stages, they are grouped into Match Photos task. Keep key points feature has been introduced to allow the incremental matching, when new images are added to the already matched and aligned set of images.

As for the name of "reset current alignment" is meant to reset all the results obtained with running Align Photos operation, which include key points, tie points and EO/IO parameters.

Do I now need to explicitly delete key points, or will they be automatically deleted if I change the tie point criteria? Will they be automagically generated for new images if I add them, but kept for the old ones? I'm just trying to understand key points in detail, because I frequently align multiple image collections together and this is a useful feature for me.

13
I saw a bunch of error messages in the Metashape console/log after saving a large project. The storage location is a Lustre file system backed by a Cray ClusterStor L300; according to the admin, the errors don't appear to be related to filesystem performance. I'm trying to figure out whether I have a corrupt project (a 60,000-image alignment) that I need to realign, or whether this was a momentary hiccup and Metashape recovered and properly saved the projections. I saw this in the log after saving the project:

<many previous errors saying the same>
...
...
2022-12-16 11:06:09 Error: Bad local file header signature
2022-12-16 11:06:09 Error: Can't load projections: <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
2022-12-16 11:06:09 Error: Bad local file header signature
2022-12-16 11:06:09 Error: Can't load projections: <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
2022-12-16 11:06:09 Error: Bad local file header signature
2022-12-16 11:06:09 Error: Can't load projections: <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
2022-12-16 11:06:45 saved project in 195.219 sec


Tested file integrity after the errors and it looked fine:

No errors detected in compressed data of <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
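(In case it's useful to anyone else, the rough equivalent of that check from Python's standard library is just this:)

Code: [Select]
import zipfile

path = "point_cloud.zip"  # path to the chunk's point_cloud.zip

with zipfile.ZipFile(path) as zf:
    bad = zf.testzip()  # returns the name of the first bad member, or None
    if bad:
        print("first corrupt member:", bad)
    else:
        print("no errors detected in compressed data")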

This seems to be a similar issue to this post by freetec1 from February, but I have already run a couple of big projects with 1.8.4 and not seen this error before. I checked the logs, and this is the first time it has been reported.

14
I just set up a batch of DEM exports of large areas with 16384x16384 tiles so I can build COGs (which I also wish I could export directly from Metashape), and it's painful to see only one node working even though there are hundreds of tiles to build. It seems like this could be handled similarly to building orthos/DEMs, with many subtasks that could be distributed across nodes.

Still love the software though, and the dev responsiveness. Excited for the upcoming 2.0!

15
Feature Requests / Cloud-optimized geotiff export
« on: December 05, 2022, 08:21:02 AM »
I would love to be able to export DEMs and orthos in cloud-optimized GeoTIFF (COG) format, specifying tile size, compression, predictor, etc. We use COGs to provide cloud-ready ortho and DEM data that can be queried or viewed without downloading an entire dataset (and that tile servers such as TiTiler can ingest and serve).

As it is, it takes me MUCH longer to convert the exported TIF than it does to create it. I would be very grateful if I could export COGs natively...

Currently I export GeoTIFFs and convert them with GDAL. For large GeoTIFFs I batch-export in tiles (16384 x 16384) into <dest_dir>/{chunklabel}/{chunklabel}.tif and convert like this:

Code: [Select]
for DIR in *; do gdalwarp "$DIR"/**.tif ./cog/"$DIR"_<projection>_cog.tif -of COG -overwrite -multi -wm 80% -co BLOCKSIZE=256 -co BIGTIFF=YES -co COMPRESS=DEFLATE -co PREDICTOR=YES -co NUM_THREADS=ALL_CPUS; done
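The export side of that, scripted, looks roughly like this - a sketch that assumes exportRaster still accepts split_in_blocks/block_width/block_height as listed in the published API reference; the output path is a placeholder:

Code: [Select]
import Metashape

doc = Metashape.app.document

for chunk in doc.chunks:
    # tiled DEM export, 16384 x 16384 blocks, one folder per chunk (placeholder path)
    chunk.exportRaster(path="D:/export/%s/%s.tif" % (chunk.label, chunk.label),
                       source_data=Metashape.ElevationData,
                       split_in_blocks=True,
                       block_width=16384, block_height=16384)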

Note this is basically a re-ask of my post from 2 years ago.

Andy
