Author Topic: corrupt .laz dense_cloud export? (1.6.5 linux)  (Read 3984 times)

andyroo

  • Sr. Member
  • ****
  • Posts: 443
corrupt .laz dense_cloud export? (1.6.5 linux)
« on: October 16, 2021, 12:57:59 AM »
I exported 64 dense clouds from ~15 Metashape projects and 23 of them are corrupt. I haven't been able to identify a consistent pattern among the corrupt files, but I tried re-copying them from the source and verified that they are corrupted on the source drive. I also re-exported from the GUI a dense cloud that was corrupted in a scripted export, and I get exactly the same error ('chunk with index 0 of 1 is corrupt' after 332 of 107417870 points).

[EDIT] I noticed that of the un-corrupt files, only one is larger than 50GB (largest is 52,403,552), and all of the corrupted ones are larger than 50GB (smallest is 51,941,696)

Most of the corrupted files were exported with a script on HPC nodes, but one was exported manually. (I exported 15 of the clouds manually through the GUI on my login node, one at a time.)

For the script, I used this code snippet for the exportPoints call:
Code: [Select]
chunk.exportPoints(exportFile, source_data=Metashape.DenseCloudData, crs = v_projection, format=Metashape.PointsFormatLAZ)
Network processing was used, and the scripts were distributed across 3 nodes, all writing to the same directory on a Lustre filesystem.
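For context, here's a rough sketch of the kind of per-project export loop the script ran (the project path, export directory, and EPSG code below are placeholders, not the actual script):
Code: [Select]
import Metashape

# Placeholder CRS and paths - substitute the real project/export locations
v_projection = Metashape.CoordinateSystem("EPSG::32618")

doc = Metashape.Document()
doc.open("/lustre/projects/example_project.psx")

for i, chunk in enumerate(doc.chunks):
    exportFile = "/lustre/exports/{}_chunk{:02d}_dense.laz".format(chunk.label, i)
    # Same call as the snippet above, one LAZ per chunk
    chunk.exportPoints(exportFile,
                       source_data=Metashape.DenseCloudData,
                       crs=v_projection,
                       format=Metashape.PointsFormatLAZ)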

node 1: psx file 1: 4/4 dense clouds ok
        psx file 2: 2/4 dense clouds ok (#1 and #4 bad)
        psx file 3: 2/4 dense clouds ok (#1 and #4 bad - different version of previous chunk)
        psx file 4: 4/4 dense clouds ok
        psx file 5: 2/4 dense clouds ok (#1 and #4 bad)
        psx file 6: 0/4 dense clouds ok (ALL BAD)

node 2: psx file 1: 4/4 dense clouds ok
        psx file 2: 0/4 dense clouds ok (ALL BAD)
        psx file 3: 0/4 dense clouds ok (ALL BAD)

node 3: psx file 1: 0/4 dense clouds ok (ALL BAD)
        psx file 2: 0/4 dense clouds ok (ALL BAD)

The "bad" files appear to be about the same size as the files that process ok, and the problem seems to be in the beginning of the file (there may be more bad ones further on in the file, but I've processed ~10 of the "good" files so far with no errors (gridding normals and confidence and Stddev elevation).
Here are the relevant errors I get with lasvalidate and lasinfo:

Code: [Select]
>lasvalidate

WARNING: end-of-file after 222 of 369637189 points
needed 0.00 sec for '20181007_NR_to_OA_dense.laz' fail
WARNING: end-of-file after 997 of 2943737 points
needed 0.00 sec for '20190830-0902_VA_to_OR_dense.laz' fail
WARNING: end-of-file after 1409 of 1011263656 points
needed 0.00 sec for '20190830_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 1823 of 155724795 points
needed 0.00 sec for '20190830_VA_to_OR_dense.laz' fail
WARNING: end-of-file after 1920 of 2700500566 points
needed 0.00 sec for '20190902_VA_to_OR_dense.laz' fail
WARNING: end-of-file after 332 of 107417870 points
needed 0.00 sec for '20191011_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 1906 of 1629065455 points
needed 0.00 sec for '20191011_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 4167 of 2477398798 points
needed 0.01 sec for '20191011_VA_OR_dense.laz' fail
WARNING: end-of-file after 27 of 1681857002 points
needed 0.00 sec for '20191126_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 85 of 2932739702 points
needed 0.00 sec for '20191126_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 3906 of 785969002 points
needed 0.01 sec for '20191126_VA_OR_dense.laz' fail
WARNING: end-of-file after 3875 of 1345029075 points
needed 0.00 sec for '20200208-9_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 460 of 2881636414 points
needed 0.00 sec for '20200208-9_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 3017 of 1373215110 points
needed 0.00 sec for '20200208-9_VA_OR_dense.laz' fail
WARNING: end-of-file after 413 of 1500086455 points
needed 0.00 sec for '20200508-9_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 898 of 3101815941 points
needed 0.00 sec for '20200508-9_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 489 of 2661668716 points
needed 0.00 sec for '20200508-9_VA_OR_dense.laz' fail
WARNING: end-of-file after 4294 of 908102077 points
needed 0.01 sec for '20200802_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 1631 of 1270674803 points
needed 0.00 sec for '20200802_OR_dense.laz' fail
WARNING: end-of-file after 1609 of 2230961910 points
needed 0.00 sec for '20200802_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 4220 of 586845194 points
needed 0.01 sec for '20210430_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 119 of 1732898564 points
needed 0.00 sec for '20210430_OR_dense.laz' fail
WARNING: end-of-file after 2076 of 2464394245 points
needed 0.00 sec for '20210430_OR_to_HA_dense.laz' fail
done. total time 0.08 sec. total fail (pass=0,warning=0,fail=23)

>lasinfo:
ERROR: 'chunk with index 0 of 1 is corrupt' after 222 of 369637189 points for '20181007_NR_to_OA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 997 of 2943737 points for '20190830-0902_VA_to_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1409 of 1011263656 points for '20190830_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1823 of 155724795 points for '20190830_VA_to_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1920 of 2700500566 points for '20190902_VA_to_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 332 of 107417870 points for '20191011_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1906 of 1629065455 points for '20191011_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 4167 of 2477398798 points for '20191011_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 27 of 1681857002 points for '20191126_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 85 of 2932739702 points for '20191126_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 3906 of 785969002 points for '20191126_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 3875 of 1345029075 points for '20200208-9_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 460 of 2881636414 points for '20200208-9_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 3017 of 1373215110 points for '20200208-9_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 413 of 1500086455 points for '20200508-9_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 898 of 3101815941 points for '20200508-9_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 489 of 2661668716 points for '20200508-9_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 4294 of 908102077 points for '20200802_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1631 of 1270674803 points for '20200802_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1609 of 2230961910 points for '20200802_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 4220 of 586845194 points for '20210430_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 119 of 1732898564 points for '20210430_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 2076 of 2464394245 points for '20210430_OR_to_HA_dense.laz'

« Last Edit: October 17, 2021, 12:51:44 AM by andyroo »

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 15067
Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« Reply #1 on: October 17, 2021, 08:32:32 PM »
Hello Andy,

Were there any errors/warnings in the export log? If you can reproduce the problem fairly easily with the large point clouds on certain nodes, could you please check whether the problem persists in the 1.7.5 release version?
Best regards,
Alexey Pasumansky,
Agisoft LLC

andyroo

  • Sr. Member
  • ****
  • Posts: 443
Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« Reply #2 on: December 03, 2021, 01:10:14 AM »
I'm still troubleshooting this, but I realized by looking at the exported file that the LAS major/minor version is still reported as 1.2, even though Metashape is putting the normals in the "extra bytes" field introduced in LAS 1.4.

Does that mean that Metashape is writing the extended point count in LAS 1.4 format too? I looked at the header of a .laz file exported from Metashape, using LAStools (RIP Martin :'( ), and it still says

version major.minor:        1.2

Since LAS 1.2 stores the point count as a 32-bit value (a theoretical limit of ~4.29 billion points) and LAS 1.4 stores it as a 64-bit value (a limit of roughly 18 quintillion points), it seems like that's the source of my error. I've confirmed that all of my corrupted files have more than ~4.2 billion points.

Before I spend time trying to repair/recover the files - does Metashape (1.6.5 or 1.7.5) write the total number of points as a UINT64 to the last two fields in the public header block, as described in the LAS 1.4 specification (top of numbered page 9/PDF page 12)?

If not, it seems like even though the points are in the huge .laz files I made, they're probably not recoverable unless I manually add/edit those records in the public header block after making the file LAS 1.4 compatible, which is pushing my abilities a bit, I think.
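For reference, here's a rough (untested) sketch of how those header fields can be inspected directly, using byte offsets from the LAS spec: version at bytes 24-25, header size at 94-95, offset to point data at 96-99, the legacy 32-bit point count at 107-110, and the extended 64-bit point count at 247-254 of a 1.4 header. The public header sits uncompressed at the start of a .laz file, so this should work on the compressed files too:
Code: [Select]
import struct

def inspect_las_header(path):
    # Read enough bytes to cover a LAS 1.4 header (375 bytes; a 1.2 header is 227)
    with open(path, "rb") as f:
        header = f.read(375)
    major, minor = header[24], header[25]
    header_size, = struct.unpack_from("<H", header, 94)
    offset_to_points, = struct.unpack_from("<I", header, 96)
    legacy_count, = struct.unpack_from("<I", header, 107)
    print(path)
    print("  version:              %d.%d" % (major, minor))
    print("  header size:          %d" % header_size)
    print("  offset to point data: %d" % offset_to_points)
    print("  legacy point count:   %d" % legacy_count)
    if header_size >= 375:
        # Extended (64-bit) number of point records, LAS 1.4 only
        extended_count, = struct.unpack_from("<Q", header, 247)
        print("  extended point count: %d" % extended_count)

inspect_las_header("20191011_OC_to_LO_dense.laz")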

--edit-- Looking a little more carefully at the lasinfo output, I get a WARNING: 'corrupt chunk table' message. If I understand this right, it's probably not a recoverable error in the .laz file, and my only option would be to export a .las file instead, repair the header, and convert that to 1.4. Since the compressed .laz files are 50-200 GB each, I'm not sure this is practical.
« Last Edit: December 03, 2021, 09:43:15 AM by andyroo »

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 15067
Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« Reply #3 on: December 07, 2021, 03:30:47 PM »
Hello Andy,

Please check version 1.7.6 pre-release (build 13639) with the LAZ export fix:
Windows: http://download.agisoft.com/metashape-pro_1_7_6_x64.msi
macOS: http://download.agisoft.com/metashape-pro_1_7_6.dmg
Linux: http://download.agisoft.com/metashape-pro_1_7_6_amd64.tar.gz

If you need to fix already exported point clouds, please let me know.
Best regards,
Alexey Pasumansky,
Agisoft LLC

andyroo

  • Sr. Member
  • ****
  • Posts: 443
Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« Reply #4 on: December 09, 2021, 01:53:58 AM »
Thank you Alexey! PM sent.

andyroo

  • Sr. Member
  • ****
  • Posts: 443
Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« Reply #5 on: December 15, 2021, 12:00:39 AM »
Update for anyone who needs to fix this issue with LAZ files (thanks, Alexey, for your help):

According to Alexey (via PM), the "Offset to Point Data" value in the LAS header is set 148 bytes larger than expected when promoting a LAS 1.2 format file to the 1.4 format version for point clouds larger than 4 billion points. Alexey suggested editing the value with a hex editor. My solution was a little easier: I used lasinfo with the -set_offset_to_point_data switch to adjust the value. In my case things were slightly complicated because my version of lasinfo was not setting the offset exactly as specified (the resulting offset was 106 bytes lower than the value I gave it). Your results may differ (e.g. a different lasinfo version, or .las instead of .laz), but the same general method should work. Steps as follows:

First, query the file(s) for the existing value:

lasinfo -i *.laz -no_check -no_vlrs -stdout |findstr /C:"offset to point data"

  offset to point data:       1388
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536

Next, set the new value:

lasinfo -i *.laz -no_check -no_vlrs -set_offset_to_point_data <old value - 148 + 106>

In my case the old value was 1536 (for all but one file, so I changed that one separately), so the command was:

lasinfo -i *.laz -no_check -no_vlrs -set_offset_to_point_data 1494

Now check that the value is correct:

lasinfo -i *.laz -no_check -no_vlrs -stdout |findstr /C:"offset to point data"

  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388

If not, figure out what you need to substitute for 106. If you get an error (lasreader error), try setting it back to the original value. When I tried setting mine back to 1536, it reported 1430, which is when I realized the value being written was 106 bytes lower than what I specified (maybe because these are .laz instead of .las files?).
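For anyone who prefers the hex-editor route Alexey suggested, a rough equivalent (untested sketch; work on a copy of the file) is to overwrite the 4-byte "offset to point data" field at byte 96 of the public header directly:
Code: [Select]
import struct

def set_offset_to_point_data(path, new_offset):
    # Rewrite the uint32 "offset to point data" field in the LAS public header
    with open(path, "r+b") as f:
        f.seek(96)
        old_offset, = struct.unpack("<I", f.read(4))
        f.seek(96)
        f.write(struct.pack("<I", new_offset))
    print("%s: offset to point data %d -> %d" % (path, old_offset, new_offset))

# Example: header written 148 bytes too large, so 1536 becomes 1388
set_offset_to_point_data("20191011_OC_to_LO_dense.laz", 1536 - 148)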

Anyway, here's the fix for posterity.