

Topics - andyroo

1
I have the following script, which works fine in Metashape 1.6.5 but fails in 1.7.5 with the error: vertical datum out of range.

My dense cloud is in a GCS, and my DEM (from which the dense cloud CRS is derived) is in a compound CS (NAD83(2011) + NAVD88). The geoid is in the appropriate subdirectory.

If I open the 1.7.5 project in 1.6.6, I can run the script below and export everything fine.


Code: [Select]
import os

import Metashape

laz_subdir = 'laz'

app = Metashape.app
doc = app.document
doc_path = os.path.split(doc.path)[0]
outPath = os.path.normpath(os.path.join(doc_path, laz_subdir))

for chunk in doc.chunks:
    if chunk.dense_cloud:
        print(chunk.label)

        v_projection = chunk.elevation.crs  # presumes DEM was built with the desired PCS

        # sanitize the CRS name into a filename-safe label
        crs_label = v_projection.name
        crs_label = ''.join([x if x.isalnum() else '_' for x in crs_label if x not in '()/+'])
        crs_label = crs_label.replace('__', '_')
        crs_label = crs_label.replace('_zone', '')

        outFilename = chunk.label + '_dense_' + crs_label + '.laz'
        exportFile = os.path.normpath(os.path.join(outPath, outFilename))

        os.makedirs(outPath, exist_ok=True)
        if not os.path.isfile(exportFile):
            print('exporting: ' + exportFile)
            chunk.exportPoints(exportFile, source_data=Metashape.DenseCloudData,
                               crs=v_projection, format=Metashape.PointsFormatLAZ)
    else:
        print(chunk.label, 'has no dense cloud')
print('script complete')
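The CRS-label sanitization in the script above can be exercised on its own; here is a minimal pure-Python extraction (no Metashape dependency; the function name and the sample CRS name are mine):

```python
def sanitize_crs_label(name: str) -> str:
    """Turn a CRS name into a filename-safe label, mirroring the export script."""
    # drop ()/+ entirely; replace any other non-alphanumeric character with '_'
    label = ''.join(c if c.isalnum() else '_' for c in name if c not in '()/+')
    label = label.replace('__', '_')   # collapse doubled underscores (single pass)
    return label.replace('_zone', '')  # shorten 'zone' designations

print(sanitize_crs_label('NAD83(2011) / UTM zone 18N + NAVD88 height'))
# → NAD832011_UTM_18N_NAVD88_height
```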

2
I remember that back in 1.4 or 1.5, point detection seemed continuous and very fast on 2-3 GPUs. Now I get a "found 2 GPUs" message every 20 images with a 1.5 s pause, then a fraction-of-a-second pause after every pair of photos for 10 pairs, and the sequence repeats.

This seems significantly slower than before, but I'm not sure, because I don't remember which version to compare it with. If there are other performance benefits I understand; I'm just curious what changed.

Finally, if there's a tweak to use the old method, or to just assume that I have the same video cards and they're behaving properly so it avoids the 1.5 s pause every 20 photos, that would be cool: on projects with 100,000 photos that equates to just over 2 h of pause time, and it's pretty clear from my CUDA graphs that the GPUs aren't continuously busy.

I'm willing to experiment with different tweaks if they're available, and happy to report my results.
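For what it's worth, the 2 h estimate checks out (a 1.5 s pause every 20 images over 100,000 images):

```python
photos = 100_000
pause_every = 20   # a "found 2 GPUs" pause every 20 images (observed)
pause_s = 1.5      # observed pause length, seconds

total_h = photos / pause_every * pause_s / 3600
print(round(total_h, 2))  # → 2.08
```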

3
Bug Reports / corrupt .laz dense_cloud export? (1.6.5 linux)
« on: October 16, 2021, 12:57:59 AM »
I exported 64 dense clouds from ~15 Metashape projects and 23 of them are corrupt. I haven't been able to identify a consistent pattern, but I re-copied the files from the source and verified that they are corrupted on the source drive. I also re-exported, from the GUI, a dense cloud that had been corrupted in a scripted export, and got exactly the same error ('chunk with index 0 of 1 is corrupt' after 332 of 107417870 points).

[EDIT] I noticed that of the un-corrupt files, only one is larger than 50GB (largest is 52,403,552), while all of the corrupted ones are larger than 50GB (smallest is 51,941,696).

- Most corrupted files were exported with a script on HPC nodes, but one was exported manually. (I exported 15 of the clouds one at a time through the GUI on my login node.)

For the script, I used this snippet for the exportPoints call:
Code: [Select]
chunk.exportPoints(exportFile, source_data=Metashape.DenseCloudData, crs=v_projection, format=Metashape.PointsFormatLAZ)
Network processing was used, and the scripts were distributed to 3 nodes, all writing to the same directory on a Lustre filesystem.

node 1: psx file 1: 4/4 dense clouds ok
        psx file 2: 2/4 dense clouds ok (#1 and #4 bad)
        psx file 3: 2/4 dense clouds ok (#1 and #4 bad; different version of previous chunk)
        psx file 4: 4/4 dense clouds ok
        psx file 5: 2/4 dense clouds ok (#1 and #4 bad)
        psx file 6: 0/4 dense clouds ok (ALL BAD)

node 2: psx file 1: 4/4 dense clouds ok
        psx file 2: 0/4 dense clouds ok (ALL BAD)
        psx file 3: 0/4 dense clouds ok (ALL BAD)

node 3: psx file 1: 0/4 dense clouds ok (ALL BAD)
        psx file 2: 0/4 dense clouds ok (ALL BAD)

The "bad" files appear to be about the same size as the files that process OK, and the problem seems to be near the beginning of each file. (There may be more bad sections further into the "good" files, but I've processed ~10 of them so far with no errors, gridding normals, confidence, and stddev elevation.)
Here are the relevant errors I get with lasvalidate and lasinfo:

Code: [Select]
>lasvalidate

WARNING: end-of-file after 222 of 369637189 points
needed 0.00 sec for '20181007_NR_to_OA_dense.laz' fail
WARNING: end-of-file after 997 of 2943737 points
needed 0.00 sec for '20190830-0902_VA_to_OR_dense.laz' fail
WARNING: end-of-file after 1409 of 1011263656 points
needed 0.00 sec for '20190830_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 1823 of 155724795 points
needed 0.00 sec for '20190830_VA_to_OR_dense.laz' fail
WARNING: end-of-file after 1920 of 2700500566 points
needed 0.00 sec for '20190902_VA_to_OR_dense.laz' fail
WARNING: end-of-file after 332 of 107417870 points
needed 0.00 sec for '20191011_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 1906 of 1629065455 points
needed 0.00 sec for '20191011_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 4167 of 2477398798 points
needed 0.01 sec for '20191011_VA_OR_dense.laz' fail
WARNING: end-of-file after 27 of 1681857002 points
needed 0.00 sec for '20191126_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 85 of 2932739702 points
needed 0.00 sec for '20191126_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 3906 of 785969002 points
needed 0.01 sec for '20191126_VA_OR_dense.laz' fail
WARNING: end-of-file after 3875 of 1345029075 points
needed 0.00 sec for '20200208-9_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 460 of 2881636414 points
needed 0.00 sec for '20200208-9_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 3017 of 1373215110 points
needed 0.00 sec for '20200208-9_VA_OR_dense.laz' fail
WARNING: end-of-file after 413 of 1500086455 points
needed 0.00 sec for '20200508-9_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 898 of 3101815941 points
needed 0.00 sec for '20200508-9_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 489 of 2661668716 points
needed 0.00 sec for '20200508-9_VA_OR_dense.laz' fail
WARNING: end-of-file after 4294 of 908102077 points
needed 0.01 sec for '20200802_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 1631 of 1270674803 points
needed 0.00 sec for '20200802_OR_dense.laz' fail
WARNING: end-of-file after 1609 of 2230961910 points
needed 0.00 sec for '20200802_OR_to_HA_dense.laz' fail
WARNING: end-of-file after 4220 of 586845194 points
needed 0.01 sec for '20210430_OC_to_LO_dense.laz' fail
WARNING: end-of-file after 119 of 1732898564 points
needed 0.00 sec for '20210430_OR_dense.laz' fail
WARNING: end-of-file after 2076 of 2464394245 points
needed 0.00 sec for '20210430_OR_to_HA_dense.laz' fail
done. total time 0.08 sec. total fail (pass=0,warning=0,fail=23)

>lasinfo
ERROR: 'chunk with index 0 of 1 is corrupt' after 222 of 369637189 points for '20181007_NR_to_OA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 997 of 2943737 points for '20190830-0902_VA_to_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1409 of 1011263656 points for '20190830_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1823 of 155724795 points for '20190830_VA_to_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1920 of 2700500566 points for '20190902_VA_to_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 332 of 107417870 points for '20191011_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1906 of 1629065455 points for '20191011_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 4167 of 2477398798 points for '20191011_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 27 of 1681857002 points for '20191126_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 85 of 2932739702 points for '20191126_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 3906 of 785969002 points for '20191126_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 3875 of 1345029075 points for '20200208-9_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 460 of 2881636414 points for '20200208-9_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 3017 of 1373215110 points for '20200208-9_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 413 of 1500086455 points for '20200508-9_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 898 of 3101815941 points for '20200508-9_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 489 of 2661668716 points for '20200508-9_VA_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 4294 of 908102077 points for '20200802_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1631 of 1270674803 points for '20200802_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 1609 of 2230961910 points for '20200802_OR_to_HA_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 4220 of 586845194 points for '20210430_OC_to_LO_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 119 of 1732898564 points for '20210430_OR_dense.laz'
ERROR: 'chunk with index 0 of 1 is corrupt' after 2076 of 2464394245 points for '20210430_OR_to_HA_dense.laz'
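Given the size pattern noted in the edit above, a quick pre-screen could flag oversized exports for validation first; a hypothetical helper (function name and threshold choice are mine, based on the ~50 GB observation):

```python
def flag_suspect(sizes_bytes, threshold=50 * 10**9):
    """Given {filename: size_in_bytes}, return files above the size threshold.

    Every corrupt export observed in this batch was larger than ~50 GB, so
    oversized files are worth running through lasvalidate first. This is a
    heuristic, not a guarantee of corruption.
    """
    return sorted(name for name, size in sizes_bytes.items() if size > threshold)

# made-up sizes for illustration
print(flag_suspect({'a_dense.laz': 57_663_283_000, 'b_dense.laz': 41_000_000_000}))
# → ['a_dense.laz']
```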


4
I wrote the code below to build a 25 cm ortho (rather than full resolution) with integer cell bounds. It works great if I copy/paste it into the console, but if I run it from the <>Run Script command and send it to network processing, or launch it as a batch run-script step (and send the batch to network processing), it fails to generate any network tasks: the monitor only shows "running script" and the node reports "no network tasks to do!" from my else statement.

- Note that I've only tested this on my local workstation configured as host/monitor/node/client, but I haven't seen this behavior with other scripts I tested.

Code: [Select]
import math

import Metashape

# ------------------------------------------------------- #
# User-set variables
raster_rounding_multiple = 40   # Default = 40 - multiple of the raster resolution that the
                                # min/max extents are rounded to (DEM uses 10, so this keeps
                                # unit rounding consistent)
raster_resolution = 0.25        # Default = 0.25 - cell size of exported ortho
raster_crop = True              # Default = True - True: bounding box is rounded IN (min extent
                                # rounded up, max extent rounded down). False: reversed.
                                # TODO: have Metashape check whether this is an interpolated
                                # raster (shrink) or uninterpolated (grow?)
                                # ALSO: maybe project the xy coordinates of the 3D dense cloud
                                # region and use those instead? That would leave no/minimal
                                # collar, though...
ortho_subdir = 'ortho'              # subdir created under the document (PSX) path
ortho_suffix = '_NAD83_2011_UTM18'  # suffix appended to ortho (future: derive from WKT)

raster_rounding_interval = raster_rounding_multiple * raster_resolution

def round_down(x):
    return int(raster_rounding_interval * math.floor(float(x) / raster_rounding_interval))

def round_up(x):
    return int(raster_rounding_interval * math.ceil(float(x) / raster_rounding_interval))

app = Metashape.app
doc = app.document
network_tasks = list()

for chunk in doc.chunks:
    if not chunk.elevation:
        continue
    print(chunk.label)
    out_projection = chunk.elevation.projection

    compression = Metashape.ImageCompression()
    compression.tiff_compression = Metashape.ImageCompression.TiffCompressionLZW
    compression.tiff_big = True
    compression.tiff_overviews = True
    compression.tiff_tiled = True

    x_min = min(chunk.elevation.left, chunk.elevation.right)
    x_max = max(chunk.elevation.left, chunk.elevation.right)
    y_min = min(chunk.elevation.bottom, chunk.elevation.top)
    y_max = max(chunk.elevation.bottom, chunk.elevation.top)

    print('')
    print('original DEM BBox coordinates:')
    print('min: ', Metashape.Vector((x_min, y_min)))
    print('max: ', Metashape.Vector((x_max, y_max)))

    testbox = Metashape.BBox()  # bounding box for the raster
    if raster_crop:
        testbox.min = Metashape.Vector((round_up(x_min), round_up(y_min)))
        testbox.max = Metashape.Vector((round_down(x_max), round_down(y_max)))
        print('extent was SHRUNK to:')
    else:
        testbox.min = Metashape.Vector((round_down(x_min), round_down(y_min)))
        testbox.max = Metashape.Vector((round_up(x_max), round_up(y_max)))
        print('extent was GROWN to:')
    print('min: ', testbox.min)
    print('max: ', testbox.max)

    print('building ortho in network mode')
    task = Metashape.Tasks.BuildOrthomosaic()
    task.blending_mode = Metashape.BlendingMode.AverageBlending
    task.cull_faces = False
    task.fill_holes = True
    task.projection = out_projection
    task.region = testbox
    task.resolution = raster_resolution
    task.resolution_x = raster_resolution
    task.resolution_y = raster_resolution
    task.refine_seamlines = False
    task.subdivide_task = True
    task.surface_data = Metashape.DataSource.ElevationData

    n_task = Metashape.NetworkTask()
    n_task.name = task.name
    n_task.params = task.encode()
    n_task.frames.append((chunk.key, 0))
    network_tasks.append(n_task)

if network_tasks:
    print('sending', len(network_tasks), 'tasks for processing')
    client = Metashape.NetworkClient()
    client.connect(app.settings.network_host)  # server IP
    batch_id = client.createBatch(doc.path, network_tasks)
    client.resumeBatch(batch_id)
else:
    print('no network tasks to do!')
print('script complete')
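The rounding helpers can be sanity-checked without Metashape. With the defaults above, the rounding interval is 40 × 0.25 = 10 units (the coordinates below are made up):

```python
import math

interval = 40 * 0.25  # raster_rounding_multiple * raster_resolution = 10.0

def round_down(x):
    return int(interval * math.floor(float(x) / interval))

def round_up(x):
    return int(interval * math.ceil(float(x) / interval))

# raster_crop=True rounds the box IN: min extent up, max extent down
print(round_up(431234.6), round_down(437887.2))  # → 431240 437880
```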

5
I was a little surprised to find that our accuracy decreased, and processing time and memory requirements increased, when processing imagery with the most recent versions of Metashape.

I compared alignment and optimization results for 1.6.5/1.6.6 and 1.7.2 using eight overlapping flights (F1 through F8 in the table below, ~140,000 images total) processed with precise camera positions and a single GCP. These eight flights were aligned and optimized in two batches of four temporally adjacent flights using a 4D technique (sensu Warrick et al. 2017), each batch being roughly 70,000 images. Results were compared by differencing 1 m integer-cell-bounded DSMs produced by Metashape for each flight against RTK elevations from 34 GCPs distributed in a region comprising about 20% of the total flight area.

The same source project and processing steps were used for both 1.6.5 and 1.7.2 (alignment, optimization, dense cloud processing, and DEM production and export were entirely batch processing or Python API). Settings such as alignment quality and keypoint/tiepoint limits were identical (70,000/0). After alignment and optimization, a sub-region containing ground control points was reconstructed and a DEM was output; the DEM elevations at the 34 GCPs were then extracted (one of which had been used in the optimization).

The table below shows mean and median signed and unsigned error when differencing the 34 GCPs with DEMs created from each flight, as well as the average of each error over all flights (last data column). In all cases the 1.7.2 errors were elevated relative to 1.6.5.

Code: [Select]
                                     F1     F2     F3     F4     F5     F6     F7     F8    Avg
mean signed difference,    1.6.5  -0.01  -0.03  -0.06  -0.07  -0.11  -0.09  -0.10  -0.11  -0.07
mean signed difference,    1.7.2  -0.03  -0.07  -0.08  -0.09  -0.14  -0.13  -0.14  -0.13  -0.10
median signed difference,  1.6.5   0.00  -0.02  -0.04  -0.05  -0.09  -0.09  -0.09  -0.11  -0.06
median signed difference,  1.7.2  -0.04  -0.06  -0.08  -0.07  -0.13  -0.12  -0.11  -0.13  -0.09
mean absolute difference,  1.6.5   0.06   0.07   0.09   0.09   0.12   0.11   0.15   0.12   0.10
mean absolute difference,  1.7.2   0.12   0.16   0.17   0.18   0.21   0.24   0.26   0.21   0.19
median absolute difference, 1.6.5  0.05   0.06   0.05   0.06   0.09   0.10   0.11   0.11   0.08
median absolute difference, 1.7.2  0.06   0.08   0.09   0.07   0.13   0.12   0.11   0.13   0.10
(all rows: RuPaReX2 + FitAddl on last iteration)

Interestingly, 1.7.2 found more points, and kept more points after optimization. A few other observations, drawn from one of the two batches:

Code: [Select]
                        1.6.5   1.7.2   1.7.2/1.6.5
Matching time (h)          14      28         200%
Alignment time (h)         44      60         136%
Optimize time (h)        15.1     6.2          41%
Matching memory (GB)       48     217         452%
Alignment memory (GB)      97     132         136%

Total Align/Opt (h)      73.1    94.2         129%
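The totals and the overall ratio are internally consistent; a quick arithmetic check:

```python
v165 = 14 + 44 + 15.1  # matching + alignment + optimize, hours (1.6.5)
v172 = 28 + 60 + 6.2   # same, 1.7.2

print(round(v165, 1), round(v172, 1))  # → 73.1 94.2
print(round(v172 / v165 * 100))        # → 129  (percent)
```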

Reference:

Jonathan A. Warrick, Andrew C. Ritchie, Gabrielle Adelman, Kenneth Adelman, Patrick W. Limber "New Techniques to Measure Cliff Change from Historical Oblique Aerial Photographs and Structure-from-Motion Photogrammetry," Journal of Coastal Research, 33(1), 39-55, (1 January 2017), https://doi.org/10.2112/JCOASTRES-D-16-00095.1


6
TLDR: GPU masking with Metashape 1.7.2 on a CentOS Linux node is mirrored/reversed, i.e. applied from high bit to low.

I'm not sure whether this is expected behavior, because the examples I found in the API docs and forum were ambiguous.

We had uncorrectable GPU memory errors on one of the cards in an HPC GPU node (CentOS), which I worked around by masking the offending GPU:
Code: [Select]
Jun 21 19:56:35 dl-0001 kernel: NVRM: GPU at PCI:0000:89:00: GPU-<censored>
Jun 21 19:56:35 dl-0001 kernel: NVRM: GPU Board Serial Number: <censored>
Jun 21 19:56:35 dl-0001 kernel: NVRM: Xid (PCI:0000:89:00): 63, Dynamic Page Retirement: New page retired, reboot to activate (0x00000000000a44da).
Jun 21 19:56:37 dl-0001 kernel: NVRM: Xid (PCI:0000:89:00): 63, Dynamic Page Retirement: New page retired, reboot to activate (0x00000000000a249c).
Jun 21 19:56:40 dl-0001 kernel: NVRM: Xid (PCI:0000:89:00): 48, An uncorrectable double bit error (DBE) has been detected on GPU in the framebuffer at partition 6, subpartition 0.
Jun 21 19:56:40 dl-0001 kernel: NVRM: Xid (PCI:0000:89:00): 63, Dynamic Page Retirement: New page retired, reboot to activate (0x00000000000a2ca1).

nvidia-smi reported GPU 2 was bad (of GPUs 0-3):
Code: [Select]
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:61:00.0 Off |                    0 |
| N/A   30C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:62:00.0 Off |                    0 |
| N/A   29C    P0    38W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000000:89:00.0 Off |                    1 |
| N/A   30C    P0    38W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  On   | 00000000:8A:00.0 Off |                    0 |
| N/A   31C    P0    41W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

But my attempts to mask led to some confusion about the expected mask behavior. For clarity I represent the masks below in binary, though they were converted to decimal for the metashape --gpu_mask argument (i.e. binary 1011 = decimal 11).

With a depth-mapping job active on the node, we confirmed that mask 0011 activated GPUs 0 and 1, mask 1100 crashed the Metashape process, and mask 1011 (decimal 11) enabled GPUs 0, 1, and 3. Metashape was called as below to mask GPU 2:
Code: [Select]
srun metashape.sh --node --dispatch $ip4 --capability any --cpu_enable 1 --gpu_mask 11 --inprocess -platform offscreen
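The mask arithmetic itself is easy to check in plain Python, assuming bit i of the mask corresponds to GPU i (which is exactly what made the reversed behavior above surprising):

```python
mask = 0b1011  # intended: enable GPUs 0, 1, and 3; skip the bad GPU 2
print(mask)    # → 11, the decimal value passed to --gpu_mask

enabled = [i for i in range(4) if mask >> i & 1]
print(enabled)  # → [0, 1, 3]
```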

7
I'm trying to set up --nice slurm scripts to run big jobs at a low priority but using all available nodes. I have a couple questions about best practices:
  • Is it possible to pass a signal to the server/monitor/node to die/suspend nicely (i.e. pause/stop: finish the current task, then quit)? Right now, by default, if I scancel a job it just dies, though I can pass signals to child processes of the batch script. I'd like to figure out how my --nice nodes could exit/suspend in a way that finishes whatever subtask they're in the middle of (because some subtasks take hours).
  • Is it possible to specify that certain nodes get priority for specific long-running tasks (like AlignCameras.finalize)? Ideally I'd assign that task to a node with normal priority, not a --nice node. Also, if an unused node of the right type (CPU vs GPU) is available, I'd like to start a fresh job to maximize my time in case I'm near the end of my allocation for a given node, since interrupting that task can cost 24+ hours.
  • Are there other tips/tricks for running Metashape network processing nicely that I haven't thought of? I'm especially interested in spawning and retiring nodes as needed during different stages of a batch or script. At the moment my workflow is batch-driven, where some steps run scripts that affect the whole document and others are simple batch steps. I imagine good node management would require going 100% scripted.
  • If anyone has example Python code that spawns and retires nodes, I would be much obliged.

Thanks!

8
Bug Reports / 1.7.3 Possible save bug if disk full
« on: June 16, 2021, 11:26:50 PM »
I ran out of space on my work disk while saving a project, so I deleted some files unrelated to the project to make room, then tried saving again. During the second attempt I got an error that the file is being used by another process, although no other instances of Metashape are running. I'm guessing my only option is to save the entire project as a new project.

Full console errors below:

Code: [Select]
2021-06-16 10:46:56 SaveProject: path = D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.psx
2021-06-16 10:46:56 Saving project...
2021-06-16 10:52:11 Error: Can't write file: There is not enough space on the disk (112): D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.files/10/0/point_cloud/point_cloud.zip.tmp
2021-06-16 10:52:11 Error: Can't remove file: The process cannot access the file because it is being used by another process (32): D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.files/10/0/point_cloud/point_cloud.zip.tmp
2021-06-16 10:52:11 Finished processing in 314.97 sec (exit code 0)
2021-06-16 10:52:11 Error: Can't write file: There is not enough space on the disk (112): D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.files/10/0/point_cloud/point_cloud.zip.tmp
2021-06-16 13:11:10 SaveProject: path = D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.psx
2021-06-16 13:11:11 Saving project...
2021-06-16 13:16:20 Error: Can't remove file: The process cannot access the file because it is being used by another process (32): D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.files/10/0/point_cloud/point_cloud.zip.tmp
2021-06-16 13:16:20 Finished processing in 309.418 sec (exit code 0)
2021-06-16 13:16:20 Error: Can't replace file or directory: The process cannot access the file because it is being used by another process (32): D:/FloSup/FloSup_Align4d/FloSup_batch_2/optimize_tests/FloSup_4D_202008-202104.files/10/0/point_cloud/point_cloud.zip

9
I'm getting the error below periodically. The first time it happened I was doing two things at once (sorry, I don't remember exactly; I think I ran code right after saving) and thought maybe I had done one thing too soon after the other.

This time (the second time, a day later) I was just doing standard stuff in the console. I don't think I did a full reboot after the last error; I'll try that now.

When I searched for the error, the only results that jumped out were people fixing a similar error by upgrading ipykernel or downgrading tornado.

Here's the log, showing a few commands prior to me triggering the error.

Code: [Select]
In[29]: chunk
Out[29]: 2021-06-11 12:35:15 <Chunk 'Copy of Hatteras_Inlet_to_Ocracoke_Inlet_RuPaRe_x2_FloSup_4D_202008-202104'>

In[30]: group_label = chunk.camera_groups[0].label

In[31]: group_label
2021-06-11 12:35:48 ERROR:tornado.general:Uncaught exception in ZMQStream callback
2021-06-11 12:35:48 Traceback (most recent call last):
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 438, in _run_callback
2021-06-11 12:35:48     callback(*args, **kwargs)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 120, in _handle_event
2021-06-11 12:35:48     event_f()
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 214, in <lambda>
2021-06-11 12:35:48     self.schedule(lambda : self._really_send(*args, **kwargs))
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 222, in _really_send
2021-06-11 12:35:48     self.socket.send_multipart(msg, *args, **kwargs)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\inprocess\socket.py", line 62, in send_multipart
2021-06-11 12:35:48     self.message_sent += 1
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 585, in __set__
2021-06-11 12:35:48     self.set(obj, value)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 574, in set
2021-06-11 12:35:48     obj._notify_trait(self.name, old_value, new_value)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 1134, in _notify_trait
2021-06-11 12:35:48     self.notify_change(Bunch(
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 1176, in notify_change
2021-06-11 12:35:48     c(change)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\inprocess\ipkernel.py", line 130, in _io_dispatch
2021-06-11 12:35:48     ident, msg = self.session.recv(self.iopub_socket, copy=False)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\jupyter_client\session.py", line 814, in recv
2021-06-11 12:35:48     msg_list = socket.recv_multipart(mode, copy=copy)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 246, in __getattr__
2021-06-11 12:35:48     warnings.warn("Accessing zmq Socket attribute %s on BackgroundSocket" % attr,
2021-06-11 12:35:48 DeprecationWarning: Accessing zmq Socket attribute recv_multipart on BackgroundSocket
2021-06-11 12:35:48 ERROR:tornado.general:Uncaught exception in zmqstream callback
2021-06-11 12:35:48 Traceback (most recent call last):
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 456, in _handle_events
2021-06-11 12:35:48     self._handle_recv()
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 486, in _handle_recv
2021-06-11 12:35:48     self._run_callback(callback, msg)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 438, in _run_callback
2021-06-11 12:35:48     callback(*args, **kwargs)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 120, in _handle_event
2021-06-11 12:35:48     event_f()
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 214, in <lambda>
2021-06-11 12:35:48     self.schedule(lambda : self._really_send(*args, **kwargs))
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 222, in _really_send
2021-06-11 12:35:48     self.socket.send_multipart(msg, *args, **kwargs)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\inprocess\socket.py", line 62, in send_multipart
2021-06-11 12:35:48     self.message_sent += 1
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 585, in __set__
2021-06-11 12:35:48     self.set(obj, value)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 574, in set
2021-06-11 12:35:48     obj._notify_trait(self.name, old_value, new_value)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 1134, in _notify_trait
2021-06-11 12:35:48     self.notify_change(Bunch(
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 1176, in notify_change
2021-06-11 12:35:48     c(change)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\inprocess\ipkernel.py", line 130, in _io_dispatch
2021-06-11 12:35:48     ident, msg = self.session.recv(self.iopub_socket, copy=False)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\jupyter_client\session.py", line 814, in recv
2021-06-11 12:35:48     msg_list = socket.recv_multipart(mode, copy=copy)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 246, in __getattr__
2021-06-11 12:35:48     warnings.warn("Accessing zmq Socket attribute %s on BackgroundSocket" % attr,
2021-06-11 12:35:48 DeprecationWarning: Accessing zmq Socket attribute recv_multipart on BackgroundSocket
2021-06-11 12:35:48 ERROR:asyncio:Exception in callback BaseAsyncIOLoop._handle_events(2028, 1)
2021-06-11 12:35:48 handle: <Handle BaseAsyncIOLoop._handle_events(2028, 1)>
2021-06-11 12:35:48 Traceback (most recent call last):
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\asyncio\events.py", line 81, in _run
2021-06-11 12:35:48     self._context.run(self._callback, *self._args)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\tornado\platform\asyncio.py", line 139, in _handle_events
2021-06-11 12:35:48     handler_func(fileobj, events)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 456, in _handle_events
2021-06-11 12:35:48     self._handle_recv()
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 486, in _handle_recv
2021-06-11 12:35:48     self._run_callback(callback, msg)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\zmq\eventloop\zmqstream.py", line 438, in _run_callback
2021-06-11 12:35:48     callback(*args, **kwargs)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 120, in _handle_event
2021-06-11 12:35:48     event_f()
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 214, in <lambda>
2021-06-11 12:35:48     self.schedule(lambda : self._really_send(*args, **kwargs))
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 222, in _really_send
2021-06-11 12:35:48     self.socket.send_multipart(msg, *args, **kwargs)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\inprocess\socket.py", line 62, in send_multipart
2021-06-11 12:35:48     self.message_sent += 1
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 585, in __set__
2021-06-11 12:35:48     self.set(obj, value)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 574, in set
2021-06-11 12:35:48     obj._notify_trait(self.name, old_value, new_value)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 1134, in _notify_trait
2021-06-11 12:35:48     self.notify_change(Bunch(
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\traitlets\traitlets.py", line 1176, in notify_change
2021-06-11 12:35:48     c(change)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\inprocess\ipkernel.py", line 130, in _io_dispatch
2021-06-11 12:35:48     ident, msg = self.session.recv(self.iopub_socket, copy=False)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\jupyter_client\session.py", line 814, in recv
2021-06-11 12:35:48     msg_list = socket.recv_multipart(mode, copy=copy)
2021-06-11 12:35:48   File "C:\Program Files\Agisoft\Metashape Pro\python\lib\site-packages\ipykernel\iostream.py", line 246, in __getattr__
2021-06-11 12:35:48     warnings.warn("Accessing zmq Socket attribute %s on BackgroundSocket" % attr,
2021-06-11 12:35:48 DeprecationWarning: Accessing zmq Socket attribute recv_multipart on BackgroundSocket

after that the interpreter dies, and there's a ~30-second unresponsive spinning wheel after each command, but no output:

Code: [Select]
In [32]: chunk

In [33]: Metashape.app.document

In [34]: print('oh no!')

In [35]:

[edit] after I closed Metashape, the attached window hung around for a minute or two with the message:

IOStream.flush timed out

repeating a dozen times or so.

10
I recently attempted a network processing alignment with around 140k images. When the Alignment.Cleanup failed due to out-of-RAM errors on the cleanup node, I killed the job in Monitor, then divided the project into two separate projects by copying the original chunk, deleting half the images from the original chunk, and the other half from the copied chunk. Each chunk was saved out to a new PSX file, and the original was closed without saving. "save keypoints" is not enabled (since I can't selectively delete them when/if I delete some images and divide the project later for dense matching). I also killed the server and monitor and restarted all processes.
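The split itself was done by hand in the GUI, but the bookkeeping reduces to halving the image list; a pure-Python sketch (the labels are hypothetical stand-ins for the real camera list):

```python
# Hypothetical image labels standing in for the real camera list;
# the GUI workflow deletes the second half from one chunk copy and
# the first half from the other, then saves each to its own PSX.
labels = [f"IMG_{i:06d}" for i in range(139152)]
half = len(labels) // 2
first_chunk, second_chunk = labels[:half], labels[half:]
print(len(first_chunk), len(second_chunk))  # 69576 69576
```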

I then restarted network alignment on the PSX saved out from the original chunk. There was an initial error about "can't resume matching without keypoints" or something like that, before the nodes started on the AlignCameras.align task without performing any matching (?!).

BUT - there is a point_cloud folder (~40GB) in the original project .files hierarchy, and in both of the sub-projects I divided and saved out, there are also ~20GB point_cloud folders. Despite the fact that I canceled alignment because cleanup couldn't continue, and that I didn't have "save keypoints" enabled. So it appears that the network processing saved the matched-but-not-aligned state - effectively saving the keypoints anyway since the align stage didn't complete?! (if so, yay!).

The nodes appear to be crunching through what they perceive as valid matches (pic of monitor attached), and I'm confused - did the server save the partially complete state/matched keypoints, or are these data/status saved in the original project and transferred when I exported the edited chunk as a new PSX? Did the network processing task, because it didn't complete, somehow save the "matched-but-not-aligned" state of the original project? Would this NOT be saved in the copied chunk, but only as a property of the original chunk? So many questions.

The interesting thing to me was that the project skipped matching entirely and essentially restarted the align task from some post-matching point - even though I don't have "save keypoints" enabled, and after it looked like it initially tried to restart align.cleanup. I'll have to take a look at the logs when this is done (and see if it runs out of RAM again during cleanup), but I'm guessing since each sub-project only has half the points, the task will complete.

This would be a nice "feature" in non-network mode, and it makes me want to bench running a large project in network vs non-network mode on a single workstation (I do this with small projects sometimes to test python script). I could definitely see value in running projects in network mode if it allows me to skip re-matching even if I choose not to save keypoints, if the process is somehow interrupted.

11
I haven't seen updated alignment ram usage numbers lately so I figured I'd share my latest learnings. I processed two collections of 36 MPix aerial images with roughly the same geometry. The first was processed in Metashape 1.6.5 and the second in 1.7.2.

Working on a cluster with 384 GB of RAM per node, the alignment limit (on High) appears to be between 82,000 and 139,000 images, with the final step of alignment (performed on a single node) being the limiting factor.

Maximum RAM usage to align 82,129 images was 173.18GB in 1.6.5.11249. If this scaled linearly, 139,152 images should take ~293GB of RAM. But we ran out of RAM on a 384GB node trying to complete the alignment stage in 1.7.2 with that number of images.
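The ~293GB figure is just a linear back-of-envelope extrapolation from the 1.6.5 run (and the out-of-RAM failure suggests scaling is worse than linear in 1.7.2):

```python
# Linear extrapolation of peak alignment RAM from the measured 1.6.5 run.
# Assumes RAM scales proportionally with image count, which the 1.7.2
# failure suggests is optimistic.
images_measured, ram_measured_gb = 82129, 173.18
images_target = 139152
projected_gb = ram_measured_gb / images_measured * images_target
print(round(projected_gb, 1))  # ~293.4 GB, nominally within a 384 GB node
```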

Obviously these are different versions, but wanted to share what I know.

Andy

12
Getting std::bad_alloc when trying to build an interpolated (not extrapolated) mesh on high from the dense cloud on some big chunks. Dense cloud is GCS (NAD83(2011)). I have successfully built interpolated and uninterpolated DEMs, and orthoimages for these chunks.

We first built an uninterpolated DEM from the dense cloud for the elevation model, then built an interpolated DEM and orthophoto (using the interpolated DEM).

I am now trying to build a mesh from the dense cloud to use for a comparison orthoimage (because in smaller experiments the mesh was much faster and smaller than the interpolated DEM).

The mesh was generated after rotating the bounding box to the DEM projected coordinate system (PCS = NAD83 UTM). The rotation was done to minimize the height/width of the nodata collars on the DEM generated from the dense cloud; if the region stays rotated, the DEM bounds extend all the way to the corners of the along-track-oriented (not PCS-oriented) bounding box. I wonder if the mesh is failing because it's doing grid interpolation over the whole empty area of the rotated bounding box. In that case, I need to switch the order or re-rotate the region to be oriented with the data, but it will probably still fail on another section that is L-shaped with a bunch of empty space.

These are the details from the node - I included a previous successful (smaller) mesh generation before too:

2021-05-07 17:45:55 BuildModel: source data = Dense cloud, surface type = Height field, face count = High, interpolation = Enabled, vertex colors = 0
2021-05-07 17:45:56 Generating mesh...
2021-05-07 17:46:20 generating 213317x132869 grid (0.00214379 resolution)
2021-05-07 17:46:20 rasterizing dem... done in 81.9141 sec
2021-05-07 17:47:42 filtering dem... done in 375.867 sec
2021-05-07 17:55:06 constructed triangulation from 21327465 vertices, 42654924 faces
2021-05-07 17:57:38 grid interpolated in 220.33 sec
2021-05-07 18:13:56 triangulating... 106374525 points 212748181 faces done in 4727.18 sec
2021-05-07 19:32:45 Peak memory used: 181.40 GB at 2021-05-07 19:32:43
2021-05-07 19:33:00 processing finished in 6425.13 sec
2021-05-07 19:33:00 BuildModel: source data = Dense cloud, surface type = Height field, face count = High, interpolation = Enabled, vertex colors = 0
2021-05-07 19:33:01 Generating mesh...
2021-05-07 19:33:37 generating 262471x233536 grid (0.00219694 resolution)
2021-05-07 19:33:37 rasterizing dem... done in 209.04 sec
2021-05-07 19:37:06 filtering dem... done in 847.863 sec
2021-05-07 19:53:17 constructed triangulation from 23493503 vertices, 46987000 faces
2021-05-07 19:57:34 grid interpolated in 380.113 sec
2021-05-07 20:20:53 Error: std::bad_alloc
2021-05-07 20:20:53 processing failed in 2872.89 sec
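For what it's worth, scaling the successful run's peak memory by the ratio of grid cells in the log above suggests the second grid would need well over the node's RAM (this assumes peak RAM scales with cell count, which is only a guess):

```python
# Compare the two height-field grids from the BuildModel log.
# Assumption: peak RAM scales roughly with the number of grid cells.
cells_ok  = 213317 * 132869   # first mesh, succeeded, peaked at 181.40 GB
cells_bad = 262471 * 233536   # second mesh, failed with std::bad_alloc
ratio = cells_bad / cells_ok
projected_gb = 181.40 * ratio
print(round(ratio, 2), round(projected_gb))  # ~2.16x the cells, ~392 GB projected
```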

13
[EDIT 2 - this was my screw-up, not a bug, and I corrected the code - TL;DR: the code below works now; the original code (now deleted) had the document path, not the PSX itself, in the batch_id]

I wrote a script to loop through chunks in a psx, and for each chunk with a default (checked) DEM, it will get the extent and export DEMs with the bounding box/BBox rounded to some multiple of the specified DEM export resolution.
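The rounding in question reduces to snapping each extent to a multiple of the export resolution; standalone, with made-up coordinates:

```python
import math

# Snap an extent to a multiple of the export interval, as the script does.
# interval = raster_rounding_multiple * raster_resolution (10 * 1 here).
interval = 10

def round_down(x):
    return int(interval * math.floor(float(x) / interval))

def round_up(x):
    return int(interval * math.ceil(float(x) / interval))

# Example easting (made up): snaps to the enclosing 10 m grid lines.
print(round_down(415387.3), round_up(415387.3))  # 415380 415390
```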

I designed the script to work in either network or non-network mode, and tested it in both modes on a Win10 machine (tested network with node/monitor/GUI/host all on 127.0.0.1). It looks for app.settings.network_enable = True and runs in network mode if True, standalone if not. On the Windows machine I was able to generate DEMs from multiple chunks as expected.

When I tried it in network mode on our unix machines, I got "Error: Can't read file: Is a directory (21)" and I have no idea why.

In non-network mode it runs just fine. It kind of seems like the network task is truncating the file path or something, but the network task looks fine to me. The total path length including filename was 154 characters. I've attached a screenshot showing the bad script run on a node, plus several attempts to duplicate the filename at the end; the last one was successful, and the extra comma is because I apparently pasted it into the filename (creating a file with a comma in the extension, which I didn't even know was legal).


[edit accidentally hit post before attaching image]


Code: [Select]
'''
make bounding boxes and build integer bounded DEMs for ALL default DEMs in the open PSX file
aritchie@usgs.gov 2021-05-03 tested on Metashape 1.7.1

This script creates a bounding box from the extent of the existing default full-res DEM, rounded to the specified interval, then creates a raster with a specified resolution,
FOR EVERY DEFAULT DEM IN EVERY CHUNK IN THE PSX.

Raster will be placed in a user-specified (via script variable) subdirectory of the existing project ('dem' by default)
DIRECTORY WILL BE CREATED IF IT DOESN'T EXIST.
A user-specified suffix will be appended to the chunk label (in user variables below)
---CAUTION THERE IS NO ERROR CHECKING FOR LEGAL FILENAMES----
There is no error checking in the script. It will throw errors if there is no default DEM.
If there are bad filename characters, etc., I have NO idea what will happen. Be careful.

Andy
'''
import Metashape
import math
import os
from os import path
#-------------------------------------------------------#
#define user-set variables
raster_rounding_multiple = 10   # Default = 10 - This will be the multiple that the raster resolution is multiplied by to define the units the min/max extents are rounded to
raster_resolution = 1           # Default = 1 - cell size of exported DEM
raster_crop = True              # Default = True - True means Bounding Box is rounded IN - minimum extent is rounded up and maximum extent is rounded down from raster edges. False is reversed
                                # TODO - make it so metashape checks to see if this is an interpolated raster (shrink) or uninterpolated (grow?)
                                # ALSO - maybe we want to project the xy coordinates of the 3D dense cloud region and use those instead? this will result in no/minimal collar though...
dem_subdir = 'dem_20210504'              # this is a subdir that will be created under the document (PSX) path
dem_suffix = '_NAD83_2011_NAVD88_UTM18'

#-----OPERATIONAL CODE IS BELOW. EDIT AT YOUR PERIL-----#
raster_rounding_interval = raster_rounding_multiple * raster_resolution
app = Metashape.app
doc = app.document
network_tasks = list()
for chunk in doc.chunks:
    if chunk.elevation:
        print(chunk.label)
        out_projection = chunk.elevation.projection
        compression = Metashape.ImageCompression()
        compression.tiff_compression = Metashape.ImageCompression.TiffCompressionLZW
        compression.tiff_big = True
        compression.tiff_overviews = True
        compression.tiff_tiled = True
           
        def round_down(x):
            return int(raster_rounding_interval * math.floor(float(x)/raster_rounding_interval))

        def round_up(x):
            return int(raster_rounding_interval * math.ceil(float(x)/raster_rounding_interval))


        testbox = Metashape.BBox() #create a bounding box for the raster
        print('')
        print('original DEM BBox coordinates:')
        print('min: ', Metashape.Vector((min(chunk.elevation.left, chunk.elevation.right), min(chunk.elevation.bottom, chunk.elevation.top))))
        print('max: ', Metashape.Vector((max(chunk.elevation.left, chunk.elevation.right), max(chunk.elevation.bottom, chunk.elevation.top))))

        if raster_crop:
            testbox.min = Metashape.Vector((round_up(min(chunk.elevation.left, chunk.elevation.right)), round_up(min(chunk.elevation.bottom, chunk.elevation.top))))
            testbox.max = Metashape.Vector((round_down(max(chunk.elevation.left, chunk.elevation.right)), round_down(max(chunk.elevation.bottom, chunk.elevation.top))))
        else:
            testbox.min = Metashape.Vector((round_down(min(chunk.elevation.left, chunk.elevation.right)), round_down(min(chunk.elevation.bottom, chunk.elevation.top))))
            testbox.max = Metashape.Vector((round_up(max(chunk.elevation.left, chunk.elevation.right)), round_up(max(chunk.elevation.bottom, chunk.elevation.top))))

        if raster_crop:
            print('extent was SHRUNK to: ')
            print('min: ',testbox.min)
            print('max: ',testbox.max)
        else:
            print('extent was GROWN to: ')
            print('min: ',testbox.min)
            print('max: ',testbox.max)

        doc_path = os.path.split(doc.path)[0]
        outPath = os.path.normpath(doc_path + os.sep + dem_subdir)

        outFilename = chunk.label + dem_suffix + '_' + str(raster_resolution) + 'm' + '.tif'
        exportFile = os.path.normpath(outPath+os.sep+outFilename)
        if not os.path.exists(outPath):
            print('testing create path: ' + outPath)
            os.makedirs(outPath)
            print('testing file writestring: ' + exportFile)
        else:
            if not os.path.isfile(exportFile):
                print('testing file writestring: ' + exportFile)
        #
        if not app.settings.network_enable:
            chunk.exportRaster(path = exportFile, image_format=Metashape.ImageFormatTIFF, projection = out_projection, region = testbox, resolution_x = raster_resolution,  resolution_y = raster_resolution, image_compression=compression, save_world = False, white_background = False,source_data = Metashape.ElevationData)
        else:
            task = Metashape.Tasks.ExportRaster()
            task.path = str(exportFile)
            task.image_compression = compression
            task.image_format = Metashape.ImageFormatTIFF
            task.projection = out_projection
            task.region = testbox
            task.resolution_x = raster_resolution
            task.resolution_y = raster_resolution
            task.save_world = False
            task.source_data = Metashape.ElevationData

            n_task = Metashape.NetworkTask()
            n_task.name = task.name
            n_task.params = task.encode()
            n_task.frames.append((chunk.key, 0))
            network_tasks.append(n_task)
    else:
        print(chunk.label, ' has no DEM.')

if app.settings.network_enable:
    client = Metashape.NetworkClient()
    client.connect(app.settings.network_host) #server ip
    batch_id = client.createBatch(doc.path, network_tasks)
    client.resumeBatch(batch_id)
print('script complete')
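Per the CAUTION in the docstring above, chunk labels go straight into filenames. A minimal sanitizer (a hypothetical helper, not part of the script, patterned after the crs_label cleanup in my export script) could look like:

```python
# Hypothetical helper: make a chunk label safe to use as a filename by
# replacing anything that isn't alphanumeric, '-', '_', or '.' with '_',
# then collapsing runs of underscores and trimming the ends.
def safe_filename(label):
    cleaned = ''.join(c if (c.isalnum() or c in '-_.') else '_' for c in label)
    while '__' in cleaned:
        cleaned = cleaned.replace('__', '_')
    return cleaned.strip('_')

print(safe_filename('chunk 3 (NAD83/UTM18)'))  # chunk_3_NAD83_UTM18
```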


14
I just reviewed all of the scripts I could find, and wasn't able to find any option to resize a region to a dense cloud that occupies only part of a sparse cloud extent. I also didn't find anything in the API. The closest I found was this post from January 2020 that was asking how to get a BBox from the extent of a dense_cloud object.

I am aligning multiple sets of images with different extents together to produce a single sparse cloud, then disabling each set iteratively to generate dense clouds with different extents for each set of images.

I want to use the python API to resize the region (or generate a bounding box) based on the extent of the dense cloud data, so that the resulting DEM doesn't have a bunch of nodata on the borders. I can't just manually specify the DEM BBox, since I don't know before generating the dense cloud what the data extent will be.
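Absent a direct API call, the fallback is computing the data extent yourself from point coordinates (however you get them out, e.g. by exporting the dense cloud); a pure-Python sketch with made-up UTM coordinates:

```python
# Sketch: derive a 2D bounding box (min/max corners) from point XY
# coordinates. The points here are made-up UTM eastings/northings.
points = [(402110.2, 4126440.7), (402515.9, 4126212.3), (402301.4, 4126750.0)]
xs = [p[0] for p in points]
ys = [p[1] for p in points]
bbox_min = (min(xs), min(ys))
bbox_max = (max(xs), max(ys))
print(bbox_min, bbox_max)  # corners tight to the data, no nodata collar
```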

Thanks for any insight.

Andy

15
Win10 w/ Threadripper 3960X, two RTX 1080 Super GPUs, and 256GB RAM (85% free)

I'm aligning ~36,000 images in one chunk and trying to figure out why Metashape is being super unresponsive and barely using any resources (1 core). When trying to check where the process is, I found that the screen has not updated for about 6 hours (08:41:11 local time and it's 15:22 right now), and the logfile is being written very slowly (currently 44h behind, at 11:07:23).

The logfile is writing to an SSD with 500GB of space. I thought it might be too big (77MB) so I copied & cleared it, but write speed didn't change.
Resource Monitor shows 7 root threads waiting for a child thread, and 358 associated handles, which I'm happy to provide if they'd be useful

The last line showing in the console (which is currently unresponsive) is bolded and underlined below. The logfile lines being written look like they're in the same process:

2021-03-11 08:41:09 block_obs: 25.066 MB (25.066 MB allocated)
2021-03-11 08:41:09 block_ofs: 2.5294 MB (2.5294 MB allocated)
2021-03-11 08:41:09 block_fre: 0 MB (0 MB allocated)
2021-03-11 08:41:10 adding 331032 points, 0 far (13.1678 threshold), 2 inaccurate, 2 invisible, 0 weak
2021-03-11 08:41:10 adjusting: xxx 0.694264 -> 0.287228
2021-03-11 08:41:10 adding 6 points, 2 far (13.1678 threshold), 2 inaccurate, 2 invisible, 0 weak
2021-03-11 08:41:10 optimized in 0.873 seconds
2021-03-11 08:41:10 f 8863.4, cx 27.5, cy -1.32353, k1 -0.072948, k2 0.0864567, k3 -0.0213253
2021-03-11 08:41:10 f 8862.78, cx 27.5, cy -1.32353, k1 -0.0729651, k2 0.086042, k3 -0.0230285
2021-03-11 08:41:10 f 8863.4, cx 27.5, cy -1.32353, k1 -0.072312, k2 0.0833413, k3 -0.0185212
2021-03-11 08:41:11 adjusting: xxxx 0.295707 -> 0.287377
2021-03-11 08:41:12 loaded projections in 0.003 sec
2021-03-11 08:41:12 tracks initialized in 0.072 sec
2021-03-11 08:41:12 adding 331034 points, 0 far (13.1678 threshold), 1 inaccurate, 3 invisible, 0 weak
2021-03-11 08:41:12 block: 1 sensors, 28 cameras, 106716 points, 0 projections
2021-03-11 08:41:12 block_sensors: 0.000816345 MB (0.000816345 MB allocated)
2021-03-11 08:41:12 block_cameras: 0.0108948 MB (0.0108948 MB allocated)
2021-03-11 08:41:12 block_points: 4.88507 MB (4.88507 MB allocated)
2021-03-11 08:41:12 block_tracks: 0.407089 MB (0.407089 MB allocated)
2021-03-11 08:41:12 block_obs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_ofs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_fre: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block: 2 sensors, 47 cameras, 192836 points, 0 projections
2021-03-11 08:41:12 block_sensors: 0.00163269 MB (0.00163269 MB allocated)
2021-03-11 08:41:12 block_cameras: 0.0182877 MB (0.0182877 MB allocated)
2021-03-11 08:41:12 block_points: 8.82733 MB (8.82733 MB allocated)
2021-03-11 08:41:12 block_tracks: 0.735611 MB (0.735611 MB allocated)
2021-03-11 08:41:12 block_obs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_ofs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_fre: 0 MB (0 MB allocated)
