Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - andyroo

Pages: 1 2 [3] 4 5 ... 27
31
Getting std::bad_alloc when trying to build an interpolated (not extrapolated) mesh at High face count from the dense cloud on some big chunks. The dense cloud is in a GCS (NAD83(2011)). I have successfully built interpolated and uninterpolated DEMs, and orthoimages, for these chunks.

We first built an uninterpolated DEM from the dense cloud for the elevation model, then built an interpolated DEM and an orthophoto (using the interpolated DEM).

I am now trying to build a mesh from the dense cloud to use for a comparison orthoimage (because in smaller experiments the mesh was much faster and smaller than the interpolated DEM).

The mesh was generated after rotating the bounding box to the DEM projected coordinate system (PCS = NAD83 UTM). Rotation was performed to minimize the height/width of the nodata collars on the DEM generated from the dense cloud, since if it stays rotated, the DEM bounds go all the way to the corners of the along-track-oriented (not PCS-oriented) bounding box. I wonder if the mesh is failing because it's doing grid interpolation over the whole empty area of the rotated bounding box. In that case, I need to switch the order or re-rotate the region to be oriented with the data, but it will probably still fail on another section that is L-shaped with a bunch of empty space.
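The collar problem is just geometry: an a x b data rectangle rotated by theta has an axis-aligned bounding box of (a·cos θ + b·sin θ) x (a·sin θ + b·cos θ), so the grid can end up several times the data area. A quick toy check (plain Python, made-up swath dimensions):

```python
import math

def aabb_of_rotated_rect(a, b, theta):
    # Axis-aligned bounding box of an a x b rectangle rotated by theta (radians).
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    return a * c + b * s, a * s + b * c

# e.g. a long, narrow along-track swath at 45 degrees to the PCS axes:
w, h = aabb_of_rotated_rect(100.0, 20.0, math.radians(45))
inflation = (w * h) / (100.0 * 20.0)  # grid area vs. actual data area
```

For a narrow swath at 45 degrees the gridded area comes out more than triple the data area, which is consistent with the mesh step blowing up on the rotated box.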

These are the details from the node - I included a previous, successful (smaller) mesh generation first for comparison:

2021-05-07 17:45:55 BuildModel: source data = Dense cloud, surface type = Height field, face count = High, interpolation = Enabled, vertex colors = 0
2021-05-07 17:45:56 Generating mesh...
2021-05-07 17:46:20 generating 213317x132869 grid (0.00214379 resolution)
2021-05-07 17:46:20 rasterizing dem... done in 81.9141 sec
2021-05-07 17:47:42 filtering dem... done in 375.867 sec
2021-05-07 17:55:06 constructed triangulation from 21327465 vertices, 42654924 faces
2021-05-07 17:57:38 grid interpolated in 220.33 sec
2021-05-07 18:13:56 triangulating... 106374525 points 212748181 faces done in 4727.18 sec
2021-05-07 19:32:45 Peak memory used: 181.40 GB at 2021-05-07 19:32:43
2021-05-07 19:33:00 processing finished in 6425.13 sec
2021-05-07 19:33:00 BuildModel: source data = Dense cloud, surface type = Height field, face count = High, interpolation = Enabled, vertex colors = 0
2021-05-07 19:33:01 Generating mesh...
2021-05-07 19:33:37 generating 262471x233536 grid (0.00219694 resolution)
2021-05-07 19:33:37 rasterizing dem... done in 209.04 sec
2021-05-07 19:37:06 filtering dem... done in 847.863 sec
2021-05-07 19:53:17 constructed triangulation from 23493503 vertices, 46987000 faces
2021-05-07 19:57:34 grid interpolated in 380.113 sec
2021-05-07 20:20:53 Error: std::bad_alloc
2021-05-07 20:20:53 processing failed in 2872.89 sec
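
A back-of-envelope check from the two log excerpts above, assuming peak memory scales roughly linearly with grid cells (a naive assumption, but suggestive):

```python
# grid sizes straight from the log lines above
ok_cells = 213317 * 132869    # run that peaked at 181.40 GB
bad_cells = 262471 * 233536   # run that threw std::bad_alloc
ratio = bad_cells / ok_cells
est_peak_gb = 181.40 * ratio  # naive linear extrapolation of peak memory
```

~2.2x the cells of the run that peaked at 181.40 GB would extrapolate to roughly 390 GB, which is more RAM than the node has.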

32
Bug Reports / Re: Installation fails on linux
« on: May 07, 2021, 09:28:20 PM »
Which OS distribution you are using...

CentOS 7.7.1908

and if you are working on the computer remotely (if so, then how the remote connection is established)

vncserver with Xfce desktop environment

The fix was:

yum install xcb-util-wm xcb-util-image xcb-util-keysyms xcb-util-renderutil


Working now.

[EDIT - adding new observations below]

I'm seeing these messages in the terminal now. I've included the last debug line before the repeating sequence of console messages, with one buried error amongst the sequence. It's been running for about 45 minutes now with ~8 repeats of the sequence:

Code: [Select]
loaded library "udev"
Only C and default locale supported with the posix collation implementation
Only C and default locale supported with the posix collation implementation
Case insensitive sorting unsupported in the posix collation implementation
Numeric mode unsupported in the posix collation implementation
[sequence repeats]
qt.qpa.xcb: QXcbConnection: XCB error: 3 (BadWindow), sequence: 38163, resource id: 16689444, major code: 40 (TranslateCoords), minor code: 0
[sequence repeats]


Wondering if it's related to this post, which says installing libicu-dev (before building Qt) makes the messages disappear...

33
Bug Reports / Re: Installation fails on linux
« on: May 07, 2021, 04:11:43 AM »
Also having this problem on a Linux cluster. Running monitor.sh or metashape.sh to get the GUI throws this error. I added "export QT_DEBUG_PLUGINS=1" to the metashape.sh and monitor.sh files, and now I see the errors:

Code: [Select]
~] monitor.sh &
~] QFactoryLoader::QFactoryLoader() checking directory path "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins" ...
QFactoryLoader::QFactoryLoader() checking directory path "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro" ...
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/Agisoft.lic"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/Agisoft.lic' is not an ELF object (file too small)
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/Agisoft.lic' is not an ELF object (file too small)"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/crashreporter"
"Failed to extract plugin meta data from '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/crashreporter'"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/crashreporter.ini"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/crashreporter.ini' is not an ELF object
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/crashreporter.ini' is not an ELF object"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/eula.txt"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/eula.txt' is not an ELF object
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/eula.txt' is not an ELF object"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/metashape"
"Failed to extract plugin meta data from '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/metashape'"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/metashape.sh"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/metashape.sh' is not an ELF object
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/metashape.sh' is not an ELF object"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/monitor"
"Failed to extract plugin meta data from '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/monitor'"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/monitor.sh"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/monitor.sh' is not an ELF object
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/monitor.sh' is not an ELF object"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/qt.conf"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/qt.conf' is not an ELF object (file too small)
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/qt.conf' is not an ELF object (file too small)"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/rlm_roam.lic"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/rlm_roam.lic' is not an ELF object
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/rlm_roam.lic' is not an ELF object"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/viewer"
"Failed to extract plugin meta data from '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/viewer'"
         not a plugin
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/viewer.sh"
QElfParser: '/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/viewer.sh' is not an ELF object
"'/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/viewer.sh' is not an ELF object"
         not a plugin
QFactoryLoader::QFactoryLoader() checking directory path "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqoffscreen.so"
Found metadata in lib /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqoffscreen.so, metadata=
{
    "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
    "MetaData": {
        "Keys": [
            "offscreen"
        ]
    },
    "archreq": 0,
    "className": "QOffscreenIntegrationPlugin",
    "debug": false,
    "version": 331520
}


Got keys from plugin meta data ("offscreen")
QFactoryLoader::QFactoryLoader() looking at "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqxcb.so"
Found metadata in lib /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqxcb.so, metadata=
{
    "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
    "MetaData": {
        "Keys": [
            "xcb"
        ]
    },
    "archreq": 0,
    "className": "QXcbIntegrationPlugin",
    "debug": false,
    "version": 331520
}


Got keys from plugin meta data ("xcb")
QFactoryLoader::QFactoryLoader() checking directory path "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/platforms" ...
Cannot load library /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)
QLibraryPrivate::loadPlugin failed on "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqxcb.so" : "Cannot load library /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: offscreen, xcb.

/home/software/tallgrass/arc/apps/metashape/1.7.2-pro/monitor.sh: line 23: 235112 Aborted                 "$dirname/$appname" "$@"

It looks like libqxcb.so can't find libxcb-icccm.so.4, and if I run ldd, apparently it also can't find libxcb-image.so.0, libxcb-keysyms.so.1, and libxcb-render-util.so.0.

I did a little digging, and I was wondering if it might be a similar issue to this thread, where some files were overwritten when updating Qt, or if there's something weird going on with conflicting libraries/paths - some of the paths shown in the output of ldd aren't in the metashape hierarchy at all. I'm not familiar enough with linux to know whether the libraries referred to outside of the metashape hierarchy are supposed to be that way, or if I or the cluster admin should be expected to figure this out and install the necessary packages, or if metashape should normally do that <sigh>.

This is the output of ldd on libqxcb.so on our login node:

ldd /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/libqxcb.so
Code: [Select]
linux-vdso.so.1 =>  (0x00002aaaaaacd000)
libQt5XcbQpa.so.5 => /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/../../lib/libQt5XcbQpa.so.5 (0x00002aaaaaed2000)
libQt5Gui.so.5 => /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/../../lib/libQt5Gui.so.5 (0x00002aaaab1f3000)
libQt5Core.so.5 => /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/../../lib/libQt5Core.so.5 (0x00002aaaaba5e000)
libstdc++.so.6 => /cm/local/apps/gcc/8.2.0/lib64/libstdc++.so.6 (0x00002aaaac2c7000)
libc.so.6 => /lib64/libc.so.6 (0x00002aaaac64b000)
libfontconfig.so.1 => /lib64/libfontconfig.so.1 (0x00002aaaaca19000)
libfreetype.so.6 => /lib64/libfreetype.so.6 (0x00002aaaacc5b000)
libQt5DBus.so.5 => /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/../../lib/libQt5DBus.so.5 (0x00002aaaacf1a000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaad1aa000)
libX11-xcb.so.1 => /home/software/tallgrass/arc/apps/metashape/1.7.2-pro/plugins/platforms/../../lib/libX11-xcb.so.1 (0x00002aaaad3c6000)
libxcb-icccm.so.4 => not found
libxcb-image.so.0 => not found
libxcb-shm.so.0 => /lib64/libxcb-shm.so.0 (0x00002aaaad5c8000)
libxcb-keysyms.so.1 => not found
libxcb-randr.so.0 => /lib64/libxcb-randr.so.0 (0x00002aaaad7cc000)
libxcb-render-util.so.0 => not found
libxcb-sync.so.1 => /lib64/libxcb-sync.so.1 (0x00002aaaad9dc000)
libxcb-xfixes.so.0 => /lib64/libxcb-xfixes.so.0 (0x00002aaaadbe3000)
libxcb-render.so.0 => /lib64/libxcb-render.so.0 (0x00002aaaaddeb000)
libxcb-shape.so.0 => /lib64/libxcb-shape.so.0 (0x00002aaaadff9000)
libxcb-xinerama.so.0 => /lib64/libxcb-xinerama.so.0 (0x00002aaaae1fd000)
libxcb-xkb.so.1 => /lib64/libxcb-xkb.so.1 (0x00002aaaae400000)
libxcb.so.1 => /lib64/libxcb.so.1 (0x00002aaaae61c000)
libX11.so.6 => /lib64/libX11.so.6 (0x00002aaaae844000)
libSM.so.6 => /lib64/libSM.so.6 (0x00002aaaaeb82000)
libICE.so.6 => /lib64/libICE.so.6 (0x00002aaaaed8a000)
libxkbcommon-x11.so.0 => /lib64/libxkbcommon-x11.so.0 (0x00002aaaaefa6000)
libxkbcommon.so.0 => /lib64/libxkbcommon.so.0 (0x00002aaaaf1ae000)
libglib-2.0.so.0 => /lib64/libglib-2.0.so.0 (0x00002aaaaf3ee000)
libm.so.6 => /lib64/libm.so.6 (0x00002aaaaf704000)
libgcc_s.so.1 => /cm/local/apps/gcc/8.2.0/lib64/libgcc_s.so.1 (0x00002aaaafa06000)
libGL.so.1 => /lib64/libGL.so.1 (0x00002aaaafc1e000)
libz.so.1 => /lib64/libz.so.1 (0x00002aaaafeaa000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002aaab00c0000)
/lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)
libexpat.so.1 => /lib64/libexpat.so.1 (0x00002aaab02c4000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00002aaab04ee000)
libbz2.so.1 => /lib64/libbz2.so.1 (0x00002aaab06f3000)
libpng15.so.15 => /lib64/libpng15.so.15 (0x00002aaab0903000)
libXau.so.6 => /lib64/libXau.so.6 (0x00002aaab0b2e000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00002aaab0d32000)
libGLX.so.0 => /lib64/libGLX.so.0 (0x00002aaab0f94000)
libXext.so.6 => /lib64/libXext.so.6 (0x00002aaab11c6000)
libGLdispatch.so.0 => /lib64/libGLdispatch.so.0 (0x00002aaab13d8000)

so I *think* our admin can fix this by doing yum provides "*/<library>" for all the missing packages, then doing yum install <package>, but again, I don't know if this is just a "quirk" of linux or if it's a package misconfiguration in the current linux build of metashape.
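
For reference, this is roughly how I'd pull the unresolved libraries out of the ldd output to feed to yum provides - the ldd lines are inlined here for illustration; on the node you'd pipe the real ldd output instead:

```shell
# Sketch: extract the libraries ldd reports as unresolved. On the real system,
# replace the printf with: ldd .../plugins/platforms/libqxcb.so
missing=$(printf '%s\n' \
  'libxcb-icccm.so.4 => not found' \
  'libxcb-shm.so.0 => /lib64/libxcb-shm.so.0 (0x00002aaaad5c8000)' \
  'libxcb-image.so.0 => not found' \
  'libxcb-keysyms.so.1 => not found' \
  'libxcb-render-util.so.0 => not found' \
  | awk '/not found/ {print $1}')
echo "$missing"
# then, on the node: for lib in $missing; do yum provides "*/$lib"; done
```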

ok that's about all my brain can handle learning today. I need a beer.

Andy

34
agh. My bad. Apparently the problem was this line:

 batch_id = client.createBatch(doc_path, network_tasks)

because doc_path was the directory containing the .PSX, without the filename of the project. I'm guessing it worked on my Windoze system because the project was still open in the GUI on the same machine that was the node, or something like that. I fixed the issue by changing it to doc.path.
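
For anyone else who hits this, the difference is just directory vs. file (paths here are hypothetical):

```python
import os

doc_path_full = "/projects/survey/myproject.psx"   # what Metashape's doc.path returns
doc_dir = os.path.split(doc_path_full)[0]          # what I was passing by mistake

# client.createBatch() wants the .psx file itself, not its parent directory -
# handing it the directory is what produced "Is a directory (21)"
assert doc_dir == "/projects/survey"
assert doc_path_full.endswith(".psx")
```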


35
[EDIT 2 - this was my screw-up, not a bug, and I corrected the code - TL;DR: the code below works now; the original code (now deleted) passed the document path, not the PSX itself, to createBatch ]

I wrote a script to loop through the chunks in a psx and, for each chunk with a default (checked) DEM, get the extent and export a DEM with the bounding box (BBox) rounded to some multiple of the specified DEM export resolution.

I designed the script to work in either network or non-network mode, and tested it in both modes on a Win10 machine (tested network with node/monitor/GUI/host all on 127.0.0.1). It checks app.settings.network_enable and runs in network mode if True, standalone if not. On the Windows machine I was able to generate DEMs from multiple chunks as expected.

When I tried it in network mode on our unix machines, I got "Error: Can't read file: Is a directory (21)" and I have no idea why.

In non-network mode it runs just fine. It kind of seems like the network task is truncating the file length or something, but the network task looks fine to me. The total path length including filename was 154 characters. I've attached a screenshot showing the bad script run on a node, plus several attempts to duplicate the filename at the end - the last one was successful, and the extra comma is because I apparently pasted it into the filename (creating a file with a comma in the extension, which I didn't even know was legal).


[edit accidentally hit post before attaching image]


Code: [Select]
'''
make bounding boxes and build integer bounded DEMs for ALL default DEMs in the open PSX file
aritchie@usgs.gov 2021-05-03 tested on Metashape 1.7.1

This script creates a bounding box from the extent of the existing default full res DEM, rounded to the specified interval, then creates a raster with a specified resolution,
FOR EVERY DEFAULT DEM IN EVERY CHUNK IN THE PSX.

Raster will be placed in a user-specified (via script variable) subdirectory of the existing project ('dem' by default)
DIRECTORY WILL BE CREATED IF IT DOESN'T EXIST.
A user-specified suffix will be appended to the chunk label (in user variables below)
---CAUTION: THERE IS NO ERROR CHECKING FOR LEGAL FILENAMES---
There is no error checking in the script. It will throw errors if there is no default DEM.
If there are bad filename characters, etc., I have NO idea what will happen. Be careful.

Andy
'''
import Metashape
import math
import os
from os import path
#-------------------------------------------------------#
#define user-set variables
raster_rounding_multiple = 10   # Default = 10 - This will be the multiple that the raster resolution is multiplied by to define the units the min/max extents are rounded to
raster_resolution = 1           # Default = 1 - cell size of exported DEM
raster_crop = True              # Default = True - True means the Bounding Box is rounded IN - minimum extent is rounded up and maximum extent is rounded down to raster edges. False is reversed
                                # TODO - make it so metashape checks to see if this is an interpolated raster (shrink) or uninterpolated (grow?)
                                # ALSO - maybe we want to project the xy coordinates of the 3D dense cloud region and use those instead? this will result in no/minimal collar though...
dem_subdir = 'dem_20210504'              # this is a subdir that will be created under the document (PSX) path
dem_suffix = '_NAD83_2011_NAVD88_UTM18'

#-----OPERATIONAL CODE IS BELOW. EDIT AT YOUR PERIL-----#
raster_rounding_interval = raster_rounding_multiple * raster_resolution
app = Metashape.app
doc = app.document
network_tasks = list()
for chunk in doc.chunks:
    if chunk.elevation:
        print(chunk.label)
        out_projection = chunk.elevation.projection
        compression = Metashape.ImageCompression()
        compression.tiff_compression = Metashape.ImageCompression.TiffCompressionLZW
        compression.tiff_big = True
        compression.tiff_overviews = True
        compression.tiff_tiled = True
           
        def round_down(x):
            return int(raster_rounding_interval * math.floor(float(x)/raster_rounding_interval))

        def round_up(x):
            return int(raster_rounding_interval * math.ceil(float(x)/raster_rounding_interval))


        testbox = Metashape.BBox() #create a bounding box for the raster
        print('')
        print('original DEM BBox coordinates:')
        print('min: ', Metashape.Vector((min(chunk.elevation.left, chunk.elevation.right), min(chunk.elevation.bottom, chunk.elevation.top))))
        print('max: ', Metashape.Vector((max(chunk.elevation.left, chunk.elevation.right), max(chunk.elevation.bottom, chunk.elevation.top))))

        if raster_crop:
            testbox.min = Metashape.Vector((round_up(min(chunk.elevation.left, chunk.elevation.right)), round_up(min(chunk.elevation.bottom, chunk.elevation.top))))
            testbox.max = Metashape.Vector((round_down(max(chunk.elevation.left, chunk.elevation.right)), round_down(max(chunk.elevation.bottom, chunk.elevation.top))))
        else:
            testbox.min = Metashape.Vector((round_down(min(chunk.elevation.left, chunk.elevation.right)), round_down(min(chunk.elevation.bottom, chunk.elevation.top))))
            testbox.max = Metashape.Vector((round_up(max(chunk.elevation.left, chunk.elevation.right)), round_up(max(chunk.elevation.bottom, chunk.elevation.top))))

        if raster_crop:
            print('extent was SHRUNK to: ')
            print('min: ',testbox.min)
            print('max: ',testbox.max)
        else:
            print('extent was GROWN to: ')
            print('min: ',testbox.min)
            print('max: ',testbox.max)

        doc_path = os.path.split(doc.path)[0]
        outPath = os.path.normpath(doc_path + os.sep + dem_subdir)

        outFilename = chunk.label + dem_suffix + '_' + str(raster_resolution) + 'm' + '.tif'
        exportFile = os.path.normpath(outPath+os.sep+outFilename)
        if not os.path.exists(outPath):
            print('testing create path: ' + outPath)
            os.makedirs(outPath)
            print('testing file writestring: ' + exportFile)
        else:
            if not os.path.isfile(exportFile):
                print('testing file writestring: ' + exportFile)
        #
        if not app.settings.network_enable:
            chunk.exportRaster(path = exportFile, image_format=Metashape.ImageFormatTIFF, projection = out_projection, region = testbox, resolution_x = raster_resolution,  resolution_y = raster_resolution, image_compression=compression, save_world = False, white_background = False,source_data = Metashape.ElevationData)
        else:
            task = Metashape.Tasks.ExportRaster()
            task.path = str(exportFile)
            task.image_compression = compression
            task.image_format = Metashape.ImageFormatTIFF
            task.projection = out_projection
            task.region = testbox
            task.resolution_x = raster_resolution
            task.resolution_y = raster_resolution
            task.save_world = False
            task.source_data = Metashape.ElevationData

            n_task = Metashape.NetworkTask()
            n_task.name = task.name
            n_task.params = task.encode()
            n_task.frames.append((chunk.key, 0))
            network_tasks.append(n_task)
    else:
        print(chunk.label, ' has no DEM.')

if app.settings.network_enable:
    client = Metashape.NetworkClient()
    client.connect(app.settings.network_host) #server ip
    batch_id = client.createBatch(doc.path, network_tasks)
    client.resumeBatch(batch_id)
print('script complete')
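
For anyone skimming the script, the crop/grow rounding behaves like this (same helpers and interval logic as above, pulled out standalone):

```python
import math

raster_rounding_interval = 10  # raster_rounding_multiple * raster_resolution, as in the script

def round_down(x):
    return int(raster_rounding_interval * math.floor(float(x) / raster_rounding_interval))

def round_up(x):
    return int(raster_rounding_interval * math.ceil(float(x) / raster_rounding_interval))

# raster_crop=True shrinks the box inward: min is rounded up, max down
assert round_up(123.4) == 130 and round_down(987.6) == 980
# raster_crop=False grows it outward: min is rounded down, max up
assert round_down(123.4) == 120 and round_up(987.6) == 990
```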


36
I just reviewed all of the scripts I could find and wasn't able to find any option to resize a region to a dense cloud that occupies only part of the sparse cloud extent. I also didn't find anything in the API. The closest I found was this post from January 2020, asking how to get a BBox from the extent of a dense_cloud object.

I am aligning multiple sets of images with different extents together to produce a single sparse cloud, then disabling each set iteratively to generate dense clouds with different extents for each set of images.

I want to use the python API to resize the region (or generate a bounding box) based on the extent of the dense cloud data, so that the resulting DEM doesn't have a bunch of nodata on the borders. I can't just manually specify the DEM BBox, since I don't know what the data extent will be before generating the dense cloud.
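
The bookkeeping side is trivial once you can actually get at the points - assuming some hypothetical way to obtain the projected XY coordinates of the dense cloud (which is exactly the API piece I can't find), the bounding box would just be:

```python
def data_extent(points):
    # axis-aligned min/max of an iterable of (x, y) pairs
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# toy stand-in for projected dense cloud XY coordinates
pts = [(10.2, 4.1), (12.7, 9.9), (11.0, 3.5)]
bb_min, bb_max = data_extent(pts)
```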

Thanks for any insight.

Andy

37
Fine-level task subdivision is enabled. Max RAM usage was around 30-40GB out of 256GB, so disk swap shouldn't have been used, but I didn't exhaustively check with powershell or Perfmon. If it would help, I can run again and try that.

38
Feature Requests / Re: crop raster to nodata extent?
« on: March 17, 2021, 12:29:34 AM »
To clarify:

I originally thought that when exporting a DEM and selecting "region", the x and y bounds that auto-populate were the extent of data in the DEM, but it appears that these numbers are the XY extent of the bounding rectangle of the Metashape region when the DEM was created. It would be nice to be able to export a raster that has just the extent of the data, rather than having to manually build a bounding box. Because the regions are often oriented at an angle to the gridding system of the DEM, the effect is to produce a nodata collar - in our use case, sometimes one that extends tens or hundreds of km outside of the data extent.

I produce my DEMs in a GCS at full resolution, then export in a PCS, because in some cases we may export in several different PCSs depending on the user.

Alternatively, is there a good way to query the data extent and deliver it in a projected coordinate system so it can be scripted?

Also, if anyone knows a good (efficient) way outside of metashape to "trim" nodata, I would love to hear it. My method was to create a model in QGIS that used the SAGA crop-to-data tool, then convert from .sdat back to geotiff with gdal_translate. Clunky.
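
The core of the trim is just finding the first/last row and column that contain data - a toy version of the logic in plain Python (no GDAL), with a made-up nodata value:

```python
NODATA = -9999

def data_window(grid, nodata=NODATA):
    # (row_min, row_max, col_min, col_max) of the cells that hold data
    rows = [i for i, row in enumerate(grid) if any(v != nodata for v in row)]
    cols = [j for j in range(len(grid[0])) if any(row[j] != nodata for row in grid)]
    return rows[0], rows[-1], cols[0], cols[-1]

# 4x4 raster with a nodata collar around a small data patch
grid = [
    [NODATA, NODATA, NODATA, NODATA],
    [NODATA, 5.0,    6.0,    NODATA],
    [NODATA, NODATA, 7.0,    NODATA],
    [NODATA, NODATA, NODATA, NODATA],
]
window = data_window(grid)  # rows 1-2, cols 1-2
```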

39
Win10 w/ Threadripper 3960X, two RTX 2080 Super GPUs, and 256GB RAM (85% free)

I'm aligning ~36000 images in one chunk and trying to figure out why metashape is being super unresponsive and barely using any resources (1 core). When trying to check where the process is, I found that the screen has not updated for about 6 hours (08:41:11 local time and it's 15:22 right now), and the logfile is being written very slowly (currently ~4h behind, at 11:07:23).

The logfile is writing to an SSD with 500GB of space. I thought it might be too big (77MB), so I copied & cleared it, but the write speed didn't change.
Resource Monitor shows 7 root threads waiting for a child thread, and 358 associated handles, which I'm happy to provide if they'd be useful.

The last line showing in the console (which is currently unresponsive) is bolded and underlined below. The logfile lines being written look like they're from the same process:

2021-03-11 08:41:09 block_obs: 25.066 MB (25.066 MB allocated)
2021-03-11 08:41:09 block_ofs: 2.5294 MB (2.5294 MB allocated)
2021-03-11 08:41:09 block_fre: 0 MB (0 MB allocated)
2021-03-11 08:41:10 adding 331032 points, 0 far (13.1678 threshold), 2 inaccurate, 2 invisible, 0 weak
2021-03-11 08:41:10 adjusting: xxx 0.694264 -> 0.287228
2021-03-11 08:41:10 adding 6 points, 2 far (13.1678 threshold), 2 inaccurate, 2 invisible, 0 weak
2021-03-11 08:41:10 optimized in 0.873 seconds
2021-03-11 08:41:10 f 8863.4, cx 27.5, cy -1.32353, k1 -0.072948, k2 0.0864567, k3 -0.0213253
2021-03-11 08:41:10 f 8862.78, cx 27.5, cy -1.32353, k1 -0.0729651, k2 0.086042, k3 -0.0230285
2021-03-11 08:41:10 f 8863.4, cx 27.5, cy -1.32353, k1 -0.072312, k2 0.0833413, k3 -0.0185212
2021-03-11 08:41:11 adjusting: xxxx 0.295707 -> 0.287377
2021-03-11 08:41:12 loaded projections in 0.003 sec
2021-03-11 08:41:12 tracks initialized in 0.072 sec
2021-03-11 08:41:12 adding 331034 points, 0 far (13.1678 threshold), 1 inaccurate, 3 invisible, 0 weak
2021-03-11 08:41:12 block: 1 sensors, 28 cameras, 106716 points, 0 projections
2021-03-11 08:41:12 block_sensors: 0.000816345 MB (0.000816345 MB allocated)
2021-03-11 08:41:12 block_cameras: 0.0108948 MB (0.0108948 MB allocated)
2021-03-11 08:41:12 block_points: 4.88507 MB (4.88507 MB allocated)
2021-03-11 08:41:12 block_tracks: 0.407089 MB (0.407089 MB allocated)
2021-03-11 08:41:12 block_obs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_ofs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_fre: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block: 2 sensors, 47 cameras, 192836 points, 0 projections
2021-03-11 08:41:12 block_sensors: 0.00163269 MB (0.00163269 MB allocated)
2021-03-11 08:41:12 block_cameras: 0.0182877 MB (0.0182877 MB allocated)
2021-03-11 08:41:12 block_points: 8.82733 MB (8.82733 MB allocated)
2021-03-11 08:41:12 block_tracks: 0.735611 MB (0.735611 MB allocated)
2021-03-11 08:41:12 block_obs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_ofs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_fre: 0 MB (0 MB allocated)

40
...There were some optimizations in the gradual selection process related to the tie points selection, which should give a considerable performance boost for this task. So if you have a chance to check the same procedure in 1.7.1 version, please report, whether it work significantly faster now.

Holey Moley!

"considerable performance boost" No kidding! The gradual selection "Analyzing points" step went from ~4300s to 243s from 1.6.5 to 1.7.1! That's some pretty nice algorithm optimization - like a 95% speed-up! Gradual selection point selection looks to be about 18x faster, cutting the overall optimization time in half if the rest of the optimization process stayed constant (testing now).

Nice job Agisoft, and THANK YOU for being so responsive  ;D  ;D

41
Thank you Alexey. I will look at gradual selection speed in 1.7.x. I was holding off because of the depth maps issue, but I can turn on the tweak to use the 1.6.x depth maps method.

42
I would love, both in the GUI and in the python API, to be able to copy a certain selected area of a chunk into a new chunk, similar to this post. At the moment I have to duplicate the whole chunk, then prune it to a certain area. With tens of thousands of images and hundreds of millions of tiepoints this is quite tedious. If I could select tie points and markers by area (lat/lon) or within a shapefile boundary, then select photos by tiepoints, then copy selected to a new chunk (and/or do the same via manual selection in the GUI) it would make me even happier than I already am :-)

43
I'm running gradual selection on a 386-million-point cloud (from 82,000 cameras), and the initial "Analyzing Point Cloud" step - after I select which parameter to use and before I select the gradual selection threshold - is taking quite a while. It looks like it's going to be about 5-6 hours on my workstation (80% done @ 4h), and it's going to take >8 hours on an HPC login node with slower RAM. Wondering if there's anything I can do to speed this up (headless? 1.7.x?).

On the workstation (Threadripper 3960X with 256GB 3000MHz RAM and 2x RTX 2080 Super GPUs) it appears to be using minimal CPU resources (maybe a single core; Resource Monitor says ~4% CPU), and it's using about 60GB of 256GB total RAM (total system usage is about 72GB).

44
I am wondering about the AlignCameras.finalize step in network processing. I see that it's limited to one node, and that if the node dies it has to restart the finalize step. Is ~24h for the finalize step reasonable on Metashape 1.6.5 with ~82,000 cameras (36 megapixel)? I notice that this project has some "merging failed" messages, and a similar run (same project with different camera accuracies and no GCPs) seemed to finish much faster - but I did have to restart this node ~20h in because the job expired. I also have two copies of the chunk in the same project, though I'm only operating on one of them.

Node is dual 18-core CPUs @ 2.3GHz with 376GB RAM

Andy

45
This is a 2-part feature request:

1) - to add the ability to output Cloud-optimized geotiffs (COGs) in Export Orthomosaic/Tiff/Export TIF/JPG/PNG... and Export DEM/Export TIF/BIL/XYZ...
2) - to add/combine tiled export options to be able to produce one set of tiles (and associated template html & kml) that can be used with multiple viewing options including KML Superoverlay, leaflet, openlayers, google maps, and other tile map services.

Currently I do this with gdal, but it requires multiple passes and is MUCH slower than metashape's efficient raster outputs. We are shifting to cloud-optimized GeoTIFF format for DEMs and orthos, and starting to use tiled datasets more for serving ortho products to end users, so exporting a temporary raster or raster tiles that I postprocess into something else using slow tools is becoming a significantly inefficient part of my workflow.

Right now I export TIFFs in metashape either as a single tiff or as tiles, then use gdal_translate -of COG to generate COGs (slow), gdalbuildvrt and gdalwarp to make VRTs in EPSG:4326 (fast), and gdal_translate -of KMLSUPEROVERLAY -co FORMAT=AUTO (verrrry sloow) or gdal2tiles -k --processes=[NumCores] (fast, but sometimes buggy, with gdalwarp generating a LOT of empty tiles) to make the KML superoverlay.

gdal2tiles is nice because it automagically creates viewers for leaflet, openlayers, google maps, and kml (with the -k option). It also uses all cores for building the KML superoverlay, where gdal_translate doesn't. But gdal_translate supports hybrid JPG and PNG tiles (for edge transparency). If I could do all of this within metashape, I would jump for joy.
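
For a sense of why the tiling step dominates: a tile pyramid roughly quadruples per zoom level, so the bulk of the tiles (around three-quarters) sit in the full-resolution level alone. Toy math (256 px tiles assumed; raster dimensions borrowed from the mesh post earlier):

```python
import math

def pyramid_tiles(width_px, height_px, tile=256):
    # tiles per zoom level, from full resolution down to a single tile
    levels = []
    w, h = width_px, height_px
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append(cols * rows)
        if cols == 1 and rows == 1:
            break
        w, h = math.ceil(w / 2), math.ceil(h / 2)
    return levels  # levels[0] is the full-resolution level

levels = pyramid_tiles(213317, 132869)
share_full_res = levels[0] / sum(levels)  # fraction of all tiles at full resolution
```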

I would love to be able to do these things within Metashape in the export dialog/API - especially being able to create the KML superoverlay not as one giant zip but as a folder hierarchy, and being able to use the same tiles for KML/Google Maps/OpenLayers/Leaflet - lots of flexibility there. If that also generated a VRT, or something I could generate a VRT from (something gdal recognizes as a raster), then I could use those tiles for my COG even if that feature wasn't implemented, and it would be much more streamlined.

[edit] - the "-co FORMAT=AUTO" option with gdal_translate is nice for optimizing the size of KML layers, but I'm not sure how it would work with sharing the same tiles with tile services - I guess probably not well, so maybe that wouldn't make sense to do.

Also, adding the KML superoverlay with network links as an option, instead of KMZ, would be nice, because then the whole tile hierarchy could be more easily moved to online services.

Andy

