

Topics - andyroo

Pages: 1 2 [3] 4 5 ... 12
31
I haven't seen updated alignment ram usage numbers lately so I figured I'd share my latest learnings. I processed two collections of 36 MPix aerial images with roughly the same geometry. The first was processed in Metashape 1.6.5 and the second in 1.7.2.

Working on a cluster with 384 GB of RAM, the alignment limit (on high) appears to be between 82,000 and 139,000 images, with the final step of alignment being the limiting factor (performed on a single node).

Maximum RAM usage to align 82,129 images was 173.18 GB in 1.6.5.11249. If this scaled linearly, 139,152 images should take ~293 GB of RAM. But we ran out of RAM on a 384 GB node trying to complete the alignment stage in 1.7.2 with that number of images.
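If it helps, here's the linear-scaling arithmetic spelled out (a quick back-of-the-envelope sketch with the numbers above):

```python
# Back-of-the-envelope linear RAM scaling from the 1.6.5 run above
ram_gb = 173.18     # peak RAM for the smaller alignment
n_small = 82129     # images in the smaller collection
n_large = 139152    # images in the larger collection

est_gb = ram_gb / n_small * n_large
print(round(est_gb))  # ~293 GB, which would fit on a 384 GB node if scaling were linear
```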

Obviously these are different versions, but wanted to share what I know.

Andy

32
Getting std::bad_alloc when trying to build an interpolated (not extrapolated) mesh on high from the dense cloud on some big chunks. Dense cloud is GCS (NAD83(2011)). I have successfully built interpolated and uninterpolated DEMs, and orthoimages for these chunks.

We first built an uninterpolated DEM from the dense cloud for the elevation model, then built an interpolated DEM and an orthophoto (using the interpolated DEM).

I am now trying to build a mesh from the dense cloud to use for a comparison orthoimage (because in smaller experiments the mesh was much faster and smaller than the interpolated DEM).

The mesh was generated after rotating the bounding box to the DEM projected coordinate system (PCS = NAD83 UTM). The rotation was performed to minimize the height/width of the nodata collars on the DEM generated from the dense cloud, since if the bounding box stays rotated, the DEM bounds extend all the way to the corners of the along-track-oriented (not PCS-oriented) box. I wonder if the mesh is failing because it's doing grid interpolation over the whole empty area of the rotated bounding box. If so, I need to switch the order of operations or re-rotate the region to be oriented with the data, but it will probably still fail on another section that is L-shaped with a lot of empty space.

These are the details from the node - I included a previous successful (smaller) mesh generation before too:

2021-05-07 17:45:55 BuildModel: source data = Dense cloud, surface type = Height field, face count = High, interpolation = Enabled, vertex colors = 0
2021-05-07 17:45:56 Generating mesh...
2021-05-07 17:46:20 generating 213317x132869 grid (0.00214379 resolution)
2021-05-07 17:46:20 rasterizing dem... done in 81.9141 sec
2021-05-07 17:47:42 filtering dem... done in 375.867 sec
2021-05-07 17:55:06 constructed triangulation from 21327465 vertices, 42654924 faces
2021-05-07 17:57:38 grid interpolated in 220.33 sec
2021-05-07 18:13:56 triangulating... 106374525 points 212748181 faces done in 4727.18 sec
2021-05-07 19:32:45 Peak memory used: 181.40 GB at 2021-05-07 19:32:43
2021-05-07 19:33:00 processing finished in 6425.13 sec
2021-05-07 19:33:00 BuildModel: source data = Dense cloud, surface type = Height field, face count = High, interpolation = Enabled, vertex colors = 0
2021-05-07 19:33:01 Generating mesh...
2021-05-07 19:33:37 generating 262471x233536 grid (0.00219694 resolution)
2021-05-07 19:33:37 rasterizing dem... done in 209.04 sec
2021-05-07 19:37:06 filtering dem... done in 847.863 sec
2021-05-07 19:53:17 constructed triangulation from 23493503 vertices, 46987000 faces
2021-05-07 19:57:34 grid interpolated in 380.113 sec
2021-05-07 20:20:53 Error: std::bad_alloc
2021-05-07 20:20:53 processing failed in 2872.89 sec
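Comparing the two grid sizes from the logs hints at why the second run blew past the node's RAM - a rough scaling sketch (assuming peak memory scales with grid cell count, which may not hold exactly):

```python
# Grid dimensions from the two BuildModel log excerpts above
ok_cells = 213317 * 132869   # succeeded, 181.40 GB peak
bad_cells = 262471 * 233536  # failed with std::bad_alloc

ratio = bad_cells / ok_cells
est_peak_gb = 181.40 * ratio
print(round(ratio, 2), round(est_peak_gb))  # ~2.16x the cells, ~392 GB - over the 384 GB on the node
```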

33
[EDIT 2 - this was my screw-up, not a bug, and I corrected the code - TL;DR: the code below works now; the original code (now deleted) had the document path, not the PSX itself, in the batch_id]

I wrote a script to loop through the chunks in a PSX and, for each chunk with a default (checked) DEM, get the extent and export a DEM with the bounding box (BBox) rounded to a multiple of the specified DEM export resolution.

I designed the script to work in either network or non-network mode and tested it in both modes on a Win10 machine (network tested with node/monitor/GUI/host all on 127.0.0.1). It checks app.settings.network_enable and runs in network mode if True, standalone if not. On the Windows machine I was able to generate DEMs from multiple chunks as expected.

When I tried it in network mode on our unix machines, I got "Error: Can't read file: Is a directory (21)" and I have no idea why.

In non-network mode it runs just fine. It kind of seems like the network task is truncating the file path or something, but the network task parameters look fine to me. The total path length including the filename was 154 characters. I've attached a screenshot showing the bad script run on a node, plus several attempts to duplicate the filename at the end; the last one was successful, and the extra comma is because I apparently pasted it into the filename (creating a file with a comma in the extension, which I didn't even know was legal).


[edit accidentally hit post before attaching image]


Code: [Select]
'''
make bounding boxes and build integer bounded DEMs for ALL default DEMs in the open PSX file
aritchie@usgs.gov 2021-05-03 tested on Metashape 1.7.1

This script creates a bounding box from the extent of the existing default full-resolution DEM, rounded to the specified interval, then exports a raster with the specified resolution,
FOR EVERY DEFAULT DEM IN EVERY CHUNK IN THE PSX.

The raster will be placed in a user-specified (via script variable) subdirectory of the existing project ('dem' by default).
THE DIRECTORY WILL BE CREATED IF IT DOESN'T EXIST.
A user-specified suffix will be appended to the chunk label (in the user variables below).
---CAUTION: THERE IS NO ERROR CHECKING FOR LEGAL FILENAMES---
There is no other error checking in the script; chunks with no default DEM are skipped.

If there are bad filename characters, etc., I have NO idea what will happen. Be careful.

Andy
'''
import Metashape
import math
import os
from os import path
#-------------------------------------------------------#
#define user-set variables
raster_rounding_multiple = 10   # Default = 10 - the multiple of the raster resolution defining the units the min/max extents are rounded to
raster_resolution = 1           # Default = 1 - cell size of exported DEM
raster_crop = True              # Default = True - True means the bounding box is rounded IN: minimum extent is rounded up and maximum extent is rounded down from the raster edges. False is the reverse
                                # TODO - make metashape check whether this is an interpolated raster (shrink) or uninterpolated (grow?)
                                # ALSO - maybe we should project the xy coordinates of the 3D dense cloud region and use those instead? that would result in no/minimal collar though...
dem_subdir = 'dem_20210504'              # this is a subdir that will be created under the document (PSX) path
dem_suffix = '_NAD83_2011_NAVD88_UTM18'

#-----OPERATIONAL CODE IS BELOW. EDIT AT YOUR PERIL-----#
raster_rounding_interval = raster_rounding_multiple * raster_resolution

def round_down(x):
    return int(raster_rounding_interval * math.floor(float(x) / raster_rounding_interval))

def round_up(x):
    return int(raster_rounding_interval * math.ceil(float(x) / raster_rounding_interval))

app = Metashape.app
doc = app.document
network_tasks = list()
for chunk in doc.chunks:
    if chunk.elevation:
        print(chunk.label)
        out_projection = chunk.elevation.projection
        compression = Metashape.ImageCompression()
        compression.tiff_compression = Metashape.ImageCompression.TiffCompressionLZW
        compression.tiff_big = True
        compression.tiff_overviews = True
        compression.tiff_tiled = True

        testbox = Metashape.BBox() #create a bounding box for the raster
        print('')
        print('original DEM BBox coordinates:')
        print('min: ', Metashape.Vector((min(chunk.elevation.left, chunk.elevation.right), min(chunk.elevation.bottom, chunk.elevation.top))))
        print('max: ', Metashape.Vector((max(chunk.elevation.left, chunk.elevation.right), max(chunk.elevation.bottom, chunk.elevation.top))))

        if raster_crop:
            testbox.min = Metashape.Vector((round_up(min(chunk.elevation.left, chunk.elevation.right)), round_up(min(chunk.elevation.bottom, chunk.elevation.top))))
            testbox.max = Metashape.Vector((round_down(max(chunk.elevation.left, chunk.elevation.right)), round_down(max(chunk.elevation.bottom, chunk.elevation.top))))
        else:
            testbox.min = Metashape.Vector((round_down(min(chunk.elevation.left, chunk.elevation.right)), round_down(min(chunk.elevation.bottom, chunk.elevation.top))))
            testbox.max = Metashape.Vector((round_up(max(chunk.elevation.left, chunk.elevation.right)), round_up(max(chunk.elevation.bottom, chunk.elevation.top))))

        if raster_crop:
            print('extent was SHRUNK to: ')
            print('min: ',testbox.min)
            print('max: ',testbox.max)
        else:
            print('extent was GROWN to: ')
            print('min: ',testbox.min)
            print('max: ',testbox.max)

        doc_path = os.path.split(doc.path)[0]
        outPath = os.path.normpath(os.path.join(doc_path, dem_subdir))

        outFilename = chunk.label + dem_suffix + '_' + str(raster_resolution) + 'm' + '.tif'
        exportFile = os.path.normpath(os.path.join(outPath, outFilename))
        if not os.path.exists(outPath):
            print('testing create path: ' + outPath)
            os.makedirs(outPath)
            print('testing file writestring: ' + exportFile)
        else:
            if not os.path.isfile(exportFile):
                print('testing file writestring: ' + exportFile)
        #
        if not app.settings.network_enable:
            chunk.exportRaster(path=exportFile, image_format=Metashape.ImageFormatTIFF, projection=out_projection, region=testbox, resolution_x=raster_resolution, resolution_y=raster_resolution, image_compression=compression, save_world=False, white_background=False, source_data=Metashape.ElevationData)
        else:
            task = Metashape.Tasks.ExportRaster()
            task.path = str(exportFile)
            task.image_compression = compression
            task.image_format = Metashape.ImageFormatTIFF
            task.projection = out_projection
            task.region = testbox
            task.resolution_x = raster_resolution
            task.resolution_y = raster_resolution
            task.save_world = False
            task.source_data = Metashape.ElevationData

            n_task = Metashape.NetworkTask()
            n_task.name = task.name
            n_task.params = task.encode()
            n_task.frames.append((chunk.key, 0))
            network_tasks.append(n_task)
    else:
        print(chunk.label, ' has no DEM.')

if app.settings.network_enable:
    client = Metashape.NetworkClient()
    client.connect(app.settings.network_host) #server ip
    batch_id = client.createBatch(doc.path, network_tasks)
    client.resumeBatch(batch_id)
print('script complete')
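As a quick standalone check of the rounding helpers above (with the default interval of 10, and hypothetical UTM coordinates):

```python
import math

raster_rounding_interval = 10  # raster_rounding_multiple * raster_resolution with the defaults above

def round_down(x):
    return int(raster_rounding_interval * math.floor(float(x) / raster_rounding_interval))

def round_up(x):
    return int(raster_rounding_interval * math.ceil(float(x) / raster_rounding_interval))

# raster_crop=True shrinks the box: min rounds up, max rounds down
print(round_up(372148.3), round_down(372941.7))  # 372150 372940
```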


34
I just reviewed all of the scripts I could find and wasn't able to find any option to resize a region to a dense cloud that occupies only part of the sparse cloud extent. I also didn't find anything in the API. The closest I found was a post from January 2020 asking how to get a BBox from the extent of a dense_cloud object.

I am aligning multiple sets of images with different extents together to produce a single sparse cloud, then disabling each set iteratively to generate dense clouds with different extents for each set of images.

I want to use the python API to resize the region (or generate a bounding box) based on the extent of the dense cloud data, so that the resulting DEM doesn't have a bunch of nodata on the borders. I can't just manually specify the DEM BBox, since I don't know before generating the dense cloud what the data extent will be.
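The geometry part is simple if I can get coordinates out somehow - a plain-Python sketch (the helper is hypothetical, not Metashape API; the points could come from an exported dense cloud, or from the tie points of the enabled cameras as a rough proxy):

```python
# Hypothetical helper: axis-aligned bounds from 3D point coordinates,
# the kind of values you'd feed into chunk.region.center / chunk.region.size
# (noting that in Metashape the region lives in the chunk's internal,
# possibly rotated, coordinate system).
def region_from_points(points):
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    center = tuple((lo + hi) / 2 for lo, hi in zip(mins, maxs))
    size = tuple(hi - lo for lo, hi in zip(mins, maxs))
    return center, size

center, size = region_from_points([(0, 0, 0), (10, 4, 2), (3, 1, 1)])
print(center, size)  # (5.0, 2.0, 1.0) (10, 4, 2)
```

The missing piece is still getting the dense cloud point coordinates from the API without exporting.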

Thanks for any insight.

Andy

35
Win10 w/ Threadripper 3960X, two RTX 2080 Super GPUs, and 256GB RAM (85% free)

I'm aligning ~36,000 images in one chunk and trying to figure out why Metashape is being super unresponsive and barely using any resources (1 core). When I checked on the process, I found that the screen hadn't updated for about 6 hours (08:41:11 local time, and it's 15:22 right now), and the logfile is being written very slowly (currently ~4h behind, at 11:07:23).

The logfile is writing to an SSD with 500GB of space. I thought it might be too big (77MB) so I copied and cleared it, but the write speed didn't change.
Resource Monitor shows 7 root threads waiting for a child thread, and 358 associated handles, which I'm happy to provide if they'd be useful.

The last line showing in the console (which is currently unresponsive) is the 08:41:11 "adjusting" line below. The logfile lines being written look like they're from the same process:

2021-03-11 08:41:09 block_obs: 25.066 MB (25.066 MB allocated)
2021-03-11 08:41:09 block_ofs: 2.5294 MB (2.5294 MB allocated)
2021-03-11 08:41:09 block_fre: 0 MB (0 MB allocated)
2021-03-11 08:41:10 adding 331032 points, 0 far (13.1678 threshold), 2 inaccurate, 2 invisible, 0 weak
2021-03-11 08:41:10 adjusting: xxx 0.694264 -> 0.287228
2021-03-11 08:41:10 adding 6 points, 2 far (13.1678 threshold), 2 inaccurate, 2 invisible, 0 weak
2021-03-11 08:41:10 optimized in 0.873 seconds
2021-03-11 08:41:10 f 8863.4, cx 27.5, cy -1.32353, k1 -0.072948, k2 0.0864567, k3 -0.0213253
2021-03-11 08:41:10 f 8862.78, cx 27.5, cy -1.32353, k1 -0.0729651, k2 0.086042, k3 -0.0230285
2021-03-11 08:41:10 f 8863.4, cx 27.5, cy -1.32353, k1 -0.072312, k2 0.0833413, k3 -0.0185212
2021-03-11 08:41:11 adjusting: xxxx 0.295707 -> 0.287377
2021-03-11 08:41:12 loaded projections in 0.003 sec
2021-03-11 08:41:12 tracks initialized in 0.072 sec
2021-03-11 08:41:12 adding 331034 points, 0 far (13.1678 threshold), 1 inaccurate, 3 invisible, 0 weak
2021-03-11 08:41:12 block: 1 sensors, 28 cameras, 106716 points, 0 projections
2021-03-11 08:41:12 block_sensors: 0.000816345 MB (0.000816345 MB allocated)
2021-03-11 08:41:12 block_cameras: 0.0108948 MB (0.0108948 MB allocated)
2021-03-11 08:41:12 block_points: 4.88507 MB (4.88507 MB allocated)
2021-03-11 08:41:12 block_tracks: 0.407089 MB (0.407089 MB allocated)
2021-03-11 08:41:12 block_obs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_ofs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_fre: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block: 2 sensors, 47 cameras, 192836 points, 0 projections
2021-03-11 08:41:12 block_sensors: 0.00163269 MB (0.00163269 MB allocated)
2021-03-11 08:41:12 block_cameras: 0.0182877 MB (0.0182877 MB allocated)
2021-03-11 08:41:12 block_points: 8.82733 MB (8.82733 MB allocated)
2021-03-11 08:41:12 block_tracks: 0.735611 MB (0.735611 MB allocated)
2021-03-11 08:41:12 block_obs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_ofs: 0 MB (0 MB allocated)
2021-03-11 08:41:12 block_fre: 0 MB (0 MB allocated)

36
I would love to be able, both in the GUI and in the python API, to copy a selected area of a chunk into a new chunk, similar to this post. At the moment I have to duplicate the whole chunk and then prune it to an area. With tens of thousands of images and hundreds of millions of tie points this is quite tedious. If I could select tie points and markers by area (lat/lon) or within a shapefile boundary, then select photos by tie points, then copy the selection to a new chunk (and/or do the same via manual selection in the GUI), it would make me even happier than I already am :-)

37
I'm running gradual selection on a 386-million-point cloud (from 82,000 cameras), and the initial "Analyzing point cloud" step - after I select which parameter to use and before I set the gradual selection threshold - is taking quite a while. It looks like it will be about 5-6 hours on my workstation (80% done @ 4h), and it's going to take >8 hours on an HPC login node with slower RAM. Wondering if there's anything I can do to speed this up (headless? 1.7.x?).

On the workstation (Threadripper 3960x with 256GB 3000MHz RAM and 2x RTX 2080 Super GPUs) it appears to be using minimal CPU resources (maybe a single core; Resource Monitor says ~4% CPU) and about 60GB of 256GB total RAM (total system usage is about 72GB).

38
I am wondering about the AlignCameras.finalize step in network processing. I see that it's limited to one node, and that if the node dies it has to restart the finalize step. Is ~24h for the finalize step reasonable on Metashape 1.6.5 with ~82,000 cameras (36 megapixel)? I notice that this project has some "merging failed" messages, and a similar run (same project with different camera accuracies and no GCPs) finished much faster - though I did have to restart this node ~20h in because the job expired. I also have two copies of the chunk in the same project, but I'm only operating on one of them.

Node is dual 18-core CPUs @ 2.3GHz with 376GB RAM

Andy

39
This is a 2-part feature request:

1) the ability to output Cloud-Optimized GeoTIFFs (COGs) in Export Orthomosaic/Export TIFF/JPG/PNG... and Export DEM/Export TIFF/BIL/XYZ...
2) added/combined tiled export options able to produce one set of tiles (with associated template html & kml) that can be used with multiple viewing options, including KML Superoverlay, Leaflet, OpenLayers, Google Maps, and other tile map services.

Currently I do this with gdal, but it requires multiple passes and is MUCH slower than metashape's efficient raster outputs. We are shifting to cloud-optimized geotif format for DEMs and orthos and starting to use tiled datasets more for serving ortho products to end users, so exporting a temporary raster or raster tiles that I postprocess into something else using slow tools is starting to become a significantly inefficient part of my workflow.

Right now I export TIFFs from Metashape either as a single TIFF or as tiles, then use gdal_translate -of COG to generate COGs (slow), gdalbuildvrt and gdalwarp to make VRTs in EPSG:4326 (fast), and gdal_translate -of KMLSUPEROVERLAY -co FORMAT=AUTO (verrrry sloow) or gdal2tiles -k --processes=[NumCores] (fast, but sometimes buggy, with gdalwarp generating a LOT of empty tiles) to make the KML superoverlay.

gdal2tiles is nice because it automagically creates viewers for leaflet, openlayers, google maps, and kml (with the -k option). Also it uses all cores for building the KML Superoverlay where gdal_translate doesn't. But gdal_translate supports hybrid JPG and PNG (for edge transparency) tiles. If I could do all of this within metashape, I would jump for joy.

I would love to be able to do these things within Metashape in the export dialog/API - especially creating the KML superoverlay as a folder hierarchy rather than one giant zip, and using the same tiles for KML/Google Maps/OpenLayers/Leaflet - lots of flexibility there. If that also generated a VRT (or something I could generate a VRT from by pointing gdal at a product it recognizes as a raster), then I could use those tiles for my COG even if that feature wasn't implemented, and it would be much more streamlined.

[edit] - the "-co FORMAT=AUTO" option with gdal_translate is nice for optimizing the size of KML layers, but I'm not sure how it would work when sharing the same tiles with tile services - I guess probably not well, so maybe that wouldn't make sense to do.

Also, adding the KML superoverlay with network links as an option (instead of KMZ) would be nice, because then the whole tile hierarchy could be more easily moved to online services.

Andy


40
Feature Requests / crop raster to nodata extent?
« on: December 10, 2020, 04:55:51 AM »
For some reason I was under the impression that rasters were built only to the data extent. Is it actually the bounding box?

If so, it would be nice to have a "crop to data extent" option. I have a project with three sets of images overlapping but covering different sub-parts of the total aligned project. I iterated through the aligned sets of images, disabling all but one to generate three different dense clouds, then built DEMs for each dense cloud. The DEMs all have roughly the same extent (that of the total cloud), even though one of them only occupies about 10% of the area. This makes the raster size and export time much larger/longer than if they were cropped to the data extent, and now I have to trim to the data extent in external software. Doing that with a shapefile is complicated because I am trying to maintain integer bounding boxes, and it would be nice to be able to do it procedurally (python script) or in batch.

41
I attempted to save out four approximately equal subset regions from a large (~80,000 image) project as separate chunks to process more efficiently. The original project was 467 GB, and most of that was saved key points (437.14 GB). After trimming it to sub-regions and running the Tools/Preferences/Advanced/Clean up project... task (no junk files found), the smaller chunks were still around 440 GB - with exactly the same size key point file (437.14 GB).

To trim the project extent, I selected the region(s) to delete, then right-click/Remove Cameras, reset the region, and saved out the new chunk.

Is there a way to delete the key points that are no longer referenced by any photo in the project?

Andy

42
I am trying to figure out how - or if - I can write a script to select or de-select only tie points/matches that are shared between camera groups (camera folders) in a chunk. I want to treat tie points between camera groups differently than tie points within the same group while performing gradual selection filtering, so after I select points with a certain threshold, I want to be able to deselect points that either do or don't have cameras from multiple groups (I want to do both at different times). If I knew how to do one I could do the other, but I'm confused about where to start.

Is this possible to do? I'm not sure if I should start with chunk.point_cloud.projections or chunk.point_cloud.points or chunk.point_cloud.cameras. It looks like I can access chunk.point_cloud.projections by camera, but there's not a straightforward way to get a list of cameras for a given projection (tie point?).

Then I guess I'd have to parse that list and see if it had more than one camera group?

My brain wants to iterate through points (for point in chunk.point_cloud.points:) but maybe I have to iterate through cameras?
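A sketch of the bookkeeping I have in mind, in plain Python (the camera/track structures here are toy stand-ins - in the real API I'd presumably read proj.track_id from chunk.point_cloud.projections[camera] and the group from camera.group, then select points whose track_id lands in the returned set):

```python
from collections import defaultdict

def tracks_spanning_groups(track_ids_by_camera, group_of_camera):
    """Return the track_ids observed by cameras from more than one group."""
    groups_per_track = defaultdict(set)
    for camera, track_ids in track_ids_by_camera.items():
        for tid in track_ids:
            groups_per_track[tid].add(group_of_camera[camera])
    return {tid for tid, groups in groups_per_track.items() if len(groups) > 1}

# Toy example: track 2 is seen from both group A and group B
shared = tracks_spanning_groups({'cam1': [1, 2], 'cam2': [2, 3], 'cam3': [3]},
                                {'cam1': 'A', 'cam2': 'B', 'cam3': 'B'})
print(shared)  # {2}
```

So iterating through cameras once (to build the track-to-groups map) and then through points would avoid a per-point camera lookup.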

Any hints would be super-appreciated.

Andy

43
Feature Requests / Reuse depth maps - generate if not exist
« on: December 04, 2020, 05:29:37 AM »
Not sure about how other folks use this, but I often disable a subset of aligned images, generate a dense cloud, then enable some/disable some images, and generate another dense cloud.

"Reuse depth maps" defaults to checked in the dialog box (advanced settings, hidden by default - probably checked because of my "keep depth maps" setting), but if depth maps don't exist I get a "zero resolution" error, rather than (1) depth maps being created for images that don't have them, or (2) a prompt saying they don't exist.

I would prefer that when I have "keep depth maps" selected in advanced options, "reuse depth maps" does not automatically regenerate existing depth maps, but DOES generate depth maps that don't exist.

Also a quick note - Thank you Agisoft developers for being so responsive to user questions/requests/comments. It makes for a very nice user experience, and I think that it generally shows in the helpfulness and enthusiasm of the user community.

44
Bug Reports / Network Project completed, but won't open?
« on: December 03, 2020, 03:04:00 AM »
I finished and closed a project yesterday in network processing, and when I tried to open it today Metashape is stuck on "processing..." and "untitled" (I tried to open it a good half-hour ago). The task is #9, visible in the monitoring console in the screenshot.

The storage is high-speed storage on a HPC optimized for machine learning, so file access speed shouldn't be a problem.

[edit] - I tried this with an idle node in case there was cleanup work needing to be done, but the log shows that cleanup completed.

45
General / Network vs Non-network alignment performance
« on: December 02, 2020, 10:28:00 AM »
I was comparing alignment time on a relatively large project, network vs non-network, and was surprised that the non-network machine seems to be going much faster (~4x), especially in alignment finalization. One thing I noticed is that the node is only processing 7 images at a time, while the workstation is processing ~20. The workstation (Threadripper 3960x/256GB RAM/2x RTX 2080 Super) takes 5-6 minutes to adjust points after each 20-image batch, while the network node (2x 18-core 2.3GHz Skylake CPU/384GB RAM/4x NVidia V100) takes 7-8 minutes to adjust points after each 7-image batch. I understand that the 3960x is higher frequency, but not why the machine with more RAM/cores is taking smaller batches. The project is the same, just copied to the network with the image paths changed. The network nodes have faster disk/network access than my workstation.
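A rough sanity check of the ~4x figure, using the midpoints of the batch sizes and adjustment times quoted above:

```python
# Images adjusted per minute, from the quoted batch sizes and times
workstation = 20 / 5.5  # ~20 images per 5-6 minute adjustment
node = 7 / 7.5          # 7 images per 7-8 minute adjustment

print(round(workstation / node, 1))  # ~3.9x - consistent with the ~4x observed
```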

log excerpts below:

Code: [Select]
...
2020-12-01 22:52:00 adding camera 75068 (77523 of 80065), 1128 of 1132 used
2020-12-01 22:52:00 adding camera 76188 (77524 of 80065), 1056 of 1056 used
2020-12-01 22:52:43 adding 116073 points, 45 far (12.272 threshold), 311 inaccurate, 346 invisible, 129 weak
2020-12-01 22:53:48 adjusting: xxxxxxxxxx 0.268141 -> 0.267689
2020-12-01 22:59:16 adding 964 points, 413 far (12.272 threshold), 322 inaccurate, 352 invisible, 131 weak
2020-12-01 22:59:16 optimized in 393.425 seconds
2020-12-01 22:59:36 adding camera 77283 (77525 of 80065), 7480 of 7500 used
2020-12-01 22:59:36 adding camera 78352 (77526 of 80065), 5854 of 5858 used
2020-12-01 22:59:36 adding camera 76647 (77527 of 80065), 4961 of 4964 used
2020-12-01 22:59:36 adding camera 77607 (77528 of 80065), 4405 of 4415 used
2020-12-01 22:59:36 adding camera 77284 (77529 of 80065), 3959 of 3986 used
2020-12-01 22:59:36 adding camera 77323 (77530 of 80065), 3047 of 3059 used
2020-12-01 22:59:36 adding camera 76345 (77531 of 80065), 2832 of 2832 used
2020-12-01 22:59:36 adding camera 77427 (77532 of 80065), 2829 of 2833 used
2020-12-01 22:59:36 adding camera 76648 (77533 of 80065), 2721 of 2727 used
2020-12-01 22:59:36 adding camera 78353 (77534 of 80065), 2613 of 2620 used
2020-12-01 22:59:36 adding camera 78543 (77535 of 80065), 2494 of 2504 used
2020-12-01 22:59:36 adding camera 76189 (77536 of 80065), 2260 of 2260 used
2020-12-01 22:59:36 adding camera 77608 (77537 of 80065), 2020 of 2030 used
2020-12-01 22:59:36 adding camera 77285 (77538 of 80065), 1845 of 1859 used
2020-12-01 22:59:36 adding camera 76344 (77539 of 80065), 1408 of 1408 used
2020-12-01 22:59:36 adding camera 76649 (77540 of 80065), 1401 of 1413 used
2020-12-01 22:59:36 adding camera 75067 (77541 of 80065), 1349 of 1349 used
2020-12-01 22:59:36 adding camera 77322 (77542 of 80065), 1299 of 1310 used
2020-12-01 22:59:36 adding camera 77426 (77543 of 80065), 1145 of 1145 used
2020-12-01 22:59:36 adding camera 76190 (77544 of 80065), 1016 of 1016 used
2020-12-01 23:00:18 adding 101126 points, 36 far (12.272 threshold), 314 inaccurate, 352 invisible, 133 weak
2020-12-01 23:01:24 adjusting: xxxxxxxxxx 0.26763 -> 0.267341
2020-12-01 23:06:53 adding 916 points, 422 far (12.272 threshold), 324 inaccurate, 351 invisible, 134 weak
2020-12-01 23:06:53 optimized in 395.194 seconds
2020-12-01 23:07:13 adding camera 77286 (77545 of 80065), 7435 of 7447 used
2020-12-01 23:07:13 adding camera 77609 (77546 of 80065), 6284 of 6292 used
2020-12-01 23:07:13 adding camera 77321 (77547 of 80065), 6004 of 6017 used
...

and the network version:

Code: [Select]
...
2020-12-02 01:09:59 adding camera 76868 (78524 of 80065), 2417 of 2423 used
2020-12-02 01:09:59 adding camera 76869 (78525 of 80065), 1439 of 1449 used
2020-12-02 01:10:19 adding 32960 points, 8 far (12.272 threshold), 314 inaccurate, 364 invisible, 153 weak
2020-12-02 01:11:11 adjusting: xxxxxxxxxx 0.254763 -> 0.25469
2020-12-02 01:18:11 adding 832 points, 303 far (12.272 threshold), 323 inaccurate, 364 invisible, 153 weak
2020-12-02 01:18:11 optimized in 472.695 seconds
2020-12-02 01:18:32 adding camera 76870 (78526 of 80065), 4766 of 4783 used
2020-12-02 01:18:32 adding camera 77833 (78527 of 80065), 4613 of 4626 used
2020-12-02 01:18:32 adding camera 76871 (78528 of 80065), 2639 of 2651 used
2020-12-02 01:18:32 adding camera 77834 (78529 of 80065), 2339 of 2347 used
2020-12-02 01:18:32 adding camera 76872 (78530 of 80065), 1394 of 1406 used
2020-12-02 01:18:52 adding 32871 points, 18 far (12.272 threshold), 314 inaccurate, 364 invisible, 152 weak
2020-12-02 01:19:44 adjusting: xxxxxxxxxx 0.254728 -> 0.254651
2020-12-02 01:26:29 adding 830 points, 304 far (12.272 threshold), 323 inaccurate, 364 invisible, 152 weak
2020-12-02 01:26:29 optimized in 457.407 seconds
2020-12-02 01:26:50 adding camera 77835 (78531 of 80065), 8946 of 8949 used
2020-12-02 01:26:50 adding camera 76873 (78532 of 80065), 4739 of 4748 used
...

[edit] 15 hours in on the workstation and it's within 5% of the network run, which has been going for about 60h. The workstation is at 83% complete, the network at 88%. I see that the workstation also takes smaller bites sometimes (like now, where it's at about the same stage as the network alignment), but finishes those much more quickly - around 1 minute instead of 7:

Code: [Select]
2020-12-02 09:05:59 adding camera 76833 (78428 of 80065), 1185 of 1188 used
2020-12-02 09:05:59 adding camera 76211 (78429 of 80065), 1010 of 1012 used
2020-12-02 09:06:42 adding 39366 points, 18 far (12.272 threshold), 319 inaccurate, 362 invisible, 134 weak
2020-12-02 09:07:49 adjusting: xxxxxxxxxx 0.252005 -> 0.251927
2020-12-02 09:13:54 adding 815 points, 317 far (12.272 threshold), 324 inaccurate, 362 invisible, 134 weak
2020-12-02 09:13:54 optimized in 432.302 seconds
2020-12-02 09:14:13 adding camera 77799 (78430 of 80065), 10228 of 10235 used
2020-12-02 09:14:13 adding camera 77800 (78431 of 80065), 4966 of 4976 used
2020-12-02 09:14:13 adding camera 76834 (78432 of 80065), 3857 of 3859 used
2020-12-02 09:14:13 adding camera 76210 (78433 of 80065), 2683 of 2686 used
2020-12-02 09:14:13 adding camera 77801 (78434 of 80065), 2278 of 2290 used
2020-12-02 09:14:13 adding camera 76835 (78435 of 80065), 2180 of 2182 used
2020-12-02 09:14:13 adding camera 76209 (78436 of 80065), 1538 of 1542 used
2020-12-02 09:14:13 adding camera 74924 (78437 of 80065), 1521 of 1521 used
2020-12-02 09:14:13 adding camera 76836 (78438 of 80065), 1321 of 1325 used
2020-12-02 09:14:55 adding 49143 points, 9 far (12.272 threshold), 319 inaccurate, 362 invisible, 132 weak
2020-12-02 09:16:03 adjusting: xxxxxxxxxx 0.252002 -> 0.251906
2020-12-02 09:22:48 adding 813 points, 317 far (12.272 threshold), 323 inaccurate, 362 invisible, 132 weak
2020-12-02 09:22:48 optimized in 472.756 seconds
2020-12-02 09:23:07 adding camera 77802 (78439 of 80065), 10655 of 10665 used
2020-12-02 09:23:07 adding camera 77803 (78440 of 80065), 5295 of 5308 used
