Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - andyroo

Pages: 1 [2] 3 4 ... 30
16
Build Tie Points is a useful tool to densify the tie point population before gradual selection or dense cloud building. It would be nice to be able to direct it to only densify certain chunks of a project in batch, rather than working with chunk names and loops in Python. I tend to use a combination of scripts that check for the existence of a product before they run, which is difficult to do with this algorithm, but if I had it in the batch dialog it would be grand. As I understand it, this tool is the same as running chunk.matchPhotos() followed by chunk.triangulateTiePoints() in the Python API?
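For reference, a minimal sketch of the check-before-run pattern I mean, assuming the Metashape 2.x Python API (chunk.tie_points, chunk.matchPhotos(), chunk.triangulateTiePoints()); the selection helper is plain Python and duck-typed, so this is an illustration rather than tested production code:

```python
# Sketch of "only densify chunks that still need it" for batch runs.
# API names assume Metashape 2.x: chunk.tie_points, chunk.matchPhotos(),
# chunk.triangulateTiePoints().

def chunks_needing_tie_points(chunks):
    """Return the chunks whose tie point product doesn't exist yet.

    Duck-typed on a .tie_points attribute so the logic is testable
    outside Metashape."""
    return [c for c in chunks if getattr(c, "tie_points", None) is None]


def densify_missing(chunks):
    """Match and triangulate only the chunks that still need tie points."""
    for chunk in chunks_needing_tie_points(chunks):
        chunk.matchPhotos()            # detect key points and match images
        chunk.triangulateTiePoints()   # build the tie point cloud
```

Inside Metashape this would be called as `densify_missing(Metashape.app.document.chunks)`; the guard makes re-runs cheap because already-densified chunks are skipped.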

17
Feature Requests / Re: COPC .laz support for exporting point cloud
« on: July 13, 2023, 11:38:52 PM »
Just wanted to re-raise this request. It's how I'm planning to publish several hundred aerial surveys, and it would be nice to be able to export as a COPC instead of writing out several TB of data then rewriting it as a COPC.


QGIS 3.32 now has much-expanded support for point clouds, and COPC is what they're internally converted to if they don't already exist, from what I can tell.

@JRM I'm confused how LASzip doesn't support COPC - the spec makes it seem like it does:

"Data organization of COPC is modeled after the EPT data format, but COPC clusters the storage of the octree as variably-chunked LAZ data in a single file. This allows the data to be consumed sequentially by any reader that can handle variably-chunked LAZ 1.4 (LASzip, for example), or as a spatial subset for readers that interpret the COPC hierarchy. More information about the differences between EPT data and COPC can be found below."

18
In both places the images read the same dimensions - 4000x3000. Screenshot attached.

19
I am getting these errors trying to build a dense cloud on only one set of images, and I'm wondering if somehow my alignment is corrupt - this is in a .psx with multiple chunks, and dense clouds generated fine from other chunks (all derivatives of a "master" chunk aligned with multiple photosets).

Initially I thought I had corrupt images, but they read fine in other software. When I re-synced them from an archive with checksum verification, nothing was overwritten, and if I force-sync them one at a time, dense cloud building still exits with an error very quickly (1-2 s), referring to various images:

Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4607.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4902.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4900.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4904.JPG
Error: Assertion 239101010894 failed. Image size mismatch: 20140828_SN000_IMG_4902.JPG

Anything I can do to potentially fix this?
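While waiting for a better answer, one quick sanity check for a size-mismatch assertion is to compare the on-disk dimensions of every image in the set. A small sketch using Pillow (Pillow is my assumption, not part of Metashape, and the folder/pattern arguments are illustrative):

```python
# Report images whose pixel dimensions differ from the most common size in a
# folder. Requires Pillow; only the image header is read, not the full pixels.
from collections import Counter
from pathlib import Path

from PIL import Image


def find_size_outliers(folder, pattern="*.JPG"):
    """Return (majority_size, [(path, size), ...]) for images that differ."""
    sizes = {}
    for path in sorted(Path(folder).glob(pattern)):
        with Image.open(path) as img:
            sizes[path] = img.size  # (width, height) from the header
    if not sizes:
        return None, []
    majority, _ = Counter(sizes.values()).most_common(1)[0]
    outliers = [(p, s) for p, s in sizes.items() if s != majority]
    return majority, outliers
```

If every file reports the same size, the mismatch is presumably between the images and what's recorded inside the project, not between the files themselves.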

20
I'm finding more than 100GB of files that look like they're "orphaned" after crashes in 2.0.2 while building dense clouds on an AMD Ryzen 9 7950X with dual RX 7900 XTX GPUs running Windows 11.

I am filling out the crash reporter and reporting the crashes using AMD's bug report tool, but I just want to make sure these are files I should delete, and ask: what's the best way to delete them? Should I leave an empty dir or replace the dir? Is there any way I can make Metashape reuse these files (e.g. by running in network mode with host/client/monitor all on this machine)?

I see that in projects where I later successfully generated the dense cloud, the "leftover" files are in /depth_maps and the completed files are in /depth_maps.1 - can I just delete /depth_maps if it has the *unfiltered* and *inliers* files?

Below is an excerpt of a dir listing for my latest crash:

Code:
07/03/2023  10:26 AM    <DIR>          .
07/03/2023  08:30 AM    <DIR>          ..
07/03/2023  10:23 AM       288,485,718 data0.zip
07/03/2023  10:24 AM       564,512,732 data1.zip
07/03/2023  10:25 AM       501,141,795 data2.zip
07/03/2023  10:26 AM       220,401,350 data3.zip
07/03/2023  08:32 AM       314,578,131 data_unfiltered0.zip
07/03/2023  08:34 AM       581,660,929 data_unfiltered1.zip
...
07/03/2023  10:22 AM       223,559,401 data_unfiltered65.zip
07/03/2023  08:44 AM       307,905,838 data_unfiltered7.zip
07/03/2023  08:46 AM       461,265,241 data_unfiltered8.zip
07/03/2023  08:47 AM       606,145,517 data_unfiltered9.zip
07/03/2023  08:32 AM       180,486,450 inliers0.zip
07/03/2023  08:34 AM       379,902,421 inliers1.zip
...
07/03/2023  10:22 AM        80,761,792 inliers65.zip
07/03/2023  08:44 AM       102,940,373 inliers7.zip
07/03/2023  08:46 AM       262,037,387 inliers8.zip
07/03/2023  08:47 AM       387,124,437 inliers9.zip
07/03/2023  08:31 AM       150,178,344 pm_cameras_info.data
07/03/2023  08:31 AM            26,992 pm_cameras_partitioning.grp
             138 File(s) 60,588,842,406 bytes
               2 Dir(s)  5,667,775,660,032 bytes free
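To survey the situation before deleting anything, a small sketch that lists stale /depth_maps dirs sitting next to a completed /depth_maps.1, assuming the layout described above (that the numbered dir is the one Metashape kept after a successful rerun is my reading, not confirmed):

```python
# List project subfolders where a stale "depth_maps" dir sits next to a
# completed "depth_maps.1" dir. Nothing is deleted; review the output first.
from pathlib import Path


def find_stale_depth_maps(root):
    """Return depth_maps dirs that have a depth_maps.1 sibling."""
    stale = []
    for d in Path(root).rglob("depth_maps"):
        if d.is_dir() and (d.parent / "depth_maps.1").is_dir():
            stale.append(d)
    return stale


# Example review loop (path is a placeholder):
# for d in find_stale_depth_maps("project.files"):
#     size = sum(f.stat().st_size for f in d.rglob("*") if f.is_file())
#     print(d, size, "bytes")
```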

21
General / Re: PC for Metashape in 2023
« on: June 24, 2023, 11:25:16 PM »
These images are also 18MP. I didn't look at individual GPU usage. Both machines are busy for the next several weeks at least.

22
General / Re: GeForce v RTX (ex Quadro)
« on: June 24, 2023, 10:24:16 AM »
Not sure if it's relevant for your work, but I just benched an otherwise mostly identical dual RTX 4090 build against a dual RX 7900 XTX build and posted about it in this thread. I'm doing point clouds, orthos, and DEMs, and they seem pretty comparable for those tasks, except the RX 7900 XTX is half the price.

23
General / Re: PC for Metashape in 2023
« on: June 24, 2023, 10:20:47 AM »
Seems worth posting here. I recently did a couple of builds, both Ryzen 9 7950X:
  • 64GB 6GHz DDR5 RAM, 2x AMD RX 7900 XTX
  • 96GB 5.2GHz DDR5 RAM*, 2x NVidia RTX 4090
*will be 192GB 5.2GHz RAM when my RMA is done; I had one bad stick.

They benchmarked pretty similarly (within 1%), except on tiled model and texture, because I don't know how to get SPIR support with AMD drivers yet (but that's not a big deal for me, since those steps aren't in my workflow).

I benchmarked using Puget Systems' extended Metashape benchmark because it actually stress-tests the cards and the CPU a bit and does everything I care about and writes to a nice file so I can compare machines.

I attached a chart and the modified benchmarking script, where I fork things depending on the Metashape version. It covers 1.5.something to 1.6.6, then skips to 2.0 because I didn't build any new workstations for a couple of years.

24
I've been running incremental alignment on a new machine using dual AMD RX 7900 XTX GPUs and have incrementally aligned 20 photo sets so far running two at a time (two cameras on the aircraft). Matching just failed trying to align sets 21 and 22 and threw this error:

Error: ciErrNum: CL_OUT_OF_HOST_MEMORY (-6) at line 212

This is the first time I've come across this error after doing incremental alignment on 20 other photo sets in the same project on this machine. I've been doing the same thing on an HPC and another workstation using NVidia CUDA GPUs for several weeks without encountering anything like this. A log excerpt before the error is shown below, and the whole log for this run is attached as a zip. I'm going to reboot in case there's a GPU memory leak or something and will report back if it happens again.

Other details:

Driver Version
22.40.57.05-230523a-392837C-AMD-Software-Adrenalin-Edition

OS    Microsoft Windows 11 Pro N
Version   10.0.22621 Build 22621
Processor   AMD Ryzen 9 7950X
BaseBoard   ProArt X670E-CREATOR WIFI
Installed RAM   64.0 GB

Log excerpt:
Code:
2023-06-22 17:13:14 filtered 3261150 out of 3307520 matches (98.598%) in 0.448 sec
2023-06-22 17:13:16 saved matches in 0.007 sec
2023-06-22 17:13:18 loaded matching partition in 0.002 sec
2023-06-22 17:13:18 loaded keypoint partition in 0.001 sec
2023-06-22 17:13:46 loaded keypoints in 27.724 sec
2023-06-22 17:13:46 loaded matching data in 0.001 sec
2023-06-22 17:13:46 Matching points...
2023-06-22 17:13:48 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:13:48 AMD Radeon(TM) Graphics (gfx1036): no SPIR support
2023-06-22 17:13:48 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:13:48 Found 3 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2023-06-22 17:13:48 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:13:48   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:13:48   max work group size 256
2023-06-22 17:13:48   max work item sizes [1024, 1024, 1024]
2023-06-22 17:13:48   max mem alloc size 20876 MB
2023-06-22 17:13:48   wavefront width 32
2023-06-22 17:13:48 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:13:48   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:13:48   max work group size 256
2023-06-22 17:13:48   max work item sizes [1024, 1024, 1024]
2023-06-22 17:13:48   max mem alloc size 20876 MB
2023-06-22 17:13:48   wavefront width 32
2023-06-22 17:13:48 Loading kernels for AMD Radeon RX 7900 XTX (gfx1100)...
2023-06-22 17:13:48 Kernel loaded in 0.016 seconds
2023-06-22 17:13:49 Loading kernels for AMD Radeon RX 7900 XTX (gfx1100)...
2023-06-22 17:13:49 Kernel loaded in 0.017 seconds
2023-06-22 17:16:22 4156962 matches found in 154.515 sec
2023-06-22 17:16:23 matches combined in 0.35 sec
2023-06-22 17:16:23 filtered 3525814 out of 3579945 matches (98.4879%) in 0.487 sec
2023-06-22 17:16:25 saved matches in 0.006 sec
2023-06-22 17:16:27 loaded matching partition in 0.001 sec
2023-06-22 17:16:27 loaded keypoint partition in 0 sec
2023-06-22 17:16:56 loaded keypoints in 28.814 sec
2023-06-22 17:16:56 loaded matching data in 0 sec
2023-06-22 17:16:56 Matching points...
2023-06-22 17:16:59 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:16:59 AMD Radeon(TM) Graphics (gfx1036): no SPIR support
2023-06-22 17:16:59 AMD Radeon RX 7900 XTX (gfx1100): no SPIR support
2023-06-22 17:16:59 Found 3 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2023-06-22 17:16:59 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:16:59   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:16:59   max work group size 256
2023-06-22 17:16:59   max work item sizes [1024, 1024, 1024]
2023-06-22 17:16:59   max mem alloc size 20876 MB
2023-06-22 17:16:59   wavefront width 32
2023-06-22 17:16:59 Using device: AMD Radeon RX 7900 XTX (gfx1100), 48 compute units, free memory: 24557/24560 MB, OpenCL 2.0
2023-06-22 17:16:59   driver version: 3516.0 (PAL,LC), platform version: OpenCL 2.1 AMD-APP (3516.0)
2023-06-22 17:16:59   max work group size 256
2023-06-22 17:16:59   max work item sizes [1024, 1024, 1024]
2023-06-22 17:16:59   max mem alloc size 20876 MB
2023-06-22 17:16:59   wavefront width 32
2023-06-22 17:16:59 Loading kernels for AMD Radeon RX 7900 XTX (gfx1100)...
2023-06-22 17:16:59 Kernel loaded in 0.016 seconds
2023-06-22 17:17:04 loaded keypoint partition in 0 sec
2023-06-22 17:17:04 loaded matching partition in 0.033 sec
2023-06-22 17:17:05 loaded matching partition in 0.773 sec
2023-06-22 17:17:05 Error: ciErrNum: CL_OUT_OF_HOST_MEMORY (-6) at line 212
2023-06-22 17:17:05 Saving project...
2023-06-22 17:17:05 saved project in 0.111 sec
2023-06-22 17:17:05 Finished batch processing in 35851.5 sec (exit code 1)

25
I've recently started working on this again and I found Paul's code useful, but it's very slow on a large point cloud with many camera groups (85 million tie points, 40 groups). Is there a way to use concurrent.futures with this code? I'm trying with this version but I don't think I did it right - it looks like I'm still using just a single core. If I understand the code right, it's going to have to iterate through all 85 million tie points ~40 times, which seems inefficient...

Code:
import Metashape
import concurrent.futures
import multiprocessing

# Get the document, chunk, point cloud, points, and projections
doc = Metashape.app.document
chunk = doc.chunk
point_cloud = chunk.point_cloud
points = point_cloud.points
projections = point_cloud.projections
npoints = len(points)


# Create a dictionary of selected camera groups
selected_groups = dict()
for camera in chunk.cameras:
    if not camera.group:
        continue
    if camera.group.selected:
        if chunk.camera_groups.index(camera.group) in selected_groups.keys():
            selected_groups[chunk.camera_groups.index(camera.group)].append(camera)
        else:
            selected_groups[chunk.camera_groups.index(camera.group)] = [camera]
nselgrps = len(selected_groups.keys())


# Create a dictionary of selected points
selected_points = dict()


# Define a function to process each camera group
def process_group(group):
    #nonlocal npoints, points, selected_points, selected_groups, nselgrps
    for camera in group:
        point_index = 0
        for proj in projections[camera]:
            track_id = proj.track_id
            while point_index < npoints and points[point_index].track_id < track_id:
                point_index += 1
            if point_index < npoints and points[point_index].track_id == track_id:
                if not points[point_index].valid:
                    continue
                else:
                    if points[point_index].selected:
                        if points[point_index].track_id in selected_points.keys():
                            selected_points[points[point_index].track_id][1] += 1
                        else:
                            selected_points[points[point_index].track_id] = [points[point_index], 1]

# Use concurrent.futures to execute process_group for each selected camera group
with concurrent.futures.ThreadPoolExecutor(multiprocessing.cpu_count()) as executor:
    executor.map(process_group, selected_groups.values())

# Unselect any points that are visible in more than one camera group
ndsel = 0
for point in selected_points.keys():
    if selected_points[point][1] > 1:
        selected_points[point][0].selected = False
        ndsel += 1

# Print the number of points unselected and remaining
print("Number pts unselected: ",ndsel,"Number remaining: ",len(selected_points.keys())-ndsel)
print("done")
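For what it's worth, I suspect ThreadPoolExecutor can't speed up a pure-Python loop like this because of the CPython GIL, and the per-group workers all mutate a shared dict. A single pass over the projections that maps each track_id to the set of groups observing it would avoid both problems; a sketch in plain Python (the group/projection structures here are duck-typed stand-ins for the Metashape objects, so the names are illustrative):

```python
# Single-pass alternative: record, for every tie point track, which camera
# groups observe it, then report the tracks seen by more than one group.
# Inputs are stand-ins for Metashape's structures (assumption):
#   groups_to_cameras: {group_id: [camera, ...]}
#   get_track_ids(camera) -> iterable of track_ids projected in that camera

def tracks_in_multiple_groups(groups_to_cameras, get_track_ids):
    """Return the set of track_ids observed by two or more camera groups."""
    seen_in = {}  # track_id -> set of group ids that observe it
    for group_id, cameras in groups_to_cameras.items():
        for camera in cameras:
            for track_id in get_track_ids(camera):
                seen_in.setdefault(track_id, set()).add(group_id)
    return {t for t, groups in seen_in.items() if len(groups) > 1}
```

Each projection is touched exactly once instead of once per group, so the 85-million-point iteration happens a single time rather than ~40.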

26
Feature Requests / Re: COPC .laz support for exporting point cloud
« on: January 28, 2023, 01:40:01 AM »
+1*10^6 for this feature request!!!

We are producing these as our standard .laz outputs now because they are SO easy for folks to view compared to "normal" LAZ.

COPC is just a "flavor" of the LAZ 1.4 spec, organized as variable chunks in a clustered octree with a couple of COPC-specific VLRs describing the data structure. LAStools is implementing COPC support, and it's super useful for rapid releases of products everyone can view.

(source is this emergency response data release).

Anyway, it gets me excited, and I hope I can export COPC natively from Metashape soon instead of converting with PDAL, Untwine, or LAStools (soon).
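For anyone else doing the interim conversion, the PDAL route can be a tiny pipeline using the writers.copc stage (available in recent PDAL releases, 2.4+ if I remember right; filenames here are placeholders):

```json
[
    "input.laz",
    {
        "type": "writers.copc",
        "filename": "output.copc.laz"
    }
]
```

Run it with `pdal pipeline pipeline.json`.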

27
I reset an aligned project (~16k images) after correcting an error in image positions, by selecting "reset alignment", zeroing out the adjusted camera calibration parameters, and resetting the transform (trying to be thorough so I didn't corrupt camera models with previous alignment).

I was (pleasantly) surprised when I realigned, and matching completed very quickly (minutes instead of hours). It was clear that key points were saved, rather than regenerated, and I'm curious how key point behavior changed from this discussion:

Hello andyroo,

Currently it is not possible to split key point detection and image matching stages, they are grouped into Match Photos task. Keep key points feature has been introduced to allow the incremental matching, when new images are added to the already matched and aligned set of images.

As for the name of "reset current alignment" is meant to reset all the results obtained with running Align Photos operation, which include key points, tie points and EO/IO parameters.

Do I now need to explicitly delete keypoints, or will they be automatically deleted if I change the tie point criteria? Will they be automagically generated for new images if I add them, but kept for the old ones? Just trying to understand keypoints in detail, because I frequently align multiple image collections together and this is a useful feature for me.

28
I saw a bunch of error messages in the Metashape console/log after saving a large project. The storage location is a Lustre file system powered by a Cray ClusterStor L300; according to the admin, it doesn't appear to be related to filesystem performance. I'm trying to figure out if I have a corrupt (60,000-image alignment) project that I need to realign, or if this was a momentary hiccup and Metashape recovered and properly saved the projections. I saw this in the log after saving the project:

<many previous errors saying the same>
...
...
2022-12-16 11:06:09 Error: Bad local file header signature
2022-12-16 11:06:09 Error: Can't load projections: <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
2022-12-16 11:06:09 Error: Bad local file header signature
2022-12-16 11:06:09 Error: Can't load projections: <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
2022-12-16 11:06:09 Error: Bad local file header signature
2022-12-16 11:06:09 Error: Can't load projections: <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip
2022-12-16 11:06:45 saved project in 195.219 sec


Tested file integrity after the errors and it looked fine:

No errors detected in compressed data of <secret-path>/<secret-filename>_align.files/1/0/point_cloud/point_cloud.zip

Seems to be a similar issue to this post by freetec1 from February, but I have run a couple of big projects with 1.8.4 already and not seen this error before. I checked logs, and this is the first time it's reported.
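The integrity test above is easy to script for future saves; a minimal sketch using Python's stdlib zipfile (the path is a placeholder, and this only checks CRCs, the same thing `unzip -t` reports):

```python
# Check a Metashape point_cloud.zip for corruption: verify the archive opens
# and that every member's CRC matches its stored value.
import zipfile


def first_bad_member(zip_path):
    """Return the first corrupt member name, the path itself if the archive
    is unreadable, or None if everything checks out."""
    try:
        with zipfile.ZipFile(zip_path) as zf:
            return zf.testzip()  # None means all member CRCs are OK
    except zipfile.BadZipFile:
        return zip_path
```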

29
Bug Reports / Re: Missing GEOID12B_AK GEOID file for Alaska/United States
« on: December 12, 2022, 10:39:29 AM »
Hi Alexey, zones EPSG 3338 and EPSG 6330 to EPSG 6339.

30
Bug Reports / Re: Missing GEOID12B_AK GEOID file for Alaska/United States
« on: December 09, 2022, 03:52:51 AM »
Tried to install from the above location - we get this error:

libtiff error: Not a TIFF or MDI file, bad magic number 2570 (0xa0a)
TIFFClientOpen: unexpected error: <snip>/apps/metashape/1.8.4-pro/geoids/us_noaa_g2012ba0.tif
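For what it's worth, a valid TIFF starts with "II" (0x4949, little-endian) or "MM" (0x4D4D, big-endian), and 0x0a0a is two newline bytes, which suggests the downloaded file is actually text (e.g. an HTML error page saved in place of the geoid grid). A quick stdlib check (the function name is mine, paths are illustrative):

```python
# Check whether a file has a valid TIFF magic number: "II*\0" (little-endian)
# or "MM\0*" (big-endian). A text/HTML error page saved in place of the geoid
# grid would fail this check, matching the libtiff "bad magic number" error.
def is_tiff(path):
    with open(path, "rb") as f:
        magic = f.read(4)
    return magic in (b"II*\x00", b"MM\x00*")
```

If the check fails, re-downloading the geoid file (and verifying its size against the source) would be the next step.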
