Messages - djyoung

1
OK, thank you Alexey.

2
Thank you Alexey! Would I be correct to assume that the DEMs are indexed in the order they were created?

3
OK, thanks. How do I change the default selected one via the Python API?

4
Hello -- I believe that multiple DEMs generated by the `buildDem` step (such as a DTM and a DSM) can exist simultaneously in the same project (correct?). How do I specify which one should be used as the basis for the `buildOrthomosaic` step? The options for the `surfaceData` parameter of `buildDem` are `Metashape.DataSource` values, which include, e.g., `ElevationData`, but offer no way to specify *which* elevation data product to use. Is there a way to "activate" the desired `ElevationData` product so that it is used for `buildOrthomosaic`?
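
(To make the question concrete, here is the kind of control I'm hoping exists. This is only a guess at the mechanism: the `chunk.elevations` list, the `key`/`label` attributes, and the assignment to `chunk.elevation` are assumptions I have not verified against the API reference.)

Code:
import Metashape

doc = Metashape.Document()
doc.open("project.psx")  # placeholder project path
chunk = doc.chunk

# List the elevation models present in the chunk (e.g. DTM and DSM)
for elev in chunk.elevations:
    print(elev.key, elev.label)  # key/label attributes assumed

# Assumption: assigning to chunk.elevation sets the "default" DEM,
# which buildOrthomosaic then uses when surface_data is ElevationData.
chunk.elevation = chunk.elevations[1]  # e.g. pick the second DEM in the list

chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)
doc.save()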

Thanks,
Derek

5
General / Re: Incremental image alignment not working
« on: April 22, 2024, 08:36:50 PM »
Hello Agisoft -- I'm just following up on this issue (potential bug). I'd appreciate your insight when you have a moment. Thank you.

6
General / Re: Incremental image alignment not working
« on: April 15, 2024, 06:05:15 PM »
Hi Alexey (or Agisoft Support generally) -- I'm following up to see if you can help me resolve this issue. If you follow my workflow, with the data files I shared, you'll see that incremental alignment isn't working as expected, even with your suggestion of unchecking the cameras. Thank you in advance for your help with this.

7
General / Re: Incremental image alignment not working
« on: April 04, 2024, 06:22:45 PM »
Hi Alexey -- I'm just following up on this inquiry. I'd appreciate any insight you could provide. Thank you.

8
General / Re: Incremental image alignment not working
« on: March 27, 2024, 07:47:55 AM »
Hi Alexey -- No, I was leaving the checks at their default values (all checked, for both the first and second sets of images). However, I just tried unchecking all images in the second set before aligning, and I got the same result (objects in the second DEM shifted by about 3.5 m relative to the first DEM). What can I do to force the initial tie points (and their locations) to be retained? Thank you.

(For others reading this post, my workflow is outlined here: https://www.agisoft.com/forum/index.php?topic=15638.0)

9
General / Re: Incremental alignment
« on: March 26, 2024, 03:28:12 AM »
Hi Alexey -- sure:
  • in preferences/advanced, check "Keep key points"
  • Workflow/Add folder
  • Workflow/Align photos -- Accuracy Medium, Generic Preselection, Reference Preselection (Source), Key points 40,000, Tie points 4,000, exclude stationary tie points checked, adaptive fitting and guided matching unchecked
  • Workflow/Build DEM (source: tie points), File/Export DEM
  • Workflow/Add folder
  • Workflow/Align photos -- (same settings as above)
  • Workflow/Build DEM (source: tie points), File/Export DEM

Does this sound right to you? Please also see my recent post here (https://www.agisoft.com/forum/index.php?topic=16329.0), where I shared the image datasets I use in this workflow. Thank you for helping resolve this.
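
(For anyone scripting this rather than working in the GUI: my understanding is that the steps above map roughly onto the Python calls below. The parameter names, in particular `keep_keypoints`, `reset_alignment`, and `filter_stationary_points`, reflect my reading of the API reference rather than verified code, and the project and DEM paths are placeholders.)

Code:
import Metashape

doc = Metashape.Document()
doc.open("project.psx")  # placeholder project path
chunk = doc.chunk

def add_align_export(photo_paths, dem_path):
    chunk.addPhotos(photo_paths)
    chunk.matchPhotos(downscale=2,                     # Medium accuracy
                      generic_preselection=True,
                      reference_preselection=True,
                      reference_preselection_mode=Metashape.ReferencePreselectionSource,
                      keypoint_limit=40000,
                      tiepoint_limit=4000,
                      filter_stationary_points=True,   # "exclude stationary tie points"
                      guided_matching=False,
                      keep_keypoints=True)             # "Keep key points" preference
    # reset_alignment=False should be the scripted counterpart of leaving
    # "Reset current alignment" unchecked when the second batch is aligned
    chunk.alignCameras(adaptive_fitting=False, reset_alignment=False)
    chunk.buildDem(source_data=Metashape.TiePointsData)
    chunk.exportRaster(dem_path, source_data=Metashape.ElevationData)
    doc.save()

add_align_export(first_batch, "dem_pass1.tif")   # first_batch: image paths from folder 1
add_align_export(second_batch, "dem_pass2.tif")  # second_batch: image paths from folder 2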

10
General / Incremental image alignment not working
« on: March 24, 2024, 07:55:43 PM »
Hello -- I am attempting to follow the "incremental image alignment" procedure to align a new set of images to an existing set of tie points (sparse cloud). I have selected the "Keep key points" preference, and when I align the second set of images, I disable the "Reset current alignment" option.

The behavior is easy to reproduce:
1. Add this folder (https://ucdavis.box.com/s/1ms3sqr13t001k0poiylspt4hl4cn9us) of 174 photos and align.
2. Build and export DEM based on tie points.
3. Add this folder (https://ucdavis.box.com/s/mq9woeeffua83t9688oa81pl6lnak9zz) of 145 photos and align.
4. Build and export DEM based on tie points.

The DEM from Step 4 is shifted relative to the DEM from Step 2, indicating the tie points have been reset and recreated from scratch in Step 3.

I am running this on Ubuntu 22.10. I align using Medium quality, with adaptive camera model fitting. The same thing happens in versions 2.1.1, 2.1.0, 1.8.4, and 1.7.1, and I've seen multiple reports of the same issue in the forum over the years. I'd appreciate it if Metashape support could attempt to reproduce the issue with my dataset to identify a solution to this bug. Thank you!
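
(One quick way to check whether the first-pass solution is actually being discarded: run something like the snippet below in the Python console after Step 2 and again after Step 4, and compare the output. It assumes the 2.x attribute names `chunk.tie_points` and `camera.center`; older builds expose the sparse cloud as `chunk.point_cloud` instead.)

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Size of the sparse cloud; if it changes by more than the new images'
# contribution, the original tie points were probably rebuilt from scratch.
print("tie points:", len(chunk.tie_points.points))

# Estimated centre of one first-batch camera in chunk coordinates; if the
# original alignment is preserved, this should not move after Step 3.
cam = chunk.cameras[0]
print(cam.label, cam.center)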

11
General / Re: Incremental alignment
« on: March 24, 2024, 07:24:02 PM »
I'm observing the same behavior on 2.1.1 and 2.1.0, and I just tested 1.8.4 and 1.7.1 with the same result. The original set of tie points seems to be lost even when "Keep key points" is checked and all other incremental alignment instructions are followed.

12
Bug Reports / Re: Assertion "3478972301441" failed at line 720
« on: March 19, 2023, 12:14:18 PM »
Thanks Alexey, I can confirm that the processing errors I was experiencing using v2.0.0 are now resolved in v2.0.2. Thanks for fixing it.

13
Bug Reports / Re: Assertion "3478972301441" failed at line 720
« on: March 16, 2023, 06:50:14 AM »
Thanks Alexey -- but I am experiencing this error using the Metashape python module. Can you send a download link to v2.0.2 of the python module? Thanks.

14
Bug Reports / Re: Assertion "3478972301441" failed at line 720
« on: March 12, 2023, 08:55:51 PM »
I am experiencing this same frustrating error in v2.0, also with a workflow that includes removal of tie points using gradual selection. My workflow uses the python API.
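
(For context, the tie-point removal step follows the usual gradual-selection pattern. The sketch below is a simplified reconstruction rather than my exact script: the criteria and thresholds are placeholders, and the `Metashape.TiePoints.Filter` names reflect my reading of the 2.x API.)

Code:
import Metashape

chunk = Metashape.app.document.chunk

def gradual_selection(criterion, threshold):
    # Remove sparse-cloud points exceeding the threshold for this criterion
    f = Metashape.TiePoints.Filter()
    f.init(chunk, criterion=criterion)
    f.removePoints(threshold)

# Placeholder thresholds; the real script uses project-specific values
gradual_selection(Metashape.TiePoints.Filter.ReconstructionUncertainty, 15)
gradual_selection(Metashape.TiePoints.Filter.ReprojectionError, 0.5)
chunk.optimizeCameras()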

I'm not able to send project files that let others reproduce this: when I take a project file from a previously failed workflow (saved after building depth maps) and rerun the "build point cloud" step in the GUI, it completes without error.

I wonder if it has to do with the fact that some cameras have no neighbors? Here's the progress text from the start of the "build point cloud" step in case it helps.

Quote
Generating depth maps...
Preparing 2006 cameras info...
cameras data loaded in 1.06719 s
cameras graph built in 0.692246 s
filtering neighbors with too low common points, threshold=50...
Camera 577 has no neighbors
Camera 1184 has no neighbors
avg neighbors before -> after filtering: 294.991 -> 112.525 (62% filtered out)
limiting neighbors to 16 best...
avg neighbors before -> after filtering: 112.525 -> 15.984 (86% filtered out)
neighbors number min/1%/10%/median/90%/99%/max: 0, 16, 16, median=16, 16, 16, 16
cameras info prepared in 16.8615 s
saved cameras info in 0.035684
Partitioning 2006 cameras...
number of mini clusters: 41
41 groups: avg_ref=48.9268 avg_neighb=92.3659 total_io=289%
max_ref=50 max_neighb=160 max_total=207
cameras partitioned in 0.072118 s
saved depth map partition in 0.001658 sec
loaded cameras info in 0.025856
loaded depth map partition in 6.8e-05 sec
already partitioned (47<=50 ref cameras, 100<=200 neighb cameras)
group 1/1: preparing 147 cameras images...
tie points loaded in 0.010106 s
Found 1 GPUs in 0.000266 sec (CUDA: 0.000134 sec, OpenCL: 0.00011 sec)
Using device: GRID A100X-40C, 108 compute units, free memory: 36887/40955 MB, compute capability 8.0
  driver/runtime CUDA: 12000/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
group 1/1: cameras images prepared in 13.2945 s
group 1/1: 147 x frame
group 1/1: 147 x uint8
group 1/1: expected peak VRAM usage: 236 MB (64 MB max alloc, 2736x2736 mipmap texture, 16 max neighbors)
Found 1 GPUs in 0.000276 sec (CUDA: 0.000139 sec, OpenCL: 0.000119 sec)
Using device: GRID A100X-40C, 108 compute units, free memory: 36887/40955 MB, compute capability 8.0
  driver/runtime CUDA: 12000/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device 'GRID A100X-40C' in concurrent. (2 times)
[GPU 1] group 1/1: estimating depth map for 1/47 camera 6 (16 neighbs)...
[GPU 2] group 1/1: estimating depth map for 2/47 camera 7 (16 neighbs)...
[GPU 1] Camera 6 samples after final filtering: 85% (6.81738 avg inliers) = 100% - 0% (not matched) - 4% (bad matched) - 0% (no neighbors) - 1% (no cost neighbors) - 6% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 3% (speckles filtering)
[GPU 1] Camera 6: level #4/4 (x4 downscale: 1368x912, image blowup: 2736x1824) done in 0.435811 s = 30% propagation + 34% refinement + 22% filtering + 0% smoothing
Peak VRAM usage updated: Camera 6 (16 neihbs): 235 MB = 100 MB gpu_neighbImages (43%) + 64 MB gpu_tmp_hypo_ni_cost (27%) + 12 MB gpu_tmp_normal (5%) + 9 MB gpu_neighbMasks (4%) + 7 MB gpu_mipmapNeighbImage (3%) + 4 MB gpu_refImage (2%) + 4 MB gpu_depth_map (2%) + 4 MB gpu_cost_map (2%) + 4 MB gpu_coarse_depth_map_radius (2%) + 4 MB gpu_coarse_depth_map (2%)
[GPU 2] Camera 7 samples after final filtering: 86% (6.84724 avg inliers) = 100% - 0% (not matched) - 4% (bad matched) - 0% (no neighbors) - 1% (no cost neighbors) - 6% (inconsistent normal) - 0% (estimated bad angle) - 0% (found bad angle) - 3% (speckles filtering)

15
Hello and happy new year!

I do forest surveys by drone, and at some of my sites I collect imagery from two drones of the same model (with the same camera model). With two cameras of the same model, is it necessary to use separate calibrations (and thus separate calibration groups)? The photo XMP data from both cameras will specify the same model number, so I believe Metashape will put them in the same calibration group. Is that OK, or do I need to split them into separate groups?
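
(If the answer is that separate groups are needed, my assumption is that the split could be scripted roughly as below, by creating a second sensor and reassigning one drone's cameras to it. The folder name, the copied sensor attributes, and the idea that reassigning `camera.sensor` is sufficient are all untested assumptions on my part.)

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Hypothetical: the second drone's images live under a folder named "drone_b"
drone_b_cams = [cam for cam in chunk.cameras if "drone_b" in cam.photo.path]

# Copy the basic settings of the existing sensor into a new calibration group
src = drone_b_cams[0].sensor
new_sensor = chunk.addSensor()
new_sensor.label = src.label + " (drone B)"
new_sensor.type = src.type
new_sensor.width = src.width
new_sensor.height = src.height
new_sensor.focal_length = src.focal_length
new_sensor.pixel_width = src.pixel_width
new_sensor.pixel_height = src.pixel_height

# Reassigning camera.sensor should move the cameras into the new group
for cam in drone_b_cams:
    cam.sensor = new_sensor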

Thank you!
