

Topics - andyroo

31
I am trying to adapt a modified version (modified only because we've been independently adding version updates) of William George's Puget Systems Metashape Benchmark script to work with network processing, because otherwise it will only run on one node. I don't understand how the dialog boxes are treated in network processing, why the script crashes at the end, or why I can't cancel it by cancelling from the GUI/client, by selecting stop/quit from the network monitor, or by running client.abortBatch or client.abortNode from the client once it's connected.

When the script completes, it throws an error in the monitor and starts again from the beginning:

2020-11-09 12:56:43 [127.0.0.1:54773] failed #0 RunScript: Unexpected process termination

and in the node details it looks like it displays the results (like it would at the end), errors, then restarts. So what is the proper way to have the script terminate? I have attached the version I'm running, in network and standalone variants, for 1.6.5. I've tried it both on an HPC and on a local workstation set up to be the server/monitor/node/client all at once, and I get the same behavior.

I *think* that the example below (for align photos) is how I would modify the script to run distributed processing, but I'm not sure how to properly end it instead of having it loop, which I am guessing has to do with the way network processing treats the Metashape.app.quit() command and/or the Metashape.app.messageBox(<message>) command.

Code:
# ALIGN PHOTOS
# open the first version of the project we want to test, in its unprocessed state
doc.open(folderpath + '/' + projectname + '/' + projectname + ' Align Photos.psx', False, True)
chunk = doc.chunk

# get a beginning time stamp
timer1a = time.time()

# match photos
# THIS HAS TO BE HANDLED DIFFERENTLY FOR 1.6.1 AND LATER vs EARLIER VERSIONS
if Metashape.app.version in ('1.6.1', '1.6.2', '1.6.3', '1.6.4', '1.6.5'):

    # for versions 1.6.1 and later, matching accuracy was changed to scaling.
    # Values taken from Alexey's (dev) post at the Metashape forum:
    # Metashape.HighAccuracy changes to downscale=1

    # check if network processing is enabled
    if Metashape.app.settings.network_enable:
        network_tasks = list()
        task = Metashape.Tasks.MatchPhotos()
        task.downscale = 1
        task.keypoint_limit = 40000
        task.tiepoint_limit = 0
        task.generic_preselection = True
        task.reference_preselection = False
        task.subdivide_task = True

        n_task = Metashape.NetworkTask()
        n_task.name = task.name
        n_task.params = task.encode()
        n_task.frames.append((chunk.key, 0))
        network_tasks.append(n_task)
    else:
        chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=False, subdivide_task=True)

else:
    # for version 1.5.1 or earlier this is unchanged from the original script.
    chunk.matchPhotos(accuracy=Metashape.HighAccuracy, generic_preselection=True, reference_preselection=False)

# align cameras
# THIS HAS TO BE HANDLED DIFFERENTLY FOR 1.6.1 AND LATER vs EARLIER VERSIONS
if Metashape.app.version in ('1.6.1', '1.6.2', '1.6.3', '1.6.4', '1.6.5'):
    # note v1.6.1 added subdivide_task=True to see how good it gets
    chunk.alignCameras(subdivide_task=True)
else:
    # for version 1.5.1 or earlier this is unchanged from the original script.
    chunk.alignCameras()

# get an ending time stamp
timer1b = time.time()

Also, it seems that to make this run properly on the network I probably need to get rid of the GUI message box, and somehow ensure that one task completes before any others begin (for benchmark consistency, assuming the benchmark mimics a workflow that requires sequential ordering). I'm also a little unclear on how to properly control logging for each step of the task, and on how I can launch a script from the GUI (client) that spins tasks off to however many nodes want them, to be done in order, with results reported back to the client so it can record the elapsed time for the benchmark...
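One fragile spot in the script above (my observation, not from the original post): the membership test `Metashape.app.version in ('1.6.1', ..., '1.6.5')` silently falls through to the pre-1.6 branch for 1.6.6 or 1.7.x. A minimal sketch of a more robust check, assuming the version string is dotted integers:

```python
def version_tuple(version: str) -> tuple:
    """Parse a dotted version string like '1.6.5' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split('.')[:3])

# Tuples compare element-wise, so 1.6.10 correctly sorts after 1.6.1
new_api = version_tuple('1.6.10') >= version_tuple('1.6.1')   # True
old_api = version_tuple('1.5.1') >= version_tuple('1.6.1')    # False
```

In the script this would replace each membership test, e.g. `if version_tuple(Metashape.app.version) >= (1, 6, 1):`.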

32
General / "Keep keypoints" confusion - not working as expected
« on: September 23, 2020, 07:08:55 PM »
I understood the "keep keypoints" option to work with incremental alignment, but when we try to align a subset of photos (e.g. adjacent lines) with a previous subset, we don't get as good an alignment as if we re-run from scratch.

I also expected that with "keep keypoints" and the same keypoint threshold, we would not need to regenerate keypoints with a new alignment, but I see that through the GUI that step is performed anyway.

What are the best practices for working with "keep keypoints" enabled, both for incremental image alignment, and if we want to restart the alignment process, but with the same keypoints?

33
I noticed that the manual (Page 25, Alignment Parameters/Reference Preselection) says,

"For oblique imagery it is necessary to set Ground altitude value (average ground height in the same coordinate system which is set for camera coordinates data) in the Settings dialog of the Reference pane to make the preselection procedure work efficiently."

But the dialog box in Reference/Settings calls the value I think it is referring to "Capture distance".

Which value is the dialog box expecting? For example, if I am flying at an altitude of 1000 m above sea level at a 45° oblique angle, the capture distance is 1414 m, but if I am imaging land at sea level, the ground altitude value is 0.
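The arithmetic behind those two candidate values (a sketch; the variable names are mine, and the numbers restate the example above): at a given height above the ground, the slant (capture) distance grows with the off-nadir angle as height / cos(angle), while the ground altitude is just the terrain height.

```python
import math

height_above_ground_m = 1000.0   # flying height minus average ground elevation
off_nadir_deg = 45.0             # oblique look angle measured from vertical

# Slant range from the camera to the imaged ground point
capture_distance_m = height_above_ground_m / math.cos(math.radians(off_nadir_deg))
# ≈ 1414.2 m for the 45° example

ground_altitude_m = 0.0          # average terrain height, here sea level
```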

34
Hi Agisoft,

I've noticed in 1.6.3 that when I edit a particular NAD83(2011) UTM XX projected coordinate system to make a compound coordinate system in NAVD88 height that it overwrites other projected coordinate systems I've made. For example if I start with the NAD83(2011) UTM Zone 10 North PCS and edit it to add NAVD88 height as a compound coordinate system, then when I try to add one for NAD83(2011) UTM Zone 18 North, it overwrites the NAD83 Zone 10 north one (or at least it looks like it does in the available CS menu). I've seen this with NAD83(2011) in UTM 10, 17, and 18.

Typically I am choosing to edit in the dropdown as I am creating a DEM/ortho, although sometimes I do it when exporting. My standard practice is to process with geoid heights in the appropriate GCS (NAD83(flavor) or WGS(flavor)). If I'm doing something wrong, sorry for the erroneous bug report.

Andy

35
This might be better characterized as a utilization/optimization/performance report or a feature request.

My system is an AMD Threadripper 3960x, ASRock Creator mobo, 256GB RAM, 2x RTX 2080 Super GPUs.

I have a project with a DEM built on High at 802510 x 916747, maybe 5% data and 95% nodata. When I build an ortho, at the first step, if I click "set up boundaries" the code runs at about 8% capacity on my Threadripper: 50% on one core and 100% on another (actually bouncing between two cores, each either running at 100% or idle). My other 21 cores are idle and there's minimal disk activity. It takes about 20 minutes to estimate the tile boundaries (according to the log).

Estimated dimensions are 640558 x 1702318 (which seems too small given that the ortho is 2x the resolution of the DEM) and when I hit go it says it's going to take around 18 hours.

Then it goes back to "estimating tile boundaries" with three cores doing their thing, but more jaggedly (screenshot attached). It looks like the first core (going at half capacity) dips when the full-throttle hand-off occurs between the two cores going gangbusters. This happens for another 20 minutes, then usage rises to a more acceptable level on all cores and the estimated time starts dropping. It's currently down to 12h, and it looks like I'll end up in the ~8-10h range.

I am a little surprised at the overall CPU and HDD usage (second screenshot). I expected one of them to be the bottleneck, but it seems like the code itself might be. Only about half of the cores are running at full capacity, the other half at about 50%, and data are only being written to the drive around half the time.

Oh, and GPU1 is running at about 5%, GPU2 not at all, and only 8% of RAM is being used.

36
I'm trying to update a script based on William George's awesome Puget Systems Extended Benchmark for 1.6.3 and I'm getting the error "Reference model has no diffuse texture" on the following line, in a chunk that has only tie points, depth maps, and a dense cloud (built in 1.5.1):

Code:
chunk.buildTiledModel(keep_depth=False,  transfer_texture=False, subdivide_task=False)

I can duplicate this by loading Park Map Build Tiled Model.psx and running the following (I tried to be more explicit and specify source_data=Metashape.DenseCloudData and transfer_texture=False in case some default changed):

Code:
chunk=Metashape.app.document.chunk
chunk.buildTiledModel(keep_depth=False, source_data=Metashape.DenseCloudData, transfer_texture=False, subdivide_task=False)

result:

Code:
2020-06-15 19:00:58 *********
2020-06-15 19:00:58 estimating quality done. (in 0.145 sec)
2020-06-15 19:00:58 Avg camera fetch time: 0 sec
2020-06-15 19:00:58 Avg camera processing time: 0.0158889 sec
2020-06-15 19:00:58 blending textures...
2020-06-15 19:00:58 *********
2020-06-15 19:00:58 blending textures done in 0.467 seconds.
2020-06-15 19:00:58 Avg camera fetch time: 0.209889 sec
2020-06-15 19:00:58 Avg camera processing time: 0.0296667 sec
2020-06-15 19:00:58 Processed cameras
2020-06-15 19:00:59 postprocessing atlas... done in 0.063 sec
2020-06-15 19:00:59 applying textures... done in 0.003 sec
2020-06-15 19:00:59 saved textures in 0.004 sec
2020-06-15 19:00:59 loaded tiled model data in 0.001 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded camera partition in 0.001 sec
2020-06-15 19:00:59 loaded selector in 0.002 sec
2020-06-15 19:00:59 selected 24 blocks in 0 sec
2020-06-15 19:00:59 loaded model blocks in 0.001 sec
2020-06-15 19:00:59 loaded uv blocks in 0 sec
2020-06-15 19:00:59 loaded block index in 0.001 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded model blocks in 0.006 sec
2020-06-15 19:00:59 loaded uv blocks in 0.002 sec
2020-06-15 19:00:59 loaded texture blocks in 0.002 sec
2020-06-15 19:00:59 Baking textures...
2020-06-15 19:00:59 saved textures in 0.004 sec
2020-06-15 19:00:59 loaded tiled model data in 0 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded block index in 0.001 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded camera partition in 0 sec
2020-06-15 19:00:59 loaded selector in 0.002 sec
2020-06-15 19:00:59 selected 170 blocks in 0 sec
2020-06-15 19:00:59 loaded model blocks in 0.004 sec
2020-06-15 19:00:59 loaded uv blocks in 0.002 sec
2020-06-15 19:00:59 loaded block index in 0.001 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded block index in 0 sec
2020-06-15 19:00:59 loaded model blocks in 0.021 sec
2020-06-15 19:00:59 loaded uv blocks in 0.009 sec
2020-06-15 19:00:59 loaded texture blocks in 0.017 sec
2020-06-15 19:01:00 Baking textures...
2020-06-15 19:01:01 saved textures in 0.006 sec
2020-06-15 19:01:01 loaded tiled model data in 0.001 sec
2020-06-15 19:01:01 loaded block index in 0 sec
2020-06-15 19:01:01 loaded block index in 0 sec
2020-06-15 19:01:01 loaded block index in 0.001 sec
2020-06-15 19:01:01 loaded camera partition in 0 sec
2020-06-15 19:01:01 loaded selector in 0.002 sec
2020-06-15 19:01:01 selected 1 blocks in 0 sec
2020-06-15 19:01:01 loaded model blocks in 0 sec
2020-06-15 19:01:01 loaded uv blocks in 0.001 sec
2020-06-15 19:01:01 loaded block index in 0 sec
2020-06-15 19:01:01 loaded block index in 0.001 sec
2020-06-15 19:01:01 loaded block index in 0 sec
2020-06-15 19:01:01 loaded model blocks in 0.001 sec
2020-06-15 19:01:01 loaded uv blocks in 0 sec
2020-06-15 19:01:01 loaded texture blocks in 0 sec
2020-06-15 19:01:02 Finished processing in 4161.25 sec (exit code 0)
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-3-dfe855b9c63f> in <module>()
----> 1 chunk.buildTiledModel(keep_depth=False, source_data=Metashape.DenseCloudData, transfer_texture=False, subdivide_task=False)

Exception: Reference model has no diffuse texture

I guess there is some change that means the old dataset (pre-built model) won't work for 1.6.3 and on?

37
I've found that it can be useful to tweak the Confidence filter incrementally (e.g. to see when the water noise goes away while I still maximize the number of points on cliff faces), but right now I have to manually mouse-click through a bunch of menus, even though the filter dialog is extremely responsive.

Would it be possible to add an "apply" button that doesn't immediately close the dialog window but applies the filter settings?

38
I just read this question and was going to reply saying I wish Metashape could export confidence too (I guess it can in PLY format), but I would love to be able to export confidence as intensity or as a user value in LAS/LAZ files.

I love the "confidence" addition though, thank you!
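As a stopgap (my own sketch, not a Metashape feature), once confidence has been exported via PLY it could be rescaled into the 16-bit intensity field of a LAS/LAZ file. The mapping itself is just a linear rescale; the value ranges assumed here (8-bit confidence, 16-bit LAS intensity) are illustrative:

```python
def confidence_to_intensity(confidence: int, conf_max: int = 255, intensity_max: int = 65535) -> int:
    """Linearly rescale a per-point confidence value into the LAS 16-bit intensity range."""
    return round(confidence / conf_max * intensity_max)
```

Applying this per point with a PLY reader and a LAS writer (third-party libraries such as plyfile and laspy) would give the requested export, at the cost of overloading the intensity field.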

39
Bug Reports / Correlation matrix disappeared?
« on: May 23, 2020, 03:25:28 AM »
I've been running a series of tests with the same imagery (multiple conversion-to-JPEG parameters), and in one case the correlation matrix disappeared in the chunks that were copied (via script) from the initial alignment chunk. I can't see anything different from the other projects; the alignment, optimization, and script are all run in batch: (1) align, (2) optimize, (3) run script.

The script copies the original chunk and runs gradual selection, saving a new chunk each time it optimizes to a threshold value, first for Reconstruction Uncertainty, then Projection Accuracy, then Reprojection Error. All of the copied chunks have the correlation matrix grayed out.

I just tried copying manually and the correlation matrix survives the copy. I'll try running just the script again, guessing it will work, but wondering what would cause this...

40
I can get a lot of useful comparison metrics from the chunk info, but right now I have to right-click, mouse into the window, and click before I can hit Ctrl-A to select all, Ctrl-C to copy, Alt-Tab to switch to Calc, Ctrl-V to paste, and several more keyboard-only commands to extract the right column, copy-paste again, transpose, and go down a column for the next row of input.

I could script all of this keyboard-only and iterate through a project with dozens of chunks (like the one I'm doing painfully by hand and wrist now) if I had one hotkey that pulled up info on the highlighted (gray background) chunk, not the active bolded chunk. It would also need to shift focus to that window so Ctrl-A would select all.

If I had the ability to set a custom hotkey for chunk info, it would be most wonderful.

41
General / Fine-level task subdivision performance
« on: May 01, 2020, 01:06:30 AM »
Has anyone benchmarked a good-sized project with fine-level task subdivision enabled vs disabled? Using 1.6.2, I'm just curious whether it speeds up or slows down processing if you're otherwise not limited by RAM. Running on an AMD 3960x, 256GB RAM, 2x 2080 Super GPUs. I could try re-running the same ~7k image project if nobody has an answer.
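For anyone running the A/B comparison, a minimal timing harness (my own sketch; the function being timed would be whatever Metashape call you compare with subdivision on vs off):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn(*args, **kwargs), print the elapsed wall-clock time, and return (result, elapsed)."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    print(f"{label}: {elapsed:.1f} s")
    return result, elapsed
```

For example, `timed("depth maps, subdivided", chunk.buildDepthMaps, subdivide_task=True)` versus the same call with `subdivide_task=False` on a fresh copy of the chunk.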

42
Feature Requests / reduce bit depth in ortho generation (or export)
« on: January 14, 2020, 09:59:42 PM »
It would be nice to have an option to reduce bit depth for ortho generation. We are using 16-bit color for alignment, but don't need that color depth for orthoimagery. Image size is dramatically increased with 16-bit color, and platform compatibility is reduced. Also, the internal Metashape ortho files are huge compared to 8-bit, and since we aren't able to specify a minimum ortho resolution until the export stage, unnecessary disk space is used.

[edit] One way to do this (which wouldn't address the size of the "build ortho" output but would enable 8-bit export) would be to add a colors_rgb_8bit parameter to the exportOrthophotos method, like the one added to exportModel and exportPoints in the API.

43
I stopped a tiled dense cloud export this AM (3 billion points) that had created ~10,000 1 km x 1 km tiles over a ~500 km x ~500 m section of coast that's sort of V-shaped. It was 16 hours in and reported >16 hours to go.

Exporting the entire cloud as a single file took ~34 minutes. It looks like maybe there's a "blind" tiling of the region before anything else, which generated the ~10,000 tiles when it should only be 500-1000 tiles at most. I'm not sure why everything slowed down so much, but I wanted to report it. Happy to provide more info if it helps.
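The tile-count arithmetic behind that estimate (restating the figures in the post; the 100 km bounding box is my own illustrative guess for a V-shaped coastline): a ~500 km strip only ~500 m wide should touch on the order of 500-1000 1 km tiles, while blindly gridding the full bounding box produces far more, mostly empty, tiles.

```python
import math

coast_length_km = 500.0
strip_width_km = 0.5
tile_km = 1.0

# Tiles actually touched by the data strip (1-2 tiles across, depending on tile offsets)
occupied_tiles = math.ceil(coast_length_km / tile_km) * 2   # upper bound of 1000

# A blind tiling of a hypothetical 100 km x 100 km bounding box around the V
bbox_tiles = math.ceil(100.0 / tile_km) * math.ceil(100.0 / tile_km)   # 10000
```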

Andy

44
I'm building out a 10G network to start network processing with our several (Windows x64) metashape machines, and wondering if there's any disadvantage to having the SERVER node be a computer that hosts the shared STORAGE for network processing.

I noticed that in the "How to configure network processing" guidance, Agisoft says that the SERVER doesn't need to be a powerful machine and the shared STORAGE should be accessible to all machines.

I was going to add an SSD NAS to the network for the shared storage, but then I realized I could just put an SSD RAID card in my SERVER computer and save a switch port (maybe for another worker node :-), as long as the SSD-to-network file transfer bandwidth wouldn't interfere with the server commands.

Any thoughts on whether there are disadvantages to this?

45
I notice on a ~45k image project that during dense reconstruction the initial processing runs through chunks of ~235 images at a time.

For each step, the processing loop usually spends about 1/3 of the time (per chunk) "loading images", with relatively low CPU and disk usage (CPU in bursts of up to about 35%, disk in 9-10 "sawtooth" bursts per minute; see screenshots).

Then for the other 1/2-3/4 of the time, things proceed as I would expect: "estimating disparity" with CPU and GPU at pretty much maximum and minimal (I think no) disk access.

I'm wondering what the bottleneck is during the 1/3 of the time spent loading images, since Metashape is so good, for most of the workflow, at maxing out at least read access if not CPU or CPU+GPU.

I haven't compared TIF/JPG performance with DNG yet, and I'm wondering whether the image format or bit depth makes a difference (the current chunk is processed as DNG from ARW), or whether I could make one by changing hardware (SSD, M.2, or RAM disk) or other system settings. I suspect storage media would make a difference since HDD access is so sporadic, and I was thinking maybe there's an issue with file-index optimization that's slowing things down.

The one reason I think file indexing might play a role: I noticed that disabling photos in a chunk with 45k images takes about 2 seconds, but deleting photos from those chunks instead (trying to make a better-organized, more efficient project structure) takes 5-10 minutes in this project.
