Messages - Lesca

1
Feature Requests / Align/snap model and region to axes
« on: March 03, 2021, 02:03:45 PM »
Hello all!

I've done some research on this issue before, and it seems to have been a problem for some users for quite some time, but it hasn't really been addressed or fixed so far. I'm also not sure whether there is a Python solution to this problem; if so, please let me know. A user-friendly integration into the UI would then of course be very welcome!

In my experience processing hundreds of scans in Photoscan/Metashape, I cannot remember a single model that came out correctly aligned.
Doing this by hand is frustrating, time-consuming, and imprecise. I've seen MeshLab and Cura do this by selecting a group of faces and automatically aligning them to the axes. Capturing Reality seems to have had this built into their software for years (but I don't use it for various reasons).
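The select-faces-and-align approach those tools use can be sketched independently of any particular package: fit a plane to the selected vertices and rotate the model so the plane normal becomes the Z axis. A minimal numpy sketch (all function names are mine, not from any of the programs mentioned):

```python
import numpy as np

def plane_normal(points):
    """Fit a plane to an (N, 3) point array via SVD; return its unit normal."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def rotation_aligning(a, b):
    """Rotation matrix mapping unit vector a onto unit vector b (Rodrigues)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):
        # Opposite vectors: rotate 180 degrees about any perpendicular axis.
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + k + k @ k / (1.0 + c)

# Example: points on a tilted plane, rotated so the plane faces along Z.
pts = np.array([[0, 0, 0], [1, 0, 0.5], [0, 1, 0.0], [1, 1, 0.5]], float)
n = plane_normal(pts)
R = rotation_aligning(n, np.array([0.0, 0.0, 1.0]))
flattened = pts @ R.T   # all points now share (up to sign) one Z value
```

Applying the resulting rotation to the chunk transform would be the Metashape-specific part; the geometry itself is this simple.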

Right now, I'm manually using a method similar to this older forum post: https://www.agisoft.com/forum/index.php?topic=5463.0
Use the below technique to orient your scans upright. To set a real-world scale and/or a specific location (such as 0,0,0) within Photoscan you will need the Pro version so that you can set markers/scale bars; otherwise you will have to do it externally.

You can use the Rotate Object tool on unreferenced chunks to change their orientation in the coordinate system.

My preferred method of using it is:

1. Hit numpad 7 to go to top view.
2. Invoke the Rotate Object tool and rotate the model until it looks correct in top view.
3. Hit space bar to return to 'normal' navigation mode.
4. Hit numpad 1 to go to front view.
5. Hit space again to return to Rotate Object mode and fine-tune the rotation in front view.
6. Hit space to return to normal navigation mode.
7. Hit numpad 3 to get a side view.
8. Hit space to get back to Rotate Object mode and further fine-tune the rotation if required.
9. Hit space to get back to normal navigation mode.
10. Hit numpad 7, 1 and 3 in sequence to check that the model is aligned, and repeat the above if necessary.
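Mathematically, the procedure above boils down to composing one small corrective rotation per view into a single rotation matrix. A numpy sketch of that composition (the angles are arbitrary illustration values):

```python
import numpy as np

def rot_x(deg):
    """Rotation about the X axis, angle in degrees (front-view correction)."""
    a = np.radians(deg); c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(deg):
    """Rotation about the Y axis (side-view correction)."""
    a = np.radians(deg); c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(deg):
    """Rotation about the Z axis (top-view correction)."""
    a = np.radians(deg); c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Composed right-to-left: first the top-view fix, then front, then side.
R = rot_y(2.0) @ rot_x(-3.5) @ rot_z(14.0)
```

A semi-automatic tool would only need to estimate these three angles instead of having the user eyeball them view by view.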

I would therefore like to see a semi-automatic or automatic process for aligning the model and region on the axes.

I understand that sometimes it can be difficult to guess the alignment of images taken upside down or rotated 90°.
But in my case I often use a turntable, where the orientation is very clear from the images. Even using the images as a reference (rather than the sparse point cloud) for orientation would be sufficient for me. But a sophisticated solution that can be applied to the mesh or dense point cloud would be greatly appreciated!

A function to rotate the model in fixed steps, e.g. 90°, while holding down a modifier such as the ALT key would also be great!
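Snapping to 90° steps can also be expressed as rounding a rotation matrix to the nearest rotation built from right-angle turns (a signed permutation matrix). A sketch of that idea, with a hypothetical helper name of my own:

```python
import numpy as np

def snap_to_90(R):
    """Round a rotation matrix to the nearest rotation composed of 90-degree
    steps, i.e. the closest signed permutation matrix with determinant +1."""
    S = np.zeros_like(R)
    for j in range(3):
        i = int(np.argmax(np.abs(R[:, j])))  # dominant axis of this column
        S[i, j] = np.sign(R[i, j])
    if not np.isclose(np.linalg.det(S), 1.0):
        raise ValueError("input is not close to a 90-degree-step rotation")
    return S

# Example: a model rotated 85 degrees about Z snaps to a clean 90 degrees.
a = np.radians(85.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
snapped = snap_to_90(R)
```

The determinant check rejects ambiguous inputs (e.g. a rotation sitting near 45°, where two columns would collapse onto the same axis).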

Thanks a lot in advance!

2
General / max processing threads used
« on: March 03, 2021, 01:13:41 PM »
Hello everyone!

I am using Metashape 1.7.1 on a Ryzen 3950x which has 16 cores / 32 threads.
Even though the 1.7 update brought an almost 2× reduction in mesh-generation time with depth maps, I think there is still some performance left unused.

When the "generate mesh" process starts, the depth maps are loaded in groups. Since my maps are quite large, this takes about 30 seconds per group.
Code:
2021-03-03 09:39:56 Generating mesh...
2021-03-03 09:39:56 Compression level: 1
2021-03-03 09:39:56 Preparing depth maps...
2021-03-03 09:39:56 605 depth maps
2021-03-03 09:39:56 scheduled 31 depth map groups (605 cameras)
2021-03-03 09:39:56 saved camera partition in 0.011773 sec
2021-03-03 09:39:56 loaded camera partition in 0.000437 sec
2021-03-03 09:39:56 14/32 threads used (58 MP depthmap and 32768 MB target memory)
2021-03-03 09:40:25 saved group #1/31: done in 28.6717 s, 20 cameras, 890.035 MB data, 42.3438 KB registry
2021-03-03 09:40:25 loaded camera partition in 0.000426 sec
2021-03-03 09:40:25 14/32 threads used (58 MP depthmap and 32768 MB target memory)
2021-03-03 09:40:59 saved group #2/31: done in 33.972 s, 20 cameras, 1226.11 MB data, 42.3438 KB registry
2021-03-03 09:40:59 loaded camera partition in 0.00028 sec
2021-03-03 09:40:59 14/32 threads used (58 MP depthmap and 32768 MB target memory)
2021-03-03 09:41:37 saved group #3/31: done in 38.3107 s, 20 cameras, 1337.53 MB data, 42.3438 KB registry

Now it seems that only 14 of the 32 threads are used for this step. What is the reason for that, and is there a tweak to raise the limit?
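I can't know Metashape's internal rule, but the figures in the log ("14/32 threads used (58 MP depthmap and 32768 MB target memory)") look consistent with a memory-driven cap rather than a hard thread limit. A purely speculative reconstruction, where the 40 bytes per depth-map pixel is a guess chosen only so the arithmetic matches the log, not a documented value:

```python
def depth_map_threads(depthmap_mpix, target_memory_mb, cpu_threads,
                      bytes_per_pixel=40):
    """Speculative sketch of the '14/32 threads used' limit: cap the worker
    count so the combined per-worker depth-map buffers fit the memory target.
    bytes_per_pixel is a fitted guess, not a documented Metashape constant."""
    # 1 MP at 1 byte/pixel is 1 MB, so MP * bytes/pixel gives MB per worker.
    per_thread_mb = depthmap_mpix * bytes_per_pixel
    n = target_memory_mb // per_thread_mb
    return max(1, min(n, cpu_threads))

# 58 MP maps against a 32768 MB target reproduce the logged 14 threads.
threads = depth_map_threads(58, 32768, 32)
```

If the cap really is memory-based, raising the target memory setting (rather than a thread count) would be the tweak to look for.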

Another thing I noticed is that some processes, such as loading and saving images, seem to be single-threaded, leaving the 1.4 GB/s bandwidth of my NVMe SSD almost unused. I see no reason why multi-threading shouldn't be possible here. Can we expect updates on this front?
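Overlapping file reads is straightforward in principle; a generic sketch (not Metashape code) of feeding an NVMe drive from a thread pool, since each read releases the GIL while it blocks on the disk:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_files(paths, workers=8):
    """Read many files concurrently. Each blocking read yields to the other
    threads, so several requests are in flight at once and the drive's
    command queue stays fed instead of draining between single reads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(Path.read_bytes, [Path(p) for p in paths]))
```

Results come back in input order, so downstream per-image processing doesn't need to change.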
