Messages - Bzuco

1
General / Re: Managing non-aligned photos
« on: July 09, 2025, 01:50:38 PM »
Hi, 81 and 71 points are pretty low totals of matches; you should have hundreds of points on 20 MPix photos. What alignment settings are you using? Have you also tried Guided image matching, which is designed for forest scenarios?
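For reference, guided matching can also be enabled when re-running alignment from the Python console; a minimal sketch (the limits below are example values only, adjust them to your project):

```python
import Metashape

doc = Metashape.app.document   # currently open project
chunk = doc.chunk

# Re-run matching with guided image matching enabled; limits are example values.
chunk.matchPhotos(downscale=1,                 # High accuracy
                  generic_preselection=True,
                  keypoint_limit=60000,
                  tiepoint_limit=0,            # 0 = unlimited tie points
                  guided_matching=True)
chunk.alignCameras(adaptive_fitting=True)
doc.save()
```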

2
General / Re: Processing Hung up
« on: July 05, 2025, 11:52:13 AM »
If you see zero GPU/CPU activity during the depth maps stage, there is no need to wait 7 hours. Just restart the stage and it will probably finish without issue.
Something similar happened to me at another stage, and after restarting Metashape it was OK.

3
General / Re: Tiled model generation stalls
« on: July 02, 2025, 10:30:14 PM »
Yes, use a crossover Ethernet cable, or if you do not have a crossover cable, use a normal one and enable something like the Auto-MDIX option on the network card (the cards may be able to set it automatically).

4
Here is a tip for processing large projects: https://www.youtube.com/watch?v=BYRIC-qkZJ8
Instead of buying a Threadripper, you can buy another PC (with performance similar to what you have) plus another MS Pro license and split the project between the two computers. And not only that, you can run more than one local worker node for better CPU/GPU utilization during certain stages.

The 7995 is more expensive because, apart from having more cores, it is also manufactured on a newer process, so its performance per watt is better.
You could also buy an RTX 5090; its VRAM and performance are worth it.
The best advice before buying new hardware is to monitor CPU/GPU/RAM/VRAM utilization during the whole process on your current PC. Then you will know whether you need more CPU cores, more RAM/VRAM, or a faster GPU... it also depends on the project settings you are using.
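If you want a log instead of just watching Task Manager, something like this works (a rough sketch, assuming an NVIDIA GPU with nvidia-smi in the PATH and the psutil package installed; Task Manager/HWiNFO give you the same numbers interactively):

```python
import csv, subprocess, time
import psutil  # pip install psutil

# Log CPU/RAM/GPU/VRAM utilization every 5 seconds while Metashape is processing.
# Stop with Ctrl+C.
with open("utilization_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "cpu_%", "ram_%", "gpu_%", "vram_MiB"])
    while True:
        cpu = psutil.cpu_percent(interval=5)          # averaged over the 5 s interval
        ram = psutil.virtual_memory().percent
        try:
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
                 "--format=csv,noheader,nounits"], text=True)
            gpu, vram = [v.strip() for v in out.splitlines()[0].split(",")]
        except Exception:
            gpu, vram = "", ""                        # no NVIDIA GPU or nvidia-smi not found
        writer.writerow([time.strftime("%H:%M:%S"), cpu, ram, gpu, vram])
        f.flush()
```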

5
General / Re: Tiled model generation stalls
« on: July 02, 2025, 12:53:40 PM »
Paulo, here is a video describing how to use local network processing: https://www.youtube.com/watch?v=BYRIC-qkZJ8 - the commands have changed a little bit, so check the MS manual
- this should get rid of the bad allocation error for you
- for connecting 2 laptops, use at least a 1 Gbit switch/router
- you can spawn/run more than one worker node on one laptop; it depends on the task (and its memory requirements) - see the rough Python sketch below
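For completeness, network jobs can also be submitted from the Python API instead of the GUI. This is only a rough sketch, assuming a server already running at 192.168.0.10 (example IP) and a project stored under the shared network root; the NetworkClient method names may differ between Metashape versions, so check the Python API reference:

```python
import Metashape

# Connect to the network server (IP is an example).
client = Metashape.NetworkClient()
client.connect("192.168.0.10")

doc = Metashape.app.document   # the project must live on the shared storage
chunk = doc.chunk

# Build a depth maps task and send it to the worker nodes.
task = Metashape.Tasks.BuildDepthMaps()
task.downscale = 2             # High quality

# createBatch may expect the project path relative to the network root.
batch_id = client.createBatch(doc.path, [task.toNetworkTask(chunk)])
client.setBatchPaused(batch_id, False)   # start processing
print("submitted batch", batch_id)
```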

6
General / Re: Tiled model generation stalls
« on: July 01, 2025, 01:45:59 PM »
Is it true that the Vexcel Osprey has a 247 MPix sensor?
CPU texture blending will be veeeeery slow.
I don't remember how resource-hungry building the tiled model is.
Try local network processing on your laptop.

7
General / Re: Ryzen AI max+ pro 395 compatibility/performance?
« on: June 25, 2025, 12:22:50 PM »
If you run it at 120 W and with LPDDR5X-8000 memory modules, it will be a really good choice. CPU performance according to Cinebench is outstanding, and GPU performance is roughly at RTX 2070 level.
I did not test it, but it should be supported out of the box without problems.

8
How do the scans look if you change the View Mode to Model Confidence?
I'm guessing it'll be a series of 'good' hotspots along the walls corresponding to the shooting positions?
Oblique angles contribute to several nearby shooting positions, so the whole facade easily ends up in dark blue in the confidence view mode, which means ~100 combined depth maps.

9
From the manual:
Interior (Correct) picture - there is one big downside: this will work only for small rooms, e.g. 3x4 m, and only if you need just the basic shape of the room. If you also want to capture details of the walls, you need to be close to the wall you are shooting, not on the opposite side of the room (that is too far to get details from the walls).

Facade (Correct) - this approach is advised because it comes mostly from the drone scenario, where capturing the positions of objects is most important and how tall they are is slightly less important. Keeping 60% overlap is critical; if you miss that, you lose continuity...a small downside.

Facade (Incorrect) - angles that are parallel to the wall are useless (for the facade itself). But angles 30-45° from nadir are very useful for capturing holes and deep window niches, because those angles capture the depth better. These angles also work as a fix for the 60% overlap, which is in this case not needed. All shots are in focus, and at apertures of f/7 and above they are also sharp in the corners, so no problem at all; it is a faster and also more comfortable solution. Better than using a wide-angle lens.

Interior (Incorrect) - the example with only 2 positions and shots at 45° steps will also work in a small room, but it is not advised, because people could start to think that several shots in 360° from one spot are enough for photogrammetry :).

There are also scenarios where 360° shots from several positions are the best choice. It depends on the environment, the objects, and also the time you have for shooting. Metashape is very powerful and flexible, so it is not always necessary to follow the recommended practices exactly :).

10
@jkova96
Check this video: https://drive.google.com/file/d/1xYyYWlCdZf_9hIYI_eBCAQwtl9y5Jk-T - it is from my project. Most of the time I am shooting facades. There is a 1 m grid, so ~2 m distance between positions and ~2 m from the facade. I always use this approach. If you can't get 2 m away from the facade in a narrow street, then the distances between positions will be shorter, ~1.5 m...it also depends on the lens you are using. I am using a 24 mm lens.
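If someone wants to sanity-check the spacing for their own lens and distance, here is a back-of-envelope calculator (assuming a full-frame 36 mm wide sensor and a flat facade; oblique shots add extra coverage on top of this):

```python
# Rough footprint / side-overlap estimate for facade shooting.
def facade_overlap(focal_mm, distance_m, step_m, sensor_width_mm=36.0):
    footprint_m = distance_m * sensor_width_mm / focal_mm  # horizontal coverage on the facade
    overlap = 1.0 - step_m / footprint_m                   # overlap between neighbouring positions
    return footprint_m, overlap

for step in (2.0, 1.5, 1.0):
    fp, ov = facade_overlap(focal_mm=24, distance_m=2.0, step_m=step)
    print(f"step {step} m -> footprint {fp:.1f} m, side overlap {ov:.0%}")
```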

11
In the user manual the 360 degree method is marked as incorrect mostly because it cannot produce enough photos from different angles from only a few shooting positions.
There is a universal rule: every surface part you want to have in the point cloud in good quality should be captured from at least 3 different angles. Distance from the surface also matters, and it depends on the resolution of the photos, the focal length of the lens, and how precise the final point cloud needs to be.
In your case you are limited by the narrow street between the buildings. If you need only one building, then set the drone position as close as possible to the opposite building's facade and take only 5 shots within a <90° range (at 18° steps), another 5 shots in the same range but looking slightly up, and another 5 looking slightly down. Then move horizontally and vertically 1-1.5 m to the next shooting position.
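Just to make the pattern explicit, a tiny sketch that lists the shots per position (the ±15° pitch is only an illustrative value for "slightly up/down"):

```python
# 5 yaw steps of 18 deg (covering a bit under 90 deg) at three pitch bands.
yaw_steps = [i * 18 for i in range(5)]   # 0, 18, 36, 54, 72 degrees
pitch_bands = [0, 15, -15]               # level, slightly up, slightly down (illustrative)

shots = [(yaw, pitch) for pitch in pitch_bands for yaw in yaw_steps]
print(f"{len(shots)} shots per position")
for yaw, pitch in shots:
    print(f"  yaw {yaw:>2} deg, pitch {pitch:+d} deg")
```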

According to the screenshots you provided, f/2.8 and 1/15 - 1/30 s might not be ideal. Try increasing the ISO to 200 or 400 and setting the aperture to f/4.0 for sharper images in the corners. The sparse point cloud is already looking good so far.

12
General / Re: How to best scan leaves/fern laying on a surface
« on: May 05, 2025, 08:56:30 AM »
Put that fern leaf on something like black/white newspaper so that Metashape can better calculate the camera positions and automatic lens distortion, because right now the misalignment comes mostly from the uniform white grid pattern of the surface and the very similar structure of the fern leaves.

13
General / Re: Points in point cloud disappear when zooming in and out
« on: March 31, 2025, 07:33:23 PM »
@sbond With shift + right mouse button you can adjust the near clipping plane. I use this very often; it is useful in orthographic views when you want to hide everything closer than a certain distance.

14
General / Re: Depth maps generation performance tip
« on: February 08, 2025, 12:18:48 PM »
Updated the original post, added Tip 2.

15
General / Depth maps generation performance tip
« on: February 07, 2025, 03:04:00 PM »
Tip 1:
Recently I switched from an RTX 2060 Super to an RTX 4070 Ti Super (core @ 3060 MHz), and I noticed immediately that my CPU (Intel 11700F, 8c/16t @ 4.4 GHz) is not performant enough for the new GPU during the depth maps generation phase.

So I played a little bit with one tweak, "BuildDepthMaps/max_gpu_multiplier". If I am correct, this parameter controls how many concurrent kernels run on the GPU. The default value is 2...you can see this as [GPU 1], [GPU 2] in the logs.
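If someone wants to repeat the measurement, here is a minimal sketch for the Python console (quality and values are just examples; the tweak can also be set in the GUI under Advanced preferences -> Tweaks, and depending on the version it may need to be set before the task starts):

```python
import time
import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# Time depth maps generation with different max_gpu_multiplier values.
for value in ("2", "4", "6", "8"):
    Metashape.app.settings.setValue("BuildDepthMaps/max_gpu_multiplier", value)
    start = time.time()
    chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)  # 4 = Medium quality
    print(f"max_gpu_multiplier={value}: {time.time() - start:.1f} s")
```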

I quickly measured the time of one or two batches at medium, high, and ultra high quality with different parameter values (2/4/6/8)...[GPU 1]..[GPU 8].
For my hardware and project (handheld ground photogrammetry, 435x 18 MPix photos), it makes sense to increase the value to 4 at medium quality, to 6 at high, and to 8 at ultra high.
My depth maps processing times decreased to ~92.8% for medium, ~86.3% for high, and ~84.8% for ultra high.

In the attachment (one batch at ultra high depth maps quality) you can see the CPU and GPU utilization; the GPU graph is more compact and with higher bars. Higher tweak values also mean more VRAM allocation.

I think values higher than 6-8 can make a difference only in specific cases, such as very unbalanced CPU/GPU systems, low/very low depth maps quality, or projects with very high-MPix photos.

Let me/us know what speed improvements you see in your projects with different tweak values; you can also post what CPU and GPU you have.

UPDATE:
Tip 2:
I tried another tweak: "main/gpu_enable_opencl" set to true and "main/gpu_enable_cuda" set to false.
This gives me another performance boost. I tested only ultra high depth maps quality with max_gpu_multiplier 2 and 8. Here are the results:
CUDA, 2x - 77.6s ...default Metashape, without tweaks
CUDA, 8x - 69.1s
OpenCL, 8x - 54.7s ...time decreased to 70.48% :)
...second screenshot in the attachment.
...what is interesting in my case is the GPU utilization, which fluctuated only between ~45-75%. GPU memory allocation was also much lower than with CUDA.
I will also try the selecting pairs and matching points stages to see if I get some boost there.
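For anyone scripting it, the same two tweaks can be flipped from the Python console (same keys as in Advanced preferences -> Tweaks; depending on the version, a restart may be needed before they take effect):

```python
import Metashape

# Prefer the OpenCL backend over CUDA for GPU processing.
Metashape.app.settings.setValue("main/gpu_enable_cuda", "false")
Metashape.app.settings.setValue("main/gpu_enable_opencl", "true")
# Combine with the Tip 1 tweak if desired.
Metashape.app.settings.setValue("BuildDepthMaps/max_gpu_multiplier", "8")
```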
