
Messages - Bzuco

Pages: [1] 2 3 ... 13
I have concerns that the Metashape code is not using the L3 cache at all when vectorizing data with modern processor instructions. The processed data are large, stored uncompressed in RAM, and cannot fit even in a large L3 cache. Maybe it would help if the data were processed in smaller pieces; that is a question for the devs.

The 7900X has 32MB of L3 cache per chiplet (two 6-core chiplets, so 2x32MB). In the X3D variant the extra stacked V-cache sits on one chiplet; the L3 is still per chiplet, not shared across chiplets. The X3D cache consumes a significant amount of energy, which is not ideal for all-core CPU frequency if the L3 cache is not being used anyway.

Instead of a 5800X you should grab a 7700X: better manufacturing process, much higher performance per watt. If you have a larger budget, then the 7950X is the best you can get.

If your projects use at least 16Mpix photos and you will be processing depth maps at the highest quality (full photo resolution), then it makes a lot of sense to buy a 5800X/7700X and spend the extra money on a faster GPU.

I am on a similar computer (but on the Intel and Nvidia platform) and 300-1000 photos are totally OK. Better CPUs will help mostly if you are only creating point clouds, where CPU cores make a lot of sense.

General / Re: Mac M3 chip
« on: May 21, 2024, 07:57:16 PM »
If the MSI model is the GT77, then it has a good cooling solution: 4 fans, 8 heat pipes and plenty of exhausts. That is a very good starting point for performance, especially if you undervolt.
With undervolting (very useful on notebooks) you set the lowest stable voltage for the CPU and GPU, which results in much lower power consumption; that means you can run the CPU/GPU at higher frequencies, because the only limit for modern chips is temperature.
Undervolting the GPU is easy with the MSI Afterburner utility. For Intel CPUs you can use the Intel Extreme Tuning Utility.
The reason I mention undervolting is that manufacturers keep voltages higher than necessary, which results in throttling and low performance on notebooks.

What RHenriques mentions about inconsistent processing and freezing on PC is also caused by low photo resolutions during depth map generation. I mentioned the problem in this topic.
It is also caused by loading only a small piece of data to the GPU at once (GPU memory can hold much more data at a time; the devs could look into this). The Mac M3 system is better here, because the GPU shares the same memory with the CPU, so no data transfer is needed. But as you can see from my screenshot in that topic, higher resolution can utilize the GPU much better.
A second help, if you are using Metashape Pro, is that you can create several local network instances and keep all components utilized almost all the time. So it is not something that cannot be resolved :). More in this topic:

If you do not want to touch undervolting, at least for the GPU, then maybe a MacBook would be better for you, but you know, $2000+ more is a lot.

General / Re: Mac M3 chip
« on: May 18, 2024, 11:01:09 AM »
I was taking into account only desktop PCs, as the topic was more generally about the Mac M3 chip.
If portability is very important, then I agree, products with the Mac M3 chip are great, thanks to RAM bandwidth that is ~3-5.5x higher than on any PC (desktop/notebook). But the prices are  :-\.
At the cost of one M3 Max with 128GB, two custom desktop PCs can be built.

General / Re: Decimating and Smoothing
« on: May 16, 2024, 11:14:55 PM »
There is no required order for these operations, because they are unrelated.
Smoothing the model makes sense only if you have unwanted noise on the surface (e.g. caused by high camera ISO) or you want to suppress elevation differences (spikes) on the model for some reason.
The purpose of decimating the model is to preserve the original model shape with as low a polygon count as possible.

General / Re: Mac M3 chip
« on: May 15, 2024, 02:28:06 PM »
At the cost of an M3 Max 64GB you can have a PC that is at least twice as fast.
Otherwise, M3 chips have much better memory bandwidth for CPU tasks in comparison with PCs, but in GPU tasks they are behind.
Projects that would need 128GB of RAM are better computed on PCs.
It would be a waste of money to invest in another 64GB of RAM in a MacBook.

General / Re: optimise textures
« on: May 09, 2024, 08:56:52 PM »
For this purpose I am using RizomUV, where you can design the UV layout of the model manually, semi-automatically or fully automatically.
Their packing algorithm is great, so you can utilize as much texture space as possible. It will also tell you the resulting pixel density, and if you are not satisfied you can increase the texture resolution or add another texture (UDIM).
You can also set a certain pixel density for some model part and freeze that part of the UV layout. RizomUV has a lot of functionality; it is a Swiss army knife  :).

Example of packing UV islands, 7x 8k texture
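For anyone curious where such a pixel-density number comes from, here is a minimal sketch of the calculation (all input values are made up for illustration; RizomUV computes this per UV shell):

```python
# Rough texel-density estimate, the kind of number RizomUV reports.
# Hypothetical inputs: texture resolution in px, the fraction of the
# texture an island spans along one axis, and the real-world size of
# the surface that island maps to.
def texels_per_meter(texture_px, uv_fraction, world_size_m):
    # pixels the island covers along one axis / meters it spans
    return texture_px * uv_fraction / world_size_m

print(texels_per_meter(8192, 0.25, 2.0))  # 1024.0 px/m on an 8K texture
```

If the result is below your target density, you either enlarge the texture or move the island to another UDIM tile, as described above.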

General / Re: How to Set Scale/Level?
« on: May 08, 2024, 10:19:07 PM »
Yes, in the Pro version you can set the distance between two points, so it is easy to scale the whole model.

For me the Pro version is too expensive, and I also need to scale the point cloud and cameras. It would take me the same amount of time to scale everything in Blender, so I decided to scale everything at once in Metashape. It is just a few minutes of work.

General / Re: How to Set Scale/Level?
« on: May 08, 2024, 12:42:29 PM »
Yes, it is possible with the Standard version too, but not as easy as in Pro  :).
What you will need for scaling an object:
- create an object (a thin long box) of a known size in Rhino/Blender (e.g. the width of the building is 15.4m, so your object will also be 15.4m long) and place it starting at position [0,0,0] along e.g. the X axis... so the left side of the box is at [0,0,0] and the right side at [15.4, 0, 0]
- in Metashape, manually rotate the model as best you can to match reality. Use the orthographic view (numpad 5 key). Before rotating, move the center of the region to one corner of the building.
- enable the grid via the top menu Model - Show/Hide Items - Show Grid. You can change the grid plane orientation (XY/XZ/YZ) in Metashape Preferences - Appearance, if needed.
- move the model (the left bottom corner of the building) to the center of the grid [0,0] in the top and side orthographic views. You can use the numpad 4, 6, 8, 2 keys to rotate the view in 15° steps.
- now you can import the box object from Rhino/Blender. Move the center of the region to the [0,0,0] position. This will be your center for scaling and rotating.
- with the middle mouse button and scroll, set the view so the left side of the box object is on the left side of your viewport and the right side of the box object touches the right side of the viewport
- switch from the box object to the building object (in the right workspace pane)
- now scale the model so the right side of the building touches the right edge of the viewport. Everything in orthographic view.
- now the width of your building is 15.4m
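The math behind the viewport trick above is just one uniform scale factor. A minimal sketch, assuming you can read off two model points (e.g. in Blender) and know the real distance between them:

```python
# Minimal sketch of the scaling math: the measured distance between two
# model points vs. the known real-world distance gives a single uniform
# scale factor for the model, point cloud and cameras alike.
# The point coordinates below are hypothetical.
import math

def uniform_scale(p1, p2, real_distance):
    return real_distance / math.dist(p1, p2)

s = uniform_scale((0.0, 0.0, 0.0), (1.4, 0.0, 0.0), 15.4)
print(round(s, 6))  # 11.0 -> apply this factor to everything at once
```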

In orthographic view, when you scroll, your model can start to disappear. This is caused by the near camera clip plane. With Shift + right mouse button (up/down movement) you can adjust the distance of this plane. This way you can set the position of the model at [0,0,0] and also the scale very precisely, because you can zoom the view very close to the left and right points of the building.

This tutorial may sound like too many steps, but it's actually easy, and worth the price difference between Standard/Pro if you just need to scale and orient a model. Maybe I will record a video of how to do this.

General / Re: Processing time DEM (depth maps vs Point cloud)
« on: May 07, 2024, 07:32:07 PM »
I guess the point cloud already has the height information... so creating a DEM from it is easy.
Height values need to be calculated from the depth maps first... so more calculations. But that is only my guess.
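To illustrate why the point cloud route is the cheaper one: a DEM is essentially a per-cell aggregation of point heights. A toy sketch in pure Python, with hypothetical points (real DEM tools also interpolate empty cells, which this skips):

```python
# Toy sketch: a DEM as per-cell height aggregation of an (x, y, z)
# point cloud. Cell size is in the same units as the coordinates.
from collections import defaultdict

def point_cloud_to_dem(points, cell=1.0):
    """Average z per (x, y) grid cell -- the core of point cloud -> DEM."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k: sum(v) / len(v) for k, v in cells.items()}

dem = point_cloud_to_dem([(0.2, 0.3, 5.0), (0.8, 0.1, 7.0), (1.5, 0.4, 9.0)])
print(dem)  # {(0, 0): 6.0, (1, 0): 9.0}
```

Starting from depth maps instead, each depth value would first have to be reprojected through the camera model into world space before this binning step, hence the extra calculations.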

General / Re: NVIDIA A100 or H100 in Metashape
« on: April 27, 2024, 11:55:45 AM »
These cards can benefit from their amount of memory when generating textures for very large 3D models, but the single-precision performance is absolutely not worth the price.

General / Re: Hardware for Large Scale Projects
« on: April 24, 2024, 11:03:46 PM »
Hi Bzuco,

Thank you for the very useful information.

Just one question out of curiosity, is there a reason why the AMD Ryzen Threadripper PRO 7995WX (96 Cores | 192 Threads | 5.1GHz Boost) is not a good choice? Is it purely due to the cost of the product?
The price is of course one of the reasons, but not the only one.

1. All those 79x5WX Threadrippers (16/24/32/64/96-core variants) have their thermal design and power limit set to ~350W. If you don't do anything to decrease power consumption and make the CPU more efficient (undervolting), then with the 7995WX you will really be stuck at the base frequency whenever all cores are utilized.
CPUs also have 3 levels of internal caches whose speeds depend on the CPU core clock. Cache memory bandwidth is lower at a 2.5GHz core clock than at e.g. 4.0GHz... for that reason it is better to spend those 350W on fewer cores at a higher clock speed.

2. I am not sure all those 96 cores would be utilized enough, because you will still be limited by RAM memory bandwidth.

3. At the cost of one 96-core CPU you can buy two PCs with two GPUs, which will give you more performance.
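Point 1 can be put into rough numbers. A back-of-envelope sketch, assuming dynamic power scales roughly with f^3 (P ~ V²f with voltage tracking frequency) and a hypothetical reference point of 10 W per core at 4.0 GHz; the numbers are illustrative only, not measured:

```python
# Toy model: the sustainable all-core clock under a fixed power budget
# falls as the core count rises. The f^3 power scaling and the
# 10 W @ 4.0 GHz reference are assumptions for illustration.
def est_clock(power_w, cores, ref_clock_ghz=4.0, ref_core_w=10.0):
    per_core_w = power_w / cores
    return ref_clock_ghz * (per_core_w / ref_core_w) ** (1 / 3)

for cores in (32, 96):
    print(f"{cores} cores @ 350 W: ~{est_clock(350, cores):.2f} GHz")
# ~4.12 GHz for 32 cores vs ~2.86 GHz for 96 -> near base clock,
# and the caches slow down along with the core clock
```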

General / Re: Hardware for Large Scale Projects
« on: April 24, 2024, 03:12:02 PM »
Intel Xeons are on an older manufacturing process, so AMD Threadrippers are the slightly better option. I would stay with the Ryzen 7950X (16c/32t), but yeah, if money is not a problem, then a Threadripper's extra 16 cores are very useful :).
The Intel i9 24c/32t is just 8 performance cores (16 threads) + 16 slower efficiency cores without additional threads, so it is only a good choice if its price/performance and performance/watt beat AMD; in your case, not a good option at all.

Here is a guide for local network processing I found on YT.
Also check the Metashape Pro PDF manual for the latest command line syntax; the video uses a slightly older one.
For running more instances on one PC, just run node.bat multiple times  ;) and then you can pause and resume workers in the built-in monitor application according to your needs at every stage of processing.
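As a sketch, the commands behind that setup look roughly like this. The flag names (`--server`, `--node`, `--control`, `--dispatch`) are from my reading of the Metashape Pro manual's network-processing section; verify them against the current PDF manual before use:

```python
# Builds (but does not launch) the commands for one local server plus N
# worker nodes on the same machine -- the "run node.bat more times" trick.
# Flag names should be checked against the current Metashape Pro manual.
HOST = "127.0.0.1"      # everything runs on one machine
N_NODES = 4             # raise this for CPU-heavy stages like point detection

server = ["metashape", "--server", "--control", HOST, "--dispatch", HOST]
nodes = [["metashape", "--node", "--dispatch", HOST] for _ in range(N_NODES)]

for cmd in [server] + nodes:
    print(" ".join(cmd))  # launch each with subprocess.Popen(cmd) if desired
```

Pausing and resuming individual workers per processing stage is then done in the Network Monitor, as mentioned above.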

General / Re: Hardware for Large Scale Projects
« on: April 24, 2024, 12:19:30 PM »
With 180k photos the key will be good utilization of the HW. I see the fastest way as using network processing locally on a single machine (maybe two).
If you create several local instances of running Metashape, you can e.g. greatly speed up the point detection phase, which matters with 180,000 photos.
These local instances help keep the CPU and GPU utilized at ~100% all the time without drops.

Instead of buying the AMD Ryzen Threadripper PRO 7995WX, you can spend the money much better and buy two computers, each with a 32c/64t Threadripper 7975WX (higher multicore frequency) and an RTX 4090.

What will be important during processing is manually assigning the number of active nodes in each processing phase.
For example:
First phase: point detection needs maybe 10-20 concurrently running Metashape instances to be fed enough, so you enable 10-20 nodes.
After this phase, point matching is a heavier task for the GPU, so you keep only ~3 nodes active, or maybe a few more if you see the GPU utilization dropping below 100%. More instances would oversubscribe the GPU.
During depth map generation, which is a pure GPU task, maybe two GPUs would be useful.

So my final advice is: buy one computer, and if it is fully utilized during local network processing, then you can buy an identical second one to halve the processing time... if needed. Or you can buy a second GPU if you see that depth map generation is still slow.

It could be beneficial to buy two RTX 4080s instead of one 4090 for performance reasons, but that is only my assumption.

Correct. Or you can try Agisoft De-Lighter, or some 3D painting software and its filters (e.g. Substance Painter).
