
Show Posts



Messages - Bzuco

46
General / Re: Agisoft Metashape 2.0.0 pre-release
« on: November 06, 2022, 11:05:08 AM »
I wanted to know if you have been able to solve the problem that processors with many cores have. In the tests carried out by Puget Systems ( https://www.pugetsystems.com/labs/articles/Metashape-Pro-1-7-2-CPU-Performance-Roundup-2261/ ) it is clear that a Ryzen 7 5800X (8 cores) performs better than a Ryzen 9 5950X (16 cores), which does not make any sense. From what I saw the problem showed up in the tiled model step, but there have also been problems with the dense point cloud. One would expect the 5950X to take roughly half as long as the 5800X, or something close to that, but that is not the case; even the Ryzen Threadripper 3975WX (32 cores) is slower than the 5800X. I have done my own tests, but only on my laptop, which has barely 4 cores, so I could not draw any conclusions. It would be very helpful if you could answer that question. My last question would be: how advisable is it to use DDR5 memory with Metashape, is it worth it?

The 5800X / 5950X / 3975WX have default TDP limits of 105 W / 105 W / 280 W.
The TDP limit causes lower clocks on higher core-count processors. E.g. 105 W on the 5800X means that during processing the processor can run at ~4.0-4.4 GHz, but the same limit on the 5950X means the clocks drop well below 4.0 GHz during processing. For that reason the build dense cloud task does not scale linearly with core count. It is even worse for the 3975WX, because the 280 W limit is not enough for high frequencies on all 32 cores (just ~3.0 GHz or lower).
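
Just to illustrate with the TDP numbers above (a rough Python sketch; real boost behaviour with PBO/PPT limits is more complex, this only shows the per-core power budget during an all-core load):

[code]
# Rough per-core power budget under the stock TDP limits mentioned above.
cpus = {
    "Ryzen 7 5800X":           {"cores": 8,  "tdp_w": 105},
    "Ryzen 9 5950X":           {"cores": 16, "tdp_w": 105},
    "Threadripper PRO 3975WX": {"cores": 32, "tdp_w": 280},
}

for name, c in cpus.items():
    watts_per_core = c["tdp_w"] / c["cores"]
    print(f"{name}: ~{watts_per_core:.1f} W per core during an all-core load")
[/code]

The 5950X ends up with roughly half the per-core budget of the 5800X, which is exactly why its all-core clocks drop below 4.0 GHz.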

The solution is to unlock the CPU power limit in the motherboard BIOS and set the PBO settings and CPU core voltage properly. Then all higher core-count processors scale up much, much better. But I am almost 100% sure this kind of fine tuning was not part of Puget's testing methodology; everything was left at default settings.

Lower CPU core frequencies also lower the memory controller frequency, which increases processing time.
DDR5 can speed up processing a bit, but if it is more expensive than DDR4, buying a better CPU or GPU makes more sense.

47
Can you provide some screenshots of the building mesh and also the original aerial photo with that building?

If High quality depth maps do not give you enough quality, then Ultra High will need a lot of processing power.
I am just a Metashape Standard user, so I am not familiar with tiled models and the SLPK file format.

48
Depth maps quality:
Ultra High - full photo resolution is used, 14204x10652
High - 1/2 of the original resolution (per side) - 7102x5326
Medium - 1/4 of the original resolution - 3551x2663
...
From your photos you can estimate (or quite precisely calculate) how big an area one pixel covers in reality. E.g. if 1 pixel covers 3x3 cm, your maximum precision is 3 cm. If 10 cm precision would be enough for you, then you can set High (6 cm precision) or Medium (12 cm precision) quality.
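
A small Python sketch of that estimate (the per-side downscale factors for Low/Lowest just continue the pattern above; the 3 cm ground sample distance is the example value, not something from your project):

[code]
# Rough ground precision per depth maps quality setting.
gsd_cm = 3.0  # how much ground one pixel of the original photo covers (example value)

# Per-side downscaling of the photos used for depth maps (continuing the pattern above).
downscale = {"Ultra High": 1, "High": 2, "Medium": 4, "Low": 8, "Lowest": 16}

for quality, factor in downscale.items():
    print(f"{quality:>10}: photos at 1/{factor} per side -> ~{gsd_cm * factor:.0f} cm precision")
[/code]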

Build mesh:
Face count - you can't really tell whether High/Medium/Low or a custom value will be enough until you see the final result, so it is a trial-and-error task. A city area needs more polygons and flat areas fewer; every project is different.
Try creating the mesh on some smaller project and you will be able to guess better whether e.g. 500k polygons is enough for a given object/area. Then try to guess what polygon count will be enough for your big project. The downside is that Metashape still creates a much denser model, which is decimated to your desired polycount in the last step.

I often use just the point cloud for presentation ( https://github.com/potree/potree ), because the size of the points can be adjusted and I still have the option to measure everything.

49
Then the 3 hours for depth maps seems correct. During this process the CPU is also quite highly utilized.
Do you have all the project files on the SSD (disk 0) or on the external disk (disk 1)?
How utilized were the GPU/CPU and disk reads/writes during the depth maps calculation? Can you show a larger/longer graph from Task Manager or some other monitoring tool?

The mesh generation process, as I said, is mostly a CPU task, and only some parts are GPU accelerated. If I am correct, Metashape generates a much denser mesh during generation and then decimates it, which in my eyes is a bit of a waste of time.

I use CloudCompare for generating a mesh from the Metashape point cloud, where I can set the density of the mesh in metric units.

Do you need High quality for the depth maps and mesh model? Wouldn't Medium quality also be enough?

50
Make sure you have the GPU enabled in settings and the bottom option disabled (see attachment).
670 depth maps (37 Mpix, if that info from your screenshot is correct) on an RTX 3090 should be calculated in under one hour (my estimate).
The mesh generation process is GPU accelerated in only a few subtasks; the rest depends only on the CPU.

What is your original photo resolution in pixels?

EDIT: I forgot the screenshot in the attachment.

51
https://www.youtube.com/watch?v=cqVnkEeISUE

Try downloading an older version of Intel XTU; the guy in the video is running 7.4.0.26 and it works for him.
He also shows how to undervolt in XTU, and in the HWiNFO monitoring tool the temperatures and CPU package power consumption: max. ~82 W at 4.2 GHz. My desktop 11700F processor draws 122 W at that frequency after undervolting. That is what I was mentioning: the desktop version is on the older 14 nm process (not worth buying for a desktop), while the notebook version is on the better 10 nm process.

Cinebench is just a short test. You will probably be able to run long Metashape CPU tasks at 3.6 GHz all-core (normal fan speed) or ~3.8 GHz (maximum fan speed), in both cases without hitting 90+°C and without frequency throttling. You can also compare what the CPU frequency is now and what it is after undervolting.

EDIT: in the video comment section somebody mentions that on some machines you can also undervolt in the notebook BIOS, if XTU does not work.

52
The 11900xx is not a good choice, because the 11xxx desktop generation was designed for the 10 nm fabrication process but was produced on the old 14 nm process => higher power consumption and temperatures.
A better choice is the 12xxx/13xxx generation of Intel CPUs or AMD 5xxx/7xxx processors.

Quick instructions on how to undervolt the CPU: download the Intel Extreme Tuning Utility; there you will find the core voltage offset option. Start decreasing it in 0.02 V steps until your system freezes. Maybe your notebook will freeze at a -0.08 V offset. Then you need to go back one or two steps to a value where it is stable (e.g. -0.06 V). After each step you should test the system for a few minutes (2-4) to see whether it is stable. The utility has a built-in stability stress test.
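
The search loop in those instructions looks roughly like this (just an illustrative Python sketch; the real adjustment and stress test are done manually in XTU, and run_stable_test here is only a simulated placeholder):

[code]
def run_stress_test(offset_v):
    # Placeholder: in reality run the XTU built-in stress test for a few
    # minutes at this offset and watch for freezes/errors.
    return offset_v > -0.08  # simulated: system freezes at -0.08 V

def find_stable_offset(step=0.02, start=0.0, limit=-0.20):
    # Walk the core voltage offset down until the system becomes unstable,
    # then keep the previous (last stable) value.
    last_stable = start
    offset = start
    while offset > limit:
        offset = round(offset - step, 3)   # -0.02, -0.04, -0.06, ...
        if not run_stress_test(offset):
            return last_stable             # back off to the last stable step
        last_stable = offset
    return last_stable

print(find_stable_offset())  # -> -0.06 with the simulated freeze point above
[/code]

In practice you may want to back off one more step for extra margin.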

Undervolting the GPU: download the MSI Afterburner utility, set the power limit to 70% and increase the core frequency by +80-120 MHz (this value depends on the chip generation and exact model). Maybe you will not be able to change the power limit that much, or at all; it depends on the GPU generation and the specific model. I have never had a Quadro card, so it may be locked by the manufacturer. You will see.

...this is just quick advice on how to undervolt. A better solution is to lock the GPU/CPU frequencies at some value and set a much more precise voltage value, not just offset it.

If you have a very fast network you can add a PC as a node, but I have never run Metashape network processing, so I do not know how the data are distributed and how effective it is compared with just one workstation.
In a single-PC workstation you are limited only by system RAM bandwidth and maybe an old classic HDD; with a modern SSD it should be no problem. The rest, like the CPU and GPU, behaves mostly linearly: more CPU cores and more CUDA cores, or higher frequencies, mean linearly more performance. Only some single-threaded Metashape subtasks can disrupt this linear scaling.
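
Those single-threaded subtasks are basically Amdahl's law in action; a small Python sketch (the serial fractions are assumed illustrative values, not measured Metashape numbers):

[code]
def speedup(cores, serial_fraction):
    # Amdahl's law: only the parallel part of the work scales with core count.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (4, 8, 16, 32):
    print(f"{cores:>2} cores: x{speedup(cores, 0.05):.1f} with 5% serial work, "
          f"x{speedup(cores, 0.20):.1f} with 20% serial work")
[/code]

With 20% serial work even 32 cores give less than a 5x speedup, which is why the scaling is only "mostly" linear.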

53
If all your nodes are notebooks, the only solution is to undervolt both the CPU and GPU and have a room with cold air. But that will still not be enough; for you it would be better to have one PC for 2000€ than 4 weak notebook nodes connected over the network.

We do not know what photo format you are using, what the resolution of the photos is, ... and even then it is hard to tell how long your nodes would take to finish the project.
If you are going to buy something new, choose a graphics card from the NVIDIA RTX 3xxx series; they have even more raw float performance than my example RTX 2060 Super. E.g. the RTX 3080 10 GB version has 20.3 TFLOPS for 800€. With a 16-core/32-thread processor and 64/128 GB of RAM you can finish the project much sooner.

Roughly, the depth maps calculation could be 3-4 times faster, maybe more; I do not know how fast your network is, how long your nodes wait for data, etc.
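
That 3-4x guess is basically just the raw FLOPS ratio. A back-of-the-envelope Python check, assuming four T600-class notebook nodes against one RTX 3080 and ignoring network/CPU/RAM differences, so treat it as an optimistic estimate for the GPU part only:

[code]
t600_tflops = 1.6      # per notebook node
nodes = 4
rtx3080_tflops = 20.3  # RTX 3080 10 GB

ratio = rtx3080_tflops / (t600_tflops * nodes)
print(f"RTX 3080 vs {nodes}x T600: ~{ratio:.1f}x more raw GPU throughput")  # ~3.2x
[/code]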

54
You are probably using high-resolution photos, and in combination with the T600 Quadro the time needed will be really high.
The T600 has only 1.6 TFLOPS of performance. A single desktop low/mid-range RTX 2060 Super gaming card has 7-9 TFLOPS.
The 11800H mobile processor, if it is not cooled well enough, cannot deliver fast processing times.

55
General / Re: texture glitch on low poly model
« on: October 31, 2022, 10:02:27 AM »
I see the glitches in many places where vertices/edges are doubled. Some weld-vertices function could fix this issue.
It could also help to use Quadric Edge Collapse Decimation in MeshLab to avoid glitches when transferring from the high-poly to the low-poly model.

56
General / Re: Shooting interior with Nikon D5300
« on: October 30, 2022, 10:56:39 AM »
Hi Paulo again,

I have never done a pure interior point cloud, but it is interesting for me too, so I made another test in my room.

I think the key to success will be the focus distance. When I set the focus point somewhere in the middle of the camera-to-wall distance, I got a slightly higher image quality score (thanks to slightly sharper objects closer to the camera). And what surprised me is that the difference in score between ISO 100 and ISO 3200 is very small or almost none. Image features are also still recognizable in both cases.
I think ISO 800-1200 would be fully usable if you decide to shoot handheld and the lighting conditions are sufficient. If the Paper Plant object is not large (thousands of photos needed) or you have plenty of time for shooting, then the tripod solution with ISO 100 would be the best choice. What do you think?  :)

57
General / Re: Shooting interior with Nikon D5300
« on: October 30, 2022, 02:43:26 AM »
I hope the results from the Ricoh GR 16 Mpix will be much sharper, because in your 3 example photos from Google Drive the whole image is slightly out of focus: the monitor text logo, light bulb, yellow cable, markers on the table, sofa pattern, the outdoor part behind the glass...

58
General / Re: Shooting interior with Nikon D5300
« on: October 29, 2022, 08:12:44 PM »
Your bottom corner is captured in only two photos and from very similar angles -> not enough information to calculate the correct depth of the points. You need to care more about corners :P
The approach is the same for concave and convex corners.

59
General / Re: Shooting interior with Nikon D5300
« on: October 29, 2022, 08:04:34 PM »
At some point, if you increase the aperture value, the overall image starts to be less sharp. Could this be the reason for the low image quality?
The sweet spot between image sharpness/depth of field and image quality is around F7-11, but it depends on the lens. Can you also check that your lens and camera can produce a sharp image (no focus issue) in daylight conditions?

Make a test: shoot a series of images (e.g. a newspaper page with small letters) and change the F number from the minimum up to e.g. F15. Check the sharpness in the corners (and also chromatic aberration) and the overall image sharpness. Choose the F number at which you are satisfied with the corners (no chromatic aberration) but also with the overall sharpness in the middle.

My test is on a business card, and my lens is still able to achieve sharpness in the middle and in the corners at around F7-8. At F11 and beyond, the overall image is less sharp.

60
General / Re: Shooting interior with Nikon D5300
« on: October 29, 2022, 05:21:35 PM »
On a Canon you can set ISO to auto or to any manual value; it is independent of any priority shooting mode. Is it different on your Nikon?
