Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Bzuco

Pages: 1 [2] 3 4 ... 12
16
General / Re: AMD Ryzen 9 7950X3D vs Intel Core i9-13900K ?
« on: July 12, 2023, 11:15:12 AM »
I have read the scientific article and checked their methods and graphs, and here are my observations:

1. They tested a GPU, not a CPU.
There are almost no OS freezes in that case, because the graphics driver is restarted and recovers when GPU computation errors are caused by low voltage.
With CPU computation errors, you notice a blue screen/system restart immediately, because the CPU manages the whole system.

2. They chose a GTX 980, which has a 1126 MHz base and 1216 MHz boost frequency; those are Nvidia's stock values, and each of Nvidia's board partners sets the boost frequencies even higher. In the article they started checking for errors from a core frequency of 1404 MHz up to 1806 MHz, which counts as fairly heavy overclocking on the default cooling solution.
You can clearly see (graphs in figures 3 and 4) that if you apply a ~7.44% voltage reduction (a nice decrease in power consumption) at the 1404 MHz maximum boost frequency (as set by a partner), there are no errors even at this scientific level of measurement. It is a pity they did not publish exact voltage values.
The higher you go with frequency, the less room there is to undervolt, because the voltage/frequency points for those frequencies are not set in the GPU's BIOS. That scientific undervolting test does not make much sense for overclocked frequencies.

What I am advising cyrilp is an easy and safe method where these scientific testing procedures are not needed: the OS-freeze test, plus a small voltage reserve after finding the correct CPU voltage, is 100% enough for a stable system with correct calculations. Otherwise, as I said, I would have been seeing constant errors and strange results in everything I do on my PC, which has been running undervolted CPUs and GPUs for almost 15 years.

My advice on CPU undervolting is not falsehood.

RAM errors caused by electromagnetic radiation are common when users put their notebooks into sleep overnight (the RAM modules are still under voltage) instead of powering off. After two or more days, strange things happen.

17
General / Re: AMD Ryzen 9 7950X3D vs Intel Core i9-13900K ?
« on: July 11, 2023, 11:47:33 AM »
All of my CPUs have always been undervolted, and I have never had problems with calculation errors, even during several hours of computing.

The probability of errors is several times higher in system RAM modules affected by cosmic and electromagnetic radiation after several hours, if the modules don't have ECC. In that case, the money saved on CPU cooling can be spent on RAM modules with ECC, like server computers have. Ryzen CPUs support this feature.

If you have exaggerated concerns and want to be 1000% sure, you can always leave the voltage slightly higher than is necessary for proper operation of the processor. That is still a huge difference and a big saving in energy and heat compared to what we get from the factory with default settings. Manufacturers keep a huge reserve because there is no time to test each chip for its exact voltage needs.

Chips used in smartphones have much, much smaller voltage reserves than desktop CPUs, and they work for days without errors.

18
General / Re: AMD Ryzen 9 7950X3D vs Intel Core i9-13900K ?
« on: July 11, 2023, 09:52:14 AM »
Undervolting is a safe procedure, because the goal is to find the lowest voltage which does not cause freezing, not the first one which causes issues. It is also advisable to keep a 0.02 V reserve after finding the right voltage value.
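The search described above can be sketched as a simple loop. This is only an illustration: `is_stable` is a hypothetical stand-in for actually stress-testing the CPU at a given voltage, and the 0.01 V step and the simulated 1.10 V stability threshold are made-up values.

```python
def lowest_stable_voltage(start_v, is_stable, step=0.01, reserve=0.02):
    """Step the core voltage down until instability appears, then keep
    a small reserve above the last stable value, as the post advises."""
    v = start_v
    while is_stable(round(v - step, 3)):
        v = round(v - step, 3)  # round to avoid float drift
    return round(v + reserve, 3)

# Simulated chip that is stable at or above 1.10 V (a made-up threshold):
print(lowest_stable_voltage(1.25, lambda v: v >= 1.10))  # -> 1.12
```

In practice each `is_stable` check is a stress test of several minutes, which is why doing this from Ryzen Master is so much faster than rebooting into the BIOS after every change.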

Using ECO mode only helps to decrease power consumption and temperatures, but it also massively decreases performance, because it is not based on undervolting principles.
https://youtu.be/RlMq1VEWNIM?t=84   that Cinebench run is a good illustration of how multicore performance drops after enabling ECO mode.

What is even better is to lock the multicore frequency to a certain fixed value, e.g. 4.4/4.6/4.8 GHz, and not use the automatic single-core boost frequencies... this makes undervolting much easier and more effective.

Finding the lowest voltage for the CPU is much easier in the Ryzen Master app than adjusting values in the BIOS and restarting the computer after each change. So it is 100% worth doing, otherwise we are significantly overpaying for the performance. With undervolting we can also save money, because a much more expensive water cooling solution is not needed.

19
General / Re: AMD Ryzen 9 7950X3D vs Intel Core i9-13900K ?
« on: July 10, 2023, 03:28:30 PM »
Hi,

The AMD 7xxx is the better choice, because the 13900K is a mix of 8 performance cores and 16 efficient cores (without multithreading), and to match AMD's performance it needs to be clocked almost to its maximum boost frequencies, which produces a lot of heat.
An even better choice is the AMD 7950X (non-3D), because that extra 64 MB of cache will most probably not improve performance in Metashape at all.
That cache is only good for certain specific games, which work heavily with data in the CPU caches.

To get the maximum potential performance you will need to undervolt the CPU in the BIOS or in AMD's software, because at default settings you can easily get stuck around the base multicore frequencies, with high temperatures and high power consumption... and that means lower performance in the end. This applies to all modern CPUs from the last ~10 years.

20
The small focal length number of lenses used on small camera devices is much higher after multiplying it by the crop factor... my phone's wide camera uses a 5.6 mm lens, but after correcting with the crop factor it is 26 mm in full-frame equivalent. So for Metashape it does not matter whether a phone or a DSLR was used.

Only if you take photos on your phone with an ultra-wide lens (e.g. 112° field of view, ~14 mm full-frame equivalent) or a fisheye lens do you need to change the camera type to fisheye in the camera calibration dialog.
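The crop-factor arithmetic from the post can be written out as a one-liner. The crop factor of ~4.64 is an assumption back-derived from the 5.6 mm / 26 mm pair mentioned above, not a published spec:

```python
def full_frame_equivalent(focal_mm: float, crop_factor: float) -> float:
    """Full-frame equivalent focal length = physical focal length x crop factor."""
    return focal_mm * crop_factor

# The 5.6 mm phone lens from the post, with an assumed ~4.64 crop factor:
print(round(full_frame_equivalent(5.6, 4.64), 1))  # -> 26.0
```

The crop factor itself is just the full-frame sensor diagonal (~43.3 mm) divided by the phone sensor's diagonal.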

21
Hi JohnyJoe, small camera sensors don't have good image quality in terms of noise, and the noise reduction used on mobile phones does not help to produce a cleaner and more detailed point cloud.
If you want better quality with these devices, you need to get much closer to the objects when taking photos and build only a medium-quality dense cloud. This procedure will "eliminate" both the noise problem and the image sharpness lost to denoising.

22
General / Re: PC for Metashape in 2023
« on: June 24, 2023, 08:08:12 PM »
@andyroo
Can you check how the GPUs were utilized during the depth maps calculation?
In my attachment the top graph is medium quality (half the time the GPU is not utilized) and the bottom is high quality (much, much better utilization)... I am using an RTX 2060 Super and 18 Mpix photos.
How big (in resolution) are the photos used in the Puget bench?  ...my GPU graphs are from my own project.

23
General / Re: Problem saving on external hard drive
« on: June 22, 2023, 03:03:31 PM »
Hi Cordially, you can try disabling "turn off this device to save power". Right-click the USB icon in the taskbar near the clock, choose Open Devices and Printers, and find your external 2 TB drive. Right-click on it and choose Properties.

24
General / Re: GeForce v RTX (ex Quadro)
« on: June 13, 2023, 01:25:06 PM »
In the case of Metashape, you are overpaying for performance if you use Quadro cards.
You can compare the GeForce RTX 20/30/40 series here: https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_30_series   . What matters in the table is only the single-precision floating-point performance column (number of CUDA cores * GPU core frequency) and the memory bandwidth column. These two GPU parameters mean a lot for Metashape calculations.
GPUs from the RTX 40 series have a significantly larger L2 cache than the RTX 30 series, but only the Metashape programmers can tell you whether that cache is utilized... otherwise that parameter is not important. In terms of memory speed, a higher memory bandwidth number can increase calculation speed. So check the TFLOPS number first (highest weight in the decision) and then the memory bandwidth number (lower weight in the decision).
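The FP32 number in that Wikipedia table can be reproduced from the core count and boost clock. Note the conventional estimate counts 2 FLOPs per CUDA core per clock (one fused multiply-add), a factor the post's "cores * frequency" shorthand leaves implicit:

```python
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Peak FP32 throughput estimate: 2 FLOPs (FMA) per core per clock."""
    return cuda_cores * 2 * boost_ghz / 1000.0

# RTX 3080 (8704 CUDA cores, ~1.71 GHz boost), matching the table's ~29.8 TFLOPS:
print(round(fp32_tflops(8704, 1.71), 1))  # -> 29.8
```

This is a theoretical peak; real depth-map throughput also depends on the memory bandwidth mentioned above.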

Your projects have 20-27 Mpix images, and if you are calculating ultra-high or high depth maps, it makes sense to buy a 4090. But if you choose medium quality, the 4090 will probably not be utilized enough... check this post and image: https://www.agisoft.com/forum/index.php?topic=15405.msg67021#msg67021  .

A Quadro P2000 is now very, very weak for your needs.
I would advise an RTX 3080/3080 Ti or a 4070 Ti/4080. If money is not a problem you can try the 4090... but it is a little risky at that price.

25
General / Re: PC for Metashape in 2023
« on: June 09, 2023, 01:46:09 PM »
The 13700/13900 have 8 "big" cores with HT (Hyper-Threading) and the rest are "small" cores without HT.
You will benefit more from the AMD Ryzen 9 5950X, or the newer 7950X, both of which have 16 "big" cores with SMT (Simultaneous Multithreading).

GPU: more CUDA cores matter, but you need to feed them enough data. So a 4070 Ti, or better a 4080, makes sense if you will calculate depth maps on ultra-high or high. If you choose medium or lower quality, you will see the GPU in an idle state (transferring data to the GPU) more than in a calculating state. It also depends on the resolution of the photos you take.

26
4x 4096x4096 should be enough to cover the previous single 8K texture.
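The arithmetic behind that claim: an 8K (8192x8192) texture holds exactly the same number of pixels as four 4096x4096 tiles, so no detail is lost by splitting it.

```python
# One 8192x8192 texture vs. four 4096x4096 tiles: identical pixel counts.
print(8192 * 8192 == 4 * 4096 * 4096)  # -> True
```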

27
In that case, another option is to create more smaller textures in Metashape rather than fewer larger ones. How many textures does your model have? One big one or several smaller ones? Which AR device are they using? ...brand/model...
If I remember correctly, the maximum size limit per texture was 4096x4096 px on my older mobile phone. So several 4K textures, or even smaller 2K ones, would be the better option for phones.

EDIT: some device examples from Google: https://i.stack.imgur.com/S6DZw.png

28
It can be a low-res texture in Sketchfab with higher compression, or your mobile/mobile browser does not fully support the WebGL that Sketchfab uses.
Older mobile phones have a much lower maximum texture size limit. I checked your Sketchfab model on a PC and also on a Zenfone 8, and the texture is clean, sharp, and without any issues.
The texture in your screenshot seems to be half resolution only.

If you want to see a better result in Cinema 4D, you need to find an averaging-normals function somewhere. The attachment shows the difference between default and averaged normals on a non-flat surface.
A second option is to smooth/relax those polygons so they are no longer bumpy/noisy.

29
What is causing the "issue" in Cinema 4D are the vertex normals. Enable the vertex normals geometry option in Sketchfab and you will see "messed-up" normals in those problematic areas. Those normals are correct, but the Sketchfab shader is probably averaging/ignoring them automatically for the preview.
Cinema 4D probably uses one or two omni lights for the perspective viewport... so large differences in normals between nearby vertices result in unpleasant shading.

Btw, a pretty good photogrammetry result  ;)

30
General / Re: Correct object size
« on: April 21, 2023, 10:35:03 AM »
In the Standard version you can show a grid for the viewport (menu Model - Show/Hide Items - Show Grid) and set its size in meters and line count (Metashape Preferences - Appearance - Model View - Grid).
Then scale the object with the scale tool from the top icon bar.
