Messages - Bzuco

16
The guided image matching option is designed to help in areas with a lot of vegetation, so try enabling it and setting the key point limit per Mpx to ~2000-3000.
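If you drive the processing from the Python console, the same settings can be set through the API. A minimal sketch, assuming Metashape 2.x; the parameter names (guided_matching, keypoint_limit_per_mpx) follow the current API as far as I know, so please check them against the reference for your version:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# Enable guided image matching and limit key points per megapixel,
# which helps in areas with dense vegetation.
chunk.matchPhotos(downscale=1,                  # High accuracy
                  guided_matching=True,
                  keypoint_limit_per_mpx=2500)  # ~2000-3000 as suggested above
chunk.alignCameras()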

17
I am using czkawka to find and delete similar or duplicate images. It works based on image content and it is amazingly fast.
https://github.com/qarmin/czkawka

18
General / Re: Poor network processing performance
« on: November 15, 2024, 12:01:50 PM »
Until you resolve the network processing performance issue:
Since both of your laptops have powerful components, you should really consider undervolting the CPU and GPU; otherwise you waste energy on heat instead of performance. Both CPU and GPU chips suffer from overvolting (the default factory setting).
You can undervolt the CPU in the Razer utility, with ThrottleStop, with the Intel Extreme Tuning Utility, ... and the GPU with MSI Afterburner.
This will significantly speed up your local processing when all CPU cores are utilized (estimating camera locations, generating the dense point cloud / filtering depth maps) and in GPU tasks (matching points, depth map calculation).
With undervolted components you can reach higher and more stable frequencies without hitting the high temperatures at which the chips start to throttle.
Here is a video showing undervolting results: https://www.youtube.com/watch?v=azGt-rH_8qc

19
General / Re: Poor network processing performance
« on: November 07, 2024, 08:29:29 PM »
Matching points is fully GPU accelerated, so distributing this task between two computers will probably just slow it down. Try increasing the number of workers on each machine and see if it helps.

It is better to check the times of each subtask individually:
detecting points
selecting pairs
matching points
estimating camera locations

...and also monitor CPU and GPU usage: how much time they spend computing versus waiting, and how large the data transfer between the computers across the LAN is.
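The per-subtask times only show up in the log, but if you run the jobs from the Python console you can at least time the API-level stages separately. A rough sketch, assuming the standard Metashape 2.x API:
Code: [Select]
import time
import Metashape

chunk = Metashape.app.document.chunk

def timed(label, step, **kwargs):
    # Run one processing step and report how long it took.
    start = time.time()
    step(**kwargs)
    print("%s: %.1f s" % (label, time.time() - start))

# matchPhotos covers detecting points, selecting pairs and matching points;
# alignCameras covers estimating camera locations.
timed("match photos", chunk.matchPhotos, downscale=1)
timed("align cameras", chunk.alignCameras)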


20
General / Re: RAM Usage - Building Textures
« on: November 03, 2024, 11:38:03 AM »
A few of those texture artifacts can also just be T-vertices (vertices lying on another face's edge; they can occur after mesh decimation) causing dark shading.

21
General / Re: RAM Usage - Building Textures
« on: November 03, 2024, 12:19:02 AM »
Nira uses some kind of virtual texturing system, where one big texture is streamed to the client in smaller chunks. For them it is probably easier to process one big texture, but several smaller ones should not be a problem either. From a web server perspective, every texture (or even a small file) creates one request from the client to the server. That is a problem on web pages like forums, where there are many small images (e.g. icons, banners) and a lot of people reading pages at the same time. I don't think there is that much pressure on the Nira server compared to a web-hosting server with a lot of web portals that hundreds or thousands of people are visiting every second.

Generally, smaller textures are necessary for speeding up 3D rendering (games, game engines). GPUs have VRAM and caches. If a texture (and its smaller versions, called MIPs) cannot fit in the small cache (older GPUs just a few MB, RTX 4xxx 24-72 MB), then it is sampled only from VRAM, which is slower. For one model and a simple shader it does not matter much, but once you use normal maps, diffuse maps and other maps, it starts to slow down rendering, especially on low-end desktop GPUs or mobile GPUs.
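To get a feel for why texture size matters, you can estimate the VRAM footprint of an uncompressed texture including its MIP chain (the chain adds roughly one third on top; block-compressed formats shrink these numbers, but the ratios stay the same). A quick back-of-the-envelope sketch:
Code: [Select]
# Rough VRAM footprint of an uncompressed RGBA8 texture with a full MIP chain.
def texture_vram_mb(width, height, bytes_per_pixel=4):
    base = width * height * bytes_per_pixel
    with_mips = base * 4 / 3              # MIP chain adds ~1/3 on top
    return with_mips / (1024 * 1024)

print("4k:  %.0f MB" % texture_vram_mb(4096, 4096))     # ~85 MB
print("8k:  %.0f MB" % texture_vram_mb(8192, 8192))     # ~341 MB
print("16k: %.0f MB" % texture_vram_mb(16384, 16384))   # ~1365 MB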

If you need a model of the building for inspection (cracks, etc.), I would use just the point cloud. If you need a high-poly 3D model with textures, then you can try local network processing, where all tasks are automatically split into smaller chunks, which costs almost no RAM. Here is a great video about that: https://www.youtube.com/watch?v=BYRIC-qkZJ8

About the high-polycount model: if you have a good viewer (e.g. a tiled one, or the model fits fully in GPU VRAM, so it is fast) and you do not need to edit the model, then you can use a 200M-poly model or even higher polycounts. But if you need to edit the model in DCC apps, then even a single 4M-poly chunk is a problem (selecting polygons, moving them, using the undo system, ...).

RizomUV - I use automatic model splitting to create UV seams -> islands, where I can decide how big I want them based on the distortion. For a 200M-poly model in one chunk this would take several hours. So a high-poly model is only good for baking normal maps from high poly to low poly, and in that case you need UV mapping only on the low-poly model; the high-poly one does not need UV mapping at all.
To learn RizomUV it is good to use their official tutorials on YouTube.
This is my largest hobby project, with info in the first comment: https://www.reddit.com/r/photogrammetry/comments/1c8rg4j/camera_fly_over_the_old_unfinished_cable_car/

22
General / Re: RAM Usage - Building Textures
« on: November 02, 2024, 10:59:14 PM »
And I think 171M faces is too much for a textured model, especially for creating UV islands; maybe that is what is causing the memory issue. Try decimating the model first to just a few million polygons.
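If you prefer to decimate from the Python console instead of the Tools menu, here is a minimal sketch (assuming the Metashape 2.x API; pick a face count that suits your texel density):
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# Reduce the 171M-face mesh to a few million faces before unwrapping/texturing.
chunk.decimateModel(face_count=4000000)
Metashape.app.document.save()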

23
General / Re: RAM Usage - Building Textures
« on: November 02, 2024, 10:44:42 PM »
It is always better to create several smaller textures than one big one. Consider 8k as the maximum. Guessing the right number of textures is tricky - trial and error.
For this purpose I use the external program RizomUV, where I can manually unwrap the 3D model and create UV islands, and I also know what the final texel density will be and how many 8k textures I will need. I can also separate e.g. trees onto their own texture, because trees do not need such a high texel density, so more texture space can be assigned to the more important objects.
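If you stay inside Metashape instead of RizomUV, the idea of several 8k pages can also be expressed through the Python API. A hedged sketch; the parameter names (page_count on buildUV, texture_size) and enum paths follow the 2.x reference as far as I know, so check them against your version:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# Unwrap into 4 UV pages instead of one huge atlas ...
chunk.buildUV(mapping_mode=Metashape.MappingMode.GenericMapping,
              page_count=4, texture_size=8192)
# ... and bake one 8k texture per page.
chunk.buildTexture(blending_mode=Metashape.BlendingMode.MosaicBlending,
                   texture_size=8192)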

Just for information, can you share a screenshot of how the masonry building and exterior look? And how big is it, and what were the camera positions?

24
General / Re: Processing Time - Build Point Cloud
« on: November 01, 2024, 07:19:15 PM »
@Tas, that number is just the sum of the extracted-points info from each line at the end of the log.
You don't have many options. Ultra High uses the original photo resolution, so Metashape tries to make one point from each pixel in the photo.
High is half of the photo resolution, Medium is quarter resolution, and so on.
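For reference, in the Python API the same presets map to the downscale parameter of buildDepthMaps, which is the per-side downscaling factor. A small sketch, assuming the Metashape 2.x names (older versions use e.g. Metashape.MildFiltering directly):
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# downscale = per-side image downscaling factor for depth maps:
# 1 = Ultra High (original resolution), 2 = High (half per side),
# 4 = Medium (quarter per side), 8 = Low, 16 = Lowest.
chunk.buildDepthMaps(downscale=4,
                     filter_mode=Metashape.FilterMode.MildFiltering)
chunk.buildPointCloud()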

About those ~559 seconds to load ~5121 MB of depth maps: "preload data" may not only mean loading from disk, but also extracting from the archive in which Metashape stores them, and maybe also a little bit of processing.

25
General / Re: Processing Time - Build Point Cloud
« on: November 01, 2024, 10:50:52 AM »
Hi Tas, you should definitely lower your depth map settings, because at the end of your log Metashape had already extracted 1 059 126 305 points (which is a lot) and that was not the end.
Try to estimate how many millimeters one pixel covers in your 45 Mpix photos and decide whether you need that much precision in your final point cloud.
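A quick way to estimate how many millimeters one pixel covers (the ground sample distance) from the camera parameters. A generic sketch; the numbers below are made up, plug in your own sensor width, focal length and shooting distance:
Code: [Select]
# Ground sample distance: real-world size of one pixel on the subject.
def gsd_mm(sensor_width_mm, focal_length_mm, distance_m, image_width_px):
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * (distance_m * 1000.0) / focal_length_mm

# Hypothetical 45 Mpix full-frame camera (8192 px wide), 35 mm lens, 10 m away.
print("%.2f mm per pixel" % gsd_mm(35.9, 35.0, 10.0, 8192))   # ~1.25 mm/px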

26
General / Re: How to precisely scale a model with a standard license?
« on: October 30, 2024, 07:41:28 PM »
@Ucodia:

I am using this procedure; it looks like a long list, but it is actually a simple procedure :) :
1. enable the grid in the view
2. move the model so that the first of your known points is at [0,0,0]
3. move the center of the region to [0,0,0] as well
4. rotate the model so that the second of your known points lies on the X axis, and scale the whole object roughly to the wanted size.
5. change the four numbers (5.6 in my example, on lines 6-9) to your distance value and save this text as e.g. scalebar.obj ...this is a simple cube/block model in OBJ format.
Code: [Select]
o Cube
v 0.000000 0.000000 0.000000
v 0.000000 0.000000 0.100000
v 0.000000 0.100000 0.000000
v 0.000000 0.100000 0.100000
v 5.600000 0.000000 0.000000
v 5.600000 0.000000 0.100000
v 5.600000 0.100000 0.000000
v 5.600000 0.100000 0.100000
s 0
f 1 2 4 3
f 3 4 8 7
f 7 8 6 5
f 5 6 2 1
f 3 7 5 1
f 8 4 2 6
6. import scalebar.obj into your project in Metashape, reset the view, and switch to orthographic view using the 5 numpad key.
7. zoom the view so the right end of the cube/block model touches the right side of the Metashape window
8. in the Workspace pane, switch from the scalebar object to your original model and use the scale model tool to scale the object so that your second point also touches the right side of the Metashape window  :)
9. now your model is properly scaled and your two points are exactly the distance apart that you want  :D

Tip: In orthographic view your model may disappear while zooming. To avoid that, hold Shift + right mouse button and move the mouse up/down...this changes the camera clipping planes.
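If you need scalebars of different lengths more often, the OBJ above can also be generated by a small script instead of editing the four numbers by hand. A hypothetical helper (not part of Metashape) that writes the same cube/block stretched along X to the given length:
Code: [Select]
# Write a simple scalebar block (length along X, 0.1 x 0.1 cross-section) as OBJ.
def write_scalebar(path, length):
    verts = [(0, 0, 0), (0, 0, 0.1), (0, 0.1, 0), (0, 0.1, 0.1),
             (length, 0, 0), (length, 0, 0.1), (length, 0.1, 0), (length, 0.1, 0.1)]
    faces = [(1, 2, 4, 3), (3, 4, 8, 7), (7, 8, 6, 5),
             (5, 6, 2, 1), (3, 7, 5, 1), (8, 4, 2, 6)]
    with open(path, "w") as f:
        f.write("o Cube\n")
        for x, y, z in verts:
            f.write("v %.6f %.6f %.6f\n" % (x, y, z))
        f.write("s 0\n")
        for face in faces:
            f.write("f %d %d %d %d\n" % face)

write_scalebar("scalebar.obj", 5.6)   # the same 5.6 m block as in the example above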

27
Quote: 4 Gbit is not 4 GBytes, it's 0.5, so it is extremely slow and unusable for any work. As the motherboard limits the slot to PCIe 2.0...
If it limits that slot to the 2.0 standard, then it is still 2 GB/s, which is enough speed for uploading data to VRAM.
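The arithmetic behind the two numbers, assuming the slot is electrically x4 (as bottom slots usually are): PCIe 2.0 carries about 500 MB/s per lane per direction after 8b/10b encoding, so:
Code: [Select]
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s per lane per direction.
lane_mb_s = 5000 * 8 / 10 / 8           # = 500 MB/s
lanes = 4                               # assumed x4 electrical slot
print("PCIe 2.0 x4: %.1f GB/s" % (lane_mb_s * lanes / 1000))   # 2.0 GB/s
print("4 Gbit/s:    %.1f GB/s" % (4 / 8))                      # 0.5 GB/s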

28
Quote: The last slot you are running is at 4 Gbps speed. So pretty bottlenecked.
Metashape doesn't need to transfer 4 GB every second between RAM and VRAM, so it is still OK.

29
I think the network processing log is not so detailed, but you can orient yourself by the camera numbers.
In my log I see 6 phases for each camera's depth map, so it is easier to know how long it took.

Maybe 8 workers works overall for your system, but the main point of using workers locally is for single-threaded tasks like the initial point detection and probably depth map calculation, if there are usage drops. For tasks which are multithreaded, one worker is enough because it is able to fully saturate the CPU (estimating camera locations, building the point cloud). But OK, if 8 works for you, then keep it.

So now it's time to decide whether to buy a new CPU or just stick with the 7900XT.

30
If you have large photos and use the highest depth map settings, then you do not need a lot of workers for building depth maps; 1-3 workers are enough to keep the GPU saturated without usage drops during loading/unloading of data.
The best practice for finding out whether the CPU is the bottleneck is to check the log: there you can see how many seconds it took to compute one depth map and what the CPU/GPU usage was during that phase. If you see GPU usage drops between calculating two depth maps, then add another worker...but if CPU usage was already at 100%, then it is better to buy a faster CPU.
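Besides the log, you can also watch the GPU live while depth maps are being built. A minimal polling sketch for the NVIDIA cards using nvidia-smi (the 7900XT would need a different tool); it just prints utilization, temperature and SM clock once per second:
Code: [Select]
import subprocess
import time

# Poll GPU utilization (%), temperature (deg C) and SM clock (MHz) every second.
while True:
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,temperature.gpu,clocks.sm",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True)
    print(result.stdout.strip())
    time.sleep(1)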

The point-detection phase needs a lot of workers, but at the same time make sure you do not exceed the VRAM, because then the workers cannot receive tasks.

Matching points is an easy task for the GPU, so no problem there.

For other non-GPU tasks it is better to keep only one active worker.

I also see in GPU-Z that your NVIDIA GPUs are quite hot, which is not good for reaching the maximum core frequency: a 2xxx-series GPU can only hold it under 50°C, and as the temperature rises the maximum frequency decreases. Here MSI Afterburner can help: set the power limit to 70-80% and at the same time increase the core clock by +80-100 MHz. This tweak gives you lower power consumption, higher frequencies, lower temperatures and less fan noise. I am able to reach 2080-2150 MHz on my RTX 2060 Super during computation. The same can be done for the 7900XT.

What model of CPU do you have? You can also undervolt the CPU to get more performance out of it.
