Messages - Bzuco

1
General / Re: GeForce RTX 4060 Ti 16 Go any good for Metashape Pro
« on: November 26, 2024, 01:15:06 PM »
Yeah, that card has a good design with a big three-fan cooler, so no problem at all. https://www.techpowerup.com/review/asus-geforce-rtx-4070-ti-super-tuf/3.html
Your CPU is missing Hyper-Threading, which would otherwise give roughly +30% performance. So even if you spawn several local workers to keep the GPU fully saturated, your CPU may not be able to keep up with the GPU's depth map processing pace, and the 4060 Ti would be the better choice for now. For the future, definitely the 4070 Ti Super.

2
General / Re: GeForce RTX 4060 Ti 16 Go any good for Metashape Pro
« on: November 24, 2024, 11:49:26 AM »
The overall GPU performance in Metashape also depends on the CPU used, how many worker threads are used in local network processing, how big the dataset is, and how many megapixels the photos are.
A more powerful CPU can feed the GPU better when it needs it.

The RTX 4070 Ti has 12 GB of VRAM, so the better option would be the 4070 Ti Super variant with 16 GB, or staying with the 4060 Ti 16 GB, which will still be a nice boost for processing compared to the old 1060 Super. 16 GB of VRAM is useful for the texture building task.

What is your current CPU?

3
General / Re: GeForce RTX 4060 Ti 16 Go any good for Metashape Pro
« on: November 23, 2024, 10:24:03 AM »
Hi, the RTX 4060 Ti 16 GB is a good option, or, even better at that price point, the Radeon RX 7800 XT.

4
The guided image matching option is designed to help in areas with a lot of vegetation, so try enabling it and set the key point limit per Mpx to ~2000-3000.
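
For reference, the same settings can also be applied from Metashape's built-in Python console. This is just a minimal sketch; guided_matching and keypoint_limit_per_mpx are the argument names I believe recent API versions use, so check the Python API reference for your release:
Code:
import Metashape

chunk = Metashape.app.document.chunk
# Argument names assumed from recent API versions -- verify in the API reference.
chunk.matchPhotos(guided_matching=True,         # the "Guided image matching" checkbox
                  keypoint_limit_per_mpx=3000)  # ~2000-3000 as suggested above
chunk.alignCameras()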

5
I am using czkawka to find and delete similar or duplicate images. It works based on image content and is amazingly fast.
https://github.com/qarmin/czkawka
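
czkawka itself is the tool I use; purely to illustrate the content-based idea, here is a rough Python sketch (using the third-party Pillow and imagehash packages, unrelated to czkawka) that flags visually similar photos by perceptual-hash distance:
Code:
# pip install Pillow imagehash   (third-party packages, not part of czkawka)
from pathlib import Path
from PIL import Image
import imagehash

hashes = {}
for path in sorted(Path("photos").glob("*.jpg")):
    h = imagehash.phash(Image.open(path))   # perceptual hash of the image content
    for other, other_h in hashes.items():
        if h - other_h <= 5:                 # small Hamming distance = similar image
            print(f"similar: {path.name} ~ {other.name}")
    hashes[path] = h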

6
General / Re: Poor network processing performance
« on: November 15, 2024, 12:01:50 PM »
Until you resolve the network processing performance:
Since you are running both laptops with high-performance components, you should really consider undervolting the CPU and GPU, otherwise you waste energy on heat instead of performance. Both CPU and GPU chips suffer from overvolting (the default factory setting).
You can undervolt the CPU with the Razer utility, ThrottleStop, Intel Extreme Tuning Utility, ... and the GPU with MSI Afterburner.
This will significantly speed up your local processing when all CPU cores are utilized (estimating camera locations, generating the dense point cloud / filtering depth maps) as well as the GPU tasks (matching points, depth map calculation).
With undervolted components you can reach higher and more stable frequencies without hitting the high temperatures at which the chips start to throttle.
Here is a video showing undervolting results: https://www.youtube.com/watch?v=azGt-rH_8qc

7
General / Re: Poor network processing performance
« on: November 07, 2024, 08:29:29 PM »
Matching points is very well GPU accelerated, so distributing this task between two computers will probably just slow it down. Try increasing the number of workers on each machine and see if it helps.

It is better to check the times of each subtask individually:
detecting points
selecting pairs
matching points
estimating camera locations

...and also monitor CPU and GPU usage: how much time they spend computing versus waiting, plus how much data is transferred between the computers across the LAN.


8
General / Re: RAM Usage - Building Textures
« on: November 03, 2024, 11:38:03 AM »
A few of those texture artifacts can also just be T-vertices (vertices lying on an edge; they can occur after mesh decimation) causing dark shading.

9
General / Re: RAM Usage - Building Textures
« on: November 03, 2024, 12:19:02 AM »
Nira is using some kind of virtual texturing system, where one big texture is streamed to the client in smaller chunks. For them it is probably easier to process one big texture, but several smaller ones should not be a problem either. From the web server's perspective, every texture (or even a small file) creates one request from the client to the server. That is a problem on web pages like forums, where there are many small images (e.g. icons, banners) and a lot of people reading pages at the same time. I don't think there is anywhere near as much pressure on the Nira server as on a web hosting server running many portals that hundreds or thousands of people visit every second.

Generally, smaller textures are necessary to speed up 3D rendering (games, game engines). GPUs have VRAM and caches. If a texture (and its smaller versions, called MIPs) cannot fit in the small cache (just a few MB on older GPUs, 24-72 MB on RTX 4xxx), it is sampled only from VRAM, which is slower. For one model and a simple shader it does not matter much, but if you use normal maps, diffuse maps, and other maps, it starts to slow down rendering, especially on low-end desktop GPUs or mobile GPUs.
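
As a rough worked example of why texture size matters for GPU memory (a back-of-the-envelope sketch assuming uncompressed RGBA8; compressed formats shrink these numbers considerably):
Code:
def texture_bytes(size_px, bytes_per_pixel=4, with_mips=True):
    """Approximate memory for a square texture; a full MIP chain adds about 1/3."""
    base = size_px * size_px * bytes_per_pixel
    return base * 4 // 3 if with_mips else base

for size in (2048, 4096, 8192):
    print(f"{size} px: {texture_bytes(size) / 2**20:.0f} MiB with MIPs")
# 2048 px: ~21 MiB, 4096 px: ~85 MiB, 8192 px: ~341 MiB -- only the smallest
# comes close to fitting in a cache of a few tens of MB; the rest is sampled from VRAM.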

If you need a model of the building for inspection (cracks, etc.), I would use just the point cloud. If you need a high-poly 3D model with textures, then you can try local network processing, where all tasks are automatically split into smaller chunks, which costs almost no RAM. Here is a great video about that: https://www.youtube.com/watch?v=BYRIC-qkZJ8

About high-polycount models: if you have a good viewer (e.g. a tiled one, or the model is fully loaded into GPU VRAM, so it is fast) and you do not need to edit the model, then you can use a 200M-poly model, or even a higher polycount. But if you need to edit the model in DCC apps, then even a single 4M-poly chunk is a problem (selecting polygons, moving them, using the undo system, ...).

RizomUV: I am using automatic model splitting and creation of UV seams -> islands, where I can decide how big I want them based on the distortion. For a 200M-poly model in one chunk this would take several hours. So a high-poly model is only good for baking normal maps from high poly to low poly, and in that case you need UV mapping only on the low-poly model; the high-poly one does not need UV mapping at all.
To learn RizomUV, it is good to use their official tutorials on YouTube.
This is my largest hobby project, with more info in the first comment: https://www.reddit.com/r/photogrammetry/comments/1c8rg4j/camera_fly_over_the_old_unfinished_cable_car/

10
General / Re: RAM Usage - Building Textures
« on: November 02, 2024, 10:59:14 PM »
And I think 171M faces is too much for a textured model, especially for creating UV islands; maybe that is what is causing the memory issue. Try decimating the model first to just a few million polygons.
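
Decimation can also be run from the Python console; a minimal sketch, assuming the decimateModel call with a face_count argument as in recent API versions (verify against the API reference for your release):
Code:
import Metashape

chunk = Metashape.app.document.chunk
# Decimate the active model to ~4 million faces before UV unwrapping / texturing.
chunk.decimateModel(face_count=4_000_000)
Metashape.app.document.save()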

11
General / Re: RAM Usage - Building Textures
« on: November 02, 2024, 10:44:42 PM »
It is always better to create several smaller textures than one big one. Consider 8K the maximum. Guessing the right number of textures is tricky: trial and error.
For this purpose I am using the external program RizomUV, where I can manually unwrap the 3D model and create the UV islands, and I also know what the final texel density will be and how many 8K textures I will need. I can also separate e.g. trees onto their own texture, because trees do not need such texel density, so more texture space can be assigned to the more important objects.
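
To take some of the guesswork out of the texture count, you can estimate it from the surface area and a target texel density. This is just my own back-of-the-envelope formula, with an assumed UV packing efficiency:
Code:
import math

def textures_needed(surface_area_m2, texels_per_m, texture_px=8192, packing=0.7):
    """Estimate how many square textures a given texel density requires.
    packing = fraction of UV space actually covered by islands (an assumption)."""
    needed_texels = surface_area_m2 * texels_per_m ** 2
    per_texture = texture_px ** 2 * packing
    return math.ceil(needed_texels / per_texture)

# e.g. 300 m^2 of surface at 1000 texels/m (1 mm per texel) on 8K textures:
print(textures_needed(300, 1000))  # -> 7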

Just for information, can you share a screenshot of how the masonry building and its exterior look, how big it is, and what the camera positions were?

12
General / Re: Processing Time - Build Point Cloud
« on: November 01, 2024, 07:19:15 PM »
@Tas, that number is just the sum of the extracted-points figures from each line at the end of the log.
You don't have many options: Ultra High uses the original photo resolution, so Metashape tries to make one point from each pixel in the photo.
High is half of the photo resolution, Medium is quarter resolution, ...
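
For context, the quality presets correspond to the downscale argument of buildDepthMaps in the Python API; the 1/2/4/8/16 mapping below is how I understand the presets, so confirm it in the API reference for your version:
Code:
import Metashape

# Assumed preset -> downscale mapping (Ultra High = original resolution, ...).
QUALITY = {"ultra_high": 1, "high": 2, "medium": 4, "low": 8, "lowest": 16}

chunk = Metashape.app.document.chunk
chunk.buildDepthMaps(downscale=QUALITY["high"],
                     filter_mode=Metashape.MildFiltering)
chunk.buildPointCloud()  # called buildDenseCloud() in older 1.x versions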

About those ~559 seconds to load ~5121 MB of depth maps: "preload data" may mean not only loading from disk, but also extracting from the archive in which Metashape stores it, and maybe a little bit of processing as well.

13
General / Re: Processing Time - Build Point Cloud
« on: November 01, 2024, 10:50:52 AM »
Hi Tas, you should definitely lower your depth maps settings, because at the end of your log Metashape has already extracted 1,059,126,305 points (which is a lot), and that is not the end.
Try to estimate how many millimeters one pixel covers in your 45 Mpix photos and decide whether you need that much precision in your final point cloud.
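
A quick way to make that estimate concrete is the usual ground-sample-distance formula; the camera numbers below are placeholders, not the actual rig:
Code:
def gsd_mm_per_px(distance_m, focal_mm, sensor_width_mm, image_width_px):
    """How many millimeters one pixel covers at a given shooting distance."""
    return (sensor_width_mm * distance_m * 1000) / (focal_mm * image_width_px)

# Placeholder example: ~45 Mpix full-frame camera (8192 px wide, 36 mm sensor),
# 35 mm lens, shot from 10 m away:
print(f"{gsd_mm_per_px(10, 35, 36, 8192):.2f} mm/px")  # ~1.26 mm per pixel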

14
General / Re: How to precisely scale a model with a standard license?
« on: October 30, 2024, 07:41:28 PM »
@Ucodia:

I am using this procedure; it looks like a long list, but it is actually simple :) :
1. enable the grid in the view
2. move the model so that one of your known points is at [0,0,0]
3. move the center of the region to [0,0,0] as well
4. rotate the model so that the second of your known points lies on the X axis, and scale the whole object roughly to the wanted size
5. change the four numbers (5.6 in my example, on lines 6-9) to your distance value and save this text as e.g. scalebar.obj ...this is a simple cube/block model in OBJ format.
Code:
o Cube
v 0.000000 0.000000 0.000000
v 0.000000 0.000000 0.100000
v 0.000000 0.100000 0.000000
v 0.000000 0.100000 0.100000
v 5.600000 0.000000 0.000000
v 5.600000 0.000000 0.100000
v 5.600000 0.100000 0.000000
v 5.600000 0.100000 0.100000
s 0
f 1 2 4 3
f 3 4 8 7
f 7 8 6 5
f 5 6 2 1
f 3 7 5 1
f 8 4 2 6
6. import scalebar.obj into your Metashape project, reset the view, and switch to orthographic view using the numpad 5 key
7. zoom the view so that the right side of the cube/block model touches the right side of the Metashape window
8. in the Workspace pane switch from the scalebar object to your original model and use the scale model tool to scale the object so that your second point also touches the right side of the Metashape window :)
9. now your model is properly scaled and your two points are exactly the distance apart that you want :D

Tip: in orthographic view your model may disappear while zooming. To avoid that, hold Shift + right mouse button and move the mouse up/down; this changes the camera clipping planes.
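
If you do this often, step 5 can be automated; a small sketch (standard library only; the 0.1 cross-section and the file name are my arbitrary choices) that writes the same scalebar OBJ for any distance:
Code:
def write_scalebar_obj(distance, path="scalebar.obj", thickness=0.1):
    """Write a box of the given length along X (the OBJ from step 5)."""
    verts = []
    for x in (0.0, distance):
        for y in (0.0, thickness):
            for z in (0.0, thickness):
                verts.append(f"v {x:.6f} {y:.6f} {z:.6f}")
    faces = ["f 1 2 4 3", "f 3 4 8 7", "f 7 8 6 5",
             "f 5 6 2 1", "f 3 7 5 1", "f 8 4 2 6"]
    with open(path, "w") as f:
        f.write("o Cube\n" + "\n".join(verts) + "\ns 0\n" + "\n".join(faces) + "\n")

write_scalebar_obj(5.6)  # produces the same block as the listing above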

15
Quote: "4 Gbit is not 4 GBytes, it's 0.5, so it is extremely slow and unusable for any work. As the motherboard limits the slot to PCI 2.0..."
If it limits that slot to the 2.0 standard, then it is still 2 GB/s, which is enough speed for uploading data to VRAM.
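
For the numbers behind that, approximate usable PCIe bandwidth is the per-lane figure (after encoding overhead) times the slot width; a small sketch using the commonly cited per-lane values:
Code:
# Approximate usable bandwidth per PCIe lane in GB/s (after encoding overhead).
GBPS_PER_LANE = {"1.0": 0.25, "2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def slot_bandwidth(gen, lanes):
    return GBPS_PER_LANE[gen] * lanes

print(slot_bandwidth("2.0", 1))   # 0.5 GB/s  (= ~4 Gbit/s, the figure above)
print(slot_bandwidth("2.0", 4))   # 2.0 GB/s  (a PCIe 2.0 x4 slot)
print(slot_bandwidth("2.0", 16))  # 8.0 GB/s  (a full x16 slot at 2.0)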
