
Author Topic: RAM Usage - Building Textures  (Read 7757 times)

Tas

  • Newbie
  • Posts: 49
RAM Usage - Building Textures
« on: November 02, 2024, 04:58:22 PM »
Hi All,
I'm wondering if anyone has any insights regarding RAM usage during the "blending textures" phase of the texture generation process. The RAM usage seems to be incredibly high (e.g., 375GB for a single-building model), and I've run into a dead end in terms of researching relevant resources. I'm wondering if...
  • Whether I need to start leveraging the option to generate models in blocks, presumably so that textures can be generated in "partitions" that use less RAM
  • Whether I'm simply doing something wrong
  • Whether there's something about the processing (e.g., loading and blending multiple texture pages at once) that would favor lower-resolution textures. That is, perhaps less RAM would be required to generate 8K or 16K textures rather than 32K textures. I'm not sure how the "maximum page count per run" value in the processing log is calculated, but I'm guessing it strongly influences RAM usage and processing time.
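For what it's worth, a back-of-the-envelope page-count estimate can be made from the pixel size and the model's textured surface area. This is only a sketch, not Metashape's actual algorithm; the surface-area and UV-utilization figures below are hypothetical placeholders:

```python
import math

def estimate_texture_pages(surface_area_m2, pixel_size_m,
                           page_px=8192, uv_utilization=0.6):
    """Rough page-count estimate: total textured surface area divided by
    the area one page covers at the requested texel size, corrected for
    imperfect UV packing (uv_utilization ~50-70% is a common guess)."""
    page_area_m2 = (page_px * pixel_size_m) ** 2 * uv_utilization
    return math.ceil(surface_area_m2 / page_area_m2)

# Hypothetical ~11,300 m^2 of textured surface at 0.0012 m/texel
print(estimate_texture_pages(11300, 0.0012))
```

Under this model, halving the pixel size roughly quarters the page count at a fixed page size, which matches the intuition that texel density, not face count, drives texture volume.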

Project Info:
  • Agisoft Metashape Professional, Version 2.2.0 Beta
  • 2924 photos (.jpg), most are 45MP
  • 69 colored laser scans (.e57)
  • Initial model size (for texturing, prior to simplification): 171,430,792 faces, 85,596,339 vertices. Edit:  revised to 44,015,316 faces 22,034,326 vertices
  • Model subject:  masonry building exterior, with trees/vegetation. Edit:  snapshots provided in a message below.

Texture Settings (Log attached)
  • Texture type:  Diffuse map
  • Source data:  Images
  • Mapping mode:  Generic
  • Blending mode:  Mosaic
  • Texture size:  32,768 Edit:  revised to 8192
  • Pixel size (m):  ~0.0012, which closely matches the detectable feature size for the capture distance (2 x GSD)
  • Enable hole filling:  yes
  • Enable ghosting filter:  yes
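As a side note on the 2 x GSD rule above: ground sample distance follows directly from capture distance, focal length, and sensor pixel pitch. A minimal pinhole-model sketch; the camera numbers in the example are hypothetical, not taken from this project:

```python
def gsd_m(distance_m, focal_mm, pixel_pitch_um):
    """Ground sample distance: the real-world footprint of one sensor
    pixel at the given capture distance (pinhole approximation)."""
    return distance_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# Hypothetical capture: 5 m range, 35 mm lens, 4.4 um pixel pitch
print(gsd_m(5, 35, 4.4))
```

With those example values, GSD comes out near 0.0006 m, so 2 x GSD lands close to the ~0.0012 m pixel size used here.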

PC Info
  • CPU:  AMD Ryzen Threadripper Pro 7975WX (4.0GHz 32 Core 350W), sitting at ~60-100% utilization
  • GPU:  NVIDIA GeForce RTX 4090, ~22-23GB free memory referenced in Metashape processing logs, sitting at 1-5% utilization
  • RAM:  512 GB DDR5 (8x Kingston DDR5-5600 ECC Reg. 2R 64GB), sitting at ~75% utilization when ~60% through blending texture process
  • Processing Drive:  SSD with >1TB free space (Sabrent 8TB Rocket 4 Plus Gen4 PCIe NVMe M.2). All files are saved locally.

Thanks!
« Last Edit: November 03, 2024, 02:22:30 AM by Tas »

Tas

Re: RAM Usage - Building Textures
« Reply #1 on: November 02, 2024, 10:02:31 PM »
Update:  The texturing process failed after 8.5 hours, with the error message "bad allocation." My understanding is that this means I don't have enough RAM to produce 32K textures, which admittedly seems kind of wild when I'm sitting at 512 GB. Perhaps I'm missing something here.

I'm unsure how to estimate how much RAM is required prior to starting processing, but any tips are greatly appreciated!

I have read through the following article, which is helpful but might indicate that my RAM usage is unusually high (particularly for such a small project): https://agisoft.freshdesk.com/support/solutions/articles/31000157329-memory-requirements-for-processing-operations. Of note, the article states that "memory consumption for the texture blending operation (the second part of the Build Texture stage) is proportional to the number of CPU threads if performed on the CPU." If memory usage is truly uncapped, perhaps the number of threads/logical processors on my CPU (64) is the root cause of my issues.
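If the article's claim holds, a crude linear model explains why a 64-thread CPU hurts here. The base and per-thread figures below are illustrative guesses fitted to the ~375 GB observation, not measured or documented values:

```python
def blending_ram_gb(threads, per_thread_gb=5.5, base_gb=23.0):
    """Hypothetical linear model of texture-blending memory: a fixed
    overhead plus a working set per CPU thread. The default constants
    are guesses that reproduce ~375 GB at 64 threads."""
    return base_gb + threads * per_thread_gb

print(blending_ram_gb(64))  # fitted to the observed peak
print(blending_ram_gb(32))  # same model with half the threads
```

Under this model, restricting Metashape to half the logical processors (e.g., via OS-level CPU affinity) would roughly halve blending RAM; whether Metashape exposes a direct thread cap is something Agisoft support would have to confirm.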

I'm hoping that there's a software setting that will help me work around this.
« Last Edit: November 02, 2024, 10:42:11 PM by Tas »

Bzuco

  • Full Member
  • Posts: 244
Re: RAM Usage - Building Textures
« Reply #2 on: November 02, 2024, 10:44:42 PM »
It is always better to create several smaller textures than one big one. Consider 8K the maximum. Guessing the right number of textures is tricky; it's trial and error.
For this purpose I use an external program, RizomUV, where I can manually unwrap the 3D model and create UV islands; I also know in advance what the final texel density will be and how many 8K textures I will need. I can also separate, e.g., trees onto their own texture, because trees do not need such high texel density, leaving more texture space for more important objects.

Just for information, can you share a screenshot of how the masonry building and exterior look, how big it is, and what the camera positions were?

Bzuco

Re: RAM Usage - Building Textures
« Reply #3 on: November 02, 2024, 10:59:14 PM »
And I think 171M faces is too much for a textured model, especially for creating UV islands; maybe that is what's causing the memory issue. Try decimating the model to just a few million polygons first.

Mak11

  • Sr. Member
  • Posts: 387
Re: RAM Usage - Building Textures
« Reply #4 on: November 02, 2024, 11:04:21 PM »
A 32K texture? Why? That's far beyond what any display can show. You should generate several 4K textures instead.
Also, a 171M poly model is totally unnecessary. Decimate it to a reasonable level before unwrapping & texturing.

Tas

Re: RAM Usage - Building Textures
« Reply #5 on: November 02, 2024, 11:22:38 PM »
Thanks again for your input - you guys are awesome. I've been at this (intermittently) for years, and I'm always learning. Very low res snapshots of the model are attached for reference. Trees/vegetation are not insignificant.

RE: texture resolution/size - I don't disagree at all about using 8K (or even 4K) textures from a processing standpoint. I'm told that a large number of texture files is a burden for web hosting/servers, though. I host my models on Nira, and they've strongly recommended 32K textures whenever possible. For example, texturing this model (without much geometry simplification or retopology) with 8K textures and ~0.0012 m pixel size would require 195 texture pages. I'll give it a try (perhaps after decreasing the face count)!

RE: texture page count -  I've shifted to specifying the pixel size and letting Metashape calculate the number of pages.

RE:  RizomUV & UV optimization - I admit that unwrapping and UV editing isn't a process that I've mastered yet, as I am often pressed for time in post-processing. Bzuco, I've seen you mention your general process in several of your posts. Would you recommend any learning resources in particular? Many of my models are quite large (200M faces is usually a minimum) - do you think your workflow lends itself well to models of that size? Or are you perhaps splitting models in chunks (http://wiki.agisoft.com/wiki/Split_in_chunks.py) or generating models in blocks that are more reasonable to work with in third-party applications?
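Regarding split_in_chunks.py: its core idea is just partitioning the chunk's bounding region into a grid and processing each cell separately (the real script then copies the chunk and assigns each sub-region via the Metashape Python API). A pure-geometry sketch of that partitioning step, with hypothetical (x, y) region-local coordinates:

```python
def grid_regions(center, size, nx, ny):
    """Split a bounding box (center, size) into an nx x ny grid of
    sub-regions, each returned as (sub_center, sub_size). This is the
    partitioning idea behind chunk-splitting scripts; it does not touch
    any Metashape objects itself."""
    cx, cy = center
    sx, sy = size
    cell = (sx / nx, sy / ny)
    origin = (cx - sx / 2, cy - sy / 2)  # lower-left corner of the box
    regions = []
    for i in range(nx):
        for j in range(ny):
            c = (origin[0] + (i + 0.5) * cell[0],
                 origin[1] + (j + 0.5) * cell[1])
            regions.append((c, cell))
    return regions

# A 10 x 10 box split into a 2 x 2 grid of 5 x 5 cells
print(grid_regions((0.0, 0.0), (10.0, 10.0), 2, 2))
```

Each cell can then be meshed/textured on its own, which is why per-block processing keeps peak RAM low at the cost of seams between blocks.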

RE:  Poly count - I'm trying to retain highly detailed architectural geometry and generate highly detailed textures that can be reprojected onto a simplified mesh. I'm also trying to maximize remote inspection capabilities for common defects (e.g., displaced masonry, cracks, etc.). Perhaps I'm underestimating Metashape's ability to maintain high face counts around ornate geometry when simplifying the model.
« Last Edit: November 02, 2024, 11:26:12 PM by Tas »

Tas

Re: RAM Usage - Building Textures
« Reply #6 on: November 02, 2024, 11:35:34 PM »
I'll also add that there's always a very high number of faces directly adjacent to laser scan stations (when generating the mesh from depth maps + laser scans), and that high face count is maintained after model simplification. The attached snapshot shows what I'm talking about, even after reducing the model face count by 50% overall.

Maybe I'm being too conservative, but I think a compromise that I should implement more aggressively is isolating the vegetation and decimating it by selected faces.


All this being said, 200M faces is not a large model at all (for me), and I thought I would be done with RAM issues for small projects when I invested in 512GB of RAM. Regardless of the measures I take, I do feel like this shouldn't even be a conversation at this scale.

Bzuco

Re: RAM Usage - Building Textures
« Reply #7 on: November 03, 2024, 12:19:02 AM »
Nira uses some kind of virtual texturing system, where one big texture is streamed to the client in smaller chunks. For them it is probably easier to process one big texture, but several smaller ones shouldn't be a problem either. From a web server's perspective, every texture (or even a small file) creates one request from the client to the server. This is a problem on web pages like forums, where there are many small images (e.g., icons, banners) and a lot of people reading pages at the same time. I don't think there is nearly as much pressure on a Nira server as on a web-hosting server running many portals visited by hundreds or thousands of people every second.

Generally, smaller textures are needed to speed up 3D rendering (games, game engines). GPUs have VRAM and caches. If a texture (and its smaller versions, called mips) cannot fit in the small cache (just a few MB on older GPUs, 24-72 MB on RTX 4xxx), it is sampled only from VRAM, which is slower. For one model and a simple shader it doesn't matter much, but once you use normal maps, diffuse maps, and other maps, it starts to slow down rendering, especially on low-end desktop GPUs or mobile GPUs.
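To put numbers on the cache argument above: an uncompressed RGBA texture's footprint is width x height x 4 bytes, and a full mip chain adds about a third more (the geometric series 1 + 1/4 + 1/16 + ... = 4/3). A quick sketch that ignores GPU block compression:

```python
def texture_vram_mb(size_px, bytes_per_texel=4, with_mips=True):
    """Approximate GPU memory for one square uncompressed texture:
    width * height * bytes-per-texel, plus ~1/3 extra for the mip
    chain (sum of the series 1 + 1/4 + 1/16 + ... = 4/3)."""
    base = size_px * size_px * bytes_per_texel
    total = base * 4 / 3 if with_mips else base
    return total / (1024 * 1024)

print(texture_vram_mb(8192))   # one 8K page, with mips
print(texture_vram_mb(32768))  # one 32K page, with mips
```

An 8K page is ~256 MB before mips; a 32K page is ~4 GB, far beyond any on-chip cache, which is one reason real-time viewers prefer several smaller pages.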

If you need a model of the building for inspection (cracks, etc.), I would just use the point cloud. If you need a high-poly 3D model with textures, you can try local network processing, where all tasks are automatically split into smaller chunks that cost almost no RAM. Here is a great video about that: https://www.youtube.com/watch?v=BYRIC-qkZJ8

About the high-polycount model: if you have a good viewer (e.g., tiled rendering, or the model fully loaded in GPU VRAM, which is fast) and you do not need to edit the model, then you can use a 200M poly model, or even higher. But if you need to edit the model in DCC apps, then even a single 4M poly chunk is a problem (selecting polygons, moving them, using the undo system, ...).

RizomUV - I use automatic model splitting to create UV seams -> islands, where I can decide how big I want them based on the distortion. For a 200M poly model in one chunk this would take several hours. So a high-poly model is really only good for baking normal maps from high poly to low poly, and in that case you need UV mapping only on the low-poly model; the high-poly one does not need UV mapping at all.
To learn RizomUV, their official tutorials on YouTube are a good start.
This is my largest hobby project, with info in the first comment: https://www.reddit.com/r/photogrammetry/comments/1c8rg4j/camera_fly_over_the_old_unfinished_cable_car/

Tas

Re: RAM Usage - Building Textures
« Reply #8 on: November 03, 2024, 12:34:32 AM »
Thanks, Bzuco.

I understand and agree that geometric evaluations, deflections, etc. are much better served by point clouds. I definitely don't want to use a point cloud for visual inspection, though, as the photorealistic nature of the texture is critical for the type of inspections/assessments I do. Even the textures often aren't good enough, which is why I'm using Nira for streamlined viewing of the source photography. If I were doing this just for myself it might be a different story, but other folks and clients need a very user-friendly UI.

I'll follow up with a response to the rest of your message, but thank you for the detailed write-up in the meantime.


Processing update:  I'm now generating 8K textures for a model with ~44M faces. I left the pixel size at ~0.0012 m because I really don't want to compromise on that. The in-progress log still indicates the same number of 8K texture pages as it did for the 171M poly model (195), and it's rendering 65 pages at a time. I'm concerned that I'm going to run into the same error, because the process is only using 10% less RAM during the "detecting outliers" step (compared to when it was rendering four 32K textures at a time). If anyone is knowledgeable enough to gauge whether this is going to fail 8+ hours from now, I'd be wildly appreciative. Unfortunately, I do not have time to reprocess the model in blocks.

I really wish I understood why I'm using 320GB of RAM for an initial processing step in texturing a 44M poly model (albeit at a somewhat high resolution).


Edit:  Peak RAM usage at this stage is now identical to the 170M poly, 32K texture blending. I'm at a complete loss.
« Last Edit: November 03, 2024, 02:07:02 AM by Tas »

Tas

Re: RAM Usage - Building Textures
« Reply #10 on: November 03, 2024, 03:59:49 AM »
As I worried, I ran into the same bad allocation error with 8K textures and the 44M poly model...

Tas

Re: RAM Usage - Building Textures
« Reply #11 on: November 03, 2024, 06:23:56 AM »
Update:  I deviated from project specifications and cut the pixel size in half. The texture process completed successfully, but there are incorrectly mapped texture artifacts throughout the model. Fingers crossed that someone has found a solution or workaround for this issue, as it seems to have been previously reported here:  https://www.agisoft.com/forum/index.php?topic=15359.0
« Last Edit: November 03, 2024, 06:37:35 AM by Tas »

vineg

  • Newbie
  • Posts: 34
Re: RAM Usage - Building Textures
« Reply #12 on: November 03, 2024, 10:43:47 AM »
Hi!

You may try processing the original model; the number of triangles should not affect memory usage as much as the texture resolution does. Also, disable the ghosting filter to reduce usage a little.

Regards, Egor

Bzuco

Re: RAM Usage - Building Textures
« Reply #13 on: November 03, 2024, 11:38:03 AM »
A few of those texture artifacts may also just be T-vertices (vertices lying on an edge; they can occur after mesh decimation) causing dark shading.

Tas

Re: RAM Usage - Building Textures
« Reply #14 on: November 03, 2024, 04:21:15 PM »
The texture artifacts are present even when the texture is produced on the undecimated model...

This happens in both v2.1.3 and v2.2.0.
« Last Edit: November 03, 2024, 04:26:15 PM by Tas »