
Author Topic: Reducing memory load for high/ultra quality settings  (Read 8728 times)

RalfH

  • Sr. Member
  • Posts: 344
Reducing memory load for high/ultra quality settings
« on: May 09, 2013, 02:20:12 PM »
When creating a dense point cloud, Photoscan appears to first load all images, leaving them untouched (for ultra) or resampling them to 1/2 (for high), to 1/4 (for medium), and so on. I assume that the resampled images are then undistorted (?). Judging from the memory load, it looks as if all of these images are then stored uncompressed in RAM. This seems to be the main reason for the high memory load at high and ultra quality settings during dense point cloud creation.
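
To put a rough number on this - a back-of-envelope sketch in Python, where the 1/2 and 1/4 scale factors and the 3 bytes per pixel are only my assumptions from watching the memory load, not confirmed Photoscan internals:

Code:
def estimate_ram_gb(num_images, width, height, quality="high", bytes_per_pixel=3):
    """Rough size of all photos held uncompressed in RAM at once.
    Assumes the quality factor applies to the total pixel count and
    that each pixel costs bytes_per_pixel bytes (8-bit RGB)."""
    scale = {"ultra": 1.0, "high": 0.5, "medium": 0.25}[quality]
    return num_images * width * height * scale * bytes_per_pixel / 1024**3

# Example: 300 photos from a 21 MP camera (5616 x 3744 pixels) at "high"
print(round(estimate_ram_gb(300, 5616, 3744, "high"), 1), "GB")   # ~8.8 GB

Numbers like that would explain why 16 GB runs out quickly once the depth maps themselves need memory as well.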

Would it be possible to change memory use so that only the photos needed to create the current depth map are loaded into RAM? Of course this would slow things down when memory is not a limiting factor, so perhaps this functionality could be activated with a tick box. That way, creating high and ultra quality dense point clouds for large projects would be possible even with limited RAM (16 GB in my case).
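
A minimal sketch of what I mean, in Python - an LRU cache that keeps at most a fixed number of decoded photos in RAM and reloads the rest from disk on demand (the names are made up; this is obviously not Photoscan code):

Code:
from collections import OrderedDict

class ImageCache:
    """Keep at most max_images decoded photos in RAM; reload others on demand."""
    def __init__(self, loader, max_images=8):
        self.loader = loader          # callable: path -> decoded image
        self.max_images = max_images  # RAM budget expressed in images
        self._cache = OrderedDict()

    def get(self, path):
        if path in self._cache:
            self._cache.move_to_end(path)    # mark as recently used
            return self._cache[path]
        image = self.loader(path)            # hit the disk only when needed
        self._cache[path] = image
        if len(self._cache) > self.max_images:
            self._cache.popitem(last=False)  # evict the least recently used photo
        return image

The depth map loop would then call get() only for the photos that overlap the current image, instead of holding the whole image set in memory.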
« Last Edit: May 09, 2013, 02:29:40 PM by RalfH »

JRM

  • Jr. Member
  • Posts: 81
Re: Reducing memory load for high/ultra quality settings
« Reply #1 on: May 16, 2013, 06:21:29 PM »
The TPIE library may be the way to achieve this. It is designed for working with large amounts of data without choking the memory, such as efficiently kriging a billion-point cloud with 80 MB of RAM. It is LGPL-licensed, so it can be used by Photoscan; proprietary software is already using it (scalgo.com?).

www.madalgo.au.dk/tpie/

Quote
TPIE is a library for manipulating large datasets on a desktop computer. TPIE provides implementations of several external memory algorithms and data structures as well as a framework for efficient and portable data processing. Currently, TPIE provides implementations of the following algorithms and data structures:

    External memory merge sorting
    Internal parallel quick sort
    Implementation of Sanders' fast priority queue for cached memory
    Simple buffered stacks and queues

These implementations are backed by the memory manager of TPIE that allows the user to specify how much internal memory to use at most. The progress reporting framework enables reporting accurate progress metrics.

TPIE is written in C++ and depends on the Boost C++ libraries. TPIE may be compiled with GCC, Clang and MSVC, and due to its portable nature, it should be easy to port TPIE to other systems if this is required.
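
For anyone curious what "external memory" processing looks like in practice, here is a small Python illustration of the external merge sort idea - this is not TPIE's API, just the concept it implements:

Code:
import heapq, pickle, tempfile

def _write_run(run):
    """Spill one sorted run to a temporary file, one pickled item at a time."""
    f = tempfile.TemporaryFile()
    for item in sorted(run):
        pickle.dump(item, f)
    f.seek(0)
    return f

def _read_run(f):
    """Stream items back from a run file without loading it whole."""
    while True:
        try:
            yield pickle.load(f)
        except EOFError:
            return

def external_sort(items, run_size=1000000):
    """Sort an arbitrarily long stream while holding only ~run_size items in RAM."""
    run_files, run = [], []
    for item in items:
        run.append(item)
        if len(run) >= run_size:
            run_files.append(_write_run(run))
            run = []
    if run:
        run_files.append(_write_run(run))
    # k-way merge of the sorted runs; heapq.merge reads them lazily
    return heapq.merge(*(_read_run(f) for f in run_files))

TPIE provides the same kind of building blocks in C++, with a memory manager that enforces the RAM limit for you.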

pedder

  • Newbie
  • Posts: 1
Re: Reducing memory load for high/ultra quality settings
« Reply #2 on: June 21, 2013, 03:48:15 PM »
Yes, I also have only 16 GB of RAM available and often can't create a mesh in high quality mode... At the very end, after waiting several hours, I just get the message that there is not enough RAM. Is it possible to enable some disk caching? Or at least warn me that the create geometry task is in danger of failing because of the high RAM usage? ...It's a little bit frustrating to have my machine blocked for 5 hours with no result.

Thanks a lot.
pedder

RalfH

  • Sr. Member
  • Posts: 344
Re: Reducing memory load for high/ultra quality settings
« Reply #3 on: June 21, 2013, 04:13:37 PM »
pedder,

mesh creation requires even more RAM than dense point cloud creation. When you want to create a meshed model, Photoscan will first compute depth maps, then a dense point cloud, and finally start meshing - only to find out at that point (after some hours) that it can't do it.

That's why I have stopped working with high quality meshes and use Photoscan mostly to create dense point clouds (using file >> export points) and process the point clouds in external software. Also, when working with Photoscan I always look at RAM usage - of course, Windows will use the swap file on the hard disc, but that makes processing so slow that there is no point letting it run.
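
Something like this (Python with the psutil package) is enough to keep an eye on it; the 90% threshold and 30-second interval are just example values:

Code:
import time
import psutil

def watch_ram(threshold=90.0, interval=30):
    """Print a warning whenever system RAM usage exceeds threshold percent."""
    while True:
        vm = psutil.virtual_memory()
        if vm.percent >= threshold:
            print("Warning: %.0f%% RAM in use, %.1f GB free - processing may start swapping"
                  % (vm.percent, vm.available / 1024.0**3))
        time.sleep(interval)

watch_ram()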

As I have said, creating dense point clouds may still work for projects that would require much more RAM for meshing. But for large projects and/or ultra quality settings, RAM becomes a limiting factor even for creating dense point clouds. And at this point, at least, I think it would be relatively easy to change the software to require less RAM.

tweezlednutball

  • Newbie
  • Posts: 33
Re: Reducing memory load for high/ultra quality settings
« Reply #4 on: August 11, 2013, 07:55:36 AM »
What we are doing is manually creating separate chunks, building a mesh for smaller areas of the whole model, and then combining the meshes afterwards. It's slow, annoying and clunky, but it works.

There could be an option for creating blocks inside the bounding box that are meshed separately and combined at the end, batch style. That would probably be a lot easier to program than adding new APIs and changing the meshing algorithm. I also noticed that OpenCL 2.0 now has support for depth maps, so maybe we could see some GPU acceleration for meshing?  ;)
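
Splitting the bounding box would not take much code either - a rough Python sketch of the idea (the plain (x_min, y_min, x_max, y_max) tuples are made up for illustration, not the Photoscan API):

Code:
def split_region(x_min, y_min, x_max, y_max, nx, ny, overlap=0.05):
    """Split a bounding region into nx * ny tiles with a small overlap
    so the per-tile meshes can be stitched together without gaps."""
    dx = (x_max - x_min) / float(nx)
    dy = (y_max - y_min) / float(ny)
    pad_x, pad_y = dx * overlap, dy * overlap
    tiles = []
    for i in range(nx):
        for j in range(ny):
            tiles.append((
                max(x_min, x_min + i * dx - pad_x),
                max(y_min, y_min + j * dy - pad_y),
                min(x_max, x_min + (i + 1) * dx + pad_x),
                min(y_max, y_min + (j + 1) * dy + pad_y)))
    return tiles

# Example: cut one bounding box into a 3 x 3 grid of overlapping tiles
for tile in split_region(0, 0, 300, 300, 3, 3):
    print(tile)

Each tile could then be meshed on its own and merged at the end, which is basically what we already do by hand with chunks today.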