
Author Topic: dense cloud processing time with ultraHigh (full pixel) setting  (Read 7918 times)

chinnchilla84

Dear all,

I am relatively new to Agisoft and photogrammetry and consider myself to still be in the experimental phase.

Since I was getting out-of-memory errors during ultra-high dense cloud processing of a model, I have upgraded my workstation to 64 GB of RAM.

Now, with a set of just 300 images (14 MP each), the depth filtering step is taking a very long time. The whole dense cloud processing took only a couple of hours at the High setting, and now, one level higher on the same machine, I am facing around 50 hours just for the filtering stage.

Here are some hardware specs:

Intel i7-5820K @4.3GHz 6/12 cores >> 1250 points in Cinebench 15
64GB DDR4 2400MHz
Intel SSD, 2,200 MB/s read / 900 MB/s write
1x Geforce GTX 980

For the filtering I have used the "Moderate" setting. Even though the CPU is running at 100% constantly, the power draw is quite low at this stage, and unfortunately the GPU is not used during filtering at all. It feels to me as if something is not very efficient here - please correct me if I'm wrong.

Does anyone have an idea how to speed up this specific step (depth map filtering)?
Or is this processing time normal, and do I simply need more potent or specialized (e.g. Xeon) processors?

Thanks and regards,
Bodo
« Last Edit: August 02, 2016, 11:06:17 PM by chinnchilla84 »

chinnchilla84

  • Newbie
  • *
  • Posts: 4
    • View Profile
Re: dense cloud processing time with ultraHigh (full pixel) setting
« Reply #1 on: August 03, 2016, 03:08:45 PM »
In total it took 64.6 hours to calculate the dense cloud in Ultra High mode with the Moderate filtering setting:

2016-07-31 08:03:25 Using device: GeForce GTX 980, 16 compute units, 4096 MB global memory
2016-07-31 08:03:25   max work group size 1024
2016-07-31 08:03:25   max work item sizes [1024, 1024, 64]
2016-07-31 08:03:25   max mem alloc size 1024 MB
2016-07-31 08:03:25 Initializing...
2016-07-31 08:03:25 initializing...
2016-07-31 08:03:25 selected 269 cameras from 294 in 0.459 sec
2016-07-31 08:03:29 Loading photos...
2016-07-31 08:04:57 loaded photos in 87.753 seconds
2016-07-31 08:04:57 Reconstructing depth...
2016-07-31 08:04:57 [CPU] estimating 3583x2882x992 disparity using 1195x961x8u tiles, offset -355
...
2016-07-31 14:34:12 [CPU] estimating 3332x4664x544 disparity using 1111x933x8u tiles, offset -261
2016-07-31 14:34:50 timings: rectify: 0.468 disparity: 37.921 borders: 0.205 filter: 0.594 fill: 0
2016-07-31 14:34:56 finished depth reconstruction in 23399 seconds
2016-07-31 14:34:56 Device 1 performance: 237.05 million samples/sec (CPU)
2016-07-31 14:34:56 Device 2 performance: 970.722 million samples/sec (GeForce GTX 980)
2016-07-31 14:34:56 Total performance: 1207.77 million samples/sec
2016-07-31 14:34:56 Generating dense point cloud...
2016-07-31 14:35:08 selected 269 cameras in 12.505 sec
2016-07-31 14:35:08 working volume: 6523x5978x6244
2016-07-31 14:35:08 tiles: 2x1x2
2016-07-31 14:35:08 selected 258 cameras
2016-07-31 14:35:08 preloading data... done in 24.356 sec
2016-07-31 14:35:33 filtering depth maps... done in 51463.9 sec
2016-08-01 04:53:18 preloading data... done in 1055.89 sec
2016-08-01 05:10:54 accumulating data... done in 58.68 sec
2016-08-01 05:11:54 building point cloud... done in 2.796 sec
2016-08-01 05:11:59 selected 247 cameras
2016-08-01 05:11:59 preloading data... done in 24.124 sec
2016-08-01 05:12:24 filtering depth maps... done in 48426.8 sec
2016-08-01 18:39:32 preloading data... done in 375.422 sec
2016-08-01 18:45:48 accumulating data... done in 58.9 sec
2016-08-01 18:46:49 building point cloud... done in 3.135 sec
2016-08-01 18:46:55 selected 264 cameras
2016-08-01 18:46:55 preloading data... done in 25.294 sec
2016-08-01 18:47:20 filtering depth maps... done in 52626.4 sec
2016-08-02 09:24:29 preloading data... done in 411.181 sec
2016-08-02 09:31:20 accumulating data... done in 63.145 sec
2016-08-02 09:32:25 building point cloud... done in 2.022 sec
2016-08-02 09:32:31 selected 262 cameras
2016-08-02 09:32:31 preloading data... done in 27.209 sec
2016-08-02 09:32:58 filtering depth maps... done in 53501.8 sec
2016-08-03 00:24:42 preloading data... done in 405.837 sec
2016-08-03 00:31:28 accumulating data... done in 62.69 sec
2016-08-03 00:32:32 building point cloud... done in 2.203 sec
2016-08-03 00:32:39 134365217 points extracted
2016-08-03 00:32:40 Finished processing in 232156 sec (exit code 1)

I wonder why the filtering is done four times, as seen in the log above.

The resulting point cloud looks good and contains ~90 million points.

Regarding the CPU power consumption, I am measuring around 50 W less than "possible":

Workstation idle: ~80 W
During CPU-only depth filtering in PhotoScan: ~175 W
During a heavy CPU-only rendering task: ~225 W

stihl

Re: dense cloud processing time with ultraHigh (full pixel) setting
« Reply #2 on: August 03, 2016, 03:30:18 PM »
I believe PhotoScan splits the project volume into tiles and then filters the tiles separately, so that it does not fill up the RAM and lock up the computer as a result.
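A rough sketch of how such memory-driven tiling could work - note that the one-cell-per-byte cost and the 64 GB budget are my assumptions, not PhotoScan's actual internals. With the working volume from the log above, it happens to reproduce the reported 2x1x2 split:

```python
import math

def tile_split(volume, budget_cells):
    """Halve the longest axis of the working volume until a single
    tile fits into the memory budget (counted in grid cells)."""
    splits = [1, 1, 1]
    tile = list(volume)
    while tile[0] * tile[1] * tile[2] > budget_cells:
        axis = tile.index(max(tile))  # split the longest remaining axis
        splits[axis] *= 2
        tile[axis] = math.ceil(volume[axis] / splits[axis])
    return splits

# Working volume reported in the log: 6523 x 5978 x 6244, with a
# hypothetical budget of 64e9 cells (1 byte per cell on a 64 GB machine)
print(tile_split((6523, 5978, 6244), 64_000_000_000))  # -> [2, 1, 2]
```

This would also explain why more RAM does not make filtering faster per se: it only reduces the number of tiles, and each tile is still filtered against all cameras that see it.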

chinnchilla84

Re: dense cloud processing time with ultraHigh (full pixel) setting
« Reply #3 on: August 04, 2016, 12:18:45 PM »
That would explain the 4 filtering stages (2x1x2 tiles = 4 tiles).

In Preferences >> Advanced >> Project files

I have found the option to "keep depth maps". Perhaps this would prevent the software from processing the same data four times.

Is there a way to manipulate or at least see the "tiling"?

Best

Alexey Pasumansky

  • Agisoft Technical Support
Re: dense cloud processing time with ultraHigh (full pixel) setting
« Reply #4 on: August 04, 2016, 02:33:42 PM »
Hello Bodo,

The long dense cloud generation (and the depth filtering stage especially) seems to be related to excessive overlap: according to the screenshot fragment, there are many images that cover the same area (maybe the complete scene), so the filtering process takes a long time.

I can suggest disabling the "overview" images and the images taken at a very oblique angle - that should reduce the processing time considerably.

"Keep depth maps" option allows only to skip the depth maps estimation time, if you are using exactly the same processing parameters and camera positions.
Best regards,
Alexey Pasumansky,
Agisoft LLC

chinnchilla84

Re: dense cloud processing time with ultraHigh (full pixel) setting
« Reply #5 on: August 05, 2016, 11:52:06 AM »
Thank you for the hint, Alexey.

When I started with photogrammetry several weeks ago, I made many mistakes during shooting. The resulting models were full of errors and hard to work with.

Now that I have learned the basic principles, I am still unsure how many pictures are required and useful, and where the line is beyond which more pictures stop helping or even become counter-productive.

I will attach some more details on the miniature church model, which took so long during depth filtering. Are there rules of thumb for the number of images in a 360° oblique circle? For nadir images, I use 80/60% overlap. For this small-scale test scene, I just used a tripod and a wireless trigger on my DSLR.
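For what it's worth, the side-overlap rule can be turned into a back-of-the-envelope image count for a full orbit. This treats the circle like a flat strip (only fair when the subject is small relative to the camera distance), and the 35 mm lens on a 23.5 mm wide APS-C sensor is a made-up example, not my actual setup:

```python
import math

def orbit_image_count(focal_mm, sensor_width_mm, overlap):
    """Rough number of shots for a 360° circle: each shot advances by
    the fraction of the horizontal field of view NOT covered by overlap."""
    fov = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    step = fov * (1 - overlap)  # new angle covered per shot
    return math.ceil(360 / step)

# e.g. a 35 mm lens on a 23.5 mm wide APS-C sensor with 80% side overlap
print(orbit_image_count(35, 23.5, 0.80))  # -> 49
```

By this estimate, pushing overlap from 80% to 90% roughly doubles the image count for the same circle, which fits Alexey's point that overlap drives the filtering time.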

As a next step, I would like to reduce the noise in the point cloud, which becomes more prominent at higher settings, as seen in the screenshots below. My attempts with MeshLab failed (the program runs for more than 24 h without any visible progress bar and then crashes). CloudCompare seems to only classify points. Is there an affordable tool for this in-between step (de-noising dense point clouds)?
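One standard technique here is statistical outlier removal: drop points whose mean distance to their k nearest neighbours is far above the cloud-wide average. As far as I know, CloudCompare also ships this as its SOR filter (under Tools > Clean), so it may be worth a second look. A brute-force sketch of the idea - fine for a small subsample, far too slow for ~90 million points without a spatial index:

```python
import math
import statistics

def sor_filter(points, k=3, std_ratio=2.0):
    """Statistical outlier removal: keep points whose mean distance to
    their k nearest neighbours is below mean + std_ratio * stdev."""
    mean_knn = []
    for p in points:
        d = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)
    mu = statistics.fmean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    threshold = mu + std_ratio * sigma
    return [p for p, m in zip(points, mean_knn) if m <= threshold]

# 3x3 grid of well-behaved points plus one obvious floater
cloud = [(0.1 * i, 0.1 * j, 0.0) for i in range(3) for j in range(3)]
cloud.append((5.0, 5.0, 5.0))
cleaned = sor_filter(cloud)
print(len(cloud), "->", len(cleaned))
```

The k and std_ratio values above are just common defaults; on a real cloud they would need tuning so that fine surface detail is not thrown away along with the floaters.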

I have already created a highly reduced and optimized model for Sketchfab, with fast loading times in mind. Now I would like to create a more detailed version at around 200k polys, plus a normal map, from the ultra-high dense cloud.

Best regards,
Bodo
« Last Edit: August 05, 2016, 12:20:11 PM by chinnchilla84 »