
Author Topic: Decreasing Dense Point Cloud Rendering Time  (Read 10690 times)

brando56894

  • Newbie
  • *
  • Posts: 2
    • View Profile
Decreasing Dense Point Cloud Rendering Time
« on: February 23, 2015, 08:12:31 AM »
TL;DR: Would a personal render farm significantly speed up dense cloud creation compared to a single hexacore Core i7 @ 4.5 GHz?

I have a friend who tasked me with helping him select the best configuration for his company, which will be rendering 3D images of landscapes/buildings. After a good amount of research, neither of us can come to a firm conclusion about the most efficient setup to decrease his rendering time.

His current system takes about 4-4.5 hours to render 1.7 million points in a dense cloud (apologies if my terminology is incorrect, I just learned all about this a few days ago ;) ) at medium quality. He would like to step up to high quality without it taking an exorbitant amount of time. He's using a rig with a Core i5 and 8 GB of DDR3, along with a mid-range Nvidia card. I told him to let me test it on my gaming rig, since it's a lot more powerful than his: I have 20 GB of DDR3 in triple channel (one stick in single), a Samsung 840 Pro SSD, an Nvidia GTX 780, and a Core i7 4820K clocked at 3.7 GHz. The data set is 197 pictures from a standard digital camera; each picture is about 4 MB.

It took me 5 hours to render 2.9 million points, which is a far larger cloud than what he was trying to render before. I have a quad-core, and I'm assuming he does too; he didn't tell me what his is clocked at.
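
For a rough comparison, here are the two runs in points per hour (with the obvious caveat that different projects and quality settings aren't strictly comparable):

Code: [Select]
# Rough points-per-hour comparison using the numbers above.
# Caveat: different projects/settings, so only a ballpark figure.
friend_rate = 1.7e6 / 4.25   # his rig: ~4-4.5 h for 1.7M points
my_rate = 2.9e6 / 5.0        # my rig: 5 h for 2.9M points

print(f"His rig: {friend_rate:,.0f} points/hour")
print(f"My rig:  {my_rate:,.0f} points/hour")
print(f"Ratio:   {my_rate / friend_rate:.2f}x")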

I've read the recommended performance specs for PhotoScan, which call for a hexacore i7, but AnandTech has the Xeons beating almost all of the Core i7s, with the exception of the Extreme Edition. I've also read that fewer cores and a higher clock speed are better, yet AnandTech begs to differ, showing that 12 cores per CPU clocked at 2.6 GHz reigns supreme.

He essentially wants to cut the processing time for dense cloud computation down as much as possible. He originally told me that he would be willing to spend $10,000, but now tells me that he's willing to spend more if it's worth it. He would like to buy a system from Boxx Tech; I have one config with dual hexacore Xeon E5-2643 v3 CPUs @ 3.4 GHz with 32 GB DDR4, and another with a hexacore Core i7 clocked at 4.5 GHz, but we can't decide which would be better. Seeing as how a highly clocked i7 probably wouldn't cut down the time as drastically as he would like, I started looking into their RenderBoxx render farms, which have dual Xeons per node and two nodes per module, fitting a total of 5 modules. The question is: would distributed computing work with PhotoScan, and would it (let's say 4 nodes, 32 cores) significantly reduce the time compared to a single hexacore Core i7 @ 4.5 GHz?
« Last Edit: February 23, 2015, 09:26:20 AM by brando56894 »

Marcel

  • Sr. Member
  • ****
  • Posts: 309
    • View Profile
Re: Decreasing Dense Point Cloud Rendering Time
« Reply #1 on: February 23, 2015, 01:00:31 PM »
If the plan is to run scans at Medium or High, then there is no need for a render farm. My computer is an i7 4930K @ 3.4 GHz with an AMD R9 290 GPU, and it takes about 4 hours for a point cloud with 250+ million points (50 large photos at Ultra setting).

The Dense Cloud processing depends mostly on the GPU. A fast GPU will speed it up a lot (dual GPUs a bit more). The second half of the Dense Cloud reconstruction (depth map filtering) depends on the CPU, so the CPU also needs to be fast. Dual CPUs aren't necessarily much faster for big projects; clock speed is very important.

Don't rely on the AnandTech benchmarks too much. They use a very small project, so the results are a bit skewed; for big projects that run for multiple hours, the performance picture might be different.

To run PhotoScan on a render farm you need a Pro license for each computer, in which case the licenses cost about as much as the computers themselves. If you are doing relatively small projects, it's much better to just get a fast computer.
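
To put rough numbers on that, here's a toy cost sketch in Python; all prices are made-up placeholders, not real quotes:

Code: [Select]
# Toy cost comparison: render farm nodes (each needing its own Pro
# license) vs. one fast workstation. All prices are hypothetical
# placeholders -- check Agisoft's actual pricing before deciding.
node_hw = 4000        # hypothetical hardware cost per farm node ($)
pro_license = 3500    # hypothetical Pro license cost per node ($)
workstation = 8000    # hypothetical single fast machine ($)

for nodes in (2, 4, 8):
    farm = nodes * (node_hw + pro_license)
    print(f"{nodes} nodes: ${farm:,} vs. one workstation: ${workstation:,}")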

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Decreasing Dense Point Cloud Rendering Time
« Reply #2 on: February 23, 2015, 06:42:09 PM »

brando56894

  • Newbie
  • *
  • Posts: 2
    • View Profile
Re: Decreasing Dense Point Cloud Rendering Time
« Reply #3 on: February 23, 2015, 09:14:27 PM »
Thanks for the info, guys. Seeing how clock speed is more important than core count, would something like this

Code: [Select]
Core i7 Hexacore @ 4.5 GHz
64 GB DDR3 1600 (8x8 GB)
2x GeForce 980 GTX in SLI

be better than this?

Code: [Select]
Dual Xeon E5-2643 v3 @ 3.4 GHz, 20 MB L3 cache, hexacore
64 GB DDR4-2133 Registered ECC, 8x8 GB, Quad-Channel
2x GeForce 980 GTX in SLI


Also, I've read that there is marginal gain beyond 2 GPUs, but in the thread that was linked above someone said that they saw a pretty big decrease in rendering time going from 2 to 3 GPUs, and more going from 3 to 4. Since his main workload will be in PhotoScan, would it be better to just get 4 GPUs? If that's the case, wouldn't it be better to get the Xeon config, since it has two memory controllers and will have four x16 PCIe slots?

igor73

  • Full Member
  • ***
  • Posts: 228
    • View Profile
Re: Decreasing Dense Point Cloud Rendering Time
« Reply #4 on: February 23, 2015, 10:23:07 PM »
Don't forget that the GPU will only affect part of the dense cloud generation. It won't help at all with aligning images or mesh generation, and aligning images is often time-consuming on large projects.

As for the best rig to build, I can't help much; I'm trying to figure this out too. There are many things to consider, as I will use my rig for other tasks as well. Looking forward to the advice you will get here.

Marcel

  • Sr. Member
  • ****
  • Posts: 309
    • View Profile
Re: Decreasing Dense Point Cloud Rendering Time
« Reply #5 on: February 23, 2015, 11:33:29 PM »
More than 2 GPUs is really not worth it. This is in large part because the Dense Cloud reconstruction phase is only about 50% GPU-accelerated; the other half (depth map filtering) runs completely on the CPU. So going from 2 to 4 GPUs only makes the processing about 40% faster.
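
As a rough sanity check, here's a simple Amdahl-style estimate, assuming the dense cloud stage really is a 50/50 GPU/CPU split and that the GPU half scales linearly with GPU count (which is optimistic):

Code: [Select]
# Amdahl-style estimate: if ~50% of dense cloud time is GPU work
# (depth maps) and ~50% is CPU-only (depth map filtering), extra
# GPUs only shrink the first half. Linear GPU scaling is optimistic.
gpu_fraction = 0.5

for gpus in (1, 2, 3, 4):
    t = (1 - gpu_fraction) + gpu_fraction / gpus  # relative total time
    print(f"{gpus} GPU(s): {1 / t:.2f}x vs. a single GPU")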

(This is apart from the fact that you would need a very big power supply, it's hard to fit more than 2 GPUs in a case, and you'll probably run into heat problems.)

I can't tell exactly whether the single- or dual-CPU setup will be faster (and it is also a matter of cost vs. performance). Our experience is that a fast single-CPU system is faster, but we are not using the exact same CPUs as in your example.

Unless you are absolutely sure that you have a ton of PhotoScan work coming up, I would start with a decent but not ridiculously overspecced machine. You can always buy a second computer later on, when you know exactly what you need (and have it dedicated to processing).