TL;DR: Would a personal render farm significantly speed up dense cloud creation compared to a single hexacore Core i7 @ 4.5 GHz?
A friend has asked me to help him pick the best configuration for his company, which renders 3D images of landscapes and buildings. After both of us did a good amount of research, we still can't come to a firm conclusion about the most efficient setup for cutting his rendering time.
His current system takes about 4-4.5 hours to build a 1.7 million point dense cloud (apologies if my terminology is off; I only learned about all of this a few days ago) at medium quality, and he would like to step up to high quality without the processing time becoming exorbitant. He's running a rig with a Core i5, 8 GB of DDR3, and a mid-range Nvidia card. I offered to test the job on my gaming rig since it's a lot more powerful than his: 20 GB of DDR3 in triple channel (one stick in single), a Samsung 840 Pro SSD, an Nvidia GTX 780, and a Core i7-4820K clocked at 3.7 GHz. The data set is 197 pictures from a standard digital camera, each about 4 MB.
It took me 5 hours to build 2.9 million points, a considerably larger cloud than what he had been rendering. Mine is a quad-core, and I'm assuming his is as well; he didn't tell me what it's clocked at.
I've read the recommended performance specs for PhotoScan, which call for a hexacore i7, but AnandTech has the Xeons beating almost all of the Core i7s, with the exception of the Extreme Edition. I've also read that fewer cores at a higher clock speed is better, yet AnandTech begs to differ, showing that 12 cores per CPU clocked at 2.6 GHz reigns supreme.
He essentially wants to cut the dense cloud processing time down as much as possible. He originally told me he would be willing to spend $10,000, but now says he's willing to spend more if it's worth it. He would like to buy a system from Boxx Tech, and I have one config with dual hexacore Xeon E5-2643 v3 CPUs @ 3.5 GHz and 32 GB of DDR4, and another with a hexacore Core i7 clocked at 4.5 GHz, but we can't decide which would be better. Seeing as a highly clocked i7 probably wouldn't cut the time down as drastically as he would like, I started looking into their
RenderBoxx render farms, which have dual Xeons per node and two nodes per module, fitting a total of 5 modules. The question is: would distributed computing work with PhotoScan, and would it (say 4 nodes, 32 cores) significantly reduce the time compared to a single hexacore Core i7 @ 4.5 GHz?
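For reference, here's the back-of-envelope math I've been using to weigh cores against clock speed. It's just Amdahl's law with a guessed parallel fraction p (an assumption, not a measured PhotoScan number), and it optimistically treats the farm's 32 cores as one big machine with no network or per-node overhead:

```python
# Rough Amdahl's-law comparison of cores vs. clock.
# p = assumed parallelizable fraction of the dense cloud step (a guess, not measured).

def relative_speed(cores, clock_ghz, p):
    """Throughput relative to a 1-core, 1 GHz baseline, assuming perfect clock scaling."""
    amdahl_speedup = 1.0 / ((1 - p) + p / cores)
    return amdahl_speedup * clock_ghz

for p in (0.90, 0.98):
    i7 = relative_speed(cores=6, clock_ghz=4.5, p=p)    # hexacore i7 @ 4.5 GHz
    farm = relative_speed(cores=32, clock_ghz=2.6, p=p)  # 4 nodes, 32 cores @ ~2.6 GHz
    print(f"p={p:.2f}: farm is {farm / i7:.2f}x the hexacore i7")
```

Depending on whether you assume p = 0.90 or p = 0.98, the 32-core farm works out to roughly 1.1x to 2x the throughput of the hexacore i7, so whether the farm is worth the money really hinges on how well PhotoScan's dense cloud step actually parallelizes and distributes across nodes.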