Author Topic: System Performance improvement for high density models


« on: June 15, 2015, 07:44:37 PM »
Hi all,

I have been slowly going through this forum and have basically given up trying to find an answer by search.

My question is this: given that a typical orthophoto or DEM can cover a few square kilometers, the number of points generated is immense, especially when the intended resolution of the model is high. Which system parameters matter most for handling these large models, or is there a strategy for doing so?
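For a sense of scale, here is a back-of-the-envelope memory estimate. This is only a sketch: the 2 billion point count and the per-point layout (float32 position and normal plus 8-bit RGB) are assumptions, not measured values from my project.

```python
# Rough memory estimate for holding a dense point cloud in RAM.
# Assumed layout per point: 3 x float32 position + 3 x float32 normal + 3 x uint8 color.
points = 2_000_000_000                    # assumed count ("a few billion is typical")
bytes_per_point = 3 * 4 + 3 * 4 + 3 * 1   # 27 bytes under the assumed layout
total_gib = points * bytes_per_point / 2**30
print(f"{total_gib:.0f} GiB")             # ~50 GiB just for the raw cloud
```

So even before the viewer's own overhead (spatial indices, GPU buffers, undo history), a cloud like this eats a large slice of RAM, which is why interaction gets painful long before the machine is "out" of memory.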

I have been generating rather dense models recently; a few billion points is typical. I would also like to do some editing (remove extra points such as tree overhang), but the model is so immense that even rotating it is a pain for my system.

A typical setup is as follows: I fly a GPS-locked D5300, and a typical photoset is about 1,200 images. I then put them all in one chunk and align photos with an 80k key point limit, high accuracy, a tie point limit of 0, and no preselection.

For the dense cloud, my typical setting is medium quality with moderate depth filtering.

For the mesh: arbitrary surface type, dense cloud as the source, a face count of 0, and the rest at defaults.

By this point the output is so heavy that even rotating it is a problem, much less editing it. My work requires this level of detail, as I am doing modelling and calculations from it. My machine is a Dell Precision T5610: dual E5-2650 v2, 256 GB RAM, and SSDs. If I were to upgrade, what would help most: a GPU or a better CPU?

Sorry for the wall of text