
Author Topic: Best practice - fine detail on low poly output  (Read 2309 times)

Kjellis85

  • Full Member
  • ***
  • Posts: 220
  • Archaeological field supervisor
    • Contact information at University of Tromsø
Best practice - fine detail on low poly output
« on: June 22, 2018, 02:56:45 PM »
I am looking for a good workflow for mesh decimation without losing fine detail.

My subjects for this workflow are rock art sites. They are typically large areas, from 2x1 metres up to 50x5 metres. Some of these sites have carvings covering a large percentage of the surface, with typical carving depths of 1-5 mm. If I process 200+ (24 MP) images on ultra high, I will end up killing my hardware. Even if I could manage to process it, it would take a lot of processing hours, and the end result would be useless, because there is no way I could disseminate a 300M-poly model to any user, let alone publish it on any standard platform. So what I need is a good workflow that A) avoids spending time modelling "un-interesting" areas at higher quality than necessary whilst going ultra on points of interest, i.e. the carvings, and B) decimates the mesh without losing fine detail.
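To put rough numbers on why a uniform ultra-high mesh is infeasible here, the triangle budget can be estimated from the panel size and the carving detail. This is a back-of-envelope sketch, not from the thread: the assumptions that triangle edges need to be about half the feature size, that carvings cover 20% of the surface, and that 5 cm detail suffices elsewhere are all illustrative.

```python
import math

def faces_needed(width_m, height_m, detail_mm):
    """Rough triangle budget: to resolve features of size detail_mm,
    triangle edges should be about half that size (sampling at ~2x the
    feature frequency). An equilateral triangle of edge e has area
    (sqrt(3)/4) * e**2."""
    area_mm2 = width_m * 1000 * height_m * 1000
    edge_mm = detail_mm / 2
    tri_area_mm2 = (math.sqrt(3) / 4) * edge_mm ** 2
    return area_mm2 / tri_area_mm2

# Uniform meshing of a 50 x 5 m panel at 1 mm carving detail:
total = faces_needed(50, 5, 1.0)               # ~2.3 billion faces

# Splitting the surface: full detail only on the carved areas
# (assumed 20% coverage), coarse 5 cm detail on the rest:
carved = faces_needed(50, 5, 1.0) * 0.20
background = faces_needed(50, 5, 50.0) * 0.80

print(f"uniform: {total / 1e9:.1f} G faces")
print(f"split:   {(carved + background) / 1e6:.0f} M faces")
```

Even restricting full resolution to the carved areas still leaves a mesh in the hundreds of millions of faces, which is why millimetre relief usually ends up baked into normal or displacement maps on a decimated mesh rather than kept as geometry.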

I know this is a lot to ask, if at all possible, but I am hoping for some ideas that might solve some of my challenges.

HelloJens

  • Newbie
  • *
  • Posts: 10
Re: Best practice - fine detail on low poly output
« Reply #1 on: June 22, 2018, 04:00:25 PM »
For heavy processing we usually rent additional hardware. For about 400 euros you can get something like a 28-core Xeon with a 1080 Ti. Not sure where in the world you are sitting, but there are probably similar services available.

I am not sure there is a real substitute for processing power. You can be economical with your images to a degree, but after that it's down to how fast your machine can calculate.

Sometimes making a crappy model and then using that to mask your photos can reduce processing time, as can making sure to crop your region.

Alternatively you could use a 3D scanner to generate a high-quality mesh and only calculate a very basic mesh in PhotoScan, then swap those two out and project the texture in PS - best of both worlds.


In terms of a low-poly workflow, I'd take my high-res geometry into something like ZBrush. Fix and close holes using DynaMesh, then use ZRemesher to generate a low-poly version and generate UVs on that. Then use the high-resolution geometry to gradually reproject your detail. After that you can create normal / displacement maps.
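The last step is what preserves the 1-5 mm carving relief after decimation: the detail lives in a texture instead of the geometry. As a minimal sketch of what baking tools do under the hood, the following (illustrative, NumPy-only; real bakers like ZBrush or xNormal project high-poly surface normals onto the low-poly UV layout) converts a heightfield of carving depths into a tangent-space normal map:

```python
import numpy as np

def height_to_normals(height_mm, pixel_size_mm):
    """Convert a heightfield (surface height per pixel, in mm) into a
    tangent-space normal map encoded as 8-bit RGB."""
    # Gradients of the surface, in mm of height per mm of distance:
    dz_dy, dz_dx = np.gradient(height_mm, pixel_size_mm)
    # The normal of the surface z = h(x, y) is (-dh/dx, -dh/dy, 1),
    # normalised to unit length:
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(height_mm)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Usual RGB encoding: (0.5, 0.5, 1.0) means a flat, upward normal.
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)

# A toy 2 mm deep groove (like a carving line) sampled at 0.5 mm/pixel:
x = np.linspace(-10, 10, 64)
depth = -2.0 * np.exp(-x**2 / 4)      # groove cross-section, in mm
height = np.tile(depth, (64, 1))      # extrude into a 64 x 64 patch
normal_map = height_to_normals(height, 0.5)
print(normal_map.shape)               # (64, 64, 3)
```

Rendered with this normal map, the decimated mesh shades as if the groove were still modelled, even though the geometry underneath is flat.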

There are many tutorials for this kind of thing out there if you google 'photogrammetry to game asset workflow', for example.

Alvaro L

  • Newbie
  • *
  • Posts: 40
Re: Best practice - fine detail on low poly output
« Reply #2 on: June 24, 2018, 03:16:06 PM »
Hi

I think that economising strategies regarding data input are in order. With software like Agisoft you cannot rely on brute-force processing as a last resort to solve your photogrammetry problem, because of the nature of the beast: the computing resources needed grow much faster than the input (remember the chessboard problem?). Doubling the input can mean four times the processing time, since many of Agisoft's calculations involve quadratic factors, for instance image area in pixels.
« Last Edit: June 24, 2018, 03:26:39 PM by Alvaro L »
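The growth Alvaro describes can be made concrete with two common scaling behaviours: per-image work grows with pixel area (quadratic in linear resolution), and exhaustive pair matching grows with the square of the image count. A minimal sketch, where the baseline and both growth rates are simplifying assumptions rather than measured Agisoft behaviour:

```python
def relative_cost(n_images, megapixels, base_images=100, base_mp=24):
    """Processing cost relative to a baseline job, assuming per-image
    work scales linearly with pixel count and matching scales with
    the square of the number of images."""
    pixel_factor = megapixels / base_mp
    pair_factor = (n_images / base_images) ** 2
    return pixel_factor * pair_factor

print(relative_cost(100, 24))   # baseline -> 1.0
print(relative_cost(200, 24))   # double the photo count -> 4.0
print(relative_cost(100, 96))   # double the linear resolution -> 4.0
```

Either doubling quadruples the work, which is why trimming the input (masks, region cropping, lower alignment quality for flat areas) pays off far more than it would under linear scaling.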