
Author Topic: Workflow - Photoscan to game asset  (Read 20171 times)

simon

  • Newbie
  • *
  • Posts: 7
    • View Profile
Workflow - Photoscan to game asset
« on: April 22, 2014, 10:17:21 PM »
Hi Everyone,

This is my first post. I've had an increasing interest in photogrammetry for the last few years, and over the last few months I finally started diving in. I have found these forums to be an incredible resource, with some very helpful members. I would love to become a contributing member as well, once I learn enough to have something new to share. I've done my best not to jump in with all of my inexperienced questions, but I am now finding myself stumbling over attempting to develop a functional/feasible workflow for my needs.
I would like to use Photoscan for developing game assets/environments for a new game I'm working on, and am attempting to find a good workflow for efficiently creating optimized meshes and maps from my Photoscan results. My photography and graphics design skill set is much better than my 3D design capabilities. My Photoscan results are relatively good, but optimizing them is leaving me at a loss.
I've spent some time trying to learn Zbrush, and this may be the best solution, but I have not wrapped my head around it yet. The good news is, I'm an "empty cup" - I'll learn whatever software I need to. Now is also a good time for me to invest in the solution that best fits my needs (I'm using a Zbrush trial, which apparently they don't give out easily any more). I simply don't want to get lost in complicated and long workflows, that go beyond the detail level necessary for game assets.
So what I'm trying to do is:
Reduce and optimize mesh (along with minor touch up).
Create UV maps with a layout I can work with in Photoshop.
Extract/optimize any other details where possible, such as specular maps, bump maps, displacement maps.

Any suggestions on a simple/efficient process? Any tips on optimizing for a game engine? Any and all help would be greatly appreciated!

gamegoof

  • Newbie
  • *
  • Posts: 18
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #1 on: April 23, 2014, 06:59:50 AM »
Honestly, I just got laid off from a large game studio that failed to see the potential in photogrammetry, simply because of the garbage PS outputs. Sorry to be blunt, but the UV layout is horrible, you get tris instead of nice quad meshes, and PS picks weird surfaces to project from, so you end up with blurry patches unless you do reprojections in other packages or similar workarounds. I have mentioned this to Agisoft but was just told 'that's the way it is'.

I feel they are leaving the door wide open for someone (like Autodesk) to come along with better output and just take over game and CG/VFX assets. That being said, there are several threads with people talking about their processes, and they are heavily involved, using high-end sculpting tools and techniques; currently there are no shortcuts. Maybe someone will have more information than me, but hearing "My photography and graphics design skill set is much better than my 3D" is disheartening, because that's where I'm at, and judging by my current employment status, it's not the best place to be...

Here is a breakdown of how a game studio uses the assets http://imgur.com/a/wq8JK
Zbrush tutorial using 123dcatch output of a rock https://www.youtube.com/watch?v=PMkWDDmO5A8
« Last Edit: April 23, 2014, 07:18:50 AM by gamegoof »

FoodMan

  • Sr. Member
  • ****
  • Posts: 477
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #2 on: April 23, 2014, 08:45:21 AM »
that rock is not good...  I have just made some rock scans for a major studio in L.A., and PScan really shone ...  8)



simon

  • Newbie
  • *
  • Posts: 7
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #3 on: April 23, 2014, 09:27:05 AM »
Honestly, I just got laid off from a large game studio that failed to see the potential in photogrammetry, simply because of the garbage PS outputs. Sorry to be blunt, but the UV layout is horrible, you get tris instead of nice quad meshes, and PS picks weird surfaces to project from, so you end up with blurry patches unless you do reprojections in other packages or similar workarounds. I have mentioned this to Agisoft but was just told 'that's the way it is'.

I feel they are leaving the door wide open for someone (like Autodesk) to come along with better output and just take over game and CG/VFX assets. That being said, there are several threads with people talking about their processes, and they are heavily involved, using high-end sculpting tools and techniques; currently there are no shortcuts. Maybe someone will have more information than me, but hearing "My photography and graphics design skill set is much better than my 3D" is disheartening, because that's where I'm at, and judging by my current employment status, it's not the best place to be...

Here is a breakdown of how a game studio uses the assets http://imgur.com/a/wq8JK
Zbrush tutorial using 123dcatch output of a rock https://www.youtube.com/watch?v=PMkWDDmO5A8

Thanks for the response. Sorry to hear about your predicament, it's unfortunate that they were unable to see the potential. I think that this technology is going to become far more predominant in the industry, but it will take some refinement. Even so, it appears that where it is today is leaps ahead of where it was just a few years ago, I hope the rate of advancement continues. There is plenty of room for automation, refinements, and more tools geared towards modeling.

I had worried that my only current options would involve this manual process. The workflow is becoming clearer to me, and it appears that I may need to simply bite the bullet and dive deeper into Zbrush. It's a powerful program, but with great power comes greater... difficulty. They seem to have "re-invented the wheel" in many areas of their UI/workflow, but no doubt much of that is due to my own ignorance of the program. Of greater concern is that it appears I will be spending hours working on preparing  my assets for game usage.

simon

  • Newbie
  • *
  • Posts: 7
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #4 on: April 23, 2014, 09:29:26 AM »
that rock is not good...  I have just made some rock scans for a major studio in L.A., and PScan really shone ...  8)

What don't you like about those rocks? Want to share your results?

FoodMan

  • Sr. Member
  • ****
  • Posts: 477
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #5 on: April 23, 2014, 09:45:55 AM »
well, for my taste it looks too soft.. maybe more images would have done a better job though..

Just a personal pov..

I wish I could share mine, but atm I am not allowed to do so..  :P

Wishgranter

  • Hero Member
  • *****
  • Posts: 1202
    • View Profile
    • Museum of Historic Buildings
Re: Workflow - Photoscan to game asset
« Reply #6 on: April 23, 2014, 10:22:43 AM »
123D Catch and ReCap Photo Pro (Acute3D) both downsample the depth maps significantly, so the results are "smooth": the downsampling buys them better speed and a "cleaner" mesh, but without much detail. Not to mention their SfM/alignment stage; it's weak and can't align everything so easily......
----------------
www.mhb.sk

Andrew

  • Jr. Member
  • **
  • Posts: 77
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #7 on: April 23, 2014, 12:05:07 PM »
(Excuse any typos, thumb-typing on my mobile.)
Photogrammetry output can only be as good as the photos you input. They need to be pin sharp, with good dynamic range (well exposed, shot in raw, low ISO), without DOF issues, with good coverage and generous overlap, and without obstacles between you and the scanned object (think leaves or grass blades next to a scanned rock). The scanned object needs enough surface detail for PS to see and match features; it can't be too reflective, it can't be transparent, you need to avoid lens flare, etc. etc.

In these ideal conditions, PS will usually deliver results that are good enough 'out of the box', so you can use the resulting mesh as your 'hires' source model. Even then, though, you have to generate the actual target mesh yourself. This doesn't really differ that much from the typical game art pipeline: building a separate hires sculpt and separate low-poly geo to transfer the normal map and textures to. Sometimes you can get away with automatic mesh optimization to get a low poly directly from the highres PS mesh (most 3D packages have tools for that, plus there are a few standalones like Atangeo Balancer), but auto tools never give you results as good as manual retopo. The problem with decimating a raw scan is that it contains noise. The more occlusion (worse photo coverage), the more noise you get in your mesh. Usually the noisiest parts are of low importance and barely visible, but automatic decimation will retain the highest number of triangles exactly in those noisy areas, wasting your polygon budget. If you are bent on using automatic decimation, you MUST smooth out the noisiest areas first. Also, keep in mind that the decimation built into PS is NOT for game assets; it is not adaptive and won't attempt to be smart about triangle distribution.
Now let's get real: you will rarely get photos perfect enough, so you will have to apply varying amounts of manual labour to get that usable hires source mesh. This is the biggest problem when making studio-wide decisions about adopting PS in an art pipeline: you never know how much work each scan will require. Sometimes you only need to create masks for (hundreds of) photos and do some mesh smoothing afterwards, sometimes you have to fight to even get proper photo alignment, sometimes you need to search for the one photo among dozens that causes a blurry texture, sometimes you need to resculpt broken areas, sometimes.... the list goes on :)
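[Editor's note] Andrew's "smooth the noisy areas before automatic decimation" advice can be sketched in a few lines. This is only a hypothetical illustration in plain NumPy, not Photoscan or ZBrush code, and the function and variable names are made up: classic Laplacian smoothing pulls each noisy vertex toward the average of its neighbours, so a decimator run afterwards no longer spends its triangle budget on noise.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=10, lam=0.5):
    """Pull each vertex toward the mean of its neighbors.

    vertices:  (N, 3) float array of positions
    neighbors: list of index lists; neighbors[i] = vertices adjacent to i
    lam:       blend factor per iteration, 0..1
    """
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        means = np.array([v[n].mean(axis=0) for n in neighbors])
        v += lam * (means - v)
    return v

# Toy example: a line strip of vertices with random noise in z,
# each interior vertex connected to its two neighbors.
rng = np.random.default_rng(0)
n = 20
verts = np.column_stack([np.linspace(0.0, 1.0, n),
                         np.zeros(n),
                         rng.normal(0.0, 0.05, n)])  # scan-like noise
nbrs = [[1]] + [[i - 1, i + 1] for i in range(1, n - 1)] + [[n - 2]]

smoothed = laplacian_smooth(verts, nbrs)
print(np.abs(verts[:, 2]).mean(), np.abs(smoothed[:, 2]).mean())
```

In practice you would smooth only the masked noisy regions, and use a real mesh tool (ZBrush's smooth brush, MeshLab filters, etc.) rather than rolling your own; this just shows why the decimator behaves better once the noise is gone.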

gamegoof

  • Newbie
  • *
  • Posts: 18
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #8 on: April 23, 2014, 07:59:04 PM »
... sometimes you need to re-sculpt broken areas, sometimes.... the list goes on :)

I guess the question is: does it need to go on? I'm telling you right now, Autodesk's mission is to overtake and crush Agisoft in this area. They will attempt to streamline this process into something that isn't daunting to game studios.

OK, story time: at my former studio (and this was a giant studio that should have had all the skills and smarts to figure this out), only the character artists seemed comfortable with the post-processing of these assets. In fact that was only theoretical, because they were too busy to be tasked with even trying it, but they seemed confident they could do it. Meanwhile, the environment artists were not experts in the high-poly workflow; they are used to dealing with blocky worlds, not sculpting multi-million-polygon meshes and the workflow to bake and optimize everything down to game-ready assets.

So in the end, when one of them tried a simple asset, it took 80% of the time it would have taken from scratch, and it looked like SH*T. Next we tried to get outsourcing to process the assets, but first we got an estimate on how long it would take to make the asset from scratch (we sent them a photo of the object). They said 10 days from scratch. So then we sent them the 5-million-polygon asset, which looked gorgeous, and asked, "OK, now how long?" They said 8 days, which was a suspicious disappointment to me, but the worst part is yet to come... in the end it took 10 f-ing days and, once again, the asset looked like SH*T.

I was assured that outsourcing would never see scanning as a threat to their precious cash flow and would never purposely take longer in order to lead us away from scanning; I didn't believe it for a moment. Sadly, I had gambled my employment on this stuff, and soon I was out the door.

So that's a story from the real world. While it's possible to make game-ready assets from PS outputs, it's currently daunting and time-consuming. Does it need to be? Does its difficulty keep the process exclusive enough that we don't get waves of interns tasked with scanning the world while talented artists and photographers queue up for unemployment? I believe scanning will actually bring jobs home from China, because currently most major studios are offloading their art assets to Asia, so I don't see that we have much to lose.

simon

  • Newbie
  • *
  • Posts: 7
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #9 on: April 23, 2014, 09:50:46 PM »
(Excuse any typos, thumb-typing on my mobile.)
Photogrammetry output can only be as good as the photos you input. They need to be pin sharp, with good dynamic range (well exposed, shot in raw, low ISO), without DOF issues, with good coverage and generous overlap, and without obstacles between you and the scanned object (think leaves or grass blades next to a scanned rock). The scanned object needs enough surface detail for PS to see and match features; it can't be too reflective, it can't be transparent, you need to avoid lens flare, etc. etc.

In these ideal conditions, PS will usually deliver results that are good enough 'out of the box', so you can use the resulting mesh as your 'hires' source model. Even then, though, you have to generate the actual target mesh yourself. This doesn't really differ that much from the typical game art pipeline: building a separate hires sculpt and separate low-poly geo to transfer the normal map and textures to. Sometimes you can get away with automatic mesh optimization to get a low poly directly from the highres PS mesh (most 3D packages have tools for that, plus there are a few standalones like Atangeo Balancer), but auto tools never give you results as good as manual retopo. The problem with decimating a raw scan is that it contains noise. The more occlusion (worse photo coverage), the more noise you get in your mesh. Usually the noisiest parts are of low importance and barely visible, but automatic decimation will retain the highest number of triangles exactly in those noisy areas, wasting your polygon budget. If you are bent on using automatic decimation, you MUST smooth out the noisiest areas first. Also, keep in mind that the decimation built into PS is NOT for game assets; it is not adaptive and won't attempt to be smart about triangle distribution.
Now let's get real: you will rarely get photos perfect enough, so you will have to apply varying amounts of manual labour to get that usable hires source mesh. This is the biggest problem when making studio-wide decisions about adopting PS in an art pipeline: you never know how much work each scan will require. Sometimes you only need to create masks for (hundreds of) photos and do some mesh smoothing afterwards, sometimes you have to fight to even get proper photo alignment, sometimes you need to search for the one photo among dozens that causes a blurry texture, sometimes you need to resculpt broken areas, sometimes.... the list goes on :)

Andrew, thanks for sharing this information. I am inspired by your work, and your blog posts helped nudge me towards working with Photoscan.
I'm getting a decent feel for the photography challenges and natural limitations (reading your blog, I found myself continuously nodding... Nooo, come back, clouds!), and although I'm sure optimizing my results will be a continuous process, my current bottleneck is processing my scan results. This is far from an "oh great, I'll take pics and make a game" process, and in many ways I am finding it far more challenging than creating assets from scratch... but each approach has its advantages, and the final results can be fantastic when used appropriately. But who am I telling that to??
As gamegoof mentioned, not many studios yet appreciate these advantages. At the end of the day, it's the bottom line (money) that often matters most to a large studio. On the other hand, I think those who develop proficiency in this field now may see greater benefit in the next few years, as studios begin to adopt the technology more widely. It's nice to see you pushing this field, and also sharing information on your work and process.

Back to learning how to process scans. I see what you are saying about reducing noise; indeed, other than for very specific assets (where you are OK with losing all of the great detail you get from PS), it seems they will have to be processed manually. I had hoped there might be some way to use normal maps to capture the majority of the fine detail while re-topologizing (is that a word?) in a mostly automatic way, controlling the density of the topology manually in just a few key areas.
If you don't mind my asking, from your PS scans, do you essentially process until you end up with an optimized mesh + diffuse map and a normal map? Do you use it to extract any other data? Would you suggest sticking with Zbrush for everything after PS, or would you suggest other tools?
Thanks so much!
« Last Edit: April 23, 2014, 10:46:18 PM by simon »

simon

  • Newbie
  • *
  • Posts: 7
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #10 on: April 23, 2014, 09:54:23 PM »
123D Catch and ReCap Photo Pro (Acute3D) both downsample the depth maps significantly, so the results are "smooth": the downsampling buys them better speed and a "cleaner" mesh, but without much detail. Not to mention their SfM/alignment stage; it's weak and can't align everything so easily......

I agree - I personally wouldn't consider these tools powerful enough to be worth the effort for developing game assets. One of the primary advantages of PS is having all of the fine detail to work with. If you lose that, why bother with the extra challenges?
Then again, I can't say I have looked into them in detail, as PS looked like a much better solution early on. I have to largely thank Andrew and Lee for steering me in this direction, through the great work they have shared.
« Last Edit: April 23, 2014, 10:06:54 PM by simon »

simon

  • Newbie
  • *
  • Posts: 7
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #11 on: April 23, 2014, 10:03:04 PM »
... sometimes you need to re-sculpt broken areas, sometimes.... the list goes on :)

I guess the question is: does it need to go on? I'm telling you right now, Autodesk's mission is to overtake and crush Agisoft in this area. They will attempt to streamline this process into something that isn't daunting to game studios.

OK, story time: at my former studio (and this was a giant studio that should have had all the skills and smarts to figure this out), only the character artists seemed comfortable with the post-processing of these assets. In fact that was only theoretical, because they were too busy to be tasked with even trying it, but they seemed confident they could do it. Meanwhile, the environment artists were not experts in the high-poly workflow; they are used to dealing with blocky worlds, not sculpting multi-million-polygon meshes and the workflow to bake and optimize everything down to game-ready assets.

So in the end, when one of them tried a simple asset, it took 80% of the time it would have taken from scratch, and it looked like SH*T. Next we tried to get outsourcing to process the assets, but first we got an estimate on how long it would take to make the asset from scratch (we sent them a photo of the object). They said 10 days from scratch. So then we sent them the 5-million-polygon asset, which looked gorgeous, and asked, "OK, now how long?" They said 8 days, which was a suspicious disappointment to me, but the worst part is yet to come... in the end it took 10 f-ing days and, once again, the asset looked like SH*T.

I was assured that outsourcing would never see scanning as a threat to their precious cash flow and would never purposely take longer in order to lead us away from scanning; I didn't believe it for a moment. Sadly, I had gambled my employment on this stuff, and soon I was out the door.

So that's a story from the real world. While it's possible to make game-ready assets from PS outputs, it's currently daunting and time-consuming. Does it need to be? Does its difficulty keep the process exclusive enough that we don't get waves of interns tasked with scanning the world while talented artists and photographers queue up for unemployment? I believe scanning will actually bring jobs home from China, because currently most major studios are offloading their art assets to Asia, so I don't see that we have much to lose.

The majority of my career has been outside of game development. I'd venture to say that my understanding of business far exceeds my understanding of the intricacies of various game studios. I will say, however, that a theme common to any business that intends to make a profit is that emerging technologies are treated with a level of skepticism that often correlates directly with the size of the company. The details vary, but it's good to understand that emerging technologies are considered disruptive, and therefore they are not only treated with skepticism proportional to their risk, they are also resisted by those who will be disrupted by them.
Being unemployed (unless you are successfully self-employed) is not a fun experience. I hope more people come to appreciate your skill set in the future, and that you are able to find some positive change despite the short-term challenges life throws at you.

Andrew

  • Jr. Member
  • **
  • Posts: 77
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #12 on: April 23, 2014, 10:33:45 PM »
Gamegoof, I feel your pain with how your (previous) company approached photogrammetry :(

Having said that, I honestly believe Photoscan is not to blame here. What it does, it does wonderfully, and last I checked (granted, it was a while ago) it allowed A LOT more flexibility and a lot more tools for real artists than the Autodesk solution. Surely Autodesk will not sit idle and will continue improving, but it won't spit out game-ready assets anytime soon, if ever.

You mentioned that artists with no previous experience in photogrammetry cleanup, and none in retopology/resculpting/baking, failed on their very first attempt. That is not surprising at all, and it is hardly Photoscan's fault. I don't mean to be harsh, just trying to be fair to Agisoft.

Let me break down the main issues with a photogrammetry-based pipeline, assuming the person taking the photos did a fantastic job and you have a great highres source mesh:

- noise in the mesh: unavoidable in the real world. If you are going for a very detailed mesh, you will get noise. It's not that hard to smooth out whatever needs smoothing, but it surely requires some practice and, well, time. Alternatively, you can process scans at lower quality, which produces much smoother results but retains a lot less detail.

- very high polycount and a 'messy' triangulated mesh: I don't know of any software that automatically creates good topology and doesn't waste a single polygon. Some auto-retopology tools have appeared lately, but all they do is generate 'pretty' regular topology, which may be of some use to animators. For games you need adaptive decimation of your geometry (keep geometry only where you really need it, reduce polycount everywhere else), so that's really no good. I just don't see any alternative to manual optimization/retopology. No automatic solution will know where you need detail and where you don't.

- efficient UVs: this is something Photoscan could improve, but there are so many UV solutions on the market that it's often more convenient to stick with the one you use on a daily basis.

- the texture contains 'shading': by far the most problematic issue to overcome, unless you just want to keep the geometry and create the texture from scratch, or just don't care about in-game lighting/shaders. A few examples: any crack in a scanned rock contains 'shadow' in the texture, and then the lighting in the engine puts a shadow on top of that shadow, which results in an unrealistic ubershadow :) If you get specularity, reflection, or Fresnel in your scan-generated texture, it will appear static, while in reality those things change relative to the viewing angle/light vector. All these things are simply there in the photos; this has very little to do with Photoscan. PS actually does try to reduce some of these issues (with enough photos from different angles, it will significantly reduce specular reflections on a human face, etc.).

- the geometry and texture are unique (non-tiling), which is great, but these assets eat up a lot of resources. I wouldn't recommend large-scale use of photogrammetry for games that need to run well/stream smoothly on previous-gen consoles, etc. Again, not something Photoscan can or even should attempt to fix.
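[Editor's note] One common way to attack the "shadow on top of a shadow" problem Andrew describes is to bake an ambient-occlusion map from the hires mesh and divide it out of the scanned texture. Here is a hypothetical NumPy sketch of that idea; the function and variable names are invented for illustration and do not come from Photoscan or any other package:

```python
import numpy as np

def flatten_baked_shading(texture, ao, floor=0.05):
    """Approximate albedo by dividing out baked ambient occlusion.

    texture: (H, W, 3) float array in 0..1, scan-generated color
    ao:      (H, W) float array in 0..1, baked occlusion (0 = fully occluded)
    floor:   clamp to avoid blowing up pixels in near-black crevices
    """
    ao = np.clip(ao, floor, 1.0)[..., None]  # broadcast over RGB
    return np.clip(texture / ao, 0.0, 1.0)

# Toy 2x2 texture: one pixel sits in a crack (ao = 0.25), so its baked
# color (0.2) is darker than the true surface color (0.8).
tex = np.array([[[0.8, 0.8, 0.8], [0.2, 0.2, 0.2]],
                [[0.8, 0.8, 0.8], [0.8, 0.8, 0.8]]])
ao = np.array([[1.0, 0.25],
               [1.0, 1.0]])

albedo = flatten_baked_shading(tex, ao)
print(albedo[0, 1])  # the crack pixel is brightened back toward 0.8
```

This is only a crude approximation of real delighting: it handles occlusion shading, but not baked specular highlights or directional light, which need more involved techniques.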

Games have always been about a lot of work to create something that looks great and yet renders in a fraction of a second. This doesn't change with photogrammetry: you need lots of experience and smarts to make use of scans; it's just that we collectively lack that experience. But we'll get there! The really sad part is that big companies place so little value on research and on learning new tools and workflows...

-Andrew

gamegoof

  • Newbie
  • *
  • Posts: 18
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #13 on: April 23, 2014, 10:55:33 PM »
I realize Autodesk won't be spitting out game-ready assets. Yet. I believe that is their intention (from speaking to someone there); they bought a company that was doing a similar photogrammetry application, but for industrial use. It was very expensive, but that is their processing backbone, cloud-based.

For me the issue was convincing a studio that was satisfied with using outsourcing and local artists to create assets. The problem with the PS output is that it requires a very skilled artist to turn it into game-ready assets. Those artists are in high demand and well paid, so a junior artist applying tiling textures to a deformed cube is much more attractive than scanning the object, half a day of processing time (and who knows what else to export a mesh), then getting a guy who should be doing character faces to process a rock for a few days...

I still think some studios are going to see the benefit, or will have a game that requires such assets, but for now I'm concentrating on the VFX/CG field. I believe they are more ready to deal with this type of data and have realism requirements that far outweigh those of the game industry, which is still locked into highly optimized content.

I wasn't really blaming PS for this, although I will point out that making ANY of these issues better doesn't seem too high on their priority list. I've seen research papers posted here on noise reduction and more intelligent detail retention (for lower-poly output) go relatively ignored, so I'm not sure if there are any plans for this. I believe features like these would help sell many, many more copies of PS, which I think could be good?

The tutorials section of agisoft.ru should have refined workflow suggestions to point people in the right direction; sifting through forums or YouTube for (potentially) incorrect methods isn't really doing much for the cause.


Andrew

  • Jr. Member
  • **
  • Posts: 77
    • View Profile
Re: Workflow - Photoscan to game asset
« Reply #14 on: April 23, 2014, 11:09:51 PM »
well, for my taste it looks too soft.. maybe more images would have done a better job though..


Did you mean this rock is too soft? https://www.dropbox.com/s/yw01c6ngy2vowwc/Rock_Ethan_Carter_closeup.jpg

It neither seems soft, nor does it need to be sharper; the whole cluster is no more than 1 meter wide :) Perhaps you were referring to the smooth oval shapes? I'll pass your complaint on to Mother Nature :) Not being an ass here, just smiling at your comment, as I often get this kind of feedback about the nature of what is being scanned, my favorite being complaints about how bad/unrealistic/non-proportional some of our characters are. Good thing our actors don't read internet forums; they would hate to learn that they supposedly do a really bad job of looking like proper human beings :)

Cheers,
Andrew