
Author Topic: Our Results Thus Far  (Read 11171 times)

John Root

  • Newbie
  • Posts: 9
Our Results Thus Far
« on: April 19, 2011, 03:40:38 PM »
We stumbled upon this software and decided to give it a whirl. Here are our findings:

Requirements:
We want to reduce costs and increase quality when scanning actors for use as digital characters in our video game. Currently we use a combination of laser and photogrammetric methods to produce polygon models; they are expensive and not detailed enough.

We started with a simple point-and-shoot camera and did nothing special for lighting. The camera was on a tripod and the object was rotated in roughly 22.5-degree increments.
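The turntable arithmetic is worth making explicit: a fixed camera with the object rotated in 22.5° steps is geometrically equivalent to the camera orbiting a fixed object, one shot per step. A minimal Python sketch (function names are illustrative, not from any photogrammetry package):

```python
import math

def turntable_angles(step_deg=22.5):
    """Object rotation angles for one full turn at a fixed step size."""
    n = round(360 / step_deg)              # 22.5 degrees -> 16 shots
    return [i * step_deg for i in range(n)]

def equivalent_camera_position(angle_deg, radius=1.0):
    """Rotating the object by +angle is equivalent to orbiting the
    camera by -angle around the object at the same radius (top view)."""
    a = math.radians(-angle_deg)
    return (radius * math.cos(a), radius * math.sin(a))

angles = turntable_angles()
print(len(angles))   # 16 views, matching the 16 JPGs below
```

This is why 22.5° increments yield exactly the 16 images listed in the first specification block.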

Specifications:

Camera:
Olympus E-P1
40mm f1.7
12 megapixel

Images:
16 JPG

Reconstruction #1:
Approx. 5 min
143538 verts
200000 faces
http://i.imgur.com/Q1I8J.jpg
http://i.imgur.com/glg9q.jpg

Reconstruction #2:
Approx. 7 min
249971 verts
500000 faces
http://i.imgur.com/Re7ao.jpg
http://i.imgur.com/96pQD.jpg


The software performed admirably, but this would not be detailed enough for our project, and there is a fair amount of noise. Having some background in computer vision and stereo reconstruction, we decided to push the software further and see what it was capable of.

We set up three flashes in softboxes to get the most evenly lit environment we could obtain, and brought in a variety of cameras and lenses.


Specifications:

Camera:
Canon 1000D
50mm
10 megapixel

Images:
9 PNG

Reconstruction #1:
Approx. 60 min
396534 verts
796459 faces
http://i.imgur.com/ybukT.jpg
http://i.imgur.com/KdBtf.jpg


Specifications:

Camera:
Canon 5D2
85mm
21 megapixel

Images:
8 PNG

Reconstruction #1:
Approx. 80 min
499875 verts
1000000 faces
http://i.imgur.com/gMqzd.jpg
http://i.imgur.com/ny8xB.jpg


The better camera and the improved lighting made a difference, and it was starting to look like this might work. We knew we were still being somewhat guerrilla about this and decided to get a bit more scientific. With just a single camera, the subject is surely moving between shots, the lighting is still not perfectly even, and we have very little control over the specular component. What if we could use better lighting and take the pictures simultaneously?

We didn’t have enough flashes or enough cameras, so we used a stationary object.

With a dense stereo reconstruction, fine random surface detail makes it easier for the software to find matching pairs. So we added some makeup that both diminished the specular highlights and increased the detail in this regard. You can see the difference it makes at the border of where the makeup was applied.
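To see why the makeup helps, consider how dense stereo typically scores candidate matches, for example with zero-mean normalized cross-correlation (NCC): a textured patch correlates strongly only with its true counterpart, while a flat specular patch has no variance and yields no usable signal. A small self-contained sketch of the idea (not PhotoScan's actual matcher):

```python
import random

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length patches.
    Returns 0.0 for textureless (constant) patches, where matching is ambiguous."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    na = sum(x * x for x in da) ** 0.5
    nb = sum(x * x for x in db) ** 0.5
    if na == 0 or nb == 0:
        return 0.0                     # no texture -> no discriminative match
    return sum(x * y for x, y in zip(da, db)) / (na * nb)

random.seed(0)
textured = [random.randint(0, 255) for _ in range(9)]   # noisy makeup-like patch
flat = [128] * 9                                        # bare, evenly lit skin

print(round(ncc(textured, textured), 6))   # 1.0 -> unambiguous self-match
print(ncc(flat, flat))                     # 0.0 -> no usable signal
```

Adding fine random noise to the surface pushes every local patch toward the first case.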


Specifications:

Camera:
Canon 1000D
50mm
10 megapixel

Images:
23 PNG

Reconstruction #1:
Approx. 90 min
499992 verts
1000000 faces
http://i.imgur.com/W0rIp.jpg
http://i.imgur.com/TOWs3.jpg
http://i.imgur.com/I4TWi.jpg
http://i.imgur.com/9EFLF.jpg
http://imgur.com/7rsHR


This model is now of the resolution and detail we require. Was the software capable of better? There were still improvements that could be made in the lens, the lighting, and control over the specular component (polarized light and lenses?). We now had the data we required, but were curious to see how far we could push it …

…I can’t show you the final results as they contain sensitive IP. What we did was take an existing polygon head of around 30,000 polygons, nicely textured and fully rigged for our game. In Maya, we rendered that head from various angles and at various resolutions. We tried different lighting, resolutions, and camera intrinsics and extrinsics. We were able to recover the model down to about 0.01 mm accuracy. The calibration was of course wrong, resulting in some arbitrary 9-DOF transform difference, but the model was indeed recovered. Given perfect inputs, the software seems capable of giving near-perfect outputs.
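One way to quantify that "arbitrary transform difference" is to fit a similarity transform over point correspondences between the recovered and reference models (in 3-D this is commonly done with Umeyama's method; the 9-DOF figure presumably adds per-axis scale). Here is a simplified 2-D sketch using complex arithmetic, with hypothetical names — a least-squares fit that recovers scale, rotation, and translation exactly on clean data:

```python
import cmath

def fit_similarity_2d(src, dst):
    """Least-squares 2-D similarity transform (scale, rotation, translation)
    mapping src points onto dst. Points are complex numbers x + 1j*y."""
    n = len(src)
    ms = sum(src) / n
    md = sum(dst) / n
    sc = [p - ms for p in src]                 # centered source points
    dc = [q - md for q in dst]                 # centered destination points
    a = sum(q * p.conjugate() for p, q in zip(sc, dc)) / sum(abs(p) ** 2 for p in sc)
    t = md - a * ms
    return a, t   # dst ~= a * src + t; scale = abs(a), rotation = phase of a

# Synthetic check: apply a known transform, then recover it.
true_a = 2.0 * cmath.exp(1j * 0.5)   # scale 2, rotate 0.5 rad
true_t = 3 + 4j
src = [0 + 0j, 1 + 0j, 0 + 1j, 1 + 2j]
dst = [true_a * p + true_t for p in src]

a, t = fit_similarity_2d(src, dst)
print(round(abs(a), 6))   # 2.0 -> recovered scale
```

After fitting, the residual between the transformed source and the destination is the alignment-independent geometric error — the quantity behind an accuracy figure like 0.01 mm.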

Our next step is to purchase 16–24 Canon 1100Ds with enough flashes, softboxes, polarization, truss, etc. to get as close as possible to the simulated results we see in Maya.

Thanks AgiSoft!



PS- There were a number of smaller tests and objects scanned. Lots of data was collected. More details are available and we are happy to answer any questions.

John Root

Re: Our Results Thus Far
« Reply #1 on: April 19, 2011, 05:55:46 PM »
Photo reconstruction against a laser scan:

In this round we felt the laser scan beat out the photo reconstruction. We believe these results are typical for a single prosumer camera taking multiple photos in a somewhat controlled environment. Improvements were made that allowed the photo reconstruction to achieve laser quality. In either case, the texture information from the photo reconstruction was superior to that of the laser scanner we had access to.

http://i.imgur.com/xzrok.jpg
http://i.imgur.com/YyBcb.jpg
http://i.imgur.com/FCdvX.jpg
http://i.imgur.com/iStHI.jpg

puffball

  • Newbie
  • Posts: 4
Re: Our Results Thus Far
« Reply #2 on: April 20, 2011, 02:48:49 AM »
Pretty interesting. The makeup certainly yields good results and helps alignment.
I haven't had too much trouble gathering head/face data from people holding as still as possible or even lying down.
Interesting to see you used a greenscreen/flat background for your subjects - I didn't think this was advised? I wasn't able to get as good results when I shot against a uniform background.

I'm using a 550D and 7D with calibrated lens info. A higher-end DSLR and even lighting are certainly the key to good mesh generation.

www.angry-pixel.co.uk

John Root

Re: Our Results Thus Far
« Reply #3 on: April 20, 2011, 10:03:39 AM »
RE: Green screen and flat background

We masked those areas out in the software; having a green screen made that easier. However, you can see a fair amount of green spill and misalignment in the albedo component of the reconstruction. In the end we plan to build out a white foam-core stage to bounce the light, in an effort to achieve a more evenly lit environment. Those walls might carry some sort of fiducials if it makes a difference.

I can imagine that the bundle adjustment finds more accurate camera extrinsics with more information, i.e. the background. Long term, we would like to calibrate the cameras only once so that all the reconstructions come out in the same coordinate frame. This process would likely involve an object of known dimensions placed inside a stage filled with noise. We've not yet tried to carry a calibration across from one reconstruction to another, as we are still dealing with single cameras.
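At its simplest, the known-dimensions object reduces to recovering one scale factor: photogrammetric reconstructions are only defined up to scale, so one measured distance pins them to real units. A hedged sketch of the idea (helper names are made up for illustration, not from any PhotoScan API):

```python
def scale_factor(measured_mm, reconstructed_units):
    """Scale that maps reconstruction units to millimetres, derived
    from a single known distance on a calibration object."""
    return measured_mm / reconstructed_units

def apply_scale(points, s):
    """Rescale every (x, y, z) vertex of the reconstruction."""
    return [(s * x, s * y, s * z) for (x, y, z) in points]

# A 100 mm reference bar that comes out as 0.25 units in the reconstruction:
s = scale_factor(100.0, 0.25)
print(s)   # 400.0 -> multiply all coordinates by this to get millimetres
```

Fiducials on the stage walls would additionally fix rotation and translation, putting every reconstruction into one shared coordinate frame rather than just one shared scale.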

Can you share your experience with saving and loading calibrations?

mala

  • Full Member
  • Posts: 109
Re: Our Results Thus Far
« Reply #4 on: April 22, 2011, 03:39:10 PM »
Nice results :)
Good to see a variety of tests to compare... and your logical approach to finding the best results.

Experimenting with different light setups is one of the things I've been doing... I have plenty of resources for that, but unfortunately only one 450D with which to take the pics... I'm jealous of all your cameras 8)

Cheers,
mala