We stumbled upon this software and decided to give it a whirl; here are our findings:
Requirements: We want to reduce costs and increase quality when scanning actors for use as digital characters in our video game. Currently we use a combination of laser and photogrammetric methods to produce polygon models; these are expensive and not detailed enough.
We started with a simple point-and-shoot and did nothing special for lighting. The camera was on a tripod and the object was rotated in roughly 22.5-degree increments.
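That step size divides a full turn evenly, which is where the image count in the first test comes from. A trivial sketch of the capture plan (just the arithmetic, nothing project-specific):

```python
# Turntable capture plan: one full revolution in fixed angular steps.
# 360 / 22.5 = 16 stops, which matches the 16 images in the first test.
step_deg = 22.5
angles = [i * step_deg for i in range(int(360 / step_deg))]
print(len(angles))   # 16 shots
print(angles[:4])    # [0.0, 22.5, 45.0, 67.5]
```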
Specifications:
Camera:
Olympus E-P1
40mm f/1.7
12 megapixel
Images:
16 JPG
Reconstruction #1:
Apprx 5mins
143538 verts
200000 faces
http://i.imgur.com/Q1I8J.jpg
http://i.imgur.com/glg9q.jpg
Reconstruction #2:
Apprx 7mins
249971 verts
500000 faces
http://i.imgur.com/Re7ao.jpg
http://i.imgur.com/96pQD.jpg
The software performed admirably, but this would not be detailed enough for our project. There is a fair amount of noise. Having some background in computer vision and stereo reconstruction, we decided to push the software further and see what it was capable of.
We set up 3 flashes in softboxes to try to get the most evenly lit environment we could. We brought in a variety of cameras and lenses.
Specifications:
Camera:
Canon 1000D
50mm
10 megapixel
Images:
9 PNG
Reconstruction #1:
Apprx 60mins
396534 verts
796459 faces
http://i.imgur.com/ybukT.jpg
http://i.imgur.com/KdBtf.jpg
Specifications:
Camera:
Canon 5D2
85mm
21 megapixel
Images:
8 PNG
Reconstruction #1:
Apprx 80mins
499875 verts
1000000 faces
http://i.imgur.com/gMqzd.jpg
http://i.imgur.com/ny8xB.jpg
The better camera and the improved lighting made a difference; it was starting to look like this might work. We knew we were still being somewhat guerrilla about this and decided to get a bit more scientific. With just a single camera, the subject is surely moving between shots, the lighting is still not perfectly even, and we have very little control over the specular component. What if we could get better lighting and take all the pictures simultaneously?
We didn’t have enough flashes or enough cameras, so we used a stationary object.
With dense stereo reconstruction, fine random surface detail makes it easier for the software to find matching pairs. So we applied some makeup that both diminished the specular highlights and increased the detail in this regard. You can see the difference it makes at the border where the makeup was applied.
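To illustrate why the fine random detail helps (a toy 1-D sketch in NumPy, not AgiSoft's actual matcher; the signal names and thresholds here are ours): a patch cut from a textured signal can be re-located uniquely by normalized cross-correlation, while a patch from a smooth, featureless signal matches almost every offset equally well.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation between two equal-length patches.
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

rng = np.random.default_rng(42)
n, w, true_off = 200, 20, 80

signals = {
    "textured": rng.normal(size=n),        # fine random detail (the "makeup")
    "smooth": np.linspace(0.0, 1.0, n),    # featureless gradient (bare skin)
}

results = {}
for name, sig in signals.items():
    patch = sig[true_off:true_off + w]     # the patch we try to re-locate
    scores = [ncc(patch, sig[o:o + w]) for o in range(n - w)]
    best = int(np.argmax(scores))
    near_perfect = sum(s > 0.999 for s in scores)  # offsets that look like a match
    results[name] = (best, near_perfect)
    print(name, best, near_perfect)
```

On the textured signal the true offset is the single near-perfect match; on the smooth signal every offset scores near-perfect, so the match is ambiguous. In a multi-view pipeline that ambiguity shows up as matching noise on featureless surfaces, which is what the makeup suppresses.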
Specifications:
Camera:
Canon 1000D
50mm
10 megapixel
Images:
23 PNG
Reconstruction #1:
Apprx 90mins
499992 verts
1000000 faces
http://i.imgur.com/W0rIp.jpg
http://i.imgur.com/TOWs3.jpg
http://i.imgur.com/I4TWi.jpg
http://i.imgur.com/9EFLF.jpg
http://imgur.com/7rsHR
This model is now of the resolution and detail we require. Was the software capable of better? There were still improvements that could be made in the lens, lighting, and control over the specular component (polarized light and lenses?). We now had the data we required, but were curious to see how far we could push it …
…I can’t show you the final results as they contain sensitive IP. What we did was take an existing polygon head of around 30,000 polygons; it’s nicely textured and fully rigged for our game. In Maya, we rendered that head from various angles and at various resolutions. We tried different lighting, resolutions, and camera intrinsics and extrinsics. We were able to recover the model down to about 0.01 mm accuracy. The calibration was of course wrong, resulting in some arbitrary 9-DOF transform difference, but the model was indeed recovered. Given perfect inputs, the software seems capable of giving near-perfect outputs.
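For anyone wanting to reproduce that kind of accuracy measurement (a generic sketch, not our pipeline): align the recovered vertices to the ground truth with a best-fit transform and report the RMS residual. The sketch below uses the standard 7-DOF Umeyama/Kabsch similarity fit (uniform scale) rather than the full 9-DOF case; all point sets and numbers are illustrative.

```python
import numpy as np

def align_similarity(src, dst):
    # Best-fit similarity transform (uniform scale s, rotation R, translation t)
    # mapping src -> dst, via the Kabsch / Umeyama method.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against a reflection solution
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A * A).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(1)
truth = rng.normal(size=(500, 3))        # stand-in for the ground-truth vertices

# Simulate a "recovered" model: same shape under an arbitrary
# scale / rotation / translation, as left over by a wrong calibration.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
recovered = 2.5 * truth @ Rz.T + np.array([1.0, -2.0, 0.5])

s, R, t = align_similarity(truth, recovered)
residual = recovered - (s * truth @ R.T + t)
rms = np.sqrt((residual ** 2).sum(axis=1).mean())
print(rms)   # effectively zero: the arbitrary transform is removed
```

With noise-free inputs the residual is at floating-point level; on a real scan the RMS after alignment is the accuracy figure.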
Our next step is to purchase 16 to 24 Canon 1100D bodies with enough flashes, softboxes, polarization, truss, etc. to get as close as possible to the simulated results we see in Maya.
Thanks AgiSoft!
PS- There were a number of smaller tests and objects scanned. Lots of data was collected. More details are available and we are happy to answer any questions.