Messages - John Root

1
Face and Body Scanning / Esper Triggerbox + Smartshooter ?
« on: December 12, 2018, 02:48:31 AM »
I'm building out a head scanner that will be made from 50 Canon T7i bodies.

It seems like I want to be triggering them via an Esper Trigger Box. However, that doesn't seem to provide any way to control the cameras or download images. For that, it appears I want Smartshooter plus a bunch of USB hubs.

My question is, how do these two systems work together?

Physically, how do they connect? Or do they?
Does Smartshooter tell Esper to trigger?
Or do I trigger from Esper, and then Smartshooter just sees the images and downloads?
People who are doing this, are you constantly alt+tab'ing between the two pieces of software?
Does there exist a single solution that does both?
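
If it ends up being the third option (the hardware trigger fires, and the tethering software just sees and downloads the images), the glue on the computer side seems simple enough to script. Here's a rough Python sketch of what I have in mind for collecting one take; it assumes the tethering software auto-downloads each camera into its own folder. All paths, folder names, and counts are hypothetical and this is not any real Smartshooter or Esper API:

[code]
# Rough sketch of the ingest side of a hardware-triggered multi-camera rig.
# Assumes the tethering software (Smartshooter or similar) auto-downloads each
# camera's images into its own folder; the folder layout and paths are made up.
import shutil
import time
from pathlib import Path

CAMERA_ROOT = Path("D:/tether")   # one subfolder per camera, e.g. cam_01 .. cam_50
TAKE_ROOT = Path("D:/takes")      # where completed takes get collected
NUM_CAMERAS = 50

def newest_image(folder: Path):
    """Return the most recently written image in a camera folder, or None."""
    images = sorted(folder.glob("*.jpg"), key=lambda p: p.stat().st_mtime)
    return images[-1] if images else None

def collect_take(take_name: str, timeout_s: float = 30.0):
    """Wait until every camera has delivered a frame, then copy them into one take folder."""
    take_dir = TAKE_ROOT / take_name
    take_dir.mkdir(parents=True, exist_ok=True)
    deadline = time.time() + timeout_s
    pending = {f"cam_{i:02d}" for i in range(1, NUM_CAMERAS + 1)}

    while pending and time.time() < deadline:
        for cam in sorted(pending):
            # Naively grabs the newest file; a real version would track per-take timestamps.
            img = newest_image(CAMERA_ROOT / cam)
            if img is not None:
                shutil.copy2(img, take_dir / f"{cam}_{img.name}")
                pending.discard(cam)
        time.sleep(0.2)

    if pending:
        print(f"WARNING: no image seen from {len(pending)} cameras: {sorted(pending)}")
    else:
        print(f"Take '{take_name}' complete: {NUM_CAMERAS} images in {take_dir}")

if __name__ == "__main__":
    # Fire the Esper Trigger Box (by hand or via its own software), then run:
    collect_take("take_001")
[/code]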

2
A strobe (flash) can get considerably brighter than a continuous light. That allows you to stop your cameras (DSLRs) way down, thereby increasing depth of field. In that case, I would say strobes are preferred. In my case, however, I need continuous light because our scanner is made up of machine vision cameras capturing animated faces. For this we use Arri Sky Panels.
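
To put rough numbers on what the extra light buys you, here is a back-of-the-envelope depth-of-field comparison (Python, standard hyperfocal approximation; the focal length, subject distance, and circle of confusion are just illustrative choices):

[code]
# Back-of-the-envelope depth-of-field comparison: the brighter the light,
# the smaller the aperture you can afford, and DoF grows rapidly with f-number.
# Uses the standard thin-lens/hyperfocal approximation; numbers are illustrative.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near, far, total) depth of field in millimetres."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm          # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        return near, float("inf"), float("inf")
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far, far - near

if __name__ == "__main__":
    focal, subject = 85.0, 1500.0   # 85 mm lens, head about 1.5 m away, full-frame CoC
    for f_number in (2.8, 8.0, 16.0):
        near, far, total = depth_of_field(focal, f_number, subject)
        print(f"f/{f_number:>4}: in focus from {near:.0f} mm to {far:.0f} mm "
              f"(~{total:.0f} mm usable depth)")
[/code]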

3
Face and Body Scanning / Re: Head scans in the Los Angeles area?
« on: May 08, 2014, 06:54:05 PM »
Thanks for the suggestions thus far. We are aware of ICT and Gentle Giant, but we are specifically looking for a high-end photogrammetry setup for its ability to capture a facial expression instantly. While Gentle Giant and ICT are both very reputable vendors, their acquisition times would be too slow for our needs.


4
Can you use a mannequin?

If so, you could do it with one camera.

5
Face and Body Scanning / Head scans in the Los Angeles area?
« on: May 08, 2014, 02:07:13 AM »
Does anyone know if there are any high end, photogrammetry, face scanning systems in the Los Angeles area that are open for business?

Thanks in advance!



 

6
General / Re: Way to follow with a white model?
« on: May 09, 2011, 05:38:56 PM »
In our tests we've found that crisp focus and a deep depth of field are critical to a good reconstruction. To obtain this, one needs a lot of light. We never tried projecting a noise pattern, as we theorized that when the flashes triggered, they would simply blow out the projection. Instead we opted to paint the object with a high-contrast matte noise pattern. We've been getting really good reconstructions this way. Of course, we are sacrificing the albedo for mesh quality.

How have you dealt with the projection getting drowned out by the flash?
How have you dealt with the projection stretching where it hits a surface at a grazing angle?
How have you dealt with the projection's limited resolution and diffusion?
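
For what it's worth, here is the back-of-the-envelope math behind our "blow out" theory. Every number below is an assumption, picked only to show the order of magnitude:

[code]
# Rough, order-of-magnitude comparison of how much light a strobe vs. a projector
# puts on the subject during one exposure. Every number here is an assumption.

FLASH_ENERGY_WS = 300.0         # assumed monolight energy per pop (watt-seconds)
FLASH_EFFICACY_LM_PER_W = 40.0  # typical-ish xenon flash luminous efficacy
PROJECTOR_LUMENS = 3000.0       # assumed projector brightness
LIT_AREA_M2 = 4.0               # assumed area the light is spread over
SHUTTER_S = 1.0 / 200.0         # sync-speed exposure window

# Luminous exposure on the subject, in lux-seconds (lumen-seconds per square metre).
flash_lux_s = FLASH_ENERGY_WS * FLASH_EFFICACY_LM_PER_W / LIT_AREA_M2
projector_lux_s = PROJECTOR_LUMENS / LIT_AREA_M2 * SHUTTER_S

print(f"flash:     ~{flash_lux_s:.0f} lux*s per exposure")
print(f"projector: ~{projector_lux_s:.1f} lux*s per exposure")
print(f"ratio:     ~{flash_lux_s / projector_lux_s:.0f}x in favour of the flash")
[/code]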

7
Face and Body Scanning / Re: Our Results Thus Far
« on: April 20, 2011, 10:03:39 AM »
RE: Green screen and flat background

We masked those areas out in the software. Having a green screen made that easier. However, you can see a fair amount of green spill and misalignment in the albedo component of the reconstruction. In the end we plan to build out a white foam-core stage to bounce the light, in an effort to achieve a more evenly lit environment. Those walls might carry some sort of fiducials if it makes a difference.
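
As an aside, the masking step is simple enough that it could also be scripted outside the reconstruction software before import. A minimal sketch with OpenCV; the thresholds are guesses and the file names are made up:

[code]
# Minimal sketch of pre-masking green-screen backgrounds before feeding images
# to the reconstruction software. Thresholds are guesses and would need tuning.
import cv2
import numpy as np

def green_screen_mask(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask where 255 = subject, 0 = green background."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Hue ~35-85 covers most chroma greens; saturation/value floors reject dark noise.
    background = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
    subject = cv2.bitwise_not(background)
    # Clean up speckle and fill small holes along the silhouette.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    subject = cv2.morphologyEx(subject, cv2.MORPH_OPEN, kernel)
    subject = cv2.morphologyEx(subject, cv2.MORPH_CLOSE, kernel)
    return subject

if __name__ == "__main__":
    img = cv2.imread("IMG_0001.jpg")   # hypothetical input frame
    cv2.imwrite("IMG_0001_mask.png", green_screen_mask(img))
[/code]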

I can imagine that the bundle adjustment finds more accurate camera extrinsics with more information, i.e., the background. Long term, we would like to calibrate the cameras only once so that all the reconstructions come out in the same coordinate frame. This process would likely involve an object of known dimensions placed inside a stage filled with noise. We've not yet tried to carry a calibration across from one reconstruction to another, as we are still shooting with a single camera.
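
The scale part of that is straightforward once the fiducials can be picked out in the reconstruction. A rough sketch of what we mean, with made-up marker positions and a hypothetical 250 mm reference distance:

[code]
# Sketch of fixing absolute scale from an object of known dimensions placed in the
# scene: measure two reconstructed fiducial points, compare to the real distance,
# and scale the whole point set. Names and coordinates are made up for illustration.
import numpy as np

def scale_to_known_distance(points: np.ndarray,
                            marker_a: np.ndarray,
                            marker_b: np.ndarray,
                            true_distance: float) -> np.ndarray:
    """Uniformly scale a reconstruction so two markers end up the correct distance apart."""
    reconstructed_distance = np.linalg.norm(marker_a - marker_b)
    return points * (true_distance / reconstructed_distance)

if __name__ == "__main__":
    points = np.random.rand(10000, 3)         # stand-in for the reconstructed vertices
    marker_a = np.array([0.12, 0.05, 0.90])   # fiducial positions picked in the model
    marker_b = np.array([0.52, 0.04, 0.91])
    scaled = scale_to_known_distance(points, marker_a, marker_b, true_distance=250.0)  # mm
    print(f"scale factor applied: {250.0 / np.linalg.norm(marker_a - marker_b):.3f}")
[/code]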

Can you share your experience with saving and loading calibrations?

8
Face and Body Scanning / Re: Our Results Thus Far
« on: April 19, 2011, 05:55:46 PM »
Photo reconstruction against a laser scan:

In this round we felt the laser scan beat out the photoscan. We believe these results are typical for a single prosumer camera taking multiple photos in a somewhat controlled environment. Improvements made later allowed the photoscan to achieve laser quality. In either case, the texture information from the photoscan was superior to that from the laser scanner we had access to.

http://i.imgur.com/xzrok.jpg
http://i.imgur.com/YyBcb.jpg
http://i.imgur.com/FCdvX.jpg
http://i.imgur.com/iStHI.jpg
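
If anyone wants to put numbers on this kind of laser-versus-photo comparison, a nearest-neighbour distance check between the two point clouds is a reasonable first pass. A sketch; it assumes the two scans are already aligned, and the file names are hypothetical:

[code]
# Quick way to quantify "laser vs. photo" once both scans are in the same frame:
# nearest-neighbour distances from the photogrammetry points to the laser points.
import numpy as np
from scipy.spatial import cKDTree

def load_xyz(path: str) -> np.ndarray:
    """Load a plain-text point cloud with one 'x y z' triple per line."""
    return np.loadtxt(path, usecols=(0, 1, 2))

photo_points = load_xyz("photoscan_head.xyz")
laser_points = load_xyz("laser_head.xyz")

distances, _ = cKDTree(laser_points).query(photo_points)
print(f"mean deviation:   {distances.mean():.3f} (model units)")
print(f"median deviation: {np.median(distances):.3f}")
print(f"95th percentile:  {np.percentile(distances, 95):.3f}")
[/code]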

9
Face and Body Scanning / Our Results Thus Far
« on: April 19, 2011, 03:40:38 PM »
We stumbled upon this software and decided to give it a whirl; here are our findings:

Requirements:
We want to reduce costs and increase quality when scanning actors for use as digital characters in our video game. Currently we use a combination of laser and photogrammetric methods to produce polygon models; these are expensive and not detailed enough.

We started with a simple point-and-shoot and did nothing special for lighting. The camera was on a tripod and the object was rotated in roughly 22.5-degree increments.
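
That step size works out to 16 stations per revolution, which is where the image count below comes from:

[code]
# 360 degrees / 22.5 degrees per step = 16 stations, matching the 16 frames below.
STEP_DEG = 22.5
angles = [i * STEP_DEG for i in range(int(360 / STEP_DEG))]
print(f"{len(angles)} stations: {angles}")
[/code]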

Specifications:

Camera:
Olympus E-P1
40mm f/1.7
12 megapixels

Images:
16 JPG

Reconstruction #1:
Approx. 5 min
143538 verts
200000 faces
http://i.imgur.com/Q1I8J.jpg
http://i.imgur.com/glg9q.jpg

Reconstruction #2:
Approx. 7 min
249971 verts
500000 faces
http://i.imgur.com/Re7ao.jpg
http://i.imgur.com/96pQD.jpg


The software performed admirably, but the result would not be detailed enough for our project; there is a fair amount of noise. Having a bit of knowledge in the area of computer vision and stereo reconstruction, we decided to push the software further and see what it was capable of.

We set up 3 flashes in softboxes to try to get the most evenly lit environment we could. We brought in a variety of cameras and lenses.


Specifications:

Camera:
Canon 1000D
50mm
10 megapixels

Images:
9 PNG

Reconstruction #1:
Approx. 60 min
396534 verts
796459 faces
http://i.imgur.com/ybukT.jpg
http://i.imgur.com/KdBtf.jpg


Specifications:

Camera:
Canon 5D2
85mm
21 megapixels

Images:
8 PNG

Reconstruction #1:
Approx. 80 min
499875 verts
1000000 faces
http://i.imgur.com/gMqzd.jpg
http://i.imgur.com/ny8xB.jpg


The better camera and the improved lighting made a difference. It was starting to look like this might work. We knew we were still being sort of guerrilla about this and decided to get a bit more scientific. With just a single camera, the subject is surely moving between shots, the lighting is still not perfectly even, and we have very little control over the specular component. What if we could get better lighting and take the pictures simultaneously?

We didn’t have enough flash or enough cameras, so we used a stationary object.

With a dense stereo reconstruction, fine random noise makes it easier for the software to find matching pairs. So we added some makeup that both diminished the specular hits and increased the fine detail. You can see the difference it makes at the border of where the makeup was applied.
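
A toy illustration of why the fine noise helps (this is just the underlying idea, not what the software actually does internally): a patch with texture correlates sharply at a single offset along a scanline, while a featureless patch matches everywhere about equally.

[code]
# Toy illustration of why fine random texture helps dense stereo matching:
# a textured patch correlates sharply at one offset, a flat patch is ambiguous.
import numpy as np

rng = np.random.default_rng(0)

def best_match_ambiguity(scanline: np.ndarray, patch_center: int, patch_half: int = 8):
    """Slide the patch along the scanline and report how unique the best NCC match is."""
    patch = scanline[patch_center - patch_half: patch_center + patch_half + 1]
    scores = []
    for start in range(len(scanline) - len(patch)):
        window = scanline[start: start + len(patch)]
        # Normalized cross-correlation of patch vs. window (epsilon avoids divide-by-zero).
        p = patch - patch.mean()
        w = window - window.mean()
        scores.append((p * w).sum() / (np.linalg.norm(p) * np.linalg.norm(w) + 1e-9))
    scores = np.array(scores)
    top_two = np.sort(scores)[-2:]
    return scores.argmax(), top_two[1] - top_two[0]   # best offset, margin over runner-up

textured = rng.normal(size=400)   # surface painted with matte noise
flat = np.full(400, 0.5)          # perfectly smooth, featureless surface

for name, line in (("textured", textured), ("flat", flat)):
    offset, margin = best_match_ambiguity(line, patch_center=200)
    print(f"{name:>8}: best match at {offset}, margin over 2nd best = {margin:.3f}")
[/code]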


Specifications:

Camera:
Canon 1000D
50mm
10 megapixels

Images:
23 PNG

Reconstruction #1:
Approx. 90 min
499992 verts
1000000 faces
http://i.imgur.com/W0rIp.jpg
http://i.imgur.com/TOWs3.jpg
http://i.imgur.com/I4TWi.jpg
http://i.imgur.com/9EFLF.jpg
http://imgur.com/7rsHR


This model is now of the resolution and detail we require. Was the software capable of better? There were still improvements that could be made in the lens, the lighting, and the control over the specular component (polarized light and lenses?). We now had the data we required, but were curious to see how far we could push it …

…I can't show you the final results as they contain sensitive IP. What we did was take an existing polygon head of around 30000 polygons. It's nicely textured and fully rigged for our game. In Maya, we rendered that head from various angles and resolutions. We tried different lighting, resolutions, and camera intrinsics and extrinsics. We were able to recover the model down to about 0.01 mm accuracy. The calibration was of course wrong, resulting in some arbitrary 9-DOF transform difference, but the model was indeed recovered. Given perfect inputs, the software seems capable of giving near-perfect outputs.
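
For anyone repeating this kind of synthetic test: one way to factor out that arbitrary transform before measuring error is a least-squares similarity fit (Umeyama-style, scale + rotation + translation) between corresponding vertices. A rough sketch with stand-in data rather than our actual head model:

[code]
# Sketch of factoring out the unknown similarity transform between a recovered
# model and the ground-truth mesh before measuring error. Umeyama-style fit;
# assumes point correspondences (e.g. matching vertices) are already known.
import numpy as np

def similarity_align(source: np.ndarray, target: np.ndarray):
    """Return scale c, rotation R, translation t minimising ||c*R*source + t - target||."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    src, tgt = source - mu_s, target - mu_t
    cov = tgt.T @ src / len(source)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    c = np.trace(np.diag(D) @ S) / (src ** 2).sum() * len(source)
    t = mu_t - c * R @ mu_s
    return c, R, t

if __name__ == "__main__":
    truth = np.random.rand(30000, 3)   # stand-in for the ground-truth head vertices
    # Fake a recovered model: arbitrary scale/rotation/translation plus a little noise.
    angle = 0.4
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
    recovered = 2.3 * (truth @ Rz.T) + np.array([5.0, -2.0, 1.0]) \
                + np.random.normal(0, 1e-4, truth.shape)

    c, R, t = similarity_align(recovered, truth)
    aligned = c * recovered @ R.T + t
    rms = np.sqrt(((aligned - truth) ** 2).sum(axis=1).mean())
    print(f"residual RMS after alignment: {rms:.6f} (model units)")
[/code]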

Our next step is to purchase between 16 and 24 Canon 1100Ds, with enough flash, softboxes, polarization, truss, etc. to get as close as possible to the simulated results we see in Maya.

Thanks AgiSoft!



PS: There were a number of smaller tests and objects scanned, and lots of data was collected. More details are available, and we are happy to answer any questions.
