Hi all. We've been using PhotoScan's built-in tool to get masks from background images (i.e. a second set of clean plate/empty background images taken right after the subject is photographed). It does quite a good job, but seems to have the following issues:
1) the edge of the mask is often too generous, letting the white background bleed through; this is particularly problematic when creating textures for hair and for areas where 'webbing' is likely (fingers, armpits, crotch)
2) it often ends up masking out dark portions of the subject if they are in the same spot as a camera lens in the background image
3) due to diffuse reflection of the subject on our white floor, the mask at the contact point of the feet often includes a significant portion of the surrounding floor
So far we have been manually cleaning up these problem features using the PhotoScan tools, which takes a good 60-90 seconds per photograph on average. Obviously I'd like to take this number down to zero, so I've been looking at alternative methods for background subtraction whose output I could then feed into PhotoScan. This project looked like a good option, but it ended up having its own problems and being rather slow:
http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html

Is anyone else using external programs for mask generation? I should know the answer to this, but for some reason it is eluding me...
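For reference, the simplest clean-plate approach I've tried is just per-pixel differencing against the empty background shot, with a tunable threshold. This is a minimal sketch (not what PhotoScan does internally, as far as I know; the function name and threshold value are my own), using plain NumPy so it's easy to experiment with:

```python
import numpy as np

def plate_diff_mask(subject, plate, thresh=25):
    """Foreground mask: pixels where the subject frame differs from the clean plate.

    subject, plate: HxWx3 uint8 images shot from the same camera position.
    thresh: per-channel difference (0-255) above which a pixel counts as subject.
    Returns a uint8 mask, 255 = subject, 0 = background.
    """
    # Signed difference so dark-on-dark and light-on-light both register
    diff = np.abs(subject.astype(np.int16) - plate.astype(np.int16))
    # A pixel is foreground if ANY channel differs by more than the threshold
    return np.where(diff.max(axis=-1) > thresh, 255, 0).astype(np.uint8)
```

The catch is exactly the failure modes above: a single global threshold either eats into hair/webbing edges (too high) or pulls in the reflective floor around the feet (too low), so some morphological cleanup or a per-region threshold would still be needed on top.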
Thanks!