General / Re: VideoScan - Creating simple depthmaps from stereo videos (for compositing)
« on: August 16, 2011, 04:20:10 PM »
Hi Eric,
I've been using Agisoft for many VFX projects and experiments, such as performance capture.
The way I have been getting data is to set up 2 (sometimes 3) cameras (in my case a 7D and a 550D) 30-40 degrees apart in front of my face and film a sequence/performance.
I then line up and sync these scenes and render out frame sequences.
I've then batched these through PhotoScan and saved the resulting meshes out in sequence.
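To give an idea of the batching step: before each frame can go through PhotoScan, the synced frame sequences from the two cameras need to be paired up by frame number so every reconstruction job gets one image per camera. This is just a rough sketch of that pairing logic (the filenames, camera labels, and frame-numbering scheme are made up for illustration, not my actual setup):

```python
import re

def pair_frames(cam_a_files, cam_b_files):
    """Group files from both cameras by their trailing frame number.

    Returns {frame_no: {camera_label: filename}} for frames that exist
    in BOTH sequences -- only those can be reconstructed as a stereo pair.
    """
    def frame_no(name):
        # Assumes names end in a frame number, e.g. "7d_0001.png".
        m = re.search(r"(\d+)\.\w+$", name)
        return int(m.group(1)) if m else None

    by_frame = {}
    for cam, files in (("7D", cam_a_files), ("550D", cam_b_files)):
        for f in files:
            n = frame_no(f)
            if n is not None:
                by_frame.setdefault(n, {})[cam] = f

    # Drop frames missing from either camera's sequence.
    return {n: v for n, v in sorted(by_frame.items()) if len(v) == 2}

pairs = pair_frames(["7d_0001.png", "7d_0002.png", "7d_0003.png"],
                    ["550d_0001.png", "550d_0002.png"])
# Each entry then becomes one per-frame PhotoScan job.
```

In practice you'd feed each per-frame pair into PhotoScan (it has its own Python scripting/batch options, but the exact calls depend on your version, so I've left that out).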
I've then used 'SuperMesher' from Boomer Labs to create a cache/pointcache file from the OBJ sequence.
You then render a depth pass from this performance, and the resulting depth map can be used to drive a dense grid (non-changing topology), which in turn can have constraints and a skin wrap added to it, which would drive another mesh, such as a character mesh or rig.
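The depth-pass trick boils down to this: sample the depth map at each grid vertex and push that vertex along Z by the sampled depth. Because only vertex positions change and the topology stays fixed, skin wrap and constraints stay stable frame to frame. A minimal sketch (grid size and depth values are invented for illustration):

```python
def displace_grid(depth_map, width, height):
    """depth_map: row-major list of depth values, one per grid vertex.

    Returns a flat list of (x, y, z) vertex positions where z is driven
    by the depth map. Topology (vertex count/order) never changes.
    """
    verts = []
    for y in range(height):
        for x in range(width):
            z = depth_map[y * width + x]
            verts.append((float(x), float(y), z))
    return verts

# Toy 2x2 depth map standing in for one frame of the rendered depth pass.
depth = [0.0, 0.5,
         1.0, 0.25]
grid = displace_grid(depth, 2, 2)
```

Run this per frame with that frame's depth pass and the grid "performs" the capture; the skin-wrapped character mesh then follows the grid.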
I've done this a few times. I'll see if I can dig it all out and make a video to put up on Vimeo.
cheers
matt
www.angry-pixel.co.uk
http://angry-pixel.blogspot.com/