Agisoft Metashape
Agisoft Metashape => General => Topic started by: EricM on July 21, 2011, 01:36:33 PM
-
I'm a happy user of PhotoScan Standard, but I'm also a video FX artist, and I was wondering whether there is any way to create a special version/mode of PhotoScan dedicated to stereo videos, outputting only the depth map/disparity as another video (a 3D mesh would probably be overkill at that stage).
I can see that the math and all the building blocks are already there in PhotoScan: selecting stereo pairs and creating depth maps. But the workflow isn't adapted to video (25 fps x 2 eyes = 50 frames per second to match/solve/retrieve by hand).
Ideally a plugin for After Effects and the like would be great, but a nice and simple standalone app (or a special mode in PhotoScan) would certainly be a great step forward.
With 3D camcorders getting more affordable and 3D productions getting bigger, there's definitely a market for that. I know I would pay a premium for it.
Thanks for your consideration.
-
Hello EricM!
Just thought about your idea and there could be a solution:
PhotoScan could take image pairs from stereo cams and produce height-field maps. If you create, say, a hundred of them (in chunks) it could make a fascinating animation.
I don't know if it takes high programming skill to combine these into an animated displacement map and texture them with the undistorted footage. With a real animated vertex object (like an MDD file) it might even be possible to generate real 3D footage from three or more stereo cams in the same way. But that's only a future prospect for now, I think.
Greetz
tezen
-
I think you should try Nuke http://www.youtube.com/watch?v=YGZExcoM3lQ&feature=feedf and try to find a solution there...
-
Nuke on its own cannot do that. You have to add the Ocula plugin suite to get this functionality, and the plugin alone costs around 7000 euros, plus the price of Nuke... (which I don't have yet).
I've had a rather encouraging start by setting up a couple of stereo pairs by hand and retrieving the depth maps in After Effects for a sequence, but doing it for a whole sequence is incredibly tedious.
It may be possible in the Pro version through Python scripting, but as I said, I only have the Standard version.
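For what it's worth, even without the Pro API, the bookkeeping half of such a batch script (pairing left/right frame sequences so each stereo pair can be fed to one processing step) is easy to write. A minimal sketch; the filename pattern (left_0001.png / right_0001.png) is a made-up example, not anything PhotoScan requires:

```python
# Sketch of the bookkeeping half of a batch script: group a flat list of
# exported frame filenames into (left, right) pairs by frame number.
# The "left_NNNN" / "right_NNNN" naming is a hypothetical convention.
import re

def pair_stereo_frames(filenames):
    """Group frame filenames into (left, right) pairs, sorted by frame number."""
    frames = {}
    for name in filenames:
        m = re.match(r"(left|right)_(\d+)\.\w+$", name)
        if not m:
            continue
        eye, num = m.group(1), int(m.group(2))
        frames.setdefault(num, {})[eye] = name
    # keep only frames where both eyes are present, in frame order
    return [(f["left"], f["right"])
            for num, f in sorted(frames.items())
            if "left" in f and "right" in f]

pairs = pair_stereo_frames(["left_0001.png", "right_0001.png",
                            "left_0002.png", "right_0002.png",
                            "right_0003.png"])  # frame 3 lacks its left eye
print(pairs)  # only frames 1 and 2 survive as complete pairs
```

Each resulting pair would then be handed to whatever per-pair processing step the tool exposes.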
-
Hi Eric
I've been using Agisoft for many VFX projects and experiments, such as performance capture.
The way I have been getting data is to set up two (sometimes three) cameras (in my case a 7D and a 550D) at 30-40 degrees apart in front of my face and film a sequence/performance.
I then line up and sync these clips and render out frame sequences.
I've then batched these through PhotoScan and saved out the resulting meshes in sequence.
I've then used SuperMesher from Boomer Labs to create a cache/point-cache file from the OBJ sequence.
You then render a depth pass from this performance, and the resulting depth map can be used to drive a dense grid (non-changing topology), which in turn can have constraints and a skin wrap added to it, which would drive another mesh, such as a character mesh or rig.
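The key point in the step above is that the grid's topology never changes between frames; only per-vertex depth does, which is what makes point-cache formats (MDD and the like) applicable. A minimal sketch of that idea, with the function and the tiny 2x3 "depth map" being illustrative assumptions rather than any tool's API:

```python
# Minimal sketch: displace a fixed-topology grid with a per-frame depth map.
# Vertex count and order stay constant across frames; only z varies, which
# is the property point-cache workflows rely on.

def depth_to_vertices(depth, scale=1.0):
    """Turn a 2D depth map (list of rows) into a flat list of (x, y, z) vertices.

    x, y come from the pixel grid; z is the depth value times `scale`.
    """
    return [(x, y, depth[y][x] * scale)
            for y in range(len(depth))
            for x in range(len(depth[0]))]

# One tiny 2x3 "depth map" frame; a real frame would be the rendered depth pass.
frame = [[0.0, 0.5, 1.0],
         [0.2, 0.7, 1.2]]
verts = depth_to_vertices(frame)
print(len(verts))  # prints 6: one vertex per pixel, same count every frame
```

Feeding a sequence of such frames through this produces one vertex set per frame over identical topology, i.e. exactly the data a point cache stores.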
I've done this a few times. I'll see if I can dig it all out and make a video to put up on Vimeo.
cheers
matt
www.angry-pixel.co.uk
http://angry-pixel.blogspot.com/
-
Interesting... I'll have a shot at this...
Yet, since PhotoScan natively outputs a depth map, it would be great to be able to also batch-output this depth sequence.
Actually, a video mode for StereoScan would be enough:
- Select the right-eye video sequence
- Select the left-eye video sequence
- Compute the disparity map and output a depth map to use in After Effects, Nuke, whatever...
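To illustrate what the last step involves (this is not PhotoScan's actual algorithm, just the textbook principle): for each pixel in the left image, search a range of horizontal offsets in the right image for the best window match, and record the winning offset as the disparity. A toy sum-of-squared-differences matcher over one scanline, with all names and sample data made up for illustration:

```python
# Toy disparity step: for each pixel on a scanline, find the horizontal
# offset d into the right image whose small window best matches (lowest
# sum of squared differences) the window around the left pixel. Real
# stereo tools use far more robust matching; this only shows the idea.

def disparity_scanline(left, right, max_disp=3, win=1):
    """Per-pixel disparity along one scanline (lists of intensities)."""
    n = len(left)
    disp = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):  # keep x - d inside the image
            cost = sum((left[min(max(x + k, 0), n - 1)]
                        - right[min(max(x + k - d, 0), n - 1)]) ** 2
                       for k in range(-win, win + 1))  # clamp window edges
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

# An intensity ramp shifted by 2 pixels between the eyes:
left  = [0, 1, 2, 3, 4, 5, 6, 7]
right = [2, 3, 4, 5, 6, 7, 7, 7]
print(disparity_scanline(left, right))  # [0, 1, 2, 2, 2, 2, 2, 2]
```

The disparity settles at 2 once the search range allows it; pixels near the left border can't look that far back, which is the usual border artifact. Doing this per scanline, per frame pair, is exactly what a batch "video mode" would automate.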
-
Hi Matt!
I would love to see a short video, or just a short description of how you set up the batch process. I see no mention of this feature in the manual, so I don't really know how the batch process works. It would be very interesting to try generating a 3D mesh sequence :)