
Author Topic: tricky mesh per frame performance scanning  (Read 7931 times)

dan

  • Newbie
  • Posts: 4
tricky mesh per frame performance scanning
« on: February 16, 2018, 09:22:09 AM »
Hi Guys,

I'm new to this forum, so sorry if this has been discussed already - I tried to look at some tutorials, but since it's a fairly specific issue, some of you might have a solution better fitted to it.

I need to reconstruct an actor's facial performance, around 35-40 minutes of footage split into shots. It's captured by a fixed 8-camera rig: 4 camera pairs about 3 meters from the actor. I'm not sure of the exact focal length (I didn't get any EXIF data, only PNGs, to make it more fun I guess).

Long story short, I can't change the cameras. I managed to read in the camera clusters and reconstruct the whole thing, but the reconstruction process is really inconsistent - sometimes it aligns all the cameras just fine, but 90% of the time it fails to find 1-2 of them, so the result doesn't come out as nice as it could, given the circumstances.

The question is: is there any option to select the camera pairs manually (as each pair is close together, it'd make sense to use them as the basis for the pixel correspondences), or to do a marker setup to assist, without having to go through thousands of pictures manually?

Also, when I set the whole thing up as a batch process, it wouldn't process every frame, just one.

Any ideas are much appreciated.

Cheers,

Dan

SAV

  • Hero Member
  • Posts: 710
Re: tricky mesh per frame performance scanning
« Reply #1 on: February 16, 2018, 11:06:09 AM »
Hi dan,

You could try to insert a scale bar between camera stations given that you were using a rig and the distance between cameras is known. This additional information can help PhotoScan in the alignment process.

To generate such a scale bar, select two cameras in the Reference pane, then right click and choose Create Scale Bar. Then enter the distance between the cameras in the Scale Bars section of the Reference pane. Repeat for all cameras/camera pairs and then run the alignment again.

If this works, you might then want to check whether you could automate the process using Python scripting.
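As a rough sketch of what such a script could look like: pair up the rig cameras by label, then add a scale bar per pair. The label convention (`pair1_L`/`pair1_R`) is an assumption - adapt it to your own naming. The PhotoScan calls (`chunk.addScalebar`, `scalebar.reference.distance`) follow the 1.x Python API, so double check against your version:

```python
# Sketch: automate scale bar creation between rig camera pairs.
# The label convention "pair1_L"/"pair1_R" is a hypothetical example;
# adjust pair_key() for your own camera naming.

def pair_key(label):
    # "pair1_L" -> "pair1"
    return label.rsplit("_", 1)[0]

def group_pairs(labels):
    """Group camera labels into stereo pairs by shared prefix."""
    pairs = {}
    for lab in labels:
        pairs.setdefault(pair_key(lab), []).append(lab)
    # keep only complete pairs (exactly two cameras sharing a prefix)
    return {k: sorted(v) for k, v in pairs.items() if len(v) == 2}

def add_scalebars(chunk, distance_m):
    """Create a scale bar for each stereo pair in a PhotoScan chunk."""
    by_label = {c.label: c for c in chunk.cameras}
    for left, right in group_pairs(by_label).values():
        bar = chunk.addScalebar(by_label[left], by_label[right])
        bar.reference.distance = distance_m  # known baseline in metres

# Inside PhotoScan this would be driven by something like:
# import PhotoScan
# add_scalebars(PhotoScan.app.document.chunk, 0.4)
```

The pairing helper is plain Python, so you can test it outside PhotoScan before wiring it up to a chunk.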

Regards,
SAV

dan

  • Newbie
  • Posts: 4
Re: tricky mesh per frame performance scanning
« Reply #2 on: February 18, 2018, 08:23:45 AM »
Hi SAV,

thanks for your reply!

A scale bar could work, but in this case the outside vendor who provided the footage may have undistorted every camera with slightly different settings. They also may have applied a slightly different gamma shift to 1/4, 1/2, 3/4 or the whole of an image sequence. While I could fix the latter, I'm getting a divergence of around 20mm in the camera focal lengths when reconstructing, even though I'm fairly sure the rig used fixed focal length lenses. I don't have any reliable, exact information about the camera setup.

In this scenario I don't have the option of measuring the exact distance from camera to camera (I can make an educated guess at the relative distance, that's about it). I may not need it though, since I'm mostly interested in relative differences between frames. It's a whole performance over several shots, and what I mostly need is how far, and in which direction, a point moves relative to its last position in space.
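To be concrete, the "relative motion" I'm after boils down to frame-to-frame deltas of each tracked point. A minimal sketch in plain Python, assuming the per-frame 3D positions come from the reconstructed meshes or markers:

```python
import math

def displacements(track):
    """Per-frame displacement vectors and distances for one tracked
    3D point; track is a list of (x, y, z) positions, one per frame."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(track, track[1:]):
        d = (x1 - x0, y1 - y0, z1 - z0)
        dist = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
        out.append((d, dist))
    return out
```

Since only deltas matter, a globally consistent (even if unscaled) coordinate system per shot would be enough for this.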

I have no marker setup either, due to the requirements of the project - but what I could do is assign markers to facial features such as pores, moles and the like.

Do any of you know of a guide/tutorial/previous case dealing with a similar scenario, or have experience with this kind of thing?

I attached shots of an approximate camera setup. Also, here are some quick renders of a random free 3D scan in place of the actor (in this case provided by ten24) to serve as an example of what I'd get for 1 frame looking through each camera (the whole project would be thousands of shots like these):

https://drive.google.com/file/d/1nXY7-K_Ys-8LGjznZFrpE4a5zm_hF_-U/view?usp=sharing

I didn't apply the random undistort or gamma correction I mentioned before, since I figured out how to fix that in the vendor footage and didn't want to further complicate things.

If anyone has any pointers, they'd be much appreciated.

Cheers,

Dan

SAV

  • Hero Member
  • Posts: 710
Re: tricky mesh per frame performance scanning
« Reply #3 on: February 19, 2018, 02:24:32 AM »
Hi dan,

If your footage has been edited by your vendor (i.e., undistorted, geometrically transformed or cropped), then PhotoScan will have issues with the internal camera calibration (lens calibration) and therefore might also struggle with external camera calibration (= pose estimation/camera alignment).

You might want to get in touch with the guys from Infinite-Realities (http://ir-ltd.net/). From what I've seen on their website they have been pushing the boundaries and are probably the best people to talk to when it comes to 4D reconstructions. Also have a look at this: http://ir-ltd.net/introducing-the-aeon-motion-scanning-system/

Next time try to be involved in the image acquisition step as well. It will make photogrammetric processing much easier/quicker and at the same time help you to deliver high quality outcomes.

The main controlling factor of photogrammetric processing is the data/imagery itself.

Not sure if you are aware of the 4D processing capabilities of PhotoScan using the timeline. Here is the description from the help/manual:

Quote
4D processing
Overview
PhotoScan supports reconstruction of dynamic scenes captured by a set of statically mounted synchronized cameras. For this purpose multiple image frames captured at different time moments can be loaded for each camera location, forming a multiframe chunk. In fact normal chunks capturing a static scene are multiframe chunks with only a single frame loaded. Navigation through the frame sequence is performed using Timeline pane.

Although a separate static chunk can be used to process photos for each time moment, aggregate multiframe chunks implementation has several advantages:

  • Coordinate systems for individual frames are guaranteed to match. There is no need to align chunks to each other after processing.
  • Each processing step can be applied to the entire sequence, with a user selectable frame range. There is no need to use batch processing, which simplifies the workflow.
  • Accuracy of photo alignment is better due to the joint processing of photos from the entire sequence.
  • Markers can be tracked automatically through the sequence.
  • Intuitive interface makes navigation through the sequence pretty simple and fast.

Multiframe chunks can also be efficient (with some limitations) for processing of disordered photo sets of the same object or even different objects, provided that cameras remain static throughout the sequence.

Managing multiframe chunks
Multiframe layout is formed at the moment of adding photos to the chunk. It will reflect the data layout used to store image files. Therefore it is necessary to organize files on the disk appropriately in advance. The following data layouts can be used with PhotoScan:

a) All frames from the corresponding camera are contained in a separate subfolder. The number of subfolders is equal to the number of cameras.
b) Corresponding frames from all cameras are contained in a separate subfolder. The number of subfolders is equal to the number of frames.
c) All frames from the corresponding camera are contained in a separate multilayer image. The number of multilayer images is equal to the number of cameras.
d) Corresponding frames from all cameras are contained in a separate multilayer image. The number of multilayer images is equal to the number of frames.

Once the data is properly organized, it can be loaded into PhotoScan to form a multiframe chunk. The exact procedure will depend on whether the multifolder layout (variants a and b) or multilayer (variants c and d) layout is used.

To create a chunk from multifolder layout

1. Select the Add Folder... command from the Workflow menu.
2. In the Add Folder dialog box, browse to the parent folder containing subfolders with images, then click the Select Folder button.
3. In the Add Photos dialog select the suitable data layout. For layout a) above select "Create multiframe cameras from folders as cameras". For layout b) select "Create multiframe cameras from folders as frames".
4. The created multiframe chunk will appear on the Workspace pane.
To create a chunk from multilayer images

1. Select the Add Photos... command from the Workflow menu or click the Add Photos toolbar button.
2. In the Add Photos dialog box, browse to the folder containing multilayer images and select the files to be processed, then click the Open button.
3. In the Add Photos dialog select the suitable data layout. For layout c) above select "Create multiframe cameras from files as cameras". For layout d) select "Create multiframe cameras from files as frames".
4. The created multiframe chunk will appear on the Workspace pane.

It is recommended to inspect the loaded frame sequence for errors. This can be done by scrolling the frame selector in the Timeline pane and inspecting thumbnails in the Photos pane during scrolling.

After a multiframe chunk is created, it can be processed in the same way as normal chunks. For multiframe chunks, additional processing parameters allowing you to select the range of frames to be processed will be provided where appropriate.

Tracking markers
PhotoScan can automatically track marker projections through the frame sequence, provided that the object position doesn't change significantly between frames. This greatly simplifies the task of labeling a moving point when the number of frames is large.

To track markers through the frame sequence

1. Scroll the frame selector in the Timeline pane to the 1st frame. Add markers for the 1st frame as described in the Setting coordinate system section.
2. Select the Track Markers... command from the Tools menu.
3. Adjust the starting and ending frame indices if necessary. The default values correspond to tracking from the current frame to the end of the sequence. Click the OK button to start tracking.
4. Check the tracked marker locations. Automatically tracked markers are indicated with a distinct icon. In case of a placement error at some frame, adjust the wrong marker location within the frame where the failure occurred; once the marker location is refined by the user, the marker icon changes accordingly.
5. Restart tracking from that frame using the Track Markers... command again.
Note: If the ending frame index is smaller than the starting index, tracking will be performed in the backwards direction.
Automatic marker tracking is likely to fail when structured light is used to add texture detail to the object surface, as the light pattern will not be static with respect to the moving object surface.
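One practical point: since the multiframe layout is formed from the folder structure, flat PNG dumps need pre-sorting first. A small sketch that sorts files into one subfolder per camera (layout a from the manual excerpt); the filename pattern `cam03_f0124.png` is an assumption, so adapt the regex to whatever your vendor delivers:

```python
import os
import re
import shutil

# Hypothetical filename pattern "cam03_f0124.png" -> camera "cam03";
# adapt the regex to the vendor's actual naming.
PATTERN = re.compile(r"(cam\d+)_f\d+\.png$")

def camera_folder(filename):
    """Return the per-camera subfolder for a frame file, or None."""
    m = PATTERN.match(filename)
    return m.group(1) if m else None

def sort_frames(src_dir):
    """Move each frame into a subfolder named after its camera."""
    for name in sorted(os.listdir(src_dir)):
        cam = camera_folder(name)
        if cam is None:
            continue  # skip files that don't look like frames
        dst = os.path.join(src_dir, cam)
        os.makedirs(dst, exist_ok=True)
        shutil.move(os.path.join(src_dir, name), os.path.join(dst, name))
```

Once sorted, the Add Folder... workflow above should pick the sequence up as a multiframe chunk.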

All the best.

Regards,
SAV

dan

  • Newbie
  • Posts: 4
Re: tricky mesh per frame performance scanning
« Reply #4 on: February 19, 2018, 04:06:59 AM »
Thank you for the links SAV!

Changing vendors is not an option since I'm working for another vendor myself, on the VFX end of things. By the time we get the data, the shoot is long over, unfortunately. Sometimes, depending on the studio and the project, there tends to be somewhat of a disconnect between the on-set crew and the VFX companies. Also, since different departments have different needs, the supplied reference is often biased towards one more than another.

Anyway, 4D processing is what I'm using; it'd be exactly what I need and works more or less. The only problem I have is the camera alignment, which is off due to the less than ideal setup. I'll go over the description you sent me (I've been reading the manual, which was a bit more brief on the subject); maybe that'll give me some pointers. As opposed to the nice setup of IR's rig, I have 8 cameras instead of 16, and a strange camera pair arrangement not very well suited for PhotoScan.

I don't have markers, but maybe I can find some matching skin detail in each camera and set some markers based on that. Most if not all of the tutorials online deal with marker setups in aerial footage, so I had some issues applying that to my 4D approach.

Thank you for your time and pointers!

Dan


SAV

  • Hero Member
  • Posts: 710
Re: tricky mesh per frame performance scanning
« Reply #5 on: February 20, 2018, 01:42:09 AM »
No worries, dan.

Having 16 instead of 8 cameras would make life/photo alignment much easier  ::)

As you mentioned, there are probably a few distinctive facial features that you could use for marker placement. For example, eyes and/or the nose tip should work well.

You could even push the boundaries a little more by doing some advanced Python scripting within PhotoScan. You could load a third party Python library such as OpenCV, which would allow you to automatically detect facial features (e.g., eyes) and place markers there.  :o
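A rough sketch of that idea: detect eyes with one of the Haar cascades bundled with opencv-python and convert the detection boxes to pixel centers that could seed marker projections. The PhotoScan calls at the end are commented out and may need adjusting for your version:

```python
# Sketch: OpenCV eye detection feeding PhotoScan marker placement.
# The Haar cascade ships with opencv-python; marker API details vary
# between PhotoScan versions, so treat the commented calls as a guide.

def box_center(box):
    """Center of an (x, y, w, h) detection box in pixel coordinates."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def detect_eye_centers(image_path):
    """Return pixel centers of detected eyes in one image."""
    import cv2  # imported here so the pure helper works without OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return [box_center(b) for b in cascade.detectMultiScale(gray)]

# Inside PhotoScan, each detected center could then seed a marker
# projection on the corresponding camera, along the lines of:
# marker = chunk.addMarker()
# marker.projections[camera] = PhotoScan.Vector(center)
```

Detecting the same feature in both cameras of a pair would give the cross-camera correspondences that are currently failing.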

Regards,
SAV


dan

  • Newbie
  • Posts: 4
Re: tricky mesh per frame performance scanning
« Reply #6 on: February 21, 2018, 12:26:19 AM »
Thanks SAV!

16 cameras would be great. The current setup in question is based on the stereo pair approach I guess, which makes it tricky to handle in Agisoft, but for my scenario I found it might still be the best solution.

OpenCV is a great idea. I'm going to try and get some R&D support; my Python is very limited, mostly doing Maya stuff. :)

I tried manual marker placement, which gave me a very slight misalignment of some camera pairs, just enough to be annoying: parts of the face offset by 2-3mm. It did use the alignment for the whole sequence though, which is a big plus. I'm going to run a few more tests to get some consistency.

Cheers,

Dan

georgesemmanuelarnaud

  • Newbie
  • Posts: 2
Re: tricky mesh per frame performance scanning
« Reply #7 on: May 21, 2018, 01:50:37 AM »
Do you guys mean that it is possible to make a facial rig with only 16 cameras?