Messages - Olliebrown

1
Greetings!

We have been trying to work with the data stored in the standard XML file of camera positions, along with the 'undistorted photos' that can be produced from the export menu, to do some view-dependent texture mapping.  The idea is that the individual views can be precisely projected back onto the geometry and cleverly blended to produce very realistic material appearances (see Buehler et al.'s Unstructured Lumigraph and derivative works).
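For concreteness, the core per-view lookup we need is simple projective texturing: transform a surface point into the view's camera space, apply the intrinsics, and sample the photo there.  A minimal sketch (variable names are ours; nothing here comes from the PhotoScan API):

```python
import numpy as np

def view_texcoord(p_world, view_matrix, K):
    """Project a world-space surface point into one calibrated view and
    return the pixel coordinates to sample.  view_matrix is the 4x4
    world-to-camera transform for that view; K is the 3x3 intrinsic
    matrix of the corresponding undistorted photo."""
    p_cam = view_matrix @ np.append(p_world, 1.0)  # world -> camera space
    uv = K @ p_cam[:3]                             # apply intrinsics
    return uv[:2] / uv[2]                          # perspective divide
```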

So far we've had a lot of success doing this.  PhotoScan calibrates all the views and computes the appropriate extrinsic camera properties, which we can extract from the XML camera data.  Additionally, we can account for the intrinsic camera properties via the 'undistorted photos' option, which removes all lens distortion.  We are also able to get the model and these views into alignment using the global rotation, translation, and scale values in the XML file, if any are included.

However, we have found one combination of transformations that we cannot seem to handle consistently.  If the object includes both a global rotation AND a global scale (caused by rotating the object to align it with the axes and then scaling with scale bars), we cannot get the proper projection for the individual views no matter how we treat the matrices and combine them.
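For reference, here is roughly how we're composing the transforms from the XML (simplified; element names follow what we see in our exported files, and this is the step that seems to go wrong when rotation and scale are both non-trivial):

```python
import numpy as np

def chunk_to_world(transform_elem):
    """Build the 4x4 chunk-to-world matrix from the global <rotation>,
    <translation>, and <scale> elements of the exported camera XML."""
    R = np.array(transform_elem.find('rotation').text.split(), dtype=float).reshape(3, 3)
    t = np.array(transform_elem.find('translation').text.split(), dtype=float)
    s = float(transform_elem.find('scale').text)
    M = np.eye(4)
    M[:3, :3] = s * R  # fold the uniform scale into the rotation block
    M[:3, 3] = t
    return M

def view_matrix(camera_elem, M_chunk):
    """World-to-camera matrix for one <camera> element, whose <transform>
    is the 4x4 camera-to-chunk pose."""
    T = np.array(camera_elem.find('transform').text.split(), dtype=float).reshape(4, 4)
    return np.linalg.inv(M_chunk @ T)
```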

My current suspicion is that something is happening in the 'undistorted photos' export that we are not fully accounting for.  Can anyone provide insight into the FULL model that is applied when you export 'undistorted photos'?  I can find bits and pieces of it in libraries like OpenCV, but none of them account for all of the parameters listed for camera intrinsic properties in the XML file.  Also, a more technical explanation of the options in the 'undistorted photos' export dialog (like the 'square pixels' checkbox), and how they might be relevant to our target application, VDTM, would be very useful.
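For the record, this is the distortion model we've been assuming when checking the undistorted output against the original photos: a standard Brown radial/tangential model matching the k1-k3 and p1-p2 parameters in the XML.  The parameter conventions here are our guess, which is exactly what I'm hoping someone can confirm or correct:

```python
def distort(x, y, cal):
    """Map normalized camera coordinates (x, y) to pixel coordinates using
    a Brown radial/tangential model.  cal holds fx, fy, cx, cy, k1, k2,
    k3, p1, p2 as read from the XML calibration block (our naming)."""
    r2 = x * x + y * y
    radial = 1 + cal.k1 * r2 + cal.k2 * r2 ** 2 + cal.k3 * r2 ** 3
    xd = x * radial + 2 * cal.p1 * x * y + cal.p2 * (r2 + 2 * x * x)
    yd = y * radial + cal.p1 * (r2 + 2 * y * y) + 2 * cal.p2 * x * y
    return cal.cx + xd * cal.fx, cal.cy + yd * cal.fy
```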

And of course, this might have nothing to do with the undistortion process, so if anyone has any other ideas I'm all ears!

Any thoughts or assistance would be very much appreciated!

Seth Berrier

2
General / Re: Underlying Algorithms for PhotoScan
« on: April 21, 2015, 09:20:46 PM »
Absolutely!  I will provide links and generally talk about our work in the forums once we get it published (we hold it close until then).  It's submitted to Digital Heritage 2015 now, and it looks like we'll hear back pretty soon.

Fingers crossed!

Thanks for the info everyone.

Seth B.

3
General / Re: Underlying Algorithms for PhotoScan
« on: April 17, 2015, 11:24:33 PM »
HA!  That's almost uncanny how similar that is to my post.  Thanks for the link.

4
General / Underlying Algorithms for PhotoScan
« on: April 17, 2015, 08:25:15 PM »
We make use of PhotoScan to seed our research into light field rendering and digital curation.  As such, we often mention PhotoScan in our research papers as an incredible tool that puts powerful computer vision algorithms in a simple package that is easy to use and tune!

In doing so, we've speculated about the actual algorithms in use under the hood.  Since this is proprietary software that is not open source, we don't know for certain what is in use, but based on the output generated and the format of the console messages we have been able to make some reasonable deductions.

I'm wondering if you guys (Agisoft) would be willing to officially confirm what general algorithms you use, minus all the details that are trade secrets, of course (you wield these algorithms better than any other software out there, and I don't want to undermine your ability to make money off of that).

Here are some of the things I've assumed up to now:
  • Some form of SIFT or SURF must be in use to identify features up front, along with descriptors for matching (roughly the kind of pipeline sketched after this list)
  • Bundle adjustment in some form must be used to fit the camera model for each view and get accurate camera poses and the sparse point cloud (the console output also suggests this is an SfM process, but I don't know enough here to deduce anything more specific)
  • The dense cloud reconstruction must be some form of MVS disparity calculation; again, I don't know enough to deduce more than that
  • The mesh reconstruction is clearly the free implementation of screened Poisson surface reconstruction available here: http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version6.13/
  • Texture generation: I have no idea how this is done or whether there is an appropriate research paper about it, especially for the color correction (which also seems to be utilizing bundle adjustment, but I have no background here to deduce more)
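To illustrate that first bullet, here is roughly the kind of feature pipeline we suspect, sketched with OpenCV; PhotoScan's actual detector and matcher are of course unknown to us:

```python
import cv2

# SIFT detection plus descriptor matching with Lowe's ratio test --
# purely illustrative of the general approach, not Agisoft's code.
img1 = cv2.imread('view1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
```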

Anyways, just wondering if you can share more so we can talk about it more precisely in academic circles where they care about these sorts of things.

Thanks!
Seth Berrier

5
Python and Java API / 'Model.renderNormal()' or something similar
« on: October 16, 2014, 11:11:37 PM »
This may be a question or it may be a feature request if it doesn't exist yet.

When choosing 'export depth' for a complete model, I am presented with a dialog where I can select from 'texture', 'depth', and 'normal' images to export.  The texture option seems to correspond to the Python function Model.renderImage(), and the depth one goes with Model.renderDepth().

Where is the function for creating that normal map image?  Is this available via the scripting API somewhere else?  Is this something that could be added to a future version of the API so I could automate export of this image as well as the depth and texture images?

Another thing that would be helpful would be more information about the depth image that is created with this option.  To really be useful, we need to know more about the camera projection matrix and the canonical viewing volume (specifically the near and far clipping planes).  I'm planning on reverse engineering this information, but if you (or Python) could simply provide it, that would save a LOT of time!   ::)
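For example, if the depth image comes from a standard perspective depth buffer (just a guess on my part), recovering eye-space depth needs exactly the near/far values I'm missing:

```python
def depth_buffer_to_eye(z_buf, near, far):
    """Convert a [0, 1] depth-buffer sample back to eye-space depth,
    assuming an OpenGL-style perspective projection.  near and far are
    the clipping planes I would otherwise have to reverse engineer."""
    z_ndc = 2.0 * z_buf - 1.0  # [0, 1] -> NDC [-1, 1]
    return 2.0 * near * far / (far + near - z_ndc * (far - near))
```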

Thanks!
Seth B.
