Dear Dmitry,
First of all, I would like to say that you have produced a very impressive bit of software. We are using Photoscan and Photoscan Professional here at the University of Tasmania to derive 3D point clouds from historical aerial photography and from photographs acquired with an unmanned aerial vehicle (UAV). I have previously worked with SIFT, libsiftfast, Bundler, PMVS2, and CMVS, and have achieved some very good results with those tools. I have also come across examples where Photoscan outperforms Bundler at calculating camera position, orientation, and distortion parameters, especially when the focal length is unknown or uncertain (as in underwater photography).
I am keen to publish some of our results in scientific journals; however, to do that I need to be able to report the general type of algorithms used in Photoscan. As far as I can tell, Photoscan follows a workflow very similar to this:
- SIFT
- key matching (I have sketched how I reproduce these first two steps in code after this list)
- bundle adjustment for camera position, orientation, and distortion parameters + sparse point cloud (based on Bundler?); a toy sketch of this step also follows
- patch-based multi-view stereo for dense point cloud reconstruction (based on PMVS2 and/or CMVS?)
- surface reconstruction based on Delaunay triangulation or Poisson surface reconstruction
- texture mapping
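To make the first two steps concrete, here is roughly how I have been reproducing them myself with OpenCV's SIFT implementation. This is purely my own sketch (the filenames and ratio threshold are illustrative), not a claim about what Photoscan does internally:

```python
# Sketch of steps 1-2: SIFT feature extraction and key matching.
# Filenames are placeholders; cv2.SIFT_create is cv2.SIFT in older builds.
import cv2
import numpy as np

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test (0.8 as in the SIFT paper)
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(desc1, desc2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])

# Geometric verification: keep matches consistent with epipolar geometry
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
print(f"{int(mask.sum())} verified matches out of {len(good)}")
```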
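And for the third step, my understanding of bundle adjustment is that it jointly minimises the reprojection error over the camera parameters (including focal length and radial distortion, as in Bundler's nine-parameter camera model) and the 3D points. A toy residual function along those lines, again just my own illustration with scipy rather than anything from Photoscan:

```python
# Toy sketch of step 3: bundle adjustment as joint minimisation of the
# reprojection error over camera parameters (rvec, t, f, k1, k2) and
# 3D points, in the style of Bundler's camera model.
import numpy as np
from scipy.optimize import least_squares

def rotate(points, rvecs):
    """Rodrigues rotation of each point by the matching axis-angle vector."""
    theta = np.linalg.norm(rvecs, axis=1, keepdims=True)
    with np.errstate(invalid="ignore"):
        axis = np.nan_to_num(rvecs / theta)  # zero vector -> no rotation
    dot = np.sum(points * axis, axis=1, keepdims=True)
    cos, sin = np.cos(theta), np.sin(theta)
    return cos * points + sin * np.cross(axis, points) + (1 - cos) * dot * axis

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed):
    """Reprojection error for every observed 2D keypoint."""
    cams = params[:n_cams * 9].reshape(n_cams, 9)   # rvec(3), t(3), f, k1, k2
    pts3d = params[n_cams * 9:].reshape(n_pts, 3)
    p = rotate(pts3d[pt_idx], cams[cam_idx, :3]) + cams[cam_idx, 3:6]
    p = -p[:, :2] / p[:, 2, np.newaxis]             # camera looks down -z
    f, k1, k2 = cams[cam_idx, 6], cams[cam_idx, 7], cams[cam_idx, 8]
    r2 = np.sum(p ** 2, axis=1)
    p *= (f * (1 + k1 * r2 + k2 * r2 ** 2))[:, np.newaxis]  # radial distortion
    return (p - observed).ravel()

# x0 stacks all camera parameters and 3D points into one vector;
# cam_idx/pt_idx say which camera and point each observation belongs to.
# result = least_squares(residuals, x0,
#                        args=(n_cams, n_pts, cam_idx, pt_idx, observed))
```

The Jacobian of this residual is extremely sparse (each observation touches only one camera and one point), which I assume is what any production solver exploits to make adjusting hundreds of cameras tractable.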
I am curious to know whether you developed your own algorithms from scratch or built on and improved existing ones. I realise that you might not be able, or might not want, to provide too much technical detail. However, in order to use the Photoscan results in a publication, I need to be able to report on the workflow and methods. Any help would be greatly appreciated.
Thanks in advance and keep up the great work!
Arko