Hi
I am working on a pavement inspection project, where two six-lens 360° spherical cameras are mounted on a car for street-view image collection. I use the original unstitched images obtained from each lens and import them as a multicamera system (the rig is calibrated in advance) for photo alignment and pavement dense cloud generation.
Each project contains 10000 groups of images (80000 images in total), and the groups are captured at equal 1 m intervals. Alignment takes several hours, but I have run into a problem at the depth maps generation step.
I understand that depth map generation uses a selected number of neighbors (the default is 100, and I modified it to -1), chosen from the valid matches in the overlapping areas.
However, since this is a close-range environment project, the area I am interested in is the pavement, and the useful overlap is limited to roughly the 5 neighbouring groups. There is no need to process depth maps from pairs of groups more than 5 meters apart, even if they overlap and have valid matches on the buildings.
Therefore, I am wondering whether there is a way to select a group of images for depth map generation instead of relying on matches, as searching all possible image pairs for depth generation is really time consuming. Adjusting the tweaks may help, but it is still not a good solution, because the number of valid matches also varies from scene to scene, so it is hard to choose a proper value.
The default value of 100 is very quick, but it results in missing points on the pavement and the confidence is too low, while the value -1 gives complete and more confident dense points on the pavement, but the processing time is at least 10 times longer.
As my project scene is stable, I hope there is an option to process depth maps only within a certain distance range.
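In case it makes the request clearer, below is a rough Python API sketch of the workaround I have been considering: generating depth maps in batches while disabling every camera farther than about 5 m from the current batch, so that only nearby stations remain as pair candidates. The parameter names (downscale, cameras, reuse_depth, max_neighbors), the batch/window sizes, and the assumption that disabled cameras are excluded from pair selection are my own guesses from the API documentation, so please correct me if it does not behave this way.

```python
import Metashape

# All numbers below are my own assumptions for illustration, not recommended values.
MAX_PAIR_DISTANCE = 5.0   # metres between stations that are still allowed to pair
BATCH_SIZE = 400          # cameras whose depth maps are generated per call
WINDOW = 200              # extra cameras on either side of the batch kept enabled

doc = Metashape.app.document
chunk = doc.chunk

# Only aligned cameras have a valid transform / center.
aligned = [cam for cam in chunk.cameras if cam.transform is not None]

# Chunk units -> metres (assumes the chunk is scaled / georeferenced).
scale = chunk.transform.scale or 1.0

def dist_m(a, b):
    # Distance between two camera centres in metres.
    return (a.center - b.center).norm() * scale

for start in range(0, len(aligned), BATCH_SIZE):
    batch = aligned[start:start + BATCH_SIZE]

    # Keep enabled only the cameras within MAX_PAIR_DISTANCE of the batch; my
    # assumption is that disabled cameras are excluded from pair selection.
    # The index window relies on chunk.cameras following the capture order.
    lo = max(0, start - WINDOW)
    hi = min(len(aligned), start + BATCH_SIZE + WINDOW)
    for cam in chunk.cameras:
        cam.enabled = False
    for cam in aligned[lo:hi]:
        cam.enabled = any(dist_m(cam, b) <= MAX_PAIR_DISTANCE for b in batch)

    # Generate depth maps for the batch only; reuse_depth should keep the maps
    # already computed in earlier batches (parameter names as I read the API docs).
    chunk.buildDepthMaps(downscale=2,
                         cameras=batch,
                         reuse_depth=True,
                         max_neighbors=-1)
    doc.save()

# Re-enable everything before building the dense cloud from the accumulated depth maps.
for cam in chunk.cameras:
    cam.enabled = True
```

Even if something like this works, a built-in distance limit for depth map pair selection would be much more convenient than toggling cameras from a script.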
Attached are some of my results.