Except when the reconstructed 3D object represents an interior, in most cases the user takes photos around or along the surface of the targeted object. Therefore, once photo alignment is done, all points detected outside the volume bounded by the camera positions can be safely discarded, improving both the speed of geometry construction and memory usage (the grid needed will be radically smaller). Additionally, the reconstructed 3D model will then represent only the targeted object, and not an artificially created interior containing it. Does this make sense to you too?
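To illustrate the idea, here is a minimal sketch of the proposed filtering step, assuming we approximate the "surface defined by the camera positions" by the convex hull of the camera centers (an assumption on my part; the actual bounding surface could be defined differently). Points of the sparse cloud outside that hull are discarded:

```python
import numpy as np
from scipy.spatial import Delaunay


def filter_points_inside_camera_hull(points, camera_positions):
    """Keep only the points lying inside the convex hull of the cameras.

    points           : (N, 3) array of sparse-cloud points
    camera_positions : (M, 3) array of camera centers from alignment
    """
    # Delaunay triangulation of the camera centers; find_simplex()
    # returns -1 for query points outside the convex hull.
    hull = Delaunay(camera_positions)
    inside = hull.find_simplex(points) >= 0
    return points[inside]


# Tiny synthetic example: cameras placed on the corners of a cube
# around the object, one point inside and one outside the hull.
cams = np.array([[x, y, z]
                 for x in (-1.0, 1.0)
                 for y in (-1.0, 1.0)
                 for z in (-1.0, 1.0)])
pts = np.array([[0.0, 0.0, 0.0],    # inside the hull  -> kept
                [2.0, 0.0, 0.0]])   # outside the hull -> discarded
print(filter_points_inside_camera_hull(pts, cams))
```

Only the interior point survives the filter, so later dense-reconstruction stages would work on a much smaller bounding region. The function name and the cube layout here are purely illustrative, not part of any existing pipeline.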