Chances are that it tried at some point to align these 30 images, individually or in small subsets, with the rest, but failed.
Bear in mind that feature detection is not a deterministic process, and that the quality setting also implicitly affects the acceptance thresholds for feature correspondence.
Once the sparse point cloud exists, it is probably easier for the software to confirm that some of the detected points are indeed reliable matches, which is why half of the images were successfully realigned.
For the rest of the images, which were not aligned, an incremental alignment is required before the dense matching.
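The software isn't named in this thread, but if it happens to be Agisoft Metashape and you'd rather script the incremental step than click through the GUI, a rough sketch could look like the following (Metashape 1.6+ Python API assumed; the project path and parameter values are purely illustrative):

```python
import Metashape

doc = Metashape.Document()
doc.open("project.psx")  # hypothetical project path
chunk = doc.chunk

# Cameras whose transform is None were not aligned in the previous run.
unaligned = [cam for cam in chunk.cameras if cam.transform is None]
print("Unaligned cameras:", len(unaligned))

# Match features again at a higher quality (downscale=1 is full resolution);
# as noted above, the quality setting implicitly changes which
# correspondences pass the acceptance thresholds.
chunk.matchPhotos(downscale=1, keypoint_limit=40000, tiepoint_limit=4000)

# Align only the failed cameras, keeping the existing alignment intact.
chunk.alignCameras(cameras=unaligned, reset_alignment=False)
doc.save()
```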
Depending on the quality of the images within the overlapping areas, you might have to repeat this process until enough matchable features are detected.
In some cases you might even have to re-align some of the images that were already aligned successfully, in order to generate features that can also be matched in the problematic images, as in the sketch below.
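Continuing the same hedged Metashape sketch, forcing a previously aligned camera back through alignment is a matter of clearing its transform first:

```python
# Pick a few successfully aligned cameras that overlap the problem area
# (the labels here are purely hypothetical) ...
suspect = [cam for cam in chunk.cameras
           if cam.label in ("IMG_0101", "IMG_0102")]

# ... clear their transforms so they are treated as unaligned again ...
for cam in suspect:
    cam.transform = None

# ... and re-align them together with the problematic cameras.
chunk.alignCameras(cameras=suspect + unaligned, reset_alignment=False)
```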
In any case, you will have to run the dense matching and mesh generation again, because the alignment/optimisation only estimates the exterior orientation and does not update the depth maps.
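Again assuming Metashape, the final re-run of the dense stages might look like this (downscale and filtering values are only illustrative):

```python
# The alignment above only updates exterior orientations, so the depth
# maps and the mesh must be regenerated to reflect them.
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 interpolation=Metashape.EnabledInterpolation)
doc.save()
```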