The way I understand it: if you imagine two cameras looking straight ahead with a given distance between them, that setup gives very good XY accuracy but not-so-good Z accuracy, whereas if the cameras are convergent you get better Z accuracy at the expense of XY accuracy. The ratio of the XY accuracy to the Z accuracy is the reconstruction uncertainty, so if the XY accuracy equals the Z accuracy you get a reconstruction uncertainty of 1, which happens when the cameras are somewhat convergent. As the cameras get closer together while keeping the same angle to each other, it's effectively the same as becoming 'more parallel': the area of overlap shrinks, and what's left is the portion of each field of view that still looks straight ahead. So a smaller baseline is, in a sense, equivalent to being more parallel: XY accuracy increases, Z accuracy decreases, the ratio between them goes up, and therefore so does the reconstruction uncertainty. In other words, it's a combination of position and angle.
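To make that baseline intuition concrete, here's a rough 2D toy model (my own sketch, not whatever formula the software actually uses, and `triangulate`/`uncertainty` are names I made up): triangulate a point from two cameras in the X–Z plane, nudge each bearing measurement by a small amount, and compare the resulting spread in Z to the spread in X.

```python
import numpy as np

def triangulate(c1, c2, a1, a2):
    """Intersect a ray from c1 at bearing a1 with a ray from c2 at bearing a2.

    Bearings are measured from the +Z axis, so a ray direction is
    (sin a, cos a). Solves c1 + t1*d1 = c2 + t2*d2 for t1, t2.
    """
    d1 = np.array([np.sin(a1), np.cos(a1)])
    d2 = np.array([np.sin(a2), np.cos(a2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, c2 - c1)
    return c1 + t[0] * d1

def uncertainty(baseline, depth, noise=1e-4):
    """Ratio of Z error spread to X error spread for a point straight ahead.

    Cameras sit at (-baseline/2, 0) and (+baseline/2, 0), both aimed at the
    point (0, depth). Each bearing is perturbed by +/- `noise` radians and
    the point is re-triangulated for every sign combination.
    """
    c1 = np.array([-baseline / 2, 0.0])
    c2 = np.array([+baseline / 2, 0.0])
    p = np.array([0.0, depth])
    a1 = np.arctan2(p[0] - c1[0], p[1] - c1[1])
    a2 = np.arctan2(p[0] - c2[0], p[1] - c2[1])
    pts = np.array([triangulate(c1, c2, a1 + s1 * noise, a2 + s2 * noise)
                    for s1 in (-1, 1) for s2 in (-1, 1)])
    err = pts - p
    dx = err[:, 0].max() - err[:, 0].min()
    dz = err[:, 1].max() - err[:, 1].min()
    return dz / dx  # big ratio = Z much worse than X
```

For a point 10 units away, a baseline of 1.0 gives a Z/X error ratio of about 20, and shrinking the baseline to 0.1 pushes it to about 200 (it works out to roughly `2 * depth / baseline` for this symmetric setup). So a smaller baseline, which behaves like a 'more parallel' pair, blows up the Z uncertainty relative to XY, matching the intuition above.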
Not to really answer your question very well (I'd need to work it out and test it), but I think you see a kind of radial gradient of reconstruction uncertainty across the points from an image pair, with points near the middle being more 'uncertain' than points at the edges. I may have that the wrong way round, but I'm sure I've seen that pattern before as I slide the slider.
The most uncertain thing is how confident I am about any of this, but I don't think I'm too far off.
I'll try to test it all out and think about it more thoroughly at some point before too long!