I do some very similar work to what you are doing, and I use a variety of workflows similar to those discussed above for difficult objects that won't behave, but I actually find that the following works more often than not with good photo sets:
1) import all photos into a single chunk
2) align
3) build dense cloud
4) manually chop out the unhelpful bits.
5) process as normal with whatever degree of optimisation you feel like that day
Step 4 benefits from propping the object fractionally up off the turntable with little bits of foam etc., tucked well underneath the base of the object so the camera can't see them.
When the model isn't quite right, or there is minor misalignment, I sometimes find that it is still close enough to use for generating masks from the object before starting again.
If I remember right, generate masks from background is designed for situations where (for example) you take a photo of the turntable without the object on it and use that as the input. It takes the colour difference between the "background image" and each actual image, masking out the pixels that are similar. With a turntable it will usually get a little confused because the turntable is moving, but it will do a good job on the wall behind and an OK job on the turntable itself. Good colour contrast between object and background is needed, otherwise you risk bits of the object vanishing.
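Conceptually, that colour-difference masking works something like the sketch below. This is my own illustration in numpy, not the tool's actual implementation; the function name and the exact meaning of the tolerance are assumptions.

```python
import numpy as np

def background_mask(photo, background, tolerance=30.0):
    """Keep pixels that differ from the background image by more than
    `tolerance` in RGB distance; mask out the rest.

    photo, background: HxWx3 uint8 arrays of identical size.
    Returns a boolean HxW array (True = object, False = background).
    """
    diff = photo.astype(np.float64) - background.astype(np.float64)
    distance = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel colour distance
    return distance > tolerance

# Toy example: a grey "empty turntable" shot with a red object patch
# in the real photo.
background = np.full((4, 4, 3), 128, dtype=np.uint8)
photo = background.copy()
photo[1:3, 1:3] = [200, 30, 30]  # the object
mask = background_mask(photo, background)
```

Anywhere the turntable has rotated between the background shot and the photo, the per-pixel difference is large, which is why the real tool tends to misfire on the turntable surface while doing well on the static wall behind.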
I believe it's asking for a folder containing the background image file, then separately a filename within that folder -- this is so that other workflows, where the background image is different for every image, can be managed.
A while back we used this to create a primitive "how close to white is this pixel" mask by feeding in a pure white image at the same resolution as the photos and tweaking the tolerance.
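That pure-white trick amounts to masking by distance from white. A minimal sketch of the same idea (again my own illustration, with an assumed tolerance value, not the actual tool):

```python
import numpy as np

def white_background_mask(photo, tolerance=60.0):
    """Mask out pixels close to pure white; keep everything else.

    Equivalent to running background-difference masking against an
    all-white image of the same resolution as the photo.
    Returns a boolean HxW array (True = keep, False = masked).
    """
    white = np.full(photo.shape, 255, dtype=np.float64)
    distance = np.sqrt(((photo.astype(np.float64) - white) ** 2).sum(axis=-1))
    return distance > tolerance

# A near-white frame with one dark "object" pixel in the middle.
photo = np.full((3, 3, 3), 250, dtype=np.uint8)
photo[1, 1] = [40, 40, 40]
mask = white_background_mask(photo)
```

Tweaking `tolerance` then controls how far from pure white a pixel can drift (shadows, off-white paper) before it survives the mask.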