Hi Camilla,
So, my workflow is as follows:
1. Shoot multiple-exposure (bracketed) images in RAW, 3 exposures per bracket.
2. Use Nuke (made by The Foundry) to convert the photos to floating-point EXR files in the ACEScg colour space (floating point at this stage, but not yet true high dynamic range). There's a rough sketch of how this could be scripted just after this list.
3. Use Nuke to batch-convert the bracketed images into 32-bit linear HDR images (still in the ACEScg colour space).
4. Run the dataset through Photoscan.
5. Export the texture from Photoscan, and also process the images in Photoscan to undistort them (undistorting doesn't seem to change any colour information). Everything stays as EXRs throughout.
6. Import the 3D model into Mari (also made by The Foundry) and use projections within it to re-project the undistorted images onto the model, which gives you the freedom to paint directly onto the model and fix any problems you see in the Photoscan texture.
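For reference, here is a minimal sketch of how the step 2 conversion could be scripted with Nuke's Python API. The paths are placeholders, the exact colourspace name depends on which OCIO config your session has loaded, and I've assumed the raws have already been debayered to TIFF (the raw decoding is the custom part of our tool):

    # Minimal sketch: batch-convert stills to 32-bit ACEScg EXRs in Nuke.
    # Paths are placeholders; the colourspace name depends on your OCIO config.
    import glob
    import nuke

    for path in glob.glob("/shoot/tiff/*.tif"):
        read = nuke.nodes.Read(file=path)
        read["colorspace"].setValue("ACES - ACEScg")  # name varies per OCIO config

        write = nuke.nodes.Write(file=path.replace(".tif", ".exr"))
        write["file_type"].setValue("exr")
        write["datatype"].setValue("32 bit float")    # full float rather than half
        write.setInput(0, read)

        nuke.execute(write, 1, 1)                     # render this one frame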
Ideally I would colour calibrate the images before putting them into Photoscan, but so far skipping that hasn't caused any issues: the compositors are able to colour correct the final output and match it to the film plate, and there is so much range in the EXRs that there's a lot of flexibility when making colour adjustments.
The processing we do in Nuke was custom-written here for our pipeline, but it is possible to do the same in Photoshop. Photoshop may not handle the ACEScg colour space part; it will simply make your 32-bit images linear and display them in the sRGB colour space, but I don't think that would actually cause any issues.
So, to specifically try and answer your questions:
1) If I was doing this at home without access to our custom tools, I would use Adobe Camera Raw to convert RAW to 16-bit TIFF (using ProPhoto RGB for maximum colour information), then convert those to 32-bit files within Photoshop by simply changing the bit depth, and save them as EXR files. You could shoot bracketed exposures to get even more information into your textures and give yourself more flexibility later. Then you could use Merge to HDR Pro in Photoshop to create 32-bit linear HDR images; I would simply feed the 16-bit TIFFs into Merge to HDR Pro and export as EXRs. (We don't use Photoshop for this partly because it doesn't have an option to batch Merge to HDR Pro.)
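If the lack of batching in Merge to HDR Pro ever becomes a problem, the merge itself can be scripted. As a rough sketch (this is not our pipeline tool), OpenCV ships a Debevec merge that does the same basic job; the file names and shutter speeds below are placeholders for your own bracket:

    # Rough sketch: merge a 3-exposure bracket into a linear HDR EXR with OpenCV.
    import os
    os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # some builds need this for EXR output

    import cv2
    import numpy as np

    files = ["under.tif", "mid.tif", "over.tif"]             # placeholder bracket
    times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # shutter speeds in seconds

    images = [cv2.imread(f) for f in files]               # MergeDebevec expects 8-bit input
    hdr = cv2.createMergeDebevec().process(images, times) # 32-bit float, linear

    cv2.imwrite("merged.exr", hdr)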
2) Simply convert your images in Photoshop. If you go up in bit depth, there is no loss of information.
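To illustrate why (just numpy standing in for Photoshop here): every 16-bit value maps to a distinct float value, so the conversion round-trips exactly.

    # Going up in bit depth loses nothing: the round trip is exact.
    import numpy as np

    img16 = np.random.randint(0, 65536, size=(4, 4), dtype=np.uint16)
    img32 = img16.astype(np.float32) / 65535.0  # normalise to a 0..1 float image

    restored = np.round(img32 * 65535.0).astype(np.uint16)
    assert np.array_equal(img16, restored)      # every original value survives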
3) I have had huge success working with floating-point linear images in Photoscan; it appears to find the same number of points (I haven't checked scientifically, but the results look identical to me).
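If you end up running a lot of datasets, Photoscan Pro also has a Python API for batching the alignment. A very rough sketch; the module and function names have shifted between versions (it's being renamed to Metashape), so check them against your installed release:

    # Rough sketch of a scripted Photoscan alignment; names vary between versions.
    import glob
    import PhotoScan

    doc = PhotoScan.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(glob.glob("/shoot/exr/*.exr"))

    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)  # feature detection + matching
    chunk.alignCameras()                                # solve the cameras

    doc.save("/shoot/scan.psz")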
Additionally, because you are using linear images, if you shot a Macbeth chart when taking the photos you should always be able to colour correct the final results: there is so much data within the linear files that you have the latitude to pull the colours around and make sure they are accurate. There are tools where you simply colour pick the patches on the Macbeth chart and the software automatically calibrates the image.
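Under the hood that kind of auto-calibration is often something like a least-squares fit. A toy sketch, assuming you have already picked the 24 patch colours out of your linear image and have the chart's published reference values to hand (both are stand-in arrays below):

    # Toy sketch: fit a 3x3 colour matrix from Macbeth patches by least squares.
    import numpy as np

    picked = np.random.rand(24, 3)     # stand-in: patch RGBs picked from the image
    reference = np.random.rand(24, 3)  # stand-in: the chart's linear reference RGBs

    # Solve picked @ M ~= reference in the least-squares sense.
    M, _, _, _ = np.linalg.lstsq(picked, reference, rcond=None)

    def calibrate(image):
        """Apply the fitted matrix to an (H, W, 3) linear image."""
        return (image.reshape(-1, 3) @ M).reshape(image.shape)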
The beauty of working with floating-point linear images is that no matter how much you pull the colours around, the data is always there; it never gets lost. If you dropped the exposure of a linear image so it appeared completely black and saved it out, the information would still all be there. Open it up, brighten the values, and nothing has been lost.
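You can demonstrate that in a few lines of numpy (the same holds after a save, so long as the file stays in a floating-point format like EXR):

    # Crush a linear image by 10 stops, bring it back, and nothing is lost.
    import numpy as np

    linear = np.random.rand(4, 4, 3).astype(np.float32)

    darkened = linear * 2.0 ** -10    # drop 10 stops; this would display as black
    recovered = darkened * 2.0 ** 10  # push it straight back up

    assert np.allclose(linear, recovered)  # the data was always there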
It is interesting that you mention converting RAW to 32-bit linear images, as it is something I have been looking into just this last week. There is a topic about it on the acescentral.com website:
https://acescentral.com/t/canon-cr2-stills-in-aces/206/3 (ACES is quickly becoming the industry standard for colour management within TV and film.) More info on it here:
http://www.oscars.org/science-technology/sci-tech-projects/aces.
I hope that helps, let me know if I need to be clearer on any point.
Thanks,
Nick.