Forum

Author Topic: Jpegs and Color Space?  (Read 5676 times)

mitchfx

  • Newbie
  • *
  • Posts: 33
Jpegs and Color Space?
« on: November 03, 2014, 06:02:48 PM »
Quick question about Agisoft Photoscan... Does anyone know if there is a preferred color space for 8-bit JPEGs to use for model creation? sRGB, Adobe RGB, ProPhoto RGB? I know 16-bit TIFF is always best, but I often use JPEG for non-critical work to save drive space, memory requirements and processing time. Everything I've done with JPEGs up until now has been exported from Lightroom as sRGB, but I'm curious whether there is any benefit in Photoscan to using a wider-gamut color space.

Thanks,
Mitch

Ant

  • Newbie
  • *
  • Posts: 20
Re: Jpegs and Color Space?
« Reply #1 on: June 10, 2018, 02:34:20 PM »
Hi Mitch,
Have you ever received an answer to your post regarding color space?
Regards,
Ant

babllz

  • Newbie
  • *
  • Posts: 4
Re: Jpegs and Color Space?
« Reply #2 on: September 07, 2018, 12:38:25 PM »
Hi Mitch,

I am interested in this topic as well.

I don't know (or simply don't understand) how Photoscan manages color space when merging the photos to generate the orthomosaic.

I shoot in a highly controlled environment and perform radiometric calibration of the camera before creating a Photoscan project (i.e. shoot a target, create a custom camera ICC profile, assign it to the photos, and convert them to the ProPhoto RGB/eciRGB_v2 color space).

Then I create a Photoscan project with 16-bit ProPhoto RGB TIFF images, but when I export the orthomosaic and open it with Adobe Photoshop I get the color management warning "The document [filename] does not have an embedded RGB profile" (IMAGE 1), so Photoshop does not know where the image comes from or how to display its colors correctly. (More on color management in Photoshop in the Color Management Guide by Arnaud Frich.)

Obviously, the most appropriate choice is "Don't color manage", but as you can see from the attached images, the colors of the orthomosaic (IMAGE 2) are totally different from the original, calibrated shot I used to create the model in Photoscan (IMAGE 3).

This is a huge problem for me because I need a faithful reproduction of colors as well as morphology (color management in 3D reconstruction is an important topic at the moment, especially in the Cultural Heritage sector). A second calibration of the orthomosaic (i.e. creating another ICC profile, assigning it and converting to a color space) introduces unpredictable errors, since Photoscan's blending algorithm is not documented: I don't know whether it converts to another color space (ProPhoto RGB > sRGB?) and, if so, which engine and rendering intent it uses.
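
One stopgap I am considering (just a minimal sketch, assuming Photoscan only drops the ICC tag and leaves the pixel values untouched, which I have not verified) is to re-embed the original working-space profile instead of re-profiling, e.g. with exiftool driven from Python. The file and profile names are placeholders:
Code: [Select]
# Re-attach the original working-space profile to the untagged orthomosaic.
# exiftool only rewrites metadata, so no pixel values are touched (it keeps a
# "_original" backup of the file by default). Paths below are placeholders.
import subprocess

subprocess.run(
    ["exiftool", "-ICC_Profile<=ProPhotoRGB.icm", "orthomosaic.tif"],
    check=True,
)
Of course this only makes Photoshop interpret the numbers with the intended profile again; it cannot undo any conversion Photoscan may have applied internally.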

As I saw from previous posts in this forum (like this one), this is an old issue that both the community and technical support have ignored... And consider (as a hypothesis) the number of scientific analyses (e.g. multispectral) that may have been biased by poor color management.

So, here is my question (which is really a request) to technical support:
Would you consider fixing this issue by preserving the original images' color space, bit depth and, in general, their radiometric quality when generating the orthophoto?

Thank you!

nickmac

  • Newbie
  • *
  • Posts: 7
Re: Jpegs and Color Space?
« Reply #3 on: September 07, 2018, 01:21:03 PM »
Hi Camilla,

Have you considered converting the images to 32-bit float linear images (making them essentially colour-space independent), running them through Photoscan, exporting 32-bit EXR files and then converting them back to your preferred colour space at the end?

I am using 32-bit EXR files successfully with Photoscan, and the imported and exported images match colour-wise.



Nick.

babllz

  • Newbie
  • *
  • Posts: 4
Re: Jpegs and Color Space?
« Reply #4 on: September 17, 2018, 01:52:01 PM »
Hi nick, thank you for the quick reply.

I have actually never considered using 32-bit floating point images, and after some brief research on the web I found an analogous post about non-colour-managed software. Assuming that Photoscan follows the same rules, what you say raises some doubts for me, perhaps because I can read it as two different workflows:

OPTION 1
  • Shoot images in RAW format using a professional camera (let's say with a Nikon)
  • Develop RAW file to 32-bit floating point linear image (let's say OpenEXR or TIFF format)
  • Run the dataset on Photoscan
  • Export the resulting orthophoto in EXR format
  • Color manage with an editor (e.g. Photoshop)

OPTION 2
  • Shoot images in RAW format
  • Develop RAW file to 16-bit linear image (TIFF format)
  • Convert 16-bit TIFF to 32-bit EXR
  • Run dataset on Photoscan
  • Export the resulting orthophoto in EXR format
  • Color manage with an editor

Then my questions are:
  • OPTION 1: How do I develop RAW files into 32-bit floating point linear images? In other words, which raw development engine can I use to do so, given that none of the ones I use (Raw Therapee, Adobe Camera Raw, Capture One Pro, dcraw) seems to support that output format and bit depth? Do you use any specific plugin?
  • OPTION 2: Which software should I use to convert 16-bit TIFF to 32-bit EXR? Is there any loss or alteration of the data in doing so?
  • Does Photoscan handle linear images well (since these are usually darker than gamma-corrected ones)? Is there any drop in performance, e.g. fewer features detected?

Could you describe your workflow and software? Perhaps I totally misunderstood your answer; if so, please correct me!

Thank you,

--Cam

nickmac

  • Newbie
  • *
  • Posts: 7
Re: Jpegs and Color Space?
« Reply #5 on: September 19, 2018, 07:26:59 PM »
Hi Camilla,

So, my workflow is as follows:

1. Shoot multiple-exposure (bracketed) images in RAW, 3 exposures.
2. Use Nuke (made by The Foundry) to convert the photos to floating point EXR files (so floating point files, but without any proper high dynamic range yet) in the ACEScg colour space.
3. Use Nuke to batch convert the bracketed images into 32-bit linear HDR images (still in the ACEScg colour space).
4. Run the dataset in Photoscan.
5. Export the texture from Photoscan, and also process the images in Photoscan to undistort them (undistorting doesn't seem to change any colour information), keeping everything as EXRs.
6. Import the 3D model into Mari (also by The Foundry) and use projections within it to re-project the undistorted images onto the model, giving the freedom to paint directly onto the model and fix any problems seen in the Photoscan texture.

Ideally I would colour calibrate the images before putting them into Photoscan, but so far skipping that hasn't caused any issues: the compositors are able to colour correct the final output and match it to the film plate, and there is so much range in the EXRs that there is a lot of flexibility when making colour adjustments.

The processing we do in Nuke was custom written here for our pipeline, but it is possible to do the same in Photoshop. It may not handle the ACEScg colour space part; it will simply make your 32-bit images linear and display them in the sRGB colour space, but I don't think that would actually cause any issues.


So, to specifically try and answer your questions:

1) If I was doing this at home without access to our custom tools I would use Adobe Camera Raw to convert the RAW files to 16-bit TIFF (using ProPhoto for maximum colour information), then convert those to 32-bit files within Photoshop by simply changing the bit depth, and save them as EXR files. You could shoot bracketed exposures to get even more information into your textures and give yourself more flexibility later. Then you could use Merge to HDR Pro in Photoshop to create 32-bit linear HDR images; I would simply feed the 16-bit TIFF files into Merge to HDR Pro and export the result as EXRs. (We don't use Photoshop for this partly because it doesn't have an option to batch the HDR merge.)
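
If you do want to batch it without Photoshop, here is roughly what the merge boils down to, purely as an illustration (this is not our Nuke tool; it assumes OpenImageIO's Python bindings plus numpy, the file names and shutter times are made up, and the brackets are assumed to already be developed to scene-linear images):
Code: [Select]
# Sketch of merging already-linear brackets into one HDR image and writing a
# half-float EXR. Assumes OpenImageIO's Python bindings and numpy.
import numpy as np
import OpenImageIO as oiio

def read_linear(path):
    """Read an image as a float32 numpy array of shape (height, width, channels)."""
    return oiio.ImageBuf(path).get_pixels(oiio.FLOAT)

def merge_linear_brackets(paths, shutter_times):
    """Scale each bracket by 1/shutter time and average, down-weighting pixels
    that are nearly black or nearly clipped in that bracket."""
    acc = None
    weights = None
    for path, t in zip(paths, shutter_times):
        img = read_linear(path)
        w = np.clip(1.0 - np.abs(img - 0.5) * 2.0, 0.01, 1.0)  # trust mid-tones most
        radiance = img / t                                      # relative scene radiance
        acc = radiance * w if acc is None else acc + radiance * w
        weights = w if weights is None else weights + w
    return (acc / weights).astype(np.float32)

def write_exr(path, pixels):
    height, width, channels = pixels.shape
    spec = oiio.ImageSpec(width, height, channels, "half")      # 16-bit half float
    out = oiio.ImageOutput.create(path)
    out.open(path, spec)
    out.write_image(pixels)
    out.close()

hdr = merge_linear_brackets(
    ["shot_under.exr", "shot_mid.exr", "shot_over.exr"],        # placeholder file names
    [1 / 200, 1 / 50, 1 / 12.5],                                # matching shutter times
)
write_exr("shot_merged.exr", hdr)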

2) Simply convert your images in Photoshop. If you go up in bit-depth then there is no loss of information.
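
If you would rather script that step than click through Photoshop, a minimal batch sketch with OpenImageIO (the folder name is a placeholder; note this only changes the container and bit depth, it does not linearise gamma-encoded TIFFs):
Code: [Select]
# Rewrite every 16-bit TIFF in a folder as a half-float EXR. Going up in bit
# depth loses nothing; pixel values and gamma are passed through unchanged.
import glob
import os
import OpenImageIO as oiio

for tif_path in glob.glob("develops/*.tif"):        # placeholder input folder
    buf = oiio.ImageBuf(tif_path)
    buf.set_write_format("half")                    # store as 16-bit half float
    buf.write(os.path.splitext(tif_path)[0] + ".exr")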

3) I have had huge success working with floating point linear images in Photoscan; it finds what looks like the exact same number of points (I haven't checked scientifically, but the results I get appear the same to me).


Additionally, because you are using linear images, you should find that if you shot a Macbeth chart when taking the photos you will always be able to colour correct the final results: there is so much data within the linear files that you have the latitude to pull the colours around and make sure they are accurate. I know of tools where you simply colour pick the patches on the Macbeth chart and the software automatically calibrates the image.
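
To make the "colour pick the chart and let the software calibrate" idea concrete, here is a rough sketch of what such tools do (the patch sampling and reference values are placeholders you would take from your own chart shot and from the chart's published data, converted to your working space):
Code: [Select]
# Fit a 3x3 matrix that maps the measured linear RGB of the chart patches onto
# their known reference values, then apply it to the whole (linear) image.
import numpy as np

def fit_colour_matrix(measured, reference):
    """measured, reference: (num_patches, 3) arrays of linear RGB patch means."""
    matrix, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return matrix                                   # shape (3, 3)

def apply_colour_matrix(image, matrix):
    """image: (height, width, 3) linear float array."""
    return image @ matrix

# measured  = per-patch mean RGB sampled from your shot of the chart
# reference = the chart's published values, converted to the same linear space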

The beauty of working with floating point linear images is that no matter how much you pull the colours around, the data is always there; it never gets lost. If you dropped the exposure of a linear image so that it all appears black and saved it out, the information would still all be there. Open it up, brighten the values, and nothing has been lost.
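
You can convince yourself of that latitude with a couple of lines of numpy (float32 here; half-float files carry less precision, so truly extreme pushes can eventually cost a little, but the headroom is still enormous):
Code: [Select]
# Push a linear float image ~10 stops down, bring it back, and the round trip
# is bit-exact because a power-of-two gain only shifts the float exponent.
import numpy as np

img = np.random.rand(256, 256, 3).astype(np.float32)   # stand-in linear image
dark = img * np.float32(2.0 ** -10)                     # looks essentially black
restored = dark * np.float32(2.0 ** 10)
print(np.array_equal(img, restored))                    # True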

It is interesting that you mention converting RAW to 32-bit linear images, as it is something I have been looking into just this last week. There is a topic about it on the acescentral.com website: https://acescentral.com/t/canon-cr2-stills-in-aces/206/3 (ACES is quickly becoming the industry standard for colour management within TV and film.) More info on it here: http://www.oscars.org/science-technology/sci-tech-projects/aces.



I hope that helps, let me know if I need to be clearer on any point.


Thanks,
Nick.
« Last Edit: September 20, 2018, 01:42:30 PM by nickmac »

jedfrechette

  • Full Member
  • ***
  • Posts: 130
  • Lidar Guys
    • www.lidarguys.com
Re: Jpegs and Color Space?
« Reply #6 on: September 20, 2018, 04:14:44 AM »
We follow essentially the same process as Nick. We do typically apply a base grade (estimated from a ColorChecker) to the linear EXRs as part of the raw development process so that they are white balanced and normalized relative to the expected ACES values. However, as he notes, that can also be done later without any data loss. There also probably isn't much reason to use 32-bit float EXRs rather than sticking with the much more efficient 16-bit half floats; half floats will still keep all of the dynamic range you can reasonably expect to record.
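
For anyone wondering what that base grade amounts to, a minimal sketch (the 0.18 mid-grey target and the patch sampling are placeholders, not our actual tool): per-channel gains that white balance the shot and drop the chart's neutral grey onto the expected scene-linear value.
Code: [Select]
# Estimate per-channel gains from the mean RGB of a neutral grey patch sampled
# from the same (linear) image, then apply them: white balance plus exposure
# normalisation in one step.
import numpy as np

def base_grade_from_grey(image, grey_patch_mean, target=0.18):
    """image: (height, width, 3) linear float array.
    grey_patch_mean: (3,) mean RGB of the grey patch in that image."""
    gains = target / np.asarray(grey_patch_mean, dtype=np.float32)
    return image * gains        # broadcasts per channel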

One note I'll add about raw development in Nuke: under the hood it uses dcraw, so you can also use dcraw directly if you like. Although we were originally using Nuke for raw development, we've recently switched to OpenImageIO as it offers some benefits in terms of automation.
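
As a rough illustration of that OpenImageIO route (the file name is a placeholder, and the "raw:*" configuration hints vary between OIIO versions, so check the docs for your build):
Code: [Select]
# Develop a raw file with OpenImageIO's libraw-backed reader and write a
# half-float EXR. The "raw:*" hints steer the demosaic; exact names and values
# differ between OIIO versions, so treat these as examples.
import OpenImageIO as oiio

config = oiio.ImageSpec()
config.attribute("raw:ColorSpace", "linear")    # ask for scene-linear output
config.attribute("raw:auto_bright", 0)          # no automatic exposure tweak

buf = oiio.ImageBuf("IMG_0001.CR2", 0, 0, config)   # placeholder raw file
buf.set_write_format("half")
buf.write("IMG_0001.exr")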

The fact that Agisoft doesn't mess up carefully constructed color pipelines is a big selling point for me. The only other thing I wish it had is support for OpenColorIO, so that the in-app image color matched what we see in apps like Mari and Nuke.
Jed

nickmac

  • Newbie
  • *
  • Posts: 7
Re: Jpegs and Color Space?
« Reply #7 on: September 20, 2018, 12:28:40 PM »
Hi Jed,

Thanks for your input! I am reassured that you follow a fairly similar process. We have only relatively recently added Photoscan to our arsenal so I have been quite involved with exploring the best way of doing things.

I was a bit inaccurate in my description, because we also use 16-bit half float rather than 32-bit float. It starts getting a bit confusing when there is both 16-bit half float and 16-bit short, so I didn't want to confuse anyone unfamiliar with the differences between all the bit depths. As you are probably aware, Photoshop doesn't properly differentiate between 32-bit float and 16-bit half; it just has one floating point mode which shows as 32-bit.
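
For anyone following along, numpy gives a quick feel for the difference between those depths:
Code: [Select]
# The three depths being juggled: 16-bit integer ("short"), 16-bit half float
# and 32-bit float.
import numpy as np

print(np.iinfo(np.uint16))    # 65536 evenly spaced integer levels
print(np.finfo(np.float16))   # half float: ~3 significant digits, max ~65504
print(np.finfo(np.float32))   # full float: ~7 significant digits, max ~3.4e38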

I am interested that you are now using OpenImageIO. We run a Python script that calls Nuke to convert our RAW images, so everything is done without user intervention. Would you mind elaborating on what other benefits you get from it in terms of automation?


Thanks,
Nick.

jedfrechette

  • Full Member
  • ***
  • Posts: 130
  • Lidar Guys
    • www.lidarguys.com
Re: Jpegs and Color Space?
« Reply #8 on: September 21, 2018, 01:08:52 AM »
Quote
Would you mind elaborating on what other benefits you get in the automation of it?
OIIO is easier to deploy than Nuke. Not all of our machines have Nuke on them and in some cases we've wanted to build simple standalone image processing tools that can be deployed to machines with no prerequisites. OIIO is a fantastic Swiss army knife for that sort of thing. Of course, not needing to tie up a Nuke license is also a benefit that is not to be overlooked. :-) From a functional perspective, anything you can do in OIIO can be done in Nuke so I don't think there's any advantage there. Certainly being able to prototype scripts visually in Nuke is much more convenient.

I try to avoid Photoshop as much as I can, so I won't argue there. This sort of stuff does tend to get convoluted pretty quickly. As usual, XKCD sums it up nicely: https://xkcd.com/1882/
Jed

nickmac

  • Newbie
  • *
  • Posts: 7
Re: Jpegs and Color Space?
« Reply #9 on: September 21, 2018, 11:25:24 AM »
Quote
As usual, XKCD sums it up nicely: https://xkcd.com/1882/

Hahaha I hadn't seen that one before, brilliant!

babllz

  • Newbie
  • *
  • Posts: 4
Re: Jpegs and Color Space?
« Reply #10 on: September 21, 2018, 11:42:50 AM »
Hahahahah LOL

_________________________________

I was totally unaware of the difference between 32-bit float and 16-bit half bit depths; that's a good topic to study.

I am trying to exclude Photoshop from my workflow as well; I don't like losing control to its fancy hipster algorithms, so every action must be weighed carefully. OCIO looks like a super powerful tool, I'll give it a try! Thank you Jed for sharing!

To answer Nick's considerations
Quote
because you are using linear images, you should find that if you shot a Macbeth chart when taking the photos you will always be able to colour correct the final results: there is so much data within the linear files that you have the latitude to pull the colours around and make sure they are accurate. I know of tools where you simply colour pick the patches on the Macbeth chart and the software automatically calibrates the image
We do actually shoot a set of targets and include them in the models; among them is a ColorChecker SG, which we use to create a custom camera ICC profile with ArgyllCMS and then apply in Photoshop. The next step is to extend the workflow with the conversion to 32-bit float EXR you suggested and run the dataset through Photoscan.
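
In case it is useful to anyone, that profiling step can also be scripted; here is a sketch of the two ArgyllCMS calls driven from Python (the chart recognition .cht ships with Argyll, the reference .cie comes with the chart, and all file names are placeholders):
Code: [Select]
# 1) scanin locates the ColorChecker SG patches in the chart shot and writes a
#    .ti3 of measured values; 2) colprof builds the camera input profile from it.
import subprocess

subprocess.run(
    ["scanin", "-v", "chart_shot.tif", "ColorCheckerSG.cht", "ColorCheckerSG_ref.cie"],
    check=True,
)   # -> chart_shot.ti3
subprocess.run(
    ["colprof", "-v", "-D", "Camera input profile", "chart_shot"],
    check=True,
)   # -> chart_shot.icm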

Thank you guys for helping!

--Cam