Monday, 13 February 2012

Exposing images for digital cinematography

The discussion below represents an interesting archive of attitudes prevalent at the beginning of 2012 towards exposure and workflow for the electronic cinema image (specifically the Red Epic and Red One, though it branches out into digital cinematography generally). There are some rules here that I’ve always worked by, chief among them: judge exposure from the Raw itself and treat everything else as metadata that accompanies the image into post, and in so doing situate the exposure ‘correctly’ so that you get the most out of the ‘negative’ - but also break that rule when breaking it yields the most aesthetically important image. When I was working in high definition my maxim was: underexpose by at least half a stop. That idea is now defunct.
In reading this information, do note Florian Stadler’s comment which is both interesting and important:
“It is important to process the footage in LOG space (Redlog Film) in the fully tested and implemented workflow I mentioned. You will recapture the RAW image’s most extent of highlight information by setting the ISO to 320 and the Gamma to Redlog Film”.
In photochemical practice, unless you were ‘trying something out’, you always went for a ‘fat negative’, meaning you carried the most information into post so that the digital grade could deliver the most information and enable maximum manipulation of the image. (I began work with film at the tail end of the change from photochemical finishing into telecine, which was then itself to change into the digital intermediate.) Previously, before telecine, the medium had very little flexibility for manipulation via printer lights (having said that, all the wonderful film looks of around 80 years of cinema were derived from that ‘inflexibility’). If you were ‘trying something out’ you were testing a look, and testing is the primary methodology a DP uses.
This discussion does not cover putting an 80D orange filter over a native raw sensor that is balanced to daylight. I have mentioned that theory on the Cinematographers Mailing List and been disagreed with, and in other posts been heavily supported - so in all of this you must come to your own practical and aesthetic decisions, because being a DP is (partly) about being an artist, and artists make subjective decisions whereas scientists make objective ones. You have to choose which you’re going to be.

Note: As usual things got a little fractious so I've removed those particular posts and all personal contact details to protect everyone. Do let me know if anyone objects to what's here. Lastly, as always, Geoff Boyle puts a very apposite comment, this time at the very end.
CML-DIGITAL-RAW-LOG Digest for Friday, February 10, 2012.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Adam Wilt
Date: Thu, 9 Feb 2012 18:37:46 -0800
X-Message-Number: 1
Don't think "shoot at 800, then (using REDCINE-X) bring ISO down to 500". Think "pick an ISO rating that properly trades off highlight protection versus noise". Then "develop" the footage as needed using curves to hold highlight and shadows (or blow them out as you see fit), but probably not fiddling with ISO unless you like the look of 2/3-stop underexposure (grin).
For the purposes of this discussion, let's say that the camera has 12 stops of dynamic range between the level where highlights start clipping and the level where shadow noise becomes intolerable. That 12-stop number itself isn't important; I might say it should be higher, and Art Adams might say it should be lower, because we have different tolerances for noise. I'm just picking a number that keeps the mental math stone-simple. With the M-X sensor, not using HDRx, an ISO rating of 800 means that you'll have six stops above middle gray (e.g., above your incident meter's reading) before your highlights clip, and six stops below middle gray before your shadows get lost in nasty noise. So, if you meter for ISO 800, you'll get a balanced rendition: a tolerable degree of highlight protection, a tolerable (and equal) degree of shadow detail.
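Adam's "stone-simple" model can be written down directly. The sketch below (a toy model in Python: the 12-stop total and the 6/6 split at ISO 800 are taken from the text above; the function name is mine) shows how re-rating simply moves stops between highlight headroom and shadow latitude:

```python
import math

TOTAL_STOPS = 12        # assumed total usable dynamic range (per the text)
BALANCED_ISO = 800      # the rating at which the split is 6 over / 6 under

def headroom_split(iso):
    """Return (stops above middle gray, stops below middle gray) for a
    given ISO rating, under the fixed-dynamic-range model described above."""
    shift = math.log2(iso / BALANCED_ISO)   # +1 stop per doubling of ISO
    above = TOTAL_STOPS / 2 + shift         # highlight headroom grows with ISO
    below = TOTAL_STOPS / 2 - shift         # shadow latitude shrinks with ISO
    return above, below

print(headroom_split(800))   # (6.0, 6.0)
print(headroom_split(200))   # (4.0, 8.0)
print(headroom_split(3200))  # (8.0, 4.0)
```

The three printed cases match the ISO 800, ISO 200 and ISO 3200 scenarios discussed in this post.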
Now, take that ISO 800 footage into post. If you metered for ISO 800 and set the lens that way, then an ISO 800 rating will put middle gray pretty much where it should be (let's ignore for the moment any variations in where RED thinks middle gray should be and where you think middle gray should be, grin).
You don't like the noise at ISO 800? Set ISO to 500 in REDCINE-X. Yes, the noise is suppressed by 2/3 stop, but the image is darkened 2/3 stop as well. You now have exactly the same thing as if you had shot at ISO 500 and had purposefully underexposed 2/3 of a stop to protect highlights (the only way middle gray will fall where it should with an ISO rating of 500 in RCX is if you metered and exposed for ISO 500 in the camera).
You can counteract that underexposure with a custom curve, pulling the midtones back up. But, in essence, you're just undoing the ISO change as far as the midtones are concerned; you could just as easily leave ISO at 800, and use a custom curve to slightly crush the shadows in the ISO 800 "development" and get much the same result.
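The equivalence Adam describes - the post ISO setting acting as pure gain that a midtone-lifting curve then undoes - can be illustrated with a toy model (the linear-gain assumption and the function name are mine, not RED SDK behaviour):

```python
# Toy model: "developing" at a lower ISO than you shot at just darkens the
# fixed raw data by the ISO ratio; a curve that lifts midtones re-applies
# the inverse gain, leaving the midtones where they started.
def develop(linear_value, shot_iso=800, dev_iso=800):
    """Apply the post ISO setting as metadata gain on linear raw data."""
    return linear_value * (dev_iso / shot_iso)

mid_gray = 0.18
darker = develop(mid_gray, dev_iso=500)   # about 2/3 stop down: 0.18 -> 0.1125
restored = darker * (800 / 500)           # the midtone-lifting curve undoes it
print(darker, restored)
```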
The key thing to remember is that the camera has a fixed dynamic range; all you're doing by changing ISO ratings at the time of the shoot is trading off highlights vs noise (have a look at 
where I shoot both the M and M-X sensors at various ISO ratings and grade them "normally", without curves). If you need to protect more highlights, you'll have to stop down when shooting, and in post, once you've pulled your scene midtones back up where they belong, whether via ISO or FLUT or curves, you'll have more noise as a result.
You can shoot at a lower ISO: your images will be cleaner, but you'll give up highlight headroom. Shooting at ISO 200, letting in two more stops of light, means your scene will be two stops cleaner / less noisy, and your noise-limited shadow detail will be 8 stops down instead of 6--but highlights will clip two stops sooner, at only 4 stops over middle gray.
Or you can shoot at a higher ISO: At ISO 3200, you'll stop down two stops. You'll preserve two more stops of highlight headroom (8 stops over middle gray), but you'll have two stops more noise over the entire image, with only 4 stops of shadow detail before the noise becomes intolerable.
Which way makes sense? It entirely depends on the scene, what's worth protecting, and your noise tolerance.
I normally shoot exteriors on the RED M-X at ISO 800 because I like uncontrolled highlights to have detail and color, and I don't mind a bit of "grain" and some noise in the shadows. Art Adams prefers a cleaner image; he'll rate the camera at ISO 320 or 400, for a 1-1.3 stop advantage in image cleanliness; he'll sacrifice 1 or 1.3 stops of highlight headroom to get that reduction in noise. I'll do the same thing on a greenscreen stage with controlled lighting, where I don't have any excessively bright things in the image that need six stops of headroom; it's nice to have that added cleanliness for keying. 
It's all about context; there is no one correct answer. It's simply YOUR tradeoff between highlight protection and noise level. And, of course, with EPIC you have the opportunity to employ HDRx to hold those highlights, too... but that's a whole new topic.
Of course there is NO substitute for running tests yourself, instead of trusting what some goober like me says on the Internet. :-)
Is this trivial? Have I been living in the dark for years? Have you done anything similar to this, or maybe have something better?
In general, the secret for getting good pix out of the RED (at least as far as the tonal scale is concerned) is judicious use of the curves control. S-curving the raw exposure to gracefully handle highlights and gently roll off the shadows, while keeping decent contrast in the middle, is a Good Thing... bear in mind that the FLUT processing in the newer "color science" processing does some of this for you (unlike the early days when it was entirely up to you).
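As one hypothetical illustration of the S-curve idea (this smoothstep polynomial is my stand-in for a grading curve, not RED's FLUT or anyone's actual LUT):

```python
def s_curve(x):
    """Simple smoothstep S-curve on a 0-1 normalized signal: rolls off
    shadows and highlights (slope 0 at both ends) while steepening
    midtone contrast (slope 1.5 at x = 0.5)."""
    return x * x * (3.0 - 2.0 * x)

print(s_curve(0.05))   # shadows gently crushed
print(s_curve(0.5))    # middle gray unmoved
print(s_curve(0.95))   # highlights gracefully rolled off
```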
Trivial to say, perhaps, but the possibilities are endless...
Adam Wilt
technical services, Meets The Eye LLC, San Carlos CA
tech writer, provideocoalition.com, Mountain View CA
USA
----------------------------------------------------------------------
Subject: RED Workflow
From: Florian Stadler
Date: Thu, 9 Feb 2012 18:53:20 -0800
X-Message-Number: 2
What I tend to do is the following: 
Shoot/expose at 800 for day int/ext and 500/640 for night exterior/interior, making sure nothing falls into noise zone and nothing clips on the sensor (but I let clipping happen in the 800 LUT).
I then "develop" the RAW negative at the sensor-native ISO 320 in REDlog Film and use a LUT as a starting point in the grade (Arri provides a really good one, great on skin tones). This allows me to shoot the sensor at the optimal under/over sweet spot and retrieve all the information captured by the RAW sensor. 
Florian Stadler, DP LA
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Thu, 9 Feb 2012 20:43:10 -0800
X-Message-Number: 3
What Adam said. I couldn't have said it better. You don't get more dynamic range out of the camera by changing the ISO, you just reallocate the bits above and below 18% gray. Slower ISO gives you more room for shadow detail and less for highlights, but crushes noise; higher ISO gives you more noise but better highlights.
Tal, the trick you talk about trying is just the long way around. If you like the camera at 800, shoot it at 800; if you like it at 500, shoot at 500. Shooting at 800 and processing for 500 doesn't change a thing, it just makes the workflow more complicated. Fortunately nothing ever goes wrong in post.
You may find yourself on a beach or in a snow storm, at which point ISO 800 makes perfect sense because you don't have any shadow detail that will go noisy and you need lots of highlight retention. You may find yourself shooting a very dark night interior or exterior without highlights, at which point you might consider ISO 200 for rich, clean noise-free shadows.
It does make sense to pick one ISO and stick with it, but I think you also need to be a little flexible and rate the camera properly for the circumstances--especially if they are adverse.
I haven't shot anything with an Epic yet but based on what I've seen it might be the first camera I'm willing to rate as fast as 800 for normal use. I've been rating Red One MX's at 400 or 500, and I rate Alexa at a nice solid 400, but until I get my hands on an F65 the Epic seems to be the fastest I've played with so far.
I'm not a big fan of noise. I like clean shadows with lots of detail.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Thu, 9 Feb 2012 20:47:28 -0800
X-Message-Number: 4
This allows me to shoot the sensor at the optimal under/over sweet spot and retrieve all information captured by the RAW sensor. 
I must be missing something. The same amount of info is present no matter how the image is "processed" later: that is fixed during capture, and all you're doing after that is pushing bits around. Shooting at ISO 800 and "processing" at 320 just shows you a darker image, which you then apply a LUT to in order to make it look normal again. Why not process at 800 and apply your custom LUT to that?
Also--Arri provides a LUT for Red footage?
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: RED Workflow
From: Florian Stadler
Date: Thu, 9 Feb 2012 22:26:48 -0800
X-Message-Number: 5
I must be missing something. The same amount of info is present no matter how the image is "processed" later:
It is vital and absolutely matters how you process a RAW image before color correction, are you kidding?
You are missing the concept of a "digital negative" and regard the "LUTed positive" as all you captured. 
It is important to process the footage in LOG space (Redlog Film) in the fully tested and implemented workflow I mentioned. You will recapture the RAW image's most extent of highlight information by setting the ISO to 320 and the Gamma to Redlog Film. And yes, Arri publishes LUTs designed for their cameras to make the transform from LogC space to Rec709, and that LUT happens to be a pretty decent (not as turnkey as on Alexa LogC, of course) starting point after said exposure treatment and processing.
Florian Stadler, DP, LA
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Dan Hudgins
Date: Thu, 9 Feb 2012 23:22:03 -0800 (PST)
X-Message-Number: 6
Quote: [The same amount of info is present no matter how the image is "processed" later:]
The same amount of information is in the R3D file, but you cannot directly access that information without it going through the RED (tm) SDK code that all programs that process R3D into something else use (except REDROCKET (tm) that deviates from the processing somewhat due to a different processing used for speed, it seems).
Because there is no DNG conversion from the wavelet-encoded color planes of the sensor, you cannot get at the actual sensor data before white balance (I think they said there was some non-color-space option, but that probably also has some deviation from the linear, un-clipped, non-white-balanced sensor data - anyone know? [if it were truly raw, then no ISO or K adjustment would impact its export]).
So some ISO curve is applied, and some white balance clipping is applied, and those are based on assumptions of where the 18% midtone should be (46%, as RED has said) or, in the case of Cineon (tm), code 470/1023, which is not disputed because Kodak (tm) defined that value once and for all time.
But even with the Cineon (tm) curve being used, RED's SDK should also apply an ISO curve for soft clip of the 'super-white' values above the 90% white level of 685/1023; otherwise there would be no change when you adjust the ISO in REDCINE-X (tm), and green sensor clip would be set to code 1023/1023.
Because the sensor may have dynamic range greater than the original Cineon (tm) definition, values above 1023/1023 need to be clipped off as part of the ISO adjustment curve, or soft-clipped UNDER 1023/1023 like the shoulder of a film scan would be if the negative was pull-processed.
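For reference, the standard Cineon printing-density encoding behind the code values Dan quotes can be sketched as follows (the constants are the published Kodak ones: 90% white at code 685, 0.002 density per code value, 0.6 negative gamma; under them, 18% gray lands near the 470/1023 figure cited above):

```python
import math

def cineon_code(linear, white=0.9, ref_code=685,
                density_per_code=0.002, neg_gamma=0.6):
    """Map linear scene reflectance to a 10-bit Cineon printing-density
    code value, using the standard Kodak constants."""
    code = ref_code + (neg_gamma / density_per_code) * math.log10(linear / white)
    return round(code)

print(cineon_code(0.90))   # 685 (90% white by definition)
print(cineon_code(0.18))   # ~475, near the 470/1023 mid-gray value cited above
```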
It's probable that the ISO curves used in the RED SDK/programs do cause some loss of data for high values of the highlights. You can test that by having a white and a gray card, overexposing them at various stop values from +4 to +8, and seeing where you can no longer see a separation between the two in the processed data - for example, using a probe to measure the exact code values in a 48bpp full-range TIF file. With the softclip working right, all three colors would keep some separation up to the point that the green pixels clip, and that would be true NO MATTER WHAT ISO is selected in REDCINE-X (tm); you would just see a change in the magnitude of that separation.
If at 320 you see zero separation at +4 stops but 400 counts of separation at 3200, then you know that adjusting the ISO in processing does increase the highlight detail. If instead at 320 you see 10 counts of separation and 400 at 3200, then you know that the highlights retain some detail, just with more posterization after bit reduction to 10-bit and 8-bit delivery formats.
Because REDCINE-X outputs higher-bit formats, you may not see the tonal separation on an 8-bit or 10-bit monitor. In that case, another way to see it is to increase the contrast in the highlights or shadows after you make a 48bpp TIF file; the tonal separation will then be large enough to see on a monitor, along with however much banding you get from the tonal expansion.
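The probe measurement Dan describes might be sketched like this (the pixel tuples are invented probe readings from a hypothetical 16-bit-per-channel TIF export, purely for illustration):

```python
def card_separation(white_pixels, gray_pixels):
    """Per-channel difference between averaged white-card and gray-card
    probe samples; a separation near zero in a channel means that channel
    has clipped flat and highlight detail is gone there."""
    avg = lambda px, c: sum(p[c] for p in px) / len(px)
    return [avg(white_pixels, c) - avg(gray_pixels, c) for c in range(3)]

# Hypothetical readings: green has hit full scale (65535) on the white card
white = [(60000, 65535, 61000), (60010, 65535, 60990)]
gray = [(20000, 22000, 21000), (20020, 21980, 21010)]
print(card_separation(white, gray))
```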
Because the assumptions in the ISO and K correction are made BEFORE export to Cineon (tm) Log (film log) DPX files, there should be some utility in adjusting the processing ISO before export and then un-doing that to some degree. I would, though, caution against using 10bit Cineon (tm) files for such yo-yo-ing of the tonal values, as 10 bits has only one spare bit for grading adjustments of +/- 2x or 0.5x transfer-curve slope. If you are going to yo-yo the tones to compensate for the ISO curve used in REDCINE-X before export as Cineon (tm), you should export as 48bpp TIF or DPX, not 10bpc, or you may get histogram gaps from the LUT used to convert the log to Rec.709 PLUS any additional grading done to re-center midtone and apply curves.
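The histogram-gap caution is easy to demonstrate: re-scaling tones inside a 10-bit container leaves many code values unused, which shows up as gaps (the 1.5x gain here is an arbitrary example; a 16-bit intermediate would avoid the problem):

```python
# Push every 10-bit code value up by a 1.5x gain inside the same 10-bit
# container, clipping at 1023, and count how many distinct codes survive.
codes = range(1024)
pushed = {min(1023, round(c * 1.5)) for c in codes}
print(len(pushed))   # well under 1024; every missing code is a histogram gap
```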
If REDCINE-X allowed the import of user ISO curves and had control screens for exact adjustment of the clip points, then you could bypass the ISO values and white balance adjustments altogether and relate the linear sensor data directly to the DPX Cineon (tm) code values, which is how I'm doing things with DNG processing. If you make your own ISO curves by fitting the linear sensor data into the Cineon (tm) range, then you know the exact translation of sensor code value to DPX file code value without guesswork or arguing about what does what. In that way you can tailor the highlight detail and shadow noise to the subject matter in each shot, versus the exposure level on the sensor at the time of shooting. The assumption is that this is what RED has done -for you-, as I have noticed comments that such adjustments using native sensor balance are beyond the average camera user; but in place of knowing exactly what is going on with the data, you have to try to guess what has been done, never really knowing.
Dan Hudgins
San Francisco, CA USA
Developer of 'freeish' Software for Digital Cinema, NLE, CC, DI, MIX, de-Bayer (DNG), film scan and recorder, temporal noise reduction etc.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Craig Leffel
Date: Fri, 10 Feb 2012 01:30:26 -0600 (CST)
X-Message-Number: 7
Florian wrote; 
I must be missing something. The same amount of info is present no matter how the image is "processed" later: 
It is vital and absolutely matters how you process a RAW image before color correction, are you kidding? 
You are missing the concept of a "digital negative" and regard the "LUTed positive" as all you captured. 
________________ Snip _______________________ 
This whole conversation is what's wrong with most DPs' understanding of shooting and color correcting Red footage. 
There is NO reason anyone should be color correcting Red footage from a pre-processed or converted file format. 
Most of the REAL color correctors on the market - those designed for professional, all-day-every-day color correction - can color correct from the native Raw R3D file. 
This means the colorist has - 
1. The entire dynamic range of the sensor capture to work with. According to Red, that's 15 stops. In my experience, that's bullshit. 
However, having the entire dynamic range of the sensor and the capture at your disposal is important. 
2. The entire Metadata package of the Red SDK to work with. 
This means that I can strip out the ISO settings, the Kelvin settings, and anything else I need to do to reduce noise or recover detail, or bend the picture to fit 
expectations, consistency, quality of light or matching color balance to other shots from different periods on other cameras. 
3. The software is doing the debayer live and from the raw file itself, with the colorist able to make scene by scene decisions about the quality of the debayer. 
So, as many have said it doesn't really matter what you do on a Red. What's important is what your lighting ratios are in terms of Key to Fill 
and where you place your whites. As long as you don't clip the whites or blacks in the capture, a colorist using the raw file can slide the exposure scale 
up and down to fit the right spot on a PER SCENE basis. As mentioned earlier, if you expose toward the middle of the dynamic range you are giving yourself 
and the colorist lots of room to slide the entire dynamic range of the exposure up and down by a number of stops. As long as detail has been preserved in the capture 
there is no limit in terms of what can be done and manipulated. 
Given that when color correcting this way the colorist can; 
Change ISO 
Change white balance 
Change Kelvin 
Change Flut 
Change Colorspace output 
Change working colorspace 
Change exposure 
Change the quality of the Debayer 
Change curve 
Change exposure range 
All of this can happen scene by scene in a high grade color corrector. The Red is the only camera on the market where the colorist can recover the entire sensor 
data and all captured dynamic range from the Raw file. The Alexa is completely incapable of this at this time. The Alexa Raw file is currently severely limited. 
For those that enjoy working from a predefined colorspace and a predefined dynamic range, Alexa works ok. Arri still has not figured out how to make a workable Raw file, 
and why they bother processing an image at 1920x1080 when that's the same space we broadcast in or display in is beyond me. As a former colorist, I'm not happy at all 
with the ability to recover the sensor data from an Alexa shoot. It's predefined in a Log-C colorspace. That space has a beginning and an end, and a limited range.... unlike the R3D file. 
If you're confused as to what Color Correctors I'm referring to, here's a partial list of those that can use R3D files natively and in real time; 
Quantel 
Mistika 
Baselight 
Film Master 
Scratch 
I'm not a Red fanboy. I've color corrected for 23 years on as many different cameras, systems and file types as you care to name. There are plenty of things I don't like about Red. 
However, if we're talking about process, it's the only camera on the market doing it even close to right. Arri is still trying to convince people that Prores is fine and that all we need to do 
is shoot and edit. Nothing could be farther from the truth for high end commercials, TV and Features. Sure, I'll bet some of you will say your work doesn't need to be color corrected. 
If you can find yourself an honest colorist, they'll disagree with you. I know many of you swear by Alexa. I wish I could show each and every one of you what you are missing from your sensor data, 
and what you actually captured - and what could be done with it - if I had the ability to show it to you. Your data and your work are being lost on the Alexa. 
The reality of this discussion is to think of exposure as a big ball of data in the middle of a predefined scale. You can place that ball anywhere you want, the scale remains the same, 
and the artifacts on either end of the scale remain the same. 
Best to all - 
Craig Leffel 
Former Senior Colorist 
Optimus 
Chicago / Santa Monica 
-- 
Craig Leffel 
Director of Production 
One @ Optimus 
161 East Grand 
Chicago, IL 60611 
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Nick Shaw
Date: Fri, 10 Feb 2012 07:55:24 +0000
X-Message-Number: 9
On 10 Feb 2012, at 06:26, Florian Stadler wrote:
And yes, Arri publishes LUT's designed for their cameras to make the transform from LogC space to Rec709 and that LUT happens to be a pretty decent (not as turnkey as an Alexa LogC of course) starting point after said exposure treatment and processing.
I know a lot of people use ALEXA LUTs for REDlogFilm media, but don't forget the standard LogC to Rec709 LUTs from the ARRI web app include a colour matrix which is specifically designed to convert ALEXA Wide Gamut into Rec.709 colour space.  Since footage from a RED camera is not in this colour space to start with, the matrix is not really appropriate.  I would suggest a 1D LUT myself or a 3D LUT with colour space conversion switched off.
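Nick's distinction can be sketched as follows (toy functions, not ARRI's actual LUT file format): a 1D LUT is only a per-channel tone curve, while the 3x3 matrix step is the colour-space conversion - the part that assumes ALEXA Wide Gamut primaries and should be switched off for RED source:

```python
def apply_1d_lut(rgb, lut):
    """Index each channel (0-1) through the same tone curve: no change of
    primaries, so safe regardless of the source colour space."""
    n = len(lut) - 1
    return tuple(lut[min(n, max(0, round(c * n)))] for c in rgb)

def apply_matrix(rgb, m):
    """The 3x3 colour-space conversion step - this is what the standard
    LogC->Rec709 LUT bakes in, and what to omit for non-AWG footage."""
    return tuple(sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3))

identity_lut = [i / 255 for i in range(256)]
print(apply_1d_lut((0.18, 0.5, 0.9), identity_lut))
```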
I would also say I do not go along with the necessity of developing footage shot at ISO 800 at ISO 320.  There was an argument for this with older RED gamma curves, but with REDlogFilm all highlight detail in a clip shot at ISO 800 is preserved when developed at ISO 800.
Nick Shaw
Workflow Consultant
Antler Post-production Services
London, UK
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Dan Hudgins
Date: Fri, 10 Feb 2012 00:08:50 -0800 (PST)
X-Message-Number: 10
Quote: [3. The software is doing the debayer live and from the raw file itself,
with the colorist able to make scene by scene decisions about the
quality of the debayer.]
I would agree that doing the final render from the original R3D is better than going through the ISO and K corrections then grading after for the most part, that is how my system works with DNG frames, from sensor data direct to final render that way the various filters are 'centered' right on the final grade and not off center where they may do more damage to the perceived results.
So I guess there is no DNG converter for ALEXA?  There is for SI-2K (tm) and it seems to be a camera on the market.  Last I heard Kinor-2K and Acam were on sale.
My primary criticism of the adjustments applied to REDCODE is that there are too many of them, and how they all interact seems less than obvious - it's like taking 9 prescription drugs at once. Having an alternative interface for translation from the sensor code values to the end-use values, one that shows the exact code translation in a clear way, would be an improvement that would clear up some of the fuzzy logic behind various ideas of what works best, as you could KNOW what happened to the sensor data and see both the original and resulting code values side by side, and so know the exact exposure levels on the sensor itself. (As I can do by measuring a gray and white card without any corrections - is there a way to get a TIF out of REDCINE-X without ANY corrections at all, so that the TIF code values are in 1:1 correspondence to the sensor code values from the ADC?) 
Dan Hudgins
tempnulbox (at) yahoo (dot) com
San Francisco, CA USA
Developer of 'freeish' Software for Digital Cinema, NLE, CC, DI, MIX, de-Bayer (DNG), film scan and recorder, temporal noise reduction etc.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Michael Most
Date: Fri, 10 Feb 2012 07:06:53 -0800
X-Message-Number: 11
On Feb 9, 2012, at 11:55 PM, Nick Shaw wrote:
I know a lot of people use ALEXA LUTs for REDlogFilm media, but don't forget the standard LogC to Rec709 LUTs from the ARRI web app include a colour matrix which is specifically designed to convert ALEXA Wide Gamut into Rec.709 colour space. 
The Arri online LUT builder lets you build a LUT with or without a matrix. Building a LogC to Video LUT with no matrix and extended range yields a LUT that works very well with RedlogFilm footage, just as Florian described.
Mike Most
Colorist/Technologist
Level 3 Post
Burbank, CA.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Nick Shaw
Date: Fri, 10 Feb 2012 15:20:29 +0000
X-Message-Number: 12
On 10 Feb 2012, at 15:06, Michael Most wrote:
The Arri online LUT builder lets you build a LUT with or without a matrix. Building a LogC to Video LUT with no matrix and extended range yields a LUT that works very well with RedlogFilm footage, just as Florian described.
Absolutely.
That is why I said there was a matrix in "the standard LogC to Rec.709 LUT", and recommended "a 1D LUT … or a 3D LUT with colour space conversion switched off."  To do this the user needs to understand properly how to use the options in the ARRI LUT generator web app, and I have come across many people who do not fully understand those options, including people who I would expect to know better!
Nick Shaw
Workflow Consultant
Antler Post-production Services
London, UK
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Michael Most
Date: Fri, 10 Feb 2012 07:30:18 -0800
X-Message-Number: 13
On Feb 9, 2012, at 11:30 PM, Craig Leffel wrote:
There is NO reason anyone should be color correcting Red footage from a pre-processed or converted file format. 
I disagree. If you have a project that consists only of Red originals, with no other cameras involved, no visual effects, no speed effects, and, well, basically only cuts and dissolves, that statement makes sense. But most "real" projects, especially long form projects, don't exist in that kind of a vacuum. There is often a healthy mix of camera originals (from multiple cameras) and visual effects, and the only real way to keep things properly conformed and coherent is to convert to a "standard" container for everything. That way things can be properly maintained and managed by editorial. And the truth is that with sensible settings, there is very little to no difference between doing a "live passthrough" RAW to RGB conversion and doing a transcode, because ultimately you don't correct RAW directly in any case. It must become an RGB image for any further manipulation to take place. Yes, the conversion settings can help optimize that, and yes, it's nice to work that way when you can, but that's not always the case. And for all the talk about what's "proper," I know of very few large features shot on Red that have gone through a DI pipeline in their native form. Some, but very few, for the very reasons I mentioned.
So, as many have said it doesn't really matter what you do on a Red. What's important is what your lighting ratios are in terms of Key to Fill 
and where you place your whites. As long as you don't clip the whites or blacks in the capture, a colorist using the raw file can slide the exposure scale up and down to fit the right spot on a PER SCENE basis.
That's true, but it's also true that if you use RedlogFilm as the gamma curve and leave the camera metadata alone, you're not likely to clip anything that wasn't clipped in original production, provided the cameraman and/or the DIT knew what they were doing. The range that's maintained by the RedlogFilm conversion is very, very wide, and unlike previous Red gamma curves, it's very unlikely that you're going to see something clipped that wasn't.
Change ISO ..Change white balance ..Change Kelvin ..Change Flut..
ISO and Flut are the same thing. The only difference is that Flut is scaled for tenths of a stop.
The Red is the only camera on the market where the colorist can recover the entire sensor data and all captured dynamic range from the Raw file. The Alexa is completely incapable of this at this time. The Alexa Raw file is currently severely limited. 
Please explain this, because I haven't found that to be true at all.
Arri is still trying to convince people that Prores is fine and that all we need to do is shoot and edit. Nothing could be farther from the truth for high end commercials, TV and Features.
Once again, neither I nor almost anyone I know - all of whom work every day in high end television and features (mostly television) have found that to be the case. LogC Prores files work quite well for television series work when put through a proper color pipeline. I really don't understand what you feel is the problem, unless, as I said earlier, the material is not being competently shot. And I don't think Arri is "trying to convince people" of anything. They provide tools and choices, and those tools and choices are selected by cameramen and production teams. ProRes HD files are one choice. Uncompressed HD is another. ArriRaw is another. Personally, I like the idea of having choices that can be tailored to the needs of the job at hand in terms of resolution, file size, quality, flexibility, budget, and available post time. Obviously Sony likes that approach as well, as they're doing essentially the same thing on the F65. And although I like Red and what they've done, the fact is
that Red is the one company that forces you to shoot a format you might or might not really need or even want. So there are two sides to that discussion...
Mike Most
Colorist/Technologist
Level 3 Post
Burbank, CA.
----------------------------------------------------------------------
Subject: RE: RED Workflow
From: Daniel Perez
Date: Fri, 10 Feb 2012 11:01:42 -0500
X-Message-Number: 14
Craig Leffel wrote: 
There is NO reason anyone should be color correcting Red footage from a pre-processed or converted file format. 
Most of the REAL color correctors on the market, that are designed to actually do color correction as a professional and 
all day everyday task can color correct from the native Raw R3D file. 
It is important to note though that it is not always clear (to me at least) how the RED SDK fits in the floating point workflow of those professional color correction systems. In particular when a RED ROCKET card is involved.
As far as I understand, most color correction systems must use the RED SDK to process the RAW into float RGB for further color grading (YRGB in Resolve). It is not clear how/when all the RED SDK's embedded color processing is done: floating point? What precision? ... In the end, what does the RED SDK deliver to the RGB color engine? Is it a float framebuffer? ... I've been told it delivers fixed point RGB!!! ... in either 8 bits, 10 bits, 12 bits or 16 bits (when the system lets you choose).
No system actually grades from the "native" R3D. They all grade from the RGB output provided by the RED SDK... maybe "live" or "real time", but you are not actually grading the RAW Bayer pattern. The question here is whether all those embedded extra color transformations that the RED SDK provides can be considered part of the professional floating point grading system... or whether they should be considered just an input pre-process.
Daniel Perez
VFX/DI - WhyOnSet
Madrid - Tremendo Films
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Keith Mottram
Date: Fri, 10 Feb 2012 17:18:07 +0000
X-Message-Number: 16
On 10 Feb 2012, at 16:51, Craig Leffel <craig@optimus.com> wrote this about Arri:
AND they've decided that getting in bed completely with Apple
Don't know about anyone else but I'm looking forward to road testing Baselight's FCP plugin... also, got to be honest, I prefer the look of ProRes 4444 Log to Red's. As for the cameras themselves - give me an Audi over a Hummer any day of the week...
Keith Mottram
Edit/ Post, London
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: David Perrault
Date: Fri, 10 Feb 2012 12:37:58 -0500
X-Message-Number: 17
"The reality of this discussion is to think of exposure as a big ball of data in the middle of a predefined scale."
Really ?!?
I think that's a bit delusional - that's just not the way photography, 
as an art, is practiced.  There is photography and there is scientific 
imaging - and there is a difference.
Imagine how *The Godfather* would look if exposures were chosen in such 
a scientific manner?
Sometimes things are clipped or squashed for a reason.
-David Perrault, CSC
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Craig Leffel
Date: Fri, 10 Feb 2012 12:19:32 -0600
X-Message-Number: 18
On Feb 10, 2012, at 11:37 AM, David Perrault wrote:
Really ?!?
Yes, Really.
I think that's a bit delusional - that's just not the way photography, 
as an art, is practiced.  There is photography and there is scientific 
imaging - and there is a difference.
Not really. Look at the most celebrated still photographers of all time, especially the ones that specialized in 
making and manipulating negatives. Take Ansel Adams. Known for the widest of latitude in the resultant prints that came from his exposures.
He and others pioneered concepts like N-1 development to place highlights on the negative so that they were in a place capable of being reproduced
at the intended scale or stop if you will. Placing your exposure within a capture medium you understand is exactly what Photography
and the Art of Photography is all about. You can't break the rules with any kind of knowledge and consistency if you don't know them.
Imagine how *The Godfather* would look if exposures were chosen in such 
a scientific manner?
Is your point that if the Godfather was captured with a flat neg in a defined space without clipping the capture curve that a timer or colorist couldn't
have possibly achieved that look? Because I disagree. At that point we're talking about characteristics of certain film stocks, which in this discussion
means a camera or a file format. I would argue that the Art you see has as much to do with physical characteristics as it does the way it was printed.
The exposure of the capture is secondary. Those early photographers would argue that the Art comes in the darkroom where they purposely decided
how to present their images and made version after version of burning and dodging, and chemical bath changes, and differences in time per bath, and
the kinds of actual developer, stop and fix they used. As well as 2 bath developer. ALL of that contributed to their look. Photomechanical and physical
processes after the fact. Composition, framing and exposure have to happen in the camera. The rest is taste and personal opinion.
Sometimes things are clipped or squashed for a reason.
True enough... and back when we projected light through a physical surface and onto a wall, that kind of thinking made sense.
Digging into a film stock was just fine when the person making the exposure knew the intended display medium and format. The sheer fact that light is physically penetrating
through a physical object has everything to do with the intended output and the beauty it produces.
We're not living in those times anymore.
CL
_________________________
Craig Leffel
Director of Production
One @ Optimus
161 East Grand
Chicago, IL 60611
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Noel Sterrett
Date: Fri, 10 Feb 2012 13:44:13 -0500
X-Message-Number: 19
Daniel Perez wrote:
The question here is if all those embedded extra color transformations that the RED SDK provides can be considered part of the professional floating point grading system ...
Any data transformation (color space conversion, debayering, filtering, 
etc.) that cannot be perfectly reversed, involves a loss, however 
slight, of information. Where multiple transformations are involved, the 
order of processing can also influence the result. So in a perfect 
world, it would be preferable to have direct access to the sensor data, 
so that all processing thereafter would be left up to color correction 
systems rather than hidden by the manufacturer, either in the camera, or 
their SDK.
But we don't live in a perfect world, and at the moment, very few 
cameras let you really peek inside the sensor. Imagine what movies would 
have looked like if Kodak had hidden how film responds to light.
Cheers.
-- 
Noel Sterrett
Admit One Pictures
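The loss Noel describes can be demonstrated with a toy quantization round trip (Python, illustrative only; a real debayer/SDK pipeline is far more complex, but the principle is the same):

```python
# Quantizing float image values to 8-bit codes and back: the round trip
# is not perfectly reversible, however slight the per-sample error.
def quantize_roundtrip(x, bits=8):
    levels = (1 << bits) - 1
    return round(x * levels) / levels

original = [0.1234567, 0.3333333, 0.9876543]
recovered = [quantize_roundtrip(v) for v in original]
max_error = max(abs(a - b) for a, b in zip(original, recovered))
print(max_error > 0)  # True: some information is gone for good
```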
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Fri, 10 Feb 2012 11:19:21 -0800
X-Message-Number: 20
Is your point that if the Godfather was captured with a flat neg in a defined space without clipping the capture curve that a timer or colorist couldn't have possibly achieved that look?
No, his point is that a flat negative would give a colorist the option of rendering it just about any other way he or she wanted. This is not desirable.
Given that we, as cinematographers, don't process and print our own work and must rely on the expertise of others, it is too easy for a rogue colorist or a meddling producer to come along later and change it all. If we shoot it such that it can really only be graded one way then we protect the integrity of our work and the director's vision.
The concept of "Just shoot a flat negative and we'll do the rest" moves the role of cinematographer from artist to technician. I'm a director of photography, not a director of data capture, and my role is not to simply hand over a bunch of data so that someone else can have all the fun. I shape it first, and I expect it to retain that basic shape - with a bit of buffing around the edges - all the way through the post process.
I have to admit that I'm a stickler for shooting a solid "negative" which offers a fair bit of leeway in post, but I do that for my own peace of mind rather than to give someone a license to do it their way later.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Keith Mottram
Date: Fri, 10 Feb 2012 19:41:46 +0000
X-Message-Number: 21
There are examples where it is not possible to rebuild the look well in post. How about someone heavily backlit by the sun - in a shot like that I don't want all the detail known to man in the subject's clothes, but I want the sun to wrap and burn round the subject in an organic manner. Exposing for maximum range in this and other cases would ruin the shot. Unless the lighting is naturally flat, exposure should be set for the sweetest point for the end image, not the sweetest point for the majority of options - unless there is a specific need, for example VFX.
Then again when it comes to commercials we're all technicians, if I ever think otherwise I just become depressed.
Keith Mottram
Edit/ Post... but does like to shoot occasionally.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: David Perrault
Date: Fri, 10 Feb 2012 14:45:06 -0500
X-Message-Number: 22
"The exposure of the capture is secondary."
Uhmm...  No.
To a scientific objective, yes.  But the creative mandate does not
support your post-production-centric way of looking at things.
Comparing Ansel Adams prints and neg density to modern film and television productions is just obfuscation.
If the creative mandate of the cinematographer is maintained, with
collaboration that extends the final image,  then there is no denying
that capturing the most information possible has merit.
But when does that actually happen?  Modern production realities often
remove a degree of control from the cinematographer in the final image 
manipulations.  And that is putting it nicely.
The choice of exposure, and the inherent manipulation this provides, is
one of the ways photographers take the science of imaging into a
creative place.
"The sheer fact that light is physically penetrating through a physical object has everything to do with the intended output and the beauty it produces. We're not living in those times anymore."
Thinking that way is pushing the art backwards.
"You can't break the rules with any kind of knowledge and consistency if you don't know them."
That's being pedantic without allowing for the creative mandate of those 
that do know the rules.
You need to read up on how *The Godfather* was shot.
-David Perrault, CSC
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Kyle Mallory
Date: Fri, 10 Feb 2012 14:37:29 -0700
X-Message-Number: 23
My preferred/personal workflow: since you're shooting RAW, monitor RAW.
If I don't have enough light to make it work, then turn off RAW 
monitoring, and tweak ISO/etc to get an idea of what can be recovered in 
post (or if its even worth trying to recover).  But for the 95% of 
everything... I have to remind myself that monitoring w/ meta (or 
Non-RAW) is a false representation of what the camera is actually recording.
The important thing is that the camera records what the camera records, 
regardless of your meta and how you choose to monitor.  Everything else 
is just pushing bits around *after the fact*.  You aren't going to 
magically create what wasn't there originally.  And if you think you 
are, you are wrong, and in fact you are most likely throwing information 
away somewhere else.
--Kyle Mallory
Filmmaker Hack
Salt Lake City, UT
----------------------------------------------------------------------
Subject: Re: RED Workflow
Date: Fri, 10 Feb 2012 11:29:57 -0800
X-Message-Number: 24
Thanks guys,
This discussion took an interesting turn...
Just to jump back to the original topic - I understand what you are saying about the distribution of dynamic range and rating the camera.
I think that some people are confused since the term 'native iso' is still being used in this context occasionally. Perhaps this is misleading when discussing the RED camera.
If the 'native iso' of the camera is 320, then shooting 800 is 'underexposing'. Switching to 'raw view mode' shows a darker image as you rate the camera higher, and so on. I like the dynamic range distribution definition of this better.
So after reading your replies it seems that my original idea is unnecessary. Shoot and develop at the same ISO, know your camera and its abilities at the chosen ISO - pretty much what I've been doing.
I'm still tempted to create a look for dailies that will be slightly different between day/ext and night/ext and treat it as I would treat two different film stocks used for the same purpose, but this becomes a creative choice more than a technical necessity.
Tal
Tal Lazar
Director of Photography
----------------------------------------------------------------------
Subject: Re: RED Workflow
Date: Fri, 10 Feb 2012 13:24:22 -0800
X-Message-Number: 25
Things change. What? You already knew that? My point is that all of us, as
artists and technicians, must nimbly navigate the actual issues impacting
"authorship" of the image in the here and now.
At the most basic level, shooting a fat, clean digital "negative" that
travels into post with metadata that indicates intent should work a treat.
IF the DP has enough "juice" to keep their "look" relatively intact through
to the finish that's great (if they'll pay you to participate in the grade
even better, but we all know how often that happens ;-)). If the producers
see the DP as more technician than artist (as is typical in spot work),
most of us would consider that a poor use of resources, but unless you
don't plan to cash their check...
I know some DP's try to make a "thin" negative that falls apart with heavy
grading just to wrest control from their "collaborators". There was a very
high profile studio tentpole where the well known DP and the well known
colorist ended up in a veritable game of chicken where the DP kept dropping
exposure so that when the studio pushed the colorist to lift the levels
they would be stymied by the noise floor. WTF. Is this really the road we
want to go down?
Cameras that shoot in RAW color space with 12+ stops of DR like the RED
Epic present a different set of opportunities, and risks, than other
formats. IMHO it makes it more critical that the DP and the colorist are in
the loop with the creatives in designing the "look". Insert the usual great
power, great responsibility rap here. Bitch and moan all you want but does
anyone really expect to get that genie back in the bottle?
Blair S. Paulsen
4K Ninja
SoCal
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Fri, 10 Feb 2012 15:39:42 -0800
X-Message-Number: 26
If the 'native iso' of the camera is 320, then shooting 800 is 'underexposing'.
Not really. "Native" just means that the signal coming out of the A/D converter isn't being boosted any further in the DSP. Native gain means very little because, while it is the "cleanest" signal you'll get out of the camera, the noise level is what really defines how fast it is.
Even though ISO 800 is "underexposed" in relation to the native gain there's nothing wrong with using it if you like the results. There's no law that says you can only use the camera at its native gain. How to rate the camera is a creative decision, not a technical one.
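The trade-off behind that rating choice can be put in rough numbers. This is illustrative arithmetic only: re-rating shifts where middle grey sits within the sensor's fixed range, trading highlight headroom against shadow noise. The 6-stops-above-grey figure is a hypothetical starting point, not a measured Red Epic value:

```python
import math

NATIVE_ISO = 320
STOPS_ABOVE_GREY_AT_NATIVE = 6.0  # hypothetical headroom at the native rating

def highlight_headroom(rated_iso):
    """Stops above middle grey when the camera is rated at `rated_iso`."""
    return STOPS_ABOVE_GREY_AT_NATIVE + math.log2(rated_iso / NATIVE_ISO)

print(round(highlight_headroom(800), 2))  # 7.32: ~1.3 more stops protecting highlights
print(round(highlight_headroom(160), 2))  # 5.0: a stop traded back toward the shadows
```

Whether that extra highlight protection is worth the noisier shadows is exactly the creative call Art describes.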
I'm still tempted to create a look for dailies that will be slightly different between day/ext and night/ext and treat it as I would treat two different film stocks used for the same purpose, but this becomes a creative choice more than a technical necessity.
Exactly. And keep in mind that you can tweak FLUT and get into the RGB gains and contrast settings and tweak the look to your heart's content without affecting the underlying image. It's all reversible as long as you don't clip or push something vital into the noise floor, and post will see the look that you intended when it first comes up on a monitor.
I tend to watch Rec 709 and then toggle into raw occasionally to see if something bad is happening. My understanding is that the traffic lights always look at raw so they can give you a heads-up if something's wrong.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Fri, 10 Feb 2012 15:43:41 -0800
X-Message-Number: 27
when the studio pushed the colorist to lift the levels they would be stymied by the noise floor. WTF. Is this really the road we want to go down?
No, but if the colorist doesn't respect the DP's intentions it's the road that will be traveled. Why would you expect anything less? Most DPs don't get into this business to be technicians. We'll fight for creativity. If someone doesn't like what we're doing then they need to tell us, and then--if things don't change--replace us. Fighting with colorists is not a productive use of anyone's time.
IMHO it makes it more critical that the DP and the colorist are in the loop with the creatives in designing the "look".
Under ideal circumstances that's exactly what happens. It doesn't always work that way, though.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
---
END OF DIGEST
CML-DIGITAL-RAW-LOG Digest for Saturday, February 11, 2012.
1. Re: RED Workflow
2. Re: RED Workflow
3. Re: RED Workflow
4. Re: RED Workflow
5. RE: RED Workflow
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Tsassoon
Date: Sat, 11 Feb 2012 08:22:20 +0530
X-Message-Number: 1
DNR
Tim Sassoon
SFD
Santa Monica, CA
Sent from my iPhone
On Feb 11, 2012, at 2:54 AM, Blair Paulsen  wrote:
DP kept dropping exposure so that when the studio pushed the colorist to lift the levels
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Bob Kertesz
Date: Fri, 10 Feb 2012 20:30:31 -0800
X-Message-Number: 2
There was a very high profile studio tentpole where the well known DP and the well known
colorist ended up in a veritable game of chicken where the DP kept dropping exposure so 
that when the studio pushed the colorist to lift the levels
they would be stymied by the noise floor. WTF. Is this really the road we want to go down?
Sounds very much like what was done on the original Godfather.
--Bob
Bob Kertesz
BlueScreen LLC
Hollywood, California
DIT and Video Controller extraordinaire.
High quality images for more than three decades - whether you've wanted them or not.
We sell the portable 12 volt TTR HD-SDI 4x1 router.
For details, visit http://www.bluescreen.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Tsassoon
Date: Sat, 11 Feb 2012 10:29:33 +0530
X-Message-Number: 4
OTOH, in the real world, working on a movie with multiple VFX vendors and the need to do quite a bit of processing to RED images being used at their highest resolution; removing lens distortions, sharpening, crops, technical or pre-grade, etc., one would no more hand over the RAW footage to work from than one would OCN in a film show for vendors to scan or TK themselves (besides damage or loss).
There are reasons production does the scanning in a film show, and there are reasons to pre-process to an approved distribution DPX or EXR in a digital show. Mainly so there's only one movie being made.
Producers are not by nature enthusiastic about paying for the work, but we still manage to sell it :-)
Tim Sassoon
SFD
One more day in Mumbai
Sent from my iPhone
On Feb 11, 2012, at 12:14 AM, Noel Sterrett <noel@admitonepictures.com> wrote:
Any data transformation (color space conversion, debayering, filtering, 
etc.) that cannot be perfectly reversed, involves a loss, however 
slight, of information.
----------------------------------------------------------------------
Subject: RE: RED Workflow
From: "Geoff Boyle"
Date: Sat, 11 Feb 2012 08:04:22 -0000
X-Message-Number: 5
I've been watching this unfold and kept saying to myself "stay out of it"
but really!
Guys, there are a million ways to do anything and which one is "right"
varies job by job and facility to facility, client to client and place to
place.
THERE IS NO "RIGHT" WAY!!
There's only the way that works for you on that particular occasion.
Right now I'm assembling a 3D piece for a conference I'm speaking at and I
have rushes in SIV, CF mux, CF non mux, DPX, XDCam, NXCam, R3D, GoPro... I'm
sure I've missed something.
I have SpeedGrade NX, Edius 7, Premiere Pro, RedCineX, Firstlight, Resolve
and on and on.
I'm transcoding everything to DPX using whichever route works best for that
source.
For CF it's establish a look, but not do any 3D work, in Firstlight, then
output to DPX via Adobe Media Encoder, with R3D it's RedCineX and out to
DPX, Edius seems to be best for Sony formats, and on and on.
In theory most of the edit software can work with the formats natively, and
they do, but I'm finding that there is a "best" route for each format and
they are not in the same software.
So, I use the best for any individual format and then take the common format
of DPX into SG and do any 3D work and final grading there, outputting to DPX
to then create a DCP.
Is it the best way?
It is for me on this job.
The next time I try this????
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
---
END OF DIGEST


As I mentioned at the beginning, this is a snapshot of attitudes at the beginning of 2012 and there's a lot of good sense here, but all methods to my mind are also rituals that in the end are there to get you through the day.


Terry Flaxton