After 27 years on the road as a DP, I recently took up a Creative Research Fellowship in High Definition Imaging at Bristol University. There are two strands to my research: i) looking at the effect of various resolutions on perception by making and exhibiting HD installations, and ii) creating a resource for future researchers: video interviews, shot on HDV, recording the thoughts of people working in the higher reaches of HD at this point in time.
So I've just been in the USA doing some interviews with people who are across the HD medium in an interesting way. Some of the people interviewed were just back from this year's NAB, at which Red announced 3K and 5K cameras alongside their Red One, which is notionally 4K (most industry people who've tested it believe it to be about 2.7K, though I've also heard it put as low as about 1.8K). So how does the Red fit into our future, and does the technical stuff really matter? Read on.
Part of my research at Bristol University is to find out what the effect of high-resolution images is on humans as we develop and tailor this new technology to our needs. I have some evidence from a recent installation that increased resolution can confuse the viewer as to what is real and what is not – but this is just the beginning of this new medium.
Remediation is a word that describes what a new medium does by copying the effects of the preceding medium for a while, whilst it’s finding its feet. Still life painting morphed into still life photography and the fledgling medium had to deal with older ways of thinking. HD Cinematography has the mindsets of three prior mediums to deal with – film, analogue video and early digital video.
So for this next work I decided to re-shoot a scene made famous by Ansel Adams in Yosemite Valley, which he captured in his exquisite black-and-white images. This was not a gesture towards the idea of remediation – a form of copying – it was in fact about exploring something I'd noticed as I slowly got my eye into shape. In short, my installation investigated at what level of resolution people confuse real and projected reality. It turned out that images shot with the 960-pixel AG-HVX200 and up-rezzed to 1080i create an interesting level of confusion, with some projected plates of food and some real white plates placed in the projected plates' location.
In this new strand of work I want to re-shoot those places that have been over-shot to saturation, to find out what higher resolutions might say in addition to the iconography of the image – to photographically unveil some new truth about the location.
I hired a Red from Chater Cameras in Berkeley after being directed to them by Art Adams, a well-known Bay Area Director of Photography who, as it happens, was only a week behind me in doing a serious shoot with a Red. Originally I'd posted on CML to try to find a reputable source for a Red, but apart from one in LA from Dale Launer (just a little too far from Yosemite) I mostly attracted some very dubious email correspondence from people who you just knew had no idea about the kit they were hiring out, and certainly no back-up. Chater had two bodies – that's basic if you're hiring. They also had the lenses I needed – you need serious glass with a Red. Also, John Chater is a Scot, and we like Scottish people, don't we?
I wanted an extremely long and detailed zoom out from Bridalveil Fall, to slowly but surely increase detail, depth and definition until the viewer is completely overawed by the shot. This would be the kind of awe you get from an Ansel Adams photograph, but with the addition of the 'reveal', which of course the single-image photographer was denied by their medium. My reveal begins with a long, slow digital zoom out from the pixels showing the water within the fall itself; once it has pulled out to full 4K, the optical zoom takes over, travelling from tight telephoto out to a wide shot.
Chater recommended Jeremy Long as a good AC, sufficiently familiar with the Red to get the thing working and eventually download the data to a laptop. I have to say, when you work with the kit it's a fairly simple operational procedure – but with all of these things you need to get your hands on the stuff in the first place; that's why ACs prep so much. The camera came from Berkeley to Yosemite after a four-hour drive, so if anything failed we were not going to be able to solve it – it would be a do-or-die shoot.
I realised I needed some good glass on the front of the camera in 4K mode, so I asked John for his suggestions and we eventually came up with the 24–290 Angenieux Optimo zoom you can see in the picture – it's not for the faint-hearted. There's a 6-inch Chrosziel matte box on the front holding four bits of glass in front of the lens: ND to bring the shot down to the right stop, some atmosphere-cutting glass, and a sky ND grad. Are grads dangerous on zooms? Only if you use them wrongly. As it happens, I'm adding more and more grads into the mix these days to exert light control all around the lens. The 6-inch matte box is mandatory for proper HD shooting too – so the lightweight camera begins to get very heavy indeed.
So before I press record for the first time I'm standing at Tunnel View, looking at the scene and wondering about exposure, or at least some kind of placement on the latitude curve, with a hubbub of people around me. I take out the light meter and take a reading at 320 ASA (and the Red is balanced at 5000K, by the way – near daylight. I've begun to like this colour temperature a lot more recently as it actually reflects what my eyes are really seeing). As for 320 ASA, I think this is the correct 'exposure' for the current build.
In film we think of an F-stop (or T-stop) as a location on the exposure gradient, and we keep in mind that – unlike in video, which was so damn critical in terms of exposure (basically don't get it wrong, just like reversal film) – all exposure values were in fact a relative judgement. We placed ourselves on an exposure curve and used experience to make an artistic choice about the representation of the reality we saw in front of us, and in that choice induced atmosphere and, hopefully, the suspension of disbelief for the audience.
This gained an auteur kind of respect in the industry as we cinematographers 'painted with light'. I shan't take that statement apart, except to raise a doubt about what some people would prefer to maintain as an 'artistic' idea when in fact most DPs use quite prosaic colour values to paint their 'paintings'. As in any other practice, it's only the special ones with 'the touch' who make superlative work.
Many DPs simply use obvious colour ideas – warm looks make you feel comfortable with what's going on, cold looks make you feel blue and alienated, and so on. Conrad Hall and Vittorio Storaro took two different approaches to the problem of conventional understandings and forged their own paths in colour and exposure to widen the palette of the less talented. We all owe them a great debt of gratitude for taking the risks they did and dragging the commercial folks along with them.
In video we had to play a slightly different game, using all kinds of tricks to pull off the 'dirtying of the look' and generate some kind of organic feel in a clinically clean medium. In video I used to do all my work in camera rather than abdicate my responsibility to the grader to work a thin patina of look over the footage. In film I lived on the edge of exposure – sometimes getting it wrong, of course, but always remembering what a head of BBC Natural History once said: 'we send people out with film cameras as opposed to video cameras because even a monkey can get something by spinning the dial'. That's a telling statement, not because film is easy, but rather because that's where the skill comes in – to get it 'right'.
So it's said that a camera like the Red One has latitude of, say, 13–14 stops. Twenty-seven years ago I paid £500 to attend a workshop given by a name cinematographer to try to learn how to expose film for low light. The cinematographer had specialised throughout his career in low-ASA film and costume drama and was known for the excessive use of light. I asked him how to work at the low end and he didn't understand my question – which was good, because he relayed to me that I was thinking in the wrong way. There is no such thing as correct exposure, just a choice about where you place your exposure on the latitude gradient. So I paid my £500 for that one piece of information: 'Think Differently'.
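To make that latitude figure a little more concrete, here is a rough illustration in code – a sketch of the arithmetic only, not a measurement of any actual camera:

# Each stop of latitude doubles the range of light a sensor (or a film
# stock) can hold between clipped highlights and crushed blacks.
# Illustrative numbers only, not measured values.
for stops in (11, 13, 14):
    contrast_ratio = 2 ** stops
    print(f"{stops} stops of latitude -> about {contrast_ratio:,}:1 scene contrast")

# 11 stops -> about 2,048:1
# 13 stops -> about 8,192:1
# 14 stops -> about 16,384:1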
So for absolutely nothing I'll pass this on to you: 'Think Differently' from now on about exposing electronic cinematography (that's what I'll now be calling it, by the way, because that's what I accept it has legitimately become).
With all of the above coming into my mind, a dawning realisation started to take shape, which I'll talk about in a little while. I rated the camera at 320 ASA for Rec 709 and measured 16 in the shadows and 45 in the highlights. The sweet spot on the lens was between 5.6 and 8, so I offered up a 2-stop polarising filter, a .3 or .6 of ND depending on the lowering or general lift of the light, and a .3 ND grad to hold more of the clouds, and settled on a stop of 5.6 and a half to place the whole image at the point on the gradient that would let me see, once again, what was before me then. There's nothing special about the above calculation – it's a straightforward safety net.
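For anyone wondering how those filter choices stack up, here is a minimal sketch of the arithmetic. The 0.3-of-ND-density-per-stop conversion is the standard one; the unfiltered reading of f/16 is a hypothetical starting figure for illustration, not what I actually metered:

import math

def working_stop(unfiltered_fstop, stops_lost):
    # Each stop of light absorbed by filtration means opening the
    # aperture by one stop, i.e. dividing the f-number by sqrt(2).
    return unfiltered_fstop / (math.sqrt(2) ** stops_lost)

polariser_stops = 2.0              # the 2-stop polariser above
for nd_density in (0.3, 0.6):      # the ND I juggled on the day
    nd_stops = nd_density / 0.3    # 0.3 of ND density ~ 1 stop
    total = polariser_stops + nd_stops
    f = working_stop(16.0, total)
    print(f"ND {nd_density}: {total:.0f} stops absorbed -> shoot at about f/{f:.1f}")

# ND 0.3: 3 stops absorbed -> about f/5.7 (right in the 5.6-8 sweet spot)
# ND 0.6: 4 stops absorbed -> about f/4.0 (swap the ND as the light shifts)
# The grad only darkens part of the frame, so it doesn't enter this sum.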
So I shot, and Jeremy then downloaded the data from both the flash cards and the 320-gig Red Drive. To demystify this process, one thing to keep in mind is that if you change speed or resolution on the camera you have to reformat the flash card or drive before recording. So you push a card into the slot, set the settings, format it, shoot for 4 minutes, take out the card, put it into a cheap flash card reader on a USB connector on the Mac, and it mounts as a drive and you drag off the footage. Easy. If you use the Red Drive you just FireWire it into the laptop and do the same thing. (Why did Red make their cards proprietary and therefore expensive? Has anyone hacked this yet, or am I missing something? Apologies in advance if I am.) Scott Billups told me he'd shot an entire movie with 4 cards, using them in 8-minute pairs (16 minutes at 2K). He'd enjoyed this, as it had made the whole team feel like they were back on ten-minute magazines. (Keep in mind you have around 160 minutes on the 320-gig Red Drive.)
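That 160-minute figure is easy to sanity-check with some back-of-the-envelope arithmetic. The only numbers taken from above are the drive capacity and running time; the per-card figure that falls out of it is inferred, not a Red specification:

# Rough data-rate check based on "about 160 minutes on a 320-gig drive".
drive_gb = 320
drive_minutes = 160
gb_per_minute = drive_gb / drive_minutes        # ~2 GB per minute of footage
mb_per_second = gb_per_minute * 1000 / 60       # ~33 MB/s

card_minutes = 4                                # a card's worth, as above
card_gb = card_minutes * gb_per_minute          # ~8 GB per card (inferred)

print(f"~{gb_per_minute:.1f} GB/min, ~{mb_per_second:.0f} MB/s")
print(f"A {card_minutes}-minute card works out at roughly {card_gb:.0f} GB")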
You then open RedCine to watch the files (at quarter resolution off a 5400rpm portable drive, which still blew away everyone who watched the shot – and quarter resolution on my 1920x1080 laptop screen meant something like one thirty-second of the actual data, as far as I can work out), or import at 2K into Final Cut. You can use a cropped 4K (4000 pixels rather than 4096) if you use CineForm's Neo 4K utilities for Final Cut. You switch to Log and Transfer and then import through REDCODE. In RedCine you have to choose whether to watch in Red Log or Rec 709 (or various other routes). If you rate the camera at 320 ASA it will look OK in Rec 709 but dark in Red Log, as it should then be rated at 180 ASA. There's a whole set of arguments about which to use – this is one to study up on, but Rec 709 is good for TV, Red Log for cinema – and everyone can have a good argument about this subject the more that gets 'properly' shot. Also, with information like the above, don't try to learn it – it's too hard just through the mind. You'll understand in a few minutes if you get the programme and press a few buttons. (I do love this about Red software, by the way – no instructions, you just work it out, as it's all self-explanatory.)
I found the process easy, even though every take I did was watched by a large crowd (put a movie camera at Tunnel View in Yosemite and, even though the view is one of the most stunning on earth, gadgets still win out). In fact, with all of Yosemite Valley there before them, those who watched the shot slowly emerge on the Red's LCD (no viewfinder yet, but the LCD is good in daylight) still produced oohs and aahs aplenty.
Back in San Francisco, I had to reflect on what I was being told by the contemporary practitioners and users of very high-level HD that I was interviewing. I've discovered that there are other questions to ask besides those that easily come to mind. For instance, questions about where we're going in terms of resolution are not necessarily of much import – does it really matter if the military are working with 64K, even if that's where we'll eventually be going commercially, and what we electronic workers will be using at some point, sooner rather than later? Whether a question like this matters is about the state of mind you're used to being in, and what other states of mind may be relevant to the way the future will unfold.
As I go along asking the questions that have arisen through my own exposure to HD, then in using a camera like the Red One one has to ask: does this line up with the expectations generated by strong advertising messages from Jim Jannard and his team? In fact, my recent use of the Red is generating a new level of realisation about what is going on with HD, one which has grown out of a part of my own intellectual terrain that I thought I had shut the gate on and thrown away the key. This is film thinking, a slightly overgrown garden which still has its fascinations and secret places. Right now I've thrown that gate open again and I'm clearing away the undergrowth with a high level of enthusiasm. That's the end of that metaphor! And here's another:
I'm now feeling liberated, finally. I've talked to highly professional people who are quite anxious about what is going on, but through experience I feel like ten years of climbing has brought me to the top of the cliff and allowed me a view across the mesa. I now realise we've been working in the canyons thinking that this is our terrain – but the truth is that we have to inhabit an entirely different terrain to understand and use HD as it is now re-forming itself technologically. In fact, HD as we have come to know it is dead.
Like taking on Einstein's general theory of relativity, our minds need a re-boot and a re-think about what we're actually looking at to begin to understand the high-resolution world as it now is. We are truly into the next generation of thinking about how we achieve our goal, which is of course to make images that completely blow the audience away – images that enable the audience to performatively enter the space the image is creating. I mean here that the images we create are immersive in the truest sense: we plunge in and inhabit the space the image creates, with compulsion and agreement.
So here's the rub. Electronic cinematography is a description of a raw data flow in which the data holds a latent image – just as the photo-chemical process of film held a latent image. It was latent until the development process released the image, partially. I say partially because there were other processes by which the image could be affected in a material way – bypassing the bleaching of the silver from the negative is one that comes to mind, the now over-used bleach-bypass process. That would have been an innovation in its time, when the cinematographer asked the lab to do something that was until then unheard of.
Contemporary electronic cinematography would have it that you leave certain of the cinematographic responsibilities to post to fix. To my eye this simply casts a thin patina of colour over the image for the discerning eye to be annoyed at. It is a hangover from the early days before the RAW data period – the days of compression, with its desperately awful solutions like GOP encoding (which still fuels low-end sub-HD formats like HDV), low sampling, throwing away much of the camera-head data, and all the rest of the processes used to render an electronic image. This is basically now bad information.
So what's really important here is the latent nature of the RAW data image, and as with film there are some interesting things we can do that are akin to heating the developer or bypassing the bleach! With RAW data I intuitively feel that you don't just leave it to post to fix the image. The issue now is that, just like film, you can affect the 'materiality' of the image.
In the past, film cinematographers had clean stock and ever sharper lenses, and the aim of their job was to make an atmospheric image to carry the audience deeply into the story. So they realised they could over- or under-expose, or mess around with the temperatures of the various baths that unleashed the latent image, until a change was made that revealed an image to the taste and aesthetic of the director of photography. Now we are at the moment where you really can do that with RAW data. The image in front of you can be recorded, imprinted in data form, and a latent image will exist until the 'development' of that image. Of course, here's where the bright, imaginative ones can come along and innovate ways of dealing with the metaphor, so that real and substantial changes are effected in the images we derive in the digital realm.
I have no doubt of the above – every bell I have is ringing to tell me that I am 'warm, warm, warm', as children say in the game of positioning yourself near the goal to take the prize. Since shooting on the Red, I realise that some people are going to have no idea at all what to do with what my intuition was telling me at Tunnel View in Yosemite.
When I see the shot again in its two-dimensional form, I want to see its essence as it occurred to me at the moment when I looked at the vast scene in front of me and, in a zen-like way, regarded it as I imagined Ansel Adams would have looked at the landscape.
Adams had a zone system for measuring exposure, a way of systematising the process so that it worked each time he used it – but it was, in the end, an intuitive zone system. When I'm lighting a room or space I turn off my beta functions and shut my mind down so that the wide-vision aspect of my seeing can come to the fore. By 'wide vision' I mean, in this sense, a more reflective state of mind and therefore a more meditative view.
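For readers who haven't met the zone system, the mechanical half of it is simple enough to sketch in code; what follows is my paraphrase of the textbook scale (Zone 0 pure black, Zone V middle grey, Zone X pure white, one stop between zones), not anything Adams wrote in this form. The intuitive half – deciding which zone each element of the scene deserves – is the part that can't be coded.

def exposure_shift(target_zone):
    # A reflected light meter renders whatever it reads as Zone V
    # (middle grey). Placing that surface on another zone means
    # shifting exposure by the difference, one stop per zone.
    return target_zone - 5

# Meter granite in shadow and decide it belongs on Zone III
# (dark, but with full texture): close down two stops.
print(exposure_shift(3))   # -2 (two stops less than the meter says)

# Meter sunlit snow and place it on Zone VII (bright, still textured).
print(exposure_shift(7))   # 2 (open up two stops)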
So I stepped back from the precipice at Tunnel View with my light meter in my hand, my calculations about filters concluded – what's a good place on the lens, what other filters might I introduce, all that technical stuff – and had to plump for an F-stop that is in some way a measure of an irony: there is a set of learned outcomes acting as confirmation that I should ignore all the learned outcomes one has accumulated in order to cope with life – or to expose a shot, as it's commonly known.
Anyone who's exposed a foot of film will understand the previous description, and this is the place we have to go to in digital cinema, together with willing accomplices from post-production data handling who can see that this may be a way forward. What I'm proposing here is a relationship between production and post-production that exists simply to bring this latent electronic image out into the open in a qualitative way.
In that little equation lies a route by which people can travel to create images that transcend the values of the first period of intensive look-creation in the industry. What we'll see is subtleties of colour and resolution that haven't been seen before – images that are, in their own way, as qualitative as those derived through releasing the latent image in the film domain. I showed my teenage kids the proxy files and both commented, separately, that they sort of looked like CGI. This is significant because it is a realisation that something is going on that they can relate to – something that means something in a way that previous digital HD work didn't.
A note on resolution, and for this I'll take a metaphor up front. If the standard lens of a format is the one that, when put in front of the eye, does not change its magnification one way or the other (in other words, the image neither gets smaller nor bigger but stays exactly the same), then I would contend that our eyes tend to function at 'standard resolution'. What I mean by that is that when we focus on something, we select the thing to look at and then bring to bear on it two elements: increased focus and increased resolution. By these means we separate the object out for scrutiny. What this idea brings up is that we have the capacity to incrementally increase certain visual and mental functions – unlike camera optics, which have to be set and relay information in a way the DP chooses in order to get at the essence of the thing he or she is trying to say something about on a narrative level. I haven't completely thought this idea through, but I find it exciting that our 'sensorium' – the set of senses and the sense common to all, the mind – is available in an incredibly subtle way to the experiencer.
Besides incredible resolution, what I haven't said so far about my experience manipulating the raw data within RedCine and RedAlert (free programmes for exporting Red raw files) is that within these simple little programmes I've seen looks from within the footage that will completely blow people away – looks that are 'different' from what I've seen before, that are so, so much subtler, more delicate, more resolved, softer; in fact, just better. The colour ranges are subtle enough for artistic work to occur.
All we need to do now is embrace the medium fully. I look forward to trying out the Arri 3K and the Dalsa 4K, and the Red 3K and 5K, to see if their images match, or better, the Red 4K. Also, I suspect Apple didn't show up at the 2008 NAB because they're re-defining Final Cut to be resolution-independent – if that happens, then what was latterly known as HD, or 1920x1080, will truly be dead.
Thursday, 8 May 2008