Saturday 29 December 2012

New Technologies of Digital Imaging


I recently wrote a post on CML asking the following:

"I saw The Hobbit in 3D at 24 fps at 2k, then walked into the next-door screening which was showing in 3D at 48fps at 2k.


So the first looked film-like and the second looked like old-style interlaced video - there was even a sensory and hallucinogenic lag in the image, mostly with regard to colour. People who buy into 48fps argue that you should watch for 10 - 15 minutes to lock in to the way you perceive the experience before condemning it out of hand.


Also, as far as I understand it, instead of a 96th shutter, Lesnie shot the movie at a 64th shutter to add motion blur - and this didn't do anything to spoil the filmic look of the 24fps 2D version when alternate frames were removed (a 64th being sharper than a 48th).


So my question is: Has anyone seen 48fps at 4k and if so, was the look filmic or video-like? 


I'm asking that question because it's my guess that 'film-immersion' works at certain 'sweet-spots' of the sensory experience, and that because 24fps is one of those, multiplying the factors could mean that either:


a) a sweet-spot is disrupted if it's not a full multiplication of factors (so 3D at 48fps needs to be at 4k minimum to work)

b) the next sweet-spot is a different multiple (96fps at 8k, or 120fps at 10k, or 192fps at 16k - or something counter-intuitive on an apparently different scale)

c) after 24fps the sweet-spot is way, way above those co-ordinates (on the basis that it is a harmonic of the original)

d) our senses film-immerse around 24fps of image and 24fps of black, regardless of resolution, and that's just how it is....


If there's anyone looking in from the production - you must have shot some tests and screened variations - any comments? How did you prepare the 48fps and decimate the footage back to 24fps?"
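For anyone who wants the shutter arithmetic in my post spelled out, here's a minimal sketch (the function name is mine; the frame rates and shutter angles are the ones under discussion):

```python
def exposure_time(fps, shutter_angle=180.0):
    """Exposure time per frame for a rotary shutter: angle/360 of the frame interval."""
    return (shutter_angle / 360.0) / fps

# 48fps with the conventional 180-degree shutter exposes each frame for 1/96s:
assert abs(exposure_time(48, 180) - 1/96) < 1e-12

# a 270-degree shutter at 48fps gives 1/64s - the extra motion blur mentioned above:
assert abs(exposure_time(48, 270) - 1/64) < 1e-12

# dropping alternate frames yields a 24fps version whose frames were exposed for
# 1/64s - sharper than the usual 1/48s of a 180-degree shutter at native 24fps.
assert abs(exposure_time(24, 180) - 1/48) < 1e-12
```

The point being that the 48fps master's 1/64s exposure sits between the two conventional values, which is why the decimated 24fps version still reads as filmic.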


Someone wrote in response:

"I'm rather amazed and befuddled by all of these calculations and speculations as to the effect of framerates and sweet spots.  2D or 3D, the effect is apparent rather quickly, and this is nothing new. Oaklahoma! in Todd-AO (30fps) looks remarkably different to the CinemaScope (24fps) version which was shot along side it (a take with one camera, then a take with the other).  When the VariCam first came out I used it to demonstrate the difference between 24p and 30p, 48p and 60p.  The difference between 24p and 30p was easy to see for all.  You can think of it whatever you wish, but the moment you get to 30p the effect is very "video-esque" in motion, at least to a brain used to 60i American television.  There's no magical formula of shutter angle and 3D immersion which will change this.  Refresh rate is refresh rate and the mental connotation is, well, whatever the viewer brings to the table.  One can give it a try and decide if it is interesting or acceptable, but the effect is the effect and it's really that simple.  There's no "training the audience," no "finding the magic combination," no "filmmakers not using their tools properly."  Either it is liked by the audience or it is not.


And to a great degree, I think this is true of 3D in general".


I felt I had to reply:


"Mitch and Mike: we're ok with numbers aren't we? After all that's an aspect of much of what we do.
I didn't know the Hobbit was finished in 2k, so that was worth writing the post in the first place for, as I found something out - it certainly kills the issue of seeing a '4k' version at 48fps.


'There's no "training the audience," no "finding the magic combination," no "filmmakers not using their tools properly." Either it is liked by the audience or it is not.....And to a great degree, I think this is true of 3D in general'.


I get that there should be a resistance to PR-style thinking, and I wasn't really interested in the 3D issue, as my interest is with audience immersion (we use light and camera movement to underscore dramatic narrative and deepen audience engagement - why not use new capacities in the medium we work in?).
I'm very privileged to be working on higher dynamic range capture and display, and when you see this actually working before your eyes there's a sense of seeing three dimensions which comes through without the tricks of standard stereopsis. The response after seeing this new imaging technology for the first time is: 'It's like looking through a window'. When you see an HDR image in HDR display space, the sense of it being plastic and unreal goes (mainly because up till now HDR images were seen in non-HDR display space and the audience didn't like what they saw). The possibility for really amazing lighting is there, because the display space approximates the eye-brain pathway.


True, up till recently each image has been captured using 7 or more exposures per frame, and so HDR has been still-image or stop-motion - but we're now perfecting moving-image HDR streams instead of stepped still images. At the capture stage data bottlenecks are becoming an issue as we're generating huge amounts of data (could be up to 1 terabyte per minute next year as we move into high-fps HDR), so there'll need to be developments in every part of the chain - that'll explain my OCD interest in numbers then. Most importantly though, we'll be looking at what kind of content works in this new area.


Call me old-fashioned or even OCD, but I am interested and curious about what can be done outside of what's currently liked by an audience, and I'd prefer this information to come out through CML, within the community that creates moving image art for a living - not only that, but to involve members in trying to do this. 'Course, we may not manage to pull it off, but I'm quite happy to go down in flames trying."
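To give a sense of where a terabyte-per-minute figure can come from, here's some back-of-envelope arithmetic. The resolution, bit depth and frame rate below are illustrative assumptions of mine, not the actual pipeline's:

```python
def uncompressed_rate_bytes_per_sec(width, height, channels, bits_per_channel, fps):
    """Raw (uncompressed) video data rate in bytes per second."""
    bytes_per_frame = width * height * channels * bits_per_channel / 8
    return bytes_per_frame * fps

# e.g. a hypothetical 4k RGB stream at 16 bits per channel, 48fps:
rate = uncompressed_rate_bytes_per_sec(4096, 2160, 3, 16, 48)
per_minute_tb = rate * 60 / 1e12  # roughly 0.15 TB per minute, uncompressed
```

Even this single modest stream is a sizeable fraction of a terabyte per minute; multiply by higher frame rates or multiple exposures per frame for HDR and the bottleneck is obvious.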


So reader: What do you think?

Saturday 30 June 2012

Art and Commerce in Cinema and Television


Years ago, when I was a young DP, I spent £500 on a single day session to learn how to expose for low light. That was a lot of money then. At that session there were several DPs who are now working at the upper echelons of UK film and TV drama. By the end of the day I had discovered that the answer to my question was: sure, there are technical boundaries, but in the end there is no such thing as the correct exposure - because art is a gesture of the moment. A correct exposure in one moment is not the correct exposure in the next, because of the art or intention, or the colour or texture, of the scene.
The light meter and the waveform monitor, as used today for digital cinematography (and television back then), are two forms of evaluation, but these are just devices that allow a certain kind of ritual gesture to happen that produces a decision. If you're 'in the flow' when you make your decision, choosing a fat or thin f-stop - or placing the exposure on the stop itself - then that decision, if made whilst transcending the form you're working in, aligns with what artists do when they make their artistic gesture. That may sound airy-fairy, but whether you're a street cleaner or an astronaut, we all operate in the same way - choices in the moment. So there's technical excellence on one hand and, on the other, art - and if you're really aligning with 'the flow' or, as Taoism would call it, 'The Way', then art can happen within commerce.

Sunday 22 April 2012

ART AND TELEVISION


Lately analogue video has been in the public psyche because of the analogue switch-off in the UK as we move solely to digital transmission (I believe the last region to switch will be Northern Ireland in October 2012). I have attended a few events in March and April and noticed the misinformation flowing around the subject, which differs from my experience of making ‘video art’.

What follows are some discursive notes for a proposal to celebrate, through a series of screenings, analogue video and its transition into digital video during the late seventies, the eighties and the early 90’s. This is not meant to be exhaustive, nor in fact researched - it’s my off-the-cuff memory at this moment of writing. I may return to this, research it thoroughly and create a more scholarly, academic work. For the moment though I am involved (as usual) in making work rather than writing about it, but here, for what it’s worth, are some thoughts.

There are some video links in here, but do let me know of other works online and I'll connect the dots... So basically this is personal history, told from a personal perspective that differs sometimes from published histories - and of those I have to say that the research hasn’t been the best: for instance, my own 5-part series on UK and European video art is said to have been selected by myself and Sean Cubitt. Though Sean was interviewed by me, he did not select any of the work. It was selected by Rod Stoneman and Triple Vision together. In a recent panel event the entire ‘On Video’ series was misattributed to Analogue Productions - though I like Anna Ridley of Analogue, I draw the line at that ownership (it was done by a new curator quoting an old history). Here I hope to address these inconsistencies. But this is currently a partial history, and so there needs to be a thorough and methodical revision of the history as it is currently told in the published works - there’s a PhD thesis to be done here.

I may have misremembered some facts and would welcome correction on anything I’ve said in these notes - also healthy disagreement. Meanwhile, however, you are going to see some details that differ from the books on the subject that have been published since 1980. There are many incorrect ‘facts’ stated in most of the UK output in the area because they have been coloured by a remediatory thinking that this history sets out to redress.

The question then arises: what else is incorrect in these histories? A further question concerns whether or not they leave out much of the ‘intent’ of the period under review and concentrate instead on a history that fits a world view that was then dominant. For my money, of course, the last question is its own answer. This history no longer needs to be as dominant as it was (and to some of us this history is destructive). As a documentarist as well as an artist at the time, I found myself described in one book as a ‘sometimes psychedelic artist’ - a case of being belittled by faint praise.

One more point: lately (April 2012) there have been public events and conferences set up to examine the 70’s and 80’s, and there has been an air of decay - a sort of Miss Havisham feeling around the generation of nostalgia for an area that might have had import at one time but has very little now. But of course all the excitement generated around developments now will suffer this same decay - in the 70’s and ‘80’s, when excitement levels were also high, they were just as important in influencing the developments of the present time. That’s a little bit tautological, but you know what I mean.

ARGUMENT
Through the Seventies and Eighties many makers dealt with the question of ‘ubiquity’ that analogue video had presented them with by engaging with the television form. Following on from Walter Benjamin’s 1936 essay, these makers celebrated the fact that the aura of art could adhere neither to the original nor to the replication of the original. If all were ubiquitous and reproducible, where could the aura of the art object lie? Therefore the strategy became how to adduce value in other areas - in the aesthetics of the work itself. This was the earliest gesture toward the digital, which itself has no material, only a set of processes to describe itself.

Two ideas concerned the makers of the time. The first was the realization that television was the first form where the means of dissemination preceded the means of inscription; all other media were formulated the other way around.

The second was that we had better celebrate the fact that the work existed only whilst the electricity was turned on. No electricity, no work. No electricity, only the traces of the work in the form of its accoutrements: video cameras, edit machines, monitors etc.

From the beginning makers decided to intervene in the dominant hegemony which was the central value system of society: Television. Given this, there was an intellectual allegiance with situationism and its predecessor, dada.

Within this there were two kinds of makers: those who came to video direct (more or less like myself, though I had used film in the past but, importantly, was not so fascinated by it that it became my sole medium), and secondly, people who had come from film in the delight that the image was instant and didn’t need several weeks before it came to their sight (and these makers of course remediated the nature of the new medium through seeing it in terms of film).

Given the above, experiments had gone on in various countries, including Stan Vanderbeek’s extraordinarily prescient use of two whole channels to deliver a live video interruption in 1970, entitled Violence Sonata.

But in the UK, one of the earliest engagements with the recorded image - albeit on 16mm - was David Hall’s Television Interruptions in 1971.


For me these were the products of a film understanding, derived from an attitude evinced from the modernist project of truth to materials, and they remediated the new form in the shape of film and its working practices. Hall engages with the TV set in the sense that he occupies it with elements such as water - however, like the rest of television at the time, his interruptions are constructed in the film medium, with sensibilities derived from prior experiments in that medium.

In one intervention he focuses on a tap dripping until eventually the TV set is filled with another medium. This is reminiscent of Viola’s more spiritual installation in which a camera looks at the drip on the end of a tap as it forms - a Buddhist statement of impermanence, as the image is projected on a wall and the world comes into being and out of being periodically. The British material reading of the form at the time was more concerned with the material of the medium itself - in my opinion a lesser study, later taken up by conceptualists like Hirst and co with their evaporation out of art into concept. True, Viola had a material concern too - but it was overridden by the act of the artist concerned with our place in the world, as opposed to the artist concerned with his or her materials.

The excitement these makers felt was limited, as many film practitioners were bound by a love of and loyalty to the material of film, and therefore their excitement was derived from the fact that some of video’s processes were ‘improvements’ on the problems of film. With video you didn’t have to wait for development and printing; with video you could shoot for longer than a standard roll of 16mm, which lasted 10 minutes at most and 4 minutes if you used a Bolex 16mm camera; with video you could erase what was unsatisfactory aesthetically and, marvellously, re-record over it to make a new recording. But these virtues were not the aesthetics of the new medium; they were simply improvements over an old medium and therefore constituted a remediation of the new medium. The film-makers were busy re-inventing themselves in their own image.

What came next was a new generation of makers who were not bound by the aesthetics of the material of film, nor busy with an anti-establishment view on a material level. They were, however, intensely political and carried with them anti-establishment political views.

THE RELATIONSHIP BETWEEN EARLY ANALOGUE AND DIGITAL MEDIUMS
Prior to describing the history of what came between 1976 and 1992 - the period of exploration and investigation of a set of ideas that amounted to the birth of the digital age via aesthetic concerns - it is important to situate what the author believes to be the actual condition of the digital realm as it currently stands. One element that can be identified about the digital is its dependence on electricity in some form. When the power is turned off, the digital ceases to exist. Another condition of the digital is its requirement that everything that enters its continuum is first encoded into some form of data. Also, the use of a term like ‘continuum’ identifies something about its state and its material condition - or rather its lack of a material condition.

If the digital is not a medium, or has no medium, then one must describe it in other terms: those of process. Lev Manovich described this in ‘The Operations’, which are basically threefold in nature: to gather, to compose, to publish. One gathers on the net via software; one composes on a ‘site’ such as the computer via software; then one publishes on the net via software. To extrapolate backwards into a prior art form such as sculpture: one conceives the work, chops the wood, sculpts the wood, displays the wooden sculpture - now substitute any object of art and its materials. This is a description of various material mediums via the processes that describe their operations from inception into materiality. Loosely though, the project is the same: gather, compose, publish. The difference is the prior conception and origination of the work in the mind.

The process began with Duchamp, who argued that the patron should no longer determine the nature of art by commission; the artist should choose what the work should be. Magritte questioned notions of representation in a prior representational medium - ‘Ceci n’est pas une pipe’ - the use of text under a picture of a pipe to demonstrate a loosening of pictorial form in relation to concept. Mid-20th-century art recognised that one could begin with the material (or the process), as with Jackson Pollock’s paintings, and then eventually came Warhol’s project: demonstrating not only that anything in our world is art if the artist so chooses, but that all of us, artist or audience member, should open our eyes to see with this understanding.

With Digitality we now transcend and end the conceptualists’ project. Hirst’s final statement about form and value, the platinum skull, demonstrates the end of the material project and also the end of the artist as selector: an Absurdist gesture prior to the ubiquitous event of everyone as artist-maker, which is demonstrated daily on YouTube. But again, current Digitality is simply in a moment of change toward what Digitality will eventually become, so even these articulations and insights are remediated by what has gone before and do not fully describe what is truly happening. That will only come when time has revealed what the birth of Digitality truly was.

Where we currently stand is as ‘flatlanders’ - the Victorian two-dimensional creatures who, witnessing the passage of a sphere through their world, first see it as a point, then an expanding circle, which then contracts to a point. They have been in the presence of three dimensions but have not understood their nature. Our state of understanding is remediated by the past, our historicisations are naturally via the hindsight of the last understood era, and our theories are equally derived from what has passed, so the perception of the present is veiled through the absence of a language that is yet to develop. The mistake is in trying to label it through the medium of the Victorian project, which is about categorising and indexing each element into a separate part - analytical, and part of the Enlightenment project - and which does not understand that we now have to develop theories underpinned by a gestalt approach rather than an analytical one.

THE DIGITAL AND ANALOGUE IN PERSPECTIVE
The period of innovation beginning in 1972 - with the first edit constituted of a re-recorded image transposed across portapaks, as opposed to one executed with a razor blade and glued together with sticky tape - ended around 1992, when the world wide web was on the horizon via the early patterns of encoding of the analogue and now digital video signal. With the advent of wavelet transforms as opposed to discrete cosine transforms (both with roots in Jean Baptiste Fourier’s analysis of the early 1800s), a transformative period occurred during the ’90’s, generally referred to by the term ‘convergence’.
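For readers unfamiliar with the transform families mentioned above, the spirit of the wavelet idea can be sketched with a toy one-level Haar step - pairwise averages (the coarse image) and differences (the detail). This is an illustration only, not the actual codecs of the period; the function names are mine:

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages and differences."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

def haar_inverse(averages, details):
    """Reconstruct the original signal exactly from averages and details."""
    out = []
    for a, d in zip(averages, details):
        out.extend([a + d, a - d])
    return out

x = [9, 7, 3, 5, 6, 10, 2, 6]
a, d = haar_step(x)
assert haar_inverse(a, d) == x  # lossless round trip
```

Compression comes from noticing that the detail coefficients of smooth signals are mostly near zero and can be coarsely quantised - a different decomposition of the image than the block-based cosine transforms it displaced.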

This period was the tail end of a paradigm which began with the descent from the trees of early anthropoids, with their gesture towards standing upright as the essential use of technicity, and other uses of technology eventuating in the use of tools or implements - the first being the use of flints, the last being the use of the stand-alone personal computer.

By 2000 the modernist project had been superseded by the digital project, which still leaves many people confused as to what it actually is - mostly because they try to understand it via modernism and its bastard child, post-modernism, a rehash of the analytical imperative with the bells and whistles of a non-rigorous, gung-ho attitude. But convergence was simply the antecedent of the integrative, as opposed to convergent, moment. The integrative is digital, is no longer concerned with tools and implements to affect the world - the world as we now know it is digital, is immaterial, is not concerned with tools, because the whole world is both tool and arena of experience: the medium is completely the message, and the message and the medium are the world.

Integrative technology is the height of technicity, where technology is the ontological state of being of its inhabitants, where the stand-alone computer and its predecessor the flint tool give way to a complete 3-dimensional real-time mapping of the world inside the grand computer, where the ideal state is continuously held and updated, waiting for perturbations in its fabric created by its inhabitants, to which it intelligently and virally reacts. The world is truly the suitcase, the suitcase is truly the world.

To situate the series of screenings I’m proposing, it is now necessary to elucidate the history of analogue and digital video with reference to the state of digitality we find ourselves in. The screenings themselves are intended to lead towards the propositions I’ve made in a discussion format at the end of the run with prominent makers (that are still active) from the sector.

HISTORY
It is important to note that the first gesture towards digitality via the analogue was accomplished by Frank Zappa, using 2-inch video to ‘film’ the feature 200 Motels in 1971. Here, 2-inch quadruplex machines were taken to the studio to facilitate the recording of the film in apparently portable mode. The cameras, however, were connected to the recording machines via cabling.

In 1972 Hall and Le Grice made their interventions. These were undertaken by film makers who were excited by those specific aspects of the new medium that speeded up the slower processes of film, and who later coalesced into London Video Arts - this kind of film remediation of video was to hang around long into the early history of video.

Other film makers took an oppositional position and remained engaged with the material of film and its timeline, whilst their colleagues more deeply immersed themselves in a remediated position with the new video medium. The concerns of that group and that period were of the academy: a concern with the aesthetics of time, space, location, gaze etc. that had developed from the work of the Futurists, Vorticists, Fauves and so on, who were a product of the acts of socialism and Marxism at the turn of the 20th century. The influence of Kuleshov, Vertov and of course Eisenstein could be witnessed daily at the Film Co-op in the early seventies as the project continued and the light burned brightly.

The first portapaks entered the UK around 1967 and were instantly celebrated by a group of creative people distinct from the film-based experimental moving image community located at the Film Co-op. These, however, were more interested in ‘the happening’ than ‘art’. Yet of course there were others less bashful about calling random experiments with light and colour by the term art, as was seen in the symposium on Expanded Cinema in April 2009 at the Tate. Early portapak video was a playful form which morphed eventually into ‘Community Video’.

As the middle of the 70’s passed, the community video makers jumped out of the backs of their vans on derelict housing estates and cried, much as the workers on an Agit Prop train during the 1917 Bolshevik revolution had: ‘We have the means of production - workers, let the revolution begin’. As Tony Dowmunt of Albany Video noted some years later: ‘Not many people came out to join the revolution, and if it were raining then we’d be howling into the wind and rain’.

This socially active work was in some ways more related to the aesthetics of the post-Marxist experiments at the Film Co-op, due to the simple common fact of a desire to change the society the makers found themselves in. However, instead of examining the medium in a structural way, as the filmmakers of the 20’s and 30’s had done, the community video makers were pleased that they finally had the means of production and that it somehow echoed their lives. Film had to be sent away - video stayed right where you put the portapak and played back when you pressed ‘play’. This was instant and instantly affecting - it was of the period of now - a time period made popular in the sixties.

On the other side of the city, however, painterly and sculptural concerns - the aesthetics that governed the academy, and work derived from film practice - grew, were sponsored by the Arts Council, and became early video art.

Throughout the next three or four years new makers were engaging with the educational system, and the project as espoused by the Arts Council-sponsored video artists was falling on deaf ears. Punk was beginning, but not necessarily in moving image terms (that was to happen 5 or 6 years later). But the strength of passion against the old-school academy system was breaking down attitudes towards what video was and how it should be used. An early group thoroughly engaged in the struggle was Vida, coined from ‘video’, to see. Vida meant ‘look at this’ - an imperative cry. Vida cut their teeth on late film-style experiments with colour and flashing, and actually shot some film, before abandoning the older language and engaging in the documentary form. By 1980 Vida had given over 250 shows.

Nothing was sacred at that point, and whilst working through the ‘veracity of documentary’ Antony Cooper, a founding member of Vida, declared that ‘the only thing documentary documents is the attitude of the maker to their subject at the time of making’. Hence documentary itself was under suspicion of not being truthful.

Elsewhere many other experiments were going on via the work of West London Community Video, Moonshine Community Arts, Fantasy Factory and Oval Video. Their film equivalents were Four Corner Films, Concord Film and Video, Circles Film Distribution, and the Film Co-op.

So the landscape held a series of separate and sometimes antagonistic artistic and political communities, split by aesthetics and intent. But then, with the advent of basic computers in the latter part of the 70’s, the new medium of analogue video was instantly in transformation. Moore’s Law - the stipulation that there would be an exponential increase in capacity accompanied by an exponential decrease in size - was having its effect.

By 1981 a group of interested parties - including London Video Arts, the Berwick Street film collective, Oval Video and the Film Co-op - gathered around London Video Arts and formulated the idea that video should have a festival. The First National Video Festival was held at the Film Co-op in 1981, the second at the ICA in 1982, and a dwindling third at South Hill Park in Bracknell.

The altercations between the two media were overcome when the Independent Film Association allowed video into the hallowed film ranks and the association became the Independent Film and Video Association - mainly because the language of video spoke to the new Channel 4 initiative, and film production was struggling aesthetically, materially and financially with television as a display and distribution medium. Film sought to engage the video makers as allies in the cause.

Vida, who had originated in 1977 in response to the transformative phase between film and video, transmogrified into Triple Vision by 1980. Documentary experiments were still ongoing, but now accompanied by experiments in narrative and non-narrative work. Some of the members of Vida had joined a commercial company called Videomakers in London’s Shaftesbury Avenue; the owners turned a blind eye to the exploits of this small team, who made equipment available to video artists and documentarists alike and began changing industry working practices by employing camera women at a time when there were only a few professional sound women in the sector.

Many video makers had circled around London Video Arts, Oval Community Video, Albany Video and also Triple Vision, who were working within the framework of the Soho-based company Videomakers, which worked in both the commercial realm and the arts realm. Videomakers distinguished themselves by engaging camera women and began to break down traditional working practices directly in the belly of the beast. Equally, Videomakers allowed artists such as George Barber, George Snow and Gorilla Tapes to come and use their equipment. The Duvet Brothers were working at Diverse Productions at that time - founded by Peter Donebauer, who had eschewed the cause of the academy and its form of sculptural and painterly arts practice for the commercial realm. Rik Lander, as part of the Duvet Brothers, was given access after his working day to high-level editing equipment, which allowed him and Peter Boyd McLean to create distinctive forms of editing only glanced upon by traditional avant-garde film making. On his return from Australia, Jon Dovey, who had worked with Oval Video, brought back the Australian fast-cut form - a kind of montage of attractions on methedrine - which created a great furore at London’s Cinema Action when shown to a traditional film-making audience. This was an avant-garde of the electric cinema, not photo-chemical cinema. The name of this form of editing was derived from black music experiments: ‘Scratch Video’, named after the scratch style of playing vinyl records.

Whilst with Triple Vision I unconsciously utilised the form in a work which documented the arrival of Apple’s Macintosh, through being the video crew (with Antony Cooper) for Apple on Ridley Scott’s famous commercial. I had previously worked with Jon Dovey on a Ridley Scott commercial for British Airways. I then ‘stole’ the footage I shot, used it as ‘found footage’ and scratched it into ‘Prisoners’. The act of scratching came about because I had edited this footage for about 6 or 9 months and was always unhappy with the end result. It worked fine - but not potently enough. One night, about 3 o’clock, I became angry and cut the girl hurling the hammer into the television screen against the skinhead’s racist talk... I came out of my act and realised that this was how to cut the whole work. It’s not generally included in scratch anthologies because it is intensely serious, and scratch had a humorous bent to it. C’est la guerre.


Meanwhile, with the advent of Channel 4, the appointment of Alan Fountain, with Caroline Spry and Rod Stoneman, led to the funding of the workshop sector, which was primarily film-based but struggling with the budgets. The sector was engaged in trying to break down traditional aesthetics, but being mostly film-oriented and having to use video, the struggle became confused because it was primarily motivated by budgetary concerns. Nevertheless some amazing video works came out of the cracks of the period - Isaac Julien’s ‘Who Killed Colin Roach?’, for instance.

I and the other members of Triple Vision then left Videomakers and, due to Channel 4 funding, managed to operate in television-company form until 1992. This was a fertile period: television documentaries on various subjects were produced, but long-form narratives such as Laura Mulvey and Peter Wollen's 'The Bad Sister' (1983) were also made completely on video as opposed to film - as an artistic statement and an exploration of the medium's suitability for the act of suspension of disbelief, or its absence due to the effects of the medium. Birmingham Film and Video Workshop made 'Out of Order' in 1987 for £500,000 - an unheard-of amount in the sector for a video production up until that point. It was also one of the first 'films' produced worldwide on video and then transferred, at Moving Pictures, to 35mm for theatrical release.

And where, you may ask, was the representation of the 'dominant artistic video' form backed by the Arts Council? Absolutely nowhere. Abroad, many of us met up at festivals, and our work, the work that was not celebrated by the Arts Council, was being celebrated everywhere but in the UK. Only amongst the film/video coterie in its ivory tower was there any sense that that was where the work was happening. We made many connections abroad, set up projects involving 18 groups across ten countries (The State of Europe, which connected RTE, RTBF, Channel 4 and ZDF), and had retrospectives at places like the Mill Valley Film Festival in California (Coppola and Lucas had just moved up there and set up a festival). I found myself one day outside a screening with three people who were musing on the change from film to video. As I listened it dawned on me that they were the directors of the three films that were screening; they were smoking and talking nervously. They were called Jean, Jim and David. After a while I realised that, whilst they kidded me about my interest in video, they were actually Jean-Jacques Beineix (Diva), Jim Jarmusch (Down by Law) and David Drury (Defence of the Realm). I had a cigarette, went back into the screening, and realised the funny little bloke next to me was the star of Down by Law, Roberto Benigni.

Meanwhile, a branch of the academy, barely recognised but too powerful for the academy to ignore, was broadcasting the American revolution in the form of John Wyver's Illuminations series 'Ghosts in the Machine', commissioned by Channel 4's arts commissioning editor Michael Kustow. However, this was not the English academy; this was the vital, fast, speeding video that video audiences as far back as the Air and Acme Gallery shows of 1980 were used to. The Americans had access to hardware; the British had less well-endowed access. Chris Meigh-Andrews, Alex Meigh, Dave Critchley and myself had organised a series of shows where the early works of Gary Hill and Bill Viola, John Sanborn and Kit Fitzgerald could be seen. Equally, shows of the work of LVA were being seen in the US by exchange. I always had a principle of not putting my own work in these shows, seeing that as a corrupt act. Doh!

By 1984 the Americans had matured, and 'Ghosts in the Machine' was an eight-part series of mainly American video art. Countering this, Triple Vision had been commissioned by Rod Stoneman and Alan Fountain at Channel 4 to make a series about UK video art entitled 'On Video'. This was originally to be done by Luton 33, but somehow it hadn't happened, so we received the phone call to come in and talk about it.






Two sixty-minute programmes and one ninety-minute programme were initially made, and, in contradistinction to 'Ghosts in the Machine', interviews filled the silence between video art works. The difference was context. Many artists' work was shown, including that of Jeremy Welsh, Cerith Wyn Evans and John Maybury.

Eventually, by 1987, Channel 4 commissioned two more ninety-minute programmes: 'TV or not TV', which was 'On Video 4', and 'Statement of the Art', which was in fact 'On Video 5' and also interviewed and showed the work of European makers such as Dalibor Martinis and Robert Cahen, whose excellent and groundbreaking 'Juste le temps' rivalled anything Viola or Hill was doing with the aesthetics of video.

At that time, too, there was another television investigation, which I directed in association with John Wyver, called 'In The Belly of the Beast', which used Video Positive in Liverpool as a platform to discuss where video might be going. This programme was commissioned by Zanna Northen at Granada.


By 1987 I had developed a good relationship with Complete Video (a high-level commercial house) at the moment when digital media became available. I gained access to some of the world's most advanced digital equipment, and this allowed me to investigate the coming digital realm with works such as 'The World Within Us' and, later, when I became artist in residence with them, 'The Inevitability of Colour' (Channel 4 and ACE), which went on to be premiered at the Bonn Biennale and win some international awards (Montbeliard and Locarno). Ironically, having directed Channel 4's 'On Video' series, I then had 'The World Within Us' commissioned by John Wyver's Illuminations for series 2 of 'Ghosts in the Machine'. Meanwhile 'Invisible Television' had been made by Gorilla Tapes (or Vulture Video, depending on what they felt that month) and shown on Channel 4.


There is much more to say, many details to add, but from the earliest experiments by Fantasy Factory and CAT, Albany Video, West London Community Video, Oval Video, Vida, Gorilla Tapes, the Duvet Brothers and Triple Vision, an aesthetic of production grew that was distinct from the academy and from the film-based understandings of early video artists whose concerns were those traditionally evinced in painting and sculpture. Again, there is much to add, and as this is intended to be inclusive of what happened I welcome anyone emailing me to add to this history - or to challenge it.

It is my contention that the excitement, aesthetics and material experiments of this time were the seedlings of the digital. We were passing across a boundary. Through my relationship with Complete Video I made 'The Object of Desire', a multi-layered version of 'The Inevitability of Colour'; this was deeply digital in its concepts, constructs and aesthetic. The Americans were generating works that were slight and lightweight, with an aesthetic traceable on many levels to Disney. They were direct and obvious; the UK works were of a culture that had been around for a long time, one not prepared to be so simplistic about artistic and aesthetic concerns and therefore not so grabbing in their visual form. Yet, in relation to time passed, they stand up more strongly than the American works, which have of course grabbed the historical record. On that basis it makes sense to organise screenings of the named works of the time against what was going on in the UK, to give context and allow the audience to reflect on just how good the British makers were, who have been forgotten or written out of history.

These early investigations were indicative of what was to become digital media and embodied concepts that stood in contradistinction to the modernist project of truth to materials, with a growing dependence on the concept as being as important as the material.

ADDENDUM: TURNING THIS ARGUMENT INTO A SERIES OF SCREENINGS
Screenings could run for three weeks. The first block could be the Channel 4 'On Video' series: 1 and 2 (both 60 minutes) and 3 (90 minutes), plus 'On Video 4' ('TV or not TV') and 'On Video 5' ('Statement of the Art'), together with a series of discussions with contemporary curators and artists. Screenings could be in the evenings, but also, by agreement with various colleges, during the daytime.

For the second week of screenings I propose to invite the group that chose the work for the 1st National Independent Video Festival in 1981 to select work from the '80s, plus a series of discussions with artists who were active at the time the works were made.

The last five screenings could take the form of showing a well-known international work from a particular year that may, for instance, have originated in the United States - say the Vasulkas' 'The Art of Memory' - accompanied by several works that originated in the UK and Europe. The point is that the US artists had a full-blown push from their own culture on why their work should be seen as world-quality work; the British, for the reasons mentioned above, had none of this. Yet I will seek to demonstrate that the UK works are at least as good as, if not better than, the work that obtained the publicity. The screenings could be accompanied by discussions with artists of the time and with contemporary artists and curators.

An additional fourth week of screenings could seek to demonstrate the nature of the digital via works made since 1992; these works would be selected by a group formed of those active in making work and curating during this period.





Some names of production companies that enabled moving-image art work to be seen on TV:

Illuminations
Triple Vision
Analogue Productions
Fields and Frames
Luton 33 (later developing into Gorilla Tapes and Vulture Video)

And of course, the entirety of the 1980s workshop sector, who tried in some way to intervene in this area. An early major gesture was Peter Wollen and Laura Mulvey's shot-on-video 'The Bad Sister' (1983) - and of course Frank Zappa and Tony Palmer's first-ever video feature film, made on two-inch tape around 1971. http://www.youtube.com/watch?v=lyViqlFEKUI

Tuesday 17 April 2012

Changing Times


I've heard it said in the trade press recently that this is the year of 4K. Well yes, that's what the pros want (and ever higher resolution), and that's definitely what the trade wants, given the need for a continuous churn of products to induce profit. But there's a substratum of need within education and corporate work, and there's also a need to make digital cinematography concepts available for television work. So Blackmagic Design, well known for graphics and processing cards and storage solutions, and which went on to buy the Da Vinci colour grading software, has now created its first camera. The well-known digital imaging technician Jonathan Smiles has been quoted as saying that "as soon as light hits the lens it's all post production", and whilst I reject that quite heartily, because it disables a tier of artistic input (i.e. the cinematographer), Blackmagic's intervention makes the statement all the more true.

I've always considered that the job of a good cinematographer is to be the chief quality-control clerk of the production, and that they, of all the roles, should understand completely the pathway from light entering the lens through to light emitting from, or bouncing off, the screen. So the Blackmagic camera, which comes with connectivity through Thunderbolt and then into the Da Vinci Resolve management system (including scopes), takes the whole digital cinematography concept one stage further on from what it was in the age of photo-chemical imaging.

In Los Angeles the Global Cinematography Institute understands that everything is changing and now seeks to train the modern cinematographer right across the whole gamut of roles in the image-making process; they call this Expanded Cinematography. Though still regarding lighting as the highest achievement of human sensibility, akin to the work of a Renaissance painter, they know that the cinematographer has this earlier responsibility, one that has been shirked since post and colour grading started to take over in the '90s. Times are again changing, and it would do well for the trainers and pedagogues who teach the new generation of film-makers to be aware of the way things are realigning. Digital video is dead; long live data cinematography.

Monday 13 February 2012

Exposing images for digital cinematography

The discussion below represents an interesting archive of attitudes prevalent at the beginning of 2012 towards exposure and workflow in exposing the electronic cinema image (specifically on the Red Epic and Red One, though it branches out into digital cinematography issues generally). There are some rules here that I've always worked by, but specifically one: treat the raw capture as the exposure itself and view everything else as metadata that accompanies the image into post; in doing so, situate the exposure 'correctly' so that you get the most out of the 'negative', but also break that rule where it derives the most aesthetically developed image. When I was working in high definition my maxim was: underexpose by at least half a stop. Now that idea is defunct.
In reading this information, do note Florian Stadler’s comment which is both interesting and important:
“It is important to process the footage in LOG space (Redlog Film) in the fully tested and implemented workflow I mentioned. You will recapture the RAW images most extent of highlight information by setting the ISO to 320 and the Gamma to Redlog Film”. 
In photochemical practice, unless you were 'trying something out' you always went for a 'fat negative', meaning you took the most information into post so that the digital grade could deliver the most information and enable maximum manipulation of the image (I began work with film at the tail end of the change from photo-chemical into telecine, which was then to change into digital intermediate). Previously, before telecine, the medium had very low flexibility for manipulation using printer lights (having said that, all the wonderful film looks of around 80 years of cinema were derived from that 'inflexibility'). If you were 'trying something out' you were testing a look, and testing is the primary methodology that a DP uses.
This discussion does not cover putting an 80D orange filter over a native raw sensor that is balanced to daylight. I have mentioned this theory on the Cinematographers Mailing List and been disagreed with, and in other posts been heavily supported; so in all of this you must come to your own practical and aesthetic decisions, because being a DP is (partly) about being an artist, and artists make subjective decisions whereas scientists make objective decisions. You have to choose what you're going to be.

Note: As usual things got a little fractious, so I've removed those particular posts and all personal contact details to protect everyone. Do let me know if anyone objects to what's here. Lastly, as always, Geoff Boyle makes a very apposite comment, this time at the very end.
CML-DIGITAL-RAW-LOG Digest for Friday, February 10, 2012.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Adam Wilt
Date: Thu, 9 Feb 2012 18:37:46 -0800
X-Message-Number: 1
Don't think "shoot at 800, then (using REDCINE-X) bring ISO down to 500". Think "pick an ISO rating that properly trades off highlight protection versus noise". Then "develop" the footage as needed using curves to hold highlight and shadows (or blow them out as you see fit), but probably not fiddling with ISO unless you like the look of 2/3-stop underexposure (grin).
For the purposes of this discussion, let's say that the camera has 12 stops of dynamic range between the level where highlights start clipping and the level where shadow noise becomes intolerable. That 12-stop number itself isn't important; I might say it should be higher, and Art Adams might say it should be lower, because we have different tolerances for noise. I'm just picking a number that keeps the mental math stone-simple. With the M-X sensor, not using HDRx, an ISO rating of 800 means that you'll have six stops above middle gray (e.g., above your incident meter's reading) before your highlights clip, and six stops below middle gray before your shadows get lost in nasty noise. So, if you meter for ISO 800, you'll get a balanced rendition: a tolerable degree of highlight protection, a tolerable (and equal) degree of shadow detail.
Now, take that ISO 800 footage into post. If you metered for ISO 800 and set the lens that way, then an ISO 800 rating will put middle gray pretty much where it should be (let's ignore for the moment any variations in where RED thinks middle gray should be and where you think middle gray should be, grin).
You don't like the noise at ISO 800? Set ISO to 500 in REDCINE-X. Yes, the noise is suppressed by 2/3 stop, but the image is darkened 2/3 stop as well. You now have exactly the same thing as if you had shot at ISO 500 and had purposefully underexposed 2/3 of a stop to protect highlights (the only way middle gray will fall where it should with an ISO rating of 500 in RCX is if you metered and exposed for ISO 500 in the camera).
You can counteract that underexposure with a custom curve, pulling the midtones back up. But, in essence, you're just undoing the ISO change as far as the midtones are concerned; you could just as easily leave ISO at 800, and use a custom curve to slightly crush the shadows in the ISO 800 "development" and get much the same result.
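Adam's arithmetic can be spelled out in a line or two. A minimal sketch (the ISO figures are the ones quoted above):

```python
import math

# Re-rating ISO 800 footage at ISO 500 in post darkens the image by
# log2(800/500) of a stop -- the same shift as underexposing in camera.
shift_in_stops = math.log2(800 / 500)
print(round(shift_in_stops, 2))  # roughly the "2/3 stop" quoted above
```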
The key thing to remember is that the camera has a fixed dynamic range; all you're doing by changing ISO ratings at the time of the shoot is trading off highlights vs noise (have a look at 
where I shoot both the M and M-X sensors at various ISO ratings and grade them "normally", without curves). If you need to protect more highlights, you'll have to stop down when shooting, and in post, once you've pulled your scene midtones back up where they belong, whether via ISO or FLUT or curves, you'll have more noise as a result.
You can shoot at a lower ISO: your images will be cleaner, but you'll give up highlight headroom. Shooting at ISO 200, letting in two more stops of light, means your scene will be two stops cleaner / less noisy, and your noise-limited shadow detail will be 8 stops down instead of 6--but highlights will clip two stops sooner, at only 4 stops over middle gray.
Or you can shoot at a higher ISO: At ISO 3200, you'll stop down two stops. You'll preserve two more stops of highlight headroom (8 stops over middle gray), but you'll have two stops more noise over the entire image, with only 4 stops of shadow detail before the noise becomes intolerable.
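The tradeoff Adam describes can be captured in a small sketch. This uses his illustrative 12-stop figure and the ISO 800 reference point from the discussion; the function name and the simple linear model are mine, not RED's:

```python
import math

def headroom(iso, total_stops=12, ref_iso=800):
    """Split a fixed dynamic range around middle grey for a given ISO rating.

    Model from the discussion above: the sensor's range is fixed, and rating
    the camera faster shifts stops from shadow detail to highlight headroom.
    total_stops=12 is the illustrative figure used above, not a RED spec.
    """
    shift = math.log2(iso / ref_iso)
    half = total_stops / 2
    over = half + shift    # stops above middle grey before clipping
    under = half - shift   # stops below middle grey before intolerable noise
    return over, under

for iso in (200, 800, 3200):
    over, under = headroom(iso)
    print(f"ISO {iso:4d}: {over:.0f} stops over / {under:.0f} stops under middle grey")
```

Running it reproduces the three cases above: ISO 200 gives 4 over and 8 under, ISO 800 gives 6 and 6, ISO 3200 gives 8 and 4.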
Which way makes sense? It entirely depends on the scene, what's worth protecting, and your noise tolerance.
I normally shoot exteriors on the RED M-X at ISO 800 because I like uncontrolled highlights to have detail and color, and I don't mind a bit of "grain" and some noise in the shadows. Art Adams prefers a cleaner image; he'll rate the camera at ISO 320 or 400, for a 1-1.3 stop advantage in image cleanliness; he'll sacrifice 1 or 1.3 stops of highlight headroom to get that reduction in noise. I'll do the same thing on a greenscreen stage with controlled lighting, where I don't have any excessively bright things in the image that need six stops of headroom; it's nice to have that added cleanliness for keying. 
It's all about context; there is no one correct answer. It's simply YOUR tradeoff between highlight protection and noise level. And, of course, with EPIC you have the opportunity to employ HDRx to hold those highlights, too... but that's a whole new topic.
Of course there is NO substitute for running tests yourself, instead of trusting what some goober like me says on the Internet. :-)
Is this trivial? Have I been living in the dark for years? Have you done anything similar to this, or maybe have something better?
In general, the secret for getting good pix out of the RED (at least as far as the tonal scale is concerned) is judicious use of the curves control. S-curving the raw exposure to gracefully handle highlights and gently roll off the shadows, while keeping decent contrast in the middle, is a Good Thing... bear in mind that the FLUT processing in the newer "color science" processing does some of this for you (unlike the early days when it was entirely up to you).
Trivial to say, perhaps, but the possibilities are endless...
Adam Wilt
technical services, Meets The Eye LLC, San Carlos CA
tech writer, provideocoalition.com, Mountain View CA
USA
----------------------------------------------------------------------
Subject: RED Workflow
From: Florian Stadler
Date: Thu, 9 Feb 2012 18:53:20 -0800
X-Message-Number: 2
What I tend to do is the following: 
Shoot/expose at 800 for day int/ext and 500/640 for night exterior/interior, making sure nothing falls into noise zone and nothing clips on the sensor (but I let clipping happen in the 800 LUT).
I then "develop" the RAW negative at sensor native 320ISO in REDLOG Film and use a LUT as a starting point in the grade (Arri provides a really good one, great on skintones). This allows me to shoot the sensor at the optimal under/over sweet spot and retrieve all information captured by the RAW sensor. 
Florian Stadler, DP LA
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Thu, 9 Feb 2012 20:43:10 -0800
X-Message-Number: 3
What Adam said. I couldn't have said it better. You don't get more dynamic range out of the camera by changing the ISO, you just reallocate the bits above and below 18% gray. Slower ISO gives you more room for shadow detail and less for highlights, but crushes noise; higher ISO gives you more noise but better highlights.
Tal, the trick you talk about trying is just the long way around. If you like the camera at 800, shoot it at 800; if you like it at 500, shoot at 500. Shooting at 800 and processing for 500 doesn't change a thing, it just makes the workflow more complicated. Fortunately nothing ever goes wrong in post.
You may find yourself on a beach or in a snow storm, at which point ISO 800 makes perfect sense because you don't have any shadow detail that will go noisy and you need lots of highlight retention. You may find yourself shooting a very dark night interior or exterior without highlights, at which point you might consider ISO 200 for rich, clean noise-free shadows.
It does make sense to pick one ISO and stick with it, but I think you also need to be a little flexible and rate the camera properly for the circumstances--especially if they are adverse.
I haven't shot anything with an Epic yet but based on what I've seen it might be the first camera I'm willing to rate as fast as 800 for normal use. I've been rating Red One MX's at 400 or 500, and I rate Alexa at a nice solid 400, but until I get my hands on an F65 the Epic seems to be the fastest I've played with so far.
I'm not a big fan of noise. I like clean shadows with lots of detail.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Thu, 9 Feb 2012 20:47:28 -0800
X-Message-Number: 4
This allows me to shoot the sensor at the optimal under/over sweet spot and retrieve all information captured by the RAW sensor. 
I must be missing something. The same amount of info is present no matter how the image is "processed" later: that is fixed during capture, and all you're doing after that is pushing bits around. Shooting at ISO 800 and "processing" at 320 just shows you a darker image, which you then apply a LUT to in order to make it look normal again. Why not process at 800 and apply your custom LUT to that?
Also--Arri provides a LUT for Red footage?
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: RED Workflow
From: Florian Stadler
Date: Thu, 9 Feb 2012 22:26:48 -0800
X-Message-Number: 5
I must be missing something. The same amount of info is present no matter how the image is "processed" later:
It is vital and absolutely matters how you process a RAW image before color correction, are you kidding?
You are missing the concept of a "digital negative" and regard the "LUTed positive" as all you captured. 
It is important to process the footage in LOG space (Redlog Film) in the fully tested and implemented workflow I mentioned. You will recapture the RAW images most extent of highlight information by setting the ISO to 320 and the Gamma to Redlog Film. And yes, Arri publishes LUT's designed for their cameras to make the transform from LogC space to Rec709 and that LUT happens to be a pretty decent (not as turnkey as an Alexa LogC of course) starting point after said exposure treatment and processing.
Florian Stadler, DP, LA
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Dan Hudgins
Date: Thu, 9 Feb 2012 23:22:03 -0800 (PST)
X-Message-Number: 6
Quote: [The same amount of info is present no matter how the image is "processed" later:]
The same amount of information is in the R3D file, but you cannot directly access that information without it going through the RED (tm) SDK code that all programs that process R3D into something else use (except REDROCKET (tm) that deviates from the processing somewhat due to a different processing used for speed, it seems).
Because there is no DNG conversion from the wavelet encoded color planes of the sensor, you cannot get at the actual sensor data before white balance (I think they said there was some non-color space option, but that probably also has some deviation from the linear un-clipped non-white balance data sensor data, anyone know? [if it was then no ISO or K adjustment would impact its export]).
So some ISO curve is applied, and some white balance clipping is applied, and those are based on assumptions of where 18% midtone should be (46% as red has said) or in the case of Cineon (tm) code 470/1023 which is not disputed because Kodak (tm) defined that value once and for all time.
But even with the Cineon (tm) curve being used, RED's SDK should also apply a ISO curve for soft clip of the 'super-white' values above 90% white level of 685/1023, otherwise there would be no change if you adjust the ISO in REDCINE-X (tm) and green sensor clip would be set to code 1023/1023.
Because the sensor may have dynamic range greater than the original Cineon (tm) definition, values above 1023/1023 need to be clipped off as part of the ISO adjustment curve, or soft-clipped UNDER 1023/1023 like the shoulder of a film scan would be if the negative was pull-processed.
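The code values Dan quotes can be sanity-checked against the conventional Cineon encoding. A rough sketch, assuming 0.002 printing density per code value and a negative gamma of about 0.6 (these constants are my assumptions, not from the post itself):

```python
import math

# Cineon values quoted above: 18% grey at code 470, 90% white at code 685.
codes_per_stop = 0.6 * math.log10(2) / 0.002   # about 90 codes per stop
stops_grey_to_white = math.log2(0.90 / 0.18)   # about 2.32 stops
white_code = 470 + stops_grey_to_white * codes_per_stop
print(round(codes_per_stop), round(white_code))  # lands near the quoted 685
```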
It's probable that the ISO curves used in the RED SDK/programs do have some loss of data for high values of the highlights. You can test that by having a white and gray card and overexposing them at various stop values from +4 to +8 and seeing where you can no longer see a separation between the two in the processed data, like using a probe to measure the exact code values in a 48bpp full-range TIF file.  With the softclip working right all three colors would keep some separation up to the point that the green pixels clip, so that would be true NO MATTER WHAT ISO is selected in REDCINE-X (tm); you would just see a change in the magnitude of that separation.
If rather when you are at 320 you see 0 separation at +4 stops, but you see 400 separation at 3200, then you know that adjusting the ISO in processing does increase the highlight detail. If rather at 320 you see 10 separation and 400 at 3200, then you know that the highlights are having some detail, just more posterization after bit reduction to 10bit and 8bit use formats.
Because you have higher-bit formats output from REDCINE-X, you may not see the tonal separation on an 8bit or 10bit monitor; in that case another way to see it is to increase the contrast in the highlights or shadows after you make a 48bpp TIF file, then the tonal separation will be large enough to see on a monitor, along with how much banding you get from the tonal expansion.
Because the assumptions in the ISO and K correction are made BEFORE export to Cineon (tm) Log (film log) DPX files, there should be some utility in adjusting the processing ISO before export, and then un-doing that to some degree. I would though caution NOT using 10bit Cineon (tm) files for such yo-yo-ing of the tonal values, as 10bits only has one spare bit to make grading adjustments +/- 2x or 0.5x transfer curve slope; if you are going to yo-yo the tones to compensate for the ISO curve used in REDCINE-X before export as Cineon (tm), you should export as 48bpp TIF or DPX, not 10bpc, or you may get histogram gaps from the LUT used to convert the Log-C to Rec.709 PLUS any additional grading done to re-center midtone and apply curves.
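Dan's caution about yo-yo-ing tonal values through 10-bit files is easy to demonstrate. A toy sketch (using numpy, my choice, not anything from the post): pull a 10-bit ramp down one stop, store it back at integer codes, then push it up again, and half the code values become unreachable - the histogram gaps he describes.

```python
import numpy as np

ramp = np.arange(1024)   # every 10-bit code value exactly once
pulled = ramp // 2       # -1 stop, re-quantised to integer codes
restored = pulled * 2    # +1 stop to "undo" the adjustment

print(len(np.unique(ramp)))      # 1024 distinct codes going in
print(len(np.unique(restored)))  # 512 -- every other code is now empty
```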
If REDCINE-X allowed the import of user ISO curves and had control screens for exact adjustment of the clip points, then you could bypass the ISO values and white balance adjustments altogether and relate the linear sensor data directly to the DPX Cineon (tm) code values, which is how I'm doing things with DNG processing.  If you make your own ISO curves from fitting the linear sensor data into the Cineon (tm) range, then you know the exact translation of sensor code value to DPX file code value without guesswork or arguing about what does what or not. In that way you can tailor the highlight detail and shadow noise to the subject matter in each shot, vs the exposure level on the sensor used at the time of shooting.  The assumption is that this is what was done by RED -for you- as I have noticed comments that such adjustments using native sensor balance are beyond the average camera user, but in place of knowing exactly what is going on with the data, you need to try to guess what has been done, not really ever knowing.
Dan Hudgins
San Francisco, CA USA
Developer of 'freeish' Software for Digital Cinema, NLE, CC, DI, MIX, de-Bayer (DNG), film scan and recorder, temporal noise reduction etc.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Craig Leffel
Date: Fri, 10 Feb 2012 01:30:26 -0600 (CST)
X-Message-Number: 7
Florian wrote; 
I must be missing something. The same amount of info is present no matter how the image is "processed" later: 
It is vital and absolutely matters how you process a RAW image before color correction, are you kidding? 
You are missing the concept of a "digital negative" and regard the "LUTed positive" as all you captured. 
________________ Snip _______________________ 
This whole conversation is what's wrong with most Dp's understanding of shooting and color correcting Red footage. 
There is NO reason anyone should be color correcting Red footage from a pre-processed or converted file format. 
Most of the REAL color correctors on the market, that are designed to actually do color correction as a professional and 
all day everyday task can color correct from the native Raw R3D file. 
This means the colorist has - 
1. The entire dynamic range of the sensor capture to work with. According to Red, that's 15 stops. In my experience, that's bullshit. 
However, having the entire dynamic range of the sensor and the capture at your disposal is important. 
2. The entire Metadata package of the Red SDK to work with. 
This means that I can strip out the ISO settings, the Kelvin settings, and anything else I need to do to reduce noise or recover detail, or bend the picture to fit 
expectations, consistency, quality of light or matching color balance to other shots from different periods on other cameras. 
3. The software is doing the debayer live and from the raw file itself, with the colorist able to make scene by scene decisions about the quality of the debayer. 
So, as many have said it doesn't really matter what you do on a Red. What's important is what your lighting ratios are in terms of Key to Fill 
and where you place your whites. As long as you don't clip the whites or blacks in the capture, a colorist using the raw file can slide the exposure scale 
up and down to fit the right spot on a PER SCENE basis. As mentioned earlier, if you expose toward the middle of the dynamic range you are giving yourself 
and the colorist lots of room to slide the entire dynamic range of the exposure up and down by a number of stops. As long as detail has been preserved in the capture 
there is no limit in terms of what can be done and manipulated. 
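To make the "slide the exposure scale" idea concrete, here's a minimal numpy sketch (the function name and the [0, 1] white-clip convention are mine, not anything from the RED SDK): on scene-linear data a stop is just a factor of two, so the shift stays lossless and reversible until values hit the clip points.

```python
import numpy as np

def shift_exposure(linear, stops, clip_white=1.0):
    # Hypothetical helper, not a RED SDK call: on scene-linear data one
    # stop is a factor of two, so an exposure slide is a multiply, and it
    # stays lossless as long as nothing lands outside [0, clip_white].
    return np.clip(linear * (2.0 ** stops), 0.0, clip_white)

grey = np.array([0.18])            # mid-grey placed near mid-range
print(shift_exposure(grey, 1.0))   # one stop up: 0.36, still unclipped
print(shift_exposure(grey, -1.0))  # one stop down: 0.09, fully reversible
```

Once a value clips against either end, the slide is no longer reversible, which is exactly the "don't clip the whites or blacks in the capture" condition above.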
Given that when color correcting this way the colorist can; 
Change ISO 
Change white balance 
Change Kelvin 
Change Flut 
Change Colorspace output 
Change working colorspace 
Change exposure 
Change the quality of the Debayer 
Change curve 
Change exposure range 
All of this can happen scene by scene in a high grade color corrector. The Red is the only camera on the market where the colorist can recover the entire sensor 
data and all captured dynamic range from the Raw file. The Alexa is completely incapable of this at this time. The Alexa Raw file is currently severely limited. 
For those that enjoy working from a predefined colorspace and a predefined dynamic range, Alexa works ok. Arri still has not figured out how to make a workable Raw file, 
and why they bother processing an image at 1920x1080 when that's the same space we broadcast in or display in is beyond me. As a former colorist, I'm not happy at all 
with the ability to recover the sensor data from an Alexa shoot. It's predefined in a Log-C colorspace. That space has a beginning and an end, and a limited range.... unlike the R3D file. 
If you're confused as to what Color Correctors I'm referring to, here's a partial list of those that can use R3D files natively and in real time; 
Quantel 
Mistika 
Baselight 
Film Master 
Scratch 
I'm not a Red fanboy. I've color corrected for 23 years on as many different cameras, systems and file types as you care to name. There are plenty of things I don't like about Red. 
However, if we're talking about process, it's the only camera on the market doing it even close to right. Arri is still trying to convince people that Prores is fine and that all we need to do 
is shoot and edit. Nothing could be farther from the truth for high end commercials, TV and Features. Sure, I'll bet some of you will say your work doesn't need to be color corrected. 
If you can find yourself an honest colorist, they'll disagree with you. I know many of you swear by Alexa. I wish I could show each and every one of you what you are missing from your sensor data, 
and what you actually captured - and what could be done with it - if I had the ability to show it to you. Your data and your work are being lost on the Alexa. 
The reality of this discussion is to think of exposure as a big ball of data in the middle of a predefined scale. You can place that ball anywhere you want, the scale remains the same, 
and the artifacts on either end of the scale remain the same. 
Best to all - 
Craig Leffel 
Former Senior Colorist 
Optimus 
Chicago / Santa Monica 
-- 
Craig Leffel 
Director of Production 
One @ Optimus 
161 East Grand 
Chicago, IL 60611 
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Nick Shaw
Date: Fri, 10 Feb 2012 07:55:24 +0000
X-Message-Number: 9
On 10 Feb 2012, at 06:26, Florian Stadler wrote:
And yes, Arri publishes LUTs designed for their cameras to make the transform from LogC space to Rec709, and that LUT happens to be a pretty decent (not as turnkey as with Alexa LogC, of course) starting point after said exposure treatment and processing.
I know a lot of people use ALEXA LUTs for REDlogFilm media, but don't forget the standard LogC to Rec709 LUTs from the ARRI web app include a colour matrix which is specifically designed to convert ALEXA Wide Gamut into Rec.709 colour space.  Since footage from a RED camera is not in this colour space to start with, the matrix is not really appropriate.  I would suggest a 1D LUT myself or a 3D LUT with colour space conversion switched off.
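A toy illustration of the difference being described here (the curve and the matrix values below are invented for the example; this is not ARRI's actual LogC math or matrix): a 1D LUT works per channel and can never move energy between channels, while the matrix baked into the standard 3D LUT does exactly that, which is why it is inappropriate for footage that isn't in ALEXA Wide Gamut to begin with.

```python
import numpy as np

# Invented for illustration: a generic shaper curve and a made-up
# wide-gamut-to-Rec.709-style matrix (NOT ARRI's published values).
curve = lambda x: np.power(np.clip(x, 0.0, 1.0), 1.0 / 2.2)

matrix = np.array([[ 1.6, -0.5, -0.1],
                   [-0.1,  1.3, -0.2],
                   [ 0.1, -0.3,  1.2]])   # rows sum to 1, so white is preserved

def apply_1d_lut(rgb, curve):
    # A 1D LUT applies one tone curve per channel, independently;
    # it cannot move energy between channels.
    return curve(rgb)

pure_red = np.array([0.5, 0.0, 0.0])
print(apply_1d_lut(pure_red, curve))  # green and blue remain exactly 0
print(matrix @ pure_red)              # the matrix bleeds red into G and B
```

So applying the full matrix-plus-curve LUT to RED material applies a colour space conversion that was never correct for it, while the curve-only (1D) version leaves the camera's own colorimetry alone.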
I would also say I do not go along with the necessity of developing footage shot at ISO 800 at ISO 320.  There was an argument for this with older RED gamma curves, but with REDlogFilm all highlight detail in a clip shot at ISO 800 is preserved when developed at ISO 800.
Nick Shaw
Workflow Consultant
Antler Post-production Services
London, UK
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Dan Hudgins
Date: Fri, 10 Feb 2012 00:08:50 -0800 (PST)
X-Message-Number: 10
Quote: [3. The software is doing the debayer live and from the raw file itself,
with the colorist able to make scene by scene decisions about the
quality of the debayer.]
I would agree that doing the final render from the original R3D is better than going through the ISO and K corrections then grading after for the most part, that is how my system works with DNG frames, from sensor data direct to final render that way the various filters are 'centered' right on the final grade and not off center where they may do more damage to the perceived results.
So I guess there is no DNG converter for ALEXA?  There is for SI-2K (tm) and it seems to be a camera on the market.  Last I heard Kinor-2K and Acam were on sale.
My primary criticism of the adjustments to the REDCODE values is that there are too many of them, and how they all interact seems less than obvious. It's like taking 9 prescription drugs at once. Having an alternative interface for translation from the sensor code values to the end-use values, one that shows the exact code translation in a clear way, would be an improvement that would clear up some of the fuzzy logic behind various ideas of what works best, as you could KNOW what happened to the sensor data and see both the original and result code values side by side, to know the exact exposure levels on the sensor itself. (As I can do by measuring a gray and white card without any corrections. Is there a way to get a TIF out of REDCINE-X without ANY corrections at all, so that the TIF code values are in 1:1 correspondence to the sensor code values from the ADC?) 
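For what a "debayer" actually does, here is about the simplest possible sketch: a bilinear demosaic in numpy. This is emphatically not what REDCINE-X or any vendor SDK does (their algorithms are far more sophisticated and undisclosed); it only shows the principle of reconstructing three channels from one sensor plane.

```python
import numpy as np

def conv2(img, kernel):
    # Tiny 'same'-size 3x3 convolution with edge padding (numpy only).
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def bilinear_debayer(mosaic):
    # Minimal bilinear demosaic of an RGGB Bayer mosaic: for each channel,
    # average the known same-colour samples in each pixel's neighbourhood.
    h, w = mosaic.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = 1          # R sites (RGGB layout assumed)
    g[0::2, 1::2] = 1          # G sites on R rows
    g[1::2, 0::2] = 1          # G sites on B rows
    b[1::2, 1::2] = 1          # B sites

    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])

    def interp(mask):
        # Normalised convolution: weighted sum of the known samples divided
        # by the summed weights of those samples at each pixel.
        return conv2(mosaic * mask, kernel) / conv2(mask, kernel)

    return np.dstack([interp(m) for m in (r, g, b)])

flat = np.full((8, 8), 0.5)
print(bilinear_debayer(flat).shape)   # (8, 8, 3): three channels from one plane
```

Every real debayer makes estimation choices beyond this, which is exactly why being able to pick the quality of the debayer per scene, or to see the untouched sensor code values, matters.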
Dan Hudgins
tempnulbox (at) yahoo (dot) com
San Francisco, CA USA
Developer of 'freeish' Software for Digital Cinema, NLE, CC, DI, MIX, de-Bayer (DNG), film scan and recorder, temporal noise reduction etc.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Michael Most
Date: Fri, 10 Feb 2012 07:06:53 -0800
X-Message-Number: 11
On Feb 9, 2012, at 11:55 PM, Nick Shaw wrote:
I know a lot of people use ALEXA LUTs for REDlogFilm media, but don't forget the standard LogC to Rec709 LUTs from the ARRI web app include a colour matrix which is specifically designed to convert ALEXA Wide Gamut into Rec.709 colour space. 
The Arri online LUT builder lets you build a LUT with or without a matrix. Building a LogC to Video LUT with no matrix and extended range yields a LUT that works very well with RedlogFilm footage, just as Florian described.
Mike Most
Colorist/Technologist
Level 3 Post
Burbank, CA.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Nick Shaw
Date: Fri, 10 Feb 2012 15:20:29 +0000
X-Message-Number: 12
On 10 Feb 2012, at 15:06, Michael Most wrote:
The Arri online LUT builder lets you build a LUT with or without a matrix. Building a LogC to Video LUT with no matrix and extended range yields a LUT that works very well with RedlogFilm footage, just as Florian described.
Absolutely.
That is why I said there was a matrix in "the standard LogC to Rec.709 LUT", and recommended "a 1D LUT ... or a 3D LUT with colour space conversion switched off."  To do this the user needs to understand properly how to use the options in the ARRI LUT generator web app, and I have come across many people who do not fully understand those options, including people who I would expect to know better!
Nick Shaw
Workflow Consultant
Antler Post-production Services
London, UK
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Michael Most
Date: Fri, 10 Feb 2012 07:30:18 -0800
X-Message-Number: 13
On Feb 9, 2012, at 11:30 PM, Craig Leffel wrote:
There is NO reason anyone should be color correcting Red footage from a pre-processed or converted file format. 
I disagree. If you have a project that consists only of Red originals, with no other cameras involved, no visual effects, no speed effects, and, well, basically only cuts and dissolves, that statement makes sense. But most "real" projects, especially long form projects, don't exist in that kind of a vacuum. There is often a healthy mix of camera originals (from multiple cameras) and visual effects, and the only real way to keep things properly conformed and coherent is to convert to a "standard" container for everything. That way things can be properly maintained and managed by editorial. And the truth is that with sensible settings, there is very little to no difference between doing a "live passthrough" RAW to RGB conversion and doing a transcode, because ultimately you don't correct RAW directly in any case. It must become an RGB image for any further manipulation to take place. Yes, the conversion settings can help optimize that, and yes, it's nice to work that way when you can, but that's not always the case. And for all the talk about what's "proper," I know of very few large features shot on Red that have gone through a DI pipeline in their native form. Some, but very few, for the very reasons I mentioned.
So, as many have said it doesn't really matter what you do on a Red. What's important is what your lighting ratios are in terms of Key to Fill 
and where you place your whites. As long as you don't clip the whites or blacks in the capture, a colorist using the raw file can slide the exposure scale up and down to fit the right spot on a PER SCENE basis.
That's true, but it's also true that if you use RedlogFilm as the gamma curve and leave the camera metadata alone, you're not likely to clip anything that wasn't clipped in original production, provided the cameraman and/or the DIT knew what they were doing. The range that's maintained by the RedlogFilm conversion is very, very wide, and unlike previous Red gamma curves, it's very unlikely that you're going to see something clipped that wasn't.
Change ISO ... Change white balance ... Change Kelvin ... Change Flut ...
ISO and Flut are the same thing. The only difference is that Flut is scaled for tenths of a stop.
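Put another way, both are just exposure offsets on a power-of-two scale. A small sketch (the helper name and the ISO 320 base are illustrative; the exact mapping to RED's FLUT units is an assumption here):

```python
import math

def iso_to_stops(iso, iso_base=320):
    # ISO is a power-of-two scale: doubling the ISO adds one stop of gain.
    # iso_base=320 mirrors the 'native 320' rating discussed in this thread.
    return math.log2(iso / iso_base)

print(iso_to_stops(640))   # exactly 1.0 stop over ISO 320
print(iso_to_stops(800))   # ~1.32 stops over ISO 320
```

Whether the offset is dialed in as an ISO value or as a fractional-stop FLUT adjustment, the underlying data is untouched; only the rendering gain changes.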
The Red is the only camera on the market where the colorist can recover the entire sensor data and all captured dynamic range from the Raw file. The Alexa is completely incapable of this at this time. The Alexa Raw file is currently severely limited. 
Please explain this, because I haven't found that to be true at all.
Arri is still trying to convince people that Prores is fine and that all we need to do is shoot and edit. Nothing could be farther from the truth for high end commercials, TV and Features.
Once again, neither I nor almost anyone I know - all of whom work every day in high end television and features (mostly television) have found that to be the case. LogC Prores files work quite well for television series work when put through a proper color pipeline. I really don't understand what you feel is the problem, unless, as I said earlier, the material is not being competently shot. And I don't think Arri is "trying to convince people" of anything. They provide tools and choices, and those tools and choices are selected by cameramen and production teams. ProRes HD files are one choice. Uncompressed HD is another. ArriRaw is another. Personally, I like the idea of having choices that can be tailored to the needs of the job at hand in terms of resolution, file size, quality, flexibility, budget, and available post time. Obviously Sony likes that approach as well, as they're doing essentially the same thing on the F65. And although I like Red and what they've done, the fact is
that Red is the one company that forces you to shoot a format you might or might not really need or even want. So there's two sides to that discussion...
Mike Most
Colorist/Technologist
Level 3 Post
Burbank, CA.
----------------------------------------------------------------------
Subject: RE: RED Workflow
From: Daniel Perez
Date: Fri, 10 Feb 2012 11:01:42 -0500
X-Message-Number: 14
Craig Leffel wrote: 
There is NO reason anyone should be color correcting Red footage from a pre-processed or converted file format. 
Most of the REAL color correctors on the market, that are designed to actually do color correction as a professional and 
all day everyday task can color correct from the native Raw R3D file. 
It is important to note though that it is not always clear (to me at least) how the RED SDK fits in the floating point workflow of those professional color correction systems. In particular when a RED ROCKET card is involved.
As far as I understand, most color correction systems must use the RED SDK to process the RAW into float RGB for further color grading (YRGB in Resolve). It is not clear how/when all RED SDK embedded color processing is done: floating point? what precision? ... in the end, what does the RED SDK deliver to the RGB color engine? Is it a float framebuffer? ... I've been told it delivers fixed point RGB !!! ... in either 8bits, 10bits, 12bits or 16bits (when the system lets you choose).
No system actually grades from the "native" R3D. They all grade from the RGB output provided by the RED SDK ... maybe "live" or "real time", but you are not actually grading the RAW Bayer pattern. The question here is if all those embedded extra color transformations that the RED SDK provides can be considered part of the professional floating point grading system ... or if they should be considered just an input pre-process.
Daniel Perez
VFX/DI - WhyOnSet Madrid - Tremendo Films
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Keith Mottram
Date: Fri, 10 Feb 2012 17:18:07 +0000
X-Message-Number: 16
On 10 Feb 2012, at 16:51, Craig Leffel <craig@optimus.com> wrote this about Arri:
AND they've decided that getting in bed completely with Apple
Don't know about anyone else but I'm looking forward to road testing Baselight's FCP plugin... also, got to be honest, I prefer the look of ProRes 4444 Log to Red. As for the cameras themselves - give me an Audi over a Hummer any day of the week...
Keith Mottram
Edit/ Post, London
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: David Perrault
Date: Fri, 10 Feb 2012 12:37:58 -0500
X-Message-Number: 17
"The reality of this discussion is to think of exposure as a big ball of data in the middle of a predefined scale."
Really ?!?
I think that's a bit delusional - that's just not the way photography, 
as an art, is practiced.  There is photography and there is scientific 
imaging - and there is a difference.
Imagine how *The Godfather* would look if exposures were chosen in such 
a scientific manner?
Sometimes things are clipped or squashed for a reason.
-David Perrault, CSC
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Craig Leffel
Date: Fri, 10 Feb 2012 12:19:32 -0600
X-Message-Number: 18
On Feb 10, 2012, at 11:37 AM, David Perrault wrote:
Really ?!?
Yes, Really.
I think that's a bit delusional - that's just not the way photography, 
as an art, is practiced.  There is photography and there is scientific 
imaging - and there is a difference.
Not really. Look at the most celebrated still photographers of all time, especially the ones that specialized in 
making and manipulating negatives. Take Ansel Adams. Known for the widest of latitude in the resultant prints that came from his exposures.
He and others pioneered concepts like N-1 development to place highlights on the negative so that they were in a place capable of being reproduced
at the intended scale or stop if you will. Placing your exposure within a capture medium you understand is exactly what Photography
and the Art of Photography is all about. You can't break the rules with any kind of knowledge and consistency if you don't know them.
Imagine how *The Godfather* would look if exposures were chosen in such 
a scientific manner?
Is your point that if the Godfather was captured with a flat neg in a defined space without clipping the capture curve that a timer or colorist couldn't
have possibly achieved that look? Because I disagree. At that point we're talking about characteristics of certain film stocks, which in this discussion
means a camera or a file format. I would argue that the Art you see has as much to do with physical characteristics as it does the way it was printed.
The exposure of the capture is secondary. Those early photographers would argue that the Art comes in the darkroom where they purposely decided
how to present their images and made version after version of burning and dodging, and chemical bath changes, and differences in time per bath, and
the kinds of actual developer, stop and fix they used. As well as 2 bath developer. ALL of that contributed to their look. Photomechanical and physical
processes after the fact. Composition, framing and exposure have to happen in the camera. The rest is taste and personal opinion.
Sometimes things are clipped or squashed for a reason.
True enough... and when it was a time where we projected light through a physical surface and onto a wall, that kind of thinking made sense.
Digging into a film stock was just fine when the person making the exposure knew the intended display medium and format. The sheer fact that light is physically penetrating
through a physical object has everything to do with the intended output and the beauty it produces.
We're not living in those times anymore.
CL
_________________________
Craig Leffel
Director of Production
One @ Optimus
161 East Grand
Chicago, IL 60611
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Noel Sterrett
Date: Fri, 10 Feb 2012 13:44:13 -0500
X-Message-Number: 19
Daniel Perez wrote:
The question here is if all those embedded extra color transformations that the RED SDK provides can be considered part of the professional floating point grading system ...
Any data transformation (color space conversion, debayering, filtering, 
etc.) that cannot be perfectly reversed, involves a loss, however 
slight, of information. Where multiple transformations are involved, the 
order of processing can also influence the result. So in a perfect 
world, it would be preferable to have direct access to the sensor data, 
so that all processing thereafter would be left up to color correction 
systems rather than hidden by the manufacturer, either in the camera, or 
their SDK.
But we don't live in a perfect world, and at the moment, very few 
cameras let you really peek inside the sensor. Imagine what movies would 
have looked like if Kodak had hidden how film responds to light.
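Quantisation is the everyday example of such an irreversible step, and it also shows why the order of processing matters. A small numpy sketch (the 8-bit round trip and the gamma curve are generic, not any particular SDK's pipeline):

```python
import numpy as np

def to_8bit(x):
    # Irreversible: many nearby float values collapse onto one of 256 codes.
    return np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0

x = np.linspace(0.0, 1.0, 1000)
gamma = lambda v: v ** (1.0 / 2.2)

# The same two operations applied in different orders:
a = gamma(to_8bit(x))   # quantise first, then grade
b = to_8bit(gamma(x))   # grade first, then quantise
print(np.abs(a - b).max() > 0)   # True: the order changes the result
```

Grading after quantisation cannot recover shadow values that already collapsed to the same code, which is the loss being described: once a transformation like this is hidden inside a camera or an SDK, the colorist inherits it whether they want it or not.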
Cheers.
-- 
Noel Sterrett
Admit One Pictures
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Fri, 10 Feb 2012 11:19:21 -0800
X-Message-Number: 20
Is your point that if the Godfather was captured with a flat neg in a defined space without clipping the capture curve that a timer or colorist couldn't have possibly achieved that look?
No, his point is that a flat negative would give a colorist the option of rendering it just about any other way he or she wanted. This is not desirable.
Given that we, as cinematographers, don't process and print our own work and must rely on the expertise of others, it is too easy for a rogue colorist or a meddling producer to come along later and change it all. If we shoot it such that it can really only be graded one way then we protect the integrity of our work and the director's vision.
The concept of "Just shoot a flat negative and we'll do the rest" moves the role of cinematographer from artist to technician. I'm a director of photography, not a director of data capture, and my role is not to simply hand over a bunch of data so that someone else can have all the fun. I shape it first, and I expect it to retain that basic shape, with a bit of buffing around the edges, all the way through the post process.
I have to admit that I'm a stickler for shooting a solid "negative" which offers a fair bit of leeway in post, but I do that for my own peace of mind rather than to give someone a license to do it their way later.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Keith Mottram
Date: Fri, 10 Feb 2012 19:41:46 +0000
X-Message-Number: 21
There are examples where it is not possible to rebuild the look well in post. How about someone heavily backlit by the sun - in a shot like that I don't want all the detail known to man in the subject's clothes, but I want the sun to wrap and burn round the subject in an organic manner. Exposing for maximum range in this and other cases would ruin the shot. Unless the lighting is naturally flat, the exposure should be the sweetest point for the end image, not the sweetest point for the majority of options - unless there is a specific need, for example VFX. 
Then again when it comes to commercials we're all technicians, if I ever think otherwise I just become depressed.
Keith Mottram
Edit/ Post... but does like to shoot occasionally.
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: David Perrault
Date: Fri, 10 Feb 2012 14:45:06 -0500
X-Message-Number: 22
"The exposure of the capture is secondary."
Uhmm...  No.
To a scientific objective, yes.  But the creative mandate does not
support your post-production-centric way of looking at things.
Comparing Ansel Adams prints and neg density to modern film and television productions is just obfuscation.
If the creative mandate of the cinematographer is maintained, with
collaboration that extends the final image,  then there is no denying
that capturing the most information possible has merit.
But when does that actually happen?  Modern production realities often
remove a degree of control from the cinematographer in the final image 
manipulations.  And that is putting it nicely.
The choice of exposure, and the inherent manipulation this provides, is
one of the ways photographers take the science of imaging into a
creative place.
"The sheer fact that light is physically penetrating through a physical object has everything to do with the intended output and the beauty it produces. We're not living in those times anymore."
Thinking that way is pushing the art backwards.
"You can't break the rules with any kind of knowledge and consistency if you don't know them."
That's being pedantic without allowing for the creative mandate of those 
that do know the rules.
You need to read up on how *The Godfather* was shot.
-David Perrault, CSC
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Kyle Mallory
Date: Fri, 10 Feb 2012 14:37:29 -0700
X-Message-Number: 23
My preferred/personal workflow:  Since you're shooting RAW. Monitor RAW.
If I don't have enough light to make it work, then turn off RAW 
monitoring, and tweak ISO/etc to get an idea of what can be recovered in 
post (or if its even worth trying to recover).  But for the 95% of 
everything... I have to remind myself that monitoring w/ meta (or 
Non-RAW) is a false representation of what the camera is actually recording.
The important thing is that the camera records what the camera records, 
regardless of your meta and how you choose to monitor.  Everything else 
is just pushing bits around *after the fact*.  You aren't going to 
magically create what wasn't there originally.  And if you think you 
are, you are wrong, and in fact you are most likely throwing information 
away somewhere else.
--Kyle Mallory
Filmmaker Hack
Salt Lake City, UT
----------------------------------------------------------------------
Subject: Re: RED Workflow
Date: Fri, 10 Feb 2012 11:29:57 -0800
X-Message-Number: 24
Thanks guys,
This discussion took an interesting turn...
Just to jump back to the original topic - I understand what you are saying about the distribution of dynamic range and rating the camera.
I think that some people are confused since the term 'native iso' is still being used in this context occasionally. Perhaps this is misleading when discussing the RED camera.
If the 'native iso' of the camera is 320, then shooting 800 is 'underexposing'. Switching to 'raw view mode' shows a darker image as you rate the camera higher, and so on. I like the dynamic range distribution definition of this better.
So after reading your replies it seems that my original idea is unnecessary. Shoot and develop at the same iso, and know your camera and its abilities at the chosen iso - pretty much what I've been doing.
I'm still tempted to create a look for dailies that will be slightly different between day/ext and night/ext and treat it as I would treat two different film stocks used for the same purpose, but this becomes a creative choice more than a technical necessity.
Tal
Tal Lazar
Director of Photography
----------------------------------------------------------------------
Subject: Re: RED Workflow
Date: Fri, 10 Feb 2012 13:24:22 -0800
X-Message-Number: 25
Things change. What? You already knew that? My point is that all of us, as
artists and technicians, must nimbly navigate the actual issues impacting
"authorship" of the image in the here and now.
At the most basic level, shooting a fat, clean digital "negative" that
travels into post with metadata that indicates intent should work a treat.
IF the DP has enough "juice" to keep their "look" relatively intact through
to the finish that's great (if they'll pay you to participate in the grade
even better, but we all know how often that happens ;-)). If the producers
see the DP as more technician than artist (as is typical in spot work),
most of us would consider that a poor use of resources, but unless you
don't plan to cash their check...
I know some DP's try to make a "thin" negative that falls apart with heavy
grading just to wrest control from their "collaborators". There was a very
high profile studio tentpole where the well known DP and the well known
colorist ended up in a veritable game of chicken where the DP kept dropping
exposure so that when the studio pushed the colorist to lift the levels
they would be stymied by the noise floor. WTF. Is this really the road we
want to go down?
Cameras that shoot in RAW color space with 12+ stops of DR like the RED
Epic present a different set of opportunities, and risks, than other
formats. IMHO it makes it more critical that the DP and the colorist are in
the loop with the creatives in designing the "look". Insert the usual great
power, great responsibility rap here. Bitch and moan all you want but does
anyone really expect to get that genie back in the bottle?
Blair S. Paulsen
4K Ninja
SoCal
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Fri, 10 Feb 2012 15:39:42 -0800
X-Message-Number: 26
If the 'native iso' of the camera is 320, then shooting 800 is 'underexposing'.
Not really. "Native" just means that the signal coming out of the A/D converter isn't being boosted any further in the DSP. Native gain means very little because, while it is the "cleanest" signal you'll get out of the camera, the noise level is what really defines how fast it is.
Even though ISO 800 is "underexposed" in relation to the native gain there's nothing wrong with using it if you like the results. There's no law that says you can only use the camera at its native gain. How to rate the camera is a creative decision, not a technical one.
I'm still tempted to create a look for dailies that will be slightly different between day/ext and night/ext and treat it as I would treat two different film stocks used for the same purpose, but this becomes a creative choice more than a technical necessity.
Exactly. And keep in mind that you can tweak FLUT and get into the RGB gains and contrast settings and tweak the look to your heart's content without affecting the underlying image. It's all reversible as long as you don't clip or push something vital into the noise floor, and post will see the look that you intended when it first comes up on a monitor.
I tend to watch Rec 709 and then toggle into raw occasionally to see if something bad is happening. My understanding is that the traffic lights always look at raw so they can give you a heads-up if something's wrong.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Art Adams
Date: Fri, 10 Feb 2012 15:43:41 -0800
X-Message-Number: 27
when the studio pushed the colorist to lift the levels they would be stymied by the noise floor. WTF. Is this really the road we want to go down?
No, but if the colorist doesn't respect the DP's intentions it's the road that will be traveled. Why would you expect anything less? Most DPs don't get into this business to be technicians. We'll fight for creativity. If someone doesn't like what we're doing then they need to tell us, and then--if things don't change--replace us. Fighting with colorists is not a productive use of anyone's time.
IMHO it makes it more critical that the DP and the colorist are in the loop with the creatives in designing the "look".
Under ideal circumstances that's exactly what happens. It doesn't always work that way, though.
-----------------------
Art Adams | Director of Photography
San Francisco Bay Area
showreel -> www.artadamsdp.com
trade writing -> art.provideocoalition.com
---
END OF DIGEST
CML-DIGITAL-RAW-LOG Digest for Saturday, February 11, 2012.
1. Re: RED Workflow
2. Re: RED Workflow
3. Re: RED Workflow
4. Re: RED Workflow
5. RE: RED Workflow
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Tsassoon
Date: Sat, 11 Feb 2012 08:22:20 +0530
X-Message-Number: 1
DNR
Tim Sassoon
SFD
Santa Monica, CA
Sent from my iPhone
On Feb 11, 2012, at 2:54 AM, Blair Paulsen  wrote:
DP kept dropping exposure so that when the studio pushed the colorist to lift the levels
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Bob Kertesz
Date: Fri, 10 Feb 2012 20:30:31 -0800
X-Message-Number: 2
There was a very high profile studio tentpole where the well known DP and the well known
colorist ended up in a veritable game of chicken where the DP kept dropping exposure so 
that when the studio pushed the colorist to lift the levels
they would be stymied by the noise floor. WTF. Is this really the road we want to go down?
Sounds very much like what was done on the original Godfather.
--Bob
Bob Kertesz
BlueScreen LLC
Hollywood, California
DIT and Video Controller extraordinaire.
High quality images for more than three decades - whether you've wanted them or not.
We sell the portable 12 volt TTR HD-SDI 4x1 router.
For details, visit http://www.bluescreen.com
----------------------------------------------------------------------
----------------------------------------------------------------------
Subject: Re: RED Workflow
From: Tsassoon
Date: Sat, 11 Feb 2012 10:29:33 +0530
X-Message-Number: 4
OTOH, in the real world, working on a movie with multiple VFX vendors and the need to do quite a bit of processing to RED images being used at their highest resolution (removing lens distortions, sharpening, crops, technical or pre-grade, etc.), one would no more hand over the RAW footage for vendors to work from than one would hand over OCN in a film show for vendors to scan or TK themselves (besides the risk of damage or loss).
There are reasons production does the scanning in a film show, and there are reasons to pre-process to an approved distribution DPX or EXR in a digital show. Mainly so there's only one movie being made.
Producers are not by nature enthusiastic about paying for the work, but we still manage to sell it :-)
Tim Sassoon
SFD
One more day in Mumbai
Sent from my iPhone
On Feb 11, 2012, at 12:14 AM, Noel Sterrett <noel@admitonepictures.com> wrote:
Any data transformation (color space conversion, debayering, filtering, 
etc.) that cannot be perfectly reversed, involves a loss, however 
slight, of information.
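Noel's point can be made concrete with a toy example (a sketch in Python; the numbers are invented for illustration, not anything from the thread): quantise samples down to 8 bits, convert back, and the originals are never recovered.

```python
# Toy illustration of the point above: a transform that
# discards precision (here, quantising float samples to
# 8-bit integers) cannot be perfectly reversed.
values = [0.1234, 0.5678, 0.9012]           # "sensor" samples in 0.0-1.0
encoded = [round(v * 255) for v in values]  # lossy 8-bit encode
decoded = [c / 255 for c in encoded]        # best-effort reverse
for v, d in zip(values, decoded):
    print(f"{v:.4f} -> {d:.6f}  (error {abs(v - d):.6f})")
# None of the decoded values exactly match the originals;
# the rounding step destroyed that information for good.
```

Debayering, colour-space conversion and filtering are vastly more elaborate than a single rounding step, but the principle is the same: once the transform is not perfectly invertible, some information is gone.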
----------------------------------------------------------------------
Subject: RE: RED Workflow
From: "Geoff Boyle"
Date: Sat, 11 Feb 2012 08:04:22 -0000
X-Message-Number: 5
I've been watching this unfold and kept saying to myself "stay out of it"
but really!
Guys, there are a million ways to do anything and which one is "right"
varies job by job and facility to facility, client to client and place to
place.
THERE IS NO "RIGHT" WAY!!
There's only the way that works for you on that particular occasion.
Right now I'm assembling a 3D piece for a conference I'm speaking at and I
have rushes in SIV, CF mux, CF non mux, DPX, XDCam, NXCam, R3D, GoPro,  I'm
sure I've missed something.
I have SpeedGrade NX, Edius 7, Premiere Pro, RedCineX, Firstlight, Resolve
and on and on.
I'm transcoding everything to DPX using whichever route works best for that
source.
For CF the route is to establish a look (but do no 3D work) in Firstlight, then output to DPX via Adobe Media Encoder; with R3D it's RedCineX and out to DPX; Edius seems to be best for Sony formats; and on and on.
In theory most of the edit software can work with the formats natively, and
they do, but I'm finding that there is a "best" route for each format and
they are not in the same software.
So, I use the best for any individual format and then take the common format
of DPX into SG and do any 3D work and final grading there, outputting to DPX
to then create a DCP.
Is it the best way?
It is for me on this job.
The next time I try this????
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
---
END OF DIGEST
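Geoff's "best tool per format, common DPX out" routine can be sketched as a dry-run script (illustrative only: the file names, extensions and format-to-tool mapping below are my assumptions, not his actual setup, and the script only prints the routing rather than transcoding anything):

```shell
#!/bin/sh
# Dry run only: print which (stand-in) tool would carry each
# source format to the common DPX working format.
mkdir -p rushes
touch rushes/scene1.r3d rushes/scene2.mxf rushes/scene3.mov
for src in rushes/*; do
  case "$src" in
    *.r3d) tool="RedCineX" ;;   # R3D: RedCineX out to DPX
    *.mxf) tool="Edius" ;;      # Sony formats via Edius
    *)     tool="ffmpeg" ;;     # generic stand-in for the rest
  esac
  echo "$src -> DPX via $tool"
done
```

In practice most of the routes Geoff lists run through GUI tools, so the mapping lives in an operator's head rather than a script; the point is simply that each format gets its own best path into one interchange format before the common grade.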


As I mentioned at the beginning, this is a snapshot of attitudes at the start of 2012, and there's a lot of good sense here; but all methods, to my mind, are also rituals that in the end are there to get you through the day.


Terry Flaxton