Friday, 2 December 2011

A Kind of Wonder

In the last century the iconic image was more common, thanks to the lower level of image production generally. Now the tsunami of images, and the fact that the nature of the iconic has been identified and thereby disempowered by its own ubiquity and by the ubiquity of the image in general, renders the newly iconic almost impossible to produce.
Cartier-Bresson's 'decisive moment', the moment that yields the essential image characterising an event and that is available to any photographer with the technique to capture it, and Conrad Hall's confusingly titled 'photographic moment' (confusing, given that he was a cinematographer) are now available for all to achieve, as technique has been quantised, digitised and made ready for popular use through the 'professionalisation' of software. Naturally, when software developers could increase functionality, they did, and this led to software that outputs the semblance of the professional even when the person addressing it has very little professionalism; professionalism is much more than 'the look of a thing'. Higher educational institutions took on the need to familiarise their students with 'the look of the qualitative' and adopted these software solutions so that an apparently 'more professional' trainee might be produced for the job marketplace. Equally, trainees met this new level of training with enthusiasm, and mass technique and mass aspiration to be the single producer of the iconic rose to meet the challenge. But as the Italians rightly say: 'Pochi sono chiamati, ma molti rispondono', meaning:

‘Few are called, yet many answer’.

Conrad Hall maintained that each still frame in a shot should have photographic quality (approaching Cartier-Bresson's decisive moment in certain senses), but mainly compositional quality, so that between the beginning frame of a shot and its end frame, every frame in between, as the camera's eye roams across the scene, should have the highest compositional quality as well as the 'correct' play of light and subject activity. Hall was saying that even when the camera's eye roams across what could be called abstract compositions, because the subject cannot always be in frame (when the camera passes behind a post, for instance), the image produced should be like that of an abstract painter, perfect in all of its attributes. It should follow that the ubiquity and availability of high-quality equipment, and training to a high skill level, make this level of awareness of the construction of the image available to all.

If the craft cannot be applied in the act of capture, no matter: new compositional functionality is available in programmes that fix reality. Take After Effects, for instance: there is not much that cannot be rearranged in this programme when it is aligned, and data exchanged, with Photoshop, so that what was not achieved in the craft act can be generated in 'post'. Post meaning: the situation in which one has time to think and dwell on the construction of all the elements so that they appear to have been produced in the act of capture. This is of course both tautological and impossible.
But the world calls us to act when acts are necessary, and craft acts must be realised when that moment calls. Post construction of the iconic is false because it is both pre-conceived and post-conceived: realised from a position that understands ubiquity and cliché, yet without the discriminative ability that would stop its production. It speaks of what once was iconic and tries to duplicate what others have done before, and in its replication in a plastic medium it renders images non-iconic through lack of the taste that, in the moment of capture, would be the very thing that resisted cliché.
But post analysis is simply that: a dull and stultifying practice which renders its compositions dull in turn, and which stultifies the production of that sense in us which responds to the observation of the iconic. A kind of wonder.
The ready ability to respond to the world at the moment when it produces the circumstances for the iconic comes from continuous practice, conscious awareness and the desire to stand on the verge of excitement at the possibility of its production. The excitement of visiting this moment through the medium of realisation is what any craftsperson can utilise to elevate their practice to art.

Saturday, 22 October 2011


Impossibly, something absolutely perfect happened last night whilst watching Woody Allen's 'Midnight in Paris'.

For years film and video have been trying to be self-reflexive, to truly encode the fact of their making in relation to the audience. This is about how the subject, the makers and the audience are bound together in a group agreement to suspend disbelief about the act of watching a fiction, and about how the cleverer works encode this into the subject matter to reveal some deeper truth just when you are in the depths of immersion in the fiction of the piece.

In the middle of the film Gil Pender goes back in time and meets Salvador Dalí, Man Ray and Luis Buñuel.

Gil: I'm Gil, nice to meet you. It's a pretty name.
Man Ray: A man in love with a woman from a different era. I see a photograph!
Buñuel: I see a film!
Gil: I see an insurmountable problem!

At that exact moment the projector in the cinema turned off and the safety lights came up and I was amazed that Woody Allen had arranged for thousands of cinemas across the globe to do this in every performance of the film. We sat for a moment and I mused on the nature of going to see films and engaging in fictions and what immersion and suspension of disbelief means.

A voice from the projection box announced that 'we'll get the film on as soon as possible'. How amazing that Woody had issued dialogue for the cinemas to speak. Then the sound came up to let us back into the film gently, then the image, then the lights went down. What orchestration. I got back into the film.

Later I went to the box office and they told me that the electricity in the small city where I live had gone off at that exact moment.

You couldn’t have planned it...

Thursday, 8 September 2011

The Developing Language of Digital Technologies

On 7th September 2011 in Los Angeles, the Directors Guild of America exhibited the new Sony F65 camera to the movie industry. This is Sony's new flagship digital 4K camera with an 8K imaging sensor (where 4K refers to roughly 4,000 horizontal pixels of resolution; this is the gold standard because it comes near to what a 35mm negative can do in terms of resolution and detail). It is only the third 4K camera to have been made, after the Red One and the Red Epic. So why an 8K sensor? According to the Nyquist-Shannon sampling theorem, to derive an accurate measurement of (in this case) resolution, you need to sample at twice the rate of the finest detail you wish to resolve. An 8K chip therefore delivers a true 4K of resolution. With the F65 Sony is seeking to take the high ground of digital cinematography with an act constructed to displace all of its competitors.
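The sampling arithmetic above can be sketched in a few lines. This is illustrative only: applying the Nyquist-Shannon theorem loosely to a single-sensor camera, to resolve N pixels of true detail along one axis you want roughly 2N photosites along that axis.

```python
def required_photosites(target_resolution: int, nyquist_factor: float = 2.0) -> int:
    """Photosites needed along one axis to truly resolve `target_resolution` pixels."""
    return int(target_resolution * nyquist_factor)

target_4k = 4096                      # horizontal pixels in a 4K deliverable
print(required_photosites(target_4k))  # 8192 -- hence an "8K" sensor for a "4K" camera
```

Hence Sony's claim is at least numerically coherent: an 8K photosite count is what the theorem demands for a genuinely 4K result.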

In what follows I want to examine the growing description of this work: a developing language, with an accompanying development of ideas, that even the most blasé film or media student is to some degree conversant with, though familiarity with the language does not necessarily mean they understand the subject, especially as much of what professionals say is said within metaphor. So, to reveal as much as possible of the indicators and referents of true meaning, I wish to use some of the flurry of posts on the Cinematographers Mailing List (CML) on the day after the presentation of the Sony F65 as a snapshot with which to examine the attitudes of those at the coal face of this developing technology, as well as the developing language itself.

The CML was created by the well-respected UK Director of Photography Geoff Boyle, whose standing comes from, amongst other things, his maxim 'test, test, test', which is pure scientific materialist values. There are various professionally oriented lists online, like the absurdly named but hugely useful (and respected) Creative Cow, which deals with most professional software programmes; there are also many lists full of 'wannabes', students or ex-students who wish themselves into professional company. Of course as these people mature those lists also become more professional. CML, however, takes no prisoners and excels at 'flaming': there is zero tolerance for professional stupidity, that is, asserting something you do not know to be true yourself through having tested the logic, or without citing peer-agreed and unquestionable professional reference.

The professional practitioner has a peer-review process as stringent as the academic model, with the same loss of respect from your peers if you get it wrong. In the early days of the groundbreaking company Red Cameras, Red used the enthusiasm of its early-adopter community as a PR space to broadcast its product. CML participants looked on initially, then challenged the claims of the adherents, and some would argue that this was to Red's greatest benefit: when the CML community had itself tested and then praised the product, the approval was worth that much more for its initial withholding. CML has many different cinematography lists, from lenses to grip gear, from 70mm cameras to digital cinematography, and these are visited daily both by people at the very top of the professional sector who are practising in the industry and by people who mostly stay silent to learn: from feature film cinematographers down to second or third level camera assistants or edit assistants.

In attempting to reveal the meaning within the language of these professionals, my intent is to disclose what will become important for theorists and later users of the technology when it filters down to educational, commercial or domestic use. It is becoming clear that the gap between professionals and, further along the bell curve, the early adopters is closing. There is a set of reasons for this, chief amongst them the simple fact that mass production eventually contributed to mass availability and then mass demand for higher quality. There is also Gordon Moore's Law, which states:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer”. Gordon Moore, Electronics Magazine, 19th April 1965. In 1975 Moore revised his projection to a doubling every two years ('Progress in Digital Integrated Electronics', 1975).
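Moore's 1965 projection can be checked with simple arithmetic. The sketch below assumes a starting figure of roughly 64 components per chip in 1965 (the power of two nearest his era's chip complexity); doubling every year for ten years then lands almost exactly on his famous 65,000 figure, and the 1975 revision slows the doubling to every two years.

```python
def components(start_year: int, end_year: int, base: int, years_per_doubling: float) -> int:
    """Project component count under an assumed exponential doubling law."""
    doublings = (end_year - start_year) / years_per_doubling
    return int(base * 2 ** doublings)

print(components(1965, 1975, 64, 1))      # 65536 -- Moore's "65,000" by 1975
print(components(1975, 1985, 65536, 2))   # a further 32x under the revised law
```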

Theorists seek to sit outside the bell curve of technology adoption, sometimes between research labs and professionals; sometimes they create ideas prior to the research labs. Theorists evoke what happens in the world, or in a specific domain, through the use of language, and the issue in this article is language: how it develops, who uses it, whether it informs thought, or whether new ideas generate new language, in a co-dependency of arising, as Noam Chomsky would have it. So we are now at a moment, in September 2011, where we have larger chips and faster recording mechanisms that handle the data those chips output. But higher resolution isn't everything (as we'll see in the comments presented below). There is colour bit depth: the accuracy of rendition both in capture at the photosites and in how those photosites are 'read' (was the recording 8, 10, 12, 14 or 16 bits of colour, or higher?). There is frame rate, for smoothness of movement and increased immersion in the display device. And there is the dynamic range of the entire image. Does it replicate the functions of the eye? If the camera does, does the display device? And so on and so forth.

So there will be a degree of jargon in what follows, meta-language and meta-ideas. I'll not annotate at every moment, but occasionally I will try to explain the ideas, and I'll summarise at the end. Please pursue the information even if the numbers go beyond your attention span (or will to live); these will be revealed to be either true, sleight of hand, or untrue (through a distortion of truth). If you can stay with the argument, I shall seek to reveal the discourse between professionals and shine a light on the potential future meaning of the exchange between them. Please also refer to my online oral history research resource, A Verbatim History of the Aesthetics, Technologies and Techniques of Digital Cinematography, which catalogues a global view of developments in this new subject area by asking practitioners, theorists, cinematographers, artists and professionals who use this technology what they think it is, what it does and what changes it is causing to happen. It can be found at: Lastly, I found myself writing extensive notes in the text to make what was written understandable, then realised that these interrupted the narrative of the professionals musing on what was going on and what would arise from these technological developments. I pulled them out and placed them in the typical footnote position, then as asterisked notes at the end of this article; yet again the interruption was huge.

So I have now reorganised the entire article so that the day's exchange is followed by a section called Preliminary Summation, then a section entitled 'Technical Notes as an Element of the Argument', which is followed by a conclusion (or coda), itself a set of parameters to be read as just as informational and revealing as the rest of the language. I note this here because I am seeing the form I am writing within change before my eyes.

On Wednesday, 7th Sep 2011 at 00:53:38 -0400 (EDT), the strand of exchange of ideas began on the cml-digital-raw-log digest. Tim Sassoon, a respected professional grader or colourist, begins the exchange by quoting an online mail-out from Band Pro, suppliers of professional equipment:

"Band Pro is now accepting pre-orders for the new Sony F65 digital cinema camera. With Sony's F65 Introductory Pack you can be one of the first to get their new 4K camera when it starts shipping in January 2012. And, with the full compliment of accessories that are included in the pack price of $85,000 you'll be ready to shoot 16-bit 4K footage out of the box. The Sony F65 camera utilizes an 8K digital sensor..." Tim comments: “I gotta say, that's pretty aggressive pricing for Sony. Will Arri step up to the 4K plate? I'd be willing to bet that by 2014, shooting or posting features at 2K will be very passe”.

So here Tim is predicting the end of 2K by 2014. And yet most people are still shooting at around 1920 x 1080, with equipment using GOP-structured pictures, where only some of the data is passed, in Groups of Pictures, from capture into processing before being recombined into full frames for display (and it may then be torn apart into GOP structures again for internet streaming). For the more purist data engineer, these are heavily compressed images at the outset and therefore not worthy of use.

Here Carlos Acosta (Tuesday, 6th Sep 2011, 22:25:45 -0700) talks about the levels of data this sort of device (the F65) will generate and how we deal with it. He both compliments and chides Sony on having formerly been a closed organisation, and compliments Red Cameras on opening Sony up (against their will!). He corrects a comment attributed to Mike Most:

“My favorite comment at the event (from, I believe, Mike Most): Jim Jannard knocked $100,000 off what would have been the price of the F65." Carlos answers: ‘That was me Bob ;-) What I saw tonight is Sony descending from the clouds looking to join the boots on the ground. It's kind of about face for them to even offer an "open" architecture. Of course we really don't know what the promise of openness really means any more than we know how much time fits on a 1TB data card. Being realistic, if it was totally done, it would be delivering right now. They obviously covered the lack of critical details with slick power point and and funny jokes. Kidding aside, $85k for a system of this caliber ain’t bad at all. The images were really fantastic. I suspect the Mike Most will have his questions about data format and other workflow issues answered. It will generate stunning quantities of data pushing many users to shoot in HD anyway”.

Within this paragraph is the information that Sony are now following Red in the development of their cameras. Previously they would 'prove' the product before releasing it; the response from professional users was often that engineers had designed the camera, and professionals consequently had to make all sorts of adjustments to make the equipment fit for use. Here Sony are releasing beta-level equipment, as evinced by the lack of a complete post-production path, but like Red they will now rely on the good offices of the professional community to sort this out. The euphemism one could use is 'consulting with the community' to bring it on side; equally, one could criticise both Red and now Sony for releasing equipment into professional usage that doesn't actually work!

Michael Most, on Tue, 06th September 2011, 22:05:36 -0700, quotes Bob Kertesz: “My favorite comment at the event (from, I believe, Mike Most): Jim Jannard knocked $100,000 off what would have been the price of the F65.” (Note: Jim Jannard owns Red Cameras.)

Mike Most then responds: “Actually, I didn't say that. But I agree with it. The economics are changing, no doubt about it. But the nature of the Moore's Law rate of technical advancement has now dictated very different economies of scale with regard to technical devices like modern digital cinema cameras. Since these things are effectively obsoleted in a relatively short time, the purchase price has to be considerably lower to account for the shorter shelf life. I've never really talked to Jim Jannard about that, but despite his "obsolescence obsolete" statement, I think he foresaw this, and one of the reasons he came up with his dramatically lower price points is because he understands it. He has been remarkably generous in his upgrade policies, but I think he understands the implications of faster development creating faster obsolescence very, very well”.

Another post from Mike Most (Tuesday, 06th September 2011, 22:17:56 -0700) quotes Tim Sassoon: “I seriously hope they don't roll their own de-Bayering accelerator, as threatened at Cinegear and like RED, and instead write to NVidia CUDA engines”. Most carries on: “...And BTW, since Sony is claiming that the sensor doesn't actually use a Bayer pattern, we probably shouldn't be calling it debayering in the first place. Maybe we should call it de-rotation debayering. Or maybe de-Hyper Hadding. Or maybe just image reconstruction, although that just sounds so SMPTE....”

Here Mike Most is reconstructing, or inventing, language to deal with the changes. Debayering is the system used to reconstruct colour information from what is effectively a black-and-white signal: colour filters are placed over the photosites in a specific pattern, then read back in post and reconstructed into a colour set within a certain colour space. The colour space of a printer, of your optical system whilst reading this, and of a plasma display are all entirely different, so coherent systems that maintain colour throughout the chain are a necessity. 'Hyper HAD' is a reference to early chips enabled with Hole Accumulation Diode sensors: another strategy for turning light into data, and data into displayed light once more. In fact his reference to image reconstruction, though he jokes about it sounding like the SMPTE organisational way of referring to things, is quite apposite in this instance. It describes what actually happens.
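A toy demosaic sketch makes the debayering idea concrete. This is pure illustration, not any camera's actual maths: a Bayer sensor records one colour per photosite in an RGGB mosaic, and the crude function below collapses each 2x2 block into a single RGB pixel by taking the red sample, the average of the two green samples, and the blue sample. Real debayering interpolates a full RGB value at every photosite and is far more sophisticated.

```python
def demosaic_rggb(mosaic):
    """mosaic: 2D list of raw values laid out R G / G B per 2x2 block."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = mosaic[y][x]                               # top-left: red
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # two greens, averaged
            b = mosaic[y + 1][x + 1]                       # bottom-right: blue
            row.append((r, g, b))
        rgb.append(row)
    return rgb

raw = [[10, 20],
       [30, 40]]
print(demosaic_rggb(raw))  # [[(10, 25.0, 40)]]
```

The F65's sensor, as Most notes, does not use the Bayer pattern at all, which is exactly why the word 'debayering' starts to fail and 'image reconstruction' fits better.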

Sony have always played this kind of game: in its early tussle with Kodak it named its digital video system CineAlta, the use of 'Cine' being a direct reference to film, to create the beginnings of the commercial displacement that would eventually win the electronic corporations' war against the photo-chemical corporations. Sony's camera naming policy is worth discussing because it is a key component of this strategy of superseding the photochemical corporations by referring back to a film past. The two preceding cameras were the F23 and then the F35. Both cost hundreds of thousands of dollars, but as you'll see, the F65 is under $100,000. For years cinematographers had clamoured for a 35mm-sized chip so that all the benefits of 35mm could be exploited; after all, that had been the optical practice in Hollywood since the beginning. 35mm optics automatically gave the kind of cinema we were used to. You know the shot where two lovers kiss and the background falls out of focus, placing a spotlight of emphasis on the moment? That was due mostly to the physical pathway derived from 35mm optics, though any cinematographer could produce that shot on any film or video format, using different techniques to limit depth of field.

Up to and including the F23, Sony had been wedded to a smaller chip size (usually half-inch, though in this case 3 x two-thirds-inch chips) and to CCDs as opposed to CMOS chips. CCDs transfer the whole captured frame at once, whilst CMOS sensors are typically read out line by line; there are various physical artifacts related to both processes. When Sony created the F35 they adopted a 35mm-sized CCD sensor, thus coming into line, in terms of optics, with both Red Cameras and Arriflex (with their D21 and latterly the much-praised Alexa). But the F65 changes to a CMOS sensor, and its name refers to 65mm, the camera negative gauge of the 70mm format, double the size of 35mm. In industrial film production, 70mm stock was physically slit into 35mm, then 16mm, then 8mm. The variants which use the term 'Super' reclaim the soundtrack or spare perforation edge, enabling the frame to take up that space and enlarging the imaging area by roughly 20 to 50 per cent depending on the gauge. However, the confusion employed by Sony is that the F65 has in fact a 35mm-sized sensor.
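The gain from the 'Super' formats can be checked with published camera-aperture dimensions. The figures below (in mm) are the commonly published gate sizes and are assumptions for illustration; the point is that the extra imaging area varies a great deal by gauge, from roughly a fifth (Super 16) to over half again (Super 8).

```python
# Published camera-aperture dimensions in mm (assumed here for illustration).
apertures = {
    "35mm Academy": (21.95, 16.00), "Super 35": (24.89, 18.66),
    "16mm":         (10.26, 7.49),  "Super 16": (12.52, 7.41),
    "Regular 8":    (4.50, 3.30),   "Super 8":  (5.79, 4.01),
}

def area(fmt: str) -> float:
    """Imaging area of a format's camera aperture in square mm."""
    w, h = apertures[fmt]
    return w * h

for std, sup in [("35mm Academy", "Super 35"), ("16mm", "Super 16"),
                 ("Regular 8", "Super 8")]:
    gain = area(sup) / area(std) - 1
    print(f"{sup}: +{gain:.0%} imaging area over {std}")
```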

There is currently one digital cinema camera with a 65mm sensor, the Phantom 65 made by Vision Research. Paradoxically this has a 4K sensor of 4096 x 2440 pixels, and of course larger photosites owing to the sensor size.

Mitch Gross, on the same list, responds to Alan Lasky. On September 7th, 2011, at 1:10 AM, Lasky wrote: “It is good to see Sony loosening up a bit”. Gross replies: “I think the Reason there is not a clear message on post path is that Sony has chosen not to ram one down everyone's throat. Unlike the past, Sony's mission this time is to be very open in how the system can be supported. Yes you can integrate with current SR workflows, but you can also use all of the various 3rd party systems because Sony will provide SDK information for them to ingest the files. This is very much like Phantom CINE files or ARRIRAW, but a bit different than REDRAW because RED makes everyone incorporate their de-Bayering engine to insure that the process is consistent. One other difference with F65 is that it is a different pattern than Bayer mask, so that might take some more math work from the various processing systems out there, but again, Sony will provide the information. It's obvious that they have a way to extract the information beautifully. The download station has 10G Ethernet. We have an onset download station we built for The Phantom CineStation download dock that can empty a 512G CineMag in under an hour using 10GE. I would expect similar times from the Sony system. And $85K for the complete camera with the shutter, VF, recorder, a mag and the download station? Yowsa, compare that to the $300K system of the F35 a few years back! Killer deal, Sony”.

So here, in amongst the detail, is the debate on the way technology is answering the call for a faster, more qualitative response to the demands of professionals who want better and better images. When Jim Jannard introduced the Red camera, it was as if in irritated response to corporations like Sony who kept their systems to themselves. Here it becomes clear that these technicians (cinematographers, colourists, graders, digital imaging technicians and editors) truly understand the medium and are completely competent to understand the problems of the designers. It may be that the cultural production of analogue and digital video within the Asian marketplace suffered from the lack of openness of the societies that produced the technology; equally, the early European and American versions of the same technology were less user-friendly than the Asian, or rather Japanese, versions. So in the comment above Mitch Gross is discussing both cultural and technological issues, and both strands of discussion are in the end in service to the aesthetic delivery of images into our world.

Michael Brennan, a DP from Melbourne and also the editor of High Definition Magazine, takes up first the cultural and then the technological points on Wed, 7 Sep 2011, 20:56:08 +0100: “Of course we really don't know what the promise of openness really means any more than we know how much time fits on a 1TB data card”. He then quotes from various Sony PDFs:

“Series S55 cards (capable of 5.5 Gbps) will work at 2K/HD as well as 4K; "non 4K cards", series S25 (2.5 Gbps), will work at 2K and HD but apparently 4K at 23.98psf only. A 1TB SR-1TS55 card can store:
59 minutes of F65 RAW 16-bit 4K 23.98psf
29 minutes of F65 RAW (4K x 1K) 16-bit 120fps
572 minutes of HD SR Lite 422, 23.98psf
160 minutes of HD SR HQ 444, 23.98psf
In case of 3D recording record time will be halved”.
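These record times can be sanity-checked with back-of-envelope arithmetic. Assuming decimal units (1 TB = 10^12 bytes) and the published HDCAM SR bitrates (roughly 220 Mbps for SR Lite 4:2:2 and 880 Mbps for SR HQ 4:4:4), the quoted figures come out in the right ballpark once container and filesystem overhead are allowed for.

```python
def minutes_on_card(capacity_tb: float, bitrate_mbps: float) -> float:
    """Record time in minutes for a card of given capacity at a constant bitrate."""
    bits = capacity_tb * 1e12 * 8          # decimal TB to bits
    return bits / (bitrate_mbps * 1e6) / 60

print(round(minutes_on_card(1, 880)))  # ~152 min, vs the 160 quoted for SR HQ
print(round(minutes_on_card(1, 220)))  # ~606 min, vs the 572 quoted for SR Lite
```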

This is of course very technical and requires one to have a mathematical bent - but one is listening in to the metalanguage of the technicians - the twitter of the birds - that seek to bring advanced technology to us. He goes on (read this as if concrete poetry): “So three hours of SR HQ on a card that can be transferred in around 30 minutes. Note that there a two recorders one that does HD/2k the other that does 4K (and maybe HD too??) The SR-R 1000 is a portable 8TB drive with 4 x card slots. Takes 30 minutes to transfer 1 TB, can transfer 4 x cards at a time, looks like a tape deck. The SRPC-5 "transfer station" is a 1U form factor card reader with gigabit ethernet "to compliment existing on set data ingest" and a HDSDI out (if you want to transfer to HDCAM SR deck). Compact card reader is SR-PC4 with one slot and Gbe or optional 10Gbe (third party) and has optional F65 raw monitoring. Can copy direct to Esata drive via optional Esata interface. This is the one of most interest for use in the field, not sure of what the transfer time would be.....” ...and then the characteristic joke to alleviate the compression of attention: “At last a Dcinema camera with a ND filter wheel :)”
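Brennan's "30 minutes to transfer 1 TB" over 10 Gigabit Ethernet can also be checked. The raw wire rate gives roughly 13 minutes; assuming (my assumption, not his) around 50 per cent sustained efficiency for protocol overhead and disk speed, you land near 27 minutes, consistent with the figure he quotes.

```python
def transfer_minutes(terabytes: float, link_gbps: float, efficiency: float = 1.0) -> float:
    """Minutes to move a payload over a link, at an assumed sustained efficiency."""
    bits = terabytes * 1e12 * 8
    return bits / (link_gbps * 1e9 * efficiency) / 60

print(round(transfer_minutes(1, 10), 1))       # 13.3 min, theoretical best case
print(round(transfer_minutes(1, 10, 0.5), 1))  # 26.7 min at 50% efficiency
```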

This is all difficult to read. Perhaps, like a translator of early Mesopotamian writing or Middle Egyptian, one has to find the kinds of meaning the language delivers. And no, this is not deconstruction; this is reconstruction. This is in a sense pure language, which cannot deliver all of its meaning when translated: it means something, in a certain kind of way, to people who speak the meta-language.

Mike Most sends a comment from his iPad (he is on the move) on Wed, 07 Sep 2011, 09:43:16 -0700. He quotes Alan Lasky, who wrote: “So, I have another question regarding the F65: considering the current state of acquisition, what is the realistic target market vertical for the F65? Features? Television?” Mike responds: “Yes. Add in commercials and corporate production. Maybe even the military”. And he again quotes Lasky: “My concern is that with current economic conditions being what they are the F65 may be perceived to be "too much dog for the fight" in something like episodic television”.

Mike Most again: “Not a chance. Television is it's most likely immediate market, IMHO. It's basically being positioned as a superior file based image capture device, using a familiar and respected codec, at what is essentially an Alexa-compatible price point. If you look at it as a substitute for the Alexa in the television market, you can look at recording directly to HD resolution SR files using either S-log or ACES and passing it through a rather straightforward pipeline. Despite Red's protests and despite the 8K/4K nature of the product, that's probably more than enough to get it heavily used this coming pilot season, provided Sony can produce and provided the first units prove to be as reliable as the prototypes seem to be. If anything, it's the requirements of the feature market that are more of a work in progress for the F65, in part because those workflows can be very unique on a per-picture basis, and in part because I'm far from convinced that there will be any simple, economical way to handle that amount of data. Nothing I heard last night changes that view. My feeling is that going forward, Sony will ultimately come up with at least a mathematically lossless compression scheme for the RAW data, perhaps multiple levels of compression a la Red. But I have to agree with my friend Jim Jannard that the uncompressed-only ship has already sailed. Only my opinion, though. YMMV.”

He signs off: Mike Most, Colorist/Technologist, Level 3 Post, Burbank, CA. As he says at the beginning of his post, IMHO, 'in my humble opinion', is a caveat phrase which says: I really know what I'm talking about, I have the experience and the expertise; however, I accept that sometimes I can be wrong, and please let me know if I am. There are a lot of clues in this post. Sony has missed the uncompressed ship, which sailed three or more years ago when Jim Jannard of Red piloted the boat from the shore. Arriflex with the Alexa has grabbed the high ground because they have manufactured a camera akin to Panasonic's manufacturing response to Sony's cameras in a previous era: the Alexa delivers good pictures from the outset, whereas Red needs work. It's the difference between a stable mare and an unstable stallion. The other acronym Mike Most uses here is YMMV, 'your mileage may vary', meaning your experience may be different, better or worse, than what is described.

Here, Most responds to Jim Houston, who on Sep 7, 2011, at 9:02 AM wrote: “I thought the description of the strategy was very clear. There is no one-size-fits-all workflow. ... Yes, lots of vendors have lots of work to do, but the strategic approach was very clear”. Most responds: “I think my original statement was a bit stronger than it should have been. I do see that Sony is basically making the data available and also making tools to interpret it available, and bringing in third party partners to do the specific implementations, and that's a strategy I can certainly agree with. I think my only real problem with what's been presented so far is that if one wants to record and preserve the original RAW data, there's nothing currently on the table to do that short of investing in petabytes of storage (an exaggeration, but for certain projects maybe not much of one). No matter how cheap storage is getting, it's still an awful lot of data to ingest, keep track of, and restore. And perhaps I've had too much Kool-Aid in the last 2 years or so, but I no longer see the need to adhere so completely to the "uncompressed is the only way" mantra. Even mathematically lossless compression would cut down those storage requirements by many terabytes on a typical feature project. And that has to be done at the camera/recording level. I still hold out hope that Sony is going to offer such a path, but I didn't hear any evidence of that last night, at least not on the RAW recording”.

And here he steps up to the mark and begins to comment on the current situation: “Like it or not, we no longer live in a world where big facilities are the sole province of high end work. And we no longer live in a world where big iron can be the only solution. One of the lessons of both Red and Alexa is that when products are brought to the market that can be handled by both big iron and desktop solutions, the market is widened, acceptance is faster, and products are championed. I think that will likely be the case with F65 recording HD sized SR files, but I'd like to see a similar path for the higher resolution material that the camera can produce, allowing smaller shops and individuals to produce 4K projects with sensible storage requirements. Red has already shown that it can be done. I'd like to see Sony take that ball and run with it a bit. Competition can be a beautiful thing.”

Here’s one of the critical issues with the development and availability of uncompressed and RAW technologies: that big iron solutions (i.e. multi-million pound post houses in the world’s capitals) now run in parallel with desktop solutions (once only ever Mac computers, but now, as PCs have emulated Mac developments, they too can be used, as well as Linux and other platforms). Wavelet transforms have underpinned so-called lossless or RAW data: by mid-2008, 4K images could suddenly be played back with only three standard hard drives ganged together, whereas in 2006 it had taken me eight hard drives ganged together to produce the same outcome. Wavelets had been available in 2005, but not with this efficacy. We are in the middle of an onward rush, a tsunami of technology. But this technology, in delivering greater resolution (as well as dynamic range and frame rates), is alleviating some of the earlier anxieties of the move from film to video to data cinematography.

Here, Tim Sassoon comments on Mike Most’s earlier point and then brings up a critical point. In a message dated 9/7/11 11:43:49 AM, Most writes: “Sony will ultimately come up with at least a mathematically lossless compression scheme for the RAW data”. Sassoon’s response is: “Remember that the larger the frame, the less significant compression artifacts are, and the more important higher bit depth is”.

This is a very important comment as it shows that anxiety is a response relative to the conditions of the time. In the early days of HD and 2K, the idea of an artifact within the image produced a complete and total adherence to the idea of lossless data amongst the most serious professionals. This was related to the fact that they had experience of the highest levels of image generation in 35mm and 65mm film. They had a history of dedication to methodologies that avoided any kind of compromise of the image generation, development and display process. This evinces itself latterly, for instance, in Christopher Nolan’s adherence to the use of 65mm to generate high quality entertainment features. But of course lossless data is an impossibility because it is never really achievable: even if one retained all of the data generated (at massive storage cost), the particular capture criteria adopted already defeat the notion of losslessness. What I mean here is that the paradigm governing the technical thinking of the time says that data is a costly thing to generate. Not in monetary terms (although high levels of data do generate actual cost) but costly in terms of storage and the ability to manipulate the data for editing, grading, compositing etc. Consequently, generating 8 bit data with 256 samples per channel (of a format like YUV - so that’s 3 times 256) obviously generates less data than 10 bit (with 1024 samples per channel) - and so on. The real point here is that one would need an infinite bit depth to truly represent the world - but then one would reproduce the world - so what, in effect, would be the point?
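As a side note, the bit-depth arithmetic above can be sketched in a few lines of Python; the 2K resolution and three-channel layout below are illustrative assumptions, not figures for any particular camera:

```python
# Levels per channel at a given bit depth, and the raw (uncompressed)
# size of a single frame. Resolution and channel count are illustrative.

def levels(bit_depth):
    """Number of discrete sample values per channel."""
    return 2 ** bit_depth

def frame_bytes(width, height, channels, bit_depth):
    """Raw size of one frame in bytes (no compression, no padding)."""
    return width * height * channels * bit_depth // 8

print(levels(8))    # 256 samples per channel
print(levels(10))   # 1024 samples per channel

# An illustrative 2K (2048 x 1080) three-channel frame:
print(frame_bytes(2048, 1080, 3, 8) / 1e6)   # ~6.6 MB per frame
print(frame_bytes(2048, 1080, 3, 10) / 1e6)  # ~8.3 MB per frame
```

The jump from 8 to 10 bits quadruples the subtlety of each channel while only adding 25 per cent more data, which is why the bit-depth argument and the storage argument pull in different directions.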

Mike Most comments on Tim Sassoon’s point: “Remember that the larger the frame, the less significant compression artifacts are, and the more important higher bit depth is. I think you and I are basically saying the same thing (no surprise there ;-D ), with one of us pointing out that even mathematically lossless compression is really not a requirement at these frame sizes.” Mike Most is saying that for the human optical system there might in fact be a limit, far below the infinite horizon of data that the purist originally sought, which will satisfy the discerning eye.

So, here we are again at one of those seemingly watershed moments - which, on further inspection, do not actually carry the power of the watershed metaphor, however dramatic they seem at the time. With the Sony announcement of the F65, it might have seemed as if distant horizons had rushed forward towards us and were suddenly very near indeed. What looked technically impossible before now looks not only achievable but far surpassable. But here I’d like to step back into film’s past to generate a sense of scale for the present. In his book ‘Using the View Camera: A Creative Guide to Large Format Photography’, Steve Simmons describes the advantage of the larger still-image film formats over 35mm SLR cameras:

“The film used with the various view-camera formats is much larger than 35mm film. Film for the 2.5 x 3.25 camera is 5 times larger, 4 x 5 film is more than 13 times larger, and 8 x 10 film is 53 times larger. The increased film size produces clean, crisp images with a captivating sharpness. The surface textures of such materials as stone, brick and wood look almost three-dimensional in view-camera prints and transparencies. Large display prints have unblemished clarity and depth because the negative doesn’t have to be over-enlarged.”

This immediately refers to Tim Sassoon’s point: “Remember that the larger the frame, the less significant compression artifacts are, and the more important higher bit depth is.” Also, if you work through the figures, 8 x 10 film (using the Canon Rebel as a guide) is 53 x 18 megapixels: that’s 954 megapixels! As you’ll guess, I’m being disingenuous and playing somewhat (but even if you used the Red One camera as the guide, that would be 440 megapixels). Steve Simmons talks about a ‘captivating sharpness’, ‘unblemished clarity’, and images of materials that look ‘almost three-dimensional’. This is all about an increase of verisimilitude, as our current technological tendency is a series of increases in capacities which produce clues that translate as verisimilitude - hence the other phrase Steve Simmons uses, ‘looks almost three-dimensional’* (see note at end).
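The figures can be checked quickly; this sketch simply reuses the 53x area factor from the Simmons quote and the 18 and 8.3 megapixel counts quoted above, treating the DSLR as a stand-in for one 35mm frame at the same photosite density:

```python
# Rough check of the 8x10 comparison: Simmons gives 8x10 film as
# 53 times the area of 35mm. Scaling by megapixels-per-35mm-frame:

AREA_FACTOR_8X10 = 53
REBEL_MEGAPIXELS = 18    # Canon EOS Rebel, as quoted above
FOUR_K_MEGAPIXELS = 8.3  # a 4K cinema frame, as quoted above

print(AREA_FACTOR_8X10 * REBEL_MEGAPIXELS)   # 954 "megapixels" of 8x10 film
print(AREA_FACTOR_8X10 * FOUR_K_MEGAPIXELS)  # ~440 at 4K-frame density
```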

So for a long time now we’ve had the ability to capture very, very detailed high resolution images. The difference now with Digital Cinematography is that we can fire these off at 24 frames per second, or 25, 30, 48, 60 - in fact the achievable frame rates are continuously increasing. We are effectively enabling still photography to fire rapidly enough to join cinematography, and as it is digitally enabled we must amend the title to Digital or Data Cinematography. I would conclude from the above that we are in the very early days of what is to become possible. And what eventually arrives will be far outside what we can currently imagine. This brings to mind some experiments conducted at the University of Bristol, where Tom Troscianko in the department of Experimental Psychology has produced data showing that current 3D techniques generate only 7 per cent more immersion than standard 2D images of the same subject matter. The technique used to measure ‘immersion’ is related to arousal. In fact, increased technological capacities - higher frame rates, higher dynamic range capture and display, together with increased resolution - produce more depth clues and generate a deeper level of engagement than 3D technologies.

So I myself have been guilty of believing in the digital revolution and have given many papers on it - even on the idea of the post-digital. I’ve ruminated and written on the notion of data as being too closely related to digitality, which many signal engineers regard as simply an enhanced analogue method. After all, Fourier introduced his transform in 1807 - way before digitality, in the middle of the analogue era - and both the ‘meat-grinder’ Discrete Cosine Transform and the wavelet transform are descendants of that analogue-era mathematics. I’ve written before on the idea of data as being pure and unmediated by numerical remediation - after all, the data captured within the medium of the hologram is not mathematical, nor mediated (except in the strict sense that it’s held within a medium). But it is quantum and photonic in nature - both descriptions deny the notion of the mathematical - where mathematics is a telescope or viewing device into the ‘stuff’ of the universe, while photons are of the stuff of the universe, and light behaviour appears to be quantum (in this perceptual realm at least).

It would seem that the idea of a technological revolution is a human gesture towards a paradigm change, and that the reinvention and use of language is part of the strategy. To call something that’s happening ‘Digital’ when all you’ve known previously is analogue functionality is very similar to the gesture of naming something ‘High Definition’ - which, albeit a PR gesture, might also be an actual necessity for innovation and development (again thinking of Noam Chomsky in relation to thought and language being two halves of the same coin). It’s the use of language where you aspire to something beyond the now. ‘The Truth’ and the idea of ‘now’ are of course dubious notions, unless you believe in the direction of entropy and therefore accept the forward notion of ‘Time’s Arrow’ (when there’s always a ‘next’).

On the same list the day after the F65 was launched, this post arrived from Harry Dawson: Date: Wed, 7 Sep 2011 10:43:19 -0700 “With film "going away" in a few years, there needs to be a 4K replacement, right? I'm shooting a project where we are doing 4K scans from 35mm. Not doing SFX but spanning three vertical plasma screens. Seems like SFX are going to need a higher resolution answer than Alexa. Here might be an answer, right?”

Harry is posing the question of ‘next’. In this case I won’t go into what he’s suggesting technically, as I’ll leave that to your own researches. Digitality now sits where ‘the Modern’ used to sit. It’s here right now and it feels good, because it suggests we’re in a period of movement, that we are materially achieving the dreams of our imagineers, science fiction writers beginning somewhere with the Epic of Gilgamesh - whose standard version survives on tablets from the 7th century BC, and whose original title was “He who Saw the Deep”. I’m speaking here of an actual ‘writer’ who used text, and of course I do accept that humans have created forward-looking stories from the beginning of language (and, in the case of images, in the cave paintings of Lascaux - who’s to say these were not imagined bountiful futures rather than ‘movies’ about the past?).

So in a sense I’m arguing that future imagining - via science fiction writers of the 1950s and sci-fi television shows like Star Trek, which posited warp drives and holodecks - constituted the original acts of scientific theorising, creating a vision for everyday scientists to work towards. Possibly at this point in time, when the world seems a little out of control, ‘the church of future hope’ is actively proposing that technically we can do anything. And given that our optical system is perhaps our most powerful and overwhelming sensory system - one that somehow ontologically characterises what we actually are - digital imaging is the place where the forward-thinking work which seeks to usher in a new paradigm is taking place. It seems to me, therefore, that the language and the conversations of those people who truly understand the technology, the possibilities it makes available, the developing practice and the technical developments that follow, are a determinant of what will actually occur.

I will now outline some of the ideas that may not have been fully described earlier (because I didn’t want to limit the progression of the above narrative). I’m proposing that though these are informational, they are also revealing of imminent technical, cultural and aesthetic developments. In the early days of HD, when the naming of terminology described the aspiration for something better than what we’d been used to, High Definition simply meant ‘better’: 1920 x 1080 photosites rather than 768 x 576. In 2007, when I started my Creative Research Fellowship, the technology was very clunky, the recording mechanisms seemed incapable of recording the data generated, and the idea of recording a truly lossless stream of data seemed impossible.

Then I became aware of various critical issues which determine the parameters of generating, recording and displaying digital images. Modulation Transfer Function describes a chain of delivery from capture to display where resolution is defined by the lowest-resolution link in the chain (like plumbing, where flow is determined by the thinnest pipe in the system). Wavelet transforms power much of the digital world by being that bit cleverer than discrete cosine transforms: the first being linked to the functions of arcs and circles, the second to the square. Clearly the smoothness of a circle, as a metaphor, is more gradual and gentler than the hard right angles of the square; reconstructions of data compressed with the functions of arcs and circles are therefore more delicate than those compressed and uncompressed using the functions of a square wave. Wavelets just seem intuitively more reconstructable. I’ve used the term photosite rather than pixel because it is a more accurate description of the light receptor on a CCD or CMOS sensor that generates a voltage which is then processed into data - it is where the basic data for a ‘display pixel’ is generated.
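The plumbing metaphor can be put in code form; the chain values below are invented purely for illustration (strictly, the MTF curves of the stages multiply together, but the weakest-link rule is the useful rough guide):

```python
# The MTF "weakest link" idea: the effective resolution of a delivery
# chain is bounded by its poorest stage. Figures are invented for
# illustration (think of them as line pairs per mm, say).
# Note: real system MTF is the product of stage MTFs at each spatial
# frequency; the minimum rule below is the simpler rule of thumb.

chain = {
    "lens": 100,
    "sensor": 80,
    "codec": 90,
    "projector": 60,
}

effective = min(chain.values())
bottleneck = min(chain, key=chain.get)

print(effective, bottleneck)  # 60 projector
```

Upgrading any stage other than the bottleneck leaves the effective figure unchanged, which is the whole point of thinking in chains.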

With regard to the term ’2K’: HD is often referred to, inaccurately, as 2K (as 1920 is near to 2 thousand). HD is 1920 x 1080 photosites, but one of the truer variants of 2K cinema, which uses a 2:1 aspect ratio, is 2048 x 1024 photosites. The true 35mm sensor, however, might better be described as being in the region of 2000 x 1500 photosites, because this generates an aspect ratio of around 4:3 - the original 35mm academy ratio, which can also be expressed, if you divide 4 by 3, as 1.33 (Academy was actually 1.375). If you take a 35mm sized sensor that is 2K, then of course it has larger photosites than those on a same-sized 4K sensor - as there have to be four times as many packed into the same space - and as with all things, when you do this sort of thing there are drawbacks (which is for another article). The 4K variant using a 35mm sensor is 4096 x 2048 (double the 2K variant, which uses a 2:1 aspect ratio, in each dimension). The 4K variant would therefore equate to 8.4 million photosites - so the Red camera has a sensor (speaking in DSLR terms) which is less than half the recent cheap Canon EOS Rebel, which retails for about $900 and is 18 megapixels.
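The photosite arithmetic above can be verified directly; a minimal sketch:

```python
# Photosite counts and aspect ratios mentioned in the paragraph above.

def photosites(width, height):
    """Total number of photosites on a sensor of the given raster."""
    return width * height

def aspect(width, height):
    """Aspect ratio as a single number (e.g. 1.78 for 16:9)."""
    return width / height

print(round(aspect(1920, 1080), 2))   # 1.78 - the 16:9 HD raster
print(aspect(2048, 1024))             # 2.0  - the 2:1 "2K cinema" variant
print(photosites(4096, 2048))         # 8388608 - roughly 8.4 million
print(photosites(4096, 2048) / 18e6)  # well under half an 18 MP DSLR
```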

You get the point, though - and importantly, when you shoot megapixel counts of photosites, that is then multiplied by how many frames per second you shoot. So while Peter Jackson shoots the Hobbit movie at 48 frames per second, at 4K and in stereographic 3D (i.e. two streams of 4K at 48 fps), the data streams are huge. Colour bit depth is typically talked about as 8, 10, 12 etc. What this refers to is the number of discrete levels sampled per channel - and therefore how subtle the colouration is. 8 bits describes 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2, which equals 256 levels. 10 bits is two more doublings, which equals 1024 (and so on). Incidentally, each byte of data is comprised of 8 bits. Colour bit depth sits within a Colour Space (the term that describes the parameters for the gathering and display of data). Clearly a printer has an entirely different colour space from the human eye, or a plasma screen, or the newer Higher Dynamic Range display technology that you will be seeing shortly. I could carry on, and this ‘argument’ would also seem to transmute into a ‘glossary’. In fact the language and the thought become the same. In time, of course, these will again separate as we gain distance on the subject area.
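To put a rough number on ‘huge’, here is an illustrative back-of-envelope calculation; the 12-bit RAW depth and the 2:1 4K frame are assumptions for the sketch, not production figures from the Hobbit shoot:

```python
# A rough, illustrative data-rate estimate: 4K stereo at 48 fps.
# Frame size and RAW sample depth are assumed for the sketch.

width, height = 4096, 2048   # the 4K variant from the paragraph above
bit_depth = 12               # assumed RAW bits per photosite
fps = 48
streams = 2                  # stereographic: left and right eyes

bytes_per_frame = width * height * bit_depth / 8
bytes_per_second = bytes_per_frame * fps * streams

print(bytes_per_frame / 1e6)   # ~12.6 MB per frame
print(bytes_per_second / 1e9)  # ~1.2 GB per second, uncompressed
```

At over a gigabyte per second before any compression, the earlier anxieties about recording mechanisms become easy to sympathise with.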

It’s always been a difficult practice to theorise what is happening while it is happening. We’re now post-digital (so some claim), which I read as meaning: ‘we’re no longer confused about what it is, and now we feel comfortable’. This means that people are looking to the horizon as if it’s the present. 4K now - 64K tomorrow - and why not? There’s a changing paradigm to be witnessed here. Digitality requires numerical representations of whatever the digital device is dealing with. Numerical equals mathematical. But there are ways of generating data that are not mathematical - within the hologram, for instance. Pure data, captured without mediating light through maths.

To try to make all of this a little clearer, here is what I offer as some defining criteria for Digital or Data Cinematography:

a) The optical pathway is 35mm or above (if you research the reason that 35mm film was set at 35mm, you’ll see it could have been derived from manufacturing techniques for photographic usage - that is what was technically and industrially possible at the time).
b) it generates a progressively based image flow relating to a specific time-base as opposed to an interlaced image flow (one full frame of information at a time rather than a field-based workflow)
c) like one of its predecessors, film, it holds the image in a latent state until an act of development (or rendering) is applied - but unlike film it is non-destructive of its prior material state
d) its capture mechanism, though generating a non-destructive, non-compressed data pathway from which an image can be reconstructed, does not have image capture as its sole intent as a medium or method (this distinguishes it from digital video, whose sole intent is to generate images in a compressed manner from less-than-35mm optical pathways)
e) the latter three qualities are also base characteristics of many developing digital technologies - for instance, real-time mapping of environments requires the capture of at least 3 infra-red imaging sources (cameras used as sonar devices) running at 25 fps at a 'reasonable' resolution

Digital cinematography is more than just capturing images - it's a portal onto a digital landscape so far unexplored because of its apparent function as an image-capture medium, i.e. remediation. As a conclusion this short list may be satisfying or unsatisfying. There are many other ideas to work through and many developments coming that will need similar examination as this technology grows and changes.

and the point of a DP is?

I wrote this some while back, when Benjamin Button came on our small screens - and then forgot to post - but here it is anyway: I work with data, or digital, cinematography, but I'm a film sympathiser - or should I say cinema sympathiser? It's an aesthetic thing, and there's some Digital Cinematography footage I see that looks like video in its worst, most 'live' state. Benjamin Button, for instance. Or maybe it was the re-interlacing it went through to get to TV that did it, but it was painful to watch even though it was well lit.

As digital cinematography develops, the new HDR function is OK, but it's represented in standard viewing space on any normal display - and there's the rub. If light in the visible spectrum can be said to span (say) 15 orders of magnitude, and the human eye is instantaneously capable of about 5 orders (a window utilised throughout the 15 orders depending on time of day, levels of luminance and many other factors - like a searchlight of conscious perception sliding up and down the scale), then the average standardly available display is around 2 - 3 orders of magnitude (at best).
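The orders-of-magnitude figures above convert into photographic stops (each stop being a doubling of light), which may be a more familiar scale to cinematographers; a quick sketch, using the same assumed figures:

```python
import math

# One order of magnitude is a factor of 10 in luminance; one stop is
# a factor of 2. So orders convert to stops via log2(10) ~ 3.32.

def orders_to_stops(orders):
    return orders * math.log2(10)

print(round(orders_to_stops(15), 1))   # ~49.8 stops: the full visible range
print(round(orders_to_stops(5), 1))    # ~16.6 stops: the eye's instant window
print(round(orders_to_stops(2.5), 1))  # ~8.3 stops: a typical display
```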

With the new HDRx on Red, if you shoot 5 orders of magnitude and then compress it into standard display space, everything is lost. The HDR display technology that Dolby is working with is around 5 orders, so HDR capture - 10 levels of black and 30 levels of white beyond normal displays - correctly displays all of the gathered light. This is far better than exhibiting a conjuring trick: 'look, no lights, to achieve what was only possible by lighting'. If you are looking at a CRT, LCD or plasma when you view the now famous red barn shot, you'll see that everything that HDR truly is - is missing.

In 5 years proper HDR will be available generally (given Moore's law). When you see real HDR display, that doorway in the shot is hard to look at, because it's 2 orders of magnitude brighter than what you're looking at in standard display space. People originally got excited about HDRx for the wrong reasons, to do with the advance in technique it would make possible - almost as if the average DP is searching simply for natural light to solve their basic aesthetic problem - and for me that basic problem lies closer to the experiments and work of people like Dziga Vertov than it does to, say, Billy Bitzer. I buy Conrad Hall's assessment of the necessary search of the longtime cinematographer: to find the photographic moment in every frame of the shot - not in its technicalities, but in the aesthetic demand to make every frame as good as every other.

The Human Gaze

One of the UK’s national treasures, David Hockney, is experimenting with the human gaze by gathering together 9 HD cameras (consumer, and therefore heavily compressed) to generate a single shot of the English countryside as one very high resolution image. My own experiments with the human gaze also addressed the issue Hockney is addressing through the act of looking, which he sometimes terms ‘drawing’ - this being the language of his enlightenment about making art. The artwork itself is a metaphor for seeing. There is a shot of the countryside. There is no cutting. He does in fact use different moments of time from the different cameras and also slightly different angles of view (slightly more zoomed in or out), which refers back to his earlier polaroid recombinations of the world - recombinations that somehow evoke cubist styles of painting and thought as expounded by Braque and Picasso. Hockney says that the idea of drawing is about looking and seeing - you simply have to look if you're going to draw. You have to engage. Meditate. Clear the mind of ratiocination so that there is only perception - and, for the artist, then give a clear response. Hockney is effectively arguing that art is a mediation between the world and the public. Warhol before him said: look at the mundane things in the world around you - they too are art. Koons upped the ante towards kitsch. Hirst said value is the thing (his platinum skull). All along Hockney is saying: 'Beauty'. All of this refutes the idea of ‘interpretation’ as a way of deriving meaning, as espoused by those that critique or theorise the work. This is becoming a time when artist and audience no longer need the high priests, the theorists and the curators to tell them how to respond to art. Digitality and post-digitality are enabling ‘entrainment’ to succeed where interpretation failed. The artist, the artwork and the audience all become one, from the moment of creation to the moment of perception - all entrain together.
This is an entirely valid way of being, as valid as interpretation was for its time.

Thursday, 9 June 2011

By way of a letter to a fellow artist on the subject of the cadaverous nature of HD images

"Hi - glad you got back to me. Just to give you a context, I started using video in 1976 when we actually cut the tape with a razor blade then used sticky tape to join it - the resulting edit was like an explosion as it went past the heads and visually the image fell all over the place....

But then over the years, as each new development happened, things improved. One of my first art pieces was what we now call glitch art, as I really liked the mess-ups of the medium - in fact video always offered surprises. All my friends became teachers because we were before the YBAs and their use of video - we were doing what people like Gillian Wearing did, but the time was too early. Education or industry offered the only employment. So I decided that I needed to know the medium - like a 14th century painter, I wanted to mix my own paints. That meant joining the industry, so with one hand I earned money, and threw it away making art with the other. Now, as the industry is full of too many people, the best place turned out to be a research fellowship in education in my long-time interest of HD, which I first came across in its analog form in 1992...

In some ways I agree with you but then I guess I have a whole history invested in the fact that the stuff actually works and gets better - however, that doesn't blind me. I'd learned to bring life to the video image often imitating film people who heated up the developer a few degrees so that it made material changes to the way the film looked. At the moment the cadaverous nature of data cinematography is because everyone leaves their footage till post to cast a thin patina of colour on top of the image rather than 'heating up the data' and interfering in the process and perhaps breathing life into the corpse.

In fact in 4 years of making HD work I've only managed to bring HD to life a few times, as it's a fluttering technology that often goes flat-line on you. You get cast as Dr Frankenstein trying to bring the monster to life. Also the definition of the term HD is a problem. For me it's an industrial and therefore political term, now redundant. Sony needed to defeat Kodak, or at least marginalise their photo-chemical empire, which they've now done. But HD to me is TV driven - even being 16:9 is a reference to the buried desires of all the people that used to shoot 4:3, because glass vacuum technology could only go so big, and electron guns with magnets switching them on and off started creating a whiplash effect over a certain size - so as 16:9 is an extrapolation of 4:3, then even the Super Hi-Vision 8K system is a complete remediation of an earlier technology. Personally I haven't shot HD 16:9 for 3 or 4 years now. I always set to 2:1 and then project to the same aspect - but in my next project I'm turning the camera sideways and projecting 9:16, and a variant will be 1:1.

On the subject of remediation - I'm always wary of letting analog/digital and HD remediate what I now call data cinematography or data imaging, because the first three have nothing to do with the last, which has more in common with telematics, haptics, mapping of 3D space and all the digital technologies that have been co-opted by pervasive media studies. I think of data imaging as being capable of being an image but also of being a lot more. Kinect spits out infra-red pulses from the chip, then receives them back again at different rates - like sonar - and maps the environment. It's a kind of image, but it shows a little crack in the wall which, when it comes down, will reveal a landscape we know is there but whose content we can only guess at, which is: the digital (or post-digital, as it's now being called).

And on that note, in some ways data imaging is not really about the data. HD was, but what comes next is a window on a new landscape, a trojan horse technology beyond the two-dimensional image and all that that entails. I mean, I suppose that in offering more than an image within its definition, it then offers 3 dimensions - not in the quotidian way that 3D or stereography describes reality, but more in the Kinect way. But of course Kinect is a remediation too. I've seen 3D virtual objects that you can touch, and been in environments where you stimulate events in telematically removed spaces which are mapped to the space you are in. Fundamentally it's the holodeck from Star Trek - and yes, at the University of Bristol they have even experimented with the quantum teleportation of states from one place to another.

Then there's 'metaphor' which always takes the eye off the ball. Basically 'HD' is not cadaverous, because it never lived. However I know what you mean - I like the fact that you're responding to it and naming it, describing it.

I'm probably already compromised, because when I decided to get into the industry to fund myself as an artist rather than teach about what I didn't know (and some of my friends are now as high as it gets in education - and they've never once practiced, and stopped making art a long time ago), I probably know too much technically to function as an artist... Having said that, I'm about to shoot a new piece (so now I work in education like all the other old fucks but still use the money I earn to make art -:) Last December I showed 18 new works in HD at the P3 Gallery; before that I exhibited one of my HD installations at St. John the Divine in New York for 5 months - it got a lot of good response and I think it got over the cadaverous nature you describe... That's sort of what I'm trying to take on".

Friday, 27 May 2011

Far Horizons, now behind us

The National Association of Broadcasters event in Las Vegas has just occurred - a very typically American title in that it assumes that all of us, all 6 billion, are of that nation and persuasion. The main news here is the arrival of a robust set of Digital Cinematography equipment, with Sony's F65 as the biggest piece of kit: because it uses an 8K chip, it is the first to actually deliver 4K real resolution (with regard to various sampling theorems). F23 refers to a two-thirds-inch chip, F35 to a 35mm sized chip, and F65 evokes a 65mm film gate (though the sensor itself is Super 35 sized).

Regardless of the paraphernalia and excitement around an event such as this, in the end NAB simply announced that whereas several years ago high resolution data on a par with photo-chemical film was almost impossible to achieve on many different levels (light response and rendition in a filmic way, the ability to record the signal without massively compressing the data - all of that stuff), right now we can surpass the quality of film. By that I mean that the resolution is higher in both capture and display, and with the advent of High Dynamic Range capture and display we can surpass the rendition of what the eye sees with Digital Cinematography in a way that is higher on all counts than with film. It has helped that CMOS chips have finally caught up with CCDs in development.

Forget the arguments about whether film does a different thing (which of course it does): on a technical level the argument is over.

That is a big statement - When I came upon HD around the early nineties, that was an impossible thing to imagine.

Wednesday, 4 May 2011

Invisible when appropriate

I try not to be so on the nose in the titles of these posts, but last night, whilst watching 'Pina' by Wim Wenders (bless his cotton socks), apart from being ecstatic at the content - and especially Pina Bausch's choreography for Stravinsky's Rite of Spring - the 3D was invisible.

And in becoming invisible, because the content was appropriate, 3D came into its own. It took a Wenders and a non-commercial subject to use space in a way that was worth doing, and hence part of the film-maker's palette. Of course Tim Burton et al can use the medium (because their gaze, within whatever medium, is skilled and talented), but the craggy old German had the simple sense to ask himself what the aesthetics of the medium were capable of, and then not only to use these, but to use them in a way that did not - excuse the pun - stick out.

Rather Wenders simply used 'depth' like colour, light, camera movement etc as if it were simply one of the elements of the palette that the film maker has access to.

It’s still a pain in the backside to have to wear glasses, but holographic 3D is already being tested in research labs, so we won’t have to wear these sixties Jetsons-styled objects for much longer.

Saturday, 23 April 2011


At the most recent National Association of Broadcasters event in Las Vegas (NAB 2011), the developments within Digital Cinematography took the project over the brow of the hill. For a long while resolution was the central issue; once resolution became discussable in realistic terms (where manufacturers were realistic about actual resolution as opposed to hoped-for resolution), the rating of the stock/data in terms of light response, and in terms of tonal and colour response, came into play. So now that we have resolution equivalent to film (and better projection), sensors with high speed and a low noise floor, and good colour and tone response - especially if one uses High Dynamic Range imaging principles - we can begin to develop a true artistic response in this medium.

Sony exhibited the F65 camera - a name challenging the 65mm film format. This camera has an 8K imaging sensor, and Sony realistically says it is intended to deliver 4K (remembering the Nyquist-Shannon sampling theorem, whereby you need twice the sampling rate to deliver a given resolution). Plus there were many, many other developments in 3D and in all areas of the subject.
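The Nyquist-Shannon rule of thumb behind the F65's "8K sensor, 4K deliverable" claim can be sketched numerically. This is a simplified illustration (real Bayer-pattern debayering and optical low-pass filtering complicate the exact factor, and the function name is mine, not Sony's):

```python
# Simplified Nyquist-Shannon rule of thumb: to honestly resolve detail at a
# given spatial frequency you need at least twice that many samples, so the
# deliverable resolution is at most half the sensor's photosite count.
def max_deliverable_resolution(sensor_samples_horizontal: int) -> int:
    """Return the highest resolution a sensor can honestly deliver."""
    return sensor_samples_horizontal // 2

# The F65 case from the post: an ~8K-wide sensor intended to deliver 4K.
print(max_deliverable_resolution(8192))  # prints 4096, i.e. a "4K" deliverable
```

The same logic explains the scepticism about "hoped-for resolution" above: a camera whose sensor has only 4K of photosites cannot, by this criterion, honestly deliver true 4K detail.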

So now it’s time for artistic image development that does not simply rely on post to confer a look - now the main project begins: to develop artistry with data.

Saturday, 16 April 2011

3D is here to stay - and?

I’ve been trying to find out why I don’t particularly like 3D. I enjoy the simple circus attraction of course, but when I put the glasses on it distracts from the basic reason I go to the cinema to see a film, as opposed to watching it in some other way, like on a plasma screen. I like to experience the size of the auditorium - most of all it’s that. Within the auditorium space, what works is the sheer size of the image, but more importantly the audio enhances the sense of the space, and when accompanied by sight in the right relationship, the experience of 'cinema' happens.

If I put on glasses I might as well be at home in a small space. Cinema disappears. Something about putting on the glasses cuts off the larger spatial experience and somehow modifies the sound too, in a synaesthetic cancelling-out of the larger experience; and pulling off the glasses to witness the blurred overlay of images detracts from whatever visual pleasures do survive, because you realise that it’s all happening in the eye and brain.

There’s a more philistine approach which likens the invention of 3D to the invention of perspective and asks: of what use is the invention of perspective to a work of two-dimensional abstract expressionism? This is 'argumentum ad hopelessness' (sorry, I don't know the Latin for that). However, I shan’t deal with this because it’s a disparaging argument and there are better ones.

With the glasses on there is certainly an experience of what Freud calls the unheimlich - the uncanny - but though there’s a small pleasure in the 3D evocation of what’s before you, somehow whatever 3D adds has an equal measure taken away from the experience by the elements above. And because expectation is heightened by the offer of an additional experience, its neutralisation is, as an addition, disappointing.

The indescribable ‘strangeness’ that Freud discussed in his essay ‘Das Unheimliche’ is certainly present in the viewing of a 3D movie, but once more it is neutralised by the kind of use it currently undergoes - popcorn movies are the province of 3D, and even Tim Burton’s authorial eye in ‘Alice’ is taken towards the chocolate box by its verve and competency of use. If you remember The Third Man and Touch of Evil, though both are great, Welles' use of the dutch angle has so much more power because he’s innovating with it, whereas Carol Reed’s use is systematised and formulaic. Touch of Evil evokes the uncanny; The Third Man uses it to deliver a nice viewing experience. Welles isn’t interested in ‘nice’, and today's use of 3D is all about 'nice'.

In seeing Cave of Forgotten Dreams, I had hoped to be disabused of 'nice'. Werner Herzog (Mr 'not-nice'), he who is mostly comprised of uncanniness itself, tries to use 3D ‘properly’ in that he synchronises the use of 3D with a subject that demands it. All the Chauvet cave paintings are in two dimensions - yet painted on carefully chosen three-dimensional pieces of rock. Herzog argues that this pre-cinematic use of still-yet-moving images is enhanced by the sensation of looking at what we know to be a two-dimensional form, in its first use, displayed in a three-dimensional form.

If the early artists used psychedelics to project themselves as flatlanders (using the Victorian notion of two-dimensional beings experiencing a three-dimensional form as a point which grows into a circle and back down to a point), then those psychedelics enabled the earliest artists to create a form that, when experienced today, is recognisable as such. But of course, Herzog is saying that we’ve lost our wonder at their prescience - so he uses 3D shooting to re-evoke it. In the end, though, I think I would have preferred to see the film without glasses on, in 2D.

I look forward to Wim Wenders' use of 3D in 'Pina' in the hope that he will actually make 3D come to something (though Pina Bausch's work is already amazingly wonderful, and one might suspect that it shouldn't be messed around with). My suspicion is that in general, yes, 3D is here to stay this time, because what it requires technically is present within digital acquisition in a way that it was not within film - but now that it is here to stay it will become ubiquitous and quotidian, and some of us will say, of course, that it was never as good as it was cracked up to be anyway. In essence, 3D is its own worst enemy.

Roll on holographic 3D as the next technology and all the rest that are to come - but actually, it’s the art within the use of all technologies which is the important thing - as we all suspect before we formulate a sentence to discuss the issue.

Saturday, 12 February 2011

The Look: Digital Cinema Aesthetics and Workflows

The Look: Digital Cinema Aesthetics and Workflows will take place in Bristol (UK) on 1st April 2011

I am placing this post here because the symposium will sum up what I'm currently trying to do.

This one-day symposium will explore, and attempt to demystify, the movement of film and video footage through the digital production process from camera to exhibition. The ‘look’ of a film used to be the domain of the cinematographer. As a result of the various new forms of image manipulation that have appeared in the last decade and a half, new types of collaboration have resulted – for example, between cinematographers, post-production supervisors, visual effects artists, and colourists. Given the multiplicity of ways in which the aesthetics of a film can change after shooting is complete, a key question presents itself: who controls what aspects of a film’s look?

This symposium will trace how the ‘look’ of shots changes at each stage of this process, explain some of the technologies that effect these changes, and discuss the decision-making behind these changes. It will also explore the reorganisation of production roles and responsibilities that has resulted from the digitisation of film-making workflows.

The symposium will draw from a range of specialisms, bridging theory and practice. Invited speakers will include Oliver Stapleton BSC (The Proposal, The Cider House Rules), Geoff Boyle DoP FBKS (Wallander, Mutant Chronicles), Jonathan Smiles, Digital Production Supervisor (District 9, Green Zone), Luke Rainey, Colourist (Band of Brothers, Man on Wire), Professor Duncan Petrie, Professor Sean Cubitt, Dr Richard Misek and Dr Charlotte Crofts. Introduced by Professor Sarah Street and Mark Cosgrove, Director of Programme, Watershed Bristol.

The day will consist of four sessions: image capture, data management, colour grading, and display; then a final plenary. Each of the four sessions will comprise a presentation by a film industry professional, a presentation by a film academic to open up wider questions, and a dialogue between the two hosted by Terry Flaxton AHRC Senior Research Fellow (and DoP). The intention is to introduce the practice of each to the other and of both to the general public, facilitating an open conversation about the aesthetic issues, pressures, technologies, and production roles involved in contemporary film production.

TICKETS: £50 with pre-ordered buffet lunch (if not ordered, meals can be purchased in the Watershed Bar, but waiting times may be long); £35 (including only morning and afternoon tea and coffee).

To book, call the Watershed box office: +44 (0)117 927 5100

More information and schedule:
Concessions available. The attendance of industry professionals at this event is contingent on their feature commitments, which are clear at the time of writing.