On 7th September 2011, at the Directors Guild of America in Los Angeles, the new Sony F65 camera was exhibited to the movie industry. This is Sony's newest flagship digital 4K camera, built around an 8K imaging sensor (where 4K refers to roughly 4,000 horizontal pixels of resolution - the gold standard, because it comes near to what a 35mm negative can do in terms of resolution and detail). It is only the third 4K camera to have been made, after the Red One and the Red Epic. So why an 8K sensor? According to the Nyquist-Shannon sampling theorem, to derive an actual and accurate measurement of (in this case) resolution, you need twice the sampling rate of the detail you wish to capture. Therefore an 8K chip delivers a true 4K of resolution. With the F65, Sony is seeking to take the high ground of digital cinematography with a move constructed to displace all of its competitors.
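As a hedged, minimal illustration of that sampling argument (the figures below are illustrative round numbers, not Sony's specification):

# Nyquist-Shannon in miniature: measuring N pixels of real detail
# needs roughly 2N photosites, or the detail aliases.
def min_sensor_width(target_resolution_px):
    return 2 * target_resolution_px

for target in (2048, 4096):
    print(target, "px of true detail needs ~", min_sensor_width(target), "photosites")
# 2048 px of true detail needs ~ 4096 photosites
# 4096 px of true detail needs ~ 8192 photosites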
In what follows I want to examine the growing description of this work: a developing language, with an accompanying development of ideas, that even the most blasé film or media student is to some degree conversant with - though their level of familiarity with the language does not necessarily mean they would understand the subject, especially as a lot of what is said by professionals is said within metaphor. So, to reveal as much as possible of the indicators and referents of true meaning, I wish to use the flurry of posts on the Cinematographers Mailing List on the day after the presentation of the Sony F65 as a snapshot with which to examine the attitudes, and the developing language, of those at the coal face of this developing technology.
The CML was created by the well-respected UK Director of Photography Geoff Boyle. Boyle's respect comes from, amongst other things, his maxim 'test, test, test', which is itself an expression of pure scientific materialist values.
There are various professionally oriented lists online, like the absurdly named but hugely useful (and respected) Creative Cow, which deals with most professional software programmes; there are also many other lists full of 'wannabes', or students or ex-students who wish themselves to be in professional company. Of course, as these people mature, those lists also become more professional. CML, however, takes no prisoners and excels at 'flaming': there is zero tolerance for professional stupidity - that is, asserting something you do not know to be true yourself, through having tested the logic or cited peer-agreed and unquestionable professional reference.
The professional practitioner has a peer-review process as stringent as the academic model, with the same outcome - loss of respect from one's peers - if one gets things wrong.
In the early days of the groundbreaking company Red Cameras, Red used the enthusiasm of its early-adopter community as a PR space to broadcast its product. CML participants looked on initially, then challenged the claims of the adherents - and some would argue that this was to Red's greatest benefit, because when the CML community had itself tested and then praised the product, the approval was worth that much more for the initial withholding of it.
CML has many different cinematography lists, from lenses to grip gear, from 70mm cameras to digital cinematography, and these are visited daily by people at the very top of the professional sector who are themselves practising in the industry, through to people who mostly stay silent to learn - from feature film cinematographers down to second or third level camera assistants or edit assistants.
In attempting to reveal the meaning within the language of these professionals, my intent is to disclose what will become important for theorists and later users of the technology when it filters down to educational, commercial or domestic use. It is becoming clear that the gap between professionals and, further along the bell curve, the early adopters or early users is closing. There is a set of reasons for this, chief amongst them the simple fact that mass production eventually contributed to mass availability and then mass demand for higher quality. There's also Gordon Moore's Law, which states:
“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer”. Gordon Moore, Electronics Magazine, 19th April 1965.
In 1975 Moore altered his projection to a doubling every two years (1975: Progress in Digital Integrated Electronics).
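A rough, hedged sketch of what that 1975 projection implies (illustrative arithmetic only, not a claim about actual chip counts):

# Moore's revised projection: a doubling every two years, starting
# from his 1965 estimate of 65,000 components by 1975.
base_year, base_count = 1975, 65000

def projected_components(year):
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

for year in (1985, 1995, 2011):
    print(year, projected_components(year))
# 1985 -> 2,080,000; 1995 -> 66,560,000; 2011 -> ~17 billion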
Theorists seek to sit outside the bell curve of technology adoption, sometimes between research labs and research initiatives and professionals; sometimes they create ideas prior to the research labs. Theorists evoke what happens in the world, or in a specific domain, through the use of language - and the issue in this article is language: how it develops, who uses it, whether it informs thought, or whether new ideas generate new language, in a co-dependency of arising, as Noam Chomsky would have it.
So we're now at a moment, in September 2011, where we have larger chips and faster recording mechanisms that can handle the data those chips output. But higher resolution isn't everything (as we'll see within the comments that I'll present below). There is colour bit depth, and accuracy of rendition both in capture at the photosites and in how those photosites are 'read' - for instance, was the recording made at 8, 10, 12, 14 or 16 bits of colour, or higher? There is frame rate, for smoothness of movement and increased immersion in the display device - but also the dynamic range of the entire image. Does it replicate the functions of the eye? If the camera does, does the display device? And so on and so forth.
LANGUAGE, REFERENCES, NOTES
So there'll be a degree of jargon in what follows, of meta-language and meta-ideas. I'll not annotate at every moment, but occasionally I will try to explain the ideas, and I'll summarise at the end. Please pursue the information even if the numbers go beyond your attention span (or will to live); they will be revealed to be either true, sleight of hand, or untrue (due to a distortion of truth). If you can stay with the argument, I shall seek to reveal the discourse between professionals and shine a light on the potential future meaning of the exchange between them.
Please also refer to my online oral history research resource, A Verbatim History of the Aesthetics, Technologies and Techniques of Digital Cinematography, which catalogues a global view of developments in this new subject area by asking practitioners, theorists, cinematographers, artists and professionals who use this technology what they think it is, what it does and what changes it is causing to happen. It can be found at: http://www.flaxton.btinternet.co.uk/indexHDresource.htm
Lastly, I found myself writing extensive notes in the text to make what was written understandable; then I realised that these interrupted the narrative of the professionals musing on what was going on and what would arise from these technological developments. I pulled them out and placed them in the typical footnote position, then as asterisked notes at the end of this article - yet again the interruption was huge.
So I've now reorganised the entire article so that the day's exchange is followed by a section called Preliminary Summation, then a section entitled Technical Notes as an Element of the Argument, which is followed by a conclusion (or Coda), itself a set of parameters to be read as just as informational and revealing as the rest of the language. I note all this here because I'm seeing the form I'm writing within change before my eyes.
THE DAY'S EXCHANGE BEGINS
On Wednesday, 7th Sep 2011, 00:53:38 -0400 (EDT), the strand of exchange of ideas began on the cml-digital-raw-log digest. Tim Sassoon, a respected professional grader or colorist, begins the exchange by quoting an online mail-out from Band Pro, suppliers of professional equipment:
"Band Pro is now accepting pre-orders for the new Sony F65 digital cinema camera. With Sony's F65 Introductory Pack you can be one of the first to get their new 4K camera when it starts shipping in January 2012. And, with the full compliment of accessories that are included in the pack price of $85,000 you'll be ready to shoot 16-bit 4K footage out of the box. The Sony F65 camera utilizes an 8K digital sensor..."
Tim comments:
“I gotta say, that's pretty aggressive pricing for Sony. Will Arri step up to the 4K plate? I'd be willing to bet that by 2014, shooting or posting features at 2K will be very passe”.
So here Tim is predicting the end of 2K by 2014 - and yet most people still shooting at around 1920 x 1080 are shooting with equipment that uses GOP-structured pictures (where only some of the data is passed, in Groups of Pictures, from capture into processing before being falsely recombined into full frames for display - and it may then be torn apart back into GOP structures for internet streaming before being displayed). So, for the more purist data engineer, these are heavily compressed images at the outset and therefore not worthy of use.
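A hedged illustration of the GOP idea (the 12-frame pattern below is a common one, not the specific structure of any camera or codec mentioned here):

# In a GOP, only I-frames are complete pictures; P-frames reference
# earlier frames, and B-frames reference frames on both sides.
gop = ["I", "B", "B", "P", "B", "B", "P", "B", "B", "P", "B", "B"]
print(gop.count("I"), "of", len(gop), "frames are full pictures")
# 1 of 12 frames are full pictures - the rest are stored as differences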
Here Carlos Acosta (Tuesday, 6th Sep 2011, 22:25:45 -0700) talks about the levels of data this sort of device (the F65) will generate and how we deal with that. He also compliments and chides Sony on being a formerly closed organisation, and compliments Red Cameras on opening Sony up (against their will!). He corrects the comment supposedly from Mike Most:
“My favorite comment at the event (from, I believe, Mike Most): Jim Jannard knocked $100,000 off what would have been the price of the F65."
Carlos answers: ‘That was me Bob ;-) What I saw tonight is Sony descending from the clouds looking to join the boots on the ground. It's kind of about face for them to even offer an "open" architecture. Of course we really don't know what the promise of openness really means any more than we know how much time fits on a 1TB data card. Being realistic, if it was totally done, it would be delivering right now. They obviously covered the lack of critical details with slick power point and and funny jokes. Kidding aside, $85k for a system of this caliber ain’t bad at all. The images were really fantastic. I suspect the Mike Most will have his questions about data format and other workflow issues answered. It will generate stunning quantities of data pushing many users to shoot in HD anyway”.
Within this paragraph is held the information that Sony are now following Red in the development of their cameras. Previously they used to 'prove' the product before releasing it. The response from professional users was often that engineers had designed the camera, and consequently professionals had to make all sorts of adjustments to make the equipment fit for use. Here, Sony are now releasing beta-level equipment - as evinced by the lack of a complete post-production path - but, like Red, they'll now rely on the good offices of the professional community to sort this out. The euphemism one could use here is 'consulting with the community' to bring it on side; equally, one could criticise both Red and now Sony for releasing equipment into professional usage that doesn't actually work!
Michael Most on Tue, 06th September 2011 22:05:36 -0700 quotes Bob Kertesz:
“My favorite comment at the event (from, I believe, Mike Most): Jim Jannard knocked $100,000 off what would have been the price of the F65." (Note: Jim Jannard owns Red Cameras.)
Mike Most then responds:
“Actually, I didn't say that. But I agree with it. The economics are changing, no doubt about it. But the nature of the Moore's Law rate of technical advancement has now dictated very different economies of scale with regard to technical devices like modern digital cinema cameras. Since these things are effectively obsoleted in a relatively short time, the purchase price has to be considerably lower to account for the shorter shelf life. I've never really talked to Jim Jannard about that, but despite his "obsolescence obsolete" statement, I think he foresaw this, and one of the reasons he came up with his dramatically lower price points is because he understands it. He has been remarkably generous in his upgrade policies, but I think he understands the implications of faster development creating faster obsolescence very, very well”.
Another post from Mike Most (Tuesday, 06th September 2011, 22:17:56 -0700) quotes Tim Sassoon:
“I seriously hope they don't roll their own de-Bayering accelerator, as threatened at Cinegear and like RED, and instead write to NVidia CUDA engines”.
Most carries on: “...And BTW, since Sony is claiming that the sensor doesn't actually use a Bayer pattern, we probably shouldn't be calling in debayering in the first place. Maybe we should call it de-rotation debayering. Or maybe de-Hyper Hadding. Or maybe just image reconstruction, although that just sounds so SMPTE....”
Here Mike Most is reconstructing, or inventing, language to try to deal with the changes. Debayering is the system used to reconstruct colour information from what is effectively a black and white signal. Colour filters are placed over the photosites in a specific pattern, then read back in post-production and reconstructed into a colour set within a certain colour space. The colour space of a printer, of your optical system whilst reading this, and of a plasma display are all entirely different, so coherent systems that maintain colour throughout the chain are a necessity. 'Hyper Hadding' is a reference to early Sony chips that were enabled with Hole Accumulation Diode (HAD) sensors - another strategy for turning light into data, and into displayed light once more. In fact his reference to image reconstruction, though he jokes about it sounding like the SMPTE organisational way of referring to things, is quite apposite in this instance: it describes what actually happens.
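As a hedged sketch of the underlying idea - a deliberately naive demosaic of an RGGB Bayer mosaic, nothing like the sophisticated interpolation a real camera or grading system performs:

import numpy as np

# Treat each 2x2 RGGB block as one RGB sample, averaging the two greens.
def naive_debayer(mosaic):
    r  = mosaic[0::2, 0::2]        # red photosites
    g1 = mosaic[0::2, 1::2]        # first green
    g2 = mosaic[1::2, 0::2]        # second green
    b  = mosaic[1::2, 1::2]        # blue photosites
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.random.rand(8, 8)      # stand-in for sensor voltages
print(naive_debayer(mosaic).shape) # (4, 4, 3): a half-resolution RGB image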
Sony have always played this kind of game, however - in the early days, in its tussle with Kodak, it named its digital video system CineAlta, the use of 'Cine' being a direct reference to film, to create the beginnings of the commercial displacement that would eventually win the electronic corporations' commercial war against the photo-chemical corporations.
It's important to discuss Sony's camera naming policy, as it's a key component of their commercial strategy to supersede the photochemical corporations by referring back to a film past. The two preceding cameras were the F23 and then the F35. Both of these cameras cost hundreds of thousands of dollars - but, as you'll see, the F65 is under $100,000. For years cinematographers had clamoured for a 35mm-sized chip so that all the benefits of 35mm could be exploited - after all, that had been the optical practice in Hollywood since the beginning. 35mm optics automatically gave the kind of cinema we were used to. You know the shot where the two lovers kiss and the background is out of focus, placing a spotlight and emphasis on the moment? That was due mostly to the physical pathway derived from 35mm optics - though any cinematographer could produce that shot on any format of film or video (using different techniques to limit depth of focus).
Up to and including the F23, Sony had been wedded to a smaller chip size (usually half-inch, though in this case 3 x two-thirds-inch chips), and they'd also been wedded to CCDs as opposed to CMOS chips. CCDs typically capture the whole frame in one go (a global shutter), whereas most CMOS sensors are read out line by line (a rolling shutter) - there are various physical artifacts related to both processes. When Sony created the F35 they adopted a 35mm-sized CCD sensor, thus coming into line, in terms of optics, with both Red Cameras and Arriflex (with their D21 and latterly the much-praised Alexa system). But the F65 changes to a CMOS sensor, and its name also refers to 65mm, the capture gauge of the larger format whose release prints are 70mm. In industrial film production 70mm stock was physically slit into 35mm, then into 16mm, then into 8mm. All of the variants which use the term 'Super' simply get rid of one row of perforations and so enable the frame size to become larger, taking up the space where that row of perforations used to be - surprisingly, this renders an extra 40 per cent of imaging area.
However, the confusion employed by Sony is that the F65 in fact has a 35mm-sized sensor.
There is currently one digital cinema camera with a true 65mm sensor, the Phantom 65 made by Vision Research. Paradoxically this has a 4K sensor of 4096 x 2440 pixels - and of course these are larger photosites, due to the sensor size.
Mitch Gross, on the same list, responds to a comment from Alan Lasky. On September 7th, 2011, at 1:10 AM, Lasky wrote: "It is good to see Sony loosening up a bit". Gross replies:
“I think the Reason there is not a clear message on post path is that Sony has chosen not to ram one down everyone's throat. Unlike the past, Sony's mission this time is to be very open in how the system can be supported. Yes you can integrate with current SR workflows, but you can also use all of the various 3rd party systems because Sony will provide SDK information for them to ingest the files. This is very much like Phantom CINE files or ARRIRAW, but a bit different than REDRAW because RED makes everyone incorporate their de-Bayering engine to insure that the process is consistent. One other difference with F65 is that it is a different pattern than Bayer mask, so that might take some more math work from the various processing systems out there, but again, Sony will provide the information. It's obvious that they have a way to extract the information beautifully.
The download station has 10G Ethernet. We have an onset download station we built for The Phantom CineStation download dock that can empty a 512G CineMag in under an hour using 10GE. I would expect similar times from the Sony system.
And $85K for the complete camera with the shutter, VF, recorder, a mag and the download station? Yowsa, compare that to the $300K system of the F35 a few years back! Killer deal, Sony”.
So here, in amongst the detail, is the debate on the way technology is taking up the call for a faster, more qualitative technological response to the demands of professionals who want better and better images. When Jim Jannard introduced the Red camera, it was as if in irritated response to corporations like Sony who kept their systems to themselves. Here it becomes clear that these technicians - cinematographers, colourists, graders, digital imaging technicians and editors - truly understand the medium and are completely competent to understand the problems of the designers. It just might be that the cultural production of analogue and digital video within the Asian marketplace suffered from the lack of openness of the societies that produced the technology; equally, however, the early European and American versions of that same technology were less user-friendly than the Asian - or rather, Japanese - versions. So in the comment above Mitch Gross is discussing both cultural and technological issues - not to mention that both these strands of discussion are, in the end, in service to the aesthetic delivery of images into our world.
Michael Brennan, a DP from Melbourne and also the editor of High Definition Magazine, takes up first the cultural and then the technological points on Wed, 7 Sep 2011, 20:56:08 +0100, quoting Acosta:
“Of course we really don't know what the promise of openness really means any more than we know how much time fits on a 1TB data card”.
He then quotes from various Sony PDFs:
“Series S55 cards (capable of 5.5 Gbps) will work at 2k/HD as well as 4k, "non 4k cards" series S25 (2.5 Gbs) will work at 2k and HD but apparently 4k at 23.98psf only.
1TB SR-1TS55 card can store:
59 minutes of f65 raw 16bit 4k 23.98psf
29 minutes of f65 raw (4k x 1k) 16 bit 120fps
572 minutes of HD SR lite 422, 23.98psf
160 minutes of HD SR HQ 444, 23.98psf
In case of 3D recording record time will be halved”.
This is of course very technical and requires one to have a mathematical bent - but one is listening in to the metalanguage of the technicians, the twitter of the birds, who seek to bring advanced technology to us. He goes on (read this as if it were concrete poetry):
“So three hours of SR HQ on a card that can be transferred in around 30 minutes. Note that there a two recorders one that does HD/2k the other that does 4K (and maybe HD too??) The SR-R 1000 is a portable 8TB drive with 4 x card slots. Takes 30 minutes to transfer 1 TB, can transfer 4 x cards at a time, looks like a tape deck. The SRPC-5 "transfer station" is a 1U form factor card reader with gigabit ethernet "to compliment existing on set data ingest" and a HDSDI out (if you want to transfer to HDCAM SR deck). Compact card reader is SR-PC4 with one slot and Gbe or optional 10Gbe (third party) and has optional F65 raw monitoring. Can copy direct to Esata drive via optional Esata interface. This is the one of most interest for use in the field, not sure of what the transfer time would be.....”
...and then the characteristic joke to alleviate the compression of attention:
“At last a Dcinema camera with a ND filter wheel :)”
This is all difficult to read - perhaps, like a translator of early Mesopotamian writings or Middle Egyptian, the translator has to find the kinds of meanings the language delivers. And no, this is not deconstruction; this is reconstruction. This is in a sense pure language, which cannot deliver all of its meaning when translated. It means something, in a certain kind of way, to people who speak meta-language.
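To ground the metalanguage, here is a hedged back-of-envelope check on 'how much time fits on a 1TB data card', assuming (my assumption, not a Sony specification) an uncompressed 16-bit frame of 4096 x 2160 photosites at 23.98fps:

frame_bytes = 4096 * 2160 * 2        # 16 bits = 2 bytes per photosite
rate = frame_bytes * 23.98           # bytes per second
card = 1e12                          # a 1TB card
print(round(card / rate / 60), "minutes")   # ~39 minutes, uncompressed

That the quoted figure is 59 minutes suggests the RAW stream is lighter than this naive uncompressed calculation - consistent with the debate, below, about (near-)lossless compression of RAW data.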
Mike Most sends a comment from his iPad (he's on the move) on Wed, 07 Sep 2011, 09:43:16 -0700. He quotes Alan Lasky, who wrote:
“So, I have another question regarding the F65: considering the current state of acquisition, what is the realistic target market vertical for the F65? Features? Television?”
Mike responds: “Yes. Add in commercials and corporate production. Maybe even the military”. And again quotes Lasky:
“My concern is that with current economic conditions being what they are the F65 may be perceived to be "too much dog for the fight" in something like episodic television”.
Mike Most again: “Not a chance. Television is it's most likely immediate market, IMHO. It's basically being positioned as a superior file based image capture device, using a familiar and respected codec, at what is essentially an Alexa-compatible price point. If you look at it as a substitute for the Alexa in the television market, you can look at recording directly to HD resolution SR files using either S-log or ACES and passing it through a rather straightforward pipeline. Despite Red's protests and despite the 8K/4K nature of the product, that's probably more than enough to get it heavily used this coming pilot season, provided Sony can produce and provided the first units prove to be as reliable as the prototypes seem to be. If anything, it's the requirements of the feature market that are more of a work in progress for the F65, in part because those workflows can be very unique on a per-picture basis, and in part because I'm far from convinced that there will be any simple, economical way to handle that amount of data. Nothing I heard last night changes that view. My feeling is that going forward, Sony will ultimately come up with at least a mathematically lossless compression scheme for the RAW data, perhaps multiple levels of compression a la Red. But I have to agree with my friend Jim Jannard that the uncompressed-only ship has already sailed.
Only my opinion, though. YMMV.”
He signs: Mike Most, Colorist/Technologist, Level 3 Post, Burbank, CA.
As he says at the beginning of this post: IMHO, 'in my humble opinion' - a caveat phrase which says: I really know what I'm talking about, I have the experience and the expertise; however, I do accept that sometimes I can be wrong, and please let me know if I am. There are a lot of clues in this post. Sony has missed the uncompressed ship, which sailed three or more years ago when Jim Jannard of Red piloted the boat from the shore. Arriflex with the Alexa has grabbed the high ground because they've manufactured a camera more akin to Panasonic's manufacturing response to Sony's cameras in a previous era - the Alexa is a camera that delivers good pictures from the outset, whereas Red needs work: the difference between a stable mare and an unstable stallion. The other acronym Mike Most uses here is YMMV, roughly 'your mileage may vary', which basically means your experience may be different - better or worse - than what is described.
Here, Most responds to Jim Houston. On Sep 7, 2011, at 9:02 AM, Jim Houston wrote:
“I thought the description of the strategy was very clear. There is no one-size-fits-all workflow. ... Yes, lots of vendors have lots of work to do, but the strategic approach was very clear”.
Most responds:
“I think my original statement was a bit stronger than it should have been. I do see that Sony is basically making the data available and also making tools to interpret it available, and bringing in third party partners to do the specific implementations, and that's a strategy I can certainly agree with. I think my only real problem with what's been presented so far is that if one wants to record and preserve the original RAW data, there's nothing currently on the table to do that short of investing in petabytes of storage (an exaggeration, but for certain projects maybe not much of one). No matter how cheap storage is getting, it's still an awful lot of data to ingest, keep track of, and restore. And perhaps I've had too much Kool-Aid in the last 2 years or so, but I no longer see the need to adhere so completely to the "uncompressed is the only way" mantra. Even mathematically lossless compression would cut down those storage requirements by many terabytes on a typical feature project. And that has to be done at the camera/recording level. I still hold out hope that Sony is going to offer such a path, but I didn't hear any evidence of that last night, at least not on the RAW recording”.
And here he steps up to the mark and begins to comment on the current situation:
“Like it or not, we no longer live in a world where big facilities are the sole province of high end work. And we no longer live in a world where big iron can be the only solution. One of the lessons of both Red and Alexa is that when products are brought to the market that can be handled by both big iron and desktop solutions, the market is widened, acceptance is faster, and products are championed. I think that will likely be the case with F65 recording HD sized SR files, but I'd like to see a similar path for the higher resolution material that the camera can produce, allowing smaller shops and individuals to produce 4K projects with sensible storage requirements. Red has already shown that it can be done. I'd like to see Sony take that ball and run with it a bit.
Competition can be a beautiful thing.”
Here's one of the critical issues with the development and availability of uncompressed and RAW technologies: that 'big iron' solutions (i.e. multi-million pound post houses in the world's capitals) now run in parallel with desktop solutions (once only ever Mac computers, but now - as PCs have emulated Mac developments - PCs too, as well as Linux and other platforms). Wavelet transforms have underpinned so-called lossless or RAW data, and by mid-2008 4K images could suddenly be played back with only three standard hard drives ganged together; in 2006 it had taken me eight hard drives ganged together to produce the same outcome. Wavelets had been available in 2005, but not with this efficacy. We are in the middle of an onward rush, a tsunami of technology.
But this technology, in delivering greater resolution (as well as dynamic range and frame rate), is alleviating some of the earlier anxieties of the move from film, to video, to data cinematography.
Here, Tim Sassoon comments on Mike Most's earlier point and then raises a critical one:
In a message dated 9/7/11 11:43:49 AM, Most writes:
“Sony will ultimately come up with at least a mathematically lossless compression scheme for the RAW data”
Sassoon’s response is:
“Remember that the larger the frame, the less significant compression artifacts are, and the more important higher bit depth is”.
This is a very important comment, as it shows that anxiety is a response relative to the conditions of the time. In the early days of HD and 2K, the idea of an artifact within the image produced a complete and total adherence to the idea of lossless data amongst the most serious professionals. This was related to the fact that they had experience of the highest levels of image generation in 35mm and 65mm film; they had a history of dedication to methodologies that avoided any kind of compromise in the image generation, development and display process. This evinces itself latterly, for instance, in Christopher Nolan's adherence to the use of 65mm to generate high-quality entertainment features.
But of course truly lossless data is an impossibility: even if one retained all of the data generated (at massive storage cost), the particular capture criteria adopted already defeat the notion of losslessness. What I mean here is that the paradigm governing the technical thinking of the time says that data is a costly thing to generate - not primarily in monetary terms (although high levels of data do generate actual cost), but costly in terms of storage and of the ability to manipulate the data for editing, grading, compositing and so on. Consequently, generating 8-bit data, with 256 levels per channel (of an encoding like YUV - so three times 256), obviously generates less data than 10-bit (with 1,024 levels per channel) - and so on. The real point here is that one would need an infinite bit depth to truly represent the world - but then one would have reproduced the world - so what, in effect, would be the point?
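A hedged illustration of that cost, using an invented 4096 x 2160 three-channel frame (the geometry is my assumption, chosen only to make the arithmetic concrete):

pixels = 4096 * 2160

for bits in (8, 10, 12, 16):
    levels = 2 ** bits                        # levels per channel
    frame_mb = pixels * 3 * bits / 8 / 1e6    # megabytes per frame
    print(bits, "bit:", levels, "levels/channel,", round(frame_mb, 1), "MB/frame")
# 8-bit: 256 levels, ~26.5 MB/frame ... 16-bit: 65,536 levels, ~53.1 MB/frame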
Mike Most comments on Tim Sassoon's point ("Remember that the larger the frame, the less significant compression artifacts are, and the more important higher bit depth is"):
“I think you and I are basically saying the same thing (no surprise there ;-D ), with one of us pointing out that even mathematically lossless compression is really not a requirement at these frame sizes.”
Mike Most is saying that, given the human optical system, there might in fact be a limit - far below the infinite horizon of data that the purist originally sought - that will work for the discerning eye.
PRELIMINARY SUMMATION
So, here we are again at one of those seemingly watershed moments which, on further inspection, do not actually carry the power of the watershed metaphor, however dramatic they seemed at the time. With the Sony announcement of the F65, it might have seemed as if a distant horizon had rushed forward towards us and now sat very near indeed. What looked technically impossible before now looks not only achievable but far surpassable.
But here I'd like to step back into film's past to generate a sense of scale for the present. In his book 'Using the View Camera: A Creative Guide to Large Format Photography', Steve Simmons describes the advantage the larger still-image film formats hold over 35mm SLR cameras:
“The film used with the various view-camera formats is much larger than 35mm film. Film for the 2.5 x 3.25 camera is 5 times larger, 4 x 5 film is more than 13 times larger, and 8 x 10 film is 53 times larger. The increased film size produces clean, crisp images with a captivating sharpness. The surface textures of such materials as stone, brick and wood look almost three-dimensional in view-camera prints and transparencies. Large display prints have unblemished clarity and depth because the negative doesn’t have to be over-enlarged.”
This immediately refers to Tim Sassoon's point: “Remember that the larger the frame, the less significant compression artifacts are, and the more important higher bit depth is”.
Also, if you work through the figures, 8 x 10 film (using the Canon Rebel as a guide) is 53 x 18 megapixels: that's 954 megapixels! As you'll guess, I'm being disingenuous and playing somewhat (though even if you used the Red One camera as the guide, that would be 440 megapixels). Steve Simmons talks about a 'captivating sharpness', 'unblemished clarity', and images of materials that look 'almost three-dimensional'. This is all about an increase in verisimilitude, as our current technological tendency is a series of increases in capacities which produce clues that translate as verisimilitude - hence that other phrase Steve Simmons uses, 'looks almost three-dimensional'* (see note at end).
So for a long time now we've had the ability to capture very, very detailed high-resolution images. The difference now, with digital cinematography, is that we can fire these off at 24 frames per second, or 25, 30, 48, 60 - in fact the capability of frame-rate display is continuously increasing. We are effectively enabling still photography to fire rapidly enough to join cinematography, and, being digitally enabled, we must prefix the title: Digital, or Data, Cinematography. I would conclude from the above that we are in the very early days of what is to become possible - and what eventually arrives will be far outside what we can currently imagine.
This brings to mind some experiments conducted at the University of Bristol where Tom Troscianko in the department of Experimental Psychology has produced data that shows that current 3D techniques only generate 7 per cent more immersion than standard 2D images of the same subject matter. The technique used to measure ‘immersion’ is related to arousal. In fact, increased technological capacities, such as higher frame rates, higher dynamic range capture and display together with increased resolution produce more depth clues and generate a deeper level of engagement than 3D technologies.
So I myself have been guilty of believing in the digital revolution, and have given many papers on it - even on the idea of the post-digital. I've ruminated and written on the notion of data as being too closely tied to digitality, which many signal engineers regard as simply an enhanced analogue method. After all, the mathematics underpinning these transforms reaches back to Fourier's work on trigonometric series in 1807 - way before digitality, in the middle of the analogue era - and both the 'meat-grinder' Discrete Cosine Transform and the wavelet transform are, at root, descendants of that analysis.
I’ve written before on the idea of data as being pure and unmediated by numerical remediation - after all the data captured within the medium of the hologram is not mathematical, nor mediated (except in the strict sense that it’s held within a medium). But it is quantum and photonic in nature - both appendages or descriptions deny the notion of the mathematical - where mathematics is a telescope or viewing device into the ‘stuff’ of the universe and photons are of the stuff of the universe and light behavior appears to be quantum (in this perceptual realm at least).
It would seem that the idea of a technological revolution is a human gesture towards a paradigm change, and that the reinvention and use of language is part of the strategy. To call something that's happening 'Digital', when all you've known previously is analogue functionality, is very similar to the gesture of naming something 'High Definition' - which, albeit a PR gesture, might also be an actual necessity for innovation and development (again thinking of Noam Chomsky on thought and language being two halves of the same coin). It's the use of language whereby you aspire to something beyond the now. 'The Truth' and the idea of 'now' are of course dubious notions, unless you believe in the idea of the direction of entropy and therefore accept the forward notion of 'Time's Arrow' (when there's always a 'next').
On the same list, the day after the F65 was launched, this post arrived from Harry Dawson (Wed, 7 Sep 2011, 10:43:19 -0700):
“With film "going away" in a few years, there needs to be a 4K replacement, right? I'm shooting a project where we are doing 4K scans from 35mm. Not doing SFX but spanning three vertical plasma screens. Seems like SFX are going to need a higher resolution answer than Alexa. Here might be an answer, right?”
Harry is posing the question of 'next'. In this case I won't go into what he's suggesting technically; I'll leave that to your own researches. Digitality now sits where 'the Modern' used to sit. It's here right now and it feels good, because it suggests we're in a period of movement, that we are materially achieving the dreams of our imagineers - science fiction writers beginning somewhere in the 7th century BC with the writer of the Epic of Gilgamesh, whose original title was 'He who saw the Deep'. I'm speaking here of an actual 'writer' who used text, and of course I accept that humans have created forward-looking stories from the beginning of language (and, in the case of the images in the cave paintings of Lascaux, who's to say these were not imagined bountiful futures rather than 'movies' about the past?).
So in a sense I'm arguing that future imagining - via the science fiction writers of the 1950s and sci-fi television shows like Star Trek, which posited warp drives and holodecks - constituted the original acts of scientific theorising, creating a vision for everyday scientists to work towards. Possibly, at this point in time, when the world seems a little out of control, 'the church of future hope' is actively proposing that technically we can do anything. And given that our optical system is perhaps our most powerful and overwhelming sensory system - one that somehow ontologically characterises what we actually are - digital imaging is the place where the forward-thinking work that seeks to usher in a new paradigm is taking place. It seems to me, therefore, that the language and the conversations of the people who truly understand the technology, the possibilities it makes available, the developing practice and the technical developments that follow are a determinant of what will actually occur.
TECHNICAL NOTES AS AN ELEMENT OF THE ARGUMENT
I will now outline some of the ideas that may not have been fully described earlier (because I didn't want to impede the progression of the narrative above). I'm proposing that though these are informational, they are also revealing of imminent technical, cultural and aesthetic developments.
In the early days of HD, the naming of the terminology described the aspiration for something better than what we'd been used to: High Definition simply meant 'better', at 1920 x 1080 photosites rather than 768 x 576. In 2007, when I started my Creative Research Fellowship, the technology was very clunky, the recording mechanisms seemed incapable of recording the data generated, and the idea of recording a truly lossless stream of data seemed impossible.
Then I became aware of various critical issues which determined the parameters of generating, recording and displaying digital images:
Modulation Transfer Function, which describes a chain of delivery from capture to display in which resolution is limited by the lowest-resolution link in the chain (like plumbing, where flow is governed by the thinnest pipe in the system - see the sketch after the next note);
Wavelet Transforms, which power so much of the digital by being that bit cleverer than Discrete Cosine Transforms: the first being linked to the functions of a circle and the second to the square. Clearly the smoothness of a circle, as a metaphor, is more gradual and gentle than the hard right angles of the square; reconstructions of data compressed with the functions of arcs and circles are therefore more delicate than those compressed and decompressed using the functions of a square wave. Wavelets just seem intuitively more reconstructable.
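Here is the promised hedged sketch of the MTF-chain idea (the values are invented purely for illustration): the system's response at a given spatial frequency is the product of each link's response, which is why one weak link dominates the whole chain.

# Invented MTF values for each link at some spatial frequency.
links = {"lens": 0.80, "sensor": 0.70, "codec": 0.95, "display": 0.60}

system_mtf = 1.0
for mtf in links.values():
    system_mtf *= mtf           # responses multiply through the chain

print(round(system_mtf, 2))     # 0.32 - worse than any single link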
I've used the term photosite rather than pixel, as it more accurately describes the light receptor on a CCD or CMOS sensor: the receptor generates a voltage which is then processed into data, and is where the basic data for a 'display pixel' is generated.
With regard to the denominator '2K': HD is often inaccurately referred to as 2K, as 1920 is near to two thousand. HD is 1920 x 1080 photosites, but one of the truer variants of 2K cinema, which uses a 2:1 aspect ratio, is 2048 x 1024 photosites. The true 35mm sensor, however, might better be described as being in the region of 2000 x 1500 photosites, because this generates an aspect ratio of around 4:3, the original 35mm Academy ratio - which can also be expressed, if you divide 4 by 3, as 1.33 (Academy was actually 1.375).
If you take a 35mm-sized sensor that is 2K, then of course it has larger photosites than a same-sized 4K sensor, where four times as many have to be packed into the same space - and, as with all such things, there are drawbacks (a matter for another article).
The 4K variant using a 35mm sensor is 4096 x 2048 (double the 2K variant that uses a 2:1 aspect ratio). That equates to roughly 8.4 million photosites - so the Red camera has a sensor (speaking in DSLR terms) of less than half the resolution of the recent cheap Canon EOS Rebel, which retails for about $900 and is 18 megapixels.
You get the point, though - and importantly, when you shoot megapixel quantities of photosites, this is then multiplied by how many frames per second you shoot. So when Peter Jackson shoots The Hobbit at 48 frames per second, at 4K, and in stereographic 3D (i.e. two streams of 4K at 48fps), the data streams are huge.
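A rough, hedged sense of scale (illustrative arithmetic only):

# Compare a conventional 2K, 24fps, single-eye pipeline with 4K, 48fps, stereo.
base   = 2048 * 1024 * 24          # photosites per second, one eye
hobbit = 4096 * 2048 * 48 * 2      # two eyes at 4K and 48fps
print(round(hobbit / base), "x the photosite throughput")   # 16x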
Colour bit depth is typically talked about as 8, 10, 12 and so on. What this refers to is the number of levels sampled per channel, and therefore how subtle the colouration can be. 8 bits describes 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2, which as a sum equals 256 levels; 10 bits is two more multiplications by 2, which equals 1,024 (and so on). Incidentally, each byte of data is comprised of 8 bits.
Colour bit depth sits within a colour space (the term that describes the parameters for the gathering and display of data). Clearly a printer has an entirely different colour space from the human eye, or a plasma screen, or the newer High Dynamic Range display technology that you will be seeing shortly.
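To make the point concrete, a hedged example: even the mapping from a stored code value to actual light differs between spaces. Here is the standard sRGB decoding to linear light (other colour spaces use quite different curves):

# sRGB's standard transfer function: stored value (0..1) to linear light.
def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

code = 128 / 255                      # a 'mid-grey' 8-bit code value
print(round(srgb_to_linear(code), 3)) # ~0.216: nowhere near 0.5 in light terms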
I could carry on and this ‘argument’ would also seem to transmute into a ‘glossary’. In fact the language and the thought become the same. In time of course these will again separate as we gain distance on the subject area.
It’s always been a difficult practice to theorise what is happening when it is happening.
We’re now post-digital (so some claim) which I read as meaning: ‘we’re no longer confused about what it is and now we feel comfortable’. This means that people are looking to the horizon as if it’s the present. 4k now - 64k tomorrow - and why not? There’s a changing paradigm to be witnessed here. Digitality requires numerical representations of whatever the digital device is dealing with. Numerical equals mathematical. But there are ways of generating data that are not mathematical - within the hologram for instance. Pure data captured without mediating light through maths.
CODA
To try to make all of this a little clearer, here are some defining criteria I offer for Digital or Data Cinematography:
a) The optical pathway is 35mm or above (if you research the reason that 35mm film was set at 35mm, you'll see it could have been derived from manufacturing techniques for photographic usage - that is, from what was technically and industrially possible at the time).
b) It generates a progressively based image flow, relating to a specific time-base, as opposed to an interlaced image flow (one full frame of information at a time, rather than a field-based workflow).
c) Like one of its predecessors, film, it holds the image in a latent state until an act of development (or rendering) is applied - but, unlike film, it is non-destructive of its prior material state.
d) Its capture mechanism, though generating a non-destructive, non-compressed data pathway from which an image can be reconstructed, does not have that as its sole intent as a medium or method of capture (this distinguishes it from digital video, whose sole intent is to generate images in a compressed manner from less-than-35mm optical pathways).
e) The latter three qualities are also base characteristics of many developing digital technologies - for instance, real-time mapping of environments requires capture from at least three infra-red imaging sources (cameras used as sonar-like devices) running at 25fps at a 'reasonable' resolution.
Digital cinematography is more than just capturing images: it is a portal onto a digital landscape so far unexplored because of its apparent function as an image-capture medium, i.e. remediation.
As a conclusion, this short list may be satisfying or unsatisfying. There are many other ideas to work through, and many developments coming that will need similar examination as this technology grows and changes.