Thursday 2 January 2014

High Definition in 2014

2013 was an eventful year, not only in terms of how moving image capture has developed, but also in terms of opening a new research centre in Cinematography at the University of the West of England. During the year 3D was superseded by 4K as the key buzz phrase – not that 3D has gone anywhere; in fact, with Gravity, 3D has in my opinion finally come into its own. What I mean by this is that in that film the camera moves around its subject, and the extra level of depth generated by 3D adds something real to the experience.

As for 4K, cinematographers have been working at that resolution with the Red One since 2008 – though whenever a term like 4K is used, arguments break out about what it means: can a compressed signal ever really deliver its supposed resolution, when so many factors determine the true resolution of the final image? One of my earlier artworks uses this technology, but for some while the problem with 4K has been how to display the captured image. All the key manufacturers now make 4K displays, and the manufacture of the domestic TV screen is getting closer and closer to the quality of the professional display, so prices are coming down. I intend that the new research centre buy a 4K display early in 2014 so that we can display what we capture.
http://www.visualfields.co.uk/ANSEL.html
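
To put the compression question in perspective, here is a rough, back-of-envelope calculation – my own illustrative figures, assuming DCI 4K at 24 fps with 12-bit RGB, and a delivery bitrate of 50 Mbit/s picked purely for the sake of example:

```python
# Rough arithmetic: uncompressed 4K data rate vs. an assumed compressed
# delivery rate. All figures are illustrative assumptions.

width, height = 4096, 2160          # DCI 4K frame
channels, bits_per_channel = 3, 12  # 12-bit RGB
fps = 24

bits_per_frame = width * height * channels * bits_per_channel
uncompressed_gbps = bits_per_frame * fps / 1e9
print(f"Uncompressed 4K/24p: {uncompressed_gbps:.1f} Gbit/s")  # ~7.6 Gbit/s

delivery_mbps = 50                  # assumed delivery bitrate
ratio = uncompressed_gbps * 1000 / delivery_mbps
print(f"Compression ratio: ~{ratio:.0f}:1")                    # ~150:1
```

A compression ratio on the order of 150:1 is exactly why the arguments break out: at that level of reduction, what the signal actually resolves depends far more on the codec than on the pixel count.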

Earlier this year, in collaboration with the University of Bristol and BBC Research and Development, I was privileged to lead several shoots in High Dynamic Range, intended to be displayed in HDR as well – a world first, and because of that the code is still being written, though we can display a basic edit of the piece at 8 bit, with one track spread across the dynamic range of the Dolby 6000-nit screen.

The eye/brain system has around 14.5 orders of magnitude of response, of which we use about 5 orders at any one time – so on going into a starlit environment we slide those 5 orders of sensitivity down to the bottom of the 14.5-order scale, and on entering a desert landscape in bright sun we slide them up to the top, thus keeping the highlights properly exposed for viewing. On this scale, 1 order of magnitude is vast. So the difference between the eye/brain system and what is displayed is immense: the screen you are viewing at best displays 2–3 orders of magnitude, while the HDR screen we are capturing images for at the University of Bristol spans 5 orders – the same as the eye/brain pathway at a single adaptation. Using the term orders of magnitude means that the scale is not arithmetic but geometric – the highest values of the scale are millions of times those at the bottom. The eye/brain system is truly magnificent in its capacity.
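
As a quick sanity check on those figures: orders of magnitude are base-10 logarithms, so they convert directly into contrast ratios and photographic stops. A minimal sketch – the 2–3, 5 and 14.5 order figures are simply those quoted above:

```python
import math

def orders_to_contrast(orders):
    """Contrast ratio spanned by a given number of orders of magnitude (base 10)."""
    return 10 ** orders

def orders_to_stops(orders):
    """Equivalent photographic stops (base-2 doublings) for the same span."""
    return orders * math.log2(10)

for label, orders in [("typical display", 2.5),
                      ("HDR screen / eye at one adaptation", 5.0),
                      ("full adaptive range of the eye/brain", 14.5)]:
    print(f"{label}: {orders_to_contrast(orders):,.0f}:1, "
          f"~{orders_to_stops(orders):.1f} stops")
```

Five orders works out at a 100,000:1 contrast ratio, or roughly 16.6 stops – which makes clear just how far a conventional 2–3 order display falls short.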

Later in 2014 we expect to have combined the two tracks we shot into a truer form of HDR. The most surprising – and disturbing – element of the shoot was learning that a hundred years of cinematographic practice had to be turned on its head. In exposing for 6 stops of latitude between the two exposures, I could only monitor the higher exposure, which sat 3 stops above the correct exposure, in the knowledge that the true exposure was set in virtual space; as with film, I had to have ‘faith’ that the end image would be exposed properly. One track recorded 3 stops over, one track 3 stops under – so I had to knowingly gather an overexposed image in the hope that the two could somehow be combined and a decent image delivered. When the code for the recombination was finally written, I was relieved to find that it had worked. It left me knowing intellectually how we had achieved the end result, while emotionally unable to square it with my on-set experience of searing over-exposure. Interesting.
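
To make that recombination concrete, here is a minimal sketch of how two linear-light tracks, bracketed 3 stops either side of the nominal exposure, might be merged into a single radiance estimate. This is my own simplified illustration, not the code written for the Bristol/BBC project; it assumes linear pixel values scaled to [0, 1] and uses a simple confidence weighting against clipped and noisy pixels:

```python
import numpy as np

STOPS = 3  # each track sits 3 stops either side of nominal exposure

def merge_bracketed(over, under):
    """Merge two linear-light exposures (float arrays in [0, 1]):
    `over` shot +3 stops, `under` shot -3 stops from nominal.
    Returns a linear radiance estimate at the nominal exposure."""
    # Scale each track back to nominal: the over track saw 2**3 times
    # too much light, the under track 2**3 times too little.
    radiance_over = over / (2 ** STOPS)
    radiance_under = under * (2 ** STOPS)

    # Weight each sample by trustworthiness: the over track is unreliable
    # near clipping (values near 1), the under track is noisy near black
    # (values near 0).
    w_over = np.clip(1.0 - over, 0.0, 1.0)
    w_under = np.clip(under, 0.0, 1.0)

    total = w_over + w_under + 1e-8
    return (w_over * radiance_over + w_under * radiance_under) / total
```

The disorienting on-set experience corresponds to the `over` track: monitored on its own it looks searingly bright, yet once divided back down by 2**3 it contributes the properly exposed shadow and midtone detail.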

Over the year I set about establishing the research centre, which meant presenting to various academic research committees; with luck, by April 2014 we shall be authorised to proceed. Meanwhile I began attaching visiting professors, the first being Emeritus Professor Chris Meigh-Andrews of the University of Central Lancashire. Chris is a professor of electronic and digital art, and adds his weight to investigating the histories that have been written on the subject (including the second edition of his own ‘A History of Video Art’), as well as to the question of where we are and where we are going in the digital era. For my own summation of that issue, you can read a short paper on the Future of the Moving Image and how it will affect the production of Art at this URL: https://www.academia.edu/3807490/The_Future_of_the_Moving_Image.

On another issue: Arts and Humanities subjects have traditionally theorised their material through various strategies, such as dialectics, structuralist analysis, semiology and so on. But there is now a sense within academia that, though these have been useful tools, they are no longer fit for purpose, given the constantly and rapidly changing landscape of the digital era. In the UK, the Arts and Humanities Research Council has called for new ways of evaluating subject areas, and many researchers have wholeheartedly embraced empirical principles – one consequence of which has been the adoption of cognitive neuroscience as a primary route, using eye-tracking devices and fMRI scanners, and combining such testing with social-science practices of evaluating the data, or ‘evidence’.

One of the issues with this practice is that truth is at best implied: a hypothesis is set up, an experimental test administered, and if the cards fall right the implied truth of the hypothesis is ‘proved’. It can be argued that deep within the ideological position taken by empiricism is in fact a gnosticism, argued by many cognitive neuroscientists: that there is a grand human project to export our entire knowledge into exograms – sites of memory outside the person (a book, a computer, a map, hieroglyphs, etc.) – and it follows that the final manifestation of this project is the export of all knowledge into data. The final outcome of this act is as yet untheorised by cognitive neuroscientists, but I have proposed the concept of velocitisation to help describe acts on the internet that express behaviours speaking of human change. With a simple gesture like the Harlem Shake, one person gestures mimetically that everyone should ‘do their own thing’, and later in the piece all then gesture that difference mimetically. What this describes is a positive response to change, rather than a dystopian one. But there is as yet no theoretical position on this behaviour, and the social sciences have only just begun to take up the challenge.

Enter Complexity Theory, born of mathematics and physics, and of the human attempt to comprehend multifarious complex behaviours. Complexity theory seeks to theorise the complex, and has a set of strategies for dealing with this apparent limitlessness: it limits the possibilities through rules drawn from the complexity that has been witnessed. Of course, what seems limitless is actually limited, and so this is a mathematics intended to pick up at the point at which human systems give up on numbering and categorising. It is the point at which we might say ‘I saw many starlings in a murmuration and they seemed to act together as they flew’, or that a weather system is too complicated to describe, but that it worked through a series of states derived from prior states. This is where we know that a system is complex: it may do one of several things, science does not yet know which way it might go, and possibly we will never be able to predict its exact outcome.

So, since the 1940s, when Ilya Prigogine began thinking about complexity, we have come to theorise that a system can be ‘complicated’ without necessarily being ‘complex’: the complex is complicated, but goes one stage further in being able to enter new states through ‘emergence’. A car engine is complicated, but will only ever remain so. A storm is complicated in terms of the many factors that come together to form it, but it is also ‘complex’ because several other states may emerge – a hurricane, for instance. So complexity is about richness: chaos is no longer simply ‘chaotic’, because within that chaos lies a set of variables which can result in ordered states, and those states can then become further ordered, or disordered.
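
The starling murmuration mentioned above is in fact a textbook demonstration of emergence, and it is simple to sketch: give each bird three purely local rules – match your neighbours’ heading, drift toward them, avoid crowding them – and ordered flocking emerges with no bird knowing the global pattern. The rules and coefficients below are my own toy choices, not a validated model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEPS, WORLD, RADIUS = 200, 100, 100.0, 10.0
pos = rng.uniform(0, WORLD, (N, 2))   # bird positions
vel = rng.normal(0, 1, (N, 2))        # bird velocities

for _ in range(STEPS):
    diff = pos[None, :, :] - pos[:, None, :]      # offsets bird i -> bird j
    dist = np.linalg.norm(diff, axis=-1)
    near = dist < RADIUS                          # each bird's neighbourhood
    np.fill_diagonal(near, False)

    for i in range(N):
        nb = near[i]
        if not nb.any():
            continue
        align = vel[nb].mean(axis=0) - vel[i]      # match neighbours' heading
        cohere = pos[nb].mean(axis=0) - pos[i]     # drift toward the group
        separate = -(diff[i][nb] /
                     (dist[i][nb, None] ** 2 + 1e-9)).sum(axis=0)  # avoid crowding
        vel[i] += 0.05 * align + 0.01 * cohere + 0.5 * separate

    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel / (speed + 1e-9) * np.clip(speed, 0.5, 2.0)  # bound the speeds
    pos = (pos + vel) % WORLD                              # wrap-around world

# Mean alignment of headings: near 0 for disorder, near 1 for an
# ordered, murmuration-like state that no single rule specifies.
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("flock alignment:", np.linalg.norm(headings.mean(axis=0)))
```

Nothing in the three rules mentions ‘a flock’, yet a flock appears: that is the distinction between the merely complicated and the complex.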

But the main point here is that what seems too much for the human system to ‘count’ can now be mathematically modelled and therefore described – at least in some part. Before, we described such a thing as having a number too large to be counted; now we can say we no longer need to number it in its description – suffice it to say that it is to be considered complex and may act in one of several describable ways. This too must have an impact on human consciousness with regard to the introduction of different frame rates, dynamic ranges and resolutions – right now the young express a preference for higher frame rates while the old prefer lower ones. Why is this? (Is it related to higher frame rates in computer games?) What does this preference say about human evolution? Is it temporary, or indicative of eye/brain development? And so on and so forth…

So when the research centre begins its activity we will look as much toward future technologies as toward the past (a critical issue will be the re-investigation of how past histories have been told and what they have included as ‘important’ in the telling). We will look at technology as much as at the human system that utilises it; we will take account of the biology of the human system – the equipment each human is endowed with – and we will also look at the cultural systems that encompass the individual and help create meaning and significance in the production and consumption of moving images. We will look at the cognitive systems employed by the human species, and at the situation the individual finds him- or herself in with regard to the cognitive distribution of information.