As cinematographers, we are certainly more than whatever equipment we have available to us. However, we should not, for a moment, underestimate the endless ways our camera equipment defines how we work and conceptualize our craft. If we suggested an underwater shoot with remotely controlled equipment to the Lumière brothers, they would laugh in our faces. But if we showed them, it would open their eyes to the fantastic production gear available today. Of course, this innovation extends far beyond the camera, but that is our focus today: cameras themselves and their recording formats. From chronophotography and Bolex to IMAX and consumer cinema cameras, let's take a brief look back at the technologies that evolved into the array of cinematic devices we operate today.
Part I: Cameras
Although the concept of a camera can be traced back to the Middle Ages (and even further), the cinematic image begins in the 1800s. In 1845, Francis Ronalds could be said to have produced the first "time-based images," creating a clockwork-driven mechanism that pushed a photosensitive surface past the camera's aperture. More complex variations of this concept emerged in the 1870s and 1880s in the form of Eadweard Muybridge's motion studies and Étienne-Jules Marey's chronophotography. In 1888, Louis Le Prince likely created the world's first motion picture, shooting with a single-lens camera after earlier experiments with a 16-lens design. Still, the motion picture camera as we know it (more or less) emerged by the mid-1890s, independently produced by the Lumière brothers and William Dickson (working under Thomas Edison).
These large wooden image-taking boxes spurred the relatively quick adoption of motion-picture entertainment, and numerous rival devices followed. Over time, the hand crank gave way to motorized drives and a (somewhat) standardized 24 frames per second, while cameras came to accommodate different format sizes and lenses. However, the two developments that would leave the most significant mark on early cameras were the introduction of sound and color.
Although both had long development histories, the pair came into heavy rotation during the 1930s. Sound came first, with various contenders for "first": Don Juan's (1926) synchronized score, The Jazz Singer's (1927) moments of synced dialogue, or even, reaching much further back, The Dickson Experimental Sound Film (1894). Early films recorded sound externally via a sound-on-disc device like the Vitaphone, which was then manually synchronized with the projected film. But the late 1920s marked the turn toward optical sound printed on the filmstrip itself, which became a mainstay. This, again, came in many flavors, from "Movietone" to "Photophone," but all of them claimed part of the filmstrip's real estate for the soundtrack. Technicalities aside, sync sound captured on set had a more considerable impact on the filmmaking business: it reduced film sets to places of silence. Where a 1910s director could direct camera and action in real time, calling out during the performance, the addition of sound required a quiet set, which, arguably, placed a greater focus on blocking, rehearsing, and pre-planning the action. Moreover, the loud cameras themselves needed to be hushed, which resulted in their encasement within huge muffling boxes, making them completely unwieldy and, ultimately, condemning them to remain on a tripod.
Although color introduced an entirely new dimension for audiences, it remained restrictive for camera operators. It has a similar history of competing innovations, until much of Hollywood settled on Technicolor and its three-strip process. Here a single camera recorded simultaneously onto three strips of black-and-white negative, each sensitive to only one of red, green, or blue. Despite being a technical marvel capable of producing larger-than-life color epics, the process greatly constrained the cinematographer. Running three film strips, and thus three magazines and mechanisms, proved much louder, making the process largely incompatible with sound recording. A specially produced muffling box, known as a "Technicolor blimp," was developed to hush the camera, but in use it brought the camera's total weight to over 200 kg (around 440 lbs), making it even more unwieldy. As such, cinematography of this era remained locked down, covering the majority of scenes with static shots, occasionally spiced up by pans, tilts, and dollies. Thus ensued the clinical and somewhat bland camerawork of Hollywood's so-called "Golden Age," graced by the novelty of color and sound but lacking freedom in camera operation.
At the same time, away from the studios' expensive and cumbersome equipment and mindset, a different evolution was occurring. Cameras were getting smaller and available to more people. As early as 1911, motion cameras were becoming portable: Kazimierz Prószyński's fully automatic Aeroscope allowed for whole new applications, from handheld work and reportage to aerial photography. But the landmark device here is undoubtedly the Bolex H-16, released in 1935; the name still resonates today thanks to its classic design and constant updates over the decades. This camera arguably had a much greater impact on the public conception of cinema than any of its studio counterparts: its small 16mm recording format and comparatively low price point offered many individuals outside the studio system the opportunity to produce independent films. So the motion picture slowly became an art form that ordinary people could also partake in.
Leaping forward to more recent history, we can see a similar divide in the advent of cassette and digital media-making devices. Initially, video cameras were designed for television broadcasts, their massive bodies limiting them to studio pedestals. However, as technologies developed, the devices quickly shrank enough for consumer use, first with Sony's Betamovie BMC-100P in 1983, which promptly evolved into smaller camcorders such as Sony's famous Handycam series, which continues today.
As innovation continued, tape slowly gave way to digital, and recording devices became ever more available to consumers. The introduction of video on the DSLR was a game-changer. Despite vast variation in quality, with many low-end cameras offering poor functionality, the DSLR offered consumers and entry-level professionals the opportunity to capture video within a somewhat upgradeable package, thanks to interchangeable lenses. In this sector, cameras like the Canon 5D Mark II and, later, the mirrorless Sony a7S reigned supreme. Still, the unbelievable wealth of movie-making devices on the market meant that almost anybody could now afford to point, shoot, and become a filmmaker.
Of course, to suggest that the masses adopted the Bolex in 1935 would be hugely reductionist, as the camera was still relatively expensive, niche, and in short supply. However, it was undoubtedly one of the first steps in bringing filmmaking to the masses.
It is also important to acknowledge the influence of the camera phone, which today allows the majority of the population to reach into their pocket, take out their phone, and snap a picture or film a video. Again, this has had a significant impact on our conception of image-taking. In recent history, we have seen a considerable leveling of the playing field. For example, the image of a teenager dancing in their room, or a dog falling down a flight of stairs, can draw more views than many feature films, making such clips important cultural relics in their own right. Internet culture aside, the prevalence of video-taking devices undoubtedly sees us producing and consuming more video than ever, exposing most of us to both the complicated responsibilities of creating media and the constant problems of consuming it.
But, of course, the majority of what we would classify as "cinematography" doesn't originate on camera phones, camcorders, or DSLRs, despite their ever-encroaching inclusion in the canon, from Tangerine (2015) to The Blair Witch Project (1999). Instead, our minds jump to RED, ARRI, or IMAX cameras. And although these cameras make up the cinematic staples, I would argue that their impact is perhaps smaller than the previously mentioned developments, as each incrementally increases technical fidelity in a smaller package, with few introducing an entirely new technological phenomenon. That is not to downplay, however, how the rise of the owner-operator has overhauled much of the industry and moved it toward freelancing rather than studio-based work. The RED ONE, released in 2007, was instrumental in this change and revolutionized digital filmmaking. Although still a hefty and expensive camera, it allowed dedicated freelancers to purchase their own equipment, offering newfound freedom to create theater-grade content entirely independently, and on a much broader scale than before. Not to mention that digital quickly introduced a wholly new filmmaking workflow. Again, the value of the RED ONE lies in liberating its users.
In short, the cameras of the 2000s onward have managed to decentralize film production at all levels, freeing professional productions from institutional shackles and permitting newcomers a cheap and easy way of creating their first films. To be clear, a gap still exists between the equipment used at the top and bottom of the industry, but it now appears more a gully than the canyon it once was.
Part II: Recording Formats
Although cameras are the machines we interface with directly when shooting, it is essential to remember that each camera is crafted to accommodate its recording format. As such, the history of recording formats has led many of the revolutions in camera design, and thus in cinematography.
We evolved from photosensitive plates and early film stocks to Technicolor's complex three-strip color process, which required not only three film strips to be exposed but also an intricate post-production process to combine the negatives into a single multi-colored image.
Looking at the film stocks themselves, we are familiar with an array of formats, such as 35mm, 70mm, 16mm, 8mm, and CinemaScope, and less familiar with others, such as 9.5mm, 13mm, and 8.75mm. Each format is shot on a specific gauge (stock width) at a particular aspect ratio (the frame's width relative to its height). Our familiarity aligns strongly with their popularity, and as early as 1932, Hollywood became largely unified around the Academy format: 35mm, with a 1.375:1 aspect ratio.
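As an aside, an aspect ratio is nothing more mysterious than the frame's width divided by its height. A minimal sketch (the Academy aperture dimensions below, roughly 0.866 in by 0.630 in, are nominal figures used purely for illustration):

```python
def aspect_ratio(width: float, height: float) -> float:
    """Return the width-to-height ratio of a frame, rounded to three decimals."""
    return round(width / height, 3)

# Academy format: roughly a 0.866 in x 0.630 in exposed frame on 35mm stock.
print(aspect_ratio(0.866, 0.630))  # -> 1.375, the classic Academy ratio

# Other common ratios, expressed from frame dimensions:
print(aspect_ratio(16, 9))    # -> 1.778, modern HD video
print(aspect_ratio(2.39, 1))  # -> 2.39, anamorphic widescreen
```

The same one-line division explains why a "1.375:1" label and a pair of physical gate dimensions are interchangeable descriptions of the same frame.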
With formats endlessly trying to outdo one another, and with the popularization of widescreen by Cinerama in the 1950s, aspect ratios began to shift wider. Many of these new widescreen formats had a remarkable effect on the way films were shot: both Cinerama and Vitarama required multiple cameras shooting simultaneously, with the resulting film projected onto curved screens. Much as with Technicolor, operating became extremely cumbersome, resulting in a large volume of scenes being shot statically. Nevertheless, Cinerama, CinemaScope, and Panavision remained unmatched in the sheer scale of their cinematic output.
Concurrently, smaller formats were also being released. These formats, like the portable cameras they were intended to work with, reached wider audiences, particularly outside the studio sphere, and until the advent of cassette and, later, digital, they were the preference of a large body of experimental filmmakers.
Cassette and digital brought with them unimaginable freedom. As both are re-recordable formats, meaning one can delete and replace old footage with new, the options for filmmakers became almost endless. Instead of relying on expensive, cumbersome film reels, one could now shoot almost continuously, interrupted only by the capacity of the tape or memory card. As such, again, the entry price of being a filmmaker diminished. Russian Ark (2002), a genuine one-shot feature film, is a demonstration of this digital miracle. By recording uncompressed onto a hard disc from the Sony HDW-F900, the crew was able to accomplish its 90-minute single take without requiring the hidden cuts found in Rope (1948) or Birdman (2014).
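To get a sense of why uncompressed digital capture was remarkable in 2002, a back-of-the-envelope estimate helps. The figures below are illustrative assumptions (1920x1080 frames, 8-bit RGB, 24 fps), not the actual Russian Ark pipeline, so treat this purely as an order-of-magnitude sketch:

```python
# Rough estimate of uncompressed HD storage needs for a 90-minute take.
# Assumptions (illustrative only): 1920x1080 frames, 3 bytes per pixel
# (8-bit RGB), 24 frames per second.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3
FPS = 24

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL  # ~6.2 MB per frame
rate_bytes_per_sec = frame_bytes * FPS          # ~149 MB per second

minutes = 90
total_bytes = rate_bytes_per_sec * minutes * 60

print(f"{rate_bytes_per_sec / 1e6:.1f} MB/s")        # 149.3 MB/s
print(f"{total_bytes / 1e9:.0f} GB for 90 minutes")  # 806 GB for 90 minutes
```

Hundreds of gigabytes for a single continuous take, at a time when tape cassettes topped out far below that, illustrates why recording to hard disk was the only practical route.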
Furthermore, the film’s post-production workflow displays another mainstay of digital cinematography—the importance of post-processing. The shot finds many moments reworked through compositing, color correction, and digital focus pulls. The relative ease of producing these effects within a digital framework, compared to its difficulty in film, again has had a substantial trickle-down impact whereby amateurs and professionals alike can work with green-screens and special effects compositing.
It is easy to treat digital filmmaking as magic and celluloid as physics. Hence, we must consider the concrete component of digital capture: the sensor, the "film" of our digital cameras, if you will. Although I won't explain the immensely complicated physics here, it is essential to remember that these sensors, much like the film stocks before them, come in vastly varying formats, ranging from the ARRI Alexa 65's 65mm-format 6.5K sensor all the way down to Micro Four Thirds and camera-phone CMOS sensors. But as time goes on, as with all these technologies, better cameras become more abundant in the consumer market, allowing individuals to experience the relative glory of full-frame mirrorless cameras that dwarf the capabilities of consumer alternatives from even five years ago.
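One practical consequence of these varying sensor formats is the "crop factor": the ratio between the diagonal of a full-frame (36 x 24 mm) sensor and that of a smaller one, which describes how much narrower a given lens's field of view appears. A minimal sketch, using nominal sensor dimensions for illustration:

```python
import math

# Diagonal of a "full-frame" 36 x 24 mm sensor, ~43.27 mm.
FULL_FRAME_DIAGONAL = math.hypot(36.0, 24.0)

def crop_factor(width_mm: float, height_mm: float) -> float:
    """Ratio of the full-frame diagonal to this sensor's diagonal."""
    return round(FULL_FRAME_DIAGONAL / math.hypot(width_mm, height_mm), 2)

print(crop_factor(17.3, 13.0))  # Micro Four Thirds -> 2.0
print(crop_factor(23.6, 15.7))  # typical APS-C     -> 1.53
```

So a 25mm lens on a Micro Four Thirds sensor frames roughly like a 50mm lens on full frame, which is why the same focal length "means" something different on each format.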
But with the introduction of new recording formats comes a need for institutional change in projection technologies. At present, the industry has shifted from film to digital projection. The Digital Cinema Initiatives (DCI) consortium of the mid-2000s pushed for massive adoption of these new technologies, resulting in a shift to digital projection in some 98% of worldwide cinemas today. However, with the renaissance of celluloid cinema driven by a handful of auteur directors while the industry's ability to shoot and project those formats wanes, it is hard to foresee the future of celluloid, especially as it sits in the shadow cast by digital.
Part III: The Cinematographer
Technology is, and always will be, fundamental to cinematography. As such, it will shape the tools available to us and how we can apply them. However, in the face of this, we must remember two important caveats. Firstly, the equipment itself will always change and advance; a skilled cinematographer will know which equipment to select and where to apply it, occasionally resisting the urge to reach for the shiniest new gear. Many even develop their own technologies to fit their specific needs: think of Garrett Brown and the invention of the Steadicam, Gregg Toland and the color chart, or Vittorio Storaro's Jumbo light. Thankfully, in a world of digital RAW and LOG images, the ability to select the right tools extends into post-production, allowing for extensive reworking and, on occasion, fixing of images after the fact. But secondly, and more importantly, we must remember that we are the ones operating the equipment; the equipment does not control us. It is our hands and eyes that guide the tools toward the ultimate goal: storytelling. The apparatus may, to an extent, define what is possible. But the real innovation lies within the cinematographer.