Machinations of the Senses

Daniel Deshays

“I don’t believe what people say to me, I believe the way they say it,” Christian Bobin said. This is where the sonic is located, in the manner of speaking, in the way in which bodies move and silences occur.

How Conscious of Sound Are We?

As strange as it might seem, it would be hard to argue that sound is something the public, or even the professionals whose work is to create it, actually think about carefully and consciously. Sound in film is rarely addressed from a theoretical perspective. Few books are dedicated to it beyond purely technical works, and those rarely consider the aesthetic conditions of sound production. The question of the existence of sound is abandoned in favor of normalizing procedures for producing it, absent any consideration of content. But isn’t sound a central element of all exchanges among living beings and therefore a powerful indicator of the very nature of those exchanges? If this is so, then the most exacting attention ought to be given to it. How is it that cinematic sound is so rarely consciously addressed? Faced with the normalization of forms, is there a place for a poetics of the sonic?

A great dichotomy between different manifestations of sound exists in our world. The most visible one is the radical distinction between noise and music.[1] Just like ugliness and beauty, chaos and pure spirit, sound is envisaged in the form of oppositions. The noise of chaos, perceived as annoyance, is opposed to the world of music, a domain of art that enchants listeners the world over. At the very heart of cinema, on the sound mixing console, this approach can be found in the division of sound into three distinct strands: 1) “voices” derived from literature (script and dialog), 2) “music,” another type of literature (musical scores and productions created by studios), and its derivative, the original soundtrack,[2] and finally, 3) “noise,” which belongs to sound design, covering over silences (individual sounds, ambient sounds, sound effects).[3] An approach that breaks the elements of sound writing into pieces like this, elements that nonetheless all possess the same faculty for constructing meaning, accepts the categorizing of sound through oppositions. The absence of a conscious and more global reflection about sound stymies the kind of work that should be taking place concerning the common qualities of all sonic forms. And what remains is simply to view sound indifferently by presenting it universally as a “technical object” distinct from artistic objects.

Does the immateriality of sound induce us to abandon it to the purview of tools, techniques for manipulation, the black box of technological machinations? The creative work we undertake with sound, however, is no more technical than what we do with the world of images, with its beautiful army of soldiers drafted to create the techniques of camera movement, framing, point of view and lighting on set. And isn’t script writing itself even more technical? Have we forgotten all the hours spent learning how to read, write, and refine the calligraphy of letters, the hours spent crafting a sentence or a story? Doesn’t an apprenticeship in sound require a similar process? Why is this object, “sound,” not considered in its own right as one of the givens of the creative process, something as fundamental to cinema as the image? Why does the teaching of cinema not reflect this? Have the teachers never listened to the films they analyze? And I don’t mean the original soundtracks, a separate topic in its own right. Have they not heard the sonic matter emanating from those overexcited bodies in their dramatic games—running, collapsing, crying, laughing or vomiting in films? Wouldn’t that sonic matter be the very life of the living, the motor of action, the very heart of the expression of relations portrayed on screen, which constitute what we have been calling since the 1930s “synchronous sound cinema”?

It is doubtless easier to imagine the succession of technical tasks required to make the sound of a film: precise manipulation of the microphone boom, controlled management of recording levels, long and attentive sound mixing, the skillful constructing of sound effects and musical recordings, in short, the patient requirements of editing. They determine successive work stages, but it is hard to understand why there is such a cruel absence of connection among these stages, a connection which directors could and should provide if only they were trained to do so. Every director emphasizes the importance of sound. But what are they actually talking about? How often are aesthetic choices actually absent at the decisive moments in the making of a film: scripting, shooting, editing, sound mixing? Wouldn’t every moment in the construction of sound actually require this fundamental attention? Far be it from me to claim that one cannot find any creative thinking about this in the history of cinema; nonetheless there is a relative paucity of it given the large number of films.[4]

The dawn of the digital, despite the new potential it has revealed, has not changed things substantially. We are witnessing a cruel separation of work on image and sound in editing. A lack of budget (but who decides how it is spent?) has sidelined the very experts who were in charge of the final stage of sound mixing, of matching images and sounds.

If the director does not think of sound as a determining factor in the global cinematographic construction of a film, does this mean that sound is a matter of course, that it is implicit? Is the sonic simply deduced from the image? Would the image simply and naturally entail a sound associated with it? How soon until we have roboticized sound?

How did we get here? We need to remember Paul Falkenberg, who, if we are to believe him, was the first to assume the role of sound editor. The role was created at the beginning of the 1930s not to work on the image but on the synchronous sound that had just appeared and that added an enormous task to the editing of images.

We are very far removed from the original importance accorded to sound editors. In our present moment, the sound editor, now curiously called the “image editor,” addresses sound only in the “directs.” During the silent era, directors edited their images. Chaplin, Griffith, Flaherty, Vertov or Keaton edited their own films, aided only by an assistant and a few hands maneuvering the reels. Why should we separate sound mixers from their sound, if sound is an independent element, and leave sound to editors who are almost never trained in sound film editing? Falkenberg worked with Lang to edit Die Nibelungen, and he then took charge of both image and sound in M in 1931. If one listens to Lang’s first two sound films, M (1931) and The Testament of Dr. Mabuse (1933), one can grasp how much he thought about the sound/image link as an integral part of production. In sum, in terms of creativity and diversity of forms, what will be the result of an internationally shared perspective that approaches sound under the auspices of separation?

The Reasons for Neglect

There are, of course, reasons for this neglect. The first would be a human lack of consciousness about our own bodies, about our perceptions, particularly about listening. Our body needs to sustain a certain economy, which requires that we suppress full consciousness about its workings. The brain has to choose the strictly “necessary and sufficient” information amidst the data presented to it in the world. According to the physiologist Alain Berthoz, the brain must apportion its overall effort when confronted with all of the solicitations assailing our senses in a world saturated with them.[5]

The object perceived by the eye is better defined, more precisely grasped by an organ that benefits from more neuronal connections than the ear. The image is a persistent object, comfortably allowing a reassessment, if necessary, to furnish more detail. This does not mean that the ear is absent, but rather that the brain is simply content to use the sonic as a complement to what the eye perceives. It stores sounds in immediate memory in order to come back to them, given their ephemeral nature.

The necessary memorization of sound is an important element, because it implies much more interpretation by the listener than is the case for the image seen by the perceiver. This subjective process is a major factor in the unconscious appropriation of sound. Once sound dissipates, we are left with its memory trace. And we embellish it (though not always; it can be a source of suffering too) with models already present in memory, accumulated during our lifetime, models created out of lived singularities particular to each of us. Each trace is related to existing affects within us, because we remember especially what has marked our experiences: pleasure or pain. If we can agree about the nature of this or that object, the same cannot be said about the emotions that we experienced along with it. Thus, an important affective dimension, different for each spectator, accompanies sound, combined with elements of vision, themselves imbued with affects.

Compared to the acuity of the eye, our faculty for localizing sounds in space is weak, but the speed of auditory analysis exceeds that of vision and makes the ear the primary organ of watchfulness. A few milliseconds suffice to apprehend a sonic phenomenon. The ear, like the eye, processes information in a discontinuous manner, jumping from stimulus to stimulus, staying ahead, by protention,[6] of a future that is already presaged in the barely perceivable detail of an emergent phenomenon. It is a rapid sensor and thus the ideal protective organ of the human animal. The ear alerts us when confronted with tiny nuances foretelling a more or less uncertain future. Only big sonic events rise to the level of consciousness, and only if they are sufficiently powerful or repetitive, as music is, for example. Each of us lives a different experience of the world, an individual experience resulting from the various paths followed or strategies used against dangers, developed over time. If there is one thing we all have in common, it’s this animal and human capacity in the depths of our genome, which has protected us and ensured our daily survival for centuries. Although bears and wolves may have disappeared from our streets, other savage machines have replaced them.

Big sonic events, powerful ones, are not what keep us listening, but rather tiny bits of information, indications of something uncertain that invite us to “go look” more closely. The sonic plays an essential role as a secret agent, like a sleeper agent in espionage. It acts without our knowledge: listeners do not consciously hear what they are listening to, and this is precisely the secret force of sound, its capacity to act while we generally remain unconscious of it. When we leave a film screening, what sounds do we remember? None. This unconsciousness is at the heart of our problem; it is the main reason that leads us to neglect the question of listening.

Sound or Noise?

Maleficent/beneficent object of analysis: why is our relation to the flow of sound so ambiguous? A residual event belonging to an action that just took place, the result of an act, sound is the noise that so many have complained about since ancient times, mentioned throughout history by a cohort of authors. It is an event that has meaning: it expresses the nature of what just happened (a glass breaking on a concrete floor), it also carries meaning in the inflection of a voice or a musical nuance. Sound contains meaning, expresses what is being spoken about, how the voice speaks or the quality of the sound produced by an instrument.

Viewed as a dichotomy, placed in different categories, music and noise can no longer be analyzed in a way that demonstrates how they belong to the same phenomenon. Subsumed under the category of “interpretation” are two objects to which we do not listen in the same way, about which we do not seek the same information: interpretation of a literary text or of a musical work. What is common to the two, however, is their primary function: human relations mediated by the sonic; sound is the materialization of a relation. Whatever the sound may be, it carries with it the trace of a gesture. Intimately embedded within it is a direct or indirect relation with another or others. The force of sound is to be found precisely in the nuance carried by this gesture: the meaning of what is spoken (softly or violently), music interpreted more or less emphatically or functioning like the accelerator of a motorbike, ridden by an adolescent, screaming through the housing projects in the middle of the night… Examples of gestures to analyze, each with a nuance: from extreme gentleness to the violence of revolt. Somewhere between the very soft and the unbearably loud lies the perceptible, the making and the sharing of sonic matter.

Let’s consider once more the conditions of reception of sound, because the situation of listening is not simple, its character is double, located between self-protection and the desire to hear/understand.[7] To sound (an alarm) means to warn. Isn’t any emerging sound understood as a warning? Any unexpected sonic occurrence is seen as an indication of something that is to come and that must be clarified rapidly. The quest to resolve that meaning allows us to carry on and shift our attention toward the possible occurrence of another sound, and so on.

A warning about what? In any phenomenon, multiple meanings appear. But doesn’t the very idea of uncertainty—as to the origin and nature of the event—allow the undifferentiated openness of the word “sound”? If the term “to sound” has lost much of its force, we need to analyze what content is at stake in this terminology. As much as one might hope that the term would return to its earlier dimension, alas, the opposite has occurred. The notion of sound is presently used to characterize the indescribable nature of a perception: we speak about the sound of the Stones or of Miles Davis. To have or not to have “a sound” is the major preoccupation of the musician. We should note that music is most often at stake here, since the chaotic universe of noise belongs to no aesthetic categories. The daily evaluation of the quality of the world, however, is closely linked to the nature of its sounds, and we constantly consider how the world sounds during our daily movements.

In fact, in a general sense, it cannot be said that sound benefits from a broad terminological foundation. Does it need one? I do not think so. The common terminology of the sonic belongs to perception in general, and that seems to work well. Hot or cold, a noise is dampened, a sound is dry, thick, transparent or rough, fluid, granular—the sonic possesses a plastic dimension that always borders on the sense of touch.

Meaning is associated with the sound of language, and the imbrication of sound and meaning is at stake in listening.[8] But the sound of an object that is moved is also replete with meaning through the nature of the gesture that was used to move it. The articulation of the bow on a string carries with it all of the finer nuances of an interpretation. Gestures producing a word, a bodily movement, or a musical sound carry meaning; we call them interpretations, and they all belong to the same level of human relations.

Timbre versus Sound

In the case of music, the idea of sound has changed in the modern era. An aesthetic break at the beginning of the twentieth century forced us to reconsider the concept of timbre, a change provoked by the emergence of a new type of instrument tied to Lee de Forest’s 1907 invention of the triode amplifier. Electro-acoustic instruments produced novel sonic colorations (the theremin, the ondes Martenot…). The sounds of certain artists emerged, Bix Beiderbecke, Armstrong, or the “orchestral” sounds of Duke Ellington, Count Basie or Glenn Miller, leading to further periodization (jungle, swing). Jazz and blues were at the origin of these ideas about sound. After the 1950s, things accelerated as a result of the commercialization of vinyl editions of popular electronic music (rock and pop). Each period reworked the preceding one with new categories of “sound” to distinguish itself, although these sounds were all constructed around the same chords in compositions that were harmonically meager. Thus, the idea of “sound” made its entry as a new category whose role was strengthened by the diversity of new commercial types of music. With the advent of new tools—recording studios and then home studios—a musical aurality emerged, marked by new sonic concepts produced by a staff of producers associated with composers, as well as sound production engineers. Little by little the idea of sound became more diverse and formed a category that has nonetheless been under-conceptualized despite being a defining factor: “that’s a rock group with a sound.”[9]

Qualities that until recently belonged to the domain of musical notation and sound, articulated freely as readings of a harmonic framework, are now replaced by new creative tools that require new classifications. The birth of sampling, which originated in the looping of grooves of old 78 rpm records, a technique pioneered by Pierre Schaeffer, and the invention of the sequencer as well, allowed “non-certified musicians” to invade a territory that was heretofore inaccessible to them. New sampling constructions emerged as accumulations, juxtapositions and overlaps of prefabricated passages, culled from diverse works, inscribed as so many recyclings and appropriations. A succession of more or less clearly marked breaks has ensued, with each new nuance exhibiting a particular sound. Those who produce a dissidence are rewarded. It’s not a question of a big discontinuity, just a small one, at the limit of what is acceptable by the major producers, who are always looking for the next step, which is generally defined by a minimal difference. Producers can then get to work making a new product, which will immediately be duplicated by a cancerous reproduction.

It’s clear that the term “sound” in this sense is a new object that sound studies ought to analyze more carefully. The social phenomenon is the most important part of this. Economics, sociology, or anthropology: which can best aggregate this data, which, although it may not be new, has not yet received the attention that the cultural hydra of Entertainment merits?

This social phenomenon places the interpretation of the world within a musical paradigm. Cycles turn in circles and repeat their “lounge” ambiance for whomever would listen. The fact that in the face of this ambient music the social concept of ambiance should emerge is hardly surprising. A society of ambiance is actually what is promised us, an ambiance of misery, of planetary warming. Between pockets of war and terrorist attacks happening unexpectedly, it has been able, under the umbrella of “world music,” to appropriate and dissolve the singularities of the most archaic micro-societies. This ambient music has a history. It is related to what became in the 1970s “music to get high on,” associated with the advent of the culture of hallucinogens—another type of dissolution—which allowed people to become enclosed in a more or less mystic interiority, a compensation against the daily apocalyptic vision of the Vietnam War. The sound of Indian instrumentation (voice, sitar and tanpura) then emerged, breaking with vertical harmonization in favor of other modalities, better suited to producing a horizontality at once melodic and dreamlike. Diphonic chanting, like the contemporary rediscovery of the didgeridoo, is a recent sign of primitiveness, the kind that entails the rituality necessary to unite new age tribes as they tinker with “the sonic re-enchantment of the world.”

A return to the essence, that is, to the noises of the world, is what music has always sought throughout its history. It had already accomplished this metaphorically, alluding to the living forces of the world (water in the case of Debussy, natural forces in the case of Wagner, cavalcades of horses in the case of Beethoven). With the birth of ambient music (Tangerine Dream, Popol Vuh, and Brian Eno) came the will to dilution and abstraction, a simultaneous slowing down of diversity—all the diverse musics of the world tend to be brought back to a single music, and this is accompanied by a vision of a “continuous and infinite temporality” breaking with the idea of discontinuity. Harmony was abandoned in favor of the modal (in the jazz of Gato Barbieri, Pharoah Sanders, John Coltrane…), which, like repetition, represents a defiance of time. At stake was a preference for a temporal movement of a kind other than the measured rhythm of salaried work, to attain an orbital position, a cosmic dimension to which one should give all of one’s attention and confidence. The composer Sun Ra, in a return to ancient Egypt, practiced a mysticism opening onto the universe; his free music contained a will to return to primitive chaos with a kind of music that would travel to an interplanetary space beyond time…

Continuous/Discontinuous

During this same period, experimental cinema also abandoned the era defined by the break—frame-by-frame editing—to turn toward an open time that had never been explored. Warhol’s Empire was a response to the “Lumière minute,” the basic form that had constituted all of cinema. It was a single 6-hour-36-minute take, extended to 8 hours, 5 minutes by slow motion, a fixed view of the Empire State Building as its lights go on and off in the continuity of a time-machine, a new contact with the world created by a system that imposes a manner of seeing, the opposite of the permanent discontinuity that is indispensable to sight. This linearity was transformed by the notion of “real time” as soon as the digital and the Net became universal values—in the same space-time as the credit card, Paul Virilio reminds us. Henceforth sound is submitted to this new clock, all digital recordings are linked to it, and microphones placed in the four corners of the planet permanently allow us, via the Net, to follow the so-called “sonic continuity of the world,” which reigns everywhere by neutralizing geography.

Will field recording make us believe once again in the continuous accumulation of sound on a recording device? The globalization of sonic density assembled by microphones creates a non-differentiated sound. It’s a return to the deepest slow temporal movements of the universe; the chaotic sounds that emerge are reduced to the dimension of what cinema calls “ambiance.” The concept of ambiance becomes a category, that is, a certain take on uninhabited space, even if it is inhabited, defined by a distance that reveals neither asperities nor desiring relations in the exchanges that normally accompany sound. A space alive with subjectivity now encounters the idea of “ambiance” and of “sonic landscape,” characterized by distance, where the dynamic is extinguished, flattened, lacking everything tangible, in short, a space of the non-event. Every sound leads back to touch, to an unexpected close encounter of bodies, to a listening-gaze that forces me to feel a relation that overwhelms me. But the era of automated compression and dissolving produced by editing machines has become the new normal.

The Re-Centering Mechanism

Let us turn now to the spatial factor, discreet but ever present, which has invaded movie theaters throughout the planet and furnished the model for a type of representation of the world, invented by a company to which the major producers have granted a monopoly for sound systems in screening rooms, namely, Dolby sound. At stake is the 5.1 format, first used in Apocalypse Now. This was an implementation of the concept of the “environmental,” which places the viewer at the center of the universe. “Surround sound,” a ring of speakers placed around the screening room, is a framework that puts the viewer carefully at the center. The auditorium space, egalitarian since the invention of theater, and which remained as such throughout the silent film era and even into the period of monophonic sound, has been transformed into a space constructed for each individual. The goal is to create a mass market for home theater, which is an individualized version of the same framework. This is simply another, more concentrated, version of what stereophonic sound had already created in the 1960s, when it reduced the listening area to a narrow medial cone. The “immersive” is born with 5.1. This concept is linked to the notion of the “ambient,” now reinforced by the idea of centrality in order to create an impression of reality at the very center of the action, at the place of maximum solicitation of the senses. This central space is not really a throne, a place for a prince, but rather, the space of the soldier, who must imagine how to find ways out of the infernal and permanent battle that is raging all around. The “sweet spot” is conceived as a central place, akin to video games, which the viewer occupies. Behind this mechanism it is easy to detect the image of the Net. If indeed we weave a part of the web every day, it is a much rarer experience to move, to be de-centered from it (museographic experiments) and to occupy a series of strategically moving places. The spider strategy has been superseded by Dolby Atmos (2012), a product of Ambisonic recording, which has extended the listening zone in all directions. Previously, this had only exceptionally been exploited (IMAX Géode domes using 12+4 channels). We were situated in the center of a crown or two-dimensional circle, but now we are placed under a celestial dome, a half sphere of sound, no longer situating us at the center of the world, but rather in movement in the universe that surrounds us—and we are plunged into the middle of it (Gravity).

The Sonic University

In terms of theory, the second half of the twentieth century has witnessed an explosion of research and applications: concrete and electroacoustic music, sonic landscapes, sonic repertories, plastic sonic art, ambiance, sound effects, synthesized sounds, multisource diffusion, live spectacle sound. But is this diversity sufficiently studied? Notions, concepts and theories have been attached to vague constructions—occasionally deconstructions—envisaging sound from a generalist angle that suited researchers more or less for the purposes of their own analyses. Since researchers’ work has depended on their particular points of entry, multiple fields exist that do not always complement one another. What can sound studies do with this disparate set of approaches if it deals only with produced sound, as a defined object? Why should we not address the problems of sonic conception? Beyond contemporary practices and faced with a history of sound that is difficult to write (unsurprisingly), how do we return to a sonic life whose historical practices have disappeared? How do we find the right perspectives and avoid entry points, such as theories of cinema or theater, which will result in parallel research worlds whose practitioners are concerned only with operative usefulness for practical applications (cinema studies, theater studies)? What would be the nature of a link established among these practitioners in order to envisage studies that begin with the very practices at stake?

The differences among the ways in which sounds are expressed seem to be the reason why they are considered separately, and this hinders any common vision of a “globally sonic” phenomenon that one should perhaps gather together under a single denomination, namely, “the world of noises.” I do not mean to suggest that the global field is one of nuisance (which actually does exist, particularly in music that dominates the market). Nor do I mean to submit the field to the position adopted by Luigi Russolo in his Art of Noises, which would claim that all sounds are musical, as John Cage and R. Murray Schafer later maintained. The musician’s stance is not the sonic paradigm I would adopt. The confusion between the terms noise and sound in general usage indicates an indifference toward nuances. Noise is both a soft rustling and a loud clatter, just as sound, when it is not the “sound of” something, awaits qualifications that will determine with more certainty its level, its substance and its duration.

The origin of the sonic event in an impact or a friction puts sound and noise in the same place. The two originate in the same phenomenon. Whether we consider it to be verbal, or musical, or noise does not change the fact that a voice that insults or a door that slams carry the same meaning: an argument. They thus have the same value. If sound is disaggregated into different sectors, we cannot consider the modalities of its inscription more globally nor raise the question of differences, as I have argued. “Those who work in recording believe they are faithfully transcribing the real. However, one does not record rock music in the same way as classical music, and the sound track of a film is very unlike that of a play,” as I wrote on the back cover of my second book.[10] I wanted to highlight the fact that when one simply compares practices side-by-side, differences are revealed, allowing us to see the staging of sound specific to each practice, that is, the result of a certain kind of writing (I wasn’t so sure of this at the time). Only in this way, by looking conjointly at practices in different fields (music, theater, cinema…), can we make the common variables become apparent. Only by placing ourselves thus can we succeed in thinking theatrical or cinematic discontinuity compared to musical continuity, for example. Thinking the sonic should be built around specific observations about its construction. The object of study is complex because it is shifting, ephemeral, ever changing, and even unconscious. In addition, it is an object in play among others—the play of sounds against images, for example.

What is True in Recording?

The discontinuity of perception is a process that allows us to jump from event to event through breaks, from emergence to emergence through ellipses. This faculty is necessary not only for an economy of energy, but also to maintain an indispensable state of alertness—to what is new and what disappears. Tools that capture sound, microphone and recording device, do not have a brain to act as a filter and to place what is acquired within this economy of the “necessary and sufficient.” As a result, a continuous sonic flux, an overload of useless information, accumulates on a recording fed by a microphone incapable of filtering or parsing. A next step is necessary to remove the excess: sound editing. But things don’t quite work that way. Sound editors would seem to be tasked with breaking and tearing up continuities, but more often their work is to supplement, rather than to pare down, the original recording. It’s a question of reconstructing rather than of cleaning the data provided by direct recordings of sound, data that cannot be disentangled a priori, since what is recorded is jumbled together. Only because multitrack recorders are used from the beginning can data be parsed, and it therefore becomes possible to re-mix it a posteriori. The goal of what I have called “staging sound” is not to create a reconstruction of all the events that would co-exist in a continuous manner in a given space.[11] On the contrary, it is to construct a progression that resembles listening, in other words, a sonic deconstruction in which the succession of events is scattered in a discontinuous manner. Our brain is able to capture and analyze only a single event at a time within its lived reality. What’s more, the brain cannot manage its filtering choices by means of a reconstruction: we start with an omelet, and it is not possible to disassociate the yolk from the white.

Listening to a recording produces a truth effect: “Since it is recorded, this content is true and must thus be listened to as truth.” Everything deposited on a recording comes to be valued as a fundamental truth. It has a globality that cannot be filtered or parsed; it is unquestionable and undeniable: what is recorded is a truth that we must believe entirely. The loudspeaker commands us, “Listen to me.” To listen (ouïr) and to obey (obéir) have the same root in French: the voice of the master, “His Master’s Voice.”

Action films use the same ingredients as other forms of cinema to achieve their effects. But it’s the particular mix of elements used in the composition that differentiates sound in blockbusters from what is found in other sorts of films. A quantitative excess of sounds characterizes them: level, compression, music, and aggressive sonic matter. A climax, maintained beyond any plausible reality, is the master product of the stress thus created. This “sound for the masses,” created for collective gatherings in the mercantile space of the multiplex, is not simply a cluster of energy. It is also an expert composition of “sound details,” and it’s the details, almost exclusively, that captivate us. And even if it is the detail that captivates, so precise that it can seem present to us, so precise that only touch offers a comparable experience, the essential is nonetheless missing: the moment of uncertainty, of hesitation, of awkwardness, of silence, which is essential to the fragility and truth of exchanges. Missing in the sound composition of blockbusters is the silence of the actors, the true engine of listening so necessary to dialogue, the long, precious silences that replace the word. It’s through restraint, through rarefaction, that the sonic takes on its full dimension and attains the power of which it is truly capable. The accumulation of sounds saturates the images and prevents us from living in them. The excess of sounds alienates us from the world that the images offer us.

We must allow the image to remain silent: what the ear does not find on the sound track will be sought by the eye. The role of sound is not to comment on everything that is moving in the frame or happening outside it. On the contrary, what we should do is place sonic events, one by one, along the path of the temporal unfolding that defines each take, inserting only what is useful in order to concentrate attention on the principal active elements that construct meaning. It’s not a question of re-creating reality, but rather of simulating the conditions of listening, listening in a state of alert and discovery, which unfolds along the path toward an unknown destination. Sound must offer only what is necessary for the image. It must be limited to what is sufficient, exploiting the brevity of sounds in general—it must evoke rather than state. We need only a few brief instants to understand the essential of what a sound contains, only milliseconds. Sonic continuities defeat the purpose; they are much too frequent, and this is linked to the fact that recording machines favor flows. It’s a mistake to believe what tools automatically produce. Continuous flows are rare in nature and our listening abandons them very quickly. They are the results of motors and their residual electrical effects, placed at a fixed point. The brief repetition of an emergent event is significantly more effective than any continuous flow.

The multiplying of recording tracks to which the advent of the digital has given us easy access has had a deleterious effect on film sound tracks. Cinema finds itself stratified into too many layers that drown out the act of looking. To produce an image and organize listening is to pick something out of the multitude of possibilities, and sometimes to make us hear something different from what is shown. To construct the sound of an image does not mean to capture everything that moves, but rather, to highlight what one should hear. Reducing the quantity of sonic matter that accompanies film images is urgently necessary. Cinema is now marked by a saturation that interferes with its readability. Excess made us believe that it was a way to get at the truth—we would be nearer the truth of representation if we multiplied the number of sounds. This was a clumsy mistake. Reality is entirely subjective, and mine is quite different from yours. Technicians claimed that there was a universal approach to the question. It’s not simply that this approach is impossible, but also that it turns out to be completely boring.

Reflection devoted to sound is characterized by technical and technological considerations about its production, about the process that seeks to make it more “itself,” so that there is more “impact” on viewers placed in increasingly constrained listening positions—in a plane, on the phone, using a tablet. This desire for efficiency acts on the body in increasingly effective ways, based on a domination that could truly be called a “machination of the senses.” “Action” films subject our animal being to an intense experience of stress in a terrifying atmosphere that prevents the listener from thinking—with enough force to destroy our judgement.

A relation to the body is quite the opposite of this way of imagining the correlation between listening and looking. It would offer viewers the chance to reflect back on themselves, in the silences that might provide space to respond to what is offered and to dream it. And this is quite removed from the present triangulation proposed in collective spectacles, built on a certain correlation among the screen, the other, and ourselves.

How can a poetics of sound emerge and find a space of play when confronted with constructions so formalized in present cinema production that they affect even the elocutionary properties of off-screen voices? A poetics of sound can appear only in a fragile space of listening, and this space is made up of all the places where the sonic exists. Sound lives in and constitutes a space, a tactile space of distribution. The fragility of exchange is at the center of the question of listening: listening is the place of access to the distribution of the sensible.[12] As an evanescent memory, sound is in the unsayable in each of us, and it is here that the distribution of listening finds its possible poetics.

  1. Noise music is clearly a recuperation of this distinction in the musical domain.
  2. Original soundtracks are sold separately on the market.
  3. Translator’s note: Jane Knowles Marshall’s brief introductory page, “An Introduction to Film Sound,” on the Filmsound.org website reflects this tripartite division perfectly: http://filmsound.org/marshall/.
  4. See the partial list I have compiled here: http://www.deshays.net/www.deshays.net/Proposition_de_cinematheque_du_son.html
  5. Alain Berthoz, Le sens du mouvement, Ed. Odile Jacob, Paris, 1998.
  6. Ibid.
  7. Translator’s note: the French verb entendre, which ends this sentence, means both “to hear” and “to understand.”
  8. Language, like all movement, is defined by the fact that sound organizes meaning.
  9. See Makis Solomos, De la musique au son. L’émergence du son dans la musique des XXe-XXIe siècles, PUR, 2013.
  10. Daniel Deshays, Pour une écriture du son, Klincksieck, Paris, 2006, 2019.
  11. Daniel Deshays, Sous l’avidité de mon oreille, Klincksieck, Paris, 2018, p. 65.
  12. Translator’s note: Jacques Rancière’s expression, “le partage du sensible,” is used here. It is difficult to translate, and Rancière specialists in the United States and Great Britain generally use the English phrase “distribution of the sensible,” which, admittedly, does not help the reader who is not familiar with Rancière’s work. Roughly speaking, it is what we can perceive or understand within the framework of perception and understanding available to us at any given moment.