Should ‘Audio’ be taught as part of Music?
 
This article appeared in Audio Media 1996 & Sonic Arts Network Agenda 1997

The subjective perceptions which differentiate Music from other types of sound are among the most contentious of all the judgments we make around the issue of taste. I can imagine that the title alone might have some people reaching for their revolvers!

But the point is that we live in an age when music has largely become audio whether we like it or not. (For the purposes of this article I’m defining audio as all sound emerging from a loudspeaker, whether of recorded or broadcast origin.)

During the 70s and 80s I composed over a hundred scores for live and broadcast drama. I was also successively a commercials producer, a BBC producer, Deputy Head of Music for the Royal Shakespeare Company and Head of 20thC Studies at the Royal College of Music Junior Department. I therefore tended to move freely between the worlds of audio and music, and would often wonder why they coincided so little, since both were effectively doing the same job – displacing air.

Obviously the function of the displaced air varied, as did the technicalities of the displacement, but the fundamental similarity is that, viewed in the abstract, every sonic experience is a time sculpture with a unique shape and character. I take it to be the job of the composer, musician, producer or audio technician to make each such experience as satisfying as s/he can, relative to the æsthetic under which it is created. (That this is an ideal somewhat removed from commercial reality doesn’t stop it being an ideal!)

One of the deadening or desensitizing effects of exposure to so much audio today is that most people are no longer conscious of the periodicity of either music or audio experience. Music has become audio wallpaper in a way far exceeding Erik Satie’s pre-wireless fantasy of musique d’ameublement, music as furnishing. Today even that guardian of the nation’s musical morals, BBC Radio 3, has thrown in the sponge(-cake) to produce the trifle most people desire.

By treating audio experience, be it radio or pre-recorded audio, as an interchangeable, interruptible continuum of sound rather than as a sequence of discrete experiences, ‘listening’ is reduced to ‘hearing’. And by this process we do a great disservice to the ‘bio-diversity’ of sonic experience and to listeners’ consciousness: for if all sound is reduced to superficial ear-candy then music is deprived of its elemental power to transport us and to recreate our senses.

I see the solution as educating people about the nature of audio so that they become more deeply aware of how music differs. In exploring this distinction we have a unique opportunity to understand not only Music and how it contributes to society but also the functioning of cognition itself.

Some might say, ‘oh but this is absurd, we know perfectly well what the difference is.’ Yet Audio has already changed the way we hear, just as Film has altered how we see. Today the capacity of information technology to interact with music is inexorably transforming ideas about the essential nature of Music. Like it or not, we already have a hybrid art-form which belongs more to the middle way of Audio than to either music or technology alone.

So What’s New?

In fact sound engineers instinctively know that such an art-form has existed ever since the art of recording progressed beyond being a simple megaphone gaffer-taped to a stylus on a wax cylinder. From the moment the BBC broadcast Beatrice Harrison playing her cello to the nightingales in the Surrey woods live in 1924 – a phenomenal technical achievement – technology has increasingly tended to integrate our sonic experience of music and the ‘natural’ world.

Her contemporaries, the experimental composer George Antheil and his mentor Ezra Pound, saw that latent within recording is an Æsthetic of Pure Sound, which bypasses the specific cultural associations of Music(s). Antheil was an American who found fame in Paris with a mechanistic style of music. He made a number of endeavours to harness such recording technology as was available to him – including a projected electronic opera with James Joyce, and Ballet Mécanique, an attempted film collaboration with the artist Fernand Léger five years before the arrival of sync-sound. But Antheil never escaped from the orbit of musical literacy and ultimately fetched up in Hollywood.

With the spread of magnetic tape after WWII fresh impetus was given to ‘audio composition’. Some will know of the pioneering ‘radiophonic’ compositions for Radiodiffusion Française by Pierre Schaeffer, probably the first true technician-composer. Calling them musique concrète (music assembled directly from recorded sound) and taking his cue from film montage, Schaeffer was the first to exploit the non-linear possibilities of recorded sound through surrealistic and ear-sharpening juxtapositions of documentary sounds. His idea was extended in 1958 by the (hitherto acoustic) composer Edgard Varèse in Poème électronique. The other early pioneer of direct-to-tape composition was the legendary Karlheinz Stockhausen, whose Gesang der Jünglinge (Song of the Youths) of 1955–56 was the first piece to use sound processing to transform acoustic sounds fundamentally into a blended tonal synthesis.

Since 1951, when Stefan Kudelski perfected the Nagra, high quality recordings of sounds from the natural world, e.g. whale vocalisations, have become so commonplace that it seems quite old-fashioned to point out that what we take for granted is a profound and recent departure from historical ideas about music.

Prior to the era of tape it was perfectly clear what was Music and what wasn’t. In that era, human activity had been thought to be infinitely superior to the activity of nature. Yet now, within a single generation, we have been forced to acknowledge that humans are only one species amid millions on the globe and have begun to want the sound-world we humans inhabit to reflect an integrated view of our environment, not one separated into the ‘natural’ and the art-ificial.

This leads directly to the approach I call an Æsthetic of Pure Sound. Listening to audio we can no longer say ‘this is music, that isn’t’. Whatever else it may be, everything we experience in recorded form is audio – what we are hearing is not the actual sound, it is an audio signal that has travelled through a reproductive chain and now emerges from loudspeakers in a situation completely remote from its origins. We re-code that signal in our brains and take it for the real thing, but actually it can never be more than a metaphor, a surrogate. The functioning of stereophony itself is a psycho-acoustic illusion.

Take a visual example. We’re accustomed to seeing excellent natural history films on tv, but go to the countryside and actually attempt to study fauna and you’ll find it a tremendous let-down … cold, boring, and very likely nothing will happen. The same is true of a live concert.

The synthetic reality has been so heavily promoted that it has overtaken the probabilities of what is likely to occur. If we offer the public an idealised picture of music which music itself can only rarely live up to in real life, are we doing it any favours?

Issues like these are completely outwith any conventional musical or technical discussion, and yet they are central to what music – in the broadest sense – means in the present age. Technology has already enhanced sensory and cultural awareness; music teaching must catch up and exploit this reality if it wishes to retain the attention of students, or anyone else for that matter.

The word composer conjures images of long-haired young men with floppy handshakes; the French equivalent, compositeur, is far more suggestive of a ‘putter-together’. I find this constructive in terms of demystifying the process of ‘putting together’ sound. We tend to revere the ‘art’ of a composer as superior to the ‘craft’ of an audio editor or compositor. Why? To create a satisfying audio experience is to respond æsthetically to pure sound, even if the ‘compositor’ isn’t consciously aware of doing so. One can invent long words for it, and academics have, but fundamentally good composition is about the simple ebb and flow of sonic experience as it impacts on human cognition.

In film drama immense control is already possible over every element of the soundtrack, but now a sound-design/resampling programme like Kyma offers audio supervisors unlimited power to ‘recompose’ and/or manipulate all audio material to achieve effects that formerly would have been the province of music. The minimalist composer Steve Reich’s Different Trains, for string quartet and tape, is an example of how surprising the ordinary can be when simple phrases of speech are integrated into a compositional framework.
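
For teaching purposes the underlying gesture can be demonstrated with very modest means. The following is a minimal sketch – nothing to do with Kyma itself, just standard Python with numpy – of the basic move behind such speech-based composition: a phrase is cut from a recording, then repeated and transposed so that its spoken contour starts to behave like a melodic motif. The filename speech.wav and the slice points are purely hypothetical.

```python
# A toy illustration of treating a spoken phrase as musical material,
# in the spirit of musique concrete / Reich's speech-melody technique.
# Assumptions (hypothetical): a mono 16-bit WAV called "speech.wav",
# with the phrase of interest lying between 1.0 s and 2.2 s.

import wave
import numpy as np

def read_mono_wav(path):
    """Read a 16-bit mono WAV into a float array in [-1, 1]."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return rate, data.astype(np.float32) / 32768.0

def transpose(samples, semitones):
    """Crude 'tape-speed' shift by resampling: pitch and tempo change together."""
    factor = 2 ** (semitones / 12.0)
    idx = np.arange(0, len(samples), factor)
    return np.interp(idx, np.arange(len(samples)), samples)

rate, audio = read_mono_wav("speech.wav")          # hypothetical source file
phrase = audio[int(1.0 * rate):int(2.2 * rate)]    # hypothetical phrase boundaries

# Loop the phrase through a simple sequence of transpositions so the
# speech contour is heard as a repeating, rising motif.
sequence = np.concatenate([transpose(phrase, s) for s in (0, 0, 3, 5)])

out = (np.clip(sequence, -1, 1) * 32767).astype(np.int16)
with wave.open("phrase_loop.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(out.tobytes())
```

Even at this toy level the exercise forces the same questions an audio supervisor faces: where a phrase begins and ends, how much repetition the ear will tolerate, and at what point speech stops being information and becomes music.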

In this regard the ear is an infinitely more powerful sensory gateway than the eye. Sadly, in the explosion of info-tainment media in recent years the imaginative power of sound has been reduced to a very superficial level as producers and broadcasters have competed for attention in an ever-growing, ever more complex environment.

The fact that words weigh radio down like millstones round the necks of innocent children may deceive us into thinking that it is about words, but it isn’t, any more than music broadcasting is solely about music. As programme planners know, it’s about sequencing a flow of experience that leads listeners from one perspective to another. The art is to hide the art, so that what is broadcast matches the ‘consumer’s’ preconceptions and is therefore accepted without question … anything else would frighten the horses!

Ironically, when there was less diversity in society and in broadcasting, radio (then only the BBC) was not afraid to make people think. Nowadays it’s absolutely taboo to make people think, let alone to question their surroundings. ‘Aren’t things confusing enough already?’ is the attitude. Moreover thinkers are not the docile consumers which commercial radio needs to deliver to its advertisers.

It’s ironic that the increasing potential of technology has been accompanied by a very public retreat into a stereotyped linearity – an emphasis on undemanding narrative such as melody or plot line. Is this perhaps a response to public exhaustion at the level of complexity in society which these very developments have brought about? Ordinary people are – and may always have been – very fearful of non-linearity, of exposure to the ‘sense-lessness’ of ideas that depart from easily-understood lines of thought. (Is this why all television drama is now about the police?)

In fact the IT revolution has offered unprecedented empowerment to ordinary workers in the audio industry. This in turn has erased the division that once existed between technicians in white coats, relegated to subordinate tasks, and the artistic types who were put in charge of productions – often, apparently, because of their total ignorance of the technology involved!

As I pursued my career I frequently reflected on how the worlds of audio and music would be enriched if either would take the trouble to explore the æsthetics of the other. But because each was hermetically sealed in its own pattern of careers and finance, no common vocabulary existed which would allow those in either world to explore the ‘reality’ of the other. (And in any case they were grown-ups, too busy playing at being important to worry their heads over the kind of philosophical issues I’m raising here.)

However, the conceptual barriers that once restricted audio production to certain limited areas are breaking down. Today, as more and more freelances acquire high quality production equipment, it is increasingly possible for independent viewpoints and music to make themselves heard. BUT, and it’s a big but, many of the æsthetic attitudes haven’t caught up. Many indie productions never achieve any significant level of penetration because, however technically competent they are, the limited horizons of their makers restrict their vision to their own minority viewpoint.

This then articulates the central dilemma – how can we educate technologists so they’re aware of æsthetic issues, and æsthetes so they’re aware of the creative possibilities of technology?

The (next) Generation Game

People in music and broadcasting whose careers are at their zenith tend to view what happens in education with amused disdain. They feel the exhilaration of being on the foremost tip of the speeding arrow of ‘culchur’ as it lurches through the air, and laugh at the way Academe grabs at their slipstream (spliffstream?) in a vain attempt to freeze the ephemeral. But after last week’s Music Week has been used as chip wrapper there remains the serious question of what sort of environment we want to create for ourselves and, more importantly, for our children.

At the RCM in the late 80s, I proposed a Broadcast Music Studies course which would have been dedicated to the art of applied audio (and by ‘art’ I don’t mean what vanishes up its own backside, I mean what makes for the best forms of communication). But the disdain of the hierarchy was such as to make it clear I was wasting my time. There are now so many absurd forms of paper qualification and so many politically-inspired ‘educational initiatives’ that academics are more than ever bound up in an introverted value-system where the only thing that gets addressed is the qualification itself – not the reality it should embody.

In the world of chart music the technician long ago emerged from the misty obscurity of the control booth to take hir place alongside the performers – or rather, the division disappeared as the musicians invaded the control room and insisted on controlling their own sound-destiny. But then chart music cannot be chloroformed and pinned down in display cases for students to dissect … unless of course Neil Diamond cuts another album.

Still, the question remains: do we simply want the next generation to acquire the vices which the present one is so busy multiplying – the market-place as god, marketing mania as the new religion, and spin doctors as priests? Or is it worth trying to build into our future ideas which may encourage the creation of a saner, more balanced, happier society? Silly question, I know. After all, the granddaddy of experimentalism, John Cage, wrote an aptly-titled work How to Improve the World (You Will Only Make Matters Worse).

All the same I believe that if the teaching of Audio were to be integrated into the teaching of Music in schools (don’t say ‘what school music’ you pessimist) we would discover certain significant advantages in both the short and the long term. And here are my reasons …

Firstly, whatever classical musicians think, audio is now an integral part of our sensory world. Since technology amplifies the intention of the user, we must come to terms with the power this gives, and teach people how it may be used to best and most constructive advantage.

Secondly, music isn’t chess; there’s no winning and losing. If an experience of music is truly to enrich people it must grow out of a common reality, and not remain divided into the state-subsidised preserve of the educationally privileged on the one hand and the urban desperation of society’s rejects on the other.

Nowadays, like it or not, technological thinking is already a major part of the mind-set of nearly all young people; failure to integrate our cultural past into the present simply condemns us to repeat the mistakes of previous generations. Integrated, the past can act as a conceptual counter-weight to the present, balancing the welcome diversity of contemporary ideas with a reminder that there are certain fundamental rules to the nature of sonic experience which you break at your peril.

And thirdly, we should study audio because of the immense amount it has to teach us about how we process not just sound, but even how we experience experience. In other words it has the capacity to hold a mirror to the structure of our consciousness, and thereby to make us wiser about ourselves.

Lastly, from handling digital technology we learn that every thought-form is structured in a hierarchy of values; and as we learn to analyse and identify correctly the sequential processes involved in music-audio production, so we find that this discipline of analytical thought clarifies our perceptions in other areas as well.

Practical implications of teaching audio in schools

Many students have considerable blocks about learning. Generally such barriers arise from a student’s difficulty with academic modes of learning rather than from the subject itself. If one can take them ‘outside’ the classroom environment many of the psychological difficulties fall away.

Recording and editing audio does just that. Instead of being constantly brought up against their own ‘failings’ students come to closely observe other people’s patterns of behaviour, musicianship or speech – from which a skilful teacher can draw inferences that may help the student to bypass whatever barrier has inhibited hir learning.

Anyone who has had experience of improvisation with teenagers will know that the presence of a good quality recorder makes a colossal difference to their mental focus. (It’s equally true of adults, just think of the effect a microphone has on politicians!)

In class music-making, particularly improvisation, I have found that an invaluable corrective to people playing out of time is to encourage students to focus their hearing on the sound itself, away from the muscular or mental activity of generating it. This always has an immediate result. And it’s very much a function of what I am proposing here: a system by which students are encouraged to become consciously aware of the phenomenal world by interacting with recording technology.

It will always remain important to continue teaching physical musicianship; the value of learning an instrument to the mental development of a child is immense. But in an age of World Music – music as an international commodity and the cultural interaction of different musical styles – it is imperative that music tuition should include learning about the context of music-making, encouraging students to relate what they’re doing to cultural strands beyond their own. Deliberate use of recording has a central role in this.

But of course audio is not just music, it’s speech too. This is a much more tricky issue, for words are like a cuckoo that drives everything else out of the nest. The moment you use speech, it drives out sense! The cognitive focus immediately addresses itself to decoding the verbal logic rather than attending to the phonemé, the quality of sound. Teaching young people to balance both would free up a whole range of creative attitudes in society.

I have in mind that as part of a balanced programme students should be encouraged to record and edit documentaries that would incorporate both music and speech, as well as other elements, all of which they would be encouraged to generate.

Apart from allowing students to express their own musical preferences, such projects would bring to the forefront of consideration the question of meaning. That is, what has value for them as individuals and what they believe will have value for an audience. This in turn obliges them to consider the expectations of people beyond their peer group, and to experience the odd salutary bucket of cold water in the face.

Rough Cut

From audio we discover certain things that music doesn’t teach. For instance anyone who has edited tape knows that rhythm is not synonymous with strict metre, a beat; there are varying emotional or experiential values to different ‘paragraphs’ of sound which define their relative strengths and suggest how a programme should best be assembled. In an editing suite you rarely theorise about such things because it’s usually pretty obvious. However the underlying reality is that ‘value’ relates directly to the amount of cybernetic stimulation contained in the information delivered to the brain. That stimulation derives from two components: content, and phonemé (the quality of a sound).

The linguistic and developmental values of learning/teaching audio-editing of speech may be summarised as follows:

  • Programme-making creates consciousness of time, and of the relationship of value/meaning to duration, which should stimulate students to refine and focus their own forms of expression to the pithiest – hence to develop their minds.
  • It teaches storytelling, and storytelling teaches timing. It stimulates the dramatic instinct, which itself obliges consideration of the value systems by which people prioritise the information they receive.
  • Audio is neither exclusively speech, nor is it merely music. By obliging students to balance the requirements of the two languages, music and speech, it offers them a path to acquiring balance in their own lives.
  • It stimulates grammatical and phonetic awareness which in turn demands consideration of moral awareness - does meaning lie in the words used or is there a non-verbal component? If so how should this be handled? The close psychic interaction involved in interviewing and editing cannot fail to stimulate intuition and enhance interpersonal skills.
  • Audio production demands good management of time and resources and teaches the importance of analytical thinking in planning an appropriate framework for the project. More importantly, actual experience of the hazards of recording should encourage students to balance hypothesis (advance planning) with the mental flexibility needed to respond to changing circumstances and priorities in the field.
  • Audio editing demands the development of skills such as methodical habits, technological dexterity and very considerable hand-brain coordination, not to mention interpersonal skills when collaborations require participants to reconcile opposing viewpoints. Nor can anyone edit successfully without considerably improving both conscious and subconscious memory. Editing immerses one in the thought-forms and speech-patterns of other people which (should) offer the student editor insight into the working of the mind and thereby enhance hir personal understanding.
  • The supreme question which audio editing raises is ‘what is truth?’ – when often the technology itself imposes limitations on how things need to be recorded. What is the relationship between content and the medium which carries it? In editing speech the desideratum is to make the shortened version sound ‘natural’. But of course this is an illusion. So what then is the relationship between illusion and reality – indeed which is which?
  • The most advanced forms of digital audio editing demand the consideration and development of a personal morality, since nowadays it is possible to ‘make’ an interviewee say almost anything the editor wishes.

The questions here are: who has ‘ownership’ of what is said? How far is it permissible to alter recorded speech in order to enhance the interviewee’s own, perhaps inadequately expressed, meaning? How far can or should an editor/producer go in using recorded material towards an agenda which is contrary to the subject’s viewpoint?

(The approach I’m suggesting differs from journalism per se, since that discipline is exclusively concerned with verbal reasoning. Here we’re talking about phonemé, the æsthetics of sound, as an integral part of the final experiential equation.)

  • Perhaps the highest value that handling audio can teach is ‘awareness’. It demands the ability to distinguish between foreground and background information, which in turn demands that the student begin to listen non-literally, that is, to listen not merely to the information-content but to the whole context within which it is delivered. In other words, is the tone of the speaker’s voice in tune with the actual words used? If not, what is the implication? (Imagine the electoral effect if the population were observant enough to detect lying simply by listening to vocal quality!)

Porcine Aviation?

Yes of course this idea is absurdly Utopian. There isn’t any money in the State system for equipment. It would take years to win over the professional nay-sayers, and by the time that had happened the concept would be so diluted as to be tasteless. And then there are the logistics of teacher training … and blah, blah, blah …

This article is deliberately ‘unrealistic’. Unless we create for ourselves visions that lead us to want to embrace the future and make it in some way better than the present, we might as well pop another tablet, curl up in front of the telly, shut out the world and allow ourselves to be consumed by consumerism.

Our government has just sleep-walked its way through the second industrial revolution without a clue about what has been happening or ever attempting to retrain people or capitalise on it! Those who whinge that ‘the market alone decides, we control nothing’ are the very people who surrendered Britain’s technological supremacy in the first industrial revolution and have recently arranged the fire-sale of British assets to all comers. Our European partners educate all their children properly and as a result discover that quality of life comes from social integration not the maintenance of privilege.

My hope is that these ideas might encourage imaginative teachers, colleges or commercial institutions to experiment with them in a cross-disciplinary way and begin to create initiatives which could in turn inspire others with practical achievements. Maybe what I’m suggesting is already occurring somewhere in the world; I don’t know.

Already in the increasingly post-Gutenberg world of advanced technology old ideas of hierarchy and social stratification are disappearing. A recent survey showed that Britain has the highest per capita number of micro-computers in the world. Surely this tells us that as a nation we have the capacity to understand and harness technology and to use it discriminatingly?

What interests me is to see live and recorded activities synergising new cultural forms that represent all sections of society, not battling against each other, each ignorant of the other’s true character, to maintain or assert privilege or to drown each other out. It could happen!
