Johns Hopkins Magazine - September 1996 Issue

Extending the Orchestra

Dale Keiger
David Wetzel opens an oblong instrument case and pulls out what at first glance looks like a clarinet. It's black and approximately the right shape, but lacks any silver hardware and is about two-thirds the size of a clarinet. It has keys, but they're all black plastic. There's no visible reed on the mouthpiece, and no bell at the other end. The last clue that Wetzel is holding something other than a conventional instrument comes when he walks over to a bank of electronic consoles, and by cable plugs into what is known as a MIDI patch bay--MIDI for Musical Instrument Digital Interface. He is about to interface, so to speak, with a Kurzweil K2000 synthesizer--to play computer music.

Wetzel is a conservatory-trained clarinetist and a master's candidate at Hopkins's Peabody Conservatory. The musical instrument in his hands is, in the less-than-poetic terminology of computer music, a Yamaha WX-7 MIDI Wind Controller. It fingers like a saxophone, but has a small wheel in the back that he can control with his thumb to change the pitch of a note, like a guitarist bending a string. Nothing in the wind controller vibrates, but nothing has to--it doesn't produce a musical tone. Instead it generates a silent data stream of 0's and 1's. In its mouthpiece there is a sensor that monitors the velocity of the air Wetzel blows through it. Other sensors note the keys depressed by his fingers, and a pitch-bend lever in the mouthpiece (where a reed would go on a sax or clarinet) measures the pressure exerted by his lip, for bends and vibrato. All of this data goes via cable to an electronic interface that sorts it out according to programmed instructions from Wetzel, then commands a synthesizer to produce an acoustic signal. This signal travels by more wires to a set of speakers and, milliseconds later, emerges as music.
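The "silent data stream of 0's and 1's" is standardized MIDI bytes. As a rough modern illustration (not part of the Peabody setup), here is how a receiving program might decode two of the message types a wind controller sends — note-on events and breath-controller updates (controller number 2 in the MIDI specification):

```python
# Sketch of decoding a raw MIDI byte stream like the one a wind
# controller emits. Per the MIDI spec: note-on status bytes are
# 0x90-0x9F (channel in the low nibble), control-change bytes are
# 0xB0-0xBF, and the breath controller is CC#2.

def decode_midi(stream):
    """Turn raw MIDI bytes into (event, channel, ...) tuples."""
    events = []
    i = 0
    while i < len(stream):
        status = stream[i]
        if 0x90 <= status <= 0x9F:          # note-on: note, velocity
            note, velocity = stream[i + 1], stream[i + 2]
            events.append(("note_on", status & 0x0F, note, velocity))
            i += 3
        elif 0xB0 <= status <= 0xBF:        # control change: number, value
            controller, value = stream[i + 1], stream[i + 2]
            name = "breath" if controller == 2 else f"cc{controller}"
            events.append(("control", status & 0x0F, name, value))
            i += 3
        else:                                # skip bytes this sketch ignores
            i += 1
    return events

# A note-on (middle C, velocity 100) followed by a breath update.
print(decode_midi([0x90, 60, 100, 0xB0, 2, 80]))
# -> [('note_on', 0, 60, 100), ('control', 0, 'breath', 80)]
```

The synthesizer on the far end of the cable interprets tuples like these according to whatever mapping the musician has programmed.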

That is, if you're open-minded about what constitutes music. Because the tones are electronically synthesized, Wetzel can make his "clarinet" sound much like a piano, or a violin. Or in this case, like wind sweeping across a prairie and leaves scuttling across sheet metal. With his mouth and fingers, Wetzel may be making like Benny Goodman, but what he's actually doing is manipulating white noise--demonstrating how to make expressive musical use of the sound that comes through your radio when you're between stations.

At Peabody, Wetzel, his grad student colleagues, and faculty member Geoffrey Wright work in Room 316, a studio above Friedberg Concert Hall, exploring the electronic manipulation of sound for musical ends. Wetzel's white noise from a black instrument doesn't, at first, seem musical, at least not in the same way as the Mozart emanating from a practice room down the hall. But when he used this instrument recently to perform "Mists," a piece by fellow grad student C. Matthew Burtner, he was following a score, playing tones organized by pitch, duration, rhythm, and dynamics--music, albeit idiosyncratic music.

That such sounds can be produced and manipulated by a skilled classical clarinetist is a fascinating technical achievement. But Wetzel and the 25 or so other men and women in the graduate program are not out to amuse technogeeks. "Remember that Peabody is a conservatory," says Wright, the coordinator of the computer music program and founder of the studio in 1982. "What's important here is not audio digital signal processing, but music." And using computers to make music is, as it turns out, a very complicated business.

Applying electronic equipment to music has a longer past than you might suspect. In 1895, Thaddeus Cahill tried to perfect an electronic instrument he called the telharmonium; it consisted of rotary generators and telephone receivers, and it never worked very well because it needed amplification, which hadn't been invented yet. Paul Hindemith composed for an electronic keyboard instrument called the trautonium in 1931. Electronic organs have been around for decades; the Hammond was patented in 1934.

Using computers to compose and perform music is much newer. Wetzel, a serious young man, cracks the smallest of ironic smiles when he notes that the "classical" computer piece he played at a recent recital was composed in 1992. Against one wall in Room 316 there's a Moog synthesizer that is, in computer music terms, a relic...and is all of 30 years old. The students in Wright's program are all highly skilled classical musicians, well-schooled in the Western musical tradition. When it comes to the computer music tradition, they're pretty much making it up as they go along.

In fact, they're making up everything as they go along. When Arnold Schönberg introduced his radical atonal music in the early 1900s, audiences had never heard anything like it, but they had, at least, heard the tones before--violins, cellos, brass, etc. Unlike those acoustic instruments, a computer has no inherent sound. Change the pattern of 0's and 1's in its data stream and you can make it sound like anything. It can sound like David Wetzel's white noise, or an ersatz violin. It can sound like a jet engine or a series of electronic plunks or a piano at the end of a long, resonant corridor.

Grad student Jozef Bezak made it sound like the mountain valleys of his native Slovakia. When he composed his piece B from B to B (which stands for "Bezak from Bratislava to Baltimore"), he included a part in the first movement for fujara. The fujara consists of a short wooden tube with a mouthpiece, attached to a much longer wooden tube with sound holes. It's played much like a recorder. In Slovakia, shepherds play the fujara in mountain pastures, and as a composer, Bezak wanted his concert hall audience to hear it as it would be heard in its natural setting. So he played his fujara through an electronic signal processor that manipulated the sound, according to his programmed instructions, to simulate the echoes of a mountain valley and create a sense of vast aural space. He made further use of computer-generated tones to suggest the sounds of the jet that brought him to America.

"Is good for me," Bezak says, speaking of the technology. "I don't have any limitations."

Such an infinite palette tantalizes composers. But it also presents them with a large, difficult task. They can't just write a passage and say, "This is what the trumpet plays, and this is the computer part." A trumpet always sounds like a trumpet. A computer must be told what to sound like. Every musical sound that comes from a computer must be crafted by someone from 0's and 1's.

By musical sound, does one mean anything that can be organized by pitch, dynamics, and duration? Well, yes. But not all such sounds are the sort of rich, emotionally affecting tones that never grow tiresome.

Part of what makes the sound of a violin so compelling and expressive (in the right hands) is the remarkable complexity of its tone. To demonstrate this complexity, Wright takes a piece of chalk and on a blackboard draws a diagram of a pure tone. It's a simple sine wave: a single line curving above and below a horizontal axis, the period between waves and the amplitude of the curves always precisely the same. This drawing resembles an acoustic diagram of a single violin tone about as much as a child's stick-figure resembles a photographic portrait. Wright pulls out such a violin tone diagram. The first thing you notice is that the single wave form has been replaced by many forms, called partials, one for each of the harmonic elements that combine to make this a note on the fiddle; typically, a person can hear up to 20 partials in a single tone. Each partial is jagged, complex, and irregular. And the diagram has a third axis, to accommodate the numerous frequencies generated simultaneously when a violinist plays a note.
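Wright's two blackboard diagrams describe additive synthesis: a pure tone is one sine wave, while an instrumental tone is a stack of partials at multiples of the fundamental. A minimal sketch of that idea, using only Python's standard library (the amplitudes below are invented for illustration, not measured from a violin):

```python
import math

SAMPLE_RATE = 44100  # samples per second

def additive_tone(f0, partial_amps, seconds=0.01):
    """Sum sine-wave partials at integer multiples of fundamental f0.
    partial_amps[k] is the amplitude of the (k+1)-th partial."""
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for t in range(n):
        time = t / SAMPLE_RATE
        s = sum(a * math.sin(2 * math.pi * f0 * (k + 1) * time)
                for k, a in enumerate(partial_amps))
        samples.append(s)
    return samples

# A pure tone has a single partial -- Wright's simple sine wave.
pure = additive_tone(440.0, [1.0])

# A richer tone stacks many partials, here weakening as frequency rises.
rich = additive_tone(440.0, [1.0, 0.5, 0.33, 0.25, 0.2])
```

Even this crude five-partial stack only hints at a real violin tone, where each partial is itself jagged, irregular, and changing over time.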

The human ear is notably sensitive to all of these complexities. Back in 1928, R.R. Riesz calculated that under ideal conditions we can distinguish 370 separate degrees of loudness. In 1972, researchers H.P. Van Cott and M.J. Warrick determined that people can distinguish 1,800 different pitches. M. Clynes found in 1984 that human hearing can tell when the duration of a musical note changes by as little as two milliseconds. Wright cautions that different studies have produced somewhat different results, but these numbers give a hint of the ear's sophistication. To further complicate the matter, Wright adds, hearing is not linear. It is more responsive to some elements of a musical tone than to others.

Today's orchestral instruments have endured because, year after year, they appeal to this aural sophistication with tones as complicated and fascinating as the subtle hues of a great painting or the complex taste of a fine wine. That's why orchestras have violins, oboes, and pianos instead of pennywhistles, or rubber bands stretched across cigar boxes. Faced with the capacity of audiences to hear such subtleties, most computer composers want something more out of a computer instrument than electronic burps.

One approach to creating computer tones has been to sample the tones of acoustic instruments. That is, a violinist plays a B-flat, and through a microphone, an electronic sampler takes audio snapshots of that tone--44,100 snapshots per second per stereo channel. The sampler stores those snapshots as digital data, and when the composer next wants the sound of a B-flat on the violin, he issues a computer command that tells a synthesizer to "play" those data, which create an acoustic signal that plays a violin B-flat over a speaker.
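The "snapshot" process is sampling and quantization. A sketch of what a sampler does to one second of sound, with a plain sine wave standing in for the violinist's B-flat:

```python
import math

SAMPLE_RATE = 44100          # snapshots per second, per channel

def sample_signal(signal, seconds):
    """Take amplitude snapshots of a continuous signal and quantize
    each one to a 16-bit integer, the way a sampler stores audio."""
    n = int(SAMPLE_RATE * seconds)
    return [int(signal(t / SAMPLE_RATE) * 32767) for t in range(n)]

# Stand-in for the violinist's B-flat: a sine wave near 466 Hz.
bflat = lambda t: math.sin(2 * math.pi * 466.16 * t)
data = sample_signal(bflat, 1.0)
print(len(data))   # 44100 snapshots for one second of sound
```

Playing the note back means pushing those stored integers through a digital-to-analog converter at the same rate.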

All of which generates a violin sound that may be acceptable when heard on a car radio as you speed down the freeway, but has shortcomings when heard in a concert hall. In Room 316, Wetzel depresses a key on a synthesizer that has been programmed to play violin tones. The resultant sound can't be mistaken for, say, a flute or a mandolin, but it lacks the warmth, fullness, and immediacy of a violin; if an acoustic violin tone can be imagined as a large sphere of sound with thousands of colors, the synthesized tone is a small sphere of a few hundred hues. Plus, every time Wetzel hits the key, the violin note swells slightly, a sound that a violinist would want sometimes but not every time. "Any string player listening to that would just laugh," Wetzel says. "What you find out right away is these electronic sounds get really boring."

It's not enough to sample each note on the instrument and then provide the computer with an algorithm that, say, makes the note become louder or softer in volume as the performer wishes. When a trumpet player blows harder, for example, the note not only gets louder, its timbre changes. It becomes brighter, brassier in tone. All these subtleties must be sampled, too, if one is to convincingly synthesize the sound of a trumpet.
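One crude way to model "louder also means brighter" is to let playing intensity boost the upper partials faster than the fundamental. This sketch is a hypothetical illustration of the principle, not a measured trumpet model:

```python
def trumpet_partials(base_amps, intensity):
    """Scale partial amplitudes with playing intensity (0..1).
    Upper partials grow faster than lower ones, so a louder note
    is also a brighter, brassier note."""
    return [a * intensity * (1 + k * intensity)
            for k, a in enumerate(base_amps)]

base = [1.0, 0.6, 0.4, 0.25]          # invented resting amplitudes
soft = trumpet_partials(base, 0.3)
loud = trumpet_partials(base, 0.9)

# The ratio of top partial to fundamental rises with intensity:
# the loud note has proportionally more high-frequency energy.
print(soft[3] / soft[0] < loud[3] / loud[0])   # True
```

A convincing synthesized trumpet needs this kind of intensity-dependent spectrum at every dynamic level, which is why sampling a single loudness is not enough.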

Then there's one final limitation: all synthesized notes must be played through speakers. Pop musicians, who have embraced computerized instruments (indeed, they drive the market for such technology), all play through sound amplification systems anyway. They care about their sound, but they play in bars and hockey rinks and stadiums. They and their audiences have much less concern for the nuances that matter to Wright: "The biggest curse I have in computer music is that the sound still comes out of a paper cone. How can a cone reproduce an orchestra?"

Says Wetzel, "An acoustic instrument is far more sophisticated than any electronic instrument we have. But you have to approach electronics for what they are. Any object has musical possibilities. I'm giving up on creating orchestral sounds with electronic instruments. Why not make electronic sounds that are expressive?"

Why not, indeed? To accomplish that means doing far more than figuring out how to generate interesting tones. Those tones must now be manipulated in expressive ways by the hands of a musician. Every fingering, every bow stroke, every change of embouchure and breath, is in computer music parlance a "control gesture." And a computer instrument must be programmed to respond to every control gesture that applies to the type of instrument the musician happens to be playing, whether it's a keyboard or a wind controller or a violin MIDI controller. You can pick up an acoustic violin and, if you know what you're doing, get all sorts of sounds out of it with just your hands and a bow. But if it's a violin controller that generates synthesized tones, for every variation of bow stroke, vibrato, and fingering, there must be a set of instructions that tells a computer how they should affect the synthesized sound. And a musician's touch is a remarkably subtle business.

Gastone Coccia Vinsi is a concert violinist and another of Wright's grad students. For several months, he has been working with an acoustic violin fitted with a ZETA MIDI retropack system, an electronic bridge and tailpiece that replace the normal bridge and tailpiece and transmit data, much like the wind controller played by Wetzel. Coccia Vinsi picks up this modified violin, plugs it in, and draws his bow across its first two strings. For the G string, the synthesizer sounds violin tones, but for the D string it emits an electronic sound that seems to be from another instrument entirely and travels across the room from one speaker to the other and back. Capabilities like this intrigue Coccia Vinsi. He could, for example, play simultaneously on one violin two contrapuntal parts that sound like a duet between different instruments.

He has been studying the ZETA to see how violin technique applies to this new form of controller. Does the ZETA hardware sense the full range of his vibrato, and then does the synthesizer generate the correct sound? Does the technology detect a change in that vibrato and alter the tone fast enough for the violinist to feel in complete control? What about the different ways of vibrating the strings--pizzicato, sautillé, bariolage, martelé: Does the computer respond to them accurately, with all their nuances? How does the instrument feel in his hands? For example, he says, "When you play a violin, the sound is right here, right under your ear. When you play the ZETA, the sound isn't right here, it's right there"--and he points to the speaker in the corner of the studio. This creates, to the fiddler's ear, a delay that he must adjust to.

Wetzel, the clarinetist, has encountered difficulties with the spontaneous control of electronic instruments. His MIDI wind controller cannot match the dynamic range of an acoustic clarinet (though he notes, in fairness, that few orchestral instruments can). He's also dependent on how well the synthesizer has been programmed to respond. Today, for example, the synth is programmed for keyboard control, to produce what Wetzel calls "plonk and decay." When he blows hard into his wind controller, the sensor in the mouthpiece measures this initial intensity of the air stream and sets the volume of the note. If he blows hard at the first instant but then immediately tries to drop the loudness of the note, the synthesizer is slow to respond to the change. With a clarinet, the response would be immediate. "In expressive possibilities, the clarinet is way ahead of this," he says, gesturing with the controller. "But the clarinet has 200 years of development behind it. This has been around about 10 years."

"We study conventional instruments because they're great blueprints of what people like," Wright says. "Our goal is not necessarily to imitate orchestras, but to extend them."

Wright points out that adding valves to a trumpet was a technological innovation that extended the expressive capabilities of the instrument. In 1788, Haydn wrote to his publisher to explain why his latest sonatas required purchase of a new fortepiano built by Wenzel Schanz. Coccia Vinsi notes that the contemporary violin is, technologically, not the same instrument for which Bach composed partitas. "There's always a quest for the new," Wright says.

He speaks of "hyperinstruments" that extend the capabilities of musicians. One such hyperinstrument is the polyrhythmicon, developed by Peabody's Burtner. Burtner spent part of his childhood in Nuiqisut, a village in Alaska near the Arctic Ocean. He describes native drumming as his first meaningful musical experience, and though now his primary instrument is the saxophone, he has remained fascinated by rhythm. He developed the polyrhythmicon because he wanted to compose polyrhythms that human percussionists cannot play.

A competent drummer has no trouble handling rhythms based on divisions of two or three: quarter notes, eighth notes, 32nd notes played in 3/4 time or 2/4 or 6/8. But what about 15th notes? Or ninth notes? They're mathematically possible, but devilishly hard to play, because a drummer has great difficulty keeping in mind what 1/15 of a measure sounds like, especially if everyone else is playing in conventional meter. And if you want to layer, say, 11th notes over 17th notes? Forget it. Burtner doesn't know anyone who could accurately play that for longer than a minute.
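The arithmetic behind the difficulty is easy to see if you write out where each tuplet's notes fall in a measure. Using exact fractions (a sketch, not Burtner's MAX software), 11th notes and 17th notes share only the downbeat:

```python
from fractions import Fraction

def onsets(n):
    """Onset times of n evenly spaced notes across one measure (0..1)."""
    return [Fraction(k, n) for k in range(n)]

# Layer 11th notes over 17th notes: because 11 and 17 share no common
# factor, the two grids coincide only at the downbeat -- there is no
# other landmark for a human drummer to hold on to.
shared = set(onsets(11)) & set(onsets(17))
print(shared)   # {Fraction(0, 1)}
```

Quarter notes against eighth notes, by contrast, coincide on every quarter-note beat, which is why conventional subdivisions feel stable.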

But his polyrhythmicon can. The polyrhythmicon is not something a musician picks up and plays, like a flute. It's actually a computer interface between a musician and a synthesizer, created on a Macintosh with special computer music software called MAX. A musician can "play" the instrument by using a Mac onstage to alter various settings that control rhythm, accents, dynamics, or melody. The computer tells a synthesizer to generate specified, complicated rhythmic patterns. The synthesizer might produce tones that are merely percussive, like a snare drum, or melodic, like a marimba or any other instrument.

Last year Burtner composed a piece called Taruyamaarutet (Twisted Faces in Wood) for soprano, marimba, bass clarinet, percussion, and polyrhythmicon. In the first section, Burtner layered four rhythms, one atop the other, in a ratio of 6 : 8 : 9 : 12. This sounds fairly conventional because the ear sorts it out into rhythmic pulses with ratios of 3 : 4 and 2 : 3, which occur routinely in Western music. Then Burtner complicates things. He layers a new rhythmic ratio of 5 : 11 : 7 : 17. The ear still picks up pulses, but not in the familiar sequence of the first section. The most complicated part includes polyrhythms in the ratio 15 : 17.5 : 20 : 27.5, with each rhythmic voice accenting the sixth, eighth, ninth, and twelfth note, respectively. The result is a dense thicket of rhythm that seems to pulsate, sometimes at regular intervals, sometimes at startling intervals that seem rhythmically dissonant as Burtner modulates the pattern.
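Why the first layering sounds "fairly conventional" can be checked by reducing each pair of rhythmic voices to lowest terms. A small sketch of that reduction (illustrative only):

```python
from math import gcd
from itertools import combinations

def simplify(a, b):
    """Reduce a rhythmic ratio a:b to lowest terms."""
    g = gcd(a, b)
    return (a // g, b // g)

# Most pairs in the 6 : 8 : 9 : 12 layer reduce to 3:4, 2:3, or 1:2 --
# ratios the ear knows from everyday Western music.
for a, b in combinations([6, 8, 9, 12], 2):
    p, q = simplify(a, b)
    print(f"{a}:{b} -> {p}:{q}")
```

Run the same reduction on 5 : 11 : 7 : 17 and nothing simplifies at all, which is why the second layer loses the familiar pulse.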

"Human performers lack the accuracy to perform very complex and exact rhythms," Burtner says. "The computer is infallible in this respect, and easily accomplishes the most amazing polyrhythms." Having a "percussionist" that can play such complicated patterns extends what Burtner can do as a composer. It also expands the possibilities for the listener, who now can hear music that no one had been able to play before.

Not everyone in the computer music department thinks in terms of musicians playing computers as instruments. Doctoral candidate Forrest Tobey deals with computers playing themselves. He says, "My model is you have the computer next to the third bassoonist. Just a member of the ensemble."

Many computer music compositions have employed human players performing onstage alongside a tape deck that plays the computer part. The principal drawback to this setup has always been that the machine plays its part the same way every time, with none of the spontaneity of live performance. It's oblivious to what the other players are doing, and oblivious to the conductor.

Tobey is developing a system that will make a computer "player" responsive to the movements of a conductor. In one room of his Baltimore apartment he has an array of equipment that includes a Macintosh computer, a Kurzweil K2000 synthesizer, and an infrared sensing system called the Buchla Lightning, developed by Don Buchla, who also invented the first modular synthesizer in the early 1960s. The Lightning consists of an infrared wand and two monitors. Tobey uses the wand, which resembles a longer version of a cigar tube, as a conductor's baton. The two monitors track the movements of the baton and relay this information to the Macintosh. In the Mac is a program of Tobey's invention that translates these movements, tracked on three axes, into instructions for the synthesizer. The program embodies professional conducting technique, and an ensemble's response to that technique.
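A hypothetical sketch (not Tobey's actual program) of the two mappings the demonstration exercises: inferring tempo from the timing of successive downbeats, and mapping wand-to-monitor distance onto loudness:

```python
def beats_to_bpm(beat_times):
    """Infer tempo from the timestamps (in seconds) of successive downbeats."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def depth_to_volume(wand_distance, near=0.2, far=2.0):
    """Map wand-to-monitor distance (meters, invented range) to MIDI
    volume 0-127: leaning in toward the monitor means louder."""
    x = min(max(wand_distance, near), far)
    return round(127 * (far - x) / (far - near))

# Beating once every half second yields 120 beats per minute.
print(beats_to_bpm([0.0, 0.5, 1.0, 1.5]))   # 120.0
# Waving the wand close to the monitor drives the "orchestra" to full volume.
print(depth_to_volume(0.2))                  # 127
```

The real system must track the baton on three axes at once and distinguish a downbeat from a preparatory gesture, which is far harder than either mapping alone.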

In the midst of all these tabletop appliances, Tobey raises the wand and executes the downbeat. The synthesizer sounds the first note of a computerized orchestral arrangement of Samuel Barber's Adagio for Strings, which Tobey uses for demonstration purposes. When he beats faster, the music increases in tempo. When he leans forward and waves the wand closer to the monitor positioned in front of him, the "orchestra" plays louder.

A conductor's gestures are every bit as complicated as a musician's, and for now Tobey's system can only respond to some of them. Though conductors are trained to make maximum expressive use of the baton, in practice they use both hands, their facial expressions, and their eyes to communicate with an ensemble. The Buchla Lightning hardware responds only to the infrared wand, which limits Tobey's vocabulary, so to speak, as he tries to communicate with the computer. But he's working on it. He's been thinking about some sort of glove he could wear on his left hand to enable the monitors to sense his gestures.

"Each step is a hard-gained step," says Wright.

Coccia Vinsi recalls his first glimpse of the computer's potential for making music. "I became instantly fascinated," he says. "Even in my ignorance I saw all the possibilities for a musician. We can bring the tradition we've inherited from the past into the future. We cannot just overlook the innovations that are around us today. The task is to somehow fuse the past and the future into the present moment."

In Room 316, Bezak looks at the computer and electronic gear stacked everywhere and muses, "It would be very interesting if Bach could live at this time and have all of these machines."

Dale Keiger is the magazine's senior writer.
