INDIGENOUS TO THE NET: Early Network Music Bands in the San Francisco Bay Area

by Chris Brown and John Bischoff

 

This article documents the work of two bands that were active in the San Francisco Bay Area between the mid-1970s and late 1990s. The League of Automatic Music Composers and The Hub were two of the first ensembles to investigate the unique potentials of computer networks as a medium for musical composition and performance. Both groups came about as associations of computer music composers who were also designers and builders of their own hardware and software instruments. Their approach to the computer music medium was strongly do-it-yourself, a characteristic common both to the electronic technology community of the San Francisco Bay Area, and the experimental instrument-building tradition of Harry Partch, John Cage, and David Tudor. They approached the computer network as a large, interactive musical instrument in which the data-flow architecture linked independently programmed automatic music machines, producing a music that was noisy, surprising, often unpredictable, and definitely more than the sum of its parts.

This article, co-written by two members of the Hub, provides an audio/visual tour of the music, instruments, and networking designs produced by these bands. In assembling the sounds, still images, video, programs, and diagrams that are the artifacts of twenty-odd years of creative work, we are struck by the ways in which both the recording and performance technologies represented reflect the character of their times. But we also hope to point out how many of the issues that were confronted by these bands are still relevant today to composers working on ways to make the internet a medium for live, interactive musical performance. This article will be followed in August 2002 on this same Crossfade site by the premiere of two new on-line network music pieces, one by each of us, pointing in that direction.

 

Introduction: Experimental Music in the Bay Area

In the 1970s and 80s, the San Francisco Bay Area was fertile ground for composers experimenting with microcomputers as musical instrument automata. For musicians of that time and place it was but a small step from the practice of acoustic music realized by the rigorous application of algorithms, including chance (Cage), stochastic (Xenakis), or minimalist processes (Reich), to the application of similar methods by machines in live electronic music. As yet unnamed, the Silicon Valley was springing to life from the garages and bedrooms where the potentials of solid-state electronic devices as building blocks for information systems could be investigated by individuals working in the shadows of the mainframe-dominated electronics industry. With the flowering of the personal computer industry in the Bay Area, access to the new digital technologies and to the people who developed them was perhaps the best in the world. But for all the young men with fortunes in the back of their minds (and in their futures) who pursued the addictive excitement of building electronic machines, there were also the political utopians whose dream was of a new society built on free and open access to information, and on a comprehensively designed technology based on embedded intelligence.

This was also the culture that gave the world "New Age" music, a watered-down and commercialized version of the musics based on modes and drones that Terry Riley, Pauline Oliveros, and La Monte Young invented here during the late fifties and early sixties. But West Coast music-making also included a free-wheeling, noisy, improvisational edge left over from the counter-cultural revolutions of the sixties. Defiantly non-commercial, and practiced by musicians coming from classical, free jazz, or experimental rock backgrounds, its aesthetic preferred compositions that changed with each performance, textures that emphasized a simultaneous multiplicity of voices, and a practice based on collaborative, communal or group-oriented activities. Another ingredient in this musical stew was the influence of the West Coast tradition of composer as instrument builder (Harry Partch, Lou Harrison, and John Cage) which emphasized taking control of the means of making music itself, including the tuning systems and the instruments. Why NOT extend this approach to the new electronic technologies? Finally, the lack of significant opportunities on the West Coast for the support and presentation of art music made composers in the Bay Area more likely to embrace underground, experimental aesthetics. Since the audience was so unfocused, and opportunities for careerism so futile, why not spend one's efforts following the potential of fantastic ideas, rather than worrying about the practical applications of those ideas within traditional musical domains?

 

The League of Automatic Music Composers, by John Bischoff

Early League History

 

The League came about through a confluence of technological change and radical aesthetics. In the mid-1970’s, composers active in the experimental music scene centered loosely around Mills College in Oakland were greeted by the arrival of the first personal computers to hit the consumer market. These machines–called microcomputers because of their small size compared to the mainframes of academia and industry–could be bought for as little as $250. Their availability marked the first time in history that individuals could own and operate computers free from large institutions. To the composers in this community it was a milestone event. Steeped in a tradition of experimentation, they were busy at the time building homebrew circuits for use in 'live' electronic music performance. The behavior of these circuits often determined the primary character of the music. The idea of using the electronic system itself as a musical actor, as opposed to merely a tool, had started with composers like David Tudor and Gordon Mumma. A natural continuation of their example could also be found in the local composers who performed with self-modifying analog synthesizer patches as well. One of these players was the late Jim Horton (1944-1998). Horton was a pioneering electronic music composer and radical intellectual who was first out of the blocks in purchasing one of the new machines—a KIM-1 in 1976. Horton's forward-looking enthusiasm for the KIM quickly infected the rest of the community. In a short time many of us acquired KIMs and began teaching ourselves to program them in 6502 machine language. Programs were entered directly into the KIM's 1K of memory via a hexadecimal keypad, and saved onto audio cassette—the cheaper the cassette machine the better. Loading programs back into the KIM's memory from cassette was a notoriously flaky proposition often requiring frequent re-tuning of the control circuit onboard the KIM. There was a strong feeling of community among the composers who were learning to program these tiny computers. This shared spirit was particularly helpful when it came to getting a foothold on the more esoteric, and sometimes pesky, aspects of KIM-1 operation.

John Bischoff's KIM-1 computer music system circa 1980 photo: Eva Shoshany

"The scene at Mills seemed worlds away from the electronic music studios I had been exposed to. They still had the public access studio going at that time, and they let me try out the electronic equipment myself and showed me how things worked. David (Behrman) was rehearsing with Rich Gold, John Bischoff, and Jim Horton, who were using tiny computers called KIMs. They were not exactly my image of what computers were like–a board about the size of a sheet of paper with a tiny keypad and a few chips."

George Lewis, quoted in Composers and the Computer, p. 79, by Curtis Roads, William Kaufmann, 1985

 

Silicon Orchestra

An informal discussion group sprang up during this time. A number of us got together on a regular basis to listen to the music we were creating, some of it made by our KIMs and some by analog circuitry in conjunction with other instruments. There was much discussion about new musical ideas as well. The group met at a house on Heinz St. in Berkeley that was being rented by a couple of ex-Mills graduate students. In addition to Horton and me, the group included composers Rich Gold, Cathy Morton, Paul Robinson, and Paul Kalbach among others. I remember a discussion one evening where Horton talked excitedly about the possibility of building a "silicon orchestra"—an orchestra of microcomputers linked together into an interactive array. The concept sounded impossibly far-out to me at the time.

Here's a musical example of the League from a rehearsal in 1980.

Jim Horton, Tim Perkis, and John Bischoff (left to right) preparing for a concert at Fort Mason, 1981. photo: Peter Abramowitsch

In 1977, Gold and Horton collaborated on a piece in which they linked their KIMs together for the first time in a performance at Mills College. Gold interacted with his artificial language program while Horton ran an early algorithmic piece based on the theories of the 18th-century mathematician Leonhard Euler. Early in 1978, Horton and I developed a duo piece for our KIMs where the occasional tones of my machine caused Jim's machine to transpose its melodic activity according to my "key" note. I recall that these initial computer-to-computer linkages took us hours to develop and debug as we experimented with different methods of transmission, each method often requiring us to learn a new technical facet of the KIM. Typically, connections were made via the 8-bit parallel ports available on the KIM's edge connectors <picture of Jim's KIM at CCAC>. In such a case, the program on the receiving end would either periodically check the port for new data or more casually retrieve whatever data was there when it looked. At other times we connected via the KIM's interrupt lines, which enabled an instantaneous response: one player could "interrupt" another player and send a burst of musical data which could be acted on by the receiving program immediately.
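
A present-day illustration may help here: the following minimal Python sketch (a stand-in for the original 6502 machine code, with all names invented) contrasts the two transmission styles just described, a receiver that polls its parallel port whenever its program gets around to it, versus one that responds the instant an interrupt delivers a byte.

    class Receiver:
        def __init__(self):
            self.port = None      # last byte written to the 8-bit parallel port
            self.key_note = 60    # current "key" used to transpose melodies

        def poll(self):
            # Polling style: check the port when convenient; values can be
            # missed or arrive late, which becomes part of the music.
            if self.port is not None:
                self.key_note = self.port & 0x7F
                self.port = None

        def on_interrupt(self, byte):
            # Interrupt style: the sender forces an immediate response.
            self.key_note = byte & 0x7F

    receiver = Receiver()
    receiver.port = 67         # the sending KIM writes a byte to the port
    receiver.poll()            # the receiver notices it on its next pass
    receiver.on_interrupt(72)  # or: the sender interrupts, changing it at once
    print(receiver.key_note)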

 

Early League Concerts

In the spring of 1978 the three of us played as a networked trio at the Blind Lemon, an artist-run space in Berkeley started by composer and instrument builder Erv Denman. David Behrman, who had moved west to become Co-Director of the Center for Contemporary Music (CCM) at Mills, joined us later that year in a "Micro-Computer Network Band" performance on November 26, 1978, again at the Blind Lemon. We did a 4-track recording of similar material that was edited down for one side of an EP and released on Lovely Music (NY) in 1980. By that time the group had become The League of Automatic Music Composers. The new group name was in part a reference to the historical League of Composers, founded in New York in the 1920s and closely associated with Aaron Copland. It also sought to convey the artificial intelligence aspect of the League's activities as we began to view half the band as "human" (the composers) and half "artificial" (the computers). As stated in our concert program, "the League is an organization that seeks to invent new members by means of its projects."

In the spring of 1979, we set up a regular biweekly series of informal presentations under the auspices of the East Bay Center for the Performing Arts. Every other Sunday afternoon we spent a few hours setting up our network of KIMs at the Finnish Hall in Berkeley and let the network play, with tinkering here and there, for an hour or two. Audience members could come and go as they wished, ask questions, or just sit and listen. This was a community event of sorts as other composers would show up and play or share electronic circuits they had designed and built. An interest in electronic instrument building of all kinds seemed to be "in the air." The Finn Hall events made for quite a scene as computer-generated sonic landscapes mixed with the sounds of folk dancing troupes rehearsing upstairs and the occasional Communist Party meeting in the back room of the venerable old building. The series lasted about 5 months as I remember.

Tim Perkis' homebuilt computer-driven sound synthesis circuitry used in the early 1980s. photo: Eva Shoshany.

 

The League in the Early 1980s

By 1980 Gold and Behrman had left the group to pursue other projects, and composer Tim Perkis joined the band. Tim had been a graduate student in video at California College of Arts and Crafts in Oakland and was an active player in local gamelans. Perkis, Horton, and I continued extensive development of the fledgling network music form between 1980 and 1982 and concertized widely in the Bay Area, including a performance at New Music America in 1981 at the Japan Center Theater in San Francisco. Don Day also brought his Serge Modular analog synthesizer into the group for a time. During this period we would spend months working up a concert.

At our Shafter Ave. house in Oakland, an entire Sunday afternoon would consist of setting up our computer systems in the living room and laboriously connecting them together. As we desired more flexibility in configuring interconnections between machines we started to use "solderless socket" strips to patch our port pins together rather than hard soldering them—an electrically risky method, as one misaligned connector could blow out an entire port. With wires running everywhere and our computer programs finally debugged, we eventually got the system up and musically running. For two or three hours we played, tuning our systems and listening intently as our machines interacted. When surprising new areas of musicality appeared, we took notes on the parameter settings of our individual programs with the hope that recalling those settings in concert would yield similar exciting results. The structural form of our concerts was essentially an agreed upon series of such settings, the moment to moment details, of course, always remaining in interactive flux.

In 1982 the League joined forces with the electronic music band the Rotary Club to develop a concert of works under the name "Rota-League." The Rotary Club, a younger generation of composers who had just finished graduate school at Mills, based their performance style around an automatic switching box designed by member Brian Reinbolt. Using an industrial timing wheel scavenged at a local surplus outlet, Reinbolt interfaced the switching box and the wheel in such a way that the turning wheel would affect the configuration of switches in an ongoing fashion. As the band members played, their sounds were routed through the switching box and chopped into a stunning, real-time collage of bits and pieces. The results fit well with the League's devotion to algorithmic music structures coupled with live human interaction. As the combined group Rota-League, we shared audio and control lines in various cross-processing schemes and performed an evening of music on September 25, 1982 at Ed Mock's Studio in San Francisco. Members of the Rotary Club included Sam Ashley, Kenneth Atchley, Ben Azarm, Barbara Golden, Jay Cloidt, and Brian Reinbolt.

Around 1983 Horton developed severe rheumatoid arthritis and performing became difficult. The League's activities slowed to a halt and the group finally disbanded later that year.

A video example of the League in action, shot by Don Day at their Shafter Ave. house in Oakland.

page from the 1980 catalog of 80 Langton St. (a San Francisco artist-run gallery)

 

The League's Working Process

The League didn't compose network "compositions" as such but rather whole concerts of music. We didn't give titles to these concerts—we thought of them as public occasions for shared listening. Initially, we let the networked stations run on their own in performance, unattended, and retired to the sidelines to listen along with the audience. After a while it seemed more fun to perform along with the network so we began to sit around our large table of gear, adjusting parameters on the fly in an attempt to nudge the music this way or that.

League members generally adapted solo compositions for use within the band. These solos were developed independently by each composer and were typically based on algorithmic schemes of one kind or another. There was a distinctly improvisational character to many of these as the music was always different in its detail. Mathematical theories of melody, experimental tuning systems, artificial intelligence algorithms, improvisational instrument design, and interactive performance were a few of the areas explored in these solo works. More often than not, the composer designed real-time controls so that a human player could adjust the musical behavior of the algorithm in performance. These "openings" in the algorithm became important features when adapting the solo within the network band context—they were natural points where incoming data from other players could be applied. The solos, played simultaneously in the group setting, became interacting "sub"-compositions, each sending and receiving data pertinent to its musical functioning. In actual practice, at the start of a new project members would begin with an informal meeting over coffee at a local café where we would throw around ideas for linking "sub-compositions" together. One composer might say: My program generates elaborate melodic structures–does anyone have pitch information to send me? Another might respond: Yes, I generate occasional sustained tones–how about if I send you the pitch I’m playing encoded as a frequency number? The first person might respond: Yes, I could retune my melodies to that frequency whenever it comes in. And so the structure of interconnections would be created a link at a time.
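
One such link is small enough to sketch in full. The Python below is hypothetical (pitch-as-frequency-number is taken from the conversation above; the ratio representation of the melody is an assumption): a drone station publishes the frequency it is playing, and the melody station re-roots its material on whatever frequency arrives.

    import random

    def drone_station():
        # occasionally emits a sustained tone and reports its frequency
        return random.choice([110.0, 165.0, 220.0, 330.0])

    def retune(melody_ratios, root_hz):
        # melodic material stored as ratios, re-rooted on the incoming pitch
        return [root_hz * r for r in melody_ratios]

    melody = [1.0, 9/8, 5/4, 3/2]   # a just-intoned fragment
    incoming = drone_station()      # "the pitch I'm playing, as a frequency number"
    print(retune(melody, incoming))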

The League of Automatic Music Composers (Perkis, Horton, and Bischoff, left to right) performing at Ft. Mason, San Francisco 1981.

photo: Peter Abramowitsch

A musical example from a rehearsal around 1981.

modem version

 

 

The Concert at the Blind Lemon

An early example of a League configuration was the concert presented at the Blind Lemon in Berkeley in Nov. 1978 by Behrman, Gold, Horton, and myself. The flyer publicizing the event features a diagram of the data paths between computer stations and indicates the musical algorithm running at each.

Behrman’s program scanned the audio spectrum of Horton and Gold's output using outboard filters and then sequentially pulsed those filters at the detected frequencies—in a sense reinforcing memories of harmonies just past. Gold’s station executed circular readings at audio rate of a virtual 3-D landscape that resulted in looping patterns of tuned noise. His part tended to create a sustained fabric upon which other parts were traced. Horton’s algorithm spun a thread of continuous melodic invention built from just-intoned pitch relations, and Bischoff’s machine played a punctuating role as it looked for chance tunings between Horton’s melodies and Gold’s timbres, beeping in agreement when it detected them.

As can be gleaned from the diagram, the links between stations in this particular topology included both audio signals and digital data. Audio was generally shared for the purposes of real-time analysis–to map certain characteristics in the music and then use that information to determine response, for example. In this case, Behrman and Bischoff analyzed different aspects of the frequency content of Gold and Horton’s audio and used that information as a main determinant of their musical output. Their stations depended on the sonic activity of other stations in the network in order to fully operate. The remaining connections transmitted information in digital form. The majority of these encoded some aspect of pitch or tuning, the exception being Bischoff’s "state flag," a single bit which indicated whether his station was sounding or silent. One example of the way data was used by individual stations was the "digital pitch info" Behrman sent to Horton. As Behrman's station analyzed frequency content it sent Horton a series of numbers representing the current set of the most prominent pitches. Horton's program used that information to redirect his melodies away from those pitches—a kind of negative musical feedback—which tended to diversify the pitch concentrations in the music overall.
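
The "negative musical feedback" can be stated in a few lines. In this hypothetical Python sketch (the names and the twelve-step scale are assumptions, not Horton's actual program), the melody generator steers its next note away from whichever pitch classes the analysis currently reports as most prominent.

    import random

    SCALE = list(range(12))    # pitch classes 0-11

    def next_note(prominent):
        # prefer pitch classes NOT in the prominent set, diversifying the texture
        candidates = [pc for pc in SCALE if pc not in prominent]
        return random.choice(candidates or SCALE)

    prominent_now = {0, 4, 7}  # e.g. analysis reports C, E, and G dominating
    print(next_note(prominent_now))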

The digital messages themselves were usually quite simple and often consisted of 1, 2, or 4 bits. A 4-bit connection between two machines, for example, could be used to communicate which one of 16 different scale steps a station was currently playing. The amount of information was not as significant as the complex timing of its application relative to other interactions.
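
To make the scale of these messages concrete, here is one plausible (assumed) packing in Python: a 4-bit scale step and a single-bit state flag, of the kind Bischoff used to signal sounding or silent, together occupying less than one byte.

    def encode(scale_step, sounding):
        # low nibble: one of 16 scale steps; bit 4: sounding/silent flag
        assert 0 <= scale_step < 16
        return (scale_step & 0x0F) | (0x10 if sounding else 0x00)

    def decode(byte):
        return byte & 0x0F, bool(byte & 0x10)

    message = encode(11, True)
    print(decode(message))    # -> (11, True)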

 

 

The Music from the Time of the Blind Lemon

Listening to the combined result, one hears independent musical processes at work–each station has its distinct musical viewpoint–along with the coordination of those processes through a real-time choreography of data flow. The whole can be seen as a kind of expanded polyphony, though in this case a polyphony of "musics" rather than "notes." And just as in traditional polyphony, the League’s music makes use of many styles of vertical alignment between parts—from strictly synchronous, to closely proximate, to distantly related in time.

"What we noticed from the beginning was that when the computers were connected together it sounded very different from pieces just being played simultaneously. If you imagine four pieces of music together at the same time, then coincidental things will happen, and just by listening you make some musical connections. But by actually connecting the computers together, and having them share information, there seems to be an added dimension. All of a sudden the music seems not only to unify, but it seems to direct itself. It becomes independent, almost, even from us."

— John Bischoff quoted in "Big Things From Little Computers", p. 20, by Dale Peterson, Prentice-Hall, 1982

"(The League) sounded like a band of improvising musicians. You could hear the communication between the machines as they would start, stop, and change musical direction. Each program had its own way of playing. I hadn’t heard much computer music at the time, but every piece I had heard was either for tape or for tape and people, and of course none of them sounded anything like this. I felt like playing, too, to see whether I could understand what these machines were saying."

— George Lewis quoted in "Composers and the Computer", p. 79, by Curtis Roads, William Kaufmann, 1985

 

music example

modem version

 

 

The Ear Magazine Benefit Concert

A later incarnation of the League developed music for a concert to benefit Ear Magazine, a grass-roots experimental music periodical. The concert was held at New College in San Francisco on March 28, 1980. Band members Jim Horton, Tim Perkis, and myself configured our machines as outlined on the front of the printed program for the concert.

Martian Folk Music example

Martian Folk Music example modem

From the program notes:

"The musical system can be thought of as three stations each playing its own ‘sub’-composition which receives and generates information relevant to the real-time improvisation. No one station has an overall score."

"Bischoff’s station directly generates various noises, glissandi and tones through a Digital to Analog converter. It usually makes its decisions (i.e. play or rest, hold or continue, faster or slower, etc.) by consulting data that encodes aspects of the states of both Perkis’ and Horton’s stations. Perkis’ computer calculates this information and Horton’s program signals the moment when it is accessed."

"Perkis' station can be described as a software implementation of a three dimensional network of virtual machines each of which plays one of nine voices. The state of a machine depends on its past state, the current state of its neighbors and the pitch of the present or last note played by Horton's station. The envelope generators can be adjusted so that Perkis’ program can play in chordal or percussive modes. His program is an illustration of how coherent activity can result from the intersection of randomness with cooperative structure."

"Horton's station plays part of Max Meyer's psychological theory of melody. It uses a 29-tone to the octave justly intoned scale. The program contains a group of (if conditions are met)->(make a change) modules that calculate rhythm, tempo, octave, rest and repetition. The conditions are set by the histories of other modules, the number of rests entered by Bischoff's station, the amount of time since the last change, etc. as well as a random factor."

 

 

Reflections on the League's Music

As the program notes indicate, the ways in which data were encoded by the sender and employed by the receiver were quite open. Data representing one musical feature could be applied freely to another. For example, Perkis’ 4-bit "key change" was used by Bischoff to determine "event rates" from slow to fast among other things. In addition, control structures built upon contingency were common: the incorporation of new incoming data from one station could depend on a "take" signal from another. In the case being discussed, Bischoff’s program waited for a signal from Horton before reading Perkis’ "key change" data. Such schemes seemed to increase the feeling of interaction in the music as 3-way coordination of musical change became more likely.
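
A hypothetical Python sketch of that contingent scheme (all names assumed): incoming "key change" data is held until a "take" signal from a third station releases it, and the 4-bit value is reinterpreted as an event rate rather than as a key.

    class ContingentReader:
        def __init__(self):
            self.pending_key = None
            self.event_rate = 1.0    # events per second, say

        def on_key_change(self, key_4bit):
            # data from one station arrives but is not yet applied
            self.pending_key = key_4bit

        def on_take_signal(self):
            # a signal from another station releases the pending data,
            # reinterpreted here as an event rate from slow to fast
            if self.pending_key is not None:
                self.event_rate = 0.25 + self.pending_key * 0.5
                self.pending_key = None

    reader = ContingentReader()
    reader.on_key_change(6)    # held
    reader.on_take_signal()    # applied only now: 3-way coordination
    print(reader.event_rate)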

Audio synthesis techniques used by the League were highly idiosyncratic. Given the relatively slow speed (1 MHz) and data width (8 bits) of these early machines, high-bandwidth sound generation was not possible. Working within these limitations, the band developed timbral depth through simultaneous use of multiple techniques and audacious application of raw synthesis—a quality that brought listeners into visceral contact with the nature of the medium. Direct digital-to-analog conversion was one technique that yielded unconventionally dense and highly articulated noise. Digitally controllable pulse wave generators running in tandem, and slightly detuned from each other to produce a "chorusing" effect, was another. The emphasis was on exploration of the technology at hand–technology that could be personally acquired or built from scratch–rather than the endless wish for better tools.
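
For a flavor of the detuned-pulse "chorusing" technique, here is a rough Python sketch; the sample rate, frequencies, and detuning amount are illustrative assumptions, since the League's versions ran in dedicated hardware under 8-bit control.

    import math

    def pulse(freq, t):
        # a naive square wave: +1 for the first half of each cycle, -1 after
        return 1.0 if math.sin(2 * math.pi * freq * t) >= 0 else -1.0

    SR = 8000                  # a low, era-appropriate sample rate
    f1, f2 = 220.0, 221.5      # a slightly detuned pair of pulse waves
    mix = [0.5 * (pulse(f1, n / SR) + pulse(f2, n / SR)) for n in range(SR)]
    print(mix[:10])            # over one second the pair beats at 1.5 Hz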

The League's approach could hardly have been more different from the prevailing tradition of computer-generated tape music of that time. Academic computer music of the 1970’s focused almost exclusively on the development of non-real-time timbral sophistication and devoted little attention to incorporating computers into a 'live' musical setting. In contrast, the League always played their machines in real-time and put great emphasis on evolving structure and surprise in performance.

musical example standard

musical example modem

The League rehearsing at CCAC in 1981. From left to right, Tim Perkis, John Bischoff, Don Day, and Jim Horton. photo: Eva Shoshany

 

 

The Hub, by Chris Brown

Formation of the Hub

 

Hub Origins

Driving up Franklin Street in San Francisco in 1981 in my 1965 Ford Falcon van, listening to KPFA radio on my way to a piano-tuning appointment, I'm hearing this noisy music, like nothing I've ever heard before. It sounds like an electronic wilderness, with more going on than I can really ever hope to keep track of, lines going off crazily in different directions, in different tunings, but somehow part of the same organism, an environment of musical plants each going in their own directions but part of the same world, dependent on the same food, water, soil, light. This was my first hearing of the League of Automatic Music Composers, my introduction to computer network music.

By 1986 I was collaborating on producing concerts with members of that band, by then defunct. We worked under the umbrella of "ubu, incorporated" (named for Alfred Jarry's anti-art hero "Ubu Roi"), producing experimental music concerts at galleries and community music spaces. In the summer of 1986 we decided to produce a mini-festival devoted to Automatic Music Bands at "The Lab", which ran a space in an old converted church building at 1805 Divisadero St. The bands were a collection of composers working with computers who were collaborating in duos and trios, connecting their computers in various ways in networks to share sound, control data, or both. We called the festival "THE NETWORK MUSE - Automatic Music Band Festival".

"Falling Edge", Chris Brown and Mark Trayle duo, during the Network Muse Festival, the Lab, San Francisco, 1986.

photo: Johanna Poethig

 

The first Hub

One of the groups from the Network Muse Festival, the duo of John Bischoff and Tim Perkis (original members of the League), called their performance "The Hub", because they were using a small microcomputer as a mailbox to post data used in controlling their individual music systems, data which was then accessible to the other player to use in whatever way and at whatever time he chose. This was the beginning of the band "The Hub": the other composers who joined to become The Hub were also performing on different nights in different groups, each using its own distinct network architecture. After the festival, the idea of using a standalone computer as a mailbox for the group (which Tim Perkis had initiated) seemed the most promising way to continue.

To quote from an early program note by Perkis:

"The Hub originally came about as a way to clean up a mess. John Bischoff, Jim Horton and myself played for several years in a group called The League of Automatic Music Composers, the first microcomputer network band. Every time we rehearsed, a complicated set of ad-hoc connections between computers had to be made. This made for a system with rich and varied behavior, but it was prone to failure, and bringing in other players was difficult. Later we sought a way to open the process up, to make it easier for other musicians to play in the network situation. The goal was to create a new way for people to make music together.

The solution hit upon had to be easy to use and provide a standard user interface, so that players could connect almost any type of computer. The Hub is a small computer dedicated to passing messages between players. It serves as a common memory, keeping information about each player's activity that is accessible to other players' computers."
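
The mailbox model Perkis describes can be caricatured in a few lines of modern Python (the method names are assumptions): the Hub is passive common memory, written and read whenever each player's machine chooses, with nothing pushed to anyone.

    class Hub:
        def __init__(self):
            self.memory = {}

        def write(self, player, key, value):
            # each player posts data about its own activity
            self.memory[(player, key)] = value

        def read(self, player, key):
            # any player may look at anyone's data, at any time
            return self.memory.get((player, key))

    hub = Hub()
    hub.write("perkis", "pitch", 57)      # Perkis posts his current pitch
    print(hub.read("perkis", "pitch"))    # Bischoff reads it when he chooses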

Perkis and Bischoff's original Hub was a KIM-1 microcomputer, a vintage 1976 product of MOS Technology, which was acquired that year by Commodore, the company that later developed the Commodore 64 and then the Amiga. The original price of the KIM-1 was $250. It was a single-board computer based on the 6502 8-bit microprocessor, with 1K of RAM, running at 1 MHz.

"The KIM-1 has 1152 bytes of RAM, 2048 bytes of ROM and 30 I/O-lines. Some of these lines are used to drive six 7-segment LED-displays and others are used to read the little hexadecimal keyboard....The KIM-1 has the ability to load and store programs on paper tape and/or cassette recorder."

The KIM-based Hub had four UARTs to allow four players to network using 300-baud serial connections. Perkis and Bischoff also used the KIM-Hub in a trio with Mark Trayle called "Zero Chat Chat".

The KIM-1 microcomputer, platform for the first Hub. For a great source of information about early microcomputers, see http://members.cox.net/obsoletetechnology/kim1.html

 

The Experimental Intermedia - Clocktower Concert

In 1987 composers Nicolas Collins and Phill Niblock invited members of the Hub to create a performance that would link two performance spaces (Experimental Intermedia and The Clocktower) in New York City, to exemplify the potential of network music performance to link players at a distance. This commission was the impetus for the six of us (John Bischoff, Tim Perkis, Mark Trayle, Chris Brown, Scot Gresham-Lancaster, and Phil Stone) to begin collaborating as a group. Two trios would perform, one in each space, each networked locally with new, more robustly built, identical Hubs, and the Hubs would communicate with each other automatically via a modem over a phone line. The result was the first concert of the Hub, which was reviewed by the Village Voice. A short video document of the concert was made by Experimental Intermedia.

to read the text of the article, go to this site: http://www.o-art.org/history/Computer/Hub/HubTel.html

The repertoire for the New York premiere had to be improvised quickly, since the network only came to life a week or so before the performance. The placement of two trios in different rooms gave us the idea of making each ensemble distinct: the Experimental Intermedia trio naturally fell to the "Zero Chat Chat" ensemble of Bischoff, Perkis, and Trayle, while the Clocktower position was taken by a trio of Brown, Stone, and Gresham-Lancaster. Three of the pieces, "Simple Degradation", "Borrowing and Stealing", and "Vague Notions", were designed as network pieces that would use the modem link to create an acoustically divorced but informationally joined sextet. Three other pieces were performed independently, taking full advantage of the improvisational predilections and local interactivity of each ensemble.

 

The second "Son-of-Hub" SYM Hub

The "Son of Hub" Sym-1 by Synertek, (1978, original price $239) (http://online.sfsu.edu/~hl/c.synertek.html) was the basis of the 2nd Hub. The SYM was also a 6502 processor with an on-board keypad data entry, 2K onboard RAM, and a 6850 ACIA (asynchronous communications interface) chip, for serial communication. The SYM-Hub was made by building an expansion board that held four additional 6850s with lines connected to the 8-bit databus, 7 address lines, and system clock and R/W control signals. << insert scan of SYM expansion board schematic>> The wire wrapped homemade circuits were installed underneath the SYM in a clear-plastic box, and four DB25 connectors (such as are still used for PC printer connections) were mounted on the outside. Three of them were used to support RS232, 1200 BAUD serial connections from three players at a time. The fourth 6850 was used to connect this box with the 2nd identical-twin SYM-Hub, which hosted three more players. The twin Hubs could communicate with each other using BAUD rates up to 9600 BAUD, but most modems at the time could only support 1200 BAUD, so the New York performance used that rate. An assembly language program was written by Phil Stone and Tim Perkis that received and transmitted messages to store and retrieve data in the Hub from players on each serial port; and that also sent copies of that stored data from one twin-Hub to the other, so that each Hub contained data-bases from all six players. Here is the description of the protocol that was used, from Stone and Perkis' program comments:

"HUB multi-channel mailbox control program:

Devices connected to each channel make requests to write to the hub processor table memory, and to read it. Each makes its request by sending command bytes of which the high four bits form a command field (CF) and the low four a data field (DF). In the hub processor there are three variables kept for each channel: a current WRITE.ADDRESS (12 bits); the current READ.ADDRESS, (12 bits) and the current WRITE.DATA (8 bits). These variables for each channel can be set only by commands from that channel. All channel commands are dedicated to setting these variables, or initiating a read or write to the hub table memory."
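
Read literally, those comments suggest a command loop like the Python sketch below. The nibble split (command field high, data field low) and the three per-channel variables come straight from the quote; the opcode values and the nibble-at-a-time address loading are invented for illustration, since the original 6502 source defined its own.

    TABLE = [0] * 4096    # the Hub's table memory, 12-bit addressable

    class Channel:
        def __init__(self):
            self.write_addr = 0    # 12 bits, loaded 4 bits at a time
            self.read_addr = 0     # 12 bits
            self.write_data = 0    # 8 bits

        def command(self, byte):
            cf, df = byte >> 4, byte & 0x0F    # command field, data field
            if cf == 0x1:      # shift DF into WRITE.ADDRESS
                self.write_addr = ((self.write_addr << 4) | df) & 0xFFF
            elif cf == 0x2:    # shift DF into READ.ADDRESS
                self.read_addr = ((self.read_addr << 4) | df) & 0xFFF
            elif cf == 0x3:    # shift DF into WRITE.DATA
                self.write_data = ((self.write_data << 4) | df) & 0xFF
            elif cf == 0x4:    # initiate a write to the table
                TABLE[self.write_addr] = self.write_data
            elif cf == 0x5:    # initiate a read from the table
                return TABLE[self.read_addr]

    ch = Channel()
    for b in (0x10, 0x1A, 0x1B, 0x34, 0x32, 0x40):    # addr 0x0AB, data 0x42, write
        ch.command(b)
    for b in (0x20, 0x2A, 0x2B):                      # READ.ADDRESS = 0x0AB
        ch.command(b)
    print(hex(ch.command(0x50)))                      # -> 0x42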

"Son of Hub" schematic diagram by Tim Perkis, of the SYM-Hub.

 

Hub Aesthetics

The NYC debut of the Hub was a success, and provided a notoriety for the group that launched a 10-year career. But the beginning of the band was a commission for a musical stunt, which became both a blessing and a curse. The idea of having musicians play with each other from distant locations was then, and has been ever since, of considerable interest to promoters, publicists, and audiences. Kyle Gann's review title "musica telephonica" emphasized the idea of the physical disconnect, the capability of creating music without being physically present, "phoning it in". But the band itself was always far more interested in the aspects of performer interactivity, algorithmic complexity, and the web of mutual influence that the network provided. The network was a way for computer musicians to create a new kind of musical ensemble that allowed them to interact in ways that were unique to their medium. We were interested in the sound of idiosyncratic, personal computer music instruments that could influence, and be influenced by, each other. The Hub became a way to extend compositional ideas from the solo electronic performer to an ensemble, creating a new form of chamber music. (The fact that the chamber could be expanded in distance was not entirely irrelevant, but never really the point.) It was also a mission to point the development of computer music away from the paradigm of total control toward one of creative anarchy. To quote from Tim Perkis' liner notes to The Hub's first CD (1989 Artifact Recordings 1002):

"I see the aesthetic informing this work as perhaps counter to other trends in computer music: instead of attempting to gain more complete control over every aspect of the music, we seek more surprise through the lively and unpredictable response of these systems, and hope to encourage an active response to surprise in the playing. And instead of trying to eliminate the imperfect human performer, we try to use the electronic tools available to enhance the social aspect of music making."

Yet what Perkis later called "the gee-whiz aspect" never really escaped us. Constructing and coding were the way we practiced, and were "the chops" that were required to make the music happen. But, as in any music, the mechanics need to be transcended to reach the aesthetic goal; and in the technology-dominated context that fed our publicity engine (modest though it was), it became hard to get the audience, much less ourselves, to always focus on the musical issues. The real musical work the Hub was able to achieve can nevertheless be described as the sound of individual musical intelligences connected by networked information architectures. What is the sound of the network? It goes beyond whatever sound-producing means we as artists chose in voicing the compositions we made, to the ways in which those individual voices interacted with each other. These modes of interaction were themselves the specifications for Hub compositions: a Hub piece was defined by a protocol specifying the types of musical information to be automatically shared within the group, and the means of sharing it among the members. Each composer was responsible for programming their unique computer/synthesizer instrument to communicate within these protocols. The history of the Hub will, I think, best be told as the progression of the ideas in these compositions. They will become the spine that the rest of the story can hang on.

The Hub: from left to right, Chris Brown, Scot Gresham-Lancaster, Mark Trayle, Tim Perkis, Phil Stone, and John Bischoff

photo: Jim Block Photography

 

 

Early Hub Repertoire

Simple Degradation

 

Simple Degradation, designed by Mark Trayle, may have been the first Hub piece: it exemplified the idea of using the Hub as a common memory which would contain information that all players would use directly to control the sound output of their systems. Its interactive architecture was one-way: one player essentially conducted the ensemble electronically by feeding information through the Hub that governed the behavior of all the other players. At the same time, as in most Hub pieces, the instructions specified only one aspect of the sound each player could produce, in this case the moment-to-moment volume ("amplitude modulation"):

"One performer generates and processes a waveform, simulating the response of a plucked string. This waveform is then broadcast on the computer network, the other performers using it for amplitude modulation (loudness variation). The rate at which the waveform is played back by the performers is determined by the performer who generated the waveform. The performers are free to choose whatever timbres and pitches they wish. The waveform may only be used for amplitude modulation. Pitch may only change after one complete cycle of the waveform."

— Mark Trayle

Such a simple idea was a great place to start. When a new interactive musical instrument (like the Hub itself) comes to life, it makes good practical sense to start by creating a piece for it that demonstrates both that the concept is working technically, and that it can provide new musical resources. The key to the success of this piece was that it provided a simple constraint that defined its musical character while leaving everything else open for each musician to determine. This established a character for the band: while our individual computers are machines which slavishly wish only to follow instructions, and the network is a means by which multiple machines may co-ordinate this behavior, we as players remain free to voice their behavior individually, both ahead of time, in the way we program them to follow the specification of the piece, and in real musical time, by providing them with interactive controls that allow us to adjust them as they play. In Simple Degradation, only one musical parameter is constrained. Not only are the players freely improvising their choice of pitch and timbre (though only one pitch per waveform cycle is allowed), but the timing of when they begin to play is open: this produces a music that is canonic, rather than monophonic. And although Mark (who always produced the waveform) provided the materials that controlled the shape of everyone's amplitude, thus controlling the shape of phrases and the form of the piece, the moment-to-moment mix of amplitudes resulted from timing choices made by each individual player.
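
A toy Python rendering may make the division of control clearer. The decaying "plucked string" cycle and the pitch set here are stand-ins, not Trayle's actual data: amplitude is imposed by the broadcast waveform, while pitch is a free choice made once per waveform cycle.

    import math
    import random

    def plucked_cycle(n=32):
        # one cycle of a decaying waveform, broadcast through the Hub
        return [math.exp(-i / 12.0) * math.sin(2 * math.pi * i / n) for i in range(n)]

    def player(waveform, cycles=3):
        events = []
        for _ in range(cycles):
            pitch = random.choice([220, 275, 330, 385])    # free, once per cycle
            for amp in waveform:                           # loudness is imposed
                events.append((pitch, abs(amp)))
        return events

    print(player(plucked_cycle())[:5])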

Audio example

Modem version

The Hub, performing live at Mills College on Oct. 6, 1989. From left to right, Perkis, Stone, Brown, Gresham-Lancaster, Trayle, and Bischoff. In the center, on the floor are the twin SYM-Hubs.

photo: Jim Block Photography

 

 

Borrowing and Stealing

Borrowing and Stealing, designed by Phil Stone, was the second piece defined for the premiere New York performances; it proposed a much more ambitious protocol for using the shared memory resource that the Hub provided. Its subject and title anticipated what would eventually become the battleground for networked musical technologies — the plunderphonic reality that digital information, as a symbolic representation of sound, is all too easily and perfectly replicated; that such information cannot really be owned, but to be kept alive must be continually transformed.

"Melodic riffs are composed by individual participants and sent to the Hub's shared memory, where they become fair game to be appropriated by the other participants. A "borrowed" (or "stolen", depending on one's perspective) riff may then be transformed in any of a multitude of ways, and replayed. The transformed riff is in turn sent to the Hub and made available to the other players. In this way, musical information flows instantly and reproducibly among the members of the ensemble without regard for copyright, attribution, or other proprietary notions."

— Phil Stone, from "The Hub" Artifact 1001

CD program notes

Technically, this was accomplished by providing each player with a separate area in the Hub's memory to which the melody that he was playing had to be published. Essentially, it was a shared database marked off into territories over which each individual player owned write privileges, but where reading (and copying) was free. (Of course, this was only a protocol that each player was individually responsible for programming on their own computer; there was no "server" that enforced the concept of territorial rights!)

"Pitch and duration information, as well as a loop-start synchronization flag, will be sent to the Hub as the melody plays. Each composer may, at any time, copy information from another composer's melody data and use it for his or her own melody data."

— PS, from piece specification.

Musically speaking, this was a tight spec: it defined a texture made entirely from repeating melodies, a kind of metamorphosing minimalism. As in Simple Degradation, the Hub once again was used to deliver common musical material to all the players; but in this piece not one but every member of the group was contributing to it. This made the piece more evolved in an interactive sense — there was no one conductor controlling the musical direction, but six equal players borrowing and stealing from each other, intertwined and interdependent. Phil had coded a graphical input system for melody generation on his Commodore 64 (or Amiga?) computer, in which he drew melodic contours with a mouse that would be instantly rendered as melodies published to the Hub. In practice, most of the rest of the group simply grabbed his melody and started transforming it, so that the music rarely had more than one melodic source for the whole texture that evolved. This could have led to a very formal, tightly controlled music made entirely out of subtle variations, or reflections, on the same material. But instead there was a more anarchic response, as our musical proclivities favored radical transformations: changing the tuning of the melody, stretching its shape by exploding interval sizes or rhythmic relations beyond recognizable connection to the original, applying frequency and rhythmic proportions of the melody to timbral control parameters of un-pitched sounds, and so on. The piece specification did not prevent individual members from creating realizations that became an algorithmic free-for-all spinning off from its central core. Realizing a good performance of the piece became a balancing act: revealing the organizational process at its core without letting it dominate the music. Phil liked to emphasize the former by directing additionally that the melodies become shorter and shorter towards the end of the piece. Our renditions of this piece probably reflected a creative internal tension in the band between its minimalist and improvisational tendencies.
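
The protocol itself is compact enough to sketch in Python, with a tame transposition standing in for the wilder mutations described above; note that territory ownership is enforced only by convention, exactly as in the piece.

    hub = {}    # player name -> that player's published melody

    def publish(player, melody):
        hub[player] = melody                  # write only to your own territory

    def steal(victim):
        return list(hub.get(victim, []))      # reading and copying are free

    publish("stone", [(60, 0.5), (62, 0.5), (67, 1.0)])    # (pitch, duration) riff
    riff = steal("stone")                                  # borrow it...
    publish("brown", [(p + 7, d * 2) for p, d in riff])    # ...transform, republish
    print(hub["brown"])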

audio of Borrowing and Stealing

modem version

Phil Stone with his "Axe-Thing" controller, standing, with the SYM-Hub in the foreground. Tim Perkis is on the left. From a concert at Mills College in 1988.

Photo: Jim Block Photography.

 

 

Text-based pieces

Vague Notions of Lost Textures (1987) and Role'm (1987)

These two pieces are examples of the use of the Hub for text communications. In both cases messages were used to coordinate free improvisational music from the ensemble. It is interesting that this simple feature has been an important aspect of many other network music projects since then, including for example Georg Hajdu's quintet.net software (www.quintet.net), which was used in Manfred Stahnke's (www.manfred-stahnke.de) opera Orpheus Kristall, premiered in May 2003 at the Munich Biennale.

In Vague Notions (designed by Scot Gresham-Lancaster), players sent text messages to their own data area in the Hub, and then read all other players' data areas, scanning for new messages from every other player. The topic of conversation in this primitive chat-room was co-ordination of the improvised music around a formal shape: a simple ramp of increasing note density, timbral brightness, and amplitude that peaked at around 80% of the agreed-upon duration of the piece, followed by a smooth return to a texture of low density, brightness, and amplitude, where the music stopped. Chats kept track of the progress of the band through this shape, and were often used to describe the character of the music that resulted, providing a running commentary on how the performance was going. During the New York performance the audience was free to wander around the band, observing the band's evaluation of its own performance.
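
That formal shape fits in a single function. A hypothetical Python sketch of the ramp, peaking at 80% of an agreed duration and returning to silence, of the kind each player could read as a target for density, brightness, and amplitude:

    def ramp(elapsed, total, peak=0.8):
        # rises to 1.0 at 80% of the duration, then falls back to 0.0
        x = elapsed / total
        return x / peak if x <= peak else max(0.0, (1 - x) / (1 - peak))

    TOTAL = 600    # e.g. an agreed ten-minute version
    for t in (0, 150, 300, 480, 570, 600):
        print(t, round(ramp(t, TOTAL), 2))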

In Role'm (designed by Chris Brown) the text messaging facility was used as a conductor of the improvisation. The conducting decisions were all made by one computer, which chose at random one of six descriptors for each of the five categories of musical texture, which together created a "group change":

ENSEMBLE: solo, duo, trio, quartet, quintet, tacet

ROLE: boss, lead, accompanist, filler, wall-flower, submissive

TUNE: chromatic, 8-note, 6-note, 4-note, 2-note, drone

RANGE: hi, hi/mid, mid, lo/mid, lo, all

TIME: gallop, trot, walk, crawl, drift, snooze

The amount of time that the band was to allot to each set of changes was also chosen randomly, from 10 to 60 seconds, and the "downbeat" for each change was transmitted through the Hub (this selection process is sketched in code below). Two additional rules defined the theatrical and acoustical character of this piece:

"1) if you are playing the role of BOSS or LEAD, you must stand up while playing, like in Big Bands. 2) if you are not playing the current group (ENSEMBLE = TACET), you are free to play any other acoustic soundmakers (bells, whistles, washboard, souzaphone, whatever) that you wish."

audio of Role'm

modem version

A live performance of "Role'm", at Mills College on Oct. 6, 1989. From left to right, Scot Gresham-Lancaster, Mark Trayle, and John Bischoff. photo: Jim Block photography

 

 

HubRenga

Itself a collaborative ensemble, the Hub found it a natural development to collaborate with other composers, acoustic musicians, and artists. One of the first and most ambitious of these collaborations came about in 1989 through the association of several members of the band with the San Francisco composer, writer, and performance artist Ramon Sender. Ramon was also involved in artistic collaboration using computer networks, in his case with writers on a poetry conference hosted on the Bay Area's local computer network, "The Well". Ramon's poets had been extending concepts from the traditional Japanese collaborative poetry form called "Renga", which is related in its syllabic structure to haiku. In Renga the participants trade writing lines, linking each line to the next using common themes.

With the support of a grant from the InterArts Program of the National Endowment for the Arts, we produced a unique poetry/music/radio performance called "HubRenga". KPFA, the flagship Pacifica radio station in Berkeley, was the radio sponsor for the project, and this description of the project appeared in their Folio program guide:

"HubRenga

Thursday Sept. 7 [1989] 9-11 PM Music Special hosted by Charles Amirkhanian.

Tonight's show is a live performance from KPFA's sound studio of "HubRenga", an audience-interactive, music/poetry piece made possible by the communication between two computer networks. The collaborators in the creation of this piece are Bay Area computer music band The Hub, novelist and musician Ramon Sender, and poets from the poetry conference of The Well.

The Well (Whole Earth 'Lectronic Link) is an electronic network that operates in the Bay Area facilitating communication between people interested in arts and alternative lifestyles. The poetry conference is a continuing forum about poetry which subscribers to The Well can join to exchange their ideas and work. A participant uses a personal computer and modem to phone up The Well, and then browses through contributions that other users have made about the topic of interest, and can leave responses, or start new ideas that remain in the Well for others to read.

Ramon Sender, a co-founder of the seminal San Francisco Tape Music Center in the 1960's, has been the moderator of this conference in the past few years. The Hub is a band of electronic music composers that uses a small micro-computer (also called "the hub") to share data among each of its members' independent computer-music systems. The Hub has developed a repertoire of pieces that uses this interdependence in a lively, performance oriented way.

During the performance poets will submit poetry to the piece through the Well. At KPFA, Ramon, as moderator, will browse through the submissions as they come in, reading them aloud as a part of the music. One Hub member will be also receiving the texts on his computer, which will be programmed to filter it for specific "key words" that have been determined in advance of the performance to trigger specific musical responses from The Hub. During the performance, poets will be listening to the piece over the radio while they are shaping it through their communication with The Well. The purpose of the piece is to create with this technology a situation in which a large network of collaborators is tied together from various remote locations in creating an interactive performance.

The piece was made possible by an InterArts grant from the NEA, administered by New Langton Arts in San Francisco."

 

Here are the words that were chosen by the Well's poets as the keywords, or themes, for the HubRenga performance:

embrace echo twist rumble keystone whisper charm magic worth Kaiser schlep habit mirth swap split join plus minus grace change grope skip virtuoso root bind zing wow earth intimidate outside phrase honor silt dust scan coffee vertigo online transfer hold message quote shimmer swell ricochet pour ripple rebound duck dink scintillate old retreat non-conformist flower sky cage synthesis silence crump trump immediate smack blink

In 1990 John Bischoff and Mark Trayle co-authored an article called "Paper HubRenga" that uses these "power words", as well as Chris Brown's favorite lines from the original HubRenga performance, as themes for discussing the Hub, and network music generally.

A video of the Hub (shot by Johanna Poethig) was made during the performance in KPFA's studio in Berkeley.

 

 

The MIDI-Hub Repertoire

The MIDI-Hub

In 1990 Scot Gresham-Lancaster was chosen to beta-test the new Opcode Studio 5 MIDI interface, which combines the functions of a computer interface, a MIDI patchbay with 15 inputs and outputs, a processor, and a synchronizer in a single box. While investigating its capabilities it quickly became clear to him that it could be programmed to function as a MIDI version of the Hub, which would allow faster, more flexible messaging between computer players than our homebuilt RS232 Hub provided. It would also implement the concept of the group on a standard music technology platform, which we hoped would make our work more open and accessible to other musicians.

The group decided to "upgrade" the Hub. And as electronic musicians everywhere eventually find out, upgrading the system meant either changing the existing music so that it could play on the new instrument, or else creating a new repertoire made specifically for it. We took the latter route. But changing the messaging system also changed the kind of music we made. Working within the MIDI paradigm had its own limitations; in Tim Perkis' words (from the booklet notes to the Hub's 1994 CD release "Wreckin' Ball", also on Artifact), "In certain ways MIDI is inappropriate for our uses, and we use it in a way it was never intended to be used: as a medium of communication between players. MIDI was designed to allow one master - typically a keyboard player or computer serving as a sequence player - to control a complex orchestra of synthesizers, without any interaction with anyone else."

The MIDI-Hub worked as a switchboard, not as common memory. Instead of depositing data (which could be in any custom format) into a place that anyone could read, the MIDI-Hub protocol provided the ability for each player to send any other player a MIDI message tagged with an identifier of who had sent it. No longer was it up to each musician to specifically look at information from other players, but instead information would arrive in each player's MIDI input queue unrequested. Information about current states had to be requested from players, rather than being held on a machine that always contained the latest information. This networking system was more private, enabling person-to-person messaging, but making broadcasting more problematic. To send messages to everyone, a player would need to send the same message out individually addressed to each player. If a player failed to handle the message sent, its information was gone forever. And messages were sent more quickly under the MIDI-Hub, leading to an intensity of data traffic that was new in the music. The MIDI-Hub pieces reflected the nature of this new aspect of the band's network instrumentation.
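
In code, the contrast with the mailbox Hub sketched earlier is stark. In this hypothetical Python rendering of the switchboard (queue and tagging details assumed), messages are pushed into a recipient's input queue tagged with the sender, and "broadcast" means one addressed copy per player.

    from collections import deque

    PLAYERS = ["bischoff", "perkis", "trayle", "brown", "gresham-lancaster", "stone"]
    queues = {name: deque() for name in PLAYERS}

    def send(sender, recipient, message):
        # arrives unrequested in the recipient's input queue, sender-tagged
        queues[recipient].append((sender, message))

    def broadcast(sender, message):
        # no shared memory: the same message goes out once per player
        for name in PLAYERS:
            if name != sender:
                send(sender, name, message)

    broadcast("perkis", ("note_request", 64))
    print(queues["brown"].popleft())    # -> ('perkis', ('note_request', 64))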

"The Javanese think of their gamelan orchestras as being one musical instrument with many parts; this is probably also a good way to think of the Hub ensemble, with all its many computers and synthesizers interconnected to form one complex musical instrument. In essence, each piece is a reconfiguration of this network into a new instrument"

— Tim Perkis, "Wreckin' Ball" CD notes

For more information on the MIDI-Hub, see Tim Perkis' article, originally written for Electronic Musician magazine.

Data-flow diagram of the MIDI-Hub.

 

 

Waxlips

Waxlips, designed by Tim Perkis (1991)

This piece could perhaps be considered the prototype MIDI-Hub piece, in that it sought to directly sonify the architecture of this networking system. Again quoting from Perkis' "Wreckin' Ball" program notes:

"Waxlips was an attempt to find the simplest Hub piece possible, to minimize the amount of musical structure planned in advance, in order to allow any emergent structure arising out of the group interaction to be revealed clearly. The rule is simple: each player sends and receives requests to play one note. Upon receiving the request, each should play the note requested, and then transform the note message in some fixed way to a different message, and send it out to someone else. The transformation can follow any rule the player wants, with the one limitation that within any one section of the piece, the same rule must be followed (so that any particular message in will always cause the same new message out). One lead player send signals indicating new sections in the piece (where players change their transformation rules) and jump-starts the process by spraying the network with a burst of requests.

The network action had an unexpected living and liquid behavior: the number of possible interactions is astronomical in scale, and the evolution of the network is always different, sometimes terminating in complex (chaotic) states, including near repetitions, sometimes ending in simple loops, repeated notes, or just dying out altogether. In initially trying to get the piece going, the main problem was one of plugging leaks: if one player missed some note requests and didn't send anything when he should, the notes would all trickle out. Different rule sets seem to have different degrees of "leakiness", due to imperfect behavior of the network, and as a lead player I would occasionally double up — sending out two requests for every one received — to revitalize a tired net."

What is left out of the above description is that the playing of "notes" did not really imply what it usually does in terms of pitches and tunings: the actual "notes" could be any mapping of MIDI note numbers to sounds of any kind. But Waxlips was still used as a tune-up piece for the Hub in its tours of the early 1990s; what was "tuned up" was the integrity of the Hub's interconnections and software. Once the piece reached a state where it would continue generating its barrage of sounds without further input to the system, we knew that everything had been interconnected correctly and that everyone's software was functioning.
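
The rule set is simple enough to simulate. The following toy sketch (our Python reconstruction for this article, not Perkis' program; the ring topology and transformation rule are arbitrary choices) passes note requests around six players, each applying a fixed transformation, with an adjustable "leak" rate to show how missed requests make the net die out:

    import random

    NUM_PLAYERS = 6

    def fixed_rule(player, note):
        # Each player's fixed transformation for a section; this
        # affine map mod 128 is an arbitrary illustrative choice.
        return (note * (player + 2) + 7) % 128

    def waxlips(steps=40, leak_rate=0.1):
        # The lead player jump-starts the process by spraying the
        # network with a burst of note requests.
        pending = [(0, random.randrange(128)) for _ in range(8)]
        for _ in range(steps):
            if not pending:
                print("the net has died out")
                return
            player, note = pending.pop(0)
            print(f"player {player} plays note {note}")
            if random.random() < leak_rate:
                continue  # a leak: a missed request trickles away
            pending.append(((player + 1) % NUM_PLAYERS,
                            fixed_rule(player, note)))

    waxlips()

Because every rule is deterministic within a section, the evolving state depends only on the initial burst and on which requests leak away, which is exactly why different rule sets settle into loops, near repetitions, or silence.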

video from Berlin and the Moers Festival, Germany, 1995: "No Holes!"

audio of Waxlips, recorded during the first Hub European tour on 3.12.92 in Brussels, Belgium.

modem version of Waxlips

Tim Perkis, circa 1988. photo: Jim Block Photography

 

 

The Glass Hand

The Glass Hand (1991), designed by John Bischoff

This piece exemplifies John's practice of extending the sonic architecture of music originally developed as a solo into an ensemble piece for the Hub. (Two earlier pieces, "Perry Mason in East Germany" and "Action at a Distance", took similar approaches.) In addition to the Hub version on this page, his solo version of The Glass Hand can be heard on his CD of the same title, Artifact 1014. Here is his description of the work:

"The idea of this piece is to create a multi-layered texture where each layer is transforming itself continually at a variable rate, and the rates of transformation are determined by network interaction. Each Hub player:

1) comes up with a set of predetermined sounds or textures;

2) devises a method of segueing from one sound to the next by smooth transition;

3) in performance, links the speed of their transitions to information coming in from other players and sends speed information to other players based on characteristics of their current sound.

The overall effect is a rather dense, orchestral texture with an internal ebbing and flowing which shapes the piece."

John's pieces typically used simple data exchanges between players which could be very openly interpreted, similar to the way that the League of Automatic Music Composers worked. Here is a description of the data sharing design for "Perry Mason":

"Each player is running a program of his own design which constitutes a self-sustaining continuous musical process. All players continuously report to the Hub three variables which indicate something about the current state of their activity, and each also has designed his program to be influenced by one variable from each of three other players. The complex web of mutual influence gives the music structure beyond any individual's planning."

In The Glass Hand, each player similarly made their own musical realization of the simple idea of creating smooth transitions, at variable speeds, from one musical texture to another. Each player was responsible for sending a trigger message to one other player and a speed message to another player in the group. When a trigger message was received, the player was to begin a transition from the current musical state to a new one, at a rate determined by the value of the speed message most recently received. The assignment of whom to send these messages to was preset so that everyone would receive the two signals from two different players: for example, player 1 sends triggers to player 2 and speeds to player 3, player 2 sends triggers to player 3 and speeds to player 4, and so on, as sketched below.
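
A minimal Python sketch of that routing (ours, for illustration; each real player's "transition" was of course a musical process, not a print statement):

    NUM_PLAYERS = 6

    class Player:
        def __init__(self, pid):
            self.pid = pid
            self.speed = 1.0   # most recently received speed value
            self.state = 0     # index of the current sound/texture

        def on_speed(self, value):
            self.speed = value

        def on_trigger(self):
            # Begin a smooth transition to the next predetermined
            # sound, at the rate set by the last speed received.
            self.state += 1
            print(f"player {self.pid}: segue to state {self.state} "
                  f"at speed {self.speed:.2f}")

    players = [Player(i) for i in range(NUM_PLAYERS)]

    def send_from(n, speed_value):
        # Preset assignment: triggers go to player n+1, speeds to
        # player n+2 (mod N), so everyone hears from two players.
        players[(n + 1) % NUM_PLAYERS].on_trigger()
        players[(n + 2) % NUM_PLAYERS].on_speed(speed_value)

    send_from(0, 0.25)
    send_from(1, 2.0)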

audio from the "Wreckin' Ball" CD, recorded on 5/29/92 in Seattle, Washington.

modem version

John Bischoff's hand. photo: Jim Block Photography

 

 

Wheelies

Wheelies (1992), designed by Chris Brown, used the computer network band as a giant, multi-limbed rhythm machine. The purpose was to create variations on complex rhythmic synchronizations among the players' machines, while allowing players to control each other's rhythmic performance. Each player programmed their own implementation of the system and could change the voicing of their instrument's rhythms, but all were locked into a tempo set globally for the group, while their ICTUS, METER, and DENSITY parameters were set and changed during the performance by other players, never by themselves.

MIDI clock signals were generated by Chris and broadcast to the whole group, setting down a common but changing tempo that all players locked to. Each player independently divided this tempo by a number from 1 to 10 set by the ICTUS parameter. METER set the rhythmic cycle, implemented as the repetition of notes or timbres around a cycle of a given number of beats, where one beat equals ICTUS timing clocks. Complex polyrhythmic textures not normally performable by humans could develop when up to six METERs with different ICTI cycled at the same time. The DENSITY parameter specified the percentage of beats to be sounded by playing a note, so a low DENSITY produced a sparse texture.

Every player could send at any time STOP, CONTINUE, or START messages, which all players had to obey. The STOP message functioned as a group mute: everyone had to program their machines to stop playing, but not to stop counting cycles and beats. A CONTINUE message meant "un-mute", and every machine resumed playing at the same point in their cycles where they had stopped. START meant first to send three messages out to other members of the group (one each of ICTUS, METER, and DENSITY, each sent to any member of the group), then to read new values of those parameters into the rhythm generating instrument, reset the timing clock counters, and start playing anew.
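
The division of labor between the shared clock and the three remotely set parameters can be sketched as follows (a Python illustration under our own assumptions; in particular, we gate beats randomly by DENSITY, where an implementation might equally use a fixed pattern):

    import random

    class WheeliesPlayer:
        def __init__(self, ictus=4, meter=5, density=60):
            self.ictus = ictus      # clocks per beat (1-10), set by others
            self.meter = meter      # beats per cycle, set by others
            self.density = density  # % of beats sounded, set by others
            self.clock = 0
            self.muted = False      # STOP mutes playing, not counting

        def on_midi_clock(self):
            # Called on every tick of the globally broadcast tempo.
            self.clock += 1
            if self.clock % self.ictus:
                return              # not yet on a beat
            beat = (self.clock // self.ictus) % self.meter
            if not self.muted and random.randrange(100) < self.density:
                print(f"beat {beat} of {self.meter}: play")

    player = WheeliesPlayer()
    for _ in range(48):             # 48 clocks of the shared tempo
        player.on_midi_clock()

Six such players with different ICTUS and METER values, all fed the same clock, yield the interlocking polyrhythms described above.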

Wheelies was characterized not just by its spunky rhythmic character but by the multi-dimensionality of its networking protocol, which was unique. While Chris controlled the global tempo, dominating that aspect of the conductor's role, any player could start or stop the whole group and thus conduct the phrasing of its rhythms. And an unusual kind of ensemble behavior resulted from the rule that players could only change each other's rhythmic behavior and not their own: one listened to recognize which part of the whole group sound each player was creating, and then tried to shape it by specifying parameter changes for them. Everyone could affect everyone else's sound, but no one could control everything about even their own sound. Because data exchanges happened only at START signals, these were like moments when the cards were shuffled for a hand that would proceed unchanged in character until the next STOP. The pacing of these changes was under the control of any player at any time, which meant that the whole group became attuned to shared control of this most important structural element in the piece.

A screen shot from Chris Brown's Wheelies software, showing the shape editor of HMSL (a music software language written at Mills College by Phil Burk, Larry Polansky, and David Rosenboom), used for editing the tempo curves in the piece.

audio track of Wheelies

modem version

Brown, Gresham-Lancaster, and Trayle, 1988.

photo: Jim Block Photography

 

 

Variations II

In 1995 David Bernstein produced "Here Comes Everybody: A Conference on the Music, Writing, and Art of John Cage" at Mills College. One of the concerts was devoted to realizations of Cage's live electronic music, and the Hub decided to implement a real-time version of Variations II.

VARIATIONS II (1961) is one of a group of eight works titled "Variations", composed in the two decades between 1958 and 1978. Rather than conventional musical scores, these works are sets of instructions that specify disciplined activities to prepare a performance. The instructions for VARIATIONS II include transparent sheets that contain either lines or points. The performer(s) are instructed to superimpose these transparencies on a surface and "drop perpendiculars from the points to the lines" to determine readings for six variables of the music. These variables are frequency, amplitude, timbre, duration, occurrence in time, and structure of the sound event. From this (rather tedious) preparation, the performer obtains a score from which the performance is rehearsed.

The graphical algorithm defined by Cage in VARIATIONS II, with its painstaking method for making decisions about musical parameters, seemed ripe for adaptation to our algorithmic network music. We decided to make a "live" version of the work, in which the casting of superimposed lines and points would occur as part of the performance, the music being automatically computed through the network according to Cage's instructions. A central MIDI interface, the Opcode Studio 5, makes possible the simultaneous distribution to all musicians of measurements made from virtual lines and points, which are video-projected as part of the performance. Each musician computes his own performance independently, but all the data used in these computations arises from the same graphical matrix. In addition, as each musician requires more information for creating new events or for determining the details of a complexly structured event, he may request the graphics computer to "nudge" the overlay, thereby creating the necessary data.
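
The measurement at the heart of Cage's instructions is simple plane geometry: the length of the perpendicular dropped from a point to a line. A sketch in Python (our paraphrase; the coordinates are invented, and the real system worked from the video-projected overlay rather than hard-coded values):

    import math

    def perpendicular_distance(px, py, x1, y1, x2, y2):
        # Length of the perpendicular dropped from point (px, py)
        # to the line through (x1, y1) and (x2, y2).
        dx, dy = x2 - x1, y2 - y1
        return abs(dy * (px - x1) - dx * (py - y1)) / math.hypot(dx, dy)

    # Six lines, one per variable: frequency, amplitude, timbre,
    # duration, occurrence in time, structure of the event.
    lines = [((0, 0), (10, 3)), ((2, 9), (8, 1)), ((0, 5), (10, 5)),
             ((1, 0), (1, 10)), ((0, 10), (10, 0)), ((3, 2), (9, 9))]
    point = (4.0, 6.0)   # one sound event on the overlay

    for name, (a, b) in zip(["frequency", "amplitude", "timbre",
                             "duration", "occurrence", "structure"],
                            lines):
        print(name, round(perpendicular_distance(*point, *a, *b), 2))

Each "nudge" of the overlay shifts the points and lines slightly, generating a fresh set of readings for everyone at once.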

In contrast to the "traditional" compositional method, this realization allows us to create music that is unique to each performance and that need not be rehearsed; both goals serve to update Cage's concept to current sociological and aesthetic conditions. The process of realizing the work in a new form has re-emphasized to us the appropriateness of his compositional strategies to the medium of electronic music.

One of the strands in the musical philosophy of the Hub was the interest in defining musical processes that generated, rather than absolutely controlled, the details of a musical composition. An acknowledged influence on this interest was the work of John Cage, and it seemed a natural extension to us to try to automate the indeterminate processes used in his work. Many of these processes are extremely time-consuming and tedious; and given that Cage was himself involved for a long time in live electronic performance, we felt a real-time realization of these processes during the progress of a performance was not only feasible, but aesthetically implied.

A video of the opening moments of the performance in the Mills College concert hall.

 

 

Collaborations

The Hub collaborated with many different acoustic musicians by using pitch and amplitude trackers to collect information about their performances and distributing it to all the players in the group to use in different ways. In the Hub's first San Francisco performance in 1987, Tim Perkis' piece "Spray or Roll On?" featured saxophonist Larry Ochs and violinist Nathan Rubin improvising with the Hub. Here are the instructions for that piece:

"Pitches played by an instrumentalist are recorded by one player's system and then distributed at his discretion to a common data area in the Hub, for use by the other players. The computer players are instructed to use this information for pitch and duration decisions, and to not play more than 20% of the time."

This was also, generally speaking, the principle behind the Hub's collaboration with composer Alvin Curran on his composition "Electric Rags III". Curran performed improvisationally on the Yamaha Disklavier piano, and the MIDI output from that instrument was broadcast through the MIDI-Hub to all the players to use as they wished.

A similar system was used for Scot Gresham-Lancaster's "Vex", an arrangement for the Hub of Erik Satie's proto-minimalist piano piece "Vexations". Here, Satie's score was sent to the Hub in synchronization with a performance of the piece by both Curran and the Rova Saxophone Quartet. The Hub freely rendered the notes as they arrived, an electronic filigree accompanying the acoustic ensemble.

"Vex" audio example from "Wreckin'Ball", Art 1008 (www.artifact.com), modem version

Another collaboration with Alvin Curran was a studio recording of Curran's "Everet Verbum" (1993). This work is derived from the "Delta" section of "Erat Verbum", a 6-part sound work commissioned by the Studio Akustischer Kunst of the WDR. Here, sections of John Cage's illustrious Norton Lectures, "I-IV" (1989), read by Cage, are fed to the Hub for perusal and instant re-translation into Morse code. The resultant dot-and-dash fantasy is mixed live by Curran "on his way to the Hub Concert".
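
The translation step itself is the old telegrapher's code; a minimal Python sketch (ours, not the software actually used in the studio):

    MORSE = {"a": ".-", "b": "-...", "c": "-.-.", "d": "-..",
             "e": ".", "f": "..-.", "g": "--.", "h": "....",
             "i": "..", "j": ".---", "k": "-.-", "l": ".-..",
             "m": "--", "n": "-.", "o": "---", "p": ".--.",
             "q": "--.-", "r": ".-.", "s": "...", "t": "-",
             "u": "..-", "v": "...-", "w": ".--", "x": "-..-",
             "y": "-.--", "z": "--.."}

    def to_morse(text):
        # Letters become dot-dash rhythms; word breaks become rests.
        return " / ".join(" ".join(MORSE[c] for c in word if c in MORSE)
                          for word in text.lower().split())

    print(to_morse("in the beginning was the word"))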

audio excerpt of Everet Verbum

modem version of Everet Verbum

postcard from the first San Francisco Hub concert in 1987

 

 

The Late Hub

Stuck Note

The MIDI-Hub repertory became the basis of the Hub's second CD, "Wreckin' Ball" (Artifact 1008), which appeared in 1994. Most of the tracks were recorded live in concert during national and international tours in the early 1990s. There were two European tours: the first, in 1992, included dates at the Royal Conservatory in The Hague and the Apollohuis in Eindhoven, Holland, the Free University (VUB) in Brussels, and the Logos Foundation in Ghent, Belgium; the second, in 1993, included two evenings in Berlin as part of the USArts Festival produced by the Akademie der Künste, and ended with an appearance in the workshops of the Moers Festival in Germany. The Hub also performed in 1992 at Sound Work in Seattle and at the International Computer Music Conference in San Jose. A collaboration with the Rova Saxophone Quartet took place in San Francisco in 1993.

By 1995, after a performance at the California Arts Council Conference on Technology in the Arts in Santa Clara, this work had run its course. Mark Trayle had moved to Southern California, and getting the group together regularly became more difficult. As Hub members got involved in other projects and as technology changed, the effort required to maintain the existing repertoire, much less to develop new pieces, became prohibitive. In 1997 the Hub was invited to do a short residency and concert at the Georgia Center for Advanced Telecommunications Technology (GCATT) at Georgia Tech in Atlanta. Phil Stone created an audience-interactive work, "Luv Connection", that took advantage of the high-tech concert hall there, which had ethernet connections at every seat. A special on-stage hub web server funneled audience preferences about the ongoing music to the group, while a video projector displayed a score indicating progress through the piece.

As an antidote to the increasing complexity of Hub projects, Scot Gresham-Lancaster designed a piece that re-focused the band on simple interactions with specific sonic results. His piece "Stuck Note" was designed to be easy for everyone to implement, and it became a favorite of the late Hub repertoire. The basic idea was that every player can play only one "note", meaning one continuous sound, at a time. There are only two allowable controls for changing that sound as it plays: a volume control, and an "x-factor", a controller that in some way changes the timbral character or continuity of the instrument. Every player's two controls are always available to be played remotely by any other player in the group. Players would send streams of MIDI controller messages through the hub to other players' computer synthesizers, taking over their sounds with two simple control streams. As in "Wheelies", this created an ensemble situation in which all players together shape the whole sound of the group. An interesting social and sonic situation developed when more than one player contended for the same controller, resulting in rapid fluctuations between the values sent by each. The sound of "Stuck Note" was a large, complex drone that evolved gradually, even though it was woven from individual strands of sound that might be changing in character very rapidly.
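
The control model reduces to something like the following sketch (Python, our own reconstruction, with invented names): each player's instrument exposes exactly two controllers, last-writer-wins, so two players contesting the same controller produce exactly the rapid fluctuations described above.

    class StuckNotePlayer:
        def __init__(self, name):
            self.name = name
            self.volume = 64     # MIDI-style controller value, 0-127
            self.x_factor = 0    # timbre/continuity control, 0-127

        def on_controller(self, control, value, sender):
            # Whoever sends last wins; the sustained sound is
            # reshaped by whatever values arrive, from anyone.
            setattr(self, control, value)
            print(f"{self.name}.{control} = {value} (from {sender})")

    player = StuckNotePlayer("p1")
    # Two other players contending for the same controller:
    for v1, v2 in zip(range(0, 128, 32), range(127, 0, -32)):
        player.on_controller("volume", v1, "p2")
        player.on_controller("volume", v2, "p3")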

Stuck Note audio and modem version.

 

 

Points of Presence

At the XI/Clocktower Hub premiere in New York City in 1987 there were a number of "techies" in the audience who commented to the band afterwards about the primitive nature of our serial communications network, and asked us why we were not using ethernet instead. While we were aware of that technology, it was simply not within our means at that early date, as the hardware and software that supported it were not yet available for our personal computers. This story emphasizes an important point about the Hub: we were musicians first, and technologists second, and so we implemented solutions that were practical for musicians in our time and place. As such, we were the first (as far as we know) to make interactive, live electronic music in a computer network, and despite the primitive nature of that network (compared to those available at present) we were the first to experience its potentials and its problems.

One of those problems has to do with distance. As instruments, and ensembles, get more complex, the direct interaction of people with sound becomes difficult to maintain. Computer music instruments are best when they take on a life of their own, surprising their creator/performers with a liveliness and character that could not be predicted; but there remains a need to guide them directly, to nudge their behavior in this direction and the next by physical gestures, and to hear the results of those gestures begin to emerge immediately. When the network mediates those gestures further, a disconnect can take place that alienates the player from interaction with the music. The need is to maintain a balance between independence of behavior and direct responsiveness in the design of electronic instruments and musical networks.

Another problem has to do with that other form of distance: the Hub's first concert, and the publicity we got from it that fueled our career, happened because of the public's fascination with the idea that musicians can play with each other in spite of being physically separated by great distances. "Points of Presence", a live performance produced by the Institute for Studies in the Arts (ISA) at Arizona State University (ASU), linking members of the Hub at Mills College, California Institute of the Arts, and ASU via the internet, became our swan song. Here is a description of the project from the program notes:

"Electronic music begins with the disembodiment of sound - the connection between sound and the physical body that may have produced it is severed, and becomes arbitrary both in time and space.

A century of electronic music began with Thaddeus Cahill's Telharmonium, an overambitious effort to provide continuous electronically generated music over telephone wires to subscribers. The hardware to produce this music occupied a full city block in New York City.

The Hub was born in 1987 as the result of a commission from two New York new music producers to present a concert in which performances in two spaces several New York City blocks apart were connected electronically, sharing data that influenced the outcome of the music in both locations. The hardware used for this event consisted of a motley selection of home-brew microprocessors, including two that were jury-rigged to update each other's RAM via a telephone modem connection, providing a shared memory for the two groups, assorted analog and MIDI synthesizers, and a hairy mass of patch cables to connect all of these into circuits.

Since that event we have continued to receive requests for concerts to be performed remotely, that is, without all of us being physically in the same space, but have always declined, in part because we really prefer to be in the space where we can hear each other's sound directly and to see each other and communicate live. The Hub is a band of composers who use computers in their live electronic music, and our practice has been to create pieces that involve sharing data in specific ways that shape the sound and structure of each piece. We are all programmers, and instrument builders in the sense that we take the hardware and software tools available to us and reshape them to realize unconventional musical ideas.

Now in 1997 new tools have become available that allow us to re-approach the remote music idea - telharmonium, points-of-presence - in a new way. Personal computers are now fast enough to produce high-quality electronic sound in real-time, allowing instrument-builders like Mike Berry to choose a purely software environment to produce home-made musical instruments. His Grainwave software, a shareware application for MacOS PowerPCs, was adopted by the group for this piece because it allows each of us to design our own sounds, and these sounds/instruments can be installed at any physical location that has a PC on which they can play - we can be independent of the hardware that produces our music, our instruments have become data which can be replicated easily in any place.

At the same time we, along with the rest of our culture, have been spending more and more time in our lives and our work communicating and collaborating on the internet. Why should we not extend our musical practice into this domain? Can we retain here the ability to define our own musical worlds, avoiding the commercial, prefab, and controlling musical aesthetics of the technological culture?

Points of Presence is a collaboratively designed instrument/network made to address these questions. It brings together software written at Mills College in Oakland (Mike Berry's Grainwave) and at the Center for New Music and Audio Technologies in Berkeley (the Max objects written by Matt Wright that translate MIDI control signals into UDP packets that travel the internet), supported by a research grant from the Institute for Studies in the Arts at ASU. Our six-member group is divided, two each, among three locations - ISA/ASU in Tempe, California Institute of the Arts in Valencia, and the Center for Contemporary Music (CCM) at Mills College in Oakland. Each member of the group plays a computer at each of these sites by sending control data over the internet that starts, changes, and stops sounds on their own software instruments. We call the machines we control at distant sites "remote-renderers", and the ones we sit next to "local-renderers". Additionally, we use algorithmic programs that run on our laptop computers to communicate with each other through the internet, running the hub-protocols that define the interaction of our systems for each of our pieces. In the past, these data connections were made through a MIDI patching box, a piece of hardware, the Hub itself - now we can use the net and our IP addresses to communicate with each other from any distance. Using this instrument, our band is virtually present, sounding the same music, at each of three point-locations.

In addition to being able to see each other, one of the main things we give up with this dislocation is immediacy: there is a small delay (probably 2 or 3 seconds) between the time when we indicate a change in our instrument's performance and the time when it is heard at each location. For a traditional musician, this would be unthinkable - for us, we choose to make a music that reflects the nature of our instruments. We are controlling directions in the flow of an automatically generated music, and our will to control every detail of the sound has been suspended. We are conducting a musical experiment, and the music that results is a part of the process which we embrace."

— Chris Brown, 11.16.97
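
The plumbing underneath this description can be suggested by a small sketch (present-day Python standing in for the Max objects and Grainwave instruments named above; the packet format, port, and addresses are all invented): a few bytes of control data, not audio, travel to each renderer, which makes the sound locally.

    import socket
    import struct

    RENDERERS = [("127.0.0.1", 9000)]   # stand-ins for remote-renderer IPs

    def send_control(player_id, control, value):
        # Pack a tiny control message and fire it at every site.
        packet = struct.pack("!BBB", player_id, control, value)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for addr in RENDERERS:
            sock.sendto(packet, addr)   # a few bytes, not an audio stream
        sock.close()

    def serve_renderer(port=9000):
        # Each site runs a renderer that turns incoming control
        # messages into locally synthesized sound (printed here).
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        while True:
            data, _ = sock.recvfrom(64)
            player_id, control, value = struct.unpack("!BBB", data)
            print(f"player {player_id} ctl {control} = {value}")

    send_control(player_id=3, control=7, value=100)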

In fact, in this performance and in subsequent ones using the same technologies, the internet time delay turned out to be much smaller than predicted: around 100 milliseconds on average within the U.S., and around 300 milliseconds to Europe. But the performance was technically and artistically a failure. It was more difficult than imagined to debug all of the software problems on the different machines, with different operating systems and CPU speeds, in different cities. In part because we weren't in the same place, we weren't able to collaborate in a multifocal way (only via internet chats and the telephone); and in a network piece, if all parts are not working, the whole network concept fails to lift off the ground. We succeeded in performing only 10 minutes or so of music with the full network, and the local audience in Arizona had to be supplied with extensive explanations of what we were trying to do, instead of hearing it actually happen. The technology had defeated the music. And after the concert, one by one, the Hub members turned in their resignations from the band.

Postcard of "Points of Presence", 1997, ASU/ISA.

 

 

After the Hub

The Points of Presence project was a difficult lesson for me, but one that I nonetheless wanted to apply to bringing network music to the wider audience on the internet. I felt that the basic architectural plan was sound: software synthesizers could render sound locally on each player's machine, while streams of control data provided communication and synchronization between players on the net. Using the same combination of software (Mike Berry's Grainwave for synthesis, and Max for data networking), I developed a project called the Eternal Network Music Site. Here is a description, first written in 1998:

"This is an ongoing Computer Network Music website which will allow an internet audience to experience Computer Network Music... in which automated electronic music instruments interact with each other through computer networks, leading to emergent sonic behaviors that reflect the interdependency of their systems, as well as the interaction of individuals within the group. A repertoire of new pieces is being created for the internet, each with different sonic and interactive character, and client/participants will select at any time from a growing menu of compositions created by different composers to listen to and interact with. The music on the website will change continuously like a live sculptural installation. The algorithms continue indefinitely (conceptually, eternally), slowly evolving, and tuned-in by clients via the web-site. Clients can also play by manipulating graphical controllers on the web-site, which will audibly influence parameters of the music. The behavior of the music can be influenced at the same time by participants tuned into the piece, but also by a history of past participation in the piece. The emergent sonic behavior of each piece is thus controlled in part by the present, and in part by the past interaction on the network. The sound reflects the ongoing structure of each algorithmic composition, as well as the level of traffic and interaction on the site."

I developed a suite of three pieces, and in the spring of 1999 traveled to Stetson University in Florida, Rensselaer Polytechnic Institute in New York, Oberlin College in Ohio, and Grinnell College in Iowa, where I trained students in the concepts and performed the works locally. In November 1999 I "premiered" these pieces on-line as "Eternal Network Music", a concert of computer network music on the internet linking 14 live performers at 6 different locations (Mills College, California Institute of the Arts, Princeton University, Rensselaer Polytechnic Institute, Stetson University, and the Zentrum für Kunst und Medientechnologie (ZKM) in Karlsruhe, Germany). The performance was part of the "net_condition - Art in the Online Universe" exhibition produced by ZKM, Karlsruhe. There were two full performances, one each for public concerts in Karlsruhe and Oakland, 9 hours apart. The concerts were both technically successful (except that the Karlsruhe contingent fell asleep and missed the later concert!), as I had ample opportunity to debug the software, and I felt good about the musical results. Pieces included my own "Invention #5", Scot Gresham-Lancaster's "Bignote", a realization of the "Stuck Note" Hub piece, and Ted Coffey's "Muka Wha?".

Audio recording of "Invention #5", and modem version, from live performance at Mills College, November 1999.

But the pieces were still not robust enough to be installed permanently on a website. In 2000-2001 I developed three more pieces, this time using James McCartney's SuperCollider software synthesis language. In addition to its wonderful synthesis features, SuperCollider also contained a very flexible implementation of Open Sound Control (OSC), a UDP-based networking specification for music developed at the Center for New Music and Audio Technologies (CNMAT) by Matt Wright. In these pieces I was able to go far beyond the MIDI-based communications methods I had previously needed to let Grainwave and Max communicate, and to ratchet up the speed of interaction to produce very intense interactivity among six machines at a time. I performed these pieces on local networks in workshops in Dresden and Berlin produced by Golo Foellmer in October 2001. But these pieces were also prone to crashing, ran only on Macintosh PowerPCs, and then only under certain system versions, so they were far from website-worthy!
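
The wire format that made this speed-up possible is compact and simple; here is a sketch of encoding one OSC message in Python (our illustration of the published format, not CNMAT's or SuperCollider's code):

    import struct

    def osc_pad(data):
        # OSC strings are null-terminated and padded to 4 bytes.
        return data + b"\x00" * (4 - len(data) % 4)

    def osc_message(address, *args):
        # An address pattern, a type-tag string, then big-endian
        # arguments, each field aligned on a 4-byte boundary.
        msg = osc_pad(address.encode())
        tags = "," + "".join("i" if isinstance(a, int) else "f"
                             for a in args)
        msg += osc_pad(tags.encode())
        for a in args:
            msg += struct.pack("!i" if isinstance(a, int) else "!f", a)
        return msg

    # e.g. one machine setting a parameter on another:
    print(osc_message("/player/3/density", 60).hex())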

Audio recording of CloudStreams_Bellwethers, by Chris Brown, from live performance at Mills College, January 2002.

modem version of CloudStreams_Bellwethers.

 

 

The Future

Since then I have been looking into the use of Phil Burk's JSyn, an extensible software synthesis language written in Java. Phil maintains its compatibility across Mac, PC, and Linux platforms, and it runs within a standard web browser. Phil himself credits the network music of the Hub as his inspiration in developing this system, which seems perfect for developing network music on the internet. His client/server networking system is called "TransJam", and he has developed a "WebDrum" application for it that is currently on-line and available for use.

In August 2002, two new pieces using the TransJam/JSyn system, one each by John Bischoff and myself, are scheduled to premiere on the Crossfade site. Perhaps this will finally be the launch of the "Eternal Music Site".