Jaap Blonk and Joan La Barbara
Messa di Voce

February 17, 2009
Suzanne Thorpe

On Monday, February 23, 2009, Electronic Music Foundation will present the North American premiere of Messa di Voce, an innovative, intuitive multimedia presentation that combines extreme vocalizations with dynamically generated visualizations and audio processing. The technical designers of the state-of-the-art application are known as Tmema (Golan Levin and Zachary Lieberman), and the virtuoso vocalists performing the groundbreaking work are Dutch improviser Jaap Blonk and New York-based vocal pioneer Joan La Barbara. Curious about what it is like for the artists who work with the freshly developed technology behind Messa di Voce, Arts Electric posed a few questions to the vocalists to give our readers insight into this unique experience.

Jaap Blonk and Joan La Barbara

AE:  You each have long histories of performing interactive vocalizations with musical compositions and improvisations. How do you interact with Messa di Voce and how is it different?

JLB:  In a work of mine, Events in the Elsewhere, I used the voice (through a pitch follower and Interactor®, a program developed by Mort Subotnick and Mark Coniglio) to control video sequences on laser disc players: the voice would locate a point on a disc, then run it forward or backward at different speeds, depending on both frequency and amplitude. As I do not have perfect pitch, I chose pitch ranges that I knew I could predictably nail, so that I could control this activity with some degree of accuracy. The problem with voice and pitch followers is that the vocal sound is so rich in overtones that the device can mistake a strong overtone for the fundamental pitch. In the case of Messa di Voce, the voices are used in a number of different ways to control and affect visual patterns projected on a large screen behind the performers. We can see the effect of what we are doing immediately, and can change or alter how we are singing to achieve the results we are after. For example, in the opening segment, the amplitude of my breath is controlling the amount of light filling a rectangular shape, while Jaap's voice is affecting the rotation of that same shape. So in a sense, my voice is causing the object to appear, while Jaap's is upsetting its spatial stability. It gets to be a kind of game. If I don't breathe, there is no object and Jaap cannot play with it.
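Joan's description of pitch followers being fooled by overtones can be sketched in a few lines of NumPy. This is an editorial illustration with invented values, not the software used in the piece: a naive follower that picks the loudest spectral peak locks onto a strong overtone, while an autocorrelation-based follower recovers the fundamental from the waveform's repeating period.

```python
import numpy as np

# A 220 Hz tone with a louder 3rd harmonic at 660 Hz, as a
# strongly voiced vowel might have (made-up amplitudes).
fs = 44100
t = np.arange(int(fs * 0.1)) / fs
signal = 0.4 * np.sin(2 * np.pi * 220 * t) + 1.0 * np.sin(2 * np.pi * 660 * t)

# A naive pitch follower picks the tallest spectral peak...
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
naive = freqs[np.argmax(spectrum)]   # lands on the overtone, 660 Hz

# ...while autocorrelation measures the period of the whole waveform.
# Search only lags corresponding to pitches between 80 and 1000 Hz.
ac = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
lo, hi = int(fs / 1000), int(fs / 80)
robust = fs / (lo + np.argmax(ac[lo:hi]))   # close to the 220 Hz fundamental
```

Real vocal signals are far messier than this two-sine sketch, which is why, as Joan notes, even dedicated hardware followers could be unpredictable on rich voices.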

In another segment, a camera follows my location onstage and superimposes an amorphous, blob-like, bouncy body shape over mine, so that what the audience sees is a strange, stubby, other-worldly creature. When I create jagged, crackly sounds, sharp, jagged edges protrude from the body and become larger as I sing louder. When I make a hissing, sibilant sound, a stream of white smoke or steam is emitted from the top of the creature. Jaap is, at the same time, giving a speech in an imaginary language, describing aspects of the creature. His speech is translated into curving cursive shapes, a graphic representation of his voice.

In the final segment, Pitch Paint, we almost literally paint with our voices. When we sing a pure tone, a line appears onscreen and continues in a direction until we change pitch or it hits a predetermined edge. By singing particular intervals, we can create shapes and, depending on what vowel we are singing as we close the shape, the interior of the shape is filled in with a particular color. If we ululate or gliss through different pitches, the line is also affected. The width of the line responds to dynamic changes, and when we stop singing and restart, a new line appears. The drawings are erased by sibilants. This is one of the most fascinating and satisfying of the modules for me, as I have always thought of myself as painting with my voice, onto tape or into the air, in time.

JB:  I can add that for the modules of the performance we have to focus on specific aspects of sounds. Some are about vowels, some are about deep sounds versus high sounds, and others focus on dynamics or isolated short sounds. We develop ways of stretching the sounds that fall within each category. For me it is the first time I am generating images in real time. I have been in situations where others took my vocal sounds to influence visual material, but I was not in control of what was happening. In that way it is a new experience for me.

Joan vocalizing jagged edges

AE:  How long did it take you to develop a relationship with the software application?

JLB:  Jaap and I worked together for the first time on this piece. Our initial sessions were spent improvising, exploring each of our vocabularies and coming up with a group of sound properties that we felt would be interesting to explore. We then demonstrated these to Golan and Zach, who were duly fascinated and impressed. They then vamped on how they could develop apps that could best explore specific sounds. So the apps were developed specifically for the sounds we wanted to make, both extended techniques and pure tones. There were occasional frustrating moments when an app did not behave as predicted and Golan and Zach had to make adjustments. I think it is an ongoing process: fine-tuning the apps, and our learning to fine-tune what we are doing to make the apps behave the way we want them to.

JB:  That wasn’t too hard for me. I wasn’t quite sure what was under the hood, as you say, but a lot of things that I could do with this software were easy to follow for me.

AE:  Did Golan and Zach customize the software for you both?

JLB:  We did a lot of testing and they did a lot of retooling and adapting to best utilize what we could do. We created a storyboard of the events and modules. Certain modules were developed specifically for ideas that we wanted to explore in solo situations. For instance, Jaap has this wonderful cheek-flapping thing which we all loved. Golan and Zach developed a module that created bubbles onscreen and recorded fragments of Jaap's sounds into these bubbles. At a certain moment, when Jaap has filled the upper part of the screen with these sound bubbles he stops to admire his creation and they start falling down, emitting his sounds as they bounce on the floor. It is very funny and very effective. For me, they created a module that generates vertical, vibrating light columns that appear onscreen as I sing pure tones. My voice is recorded and continually played back as I sing new tones and create new columns. The sound and image continue to play as I build the layers of sound. It's quite beautiful and, again, very effective.

JB:  In the process I came up with some ideas that were too complicated to execute, but in the end it turned into a collaboration. We collaborated on what we thought the software should do, in a back and forth of trying things out. Golan and Zach did a lot of work with recordings of Joan and me, because travel was too expensive.

AE:  Have either of you developed new extended techniques to work with the software?

JLB:  No, I have not yet developed any new techniques specifically for Messa di Voce.

JB:  Around the time that we started working with the bubble module I started making the cheek sounds, or what I call the cheek-synthesizer, blowing air around my cheeks in a very childlike way, which Joan mentioned. With the bubble module I conquered some of my fear of making childlike sounds.

Jaap's cheek synthesizer

AE:  Do you think of the software as a player or an instrument?

JLB:  I think of it as both.

JB:  I think of it as more of an instrument. There isn’t an aspect of the software that functions by itself.

AE:  How do you feel about graphic representations of your voice?

JLB:  Many years ago, I developed what I call sound painting, a direct result of the fact that I see sound when I sing. I see shapes and sometimes colors. So the idea of graphically representing my voice is something that I have been doing for many years. As I notate my work, I have developed certain graphic shapes to represent specific sounds. To be able to do this in a direct way, as in the Pitch Paint segment, is tremendously fulfilling.

JB:  I’ve been making graphic representations myself for some time. Before I became a vocalist, I played free improvisation on the saxophone, and performed Dada sound poetry, which is how I got into using my voice on stage. I began writing sound poetry and text-sound work in made-up languages, but I felt there was a huge gap between this material that I wrote and improvising. So I made a notation system to get a more accurate representation. I studied the phonetic alphabet and created my own extension of it for sounds not represented there. So I combined phonetic notation with personalized graphic notation, which I find very useful.

Jaap and Joan vocalizing

AE: Do you feel that the software reflects your vocal gestures faithfully?

JLB:  It is as faithful as it can be. I'm sure Golan and Zach could fine-tune the software to make it more sensitive, but I'm pleased with what it does at this point in time. I think as we develop the work, we might choose more individual moments to explore each of the modules. One of the problems is that one singer's sound can get picked up by the other's microphone, so there is some crossover, and some visual confusion can occur. We've tried to compensate in several ways for this difficulty.

JB:  No, but that might not be so interesting. I think it is perhaps not the best thing for the visual to represent the gesture directly, since the sound already does that. It is too obvious, and maybe an area in between is more interesting.

AE:  How do you see this project evolving? What are the plans for its future?

JLB:  The modules could obviously be fine-tuned and marketed for other performers' needs. If Golan and Zach were interested in this, they could probably make a small fortune in the commercial concert arena. As for this specific work, we are always looking for venues that would be interested in presenting it in performance. Each time we perform it, we learn a great deal and develop the musical aspects further.

JB:  In the future I would be interested in developing more of a player, so that we could be surprised by what the software is doing. We look forward to seeing where Golan and Zach want to take it, and more opportunities to perform the piece. We are very happy with the chance to perform it in New York!

For information about Messa di Voce as presented in The Human Voice in a New World, visit the past events files at EMF Productions:

EMF Productions