
An Intro to a DAW

A sizable part of the world is isolated at home, and the internet has been extremely lively as a result as people try to keep up their social interactions. It could also be a great time to try out new (or any) music software, so here is my take on my latest software exploration and an introduction to one of the two primary forms of music-making on a computer today (music notation software deserves its own post, which is coming soon). My comments are aimed at a general audience unfamiliar with the program, with a nod or two to those who have more experience.

Ableton Live, the digital audio workstation (DAW) in question, has been around for some time, but I finally started using it intensively for my dissertation project The Story of Our Journey. Despite ample work in electronics, I have not written a true fixed media piece (a piece without live manipulation of sound) since 2014. I was worried that jumping into such a large project without a deep understanding of the software would come at a great cost. But the only cost I have seen so far was the $450 student pricing ($700 for non-students).

Ableton Live is built around two views: Session View and Arrangement View. Arrangement View is the typical DAW setup you would see in software like Pro Tools (the software I initially learned). It contains spaces stacked vertically, called tracks, in which you can place sample clips or MIDI clips. The x-axis is time. You simply drop clips into tracks and place them where you need them in time. Easy. Zooming in and out is essential to make sure each clip is aligned perfectly, and Ableton has zoom panels that allow for quick navigation. As a laptop user, I do get slightly frustrated that I cannot swipe left and right to move to time points beyond my screen, but the ability to zoom by dragging the pointer around in the panel is excellent. Also important in Arrangement View is seeing as many tracks as possible. Thankfully, most panels in Ableton can be moved out of the way with a click (or keyboard shortcut) to maximize the space where the music is being made. Each track can be expanded individually for detail work, and there are handy H and W buttons that fit everything you've done into the height or width of your available screen space. So navigation and visibility are mostly assets for me.

Now, let's quickly distinguish between sample clips and MIDI clips. Samples are pre-recorded audio, like the songs and sounds on your computer. When you work with them in a DAW, you are working with the digital sound itself. MIDI, on the other hand, is data that the system converts into different musical parameters; an instrument or sampler is chosen as the sound source that brings the data to life. For example, within a MIDI clip, I can take a sound and map it onto different pitches of the keyboard to create scales. I can use MIDI to control dynamics (velocity). I can also assign pitch bending and other variables to further shape notes. So rather than working with the nature of the sound itself, I am imposing traditional thinking about pitch, rhythm, and dynamics onto what the sound can do within the clip, as if it were a keyboard instrument. Both kinds of clips can be stretched, expanded, raised or lowered in pitch, and chopped into pieces. This is done through clip properties and, with more precision, clip automation, which allows properties to change over the course of a clip as it plays. Ableton's clip editing abilities are limited compared to Pro Tools (which will make sense very soon), but there is great potential in the automation available.
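To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not anything from Ableton): an audio clip is just a stream of sample values, while a MIDI clip is a list of note events whose pitch and velocity can be reshaped directly. The note numbers, velocities, and frequencies are made-up example values.

```python
import numpy as np

# An audio clip: the waveform itself, as sample values (one second of a 440 Hz tone).
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
audio_clip = 0.5 * np.sin(2 * np.pi * 440 * t)

# A MIDI clip: note events, not sound. Each note is (pitch, velocity, start_beat, length_beats).
midi_clip = [
    (60, 100, 0.0, 1.0),   # middle C, loud
    (62, 80,  1.0, 1.0),   # D, softer
    (64, 60,  2.0, 1.0),   # E, softer still
]

# Working with MIDI means reshaping these parameters directly:
transposed = [(pitch + 7, vel, start, length) for pitch, vel, start, length in midi_clip]          # up a fifth
quieter    = [(pitch, max(1, vel - 30), start, length) for pitch, vel, start, length in midi_clip]  # softer

# The MIDI data only becomes sound once an instrument or sampler renders each note.
```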

Tracks do more than simply hold clips. Each track processes sound through effects. Ableton hosts three different types of effects: audio, MIDI, and Max effects. The first applies to all sounds (audio signal processing), the second obviously applies to MIDI, and the last processes MIDI through the software Max (and those who use Max can build their own effects). Audio effects originate in a variety of sound manipulation techniques that go back at least 80 years (it's easy to point to even earlier, primitive counterparts to the standard techniques, including splicing, before Pierre Schaeffer). Understanding the analog counterparts to these techniques helps in predicting the outcome of the effects, but most DAWs are designed in a way that invites experimentation. Some of the basic techniques possible with analog electronics include delay, echo, chorus, flanger, phaser, filtering, panning, distortion, amplitude modulation, ring modulation, and frequency modulation. Techniques accessible only through digital means include granulation and Fast Fourier transform processing, which allows for better pitch shifting and time compression/expansion of audio files. MIDI effects and Max effects both process MIDI data before it becomes a signal; in other words, these effects can change the more traditional musical parameters such as pitch, rhythm, and velocity. Many of Ableton's MIDI and Max effects are arpeggiators and note randomizers. So, for a MIDI track, MIDI effects process the data before it is channeled through a sound, and audio effects can then process the resultant signal. Unfortunately, none of these effects can be applied directly to a clip; every clip must be placed in a track with the effects assigned to it.
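As one concrete example of an audio effect, here is a minimal ring modulation sketch in Python/NumPy (again my own illustration, not an Ableton device): the incoming signal is simply multiplied by a sine-wave carrier, mirroring what the analog circuit does. The 440 Hz tone and 30 Hz carrier are arbitrary example values.

```python
import numpy as np

def ring_modulate(signal, carrier_freq, sample_rate=44100):
    """Multiply the input signal by a sine carrier (classic ring modulation)."""
    t = np.arange(len(signal)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_freq * t)
    return signal * carrier

# A one-second 440 Hz tone, ring-modulated by a 30 Hz carrier:
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
modulated = ring_modulate(tone, carrier_freq=30, sample_rate=sample_rate)
# The result contains the sum and difference frequencies (470 Hz and 410 Hz),
# which gives ring modulation its metallic, bell-like color.
```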

Effects are not only applied to a track but can also change over time, which is called automation. Automation can be entered statically by drawing lines, or the program will record your live manipulations as the music plays. For example, the high frequencies of a sound can slowly disappear as a low-pass filter moves downward. An echo effect can increase gradually. Sounds can be panned left and right to create the perception of movement in space. Ableton allows for automation within individual clips; however, that automation only applies to the effects on the track and a few other items of interest (panning, pitch bending, volume). The best way to freely reuse a clip with its automation is to export the file as if rendering the piece, which is a hassle. These clips may be organized into categories or into folders within the main project folder so they are quick to find, but automation is most powerful on the tracks.
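Conceptually, automation is just a parameter drawn as a function of time and applied while the track plays. A small sketch of that idea in NumPy (my own model, not how Ableton is implemented), using a volume fade-out and a left-to-right pan as the automated parameters:

```python
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate * 4) / sample_rate            # four seconds
signal = 0.5 * np.sin(2 * np.pi * 220 * t)               # the track's audio

# Volume automation: a straight line from full volume down to silence.
gain = np.linspace(1.0, 0.0, len(signal))

# Pan automation: sweep from hard left (0.0) to hard right (1.0).
pan = np.linspace(0.0, 1.0, len(signal))
left  = signal * gain * (1.0 - pan)
right = signal * gain * pan
stereo = np.stack([left, right], axis=1)                  # what you would hear when rendered out
```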

A final type of audio information registered by Ableton is a live feed. Using the techniques above on a track, incoming sound can be processed instantaneously by audio effects. I have not worked much with the live part of Ableton Live, but the Push controller used extensively for live performance maps the automation described above onto its many buttons, as can also be done with the normal computer keyboard.

What sets Ableton Live apart from other DAWs is the Session View. This view loops clips to a meter (and there are ways to get more complex metrical interactions) and allows the user to take on the role of a DJ. Clips placed in the same horizontal row are cued simultaneously, while moving down a column steps through clips in sequence. A simple and practical use of Session View would be to create the structure of a basic song with an alternating verse and chorus: the layers of instruments in the verse appear in one row, while the second row contains the chorus material. While recording from Session View, each verse can be made different simply by omitting or adding layers to the verse cue. As a composer who works in a more controlled writing environment, I find Session View important for generating ideas and Arrangement View for putting those ideas into action.
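One way to picture Session View is as a grid: each column is a track, each row is a scene, and launching a row cues every clip in it while those clips loop in time. Here is a rough sketch of the verse/chorus example as plain data (a conceptual model of mine, not Ableton's API; the clip names are invented):

```python
# Columns are tracks, rows are scenes; None means that track stays silent in the scene.
session_grid = [
    # drums         bass           synth          vocals
    ["verse_beat",  "verse_bass",  None,          "verse_vox"],    # scene 0: verse
    ["chorus_beat", "chorus_bass", "chorus_pad",  "chorus_vox"],   # scene 1: chorus
    ["verse_beat",  None,          None,          "verse_vox"],    # scene 2: stripped-down verse
]

def launch_scene(grid, row):
    """Cue every clip in a row at once; each clip loops until another scene is launched."""
    return [clip for clip in grid[row] if clip is not None]

print(launch_scene(session_grid, 0))  # ['verse_beat', 'verse_bass', 'verse_vox']
print(launch_scene(session_grid, 2))  # a different verse, made simply by omitting layers
```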

Overall, writing electronic music is a different experience than writing in traditional notation. The composer gets to deal with the sound itself instead of with notation that must be translated into sound by a performer or computer. MIDI information comes closest to notation, but what can be done with that data goes far beyond what a five-line staff can express without serious effort. Also, effects are largely coloristic and textural, in stark contrast to the grid system of pitch and rhythm imposed by traditional notation. The features described above can create the simplest of songs and the most complex arrangements, and most DAWs will allow for some level of manipulation to get the sounds you want.

Coming very soon: my thoughts on the Dorico music notation software as an 11-year user of Finale…

Mostly written in mid-March as dated here but published on May 20th.

Pacing Pt. II

I briefly wrote about the importance of pacing a few weeks ago. Here are some additional thoughts as I write my work Disconnect for saxophonist Chi Him Chik and percussionist Derek Frank with live electronics.

Writing a piece that includes both performers (who read notated music) and electronics (which do not fit nicely into our notation conventions) creates a unique challenge for perfect pacing. The way we notate music to fit into time is through strict rhythmic divisions within a meter. The notation system normally divides notes into halves (whole, half, quarter, eighth), but we can mark other divisions of notes in relation to larger beats. For example, we can divide a quarter note into seven parts by writing seven sixteenths with a 7 and a bracket over them. These all fit into a meter, which implies an emphasis (the downbeat) and a basic rhythmic framework (this is a simplification). Yet many natural-sounding rhythms cannot be notated with precision because of our method. Some composers have invented ways to achieve more fluid rhythms, but these often cause great confusion for the standard performer.

Electronics, while they may be synced to one of these meters, are much more easily thought of in absolute time (minutes and seconds). With a modern digital audio workstation (DAW), I can line up different sound files at just the right millisecond. I realized in this project that the best method for my piece was to work in absolute time and then place the live performers within that frame, rather than deal with the electronics in a metric framework. To pace the performers within this ametrical sound world, I first juxtapose standard meter in their music against the electronics, calculating roughly how many beats of rest are needed between entrances. For longer waits, I have a foot pedal attached to a computer to trigger the next major electronics entrance or shift.
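The bookkeeping behind "roughly how many beats of rest" is just a tempo conversion. A tiny sketch of that arithmetic (the 11-second wait and quarter = 72 tempo are made-up example values):

```python
def seconds_to_beats(seconds, tempo_bpm):
    """Convert an absolute-time wait into beats at a given tempo."""
    return seconds * tempo_bpm / 60.0

# e.g. if an electronics cue arrives 11 seconds after the last entrance at quarter = 72:
beats = seconds_to_beats(11, 72)   # 13.2 beats
rests = round(beats)               # notate roughly 13 beats of rest
print(beats, rests)
```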

As my piece progresses, however, I take the sax and percussion music away from strict meter. The first thing to go is the meter itself. The standard rhythmic figures remain, but without a meter they imply room for rhythmic flexibility. Then I introduce reactionary gestures: sets of notes triggered by something in the electronics or from the other performer. Soon after, I introduce imitation gestures, where instead of reading notation, the performers imitate something they hear from the other performer or in the electronics. Later, I give free improvisation with a contour, drawing squiggling lines through their music that convey only pitch content, with a note on the general feeling of the line. And finally, they are given completely free improvisation within certain time frames, with expressive prompts for inspiration. As the structure of the notation loosens and leads into free improvisation, the musicians align themselves more with the spontaneity of the electronic music. As the piece progresses, I have less exact control over the pacing because of the loss of meter and exact rhythms; however, I trust the performers' developed musical senses and the implications from my electronics to make this a successful piece.

More on pacing later! This work Disconnect will be premiered at the Exchange of Midwestern Collegiate Composers (EMCC) on April 7th at the University of Iowa (Iowa City) at 7:30. See the performance page for directions (more details will be posted soon)!

Musicking and Improvisation


I was reminded yesterday of the concept of "musicking," the late Christopher Small's term to help people think of music as an action rather than a thing (though adding a "k" to a word makes it feel archaic, so I use the term warily). Thus, listening to music is musicking, performing music is musicking, and creating music is musicking. Sheet music and recordings themselves are not music until someone engages with them. While he goes in depth on this topic in his book Musicking: The Meanings of Performing and Listening, I would rather talk about how musicking has brought life to the music around me.

I recently joined UMKC's Imp Ensemble, a free (not necessarily jazz) improvisation ensemble. Several years ago at BYU I was part of a similar group called GEM (Group for Experimental Music). Both ensembles provide a creative outlet where I am invigorated to musick without the restrictions of societal convention. While I believe we should strive to engage with our culture by putting forth our best contributions to the art, I also believe in what Ned Rorem termed "the distortion of Genius." It helps to step outside the bounds of classical concert music culture to reassess one's work and musical purpose. Next month, the Imp Ensemble will be performing in the West Bottoms as part of the West Bottoms Reborn initiative. More details here.

This upcoming Tuesday, my work Improvisations VI: Just, Plane, Natural will be performed by Gabbi Roderer, an amazing flutist (see the event details here). This is the third in a series of improvisation pieces for soloist and live electronics that I originally wrote for myself as a way to continue improvising outside of a group. But they have since become a fascinating means of collaborating with performers, tapping into their intuitive abilities to musick, not according to the societal norms of their repertoire but according to the dictates of their ears in response to the electronics (which are wholly dependent on the performer's playing).

These improvisations are free for the performer, but while this sounds liberating, it actually invites the performer to solve a compositional puzzle of their own. The piece only progresses with the tap of a pedal that initiates a change in the electronics. These changes provide the overall structure of the piece while leaving pacing up to the performer. The puzzle for the performer is to navigate these changes effectively to achieve their artistic vision. In this manner, the performer also becomes a creator and sculptor of sound in time. The performer must also engage in careful listening. It is a perfect example of musicking with a composed work, without the strings attached.

This work is a joint effort, a true collaboration. Rather than acting as dictator or even as visionary, the composer becomes facilitator and architect, providing a space and a flow that accentuate the performer's work. While the composer is not active on stage (though I can easily code in my own laptop part and shape the form in real time), the contribution of the electronics provides a unique mark that, while it may sound very different in each iteration, infallibly remains. The contributions of composer and performer are equal: the electronics can only be engaged through the performer's input, and the performer must engage with the electronics to play out the work. Through this partnership, the work is born anew every time, and I love this sort of relationship.

If you are a musician and want to musick with these Improvisations or have insights or comments on these concepts, I'd love to hear from you. Feel free to comment below.