Live Sampling While Live Coding

By Matthew B. Pohl

The concept of live coding is foreign to many people, making it a difficult draw for even small-scale performances outside of a very niche community. Except for the curious musician or coding enthusiast, a live coding environment does not hold the same appeal within the music community as a rock group, string ensemble, or jazz band. With this in mind, I proposed the following idea to David Hume and Martin Suarez-Tamayo, two members of the Fall 2018 ensemble: investigate the integration of live musical performance and a live instrument within a live coding environment, namely the TidalCycles live coding language and a compatible IDE in Atom. In this way, there could be a functional bridge between the audience's interpretation of traditional musical gesture and the typically sedentary presentation of live coding.

While it is entirely possible to have any acoustic instrument perform alongside the sound processes executed via Atom (or David Ogborn's extramuros software when in a group setting), the goal was to integrate a live performance into the code and manipulate it live. The TidalCycles language relies heavily on the SuperDirt synth, which is started from within the SuperCollider IDE and ships with a library of pre-recorded samples. We discovered that these samples live in a user-accessible folder called "Dirt-Samples," whose contents can be modified freely without causing errors. One can therefore sample a live instrument into any DAW, export the bounced file into a user-created folder within "Dirt-Samples," and call upon that sample in Atom. This is the process we followed.
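As a minimal sketch of that last step, suppose the bounced file is dropped into a new subfolder of "Dirt-Samples" named "satie" (the folder name here is purely illustrative). Once SuperDirt has indexed the folder, the sample can be called from Atom by that name:

    -- "satie" is a hypothetical user-created folder inside Dirt-Samples
    d1 $ sound "satie"        -- play the first file in the folder
    d1 $ sound "satie:2"      -- or pick a specific file by index

One caveat: SuperDirt normally scans "Dirt-Samples" when it starts, so a folder added mid-set may need to be re-read from the SuperCollider side (for example with ~dirt.loadSoundFiles) before the new name resolves.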

Any instrument can be recorded and sampled, whether mic'd or recorded acoustically in the case of a violin or saxophone, or captured from a digital instrument such as a keyboard or electronic drum kit. To avoid the problems acoustic instruments face in a live sound environment (levels, compression, and EQ, to name a few) while the performance was ongoing, and because of potential space constraints, we opted to use a digital piano (Roland FA-08) as the performance instrument. Its output was sent to a MOTU audio interface, where the piano signal was mixed with the sounds produced by the USB-connected computer and sent on to the main mixing board for distribution to the eight loudspeakers.

The actual performance, without going into significant detail, consisted of the first two measures of Erik Satie's 1ère Gymnopédie, which we sampled into Ableton Live at 72 bpm to match the tempo set in Atom, cps (72/60/6). While the sample was being bounced, exported, and relocated to a user folder in our local copy of "Dirt-Samples," I continued to play through the piece by improvising on a central theme of the work. The fact that the audience had a (partial) view of a performer improvising on a relatively well-known piece of music while the sample was being created served as a discreet bridge over the awkward first moments of silence that live coding often entails, giving them a motive to follow throughout the performance.
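The arithmetic behind that tempo setting is worth spelling out: cps expects cycles per second, and treating the two sampled bars of 3/4 as one cycle gives six beats per cycle, so 72 bpm works out to a five-second cycle.

    -- 72 beats per minute / 60 seconds / 6 beats per cycle = 0.2 cycles per second
    cps (72/60/6)          -- one cycle lasts 5 seconds, the length of the two sampled bars
    d1 $ sound "satie"     -- the bounced excerpt then retriggers once per cycle ("satie" again being illustrative)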

This is a vital distinction between what live coding is perceived to be and what it can become as part of a collaborative environment. I feel that, as live coding matures into a distinct musical art form rather than the more-or-less novelty it presently is, it should be the responsibility of the performer and the orchestrator to find ways to intertwine live coding with common musical practice. While this is not a new idea, perhaps it is one step towards performing coders creating saw-wave melodies live for an eighties tribute band or live drum-loop patterns for a modern pop-rock group, coming soon to a bar or club near you.


Composition in a Live Coding Environment

By Jordan Berg

Live coding is an interesting practice that I first attempted this semester as a member of the ICE (Integra Contemporary Electroacoustics) Ensemble at the University of Lethbridge. I am a composition major in my final year and have participated in the last two ICE concerts primarily as a percussionist and improviser. This fall I was introduced to live coding and learned the basics over the course of the semester in order to perform what I had absorbed live at our final concert on December 3, 2018, at the Zoo.

Live coding requires a musician to type lines of computer code into an interface to produce sound. It is not as simple as pitch, dynamic, rhythm, and duration: all of those parameters are controlled by the code, but so are reverb, modulation, placement within the stereo field, repetition, and more. There are so many aspects the performer can control that documenting them would (and does) fill a small book, and the toolset continues to be developed by musicians and programmers. It is possible that a live coder could perform Beethoven's Fifth Symphony (with difficulty), but because of the constant looping that is essential to this practice, the style that has developed is quite different from the linear world of classical and popular music (though it need not be).

My first attempt at this was to figure out how to code 'Walk This Way' by Aerosmith using strange sounds for the first assignment. I felt that I was successful, and for my second attempt I tried for something more ambitious. This attempt failed miserably, because the complexity of typing pre-planned pitches, rhythmic groupings, and layers of commands in a live environment comes crashing down if the performer misses something as small as a single character. I felt that the more successful attempts by my classmates relied less on pre-planning and more on aleatory. An understanding of the code and a rough idea in advance let a performer engage in the live sculpting of sound rather than a frantic attempt to type pre-existing pages of numbers and characters into a computer under low light with many people watching. The latter seems to guarantee failure.
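To make that concrete, here is a small, purely illustrative pattern in TidalCycles (the language our ensemble worked in); everything after the sound name layers one more of those controllable parameters onto the basic rhythm and pitch material:

    -- an illustrative pattern: each "#" line adds another controllable parameter
    d1 $ every 4 rev            -- vary the repetition every fourth cycle
       $ sound "arpy*4"         -- rhythm: four events per cycle
       # note "0 4 7 12"        -- pitch
       # gain 0.9               -- dynamic
       # pan sine               -- placement within the stereo field
       # room 0.4               -- reverb

A mistyped character anywhere in a block like this stops it from evaluating at all, which is exactly the fragility described above.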

As a composer, I have always found it difficult to reconcile being hyper-controlling on a measure-to-measure basis with letting things form over time without judging them instantly. Part of the problem is the ability to listen immediately to what I've composed on my computer at the touch of a button. I have no idea what makes me decide that something sounds good and something else doesn't. I compose something based upon a concept, then I listen to it and hope that it sounds acceptable. If it doesn't, I delete it instantly. My composition professors have told me constantly that I need to allow my music to travel into zones I might not be comfortable with, and I've never been sure how to accomplish that. My experience with live coding has taught me to value the randomness of initiating a concept and then allowing it to form and develop on its own before I decide to nudge it in a different direction. I realize now that the same is true on the manuscript page. Sometimes you need to allow an idea to come to fruition based on the parameters you set in motion earlier, rather than judging the perceived acceptability of the results on a note-to-note or measure-to-measure basis.