Live Sampling While Live Coding

By Matthew B. Pohl

The concept of live coding is foreign to many people, making it a difficult draw even for small-scale performances outside of a very niche community. Apart from the curious musician or coding enthusiast, a live coding environment does not hold the same appeal within the music community as a rock group, string ensemble, or jazz band. With this in mind, I proposed the following idea to David Hume and Martin Suarez-Tamayo, two of the Fall 2018 Ensemble’s members: investigate the integration of live musical performance and a live instrument within a live coding environment, namely the TidalCycles live coding language and a compatible IDE, Atom. In this way, there could be a functional bridge between the audience’s interpretation of traditional gestural performance and the typically sedentary presentation of live coding.

While it is entirely possible to have any acoustic instrument perform alongside the sound processes executed via Atom (or David Ogborn’s extramuros software when in a group setting), the goal was to integrate a live performance into the code and manipulate it live. The TidalCycles programming language relies heavily on the SuperDirt synth, which is activated through the SuperCollider IDE, and on SuperDirt’s pre-programmed samples. We discovered that the SuperDirt samples are located in a user-accessible folder called “Dirt-Samples,” the contents of which can be modified freely without causing errors. Therefore, one can effectively sample a live instrument into any DAW, export the bounced file into a user-created folder within “Dirt-Samples,” and call upon the sample in Atom. This is the process we followed.
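To illustrate the final step: once SuperDirt reloads its sample library, a new folder placed in “Dirt-Samples” becomes addressable by its folder name. A minimal sketch in TidalCycles, assuming a hypothetical folder named “satie” containing the bounced file:

    -- play the first file in the user-created "satie" folder (name hypothetical)
    d1 $ sound "satie"

    -- "satie:1" would address a second file in the same folder
    d2 $ sound "satie:1" # gain 0.9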

Any instrument can be recorded and sampled, whether mic’d acoustically, as with a violin or saxophone, or recorded directly from a digital instrument such as a keyboard or electronic drum kit. To avoid dealing mid-performance with the problems acoustic instruments face in a live sound environment (levels, compression, and EQ, to name a few), and due to potential space constraints, we opted to use a digital piano (Roland FA-08) as the performance instrument. The output was sent from the keyboard to a MOTU audio interface, where the sound from the piano was mixed with the sounds produced by the USB-connected computer and sent to the main mixing board for distribution to the eight loudspeakers.

The actual performance, without going into significant detail, comprised the first two measures of Erik Satie’s 1ère Gymnopédie, which we sampled into Ableton Live at 72 bpm, matching the tempo value set in Atom, cps (72/60/6). As the sample was being bounced, exported, and relocated to a user folder in our local copy of “Dirt-Samples,” I continued to play through the piece by improvising on a central theme of the work. The fact that the audience had a (partial) view of a performer improvising on a relatively well-known piece of music while the sample was being created bridged the awkward first moments of silence that live coding often entails, giving them a motive to follow throughout the performance.
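For reference, the arithmetic in that tempo expression works out as follows: 72/60 converts 72 beats per minute into 1.2 beats per second, and dividing by 6 treats one Tidal cycle as six beats. A sketch (newer versions of TidalCycles spell the function setcps):

    -- 72 bpm = 1.2 beats per second; six beats per cycle gives 0.2 cycles per second
    cps (72/60/6)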

This points to a vital distinction between how live coding is perceived on its own and what it can become as part of a collaborative environment. I feel that, as live coding matures from the more-or-less novelty it presently is into a distinct musical art form, it should be the responsibility of the performer and the orchestrator to find ways of intertwining it with common musical practice. While this is not a new idea, perhaps this is one step towards performing coders creating saw-wave melodies live for an eighties tribute band, or live drum loop patterns for a modern pop-rock group, coming soon to a bar or club near you.


Composition in a Live Coding Environment

by Jordan Berg

Live coding is an interesting practice that I first attempted this semester as a member of ICE, the Integra Contemporary Electroacoustics Ensemble at the University of Lethbridge. I am a composition major in my final year and have participated in the last two ICE concerts primarily as a percussionist and improviser. This fall I was introduced to live coding and learned the basics over the course of the semester in order to perform what I had absorbed, live, at our final concert on December 3, 2018 at the Zoo.

Live coding requires a musician to type lines of computer code into an interface to produce sound. It is not as simple as pitch, dynamics, rhythm, and duration: the code controls all of these parameters, as well as reverb, modulation, placement within the stereo field, repetition, and more. There are so many aspects the performer can control that documenting them would (and does) fill a small book, and the language continues to be developed by musicians and programmers. It is possible that a live coder could perform Beethoven’s Fifth Symphony (with difficulty), but due to the constant looping that is essential to this practice, the style that has developed differs from the linear world of classical and popular music (though it need not).

My first attempt, for our first assignment, was to figure out how to code ‘Walk This Way’ by Aerosmith using strange sounds. I felt that I was successful, and for my second attempt I tried for something more ambitious. This attempt failed miserably: the complexity of typing pre-planned pitches, rhythmic groupings, and layers of commands in a live environment can come crashing down if the performer misses something as simple as a single character. The more successful attempts by my classmates relied less on pre-planning and more on aleatory. An understanding of the code and a rough idea in advance let a performer engage in the live sculpting of sound, rather than a frantic attempt to type pre-existing pages of numbers and characters into a computer under low light with many people watching. The latter seems to guarantee failure.
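Both points are visible in even a single short line of TidalCycles: one line can touch rhythm, dynamics, stereo placement, and reverb at once, and one mistyped character in it will stop the evaluation. An illustrative sketch (sample name arbitrary):

    -- one line controlling rhythm, dynamics, stereo placement, and reverb
    d1 $ sound "drum(5,8)" # gain 0.8 # pan sine # room 0.4 # size 0.6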

As a composer, I have always found it difficult to reconcile being hyper-controlling on a measure-to-measure basis with letting things form over time without judging them instantly. Part of the problem is the ability to listen immediately, at the touch of a button, to what I’ve composed on my computer. I have no idea what makes me decide that something sounds good and something else doesn’t. I compose something based on a concept, then I listen to it and hope that it sounds acceptable. If it doesn’t, I delete it instantly. My composition professors have told me constantly that I need to allow my music to travel into zones I might not be comfortable with, and I’ve never been sure how to accomplish this. My experience with live coding has taught me to value the randomness of initiating a concept and then allowing it to form and develop on its own before I decide to nudge it in a different direction. I realize now that the same is true on the manuscript page. Sometimes you need to let an idea come to fruition based on the parameters you set in motion earlier, rather than judging the acceptability of the results on a note-to-note or measure-to-measure basis.

Ensemble Performance and Teamwork While Live Coding

By Travis Lee

For this write-up I decided to take a closer look at my group performance with Cameron at this semester’s concert of the Integra Contemporary & Electroacoustics Ensemble. I will first talk about our ideas and the decisions we made regarding the piece leading up to the concert. After that, I will discuss my thoughts and reflections on the piece and the performance we gave. Through this review I hope to pinpoint the strengths and weaknesses of our performance so that I can apply that information to future performances.

For our group performance piece, Cameron and I decided to repurpose our midterm performance, but not without a bit of retooling. Our changes and additions were largely based on the feedback we received from the class after our midterm, the main points being to explore more diverse sample libraries and to incorporate a bass voice into the piece. In an effort to act on those recommendations, we replaced the “bd” sample with one from the “hardkick” library and added a line that played samples from “jungbass”. These changes also gave the piece a slightly more electronic texture than before. We also wanted to change the format from ABA, as we felt we needed more breathing room. The original intention was to play the first two sections and end it there; however, in rehearsal we found that there still wasn’t enough time to end the piece comfortably. So we decided to use just the A section and supplement it by adding effects to shape the sound. This in and of itself provided us with a pseudo-AB format, one where the musical material is added in the A section and the texture and rhythmic structure are varied in the B section.

After both the rehearsal and the concert performance, one major problem with our method became clear. As with our midterm performance, we used only one channel and used the stack function to layer multiple voices over top of each other within that channel. This format had certain advantages, chief among them that it allowed us to add effects to individual voices as well as globally. However, it required us to work in stages, since having one channel meant that we could only execute through one of the boxes. To expedite the process and to help prevent us from executing unfinished code, one person would write his code into another box and then copy and paste it into the primary one. Unfortunately, this was not enough of a fix to counteract the loss of productivity the single channel caused, and in retrospect it was the primary reason we had to scale down the piece. For subsequent group performances of this nature I plan to return to the multi-channel method used by the rest of the performances, though I would still consider this format for a solo performance.
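For readers unfamiliar with the technique, here is a sketch of the single-channel approach, with hypothetical patterns standing in for our actual code: stack layers the voices inside one channel, per-voice effects attach within the list, and a global effect wraps the whole stack.

    -- everything lives in d1, so each execution replaces the entire texture
    d1 $ (# room 0.3)                               -- global effect on the whole stack
       $ stack [ sound "hardkick*4"                 -- kick voice
               , sound "jungbass:2(3,8)" # gain 1.1 -- bass voice with its own level
               ]

The wrapper effect is exactly the convenience described above; the cost is that any change to any voice means re-executing the entire block.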

Live Coding Etiquette in Small and Large Ensembles

By Kierian Turner

When you are in any type of ensemble, the focus is on the collective and the sounds being created collectively. The focus is not, and should not be, on the individual, unless it is a solo section or a solo piece. When it comes to a live coding ensemble, these principles remain the same.

When live coding in a large ensemble, you as an individual want to make sure that you are adding to the overall texture of the piece. You do not want your ideas to dominate and mask other ensemble members’ ideas, so you need to think about your ‘gain’ values, your ‘pan’ values, and the overall busyness of your coding. If you are coding the bass or rhythmic line of the piece, you want to make sure that it fits well within the piece and sits properly in the mix. If you are layering harmonic and melodic lines, you want to make sure that they do not conflict with the existing lines of code.
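In TidalCycles terms, this mostly means attaching deliberate gain and pan values to a supporting part rather than accepting the defaults; a small sketch (sample and values arbitrary):

    -- keep a supporting hi-hat line low in the mix and off to one side
    d2 $ sound "hh*8" # gain 0.7 # pan 0.25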

In general, you want to create different sections throughout the piece, so each ensemble member has to think about its overall scope. You want transitions from section to section to be smooth so that the piece has a general flow, unless you intend to have a very choppy piece. Therefore, each member must think about how they are going to transition from idea A to idea B. Each ensemble member must also exercise the ‘power of limits’: in a large ensemble, if everyone is constantly jumping from idea to idea and using a wide variety of sounds and parameters, the piece will become very chaotic very quickly. If everyone has a designated role, or at the very least limits themselves to a few sounds and effects, this will add to the structure and cohesiveness of the piece.
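One concrete way to smooth a section change, beyond simply editing a running pattern, is TidalCycles’ transition functions; a sketch using xfade, which fades the new pattern in over the old one (pattern contents hypothetical):

    -- replace channel 1's pattern with a crossfade rather than a hard cut
    xfade 1 $ sound "arpy*4" # gain 0.9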

When coding in a small ensemble of two or three, most of the above information remains relevant, but there are additional parameters, restrictions, and limitations to consider. In a large ensemble, the music will automatically continue to drive forward, since there are many moving parts with lines constantly changing. When there are only two or three members, one of the challenges is keeping the audience engaged. To achieve this, the members must stagger their parts and really focus on the timing of adding new lines of code. You want to write short lines of code and be able to manipulate them quickly, so that the audience does not get bored listening to the same repeated lines for a minute at a time. If you stagger these short, quickly manipulated lines, each member has a bit longer to work with their code before having to implement the next line, freeing them from a large time constraint. Another challenge is that you do not have, say, ten other coders, so you have to layer your material creatively and add multiple parts to the piece yourself. Where you could focus on one instrument or sound in a large ensemble, in a small ensemble you have to contribute melodic, harmonic, and rhythmic elements to truly reinforce the piece. This means coding multiple lines continuously, or coding lines that contain multiple elements.
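In practice, one performer covering several roles might keep a few short channels running at once; a sketch (patterns hypothetical):

    -- one coder covering three roles across separate channels
    d1 $ sound "bd(3,8)"                  -- rhythmic element
    d2 $ note "0 3 7" # sound "jungbass"  -- bass element
    d3 $ n "0 4 7 12" # sound "arpy"      -- melodic element

Executing these lines one at a time, rather than all at once, is itself the staggering described above.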

ICE EVENT on 3 December, 2018

ICE ensemble holds the first-ever “Algorave” at U of L. New music created by real-time computer scripting and live coding (TidalCycles, SuperCollider, Haskell). Pushing musical boundaries. Exploring the limits of electroacoustic sound, improvisation and performance art with ICE.

7:00pm
Monday, December 3
The Zoo, Student Union, uLethbridge
Free admission

unnamed soundsculpture by Daniel Franke & Cedric Kiefer (film)

In April, 2017, Integra Contemporary & Electroacoustics presented their final event of the academic year, entitled Film, Sound & Space. Our concert showcased the hard work of the members of the ensemble: Digital Audio Arts students from the University of Lethbridge. In addition, we featured video, a 360° film, and a web browser-based graphic novel, among other pieces and improvisations.

I want to thank Daniel Franke and Cedric Kiefer for letting us reinterpret the musical accompaniment to their original work, entitled unnamed soundsculpture: embodiment of sound (2012), and I want to acknowledge onformative.com for facilitating this collaboration.

For more information about the production of the film, go to:
onformative.com/work/unnamed-soundsculpture

Other Electroacoustic Ensemble videos at https://vimeo.com/album/2201064

CrawlSpace by Bryn Hewko (360° film)


I want to thank Bryn Hewko for permitting us to work with his 360° film, entitled CrawlSpace (2016), and I want to acknowledge the assistance of the New Media Department and “AGILITY” at uLethbridge. New Media supplied us with an Oculus Rift, and AGILITY donated several uLethbridge-branded Google Cardboard viewers for the participating audience.

The performance begins after a one-minute introduction.

Other Electroacoustic Ensemble videos at https://vimeo.com/album/2201064