ICE EVENT on 7 April, 2019

In the tradition of silent film improvisation, join ICE Ensemble for The Silent Game Project: live music, live beats, live gameplay. Humanity needs a reset, a restart, a reboot. We’ll be Scrolling to Zero, Scoring Breath, Lingering in Shadows, and more. Resistance! Featuring video artist and composer Rafał Zapała, visual artist Mary Kavanagh, Thatgamecompany, demosceners Plastic, and Insomniac Games, with guest artist and kit player Theo Lysyk.
7 April, 7:30pm, University Recital Hall


Group Collaboration, Improvisation and Life, while Live Coding

By Jordan Berg

Last semester was my first experience with live coding, and one of my most profound conclusions at the time was that ‘going with the flow’ when live coding produced music I felt was more successful than when I had tried to strictly compose material beforehand. This semester I found the same approach created more successful collaborations when improvising with the group. If I let my feelings, ideas, and previously chosen sounds dominate my thinking, I ended up fighting with the unpredictability of the sounds and choices made by the other performers. If I held nothing sacred in the choices and ideas I had made beforehand, and remained very open-minded about the direction and sounds of the other players, I found that the overall experience was more enjoyable and always resulted in music that was original and interesting.

Timbre, or sound choice, is one of the biggest potential issues when improvising with others in a live coding environment. When improvising with non-electronic instrumentalists, it is easier to know in advance how the sound from your colleagues will blend with your own instrument or sound. When live coding, the sounds that other members of your group make can be totally unpredictable: an acoustic instrument, a voice, an electronic noise, an animal sound, or really anything else one can think of (and beyond). If you are accustomed to traditional composition or improvisation, this can be frustrating at first, because your ideas for the piece and what you have composed in advance might not pair successfully with the vision and sounds of the other members. Another potentially frustrating part of live coding with a group is that you might initially disagree that the sounds your colleagues make are suitable for the particular performance. One player might be taking a more serious approach, for example a percussive rhythm, a melody on a recognizable instrument, or a subtle sonic atmosphere, while another player might take a comedic approach (this often happens).

What I found to be a surprising result of having an open mind is that when the group engaged in a discussion about our choices, it sometimes turned out that everyone else unanimously enjoyed something I thought wasn’t working. I was happy that I had kept an open mind, because it made me realize that maybe the only reason I felt something didn’t fit was that it simply didn’t work with my idea; maybe my idea didn’t really work in the first place. It also made me think that when I sometimes felt displeased with an abrasive sound played loudly and/or repetitively, others may have felt the same way about a sound I was proud of. I discovered that I should definitely communicate my opinions, but that I should remain very open-minded, play my sounds in a way that blends with the others rather than carrying over top all the time, and change things up from time to time, because no matter how much I loved what I had done, it would inevitably become tiring to others.

Even with the serious-versus-comedic dichotomy, and the fact that there might be sounds in the mix you don’t fully appreciate, I find that live coding is very forgiving if you let it be. If you treat the things you’ve come up with as too precious and allow your idea for the piece to dominate your thinking, you will end up conflicting with other performers’ elements and feeling frustrated. If you communicate with others in a constructive and positive way, let things fall where they may without getting precious about anything in particular, and allow for strangeness you may not love at first, the resulting music ends up sounding good even if it does have strange, surreal, or out-of-place sounds. Communication is necessary because you can discuss the vision for the piece and talk about strategies. It is also necessary to openly discuss relative volumes and levels, because a player often doesn’t realize how much their sound or performance is dominating due to their focus on it. I found the experience rewarding overall, and it will be of value every time I perform with another musician, whether live coding or otherwise.

Spontaneity and Delayed Response in Performance: Live Coding and Live Gameplay

By Matthew B. Pohl

Improvisatory performance has been part of musical interpretation since the days of fugue and counterpoint, with virtuosic musicians such as Mozart and Beethoven serving as celebrated examples in the Western art tradition. The twelve-bar blues progression and its many variants serve as a more popular form of music containing improvisational language and techniques. Both of these examples follow the traditional idea of interacting directly with a sound source, using a number of control mechanisms – force, speed, intonation, subtlety – to invoke a desired sound from an instrument.

For example, if a drummer wanted the kick drum to make a sound, they would press their foot into the pedal, which results in the kick drum being struck by a mallet with the help of some mechanics to transfer the force of motion. This somewhat complex process normally takes less than a quarter of a second. In the case of the ICE Ensemble and its live coding dynamic, the process changes and so do the variables: to produce a kick drum sound, one must effectively and efficiently type, in correct syntax, the name of the low-level sound location, its selection number, a gain value, and a pan value into an IDE running TidalCycles. For a proficient coder/typist, this process can take anywhere from ten to thirty seconds, a far cry from the near-instantaneous motion of interacting with a kick drum pedal.
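As a rough sketch of what that looks like in practice (the sample name and values here are placeholders, using the stock "bd" kick sample that ships with SuperDirt), a single kick drum line in TidalCycles might read:

d1 $ s "bd:0" # gain 1 # pan 0.5

Even this short line has to be typed, checked, and evaluated before any sound is heard, which is where the ten-to-thirty-second figure comes from.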

The ICE Ensemble explored a very interesting perspective on performance this semester: live coding for video games. There is an exceptional range of spontaneity involved in video games, as the player alone is in control of the flow of the game and, in the case of live music creation for the game, the flow of the performance. The challenge was to explore the limits of live coding for games and what we can do to overcome them.

The first challenge is, with modern technology, exceptionally simple to solve: overcoming the delay caused by slow typing speeds with copy and paste. To implement complex lines of code quickly, scenes used in performance have to be developed and practiced beforehand by musicians and gaming performers in tandem. This cuts down on the delayed response, but since the gamer has also practiced beforehand, it cuts down on the spontaneity of live gameplay as well.
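For instance (a hypothetical sketch rather than material from an actual rehearsal), a prepared scene might sit in the editor as a small block that the coder evaluates with a single keystroke when the gameplay reaches the right moment:

-- prepared scene: driving kick pattern plus a low arpeggiated figure
-- (sample names are stock SuperDirt banks; values are arbitrary)
d1 $ s "bd*4" # gain 1
d2 $ n "0 3 7 10" # s "arpy" # up (-12)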

The second challenge becomes how one incorporates live coding into gameplay without removing its spontaneity – or at the very least the sense of spontaneity. This challenge is much more difficult to answer, partly because any observation of an audience’s perception of spontaneity would have had to come from the final concert, which was cancelled due to extenuating circumstances. However, one should reflect on the core idea that an interpretation of music and its related elements will vary from person to person, and that a non-performer will likely maintain a different perception than a performer concerning what is spontaneous, what is planned, and what is rehearsed.

Leading up to the final concert, one of the practicing gamers commented a number of times that her perception of the in-game music changed while practicing with the ensemble: it was more enveloping and interactive to have the music created while she played the game, as opposed to being a computer-interpreted result of her actions, and gaming at home after the practices “just isn’t the same”. Perhaps that would be the general consensus among an audience with similar backgrounds to this individual, taking in such a unique subtype of music creation in an exceptionally unique way. Perhaps in this case, then, spontaneity is not about being unaware of what will happen, but about the anticipation of what could happen. Remember that first time you listened to Beethoven’s 9th Symphony as a critical music listener?

Beginner’s Guide to Tidal

By Cameron Sperling

So, you want to learn how to use Tidal? Well then, first things first, let’s start with the basics. Creating a strong foundation on which to build your understanding is vital. In this tutorial we will cover two important topics to get you started: how to create a sound, and how to modify it.

The first thing to do is decide which synth connection(s) you want to work on. If you’re in a group, you should number yourselves off. For these examples I’m just going to use connection one. This is inputted as d1. Connection two would be d2, connection three, d3, and so forth. Once that’s done, you need to choose a sound file and type it into the line of code. For these examples I’m going to use “bleep”. You can choose whichever file you want. If you’re unsure as to which files you can use, check the Dirt-Samples folder which comes with SuperDirt. Once you’ve decided, you type the following beginning with d1… (do not include “Eg. 1:”).

Eg. 1: d1 $ s "bleep"

Let’s break down what all that gibberish – that strange code – means. As previously mentioned, d1 tells the program to use connection 1. The $ tells the program to treat everything to its right as one chunk and hand it to d1; you can think of it as setting the order of priority for that piece of the code. The s tells the program to play a sound, and bleep is the name of the sound file being played (see “sound bank” below). Note that it needs to be inputted inside quotation marks. Finally, all that’s left is to actually run (evaluate or execute) the code. You can do so by pressing the shift and enter keys simultaneously.

So, you got Tidal to play for you? That’s great! Now it’s time to move on to the next step: changing and modifying your sound. The first thing that we need to do is clarify some terminology. In the last section, I used the term “sound file” for simplicity’s sake, but that wasn’t really accurate. It would have been more accurate to use the term “sound bank”. You see, the name that was inputted in my example was “bleep”. This isn’t actually the name of a single sound file, but a collection of sound files that are numbered starting from 0. (Not inputting any number, as was done in Eg. 1, is the same as inputting 0.) There are two ways of changing which file in the sound bank to use: a basic method (Eg. 2) and an advanced method (Eg. 3).

Eg. 2a: d1 $ s "bleep:0"
Eg. 2b: d1 $ s "bleep:1"
Eg. 3a: d1 $ n "0" # s "bleep"
Eg. 3b: d1 $ n "1" # s "bleep"

Changing sound files isn’t the only way that you can add variety. You can also modify the sounds that you’re using. Tidal has many different functions, but to keep this from becoming too long, I’m just going to explain three: gain, crush, and up. Gain is the simplest of these: it controls the volume level. If the value is less than 1, the sound is quieter. If the value is greater than 1, the sound is louder. Generally speaking, going above 1 is risky and may lead to digital distortion (levels too hot).

Eg. 4a: d1 $ s "bleep" # gain 0.8
Eg. 4b: d1 $ s "bleep" # gain 1.2

The crush function reduces the bit depth, creating a crunchier sound. The smaller the value, the more the sound gets distorted. The value can’t go lower than 1.0, however.

Eg. 5: d1 $ s "bleep" # crush 5

Finally, up shifts the pitch of the sound file. The value provided equals the number of semitones up (or down, if you use a negative value) the sound file is shifted. It should be noted that the pitch and speed of a sound file are connected, so the higher you shift it, the faster and shorter the sound will become.

Eg. 6: d1 $ s "bleep" # up 5

You may have noticed that in the most recent examples, a # was used. Similar to $, this symbol tells the program how the pieces of code fit together: it attaches an effect such as gain, crush, or up to the sound pattern on its left.
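To hear how # can chain several of these functions onto one sound, here is one possible combination (the specific values are arbitrary, so experiment with them):

Eg. 7: d1 $ s "bleep:1" # gain 0.9 # up 3 # crush 8

Run it the same way as before (shift and enter), then try changing one value at a time to hear what each function contributes.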

And there you have it! Tidal is a complex program, with lots to learn about how to use it, but I hope that you found this tutorial useful for getting you started. Thank you for your time.

Finding a Place in an Ensemble: A Reflection on the First Live Coding Experience

By Carter Potts

In an ensemble, each individual often has their own role in performance. For example, in a pop band, there is often a vocalist, guitarist, bassist, and drummer. In this type of ensemble, the most common roles, respectively, are to carry the melody, provide harmonic support, drive the rhythm, and hold the ensemble together and on time. However, in less traditional ensembles these roles aren’t as clearly defined. I have come to learn this as a member of the ICE Ensemble.

My first semester as a member of the ensemble was also the first semester that the ensemble partook in live coding. At first, I was skeptical that live coding could be recognized as a legitimate means of musical expression. As I continued with the ensemble, however, I recognized that live coding couldn’t necessarily be performed like traditional music. By that I mean that there are limits to live coding, and that these limits are integral to the live coding experience.

Live coding often limits itself to just a few short musical ideas, with these motifs being looped indefinitely until terminated by the performer. The interest in live coding thus derives from the performer’s ability to manipulate the repeating musical material in new ways. It also allows the performer to layer new ideas on top of old ones, while transitioning between old and new material seamlessly. This may sound like a lot to focus on at one time, and it is. This is where the ensemble becomes an invaluable tool in live coding performance.

As already stated, ICE Ensemble’s live coding era was only a semester young at this point, so there was much development for the group to undergo. Every ensemble member started with no experience in live coding and no knowledge of the coding language that we used. As a result, everyone entered the ensemble at the same level. As the group grew in experience, and therefore knowledge, each individual began to recognize their own strengths, and discover their favourite sounds and preferred coding functions. This improved the group’s improv performance, as everyone fell into a role where they were most comfortable.

These roles became most evident in the improv game the ensemble played, called “On the Clock”. In this game, the entire ensemble takes turns editing the same block of code. Each individual is given 30 seconds to listen to the current state of the piece, and then edit the code before the timer goes off. This quick-thinking game compels each performer to play to their strongest qualities. For example, some performers choose to implement an additional voice in the code in order to enrich the texture. Other performers may choose to make quick yet impactful changes to pre-existing code that may, for instance, increase the rhythmic motion by adding more samples to the loop, or change the texture by changing the sound bank used for a voice. These different styles of performance allow the piece to remain interesting by constantly altering the different musical qualities present in the performance.
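As a purely hypothetical illustration of what a single 30-second turn might produce (the sample names and values are invented, drawn from the stock SuperDirt banks), a player might inherit the first line below and hand back the second:

-- block as it stood at the start of the turn
d1 $ s "bd*2" # gain 1
-- after one player's edit: more rhythmic motion and a new sound bank
d1 $ s "bd*4 sn:2" # gain 1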

The ICE ensemble has shown substantial growth through its first semester of live coding by displaying the different roles ensemble members have taken within the group. Each member limited themselves within the group in an effort to produce a cohesive performance. As a result, the ensemble members independently found their own roles within live coding performance, much like a more traditional ensemble would already have in place.

Live Sampling While Live Coding

By Matthew B. Pohl

The concept of live coding is foreign to many people, making it a difficult draw for even small-scale performances outside of a very niche community. Except for the curious musician or coding enthusiast, a live coding environment does not have the same appeal as a rock group, string ensemble, or jazz band would within the music community. With this knowledge in mind, I proposed the following idea to David Hume and Martin Suarez-Tamayo, two of the Fall 2018 ensemble’s members: investigate the integration of live musical performance and a live instrument within a live coding environment, namely the TidalCycles live coding language and a compatible IDE in Atom. In this way, there could be a functional bridge between the audience’s reading of traditional gestural performance and the commonly sedentary presentation of live coding.

While it is entirely possible to have any acoustic instrument perform alongside the sound processes executed via Atom (or David Ogborn’s extramuros software when in a group setting), the goal was to integrate a live performance into the code and manipulate it live. The TidalCycles programming language relies heavily on the SuperDirt synth, which is activated through the SuperCollider IDE, and on its pre-programmed samples. We discovered that the SuperDirt samples are located in a user-accessible folder called “Dirt-Samples,” the contents of which can be modified freely without causing errors. Therefore, one could effectively sample a live instrument into any DAW, export the bounced file into a user-created folder within “Dirt-Samples,” and call upon the sample in Atom. This is the process we followed.
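As a sketch of that last step (the folder name "livepiano" here is hypothetical), once the bounced file sits in its own folder inside "Dirt-Samples" and SuperDirt has scanned it, the new sample can be called from Atom like any stock sound:

d1 $ s "livepiano:0" # gain 1

Since SuperDirt scans its sample folders when it starts up, the folder needs to be in place (or the samples reloaded) before the pattern is evaluated.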

Any instrument can be recorded and sampled, whether mic’d or recorded acoustically in the case of a violin or saxophone, or recorded directly from a digital instrument such as a keyboard or electronic drum kit. To avoid having to deal with the problems acoustic instruments face in a live sound environment (levels, compression, and EQ, to name a few) while the performance was ongoing, and due to potential space constraints, we opted to use a digital piano (Roland FA-08) as the performance instrument. The output was sent from the keyboard to a MOTU audio interface, in which the sound from the piano was mixed with the sounds produced by the USB-connected computer and sent to the main mixing board for distribution to the eight loudspeakers.

The actual performance, without going into significant detail, comprised the first two measures of Erik Satie’s 1ère Gymnopédie, which we sampled into Ableton Live at 72 bpm, coinciding with the metrical value in Atom, cps (72/60/6). As the sample was being bounced, exported, and relocated to a user folder in our local copy of “Dirt-Samples,” I continued to play through the piece by improvising on a central theme of the work. The fact that the audience had a (partial) view of a performer improvising on a relatively well-known piece of music while the sample was being created functioned as a discrete separation from the awkward first moments of silence that live coding often entails, by providing a motive for them to follow throughout the performance.

This is a vital distinction between what live coding itself is perceived to be and what it can become as part of a collaborative environment. I feel that, as live coding integration matures into a distinct musical art form as opposed to the more-or-less novelty that it presently is, it should be the responsibility of the performer and the orchestrator to find ways that live coding can be intertwined with common musical practice. While this is not a new idea, perhaps this is one step towards performing coders creating saw-wave melodies live for an eighties tribute band, or live drum loop patterns for a modern pop-rock group, coming soon to a bar or club near you.

Composition in a Live Coding Environment

by Jordan Berg

Live coding is an interesting practice that I first attempted this semester as a member of the ICE Integra Contemporary Electroacoustics Ensemble at the University of Lethbridge. I am a composition major in my final year and have participated in the last two ICE concerts primarily as a percussionist and improviser. This fall I was introduced to live coding and learned the basics over the course of the semester in order to perform what I had absorbed, live, in our final concert on December 3, 2018 at the Zoo.

Live coding requires a musician to type lines of computer code into an interface to produce sound. It is not just a matter of pitch, dynamics, rhythm, and duration – all of these parameters are controlled by the code, as well as reverb, modulation, placement within the stereo field, repetition, and more. There are so many aspects the performer can control that it would (and does) fill a small book, and the list continues to be developed by musicians and programmers. It is possible that a live coder could perform Beethoven’s Fifth Symphony (with difficulty), but due to the constant looping that is essential to this practice, the style that has developed differs from the mostly linear world of classical and popular music. My first attempt at this was to figure out how to code ‘Walk This Way’ by Aerosmith using strange sounds for the first assignment. I felt that I was successful in this, and for my second attempt I tried for something more ambitious. This attempt failed miserably, because the complexity of having to type in pre-planned pitches, rhythmic groupings, and layers of commands in a live environment can come crashing down if the performer misses something as simple as a single character. I felt that the more successful attempts by my classmates relied less on pre-planning and more on aleatory. An understanding of the code and a rough idea in advance lets a performer engage in the live sculpting of sound rather than a frantic attempt to type pre-existing pages of numbers and characters into a computer under low light with many people watching. The latter seems to guarantee failure.

As a composer, I have always found it difficult to reconcile the relationship between being hyper-controlling on a measure-to-measure basis and letting things form over time without judging them instantly. Part of the problem is the ability to immediately listen to what I’ve composed on my computer at the touch of a button. I have no idea what makes me decide why I think something sounds good and something else doesn’t. I compose something based upon a concept, then I listen to it and hope that it sounds acceptable. If it doesn’t, I delete it instantly. I’ve been told constantly by my composition professors that I need to allow my music to travel into zones that I might not be comfortable with, and I’ve never been sure how to accomplish this. My experience with live coding has taught me to value the randomness of initiating a concept and then allowing it to form and develop on its own before I decide to nudge it in a different direction. I realize now that the same is true on the manuscript page. Sometimes you need to allow an idea to come to fruition based on the parameters that you set into motion earlier, rather than judging the acceptability of the results on a note-to-note or measure-to-measure basis.