Integra Contemporary & Electroacoustics

Integra Contemporary & Electroacoustics (ICE) is a student ensemble that focuses on the integration of divergent practices and art forms: audio and visual; analogue and digital; instrumental and computer/algorithmic performance; hardware and software; music and performance art.

In ICE, we engage in experimental approaches that include composing and free improv, experimenting with live electronics, developing new digital instruments, sound design, creative sound production techniques, live coding, live gameplay, experimental video, and software synthesis (Logic Pro, Ableton Suite, Max, among others). We also code with TidalCycles, an open-source, Haskell-based language for patterning and performing music with code, whether at “algoraves” or composing in the studio.

Students can also integrate traditional musical instruments (saxophones, pianos, didgeridoo, accordions, voices, percussion instruments, etc.), electronic instruments (guitars, microphones, MOOG Voyager, etc.) and digital instruments (turntables, DTX Drums, Wii remotes, tablets, and more…) while practicing group performance, composition and free improv.

We develop new performance practices and explore a potential for increased musical expressivity through technology.

ICE EVENT on 7 April, 2019

In the tradition of silent film improvisation, join the ICE Ensemble for The Silent Game Project. Live music. Live beats. Live gameplay. Humanity needs a reset, a restart, a reboot. We’ll be Scrolling to Zero, Scoring Breath, Lingering in Shadows, and more. Resistance! Featuring video artist and composer Rafał Zapała; visual artist Mary Kavanagh; Thatgamecompany, demosceners Plastic, and Insomniac Games; and guest artist and kit player Theo Lysyk.
7 April, 7:30pm, University Recital Hall

Group Collaboration, Improvisation and Life, while Live Coding

By Jordan Berg

Last semester was my first experience with live coding, and one of my most profound conclusions at the time was that ‘going with the flow’ when live coding produced music I felt was more successful than when I tried to strictly compose material beforehand. This semester I found the same approach created more successful collaborations when improvising with the group. If I let my feelings, ideas and previously chosen sounds dominate my thinking, I ended up fighting with the unpredictability of the sounds and choices made by the other performers. If I held nothing sacred in the choices and ideas I had made beforehand, and remained open-minded about the direction and sounds of the other players, I found that the overall experience was more enjoyable and always resulted in music that was original and interesting.

Timbre, or sound choice, is one of the biggest potential issues when improvising with others in a live coding environment. When improvising with non-electronic instrumentalists, it is easier to know in advance how the sound from your colleagues will blend with your own instrument/sound. When live coding, the sounds that other members of your group make can be totally unpredictable. It could be the sound of an acoustic instrument, a voice, an electronic noise, an animal sound or really anything else one can think of (and beyond). When you are accustomed to traditional composition or improvisation this can be frustrating at first, because your ideas for the piece and what you have composed in advance might not pair successfully with the vision and sounds of the other members. Another potentially frustrating part of live coding with a group is that you might initially disagree that the sounds your colleagues make are suitable for the particular performance. One player might be taking a more serious approach, for example playing a percussive rhythm, a melody with a recognizable instrument, or a subtle sonic atmosphere, while another player might take a comedic approach (this often happens). What I found to be a surprising result of having an open mind is that when the group engaged in a discussion about our choices, sometimes it seemed that everyone else unanimously enjoyed something I thought wasn’t working. I was happy that I had kept an open mind, because it made me realize that maybe the only reason I felt something didn’t fit was that it simply didn’t work with my idea. Maybe my idea didn’t really work in the first place. It also made me think that when I sometimes felt displeased with an abrasive sound played loudly and/or repetitively, maybe others sometimes felt the same way about a sound that I was proud of.
I discovered that I should definitely communicate my opinions, but that I should remain very open-minded, play my sounds in a way that blends in with the others rather than carrying over top all the time, and change things up from time to time, because no matter how much I loved what I had done, it would inevitably become tiring to others. Even with the serious vs. comedic dichotomy, and the fact that there might be sounds in the mix you don’t fully appreciate, I find that live coding is very forgiving if you let it be. If you hold the things you’ve come up with as too precious and allow your idea for the piece to dominate your thinking, you will end up conflicting with other performers’ elements and feeling frustrated. If you communicate with others in a constructive and positive way, let things fall as they may without getting precious about anything in particular, and allow for strangeness you may not love initially, the resulting music ends up sounding good even if it contains strange, surreal, or out-of-place sounds. I feel that communication is necessary because you can discuss the vision for the piece and talk about strategies. It is also necessary to openly discuss relative volumes/levels, because a player often doesn’t realize how much their sound is dominating due to their focus on it. I found the experience rewarding overall, and it will be of value every time I perform with another musician, whether live coding or otherwise.

Spontaneity and Delayed Response in Performance: Live Coding and Live Gameplay

By Matthew B. Pohl

Improvisatory performance has been part of musical interpretation since the days of fugue and counterpoint, with virtuosic musicians such as Mozart and Beethoven among the best-known early examples in the Western art tradition. The twelve-bar blues progression and its many variants serve as a more popular form of music containing improvisational language and techniques. Both of these examples follow the traditional idea of interacting directly with a sound source, using a number of control mechanisms – force, speed, intonation, subtlety – to invoke a desired sound from an instrument.

For example, if a drummer wanted the kick drum to make a sound, they would press their foot into the pedal, and some simple mechanics would transfer that force so a mallet strikes the drum. This process normally takes less than a quarter of a second. In the case of the ICE Ensemble and its live coding dynamic, the process changes and so do the variables: to produce a kick drum sound, one must effectively and efficiently type the name of the sound bank, its selection number, a gain value, and a pan value, in correct syntax, within an IDE running TidalCycles. Even for a proficient coder/typist, this can take anywhere from ten to thirty seconds, a far cry from the near-instantaneous motion of a kick drum pedal.
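In TidalCycles that full kick drum line might look something like this (a minimal sketch, assuming the stock “bd” kick drum bank that ships with SuperDirt’s Dirt-Samples; the specific values are just illustrations):

d1 $ s "bd:0" # gain 0.9 # pan 0.5

Here bd names the sample bank, :0 selects a file within it, and gain and pan supply the level and stereo position – every one of which must be typed, correctly, before any sound is heard.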

The ICE Ensemble explored a very interesting perspective on performance this semester: live coding for video games. There is an exceptional range of spontaneity involved, as the player alone controls the flow of the game and, in the case of live music creation for the game, the flow of the performance. The challenge was to explore the limits of live coding for games and what we can do to overcome them.

The first challenge is, with modern technology, exceptionally simple to solve: the delay caused by slow typing can be overcome with copy/paste. To implement complex lines of code quickly, the scenes used in performance have to be developed and practiced beforehand by musicians and gaming performers in tandem. This cuts down on the delayed response, but since the gamer has also practiced beforehand, it cuts down on the spontaneity of live gameplay as well.

The second challenge is how one incorporates live coding into gameplay without removing its spontaneity – or at the very least the sense of spontaneity. This is much more difficult to answer, partly because any observation of an audience’s perception of spontaneity would have taken place at the final concert, which was cancelled due to unforeseen circumstances. However, one should reflect on the core idea that an interpretation of music and its related elements varies from person to person, and that a non-performer will likely hold a different perception than a performer concerning what is spontaneous, what is planned, and what is rehearsed.

Leading up to the final concert, one of the practicing gamers commented a number of times that her perception of the in-game music changed while practicing with the ensemble. It was more enveloping and interactive to have the music created while she played the game, as opposed to being a computer-interpreted result of her actions, and gaming at home after the practices “just isn’t the same”. Perhaps that would be the general consensus among an audience with similar backgrounds, taking in such a unique subtype of music creation in an exceptionally unique way. Perhaps in this case, then, spontaneity is not about being unaware of what will happen, but about the anticipation of what could happen. Remember that first time you listened to Beethoven’s 9th Symphony as a critical music listener?

Beginner’s Guide to Tidal

By Cameron Sperling

So, you want to learn how to use Tidal? Well then, first things first, let’s start with the basics. Creating a strong foundation on which to build your understanding is vital. In this tutorial we will cover two important topics to get you started: how to create a sound, and how to modify it.

The first thing to do is decide which synth connection(s) you want to work on. If you’re in a group, you should number yourselves off. For these examples I’m just going to use connection one, which is entered as d1. Connection two would be d2, connection three d3, and so forth. Once that’s done, you need to choose a sound and type it into the line of code. For these examples I’m going to use “bleep”; you can choose whichever sound you want. If you’re unsure which ones are available, check the Dirt-Samples folder which comes with SuperDirt. Once you’ve decided, you type the following, beginning with d1 (do not include the “Eg. 1:” label).

Eg. 1: d1 $ s "bleep"

Let’s break down what all that gibberish – that strange code – means. As previously mentioned, d1 tells the program to use connection 1. The $ tells the program to take everything to its right and feed it to the function on its left; here, it hands the sound pattern to d1. The s tells the program to play a sound, and bleep is the name of the sound being played (see “sound bank” below). Note that it needs to be typed inside quotation marks. Finally, all that’s left is to actually run (evaluate or execute) the code. You can do so by pressing the shift and enter keys simultaneously.

So, you got Tidal to play for you? That’s great! Now it’s time to move on to the next step: changing and modifying your sound. The first thing that we need to do is clarify some terminology. In the last section, I used the term “sound file” for simplicity’s sake, but that wasn’t really accurate. It would’ve been more accurate to use the term “sound bank”. You see, the name that was entered in my example was “bleep”. This isn’t actually the name of a single sound file, but a collection of sound files that are numbered off starting from 0. (Not entering any number, as was done in Eg. 1, is the same as entering 0.) There are two ways of changing which file in the sound bank to use: a basic method (Eg. 2) and an advanced method (Eg. 3).

Eg. 2a: d1 $ s "bleep:0"
Eg. 2b: d1 $ s "bleep:1"
Eg. 3a: d1 $ n "0" # s "bleep"
Eg. 3b: d1 $ n "1" # s "bleep"

Changing sound files isn’t the only way that you can add variety. You can also modify the sounds that you’re using. Tidal has many different functions, but to keep this from becoming too long, I’m just going to explain three: gain, crush, and up. Gain is the simplest of these; it just controls the volume level. If the value is less than 1, the sound is quieter. If the value is greater than 1, the sound is louder. Generally speaking, going above 1 is dangerous and may lead to digital distortion (levels too hot).

Eg. 4a: d1 $ s "bleep" # gain 0.8
Eg. 4b: d1 $ s "bleep" # gain 1.2

The crush function reduces the bit depth, creating a crunchier sound. The smaller the value, the more the sound gets distorted. The value can’t go lower than 1.0, however.

Eg. 5: d1 $ s "bleep" # crush 5

Finally, up shifts the pitch of the sound file. The value provided equals the number of semitones up (or down, if you use a negative value) the sound file is shifted. It should be noted that the pitch and speed of a sound file are connected, so the higher you shift it, the faster/shorter the sound will become.

Eg. 6: d1 $ s "bleep" # up 5

You may have noticed that in the most recent examples, a # was used. Like $, this symbol is structural: it joins an effect pattern, such as gain, crush, or up, onto the sound pattern to its left.

And there you have it! Tidal is a complex program, with lots to learn about how to use it, but I hope that you found this tutorial useful for getting you started. Thank you for your time.

Finding a Place in an Ensemble: A Reflection on the First Live Coding Experience

By Carter Potts

In an ensemble, each individual often has their own role in performance. For example, in a pop band, there is often a vocalist, guitarist, bassist, and drummer. In this type of ensemble, the most common roles, respectively, are to carry the melody, provide harmonic support, drive the rhythm, and hold the ensemble together and on time. However, in less traditional ensembles these roles aren’t as clearly defined. I have come to learn this as a member of the ICE Ensemble.

My first semester as a member of the ensemble was also the first semester that the ensemble partook in live coding. At first, I was skeptical that live coding could be recognized as a legitimate means of musical expression. As I continued with the ensemble, however, I recognized that live coding couldn’t necessarily be performed like traditional music. By that I mean that there are limits to live coding, and that these limits are integral to the live coding experience.

Live coding often limits itself to just a few short musical ideas, with these motifs being looped indefinitely until terminated by the performer. The interest in live coding thus derives from the performer’s ability to manipulate the repeating musical material in new ways. It also allows the performer to layer new ideas on top of old ones, while transitioning between old and new material seamlessly. This may sound like a lot to focus on at one time, and it is. This is where the ensemble becomes an invaluable tool in live coding performance.
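As a sketch of what this looks like in TidalCycles (the sample names here are just stock SuperDirt banks, not anything specific to our performances), a performer might loop a short motif, layer a new idea over it, and then manipulate the repeating material:

d1 $ s "bd sn bd sn"
d2 $ n "0 3 7" # s "arpy"
d1 $ every 4 rev $ s "bd sn bd sn"

The first line loops a drum motif, the second layers a melodic idea on top of it, and the third re-evaluates the original motif so that every fourth cycle plays reversed – looping, layering, and manipulation all happening while the music continues.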

As already stated, ICE Ensemble’s live coding era was only a semester young at this point, so there was much development for the group to undergo. Every ensemble member started with no experience in live coding and no knowledge of the coding language that we used. As a result, everyone entered the ensemble at the same level. As the group grew in experience, and therefore knowledge, each individual began to recognize their own strengths, and discover their favourite sounds and preferred coding functions. This improved the group’s improv performance, as everyone fell into a role where they were most comfortable.

These roles became most evident in the improv game the ensemble played, called “On the Clock”. In this game, the entire ensemble takes turns editing the same block of code. Each individual is given 30 seconds to listen to the current state of the piece, and then edit the code before the timer goes off. This quick-thinking game compels each performer to play to their strongest qualities. For example, some performers choose to add a voice to the code in order to enrich the texture. Other performers may choose to make quick yet impactful changes to pre-existing code that, for instance, increase the rhythmic motion by adding more samples to the loop, or change the texture by changing the sound bank used for a voice. These different styles of performance allow the piece to remain interesting by constantly altering the different musical qualities present in the performance.
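A single 30-second turn might look something like this in TidalCycles (a hypothetical sketch using stock SuperDirt sample names):

d1 $ s "bd*4" # gain 0.9
d1 $ s "bd*4 [hh hh]" # gain 0.9

Starting from the first line, one player’s edit adds hi-hats to the loop, increasing the rhythmic motion; another player’s turn might instead swap “bd” for a different sound bank to change the texture.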

The ICE Ensemble has shown substantial growth through its first semester of live coding, with members taking on distinct roles within the group. Each member limited themselves in an effort to produce a cohesive performance. As a result, the ensemble members independently found their own roles within live coding performance, much like those a more traditional ensemble would already have in place.