Using sampled audio for your software instrument voice in Ableton


This video was created by Martin Suarez as a course assignment for “Integra Contemporary and Electroacoustics Ensemble” at The University of Lethbridge (Canada).


Beginner’s Guide to Tidal

By Cameron Sperling

So, you want to learn how to use Tidal? Well then, first things first, let's start with the basics. Creating a strong foundation on which to build your understanding is vital. In this tutorial we will cover two important topics to get you started: how to create a sound, and how to modify your sound.

The first thing to do is decide which synth connection(s) you want to work on. If you're in a group, you should number yourselves off. For these examples I'm just going to use connection one. This is entered as d1. Connection two would be d2, connection three, d3, and so forth. Once that's done, you need to choose a sound file and type it into the line of code. For these examples I'm going to use "bleep", but you can choose whichever file you want. If you're unsure which files you can use, check the Dirt-Samples folder which comes with SuperDirt. Once you've decided, type the following, beginning with d1 (do not include the "Eg. 1:" label).

Eg. 1: d1 $ s "bleep"

Let's break down what all that gibberish – that strange code – means. As previously mentioned, d1 tells the program to use connection 1. The $ indicates to the program what level of priority this piece of the code has. The s tells the program to play a sound, and bleep is the name of the sound file being played (see "sound bank" below). Note that it needs to be entered inside quotation marks. Finally, all that's left is to actually run (evaluate, or execute) the code. You can do so by pressing the Shift and Enter keys simultaneously.

So, you got Tidal to play for you? That's great! Now it's time to move on to the next step: changing and modifying your sound. The first thing that we need to do is clarify some terminology. In the last section, I used the term "sound file" for simplicity's sake, but that wasn't really accurate. It would've been more accurate to have used the term "sound bank". You see, the name that was entered in my example was "bleep". This isn't actually the name of a single sound file, but a collection of sound files that are numbered off starting from 0. (Not entering any number, as was done in Eg. 1, is the same as entering 0.) There are two ways of changing which file in the sound bank to use: a basic method (Eg. 2) and an advanced method (Eg. 3).

Eg. 2a: d1 $ s "bleep:0"
Eg. 2b: d1 $ s "bleep:1"
Eg. 3a: d1 $ n "0" # s "bleep"
Eg. 3b: d1 $ n "1" # s "bleep"
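
As a side note that goes slightly beyond the examples above, one advantage of the advanced method is that n accepts a pattern, letting you step through several files of the sound bank within a single cycle:

Eg. 3c: d1 $ n "0 1 2 3" # s "bleep"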

Changing sound files isn't the only way that you can add variety. You can also modify the sounds that you're using. Tidal has many different functions, but to keep this from becoming too long, I'm just going to explain three: gain, crush, and up. Gain is the simplest of these; it just controls the volume level. If the value is less than 1, the sound is quieter. If the value is greater than 1, the sound is louder. Generally speaking, going above 1 is dangerous and may lead to digital distortion (levels too hot).

Eg. 4a: d1 $ s "bleep" # gain 0.8
Eg. 4b: d1 $ s "bleep" # gain 1.2

The crush function reduces the bit depth, creating a crunchier sound. The smaller the value, the more the sound gets distorted. The value can't go smaller than 1.0, however.

Eg. 5: d1 $ s "bleep" # crush 5

Finally, up shifts the pitch of the sound file. The value provided equals the number of semitones up (or down, if you use a negative value) the sound file is shifted. It should be noted that the pitch and speed of a sound file are connected, so the higher you shift it, the faster and shorter the sound will become.

Eg. 6: d1 $ s "bleep" # up 5

You may have noticed that in the most recent examples, a # was used. Similar to $, this symbol is used to indicate a level of priority to the program; in practice, it is what you use to attach functions like gain, crush, and up to your sound.
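
These functions can also be combined. As one final example (building only on the functions already covered, with values chosen arbitrarily), you can chain several modifiers onto one sound by adding further # sections:

Eg. 7: d1 $ s "bleep" # gain 0.9 # up 5 # crush 6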

And there you have it! Tidal is a complex program, with lots to learn about how to use it, but I hope that you found this tutorial useful for getting you started. Thank you for your time.

Live Sampling While Live Coding

By Matthew B. Pohl

The concept of live coding is foreign to many people, making it a difficult draw for even small-scale performances outside of a very niche community. Except for the curious musician or coding enthusiast, a live coding environment does not have the same appeal as a rock group, string ensemble, or jazz band would in the confines of the music community. With this knowledge in mind, I proposed the following idea to David Hume and Martin Suarez-Tamayo, two of the Fall 2018 Ensemble’s members: investigate the integration of live musical performance and a live instrument within a live coding environment, namely the TidalCycles live music coding language and a compatible IDE in Atom. In this way, there could be a functional bridge between the audience interpretation of traditional gestural means and the commonly sedentary presentation of live coding.

While it is entirely possible to have any acoustic instrument perform alongside the sound processes executed via Atom (or David Ogborn's extramuros software when in a group setting), the goal was to integrate a live performance into the code and manipulate it live. The TidalCycles programming language is heavily reliant on the SuperDirt synth, which is activated through the SuperCollider IDE, and on SuperDirt's pre-packaged samples. We discovered that these samples are located in a user-accessible folder called "Dirt-Samples," the contents of which can be modified freely without causing errors. Therefore, one could effectively sample a live instrument into any DAW, export the bounced file into a user-created folder within "Dirt-Samples," and call upon the sample in Atom. This is the process which we followed.
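
As a minimal sketch of that last step (the folder name "livesample" is hypothetical, and depending on your setup you may need to reload SuperDirt's sample library or restart SuperDirt before a newly added folder is recognized), the bounced file can then be triggered from Atom just like any built-in sound bank:

d1 $ s "livesample"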

Any instrument can be recorded and sampled, whether mic'd acoustically, as in the case of a violin or saxophone, or recorded directly from a digital instrument such as a keyboard or electronic drum kit. To avoid having to deal with the problems acoustic instruments face in a live sound environment (levels, compression, and EQ, to name a few) while the performance was ongoing, and due to potential space constraints, we opted to use a digital piano (Roland FA-08) as the performance instrument. The output was sent from the keyboard to a MOTU audio interface, where the sound from the piano was mixed with the sounds produced by the USB-connected computer and sent to the main mixing board for distribution out to the eight loudspeakers.

The actual performance, without going into significant detail, consisted of the first two measures of Erik Satie's 1ère Gymnopédie, which we sampled into Ableton Live at 72 bpm, coinciding with the metrical value in Atom, cps (72/60/6). As the sample was being bounced, exported, and relocated to a user folder in our local copy of "Dirt-Samples," I continued to play through the piece by improvising on a central theme of the work. The fact that the audience had a (partial) view of a performer improvising on a relatively well-known piece of music while the sample was being created provided a discreet bridge over the awkward first moments of silence that live coding often entails, giving the audience a motive to follow throughout the performance.
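
As a sketch of how that tempo matching might look in code (again using the hypothetical "livesample" folder name; 72/60/6 works out to six beats per cycle at 72 bpm), the tempo line is evaluated before the sample is triggered so that the looped material stays in time with the live playing:

cps (72/60/6)
d1 $ s "livesample"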

This is a vital distinction between what live coding itself is perceived as versus what it can result in as part of a collaborative environment. I feel that, as live coding integration matures into a distinct musical art form as opposed to the more-or-less novelty that it presently is, it should be the responsibility of the performer and the orchestrator to find ways that live coding can be intertwined with common musical practice. While this is not a new idea, perhaps this is one step towards performing coders being able to create saw-wave melodies live for an eighties tribute band or live drum loop patterns for a modern pop-rock group, coming soon to a bar or club near you.

Composition in a Live Coding Environment

by Jordan Berg

Live coding is an interesting practice that I first attempted this semester as a member of ICE, the Integra Contemporary and Electroacoustics Ensemble, at the University of Lethbridge. I am a composition major in my final year and have participated in the last two ICE concerts primarily as a percussionist and improviser. This fall I was introduced to live coding and learned the basics over the course of the semester in order to perform what I had absorbed live in our final concert on December 3, 2018 at the Zoo.

Live coding requires a musician to type lines of computer code into an interface to produce sound. It is not just a matter of pitch, dynamics, rhythm, and duration – all of these parameters are controlled by the code, along with reverb, modulation, placement within the stereo field, repetition, and more. There are so many aspects the performer can control that it would (and does) fill a small book, and the practice continues to be developed by musicians and programmers. It is possible that a live coder could perform Beethoven's Fifth Symphony (with difficulty), but due to the constant looping that is essential to this practice, the style that has developed is different from the linear world of classical and popular music (although it does not have to be). My first attempt, for our first assignment, was to figure out how to code 'Walk This Way' by Aerosmith using strange sounds. I felt that I was successful in this, and for my second attempt I tried for something more ambitious. This attempt failed miserably, because the complexity of typing in pre-planned pitches, rhythmic groupings, and layers of commands in a live environment can come crashing down if the performer misses something as simple as a single character. I felt that the more successful attempts by my classmates relied less on pre-planning and more on aleatory. An understanding of the code and a rough idea in advance let a performer engage in the live sculpting of sound rather than a frantic attempt to type pre-existing pages of numbers and characters into a computer under low light with many people watching. The latter seems to guarantee failure.

As a composer, I have always found it difficult to reconcile the relationship between being hyper-controlling on a measure-to-measure basis and letting things form over time without judging them instantly. Part of the problem is the ability to immediately listen to what I've composed on my computer at the touch of a button. I have no idea what makes me decide that something sounds good and something else doesn't. I compose something based upon a concept, then I listen to it and hope that it sounds acceptable. If it doesn't, I delete it instantly. I've been told constantly by my composition professors that I need to allow my music to travel into zones that I might not be comfortable with, and I've never been sure how to accomplish this. My experience with live coding has taught me to value the randomness of initiating a concept and then allowing it to form and develop on its own before I decide to nudge it in a different direction. I realize now that the same is true on the manuscript page. Sometimes you need to allow an idea to come to fruition based on the parameters that you set into motion earlier, rather than judging the results on a note-to-note or measure-to-measure basis.

Creating Sounds in ES P (Logic Pro X)

By Jon Martin

Overview

In this post, you will learn how the parameters of the ES P virtual instrument in Logic Pro X can be used to shape a synthesized sound.

Before beginning, confirm that your speakers or headphones are functioning and that the system volume level is set to a moderate level.

Launch program and create new document

 

  1. Launch the Logic Pro application, located in the Applications folder.

>/Applications/Logic Pro

  2. Create a new Logic document by navigating to:

>File>New

Loading ES P

We will now load the ES P plugin.

  1. Create a new track with the ES P virtual instrument loaded. You can load the instrument by creating a new virtual instrument track, selecting the "Instrument" plug-in section, and choosing ES P (Polyphonic synth) from the menu.

 

 

  2. Press Cmd+K (Logic Pro X) or Caps Lock (Logic Pro 9) to make the Musical Typing keyboard appear.
  3. Take a few minutes to switch through the ten provided sound presets and make some sound with each using the Musical Typing keyboard.
  4. Notice how the parameters of the ES P interface change as you move between presets.

ES P Parameters

We will now investigate the plugin controls. We will be looking at three main sections of the plugin: the synthesis engine on the left side, the resonance/filter section in the middle, and the ADSR section on the right.


  1. The Oscillator section allows you to control the mix of oscillators that make up the sound produced by the plugin. From left to right, you have the triangle, sawtooth, and rectangle waves. The next two faders are sub-octave generators, the first controlling the amount generated one octave below the sound from the first three faders, and the second dropping two octaves. The final fader controls the amount of white noise that is introduced into the sound. To the left of the faders are three buttons labelled 4, 8, and 16, which determine which octave the sound produced resides in.
  2. The Filter section introduces a low-pass filter into the signal. It reduces the range of frequencies produced above the cutoff point (labelled as Frequency). The 1/3, 2/3, and 3/3 buttons control the octave range that is covered by the low-pass filter: 1/3 will cut off the least amount of signal, while 3/3 will cut off the most. The Resonance control allows you to choose how much the signal is boosted at the cutoff frequency. Changing the amount of resonance will drastically change the synthesized sound.
  3. The Envelope section allows you to control the attack (A), decay (D), sustain (S), and release (R) characteristics of the sound produced. By using ADSR to shape the volume envelope of the sound, you can create familiar or completely new sounds.
  4. The remaining controls offer additional ways to change the sound produced, including distortion and chorus effects, a low-frequency oscillator, and additional envelope parameters that will not be used in this tutorial.

Creating a sound

We will use the plugin parameters to create a basic synthesized sound.

  1. Use the Recall Default setting in the preset menu to reset the plugin to its default parameters.
  2. Begin by setting all of the oscillator values to zero, at their lowest position.
  3. Raise the triangle wave slider about halfway up. Use the Musical Typing Keyboard to produce a sound.
  4. Next, raise the noise slider to add sharper attack characteristics to the sound. While making sound, find a position that you like.
  5. Select the 8’ octave range.
  6. Next we will move to the Filter section. Raise both the Frequency and Resonance controls fully clockwise and play a sound. You may have to hold down the key to hear the complete sound.
  7. Now lower the frequency control until it is cutting off the signal at a position that is pleasing.
  8. The "laser beam"-like quality of the sound is caused by self-oscillation created by having the Resonance set very high. Reduce the Resonance control until it is positively contributing to the synthesized sound and not causing unwanted distraction. You may also change or lower the octave range of the filter to lessen the overall effect.
  9. Now we move to the Envelope section. The A control sets how long it takes for a sound to reach its maximum amplitude. The D control sets how long it takes for the sound to fall from that maximum to its sustain level. The S control sets the level at which the sound is held while the key remains pressed. The R control sets how long the sound takes to return to silence after the key has been released.
  10. Starting with all ADSR controls at zero, begin moving them from left to right to shape the envelope characteristics of the synthesized sound. Moving a slider up increases the time value.

You will notice that longer attack values require a longer key press for the sound to become audible from silence. Shorter values may suggest percussive sounds, while longer values may evoke legato strings.

  11. Modify the ADSR values until a satisfactory sound is being produced. Use this sound to play a few simple melodies, making changes to the parameters as necessary to improve the sound quality and function of the instrument.
  12. Finally, use the two sub-oscillator sliders to add lower harmonics to the sound. You may need to use high-quality speakers or headphones to hear the change, since low-frequency content will not be reproduced by smaller speakers.
  13. Once a desired sound has been created, click on the preset menu and click Save As… to save the sound for later use.
  14. Go through this process several times until three contrasting and useful sounds have been created.

Closing the application

  1. Close the Logic application by selecting Logic Pro > Quit Logic Pro.

Using libmapper for signal connections among visual programming projects

by Sam Walker-Kierluk

In this age of innovation in the area of digital musical instruments, ever-increasing numbers of interested musicians and programmers are experimenting with the development of new and exciting pieces of hardware and software that are redefining what we think of as instruments.  The creation of applications that act as instruments, or fulfill other custom audio synthesis and processing needs, has become accessible to even the most novice of computer users, thanks to visual programming environments like Max and Pure Data that are designed primarily for audio purposes.  libmapper is a software library for connecting virtual data signals locally or across networked computers, providing tools for creating and using systems for the interactive control of media synthesis, and it integrates effectively with programs like Max and Pure Data.


In this tutorial, I will demonstrate how to transmit numerical data from one Max project to another Max project using the libmapper library and its associated tools.

libmapper is open-source software that is cross-compatible with many software environments.  The developers have already created the two tools needed to use libmapper with Max: external objects for Max, and a graphical user interface (mapperGUI) for performing mapping operations among the objects.  The software can be downloaded at http://idmil.org/software/libmapper/downloads, where installation instructions can also be found.

libmapper can be used only to transmit integers and floating-point numbers.

Creating and defining sources and destinations in Max patches

1. Create/open the two Max patches between which you intend to transmit information.  These can both be running on the same computer, or can be running on separate computers, each running Max, as long as they are connected via a Local Area Network connection (typically Ethernet or Wi-Fi).

For your source patch:

2. Go to the patch that contains source data.  This could include any data-rate (not audio-signal-rate) generator, such as the output of a slider or number box.

3. Create a new object box, and type map.device [patchname], where [patchname] is the name you wish to give to that specific patch on the network.  You must not use any spaces in your naming scheme in libmapper, as this will cause errors and much frustration.  For our purposes, let’s call this device “signal_source”, meaning we’d create an object box and type map.device signal_source.

For each data stream source you wish to make:

4. Create a new object box, and type map.out [sourcename] [i/f], where [sourcename] is the name of this source, and i or f sets whether it will be sending integers or floating-point numbers, respectively.  Optionally, you can add the arguments @min [minvalue], @max [maxvalue], and @unit [unit], the first two of which define the range of the values received by the map.out box, and the last of which labels the data’s measurement unit for convenience later on (Hz, for example).  Defining the range of values allows you to use libmapper’s powerful scaling features after you complete your patching.  In this example, we’ll make a box containing map.out “pitch_raw” i @min 0 @max 1023 @unit pressure.

For your destination patch:

5. Go to the patch where you wish to send the data.

6. Create a new object box with the map.device object (see step 3), with your desired name.

For each data stream destination you wish to make:

7. Create a new object box, and type map.in [destinationname] [i/f], in a similar manner to step 4.  Fill in the arguments for scaling, if you will use this feature, as well as the @unit argument for convenience, if appropriate.  For our box, we will use map.in frequency f @min 440 @max 880 @unit Hz.

Creating Mappings in mapperGUI

1. Launch mapperGUI, an application for configuring libmapper data mappings.  Although libmapper is cross-platform, and Max externals are compiled for Mac OS X and Windows, mapperGUI and other libmapper GUIs are only made for Mac.  However, these GUIs can perform mappings for libmapper instances running on PCs and Macs if they are connected on the same local area network.

2. Once the application has opened, observe the three main columns in the window: Sources, Links, and Destinations.  In the Sources column, you should see an entry called "/signal_source.1", or whatever you named your source patch with the map.device object.  This entry provides information including the number of inputs (map.in) and outputs (map.out) contained within the patch, the port on which the patch communicates, and the IP address of the machine on the network.  In the Destinations column, you should see an entry called "/destination.1", or whatever you named your destination patch with the map.device object.  The Links column should be empty.

3. First, we must tell libmapper that you wish to route data from your source patch to your destination patch.  To do this, click and hold on top of the entry for “/signal_source.1” in the Sources column, drag across the Links column, and release the mouse button on top of the entry for “/destination.1” in the Destinations column.  A line will appear linking the two entries.  If you wish to have one source patch transmit to more than one destination patch, create multiple “Links” from that source by clicking and dragging to each one.

4. Next, click on the tab for your source device that is listed above the three columns, next to “All Devices.”  Ours will be called “/signal_source.1”.

5. You will see three columns as before, but this time, Sources will show your map.out object name(s) within your source patch, Destinations will show your map.in object name(s) for all available destinations to which you have linked, and the “Links” column is replaced by “Connections”.

6. To connect a source to a destination, drag from the source, across the Connections column, to the destination, similar to step 3.  As with step 3, you can connect one source to more than one input.

7. That’s it!  Any data sent into the inlet on your map.out object will be sent to any map.in objects to which you have connected it.  map.in objects will receive the number in whichever format is specified in their name, regardless of the originating type of data, meaning floating-point boxes will output all numbers received as floating-point numbers, regardless of how the data was sent, and integer boxes will output all numbers received as integers (floating-point numbers received by integer boxes will be rounded down to the nearest whole number).

Connection and scaling options in libmapper using mapperGUI

By default, if the @min and @max values have been specified in the Max objects, libmapper will scale the numbers streamed from object to object using these parameters in a linear fashion.  In our case, the "pitch_raw" source's range of 0-1023 will be mapped onto the "frequency" destination's range of 440-880.

1. To set connection preferences in mapperGUI, click on a connection from a source map.out object to a destination map.in object.  The line and objects will become highlighted.

2. At the top of the mapperGUI window, just under the title bar, there will be a group of boxes: Mute, Byp, Line, Calib, Expr,  and Rev.

• “Mute” temporarily disables all throughput on a connection without permanently removing it.

• "Byp" disables scaling; all numbers will pass unaltered from source to destination.

• “Line” is the default option, and scales numbers linearly based on source and destination ranges.

• “Calib” lets you calibrate the input range based on real-time input data and the preset output range.

• “Expr” lets you transform source numbers using a mathematical expression.

• “Rev” is a feature that lets you reverse the functionality of a set of map.out and map.in boxes, with all scaling bypassed.

3. Finally, connections and settings within the Sources-Connections-Destinations view can be saved and recalled by using the Load and Save buttons in the top-left corner.

Although not a complete guide to using libmapper with Max and mapperGUI, this should serve as a functional introduction to allow for quick patching and experimentation in the development and use of digital musical instruments and other media projects.

A reversed reverberating effect in 5 steps

by Sebastien Caron-Roy

In this tutorial, I will explain how to create a reversed reverberating effect in 5 steps, on any source, using any DAW and any reverb plugin. Reversed reverb is a fairly common effect, but it has been of particular interest to me because it is an effect that cannot exist in natural conditions, since all audio processing occurs after the source has been played or captured. For example, traditional reverberation is the delay and diffusion of a signal after it has been generated. In the case of a reversed reverb, the processed sound (the diffused "wet" signal) is heard before one hears the direct source (the "dry" signal). This makes it an impossible effect to use live, as either the reverb algorithm would need to know what's going to be played before the performer even plays anything, or the signal would need to be delayed by an amount of time at least equivalent to the reverb's decay time. This is what causes the effect to sound particularly ethereal, as it transcends the linearity of time itself and implies that the source exists outside of the constraints of time. Of course, the nature of digitized audio removes these physical limitations and allows us to create such an effect.

1. The first step of this process is to ensure that whatever source one wishes to process is an audio file. This technique will not work with MIDI tracks, so if the source is a software synthesizer or sampler, one will need to bounce the track to an audio file.

2. The next step is to reverse the source that one wishes to process. This is why step one is necessary, as one can only reverse audio files. Nearly all DAWs have an easily accessible function for this.

3. Once one has a reversed source, apply a reverb to the reversed track. The parameters of the reverb can be adjusted to taste, but the reverb needs to be set to “100% wet” so that one can only hear the reverberated sound and none of the original reversed source.

4. Next, bounce the reverberated reversed source to a new audio track. Once this is done, you can remove or bypass the reverb on the original reversed source, so that one is left with a dry source and a wet source.

5. Once this is complete, reverse the audio files for both the dry and the wet tracks. This will cause the dry source to return to its normal un-reversed state, and will cause the wet track to reverberate before the dry source plays. Depending on the bounds of the audio files, it is likely that you will have to experiment with the timing of the wet track relative to the dry track in order for them to overlap properly. Simply drag the wet track left or right until you feel that the two tracks have merged into a single sound.

I hope this tutorial has been an informative and inspirational launching pad for even greater ideas. I highly recommend experimenting with layering several different reverb algorithms together (one short, one longer, for example), or processing the wet track even further by applying effects such as a chorus or a phaser to it.