Creating Sounds in ES P (Logic Pro X)

By Jon Martin

Overview

In this post, you will learn how the parameters of the ES P virtual instrument in Logic Pro X can be used to shape a synthesized sound.

Before beginning, confirm that your speakers or headphones are functioning and that the system volume is set to a moderate level.

Launching the program and creating a new document

 

  1. Launch the Logic Pro application, located in the Applications folder.

>/Applications/Logic Pro

  2. Create a new Logic document by navigating to:

>File>New

Loading ES P

We will now load the ES P plugin.

  1. Create a new track with the ES P virtual instrument loaded. You can load the instrument by creating a new virtual instrument track, selecting the “Instrument” plug-in section, and choosing ES P (Polyphonic Synth) from the menu.

 

 

  2. Press Cmd+K (Logic Pro X) or Caps Lock (Logic Pro 9) to make the Musical Typing Keyboard appear.
  3. Take a few minutes to step through the ten provided sound presets, playing each one with the Musical Typing Keyboard.
  4. Notice how the parameters of the ES P interface change as you move between presets.

ES P Parameters

We will now investigate the plugin controls, looking at three main sections: the synthesis engine on the left, the resonance/filter section in the middle, and the ADSR section on the right.

martinesp3

  1. The Oscillator section allows you to control the mix of oscillators that make up the sound produced by the plugin. From left to right, the first three faders control the triangle, sawtooth, and rectangle waves. The next two faders are sub-octave generators: the first controls the amount of signal generated one octave below the sound from the first three faders, and the second drops two octaves. The final fader controls the amount of white noise that is introduced into the sound. To the left of the faders are three buttons labelled 4, 8, and 16, which determine the octave in which the sound is produced.
  2. The Filter section introduces a low-pass filter into the signal, reducing the frequencies above the cutoff point (labelled Frequency). The 1/3, 2/3, and 3/3 buttons control the octave range that is covered by the low-pass filter: 1/3 will cut off the least amount of signal, while 3/3 will cut off the most. The Resonance control allows you to choose how much the signal is boosted at the cutoff frequency; changing the amount of resonance will drastically change the synthesized sound. (A rough code sketch of the oscillator and filter sections follows this list.)
  3. The Envelope section allows you to control the attack (A), decay (D), sustain (S), and release (R) characteristics of the sound produced. By using ADSR to shape the volume envelope of the sound, you can create familiar or completely new sounds.
  4. The remaining controls offer additional ways to change the sound, including distortion and chorus effects, a low-frequency oscillator, and further envelope parameters that will not be used in this tutorial.
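If it helps to see these sections written out, below is a minimal Python sketch (using NumPy and SciPy) of the same idea: a mix of triangle, sawtooth, and rectangle oscillators plus noise, followed by a low-pass filter. This is not how ES P is implemented; the pitch, fader levels, and cutoff value are made-up example numbers, and the simple Butterworth filter used here has no resonance control.

# A rough sketch of an oscillator mix followed by a low-pass filter.
# This illustrates the concepts, not ES P's actual engine.
import numpy as np
from scipy import signal

sr = 44100                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of sample times
freq = 220.0                    # example oscillator pitch

# Oscillator section: three waveforms plus white noise, each with its own
# "fader" level between 0 and 1.
triangle = signal.sawtooth(2 * np.pi * freq * t, width=0.5)
sawtooth = signal.sawtooth(2 * np.pi * freq * t)
rectangle = signal.square(2 * np.pi * freq * t)
noise = np.random.uniform(-1, 1, len(t))
mix = 0.5 * triangle + 0.3 * sawtooth + 0.0 * rectangle + 0.1 * noise

# Filter section: a low-pass filter removes energy above the cutoff frequency.
# (The Resonance slider's boost at the cutoff is not modelled here.)
cutoff = 1200.0                 # example cutoff in Hz
b, a = signal.butter(4, cutoff / (sr / 2), btype="low")
filtered = signal.lfilter(b, a, mix)

Raising the cutoff value in the sketch lets more of the bright sawtooth harmonics through, which is essentially what moving the Frequency slider does in the plugin.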

Creating a sound

We will use the plugin parameters to create a basic synthesized sound.

  1. Use the Recall Default setting in the preset menu to reset the plugin to its default parameters.
  2. Begin by setting all of the oscillator values to zero, at their lowest position.
  3. Raise the triangle wave slider about halfway up. Use the Musical Typing Keyboard to produce a sound.
  4. Next, raise the noise slider to add sharper attack characteristics to the sound. While making sound, find a position that you like.
  5. Select the 8’ octave range.
  6. Next we will move to the Filter section. Raise both the Frequency and Resonance controls to their maximum and play a sound. You may have to hold down the key to hear the complete sound.
  7. Now lower the Frequency control until it is cutting off the signal at a point that sounds pleasing.
  8. The “laser beam”-like quality of the sound is caused by self-oscillation, which happens when the Resonance is set very high. Reduce the Resonance control until it contributes positively to the synthesized sound rather than causing an unwanted distraction. You may also lower the octave range of the filter to lessen the overall effect.
  9. Now we move to the Envelope section. The A control sets how long the sound takes to reach its maximum amplitude. The D control sets how long it takes to fall from that maximum to the sustain level. The S control sets the level at which the sound is held while the key remains pressed. The R control sets how long the sound takes to return to silence after the key has been released.
  10. Starting with all ADSR controls at zero, begin moving them from left to right to shape the envelope characteristics of the synthesized sound. Moving the A, D, and R sliders up increases their time values; moving the S slider up raises the sustain level.

You will notice that longer attack values require a longer key press for the sound to become audible from silence. Shorter envelopes suit percussive sounds, while longer ones may evoke legato strings.
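If it helps to see this written out, here is a small Python/NumPy sketch of a straight-line ADSR envelope applied to a sine tone. The times, sustain level, and note length are arbitrary example values, and ES P's actual envelope segments may be curved rather than linear.

# A simple linear ADSR envelope shaping a sine tone (example values only).
import numpy as np

sr = 44100
attack, decay, release = 0.05, 0.10, 0.40   # segment lengths in seconds
sustain_level = 0.6                         # level held while the key is down
hold = 0.5                                  # seconds the key stays down after attack + decay

env = np.concatenate([
    np.linspace(0.0, 1.0, int(attack * sr)),             # A: rise to maximum amplitude
    np.linspace(1.0, sustain_level, int(decay * sr)),    # D: fall to the sustain level
    np.full(int(hold * sr), sustain_level),              # S: level held while the key is pressed
    np.linspace(sustain_level, 0.0, int(release * sr)),  # R: fade to silence after release
])

t = np.arange(len(env)) / sr
tone = np.sin(2 * np.pi * 440.0 * t) * env   # the envelope shapes the tone's volume

Lengthening the attack value here has exactly the consequence described above: the tone takes longer to become audible after the key goes down.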

  11. Modify the ADSR values until a satisfactory sound is produced. Use this sound to play a few simple melodies, making changes to the parameters as necessary to improve the sound quality and usefulness of the instrument.
  12. Finally, use the two sub-oscillator sliders to add lower harmonics to the sound. You may need good-quality speakers or headphones to hear the change, since smaller speakers do not reproduce low-frequency content well.
  13. Once a desired sound has been created, click on the preset menu and choose Save As… to save the sound for later use.
  14. Go through this process several times until three contrasting and useful sounds have been created.

Closing the application

  1. Close the Logic application by selecting Logic Pro > Quit Logic Pro.

Using libmapper for signal connections among visual programming projects

by Sam Walker-Kierluk

In this age of innovation in the area of digital musical instruments, ever-increasing numbers of interested musicians and programmers are experimenting with the development of new and exciting pieces of hardware and software that are redefining what we think of as instruments.  The creation of applications that act as instruments, or fulfill other custom audio synthesis and processing needs, has become accessible to even the most novice of computer users, thanks to visual programming environments like Max and Pure Data that are designed primarily for audio purposes.  libmapper is a software library for connecting virtual data signals, either locally or across networked computers, providing tools for creating and using systems for interactive control of media synthesis, and it integrates effectively with programs like Max and Pure Data.

walker_kierluk_01

In this tutorial, I will demonstrate how to transmit numerical data from one Max project to another Max project using the libmapper library and its associated tools.

libmapper is open-source software that is cross-compatible with many software environments.  The developers have already created the two tools needed to use libmapper with Max: external objects for Max, and a graphical user interface (mapperGUI) to perform mapping operations among the objects.  The software can be downloaded at http://idmil.org/software/libmapper/downloads, where installation instructions can also be found.

libmapper can be used only to transmit integers and floating-point numbers.

Creating and defining sources and destinations in Max patches

1. Create/open the two Max patches between which you intend to transmit information.  These can both run on the same computer, or on separate computers each running Max, as long as the machines are connected via a local area network (typically Ethernet or Wi-Fi).

For your source patch:

2. Go to the patch that contains source data.  This could include any data-rate (not audio-signal-rate) generator, such as the output of a slider or number box.

3. Create a new object box, and type map.device [patchname], where [patchname] is the name you wish to give to that specific patch on the network.  You must not use any spaces in your naming scheme in libmapper, as this will cause errors and much frustration.  For our purposes, let’s call this device “signal_source”, meaning we’d create an object box and type map.device signal_source.

For each data stream source you wish to make:

4. Create a new object box, and type map.out [sourcename] [i/f], where [sourcename] is the name of this source, and i or f sets whether it will be sending integers or floating-point numbers, respectively.  Optionally, you can add the arguments @min [minvalue], @max [maxvalue], and @unit [unit], the first two of which define the range of the values received by the map.out box, and the last of which labels the data’s measurement unit for convenience later on (Hz, for example).  Defining the range of values allows you to use libmapper’s powerful scaling features after you complete your patching.  In this example, we’ll make a box containing map.out “pitch_raw” i @min 0 @max 1023 @unit pressure.

For your destination patch:

5. Go to the patch where you wish to send the data.

6. Create a new object box with the map.device object (see step 3), with your desired name.

For each data stream destination you wish to make:

7. Create a new object box, and type map.in [destinationname] [i/f], in a similar manner to step 4.  Fill in the arguments for scaling, if you will use this feature, as well as the @unit argument for convenience, if appropriate.  For our box, we will use map.in frequency f @min 440 @max 880 @unit Hz.

Creating Mappings in mapperGUI

1. Launch mapperGUI, an application for configuring libmapper data mappings.  Although libmapper is cross-platform, and Max externals are compiled for Mac OS X and Windows, mapperGUI and other libmapper GUIs are only made for Mac.  However, these GUIs can perform mappings for libmapper instances running on PCs and Macs if they are connected on the same local area network.

2. Once the application has opened, observe the three main columns in the window: Sources, Links, and Destinations.  In the Sources column, you should see an entry called “/signal_source.1” or whatever you named your source patch with the map.device object.  This entry provides information including the number of inputs (map.in) and outputs (map.out) contained within the patch, the port on which the patch communicates, and the IP address of the machine on the network.  In the Destinations column, you should see an entry called “/destination.1” or whatever you named your destination patch with the map.device object.  The Links column should be empty.

3. First, we must tell libmapper that you wish to route data from your source patch to your destination patch.  To do this, click and hold on top of the entry for “/signal_source.1” in the Sources column, drag across the Links column, and release the mouse button on top of the entry for “/destination.1” in the Destinations column.  A line will appear linking the two entries.  If you wish to have one source patch transmit to more than one destination patch, create multiple “Links” from that source by clicking and dragging to each one.

4. Next, click on the tab for your source device that is listed above the three columns, next to “All Devices.”  Ours will be called “/signal_source.1”.

5. You will see three columns as before, but this time, Sources will show your map.out object name(s) within your source patch, Destinations will show your map.in object name(s) for all available destinations to which you have linked, and the “Links” column is replaced by “Connections”.

6. To connect a source to a destination, drag from the source, across the Connections column, to the destination, similar to step 3.  As with step 3, you can connect one source to more than one input.

7. That’s it!  Any data sent into the inlet of your map.out object will be sent to every map.in object to which you have connected it.  A map.in object outputs numbers in whichever format is specified in its name, regardless of the originating data type: floating-point boxes output all received numbers as floating-point numbers, and integer boxes output all received numbers as integers (floating-point numbers received by integer boxes are rounded down to the nearest whole number).

Connection and scaling options in libmapper using mapperGUI

By default, if the @min and @max values have been specified in the Max objects, libmapper will scale the numbers streamed from object to object using these parameters in a linear fashion.  In our case, the “pitch_raw” source’s range of 0-1023 will be mapped onto the “frequency” destination’s range of 440-880.
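To make that default behaviour concrete, here is the arithmetic being performed, written as a small Python function (this is just the formula for a linear mapping, not libmapper’s own code):

# Linear scaling from an input range onto an output range.
def scale_linear(x, in_min, in_max, out_min, out_max):
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

print(scale_linear(0, 0, 1023, 440, 880))     # -> 440.0
print(scale_linear(1023, 0, 1023, 440, 880))  # -> 880.0
print(scale_linear(512, 0, 1023, 440, 880))   # -> roughly 660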

1. To set connection preferences in mapperGUI, click on a connection from a source map.out object to a destination map.in object.  The line and objects will become highlighted.

2. At the top of the mapperGUI window, just under the title bar, there will be a group of boxes: Mute, Byp, Line, Calib, Expr, and Rev.

• “Mute” temporarily disables all throughput on a connection without permanently removing it.

• “Byp” disables scaling, so all numbers pass unaltered from source to destination.

• “Line” is the default option, and scales numbers linearly based on source and destination ranges.

• “Calib” lets you calibrate the input range based on real-time input data and the preset output range.

• “Expr” lets you transform source numbers using a mathematical expression.

• “Rev” is a feature that lets you reverse the functionality of a set of map.out and map.in boxes, with all scaling bypassed.

3. Finally, connections and settings within the Sources-Connections-Destinations view can be saved and recalled using the Load and Save buttons in the top-left corner.

Although not a complete guide to using libmapper with Max and mapperGUI, this should serve as a functional introduction to allow for quick patching and experimentation in the development and use of digital musical instruments and other media projects.

A reversed reverberating effect in 5 steps

by Sebastien Caron-Roy

In this tutorial, I will explain how to create a reversed reverberating effect in 5 steps on any source, using any DAW and any reverb plugin. Reversed reverb is a fairly common effect, but it has been of particular interest to me because it is an effect that cannot exist in natural conditions. This is due to the fact that all audio processing occurs after the source has been played or captured. For example, traditional reverberation is the delay and diffusion of a signal after it has been generated. In the case of a reversed reverb, the processed sound (the diffused “wet” signal) is heard before one hears the direct source (the “dry” signal). This makes the effect impossible to use live, as either the reverb algorithm would need to know what is going to be played before the performer plays anything, or the signal would need to be delayed by at least the reverb’s decay time. This is what causes the effect to sound particularly ethereal: it transcends the linearity of time itself and implies that the source exists outside of the constraints of time. Of course, the nature of digital audio removes these physical limitations and allows us to create such an effect.

1. The first step of this process is to ensure that whatever source one wishes to process is an audio file. This technique will not work with MIDI tracks, so if the source is a software synthesizer or sampler, one will need to bounce the track to an audio file.

2. The next step is to reverse the source that one wishes to process. This is why step one is necessary, as one can only reverse audio files. Nearly all DAWs have an easily accessible function for this.

3. Once one has a reversed source, apply a reverb to the reversed track. The parameters of the reverb can be adjusted to taste, but the reverb needs to be set to “100% wet” so that one can only hear the reverberated sound and none of the original reversed source.

4. Next, bounce the reverberated reversed source to a new audio track. Once this is done, you can remove or bypass the reverb on the original reversed source, so that one is left with a dry source and a wet source.

5. Once this is complete, reverse the audio files for both the dry and the wet tracks. This will cause the dry source to return to its normal un-reversed state, and will cause the wet track to reverberate before the dry source plays. Depending on the bounds of the audio files, it is likely that you will have to experiment with the timing of the wet track relative to the dry track in order for them to overlap properly. Simply drag the wet track left or right until you feel that the two tracks have merged into a single sound.
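If you would rather script the whole process than do it by hand in a DAW, the same five steps can be sketched offline in Python, using convolution with a reverb impulse response as a stand-in for the reverb plugin. The file names here are hypothetical, and the sketch assumes the impulse response was recorded at the same sample rate as the source.

# Offline version of the five steps above, using NumPy, SciPy, and the
# soundfile library. File names and the impulse response are assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_source.wav")      # step 1: the source as an audio file
ir, _ = sf.read("reverb_impulse.wav")    # any reverb impulse response
if dry.ndim > 1:                         # keep the sketch simple: work in mono
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

reversed_dry = dry[::-1]                 # step 2: reverse the source
wet = fftconvolve(reversed_dry, ir)      # step 3: a "100% wet" reverb of the reversed source
wet = wet / np.max(np.abs(wet))          # step 4: "bounce" the wet signal (normalised here)
reversed_wet = wet[::-1]                 # step 5: reverse the wet bounce again

sf.write("reverse_reverb_wet.wav", reversed_wet, sr)

The resulting wet file can then be dragged against the original dry file in any DAW, exactly as described in step 5.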

I hope this tutorial has been an informative and inspirational launching pad for even greater ideas. I highly recommend experimenting with layering several different reverb algorithms together (one short and one longer, for example), or processing the wet track even further by applying effects such as a chorus or a phaser to it.

A look into MuseScore: a free music notation program

by David Adam

As music students, enthusiasts, composers, recordists, and players, we are always looking for ways to improve our craft, whether it be with new technology, techniques, or software. One part of that craft is writing our own music. Notating it can be as simple as taking a piece of score paper and writing notes on it, much as it has been done millions of times before. But this technique requires an internal working knowledge of how the notes and intervals sound, and for some people that can be very difficult. This is where music notation software comes into play.

One of the leaders in music notation software is Finale. Most musicians have probably looked at or used Finale at some point in their career. However, Finale is very pricey, with the 2014 version costing upwards of $600, and it can have a very steep learning curve (this is coming from personal experience). With that being said, there are alternatives. And they are free.

Introducing MuseScore: a free, open-source music composition and notation program created by Werner Schweer. For musicians (and especially students), purchasing programs such as Finale can seem impossible due to the high price, but MuseScore provides an excellent alternative.

Upon first opening the program, I was greeted with a prewritten score for piano. This score is a fairly complex piece, showcasing only a small fraction of what MuseScore is capable of.

adam_01

Upon first listening to the sample by pressing the space bar, I noticed that a blue bar followed the playback position in the score, making it easier to follow along. The MIDI sample of the piano was certainly not the best (although, after exploring some other instruments, some MIDI samples are better than others). However, keeping in mind that this is a free program, it’s not something to complain about; the program still conveys its message very well.

Starting up a new score is very easy. It’s as simple as clicking on File > New. After this, I was prompted with a window to name the composition and add any amount of information I desired.

adam_02

After typing in the information and clicking Next, the window brought me to a menu where I could add just about any number of instruments I wanted, which included everything found in a typical orchestra plus many more.

adam_03

After clicking Next, the window brought me to a menu where I could select a key signature for the piece, and then finally to a menu where I could choose the tempo, number of bars, and time signature for the piece. After clicking finish, the score was ready.

Inputting notes is very simple. By pressing the N key on the keyboard, you enter note entry mode. From there, just select the value of the note you want (whole, half, quarter, etc.) from the menu at the top and click anywhere on the staff where you would like the note to go.

adam_04

It’s that simple. The only problem I encountered was inputting percussion parts, which are entered in a slightly different way. All it takes is to click the percussion staff, click on drums under palettes on the left side of the screen, click on the note that appears under drums, and start clicking on the percussion staff.

adam_05

Using and learning MuseScore was certainly the easiest experience I have had thus far with a music notation program. Although it does not include some features Finale has, and some of Finale’s MIDI samples are better, a free program that does just about everything Finale does is hard to argue with.

How to Make Your Voice Sound Like a Portal-Style Robot Using Melodyne Editor

by Graham Trudeau

The Portal series of videogames is well known for its distinctive characters, and their voices. Here’s a video for anyone who may be unfamiliar.

The distinctive sound of many voices in the Portal franchise is the result of basic pitch and formant editing, which can be done using Melodyne Editor. A free trial of this software can be found at: http://www.celemony.com/en/trial . Here’s how you can make your own Portal-inspired robot voice.

1. Record Some Vocals

trudeau_01

This basic but crucial step can be done in any Digital Audio Workstation – Reaper is used in the photo above. A stereo or mono recording is fine, and rendering the recording as a high-quality WAV file is recommended, although most lower-quality file formats will also work. Ideally, you’ll want to avoid recording in a way that captures the “human” elements of speech: breathing, plosives, glottal fry, and other vocal phenomena. Many of these sounds can be avoided or minimized by using a pop filter.

Here’s an example recording. Note: I didn’t use a pop filter, so some plosives and light breath sounds are present. It’s worth paying attention to how noticeable these sounds are in later examples.

(“Unprocessed Vocals.mp3”)

2. Import them into Melodyne Editor

trudeau_02

While you can use the File menu to import your vocals, you can also drag the file directly into the main editor window (the grey window above). If the file imported correctly, you should see a window with pitch information pop up, similar to the one above.

Troubleshooting Note: If Melodyne shows your vocals as having only one pitch (i.e., C4), this may be due to a large amount of ambient noise, or to multiple simultaneous pitches if you’re using the Essentials ($99) version of Melodyne. Either way, you’ll need to re-record a higher-quality version of your vocals.

3. Adjust the pitches

By left clicking and dragging, you can select multiple pitches at once – do this. Once you have, right click and select “Edit Pitch”. This will cause the nearest available chromatic pitch for each note to be highlighted in blue. Double clicking a pitch will automatically move all of your selected pitches to their nearest chromatic locations, which helps your voice sound less natural. Additionally, moving all your pitches up will make the voice sound more feminine, and moving them down will make the voice sound more masculine. Mixing these approaches creates a warping effect.
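As an aside, “nearest available chromatic pitch” simply means the closest equal-tempered semitone to the pitch Melodyne detected. Here is a quick Python sketch of that calculation (assuming the usual A4 = 440 Hz reference):

# Snap a detected frequency to the nearest equal-tempered semitone.
import math

def nearest_chromatic(freq_hz, ref=440.0):
    midi = round(69 + 12 * math.log2(freq_hz / ref))   # nearest MIDI note number
    return ref * 2 ** ((midi - 69) / 12)               # back to a frequency

print(nearest_chromatic(450.0))   # -> 440.0 (A4)
print(nearest_chromatic(265.0))   # -> about 261.6 (C4)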

4. Flatten the Pitch Modulation

Right clicking again and selecting the “Pitch Modulation” tool will make any variances in pitch readily apparent. Clicking on a pitch and dragging up or down (making sure all your pitches are already highlighted) will flatten or accentuate these variances – a flat affect is generally effective for creating a robotic voice.

5. Adjust the Formants

While not a required step, adjusting the formants of your vocals (right click and select the Formant tool) can help make any pitch shifting sound more or less natural, as required. Raising your formants above their pitches will make the voice sound more feminine (or nasal if the pitch is low), and lowering them will make your vocals sound more throaty and masculine. Once again, mixing these approaches creates a warping effect.
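Melodyne’s per-note editing is the whole point of this tutorial, but if you just want to hear what a blunt, global pitch shift sounds like for comparison, something similar can be approximated in Python with librosa. The file name and shift amounts are arbitrary, and this gives you none of Melodyne’s separate formant control.

# A crude global pitch shift for comparison; it will not match Melodyne's
# per-note editing or its independent formant handling.
import librosa
import soundfile as sf

y, sr = librosa.load("unprocessed_vocals.wav", sr=None)           # keep the original sample rate
shifted_up = librosa.effects.pitch_shift(y, sr=sr, n_steps=5)     # about five semitones up
shifted_down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-5)  # about five semitones down

sf.write("vocals_up.wav", shifted_up, sr)
sf.write("vocals_down.wav", shifted_down, sr)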

Before we move on, here’s what our initial vocals sound like after some pitch processing.

Raising the pitches and the formants:

(“UpHigh.mp3”)

Lowering the pitches and the formants:

(“DownLow.mp3”)

Raising and lowering pitches and formants:

(“MixedVox.mp3”)

6. Render, and Apply any Additional Effects

trudeau_06

This optional step can be extremely important if you need to add additional elements to your voice, such as speaker distortion or corruption errors. Here’s what our earlier voices sound like with some additional effects; a short code sketch of two of these effects follows the examples.

The raised vocals – sped up, distorted, and occasionally stuttered

(“HighFin.mp3”)

The lowered vocals – slightly distorted, slightly bitcrushed, with reverb.

(“LowFin.mp3”)

The mixed approach vocals – assorted “corruption” effects, including slowing and reversal.

(“MixedFin.mp3”)
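If you would rather script some of these extra effects than reach for plugins, rough versions of speaker-style distortion and bitcrushing are easy to sketch in Python. The file name, drive, and bit depth below are arbitrary example values, not the settings used for the clips above.

# Rough approximations of two of the effects mentioned above: soft-clipping
# distortion and a bitcrusher. All values are arbitrary starting points.
import numpy as np
import soundfile as sf

y, sr = sf.read("robot_voice.wav")   # a hypothetical render of the processed vocals

drive = 8.0
distorted = np.tanh(drive * y) / np.tanh(drive)   # soft clipping, a bit like an overdriven speaker

bits = 6
levels = 2 ** bits
crushed = np.round(distorted * (levels / 2)) / (levels / 2)   # coarse quantisation ("bitcrush")

sf.write("robot_voice_fx.wav", crushed, sr)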

Congratulations, you now know how to make your voice sound like a robot.

CSound Tutorial: Instruments and Score Code

by Jake Hills

Csound is one of the most powerful computer programming languages for generating and processing sound. There is almost nothing in the realm of digital audio that Csound cannot do. It was developed by Barry Vercoe at MIT in the mid 1980s and was based on Max Mathews’s earlier Music-N family of programs.  Csound is free and open source, is maintained by an experienced core of programmers and musicians, and is supported by a large online community. Because of that community, help is readily available for anyone who wishes to get started with audio programming. This tutorial will teach you how to make a first sound in Csound.

Csound does not have its own GUI (graphical user interface).  One cannot simply double click and open Csound.  The easiest way to use Csound is through what is called a front end, or integrated development environment (IDE for short). Most installations of Csound come with an IDE called CsoundQt.  Although there are other IDEs available, this program is a fine tool for working with Csound.  Upon opening CsoundQt, the user is greeted with this:

<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>

sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1.0

</CsInstruments>
<CsScore>

</CsScore>
</CsoundSynthesizer>

Like other tag-based formats, a Csound file is made up of opening and closing tags.  At first, it can be confusing to look at something like this. Below is the same code, with added comments that make the layout easier to understand.

<CsoundSynthesizer>  ; Everything under this tag is part of the Csound program.

<CsOptions>          ; Under the CsOptions tag, we put specific options that
                     ; we want.  We don't need any to make sound for now.
</CsOptions>         ; This is the close of the options tag.

<CsInstruments>      ; Under this tag, we put the definitions for the
                     ; instruments that we will make.

sr = 44100           ; Here, we set the sample rate to 44.1 kHz.
ksmps = 128          ; Think of this as a buffer size.
nchnls = 2           ; This is how many channels we produce.  Leave this set
                     ; at 2 for now.
0dbfs = 1.0          ; This is 0 dB Full Scale.  When set to 1, the maximum
                     ; amplitude for your program is 1.

; Here is where we create our instrument.

</CsInstruments>     ; Here is the end of the instruments tag.

<CsScore>            ; Here is where we tell Csound what to do with the
                     ; instruments.

; This section is broken up into columns.  The first tells Csound which of
; the instruments to play, the second is when it begins, and the third is how
; long it lasts.  Any subsequent columns are assigned by the programmer.

</CsScore>           ; This is the end of the score section.
</CsoundSynthesizer> ; This tag closes the Csound file.  Any information after
                     ; this tag will be ignored by the compiler.

Now that we have a basic understanding of the layout of a Csound file, it is time to make a little noise.  Inside the <CsInstruments> tag, below 0dbfs = 1.0 is where we make our first instrument.  It may look something like this:

instr 1                 ; This is the name of the instrument.  We just called it 1.

aSig poscil 0.5, 440    ; Here, we use the opcode 'poscil', which can take two
                        ; arguments, in this case amplitude (0.5) and frequency
                        ; (440); it generates a sine wave and puts it in
                        ; something called aSig.

outs aSig, aSig         ; Here, whatever was in aSig is sent to the speakers
                        ; using 'outs'.

endin                   ; This is the end of the instrument we have created.

With our first instrument created, we now need to tell Csound what to do with it. This is done in the <CsScore> section. The first three columns here will tell Csound what is playing, when it plays, and how long it lasts.  For example:

;what when length
 i1   0    2

This code says to play an instrument called 1, which we made in the <CsInstruments> tag, beginning at time 0 and lasting 2 beats.  By default, Csound is set at 60 beats per minute, so 2 beats equals 2 seconds for now.  Here is the completed code without comments:

<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>

sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1.0

instr 1
aSig poscil 0.5, 440
outs aSig, aSig
endin

</CsInstruments>
<CsScore>

i1 0 2

</CsScore>
</CsoundSynthesizer>

We can now run the code by pressing command (or control) + enter.  We should then hear a sine wave with a frequency of 440 Hz for 2 seconds.

Csound is extremely powerful, and once mastered, it can be used to do just about anything imaginable with audio.  This tutorial covered one of the most basic skills in Csound.  One of the best resources for learning how to use Csound is found online at http://en.flossmanuals.net/csound/ .  A complete list of opcodes with examples can be found here: http://www.csounds.com/manual/html/ .  With these resources and a little time and effort, Csound can be a powerful tool in audio production and processing.