
# Holy Bot

### musicmaking

There hasn’t been any time for composing “real” tracks lately. I’ve only had time to doodle on this Eurorack modular system on the dinner table. And drink coffee. But let me tell you a little (not all!) of what’s going on in this patch:

A triangle wave is modulated, via linear FM, by a variable-shape oscillator. A digital phase-distortion oscillator is gently folded before going to the mixer. A sub also appears there.

A couple of cycling CV curves are generated in a semi-random fashion and fed into a quantizer set to the C harmonic minor scale. This is routed as 1 V/octave to the oscillators.

The filter is self-oscillating and tracks the same pitch.

The tempo is set to 90 BPM and triggers straight eighths on the quantizer and envelope.

Now this is only a snapshot of a patch in progress. All those patches will be lost in time, like tears in rain.

Everybody seems to be talking about Euclidean rhythms, but here’s a short explanation on this blog anyway.

The Euclidean algorithm computes the greatest common divisor of two given integers. This is used to distribute numbers as evenly as possible, not only in music, but in applications in many fields of scientific research, e.g. string theory and particle acceleration.

Euclidean rhythms are derived from two values: one represents the length of the pattern in steps, and the other the number of hits, or notes, to be distributed as evenly as possible across that pattern. Any remainder is also distributed as evenly as possible.

Here are two examples. First, E(2,8): a sequence of eight steps and two hits, [11000000], where 1 is a hit and 0 represents a rest. Spread out evenly, it looks like this: [10001000].

A second example, E(5,8), with five hits: starting from [11111000] and distributing the remainders as evenly as possible yields [10110110].

Any two numbers can be combined to generate a semi-repetitive pattern. It’s also possible to offset the rotation of the sequence by a certain number of steps. And by playing several patterns with different lengths and offsets, complex polyrhythms emerge. For example, try playing E(2,8), described above, together with E(5,16) like this: [0010010010010010].
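The even distribution above can be sketched in a few lines of Python using a simple modular-arithmetic formulation (a shortcut around the full Bjorklund bookkeeping; the output may be a rotation of the canonical pattern):

```python
def euclid(hits, steps, offset=0):
    """Euclidean rhythm E(hits, steps): spread `hits` as evenly as
    possible over `steps`; `offset` rotates the pattern. This simple
    modular formulation can yield a rotation of the textbook result."""
    return [1 if ((i - offset) * hits) % steps < hits else 0
            for i in range(steps)]

euclid(2, 8)   # [1, 0, 0, 0, 1, 0, 0, 0]
```

Running several of these with different lengths and offsets against each other is an easy way to audition the polyrhythms described above.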

# Mixing at the Right Levels

There’s this property of the ear: it hears different frequencies at different levels. The Fletcher-Munson curves, commonly known as equal-loudness contours, indicate the ear’s average sensitivity to different frequencies at various amplitude levels.

Even if the tonal balance of the sound remains the same, at low volume the mid-range frequencies sound more prominent, while at high listening volumes the lows and highs come forward and the mid range seems to back off.

In short, this explains why quieter music seems to sound less rich and full than louder music. Generally it’s better for the music to sound good as the volume increases.

As a consequence, you should edit, mix and work on your music at a high enough volume (not ridiculously loud), so that you can make sure it doesn’t sound terrible when it’s listened to at a higher level. As a music producer you want your music to sound best when the listener is paying full attention. But use caution, don’t damage your ears bla bla bla.

The usual way to treat a dry vocal is to put reverb and delay on it. But that can make the vocal a bit muddy.

To keep it in-your-face and conserve the clarity of the vocal, while still having an effect to make it sound bigger, try ducking the volume of the delays whenever the dry vocal is active. To do so, side-chain the delay bus to the lead vocal track.

For example, put a delay device with a quarter-note delay time and low feedback on a return bus, and send a little of the vocal track to it. On the same bus, insert a compressor and select the vocal track as the side-chain source. Set it up as you like; perhaps bring the wet parameter down some.

You can also try the same thing with a reverb.
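Outside the DAW, the same ducking idea can be sketched as an envelope follower on the dry vocal that pulls down the delay return’s gain. The threshold, depth and release values below are made up for illustration:

```python
import numpy as np

def duck(wet, key, sr, threshold=0.1, depth=0.8, release_s=0.1):
    """Duck the `wet` delay return whenever the `key` (dry vocal)
    is active; a toy gain law, not a real compressor curve."""
    a = np.exp(-1.0 / (release_s * sr))   # one-pole release coefficient
    smoothed = np.zeros(len(key))
    s = 0.0
    for i, e in enumerate(np.abs(key)):
        s = max(e, a * s)                 # instant attack, smooth release
        smoothed[i] = s
    gain = np.where(smoothed > threshold, 1.0 - depth, 1.0)
    return wet * gain
```

When the vocal is silent the gain sits at 1.0 and the delays bloom back in, which is exactly the in-your-face effect described above.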

Shimmer is a feedback reverb/pitch-shift effect made popular by Brian Eno and Daniel Lanois. The idea is to feed a reverb into a pitch shifter and back again, so each repetition gets shifted one octave up. In this case I’m using Ableton Live with stock effects: the Reverb, and the Grain Delay, where the signal gets delayed and pitch shifted. You can apply these guidelines in other environments (hardware/software), but here’s how I do it:

1. Insert two Return Tracks and put a Reverb on A.
2. Turn off the Input Processing Hi Cut, set Global Quality to High, turn off the Diffusion Network High filter, set a fairly long Decay Time and turn Dry/Wet to 100 %.
3. Enable Send B on the Return Track A and set it to max.
4. Use the Grain Delay on Return Track B.
5. Set Frequency to 1.00 Hz and Pitch to 12.0.
6. Enable Send A on the Return Track B and set it to max.
7. Dial in Send A of the track with the signal source that you want to shimmer.

Also try to bring in Send B on the signal. And play with the Size and Diffuse controls of the Reverb.
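The core feedback loop can also be sketched offline, outside Ableton. In this toy version a plain delay line stands in for the reverb, and the octave shift is a naive double-speed resample; all parameter values are arbitrary:

```python
import numpy as np

def shimmer(dry, sr, delay_s=0.5, feedback=0.5, taps=4):
    """Toy shimmer: each feedback pass delays the signal and shifts it
    one octave up by reading it back at double speed (naive resample)."""
    dry = dry.astype(float)
    wet = dry.copy()
    delay = int(delay_s * sr)
    tail = np.zeros(len(dry))
    for k in range(1, taps + 1):
        n = len(wet)
        # +12 semitones per pass: resample the buffer at 2x speed
        wet = np.interp(np.arange(0, n, 2.0), np.arange(n), wet)
        start = k * delay
        if start >= len(tail):
            break
        end = min(len(tail), start + len(wet))
        tail[start:end] += (feedback ** k) * wet[: end - start]
    return dry + tail
```

Sending the result through a real reverb (or swapping the delay for one, as in the Ableton recipe above) gives the classic pad-like wash.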

Here’s something for you synth programmers to try out: Modulate certain aspects of an envelope with itself.

For example, set the modulation destination of the filter envelope to affect its own parameter, such as its (linear) attack or decay time, by a positive or negative amount. This should render a concave or convex shape, respectively.

This effect is referred to as recursive modulation.

Now try to set filter envelope attack to 32, and envelope amount to 48. Then go to the modulation matrix and select the filter envelope as source, and modulate the destination filter envelope attack by 48.

It’s also possible to use this method on an LFO. Modulating its own level will affect the shape of the LFO, and modulating its own rate will affect both the overall rate and the shape.
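Here’s a rough numerical sketch of recursive modulation applied to a linear attack segment. The per-sample rate and the amount values are hypothetical, not taken from any particular synth:

```python
import numpy as np

def recursive_attack(n=1000, base_rate=0.001, amount=0.0):
    """Linear attack whose per-sample rate is modulated by its own
    output level; amount=0 gives a straight line up to 1.0."""
    env = np.zeros(n)
    level = 0.0
    for i in range(n):
        # the envelope's own level feeds back into its rate
        level = min(1.0, level + base_rate * (1.0 + amount * level))
        env[i] = level
    return env
```

A positive amount makes the segment accelerate as the level rises, a negative amount makes it decelerate, bending the straight line into the two curved shapes described above.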

First off, there’s really no correct order. It’s all about preference, what you want to achieve and context. Although some effects do seem to work better in certain places of the signal path than in others. Still, feel free to experiment.

Inserts and Send Effects

Effects are chained in either series or parallel. For parallel processing, use send effects to process a copy of a signal (without affecting the original). Use auxiliary sends for time-based effects, such as reverb and delay.

No rules, but in most situations it makes more sense, and saves processing power and setup time, if for example reverb and delay are shared between all channels, rather than inserting a new instance of each effect in an insert slot on each channel.

Use insert effects for processing that changes the signal completely, e.g. dynamics processors like compressors, expanders, noise gates and transient shapers.

In terms of signal flow, the channel insert connections usually come before the channel EQ, fader and pan.

Daisy Chain Effects

It’s possible to daisy-chain effects in the signal path. The order of the effects determines the sound, and different orders have different impacts. Here’s a suggestion:

1. Noise gate
2. Subtractive EQ
3. Dynamics (compressors, limiters, expanders)
4. Gain (distortion, saturation)
5. General EQ
6. Time-based modulation (chorus, flanger, phaser)
7. Pure time-based (delay, reverb)

To clean up the signal, put the gate first; it works better on a signal with a wide dynamic range (rather than, for example, after a compressor).

Then use an EQ to cut away the unwanted frequencies; do this to avoid enhancing them with later effects. (Also maybe roll off frequencies below 30 Hz.)

Then place a compressor to adjust the dynamics of the signal.

After that, add an overdrive boost or tape saturation effect. Such effects can also work well at the beginning of the chain, as part of the initial sound, since the harmonics generated by a distortion device bring richness to the effects that follow.

After the gain effects, use EQ to shape the tonal balance, but be careful when boosting.

Modulation effects are usually placed after gain-related effects and before the pure time-based effects.

Pure time-based effects such as delay and reverb usually come last in the signal chain.
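In code terms, a series chain like the one above is just function composition. Here’s a toy sketch with crude stand-ins for a gate, a compressor and a drive stage (all curves are invented for illustration):

```python
import numpy as np

# each "effect" maps a buffer to a buffer; a series chain is just
# left-to-right function composition
def chain(*effects):
    def process(x):
        for fx in effects:
            x = fx(x)
        return x
    return process

gate  = lambda x: np.where(np.abs(x) < 0.05, 0.0, x)   # 1. noise gate
comp  = lambda x: np.tanh(2.0 * x) / np.tanh(2.0)      # 3. soft "compression"
drive = lambda x: np.clip(1.5 * x, -1.0, 1.0)          # 4. gain/saturation

vocal_chain = chain(gate, comp, drive)
```

Reordering the arguments to `chain` is the code equivalent of dragging effects around in the rack, and it changes the output just as the post argues.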

The Mastering Chain

This post mainly covers effect chains for channels and buses, but when you enter the mastering stage, a conventional order for the mastering chain is:

1. EQ
2. Dynamics
3. Post EQ
4. Harmonic exciter
5. Stereo imaging
6. Loudness maximizer

There are a few things you can do to make your bass heard on smaller speakers like laptops, tablets and cellphones. First you need both fundamental and harmonic content in your bass. The fundamental frequency is the foundation, the note that represents the pitch being played, and the harmonics are the higher frequencies that support the fundamental. In short, it’s the higher-frequency harmonics that allow the sub to cut through the mix.

One idea is to create higher-frequency harmonics. The harmonics should be harmonically related to the fundamental frequency, even though they don’t contain it. (The harmonics trick your brain into hearing lower frequencies that aren’t really there.) Add a touch of harmonic saturation, a little drive, a little warmth or fuzz, to help that sub cut through. The harmonic distortion adds high-frequency information that reveals presence on systems with poor bass response.
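As a minimal illustration of the idea, run a pure 55 Hz sine through a soft saturator (the drive amount here is arbitrary) and new odd harmonics at 165 Hz, 275 Hz and so on appear in the spectrum:

```python
import numpy as np

sr, f0 = 44100, 55.0                  # assumed sample rate and sub pitch (A1)
t = np.arange(sr) / sr                # exactly one second
sub = np.sin(2 * np.pi * f0 * t)      # pure sine: fundamental only
driven = np.tanh(3.0 * sub)           # soft saturation adds odd harmonics

# with a 1-second window each rfft bin is 1 Hz wide, so bin k = k Hz
spec = np.abs(np.fft.rfft(driven)) / len(t)
```

Those upper harmonics at 165 Hz and above are exactly what small speakers can reproduce, even when the 55 Hz fundamental is inaudible.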

Also try copying the bass to a parallel channel, bitcrushing it to emphasize the higher harmonics, cutting all the lows and mixing it with the original bass.

If you’re beefing up your main bass by layering a separate, low-passed wave at the octave below, perhaps try a square (or triangle) wave instead of a pure sine: it adds some subtle higher frequencies that help the sub bass translate better.

You can also EQ the bass. Try boosting the harmonic multiples of the fundamental frequency to bring out some definition in the bass sound. Boosting above 300 Hz will bring out the bass’s unique timbral character; actually, try around 1 kHz (but add a low-pass filter at around 2-3 kHz).

Use high-pass filtering (to clear the low-end frequencies and make room for the intended bass sound), and you can also side-chain your sub bass to keep it from fighting with the kick drum.

When it comes to kick drums, you can add a small click or noise transient to help them translate to smaller speakers.

P.S. There are also plugins that use psycho-acoustics to calculate precise harmonics that are related to the fundamental tones of bass.

I’ve written about the importance of headroom when submitting your track to a professional mastering engineer, but you should also pay attention to headroom when you do this on your own and when you encode MP3s.

When a track is mastered right up to 0 dBFS (the maximum level for digital audio files), many converters and encoders are prone to clipping. Lossy compression formats use psychoacoustic models to remove audio information, and in doing so introduce an approximation error, a noise which can raise peak levels and cause clipping in the audio signal, even if the uncompressed source file appears to peak under 0 dB.

In Practice

For example, SoundCloud transcodes uploaded audio to 128 kbps MP3 for streaming. In this scenario, use a true-peak limiter to ensure the right margin for the source material. A margin of -1.0 or -1.5 dBFS should avoid distortion (sometimes -0.3, -0.5 or -0.7 will work, but it’s safer to keep a greater margin).
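To see why sample peaks undersell the real waveform, here’s a crude inter-sample (true) peak estimate based on FFT upsampling. It’s a sketch of the idea, not a BS.1770-compliant meter:

```python
import numpy as np

def true_peak_db(x, oversample=4):
    """Rough inter-sample peak estimate: band-limited upsampling by
    zero-padding the spectrum, then taking the peak in dBFS."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    up = np.fft.irfft(spectrum, n * oversample) * oversample
    return 20 * np.log10(np.max(np.abs(up)))

# a sine whose crests fall between samples: every sample reads about
# -3 dBFS, but the reconstructed waveform actually peaks at 0 dBFS
x = np.sin(np.pi / 2 * np.arange(64) + np.pi / 4)
```

A sample-peak limiter would happily pass this signal at -3 dBFS, yet after DA conversion or MP3 encoding it reaches 0 dBFS, which is exactly the headroom problem described above.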