Holy Bot

Bedroom music production, gaming and random shit

About Euclidean Rhythms

Everybody seems to be talking about Euclidean rhythms, but here’s a short explanation on this blog anyway.

The Euclidean algorithm computes the greatest common divisor of two integers. The same principle can be used to distribute events as evenly as possible, not only in music but in many fields of scientific research, e.g. string theory and particle acceleration.

Euclidean rhythms are derived from two values: one that represents the length of the pattern in steps, and one that defines the number of hits or notes to be distributed evenly across that pattern. Any remainders are also distributed as evenly as possible.

Here are two examples. First, E(2,8): a sequence of eight steps and two hits [11000000], where 1 is a hit and 0 is a rest. Spread out evenly, it becomes [10001000].

A second example, E(5,8): five hits over eight steps, starting as [11111000]; after the remainders are distributed as evenly as possible (via an intermediate step like [10101011]), it ends up as [10110110].

Any two numbers can be combined to generate a semi-repetitive pattern. It’s also possible to offset (rotate) the sequence by a certain number of steps. And by playing several patterns with different lengths and offsets, complex polyrhythms occur. For example, try playing E(2,8), as described above, together with E(5,16) like this [0010010010010010].
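
If you want to generate these patterns yourself, here’s a minimal Python sketch using a Bresenham-style even-distribution trick. It gives the same interval structure as the proper Bjorklund algorithm, though possibly rotated, which is fine since you can offset the rotation anyway.

```python
def euclid(hits, steps, offset=0):
    # Bresenham-style even distribution: step i gets a hit whenever
    # (i * hits) mod steps wraps around below `hits`. The result has the same
    # interval structure as the Bjorklund/Euclidean algorithm, possibly rotated.
    return [1 if ((i + offset) * hits) % steps < hits else 0 for i in range(steps)]

print(euclid(2, 8))    # [1, 0, 0, 0, 1, 0, 0, 0]  ->  E(2,8)
print(euclid(5, 8))    # a rotation of E(5,8) = [1, 0, 1, 1, 0, 1, 1, 0]
print(euclid(5, 16))   # a rotation of the E(5,16) pattern above
```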

Mixing at the Right Levels

The ear is not equally sensitive to all frequencies, and its sensitivity changes with listening level. The Fletcher-Munson curves, commonly known as equal-loudness contours, show the ear’s average sensitivity to different frequencies at various amplitude levels.

[Figure: Fletcher-Munson equal-loudness contours]

Even if the tonal balance of the sound stays the same, mid-range frequencies sound more prominent at low volume, while at high listening volumes the lows and highs come forward and the mid range seems to back off.

In short, this explains why quieter music seems to sound less rich and full than louder music. Generally it’s better for the music to sound good as the volume increases.

As a consequence of this, you should edit, mix and work on your music at a high enough volume (not ridiculously loud), so you can make sure it doesn’t sound terrible when it’s listened to at a higher level. As a music producer you want your music to sound its best when the listener is paying full attention. But use caution, don’t damage your ears bla bla bla.
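
If you want to put rough numbers on the effect, here’s a small sketch that prints the standard A-weighting curve. This is not the Fletcher-Munson data itself, just the single A-weighting curve as a crude proxy for how much the ear discounts lows and extreme highs relative to 1 kHz at moderate levels.

```python
import math

def a_weighting_db(f):
    # IEC 61672 A-weighting: a rough single-curve stand-in for the ear's
    # reduced sensitivity to lows and highs; roughly 0 dB at 1 kHz.
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.0

for f in (50, 100, 500, 1000, 4000, 10000):
    # The low end comes out far below the 1 kHz reference.
    print(f"{f:>5} Hz: {a_weighting_db(f):+6.1f} dB")
```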

Delayed LFO on Eurorack

Many of the vintage Roland synths – like the Jupiter-8, MC-202 and Juno-106 – feature a delayed LFO, which can give you a vibrato that comes in after a while on longer notes.

It’s possible to do this effect with the Doepfer A-147-2 VCDLFO, but the module can be a little confusing. So here’s how to do it (the patch also involves a Make Noise FUNCTION, but don’t worry, it’s expendable):

  1. Apply a gate to the Signal Input of the Make Noise FUNCTION.
  2. Send the EOR (End of Rise) output of FUNCTION to Delay Reset on the Doepfer A-147-2.
  3. Send the pitch CV to the CV (1 V/octave) input of a VCO.
  4. Connect the signal from Out (not Delay Out) of the A-147-2 to another CV input on the VCO, preferably with an attenuator. (If the VCO doesn’t have multiple CV inputs, use a linear mixer.)
  5. Set the Delay knob on the A-147-2 between 1 and 2; the delay time scale actually runs the wrong way around.
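
For comparison, here’s roughly what the patch does, sketched in Python as a delayed vibrato: the LFO depth stays at zero for a while and then ramps in. The parameter names and values are just made up for the example.

```python
import math

SR = 44100  # sample rate (Hz)

def delayed_vibrato(freq=220.0, dur=3.0, lfo_rate=5.0,
                    depth_semitones=0.3, delay=1.0, fade=0.5):
    # Software analogue of the patch above: the vibrato depth is held at zero
    # for `delay` seconds, then ramps in over `fade` seconds.
    out, phase = [], 0.0
    for n in range(int(dur * SR)):
        t = n / SR
        ramp = min(max((t - delay) / fade, 0.0), 1.0)   # 0 -> 1 after the delay
        vib = ramp * depth_semitones * math.sin(2 * math.pi * lfo_rate * t)
        phase += 2 * math.pi * freq * 2 ** (vib / 12) / SR
        out.append(math.sin(phase))
    return out

samples = delayed_vibrato()  # write to a WAV or feed to your audio framework
```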

Now I just sold my A-147-2 module. It was nice and full of functionality, but it overlapped with other modules, and every HP counts. Although this cool delayed LFO effect is something I’m no longer able to patch. I think.

Vocal Delay Ducking

The normal way to treat a dry vocal is to put reverb and delay on it. But that can make the vocal a bit muddy.

To keep the vocal in-your-face and preserve its clarity, while still having an effect that makes it sound bigger, try ducking the volume of the delays whenever the dry vocal is active. To do so, side-chain the delay bus to the lead vocal track.

For example, use a delay device on a return bus, set it to a quarter-note delay with low feedback, and send the vocal track to it at a slightly lower level. On the same bus, put a compressor and select the vocal track as the side-chain source. Set it up as you like; perhaps bring the wet parameter down some.

You can also try the same thing with a reverb.
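
If you’d rather see the idea as code than as DAW routing, here’s a rough envelope-follower sketch: the dry vocal’s level pulls down the gain of the delay bus. It isn’t any particular compressor, just the ducking principle, and all the parameter values are arbitrary.

```python
import math

def duck_delays(delay_bus, vocal, sr=44100, attack_ms=5.0, release_ms=120.0, depth=0.7):
    # Sidechain ducking sketch: follow the dry vocal's level and use it to
    # pull the delay bus down by up to `depth` while the vocal is active.
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for d, v in zip(delay_bus, vocal):
        level = abs(v)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level      # simple envelope follower
        out.append(d * (1.0 - depth * min(env, 1.0)))  # louder vocal -> quieter delay
    return out
```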

Create a Shimmer Reverb

Shimmer is a feedback reverb pitch-shift effect made popular by Brian Eno and Daniel Lanois. The idea is to feed a reverb into a pitch shifter and back again, so each repetition gets shifted one octave up. In this case I’m using Ableton Live with stock effects: the Reverb, and the Grain Delay where the signal gets delayed and pitch shifted. You can use these guidelines in different environments (hardware/software), but here’s how I do it:

  1. Insert two Return Tracks and put a Reverb on A.
  2. Turn off the Hi Cut in the Input Processing section, set Global Quality to High, turn off Diffusion Network High, set a fairly long Decay Time and turn Dry/Wet to 100 %.
  3. Enable Send B on the Return Track A and set it to max.
  4. Use the Grain Delay on Return Track B.
  5. Set Frequency to 1.00 Hz and Pitch to 12.0.
  6. Enable Send A on the Return Track B and set it to max.
  7. Turn up Send A on the track with the signal source that you want to shimmer.

Also try bringing in Send B on the source signal, and play with the Size and Diffuse controls of the Reverb.
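
Outside the DAW, the same feedback routing can be sketched in a few lines. This toy version (numpy assumed) swaps the Reverb for a plain delay line and uses a very naive decimation-based octave-up, so it won’t sound pretty, but it shows the loop: the wet signal is pitched up an octave and fed back into the input.

```python
import numpy as np

def naive_octave_up(block):
    # Very crude pitch shift: read the block at double speed, then repeat it
    # to keep the block length. Real shimmers use granular or FFT shifting.
    up = block[::2]
    return np.concatenate([up, up])[: len(block)]

def toy_shimmer(dry, sr=44100, block=1024, delay_s=0.25, feedback=0.6):
    # The shimmer loop in miniature: the "reverb" is just a delay line here,
    # and its output is shifted up an octave before being fed back in.
    dry = np.asarray(dry, dtype=float)
    n_blocks = int(np.ceil(len(dry) / block))
    dry = np.pad(dry, (0, n_blocks * block - len(dry)))
    delay_line = np.zeros(int(sr * delay_s))
    fb = np.zeros(block)
    out = np.zeros_like(dry)
    for i in range(n_blocks):
        x = dry[i * block:(i + 1) * block] + feedback * fb
        delay_line = np.concatenate([delay_line, x])   # push the block in
        wet = delay_line[:block]                       # pull the delayed block out
        delay_line = delay_line[block:]
        out[i * block:(i + 1) * block] = wet
        fb = naive_octave_up(wet)                      # octave-up feedback for next pass
    return out
```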

Split Frequency, Split

I’ve written about the perks of putting side-chain compression on only the low frequencies of a bass earlier.

To do so, three copies of the sound are needed. Or, as this post will show, you can split the signal into three frequency bands (high, mid and low). By doing this, it is possible to apply different signal processing to each band.

Now, I usually try to write about music production on a more abstract level, and not about a specific DAW or instrument, but this time I’m going to illustrate with Ableton Live on Mac. The theory is the same though; you just need to figure out how it works in your particular environment.

So I’m using the stock Multiband Dynamics effect to split the frequency bands. The device has a noticeable effect and coloration on the signal, even when the Amount is set to zero, but it should be transparent enough for now.

  1. Drop a Multiband Dynamics in the Device View.
  2. Set the Amount control to 0.0 % to neutralize compression or gain adjustments to the signal.
  3. Group the Multiband Dynamics in an Audio Effect Rack (select the device and press CMD + G).
  4. Show the Chain List of the rack.
  5. Set the crossover points with High and Low (the Mid band consists of what is left in between, so remember to also change the crossover points in the mid chain if you adjust the others), e.g. set the bottom of the high band’s frequency range to 1.00 kHz.
  6. Duplicate the selected chain two times.
  7. Rename all of the chains High, Mid and Low, from top to bottom.
  8. On each chain, solo the corresponding band on the Multiband Dynamics, i.e. solo Low on the low chain.

Now process each band individually. Use a Utility device on the low chain and set Width to 0.0 % to sum the low frequencies to mono. Also, on this band, set up side-chain compression triggered by the kick drum. Try a stereo widening effect and some reverb on the mid chain. And perhaps a little saturation to add some crunch on the high chain, I dunno, it’s up to you.
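
For the curious, here’s how the same three-way split might look outside the DAW, sketched with Butterworth filters via scipy. The crossover frequencies are just example values, and a DAW crossover like the one in Multiband Dynamics is not literally built this way.

```python
import numpy as np
from scipy import signal

def split_three_bands(x, sr=44100, low_xover=120.0, high_xover=1000.0, order=4):
    # Split a mono signal into low / mid / high bands with Butterworth filters.
    # Summing the bands back together is only roughly flat; a proper crossover
    # (e.g. Linkwitz-Riley) behaves better, but this is enough for a sketch.
    low_sos = signal.butter(order, low_xover, btype="lowpass", fs=sr, output="sos")
    mid_sos = signal.butter(order, [low_xover, high_xover], btype="bandpass", fs=sr, output="sos")
    high_sos = signal.butter(order, high_xover, btype="highpass", fs=sr, output="sos")
    low = signal.sosfilt(low_sos, x)
    mid = signal.sosfilt(mid_sos, x)
    high = signal.sosfilt(high_sos, x)
    return low, mid, high

# Example: process each band separately (e.g. mono the lows), then recombine.
x = np.random.randn(44100) * 0.1          # stand-in for an audio signal
low, mid, high = split_three_bands(x)
recombined = low + mid + high
```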

Bedroom Studio Tips Revisited

Three years ago I posted a list of music production methods and tips on my blog that still gets some attention. Now, here’s some other good reading (I hope).

Moreover, you really should check out the most popular post on this blog about the best tips on music production that I can think of.

About Recursive Modulation

Here’s something for you synth programmers to try out: Modulate certain aspects of an envelope with itself.

For example, set the modulation destination of the filter envelope to affect its own parameter, such as its (linear) attack or decay time, by a positive or negative amount. This should render a concave or convex shape, respectively.

This effect is referred to as recursive modulation.

Now try setting the filter envelope attack to 32 and the envelope amount to 48. Then go to the modulation matrix, select the filter envelope as source, and modulate the destination filter envelope attack by 48.

It’s also possible to use this method on an LFO. Modulating its own level will also affect the shape of the LFO, and modulating its own rate will affect both the overall rate and the shape.
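
Here’s a toy numeric sketch of the envelope case, to show why the curve bends: the envelope’s own output stretches or shrinks its attack time while the ramp is running. The parameter values are arbitrary.

```python
def recursive_attack(base_attack=1.0, mod_amount=0.8, sr=1000, dur=2.0):
    # The envelope's own output is fed back into its attack time: a positive
    # amount stretches the attack as the level rises (the ramp flattens out,
    # i.e. concave), a negative amount shortens it (the ramp accelerates,
    # i.e. convex).
    env, out = 0.0, []
    for _ in range(int(sr * dur)):
        attack = base_attack * (1.0 + mod_amount * env)  # self-modulated attack time
        env = min(env + 1.0 / (attack * sr), 1.0)
        out.append(env)
    return out

concave = recursive_attack(mod_amount=+0.8)   # attack slows down as the level rises
convex = recursive_attack(mod_amount=-0.8)    # attack speeds up as the level rises
```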

About Effects Chain Order

First off, there’s really no correct order. It’s all about preference, what you want to achieve and context. Although some effects do seem to work better in certain places of the signal path than in others. Still, feel free to experiment.

Inserts and Send Effects

Effects are chained in either series or parallel. For parallel processing, use send effects to process a copy of a signal (without affecting the original). Use auxiliary sends for time-based effects, such as reverb and delay.

No rules, but in most situations it makes more sense, and saves processing power and setup time, if for example reverb and delay are shared between all channels, rather than inserting a new instance of each effect in an insert slot on each channel.

Use insert effects to change the signal completely, e.g. dynamics processors like compressors, expanders, noise gates and transient shapers.

In terms of signal flow, the channel insert connections usually come before the channel EQ, fader and pan.

Daisy Chain Effects

It is possible to daisy-chain effects in the signal path. The order of the effects shapes the sound, and different orders have different impacts. Here’s a suggestion:

  1. Noise gate
  2. Subtractive EQ
  3. Dynamics (compressors, limiters, expanders)
  4. Gain (distortion, saturation)
  5. General EQ
  6. Time-based modulation (chorus, flanger, phaser)
  7. Pure time-based (delay, reverb)

To clean up the signal, put the gate first; it works better with a wider dynamic range (than, for example, after a compressor).

Then use an EQ to cut away the unwanted frequencies; do this to avoid enhancing them with later effects. (Also maybe roll off frequencies below 30 Hz.)

Then place a compressor to adjust the dynamics of the signal.

After that, put on some overdrive, boost or tape saturation effect. Such effects can also work well at the beginning of the chain, as part of the initial sound, since the harmonics generated by a distortion device bring richness to the effects that follow.

After the gain effects, use an EQ to shape the tonal balance, but be careful when boosting.

Towards the end of the chain, modulation effects are usually placed after gain-related effects and before pure time-based effects.

Pure time-based effects such as delay and reverb usually come last in the signal chain.
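
To make the “order matters” point concrete, here’s a tiny Python sketch where a chain is just function composition. The “gate” and “compressor” are toy placeholders, but you can see that once the compressor has squashed the signal, the gate starts eating material you meant to keep.

```python
def chain(*effects):
    # Daisy-chained effects as function composition: each effect's output
    # feeds the next one in the given order.
    def process(signal):
        for fx in effects:
            signal = fx(signal)
        return signal
    return process

# Toy placeholder "effects", hypothetical and only here to show that order matters.
def gate(s, threshold=0.1):
    return [x if abs(x) > threshold else 0.0 for x in s]

def comp(s, ratio=0.3):
    return [x * ratio for x in s]  # crude stand-in for a compressor

sig = [0.05, 0.5, 0.9, 0.2]
print(chain(gate, comp)(sig))  # gate sees the full dynamic range first
print(chain(comp, gate)(sig))  # after the "compressor", the gate also removes the 0.2
```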

The Mastering Chain

This post mainly covers effects chains for channels and buses, but when you enter the mastering stage, a conventional order for the mastering chain is:

  1. EQ
  2. Dynamics
  3. Post EQ
  4. Harmonic exciter
  5. Stereo imaging
  6. Loudness maximizer

Read more about mastering, http://palsen.tumblr.com/post/76108679797/mastering-bedroom-style.
