Holy Bot

Bedroom music production, gaming and random shit

About Euclidean Rhythms

Everybody seems to be talking about Euclidean rhythms, but here’s a short explanation on this blog anyway.

The Euclidean algorithm computes the greatest common divisor of two given integers. The same structure can be used to distribute events as evenly as possible, not only in music, but in many fields of scientific research, e.g. string theory and particle acceleration.

Euclidean rhythms are derived from two values: one that represents the length of the pattern in steps, and another that defines the number of hits or notes to be distributed evenly across that pattern. Any remainders are distributed as well.

Here are two examples. First, E(2,8): a sequence of eight steps with two hits, [11000000], where 1 is a hit and 0 represents a rest. Spread out evenly, it looks like this: [10001000].

A second example: E(5,8), with five hits. Starting from [11111000], via an intermediate step like [10101011], the pattern ends up like this once the remainders are distributed as evenly as possible: [10110110].

Any two numbers can be combined to generate a semi-repetitive pattern. It’s possible to offset the rotation of the sequence by a certain number of steps. And by playing several patterns with different lengths and offsets, complex polyrhythms can occur. For example, try playing E(2,8), described above, together with a rotated E(5,16) like this: [0010010010010010].
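
For the curious, here’s a minimal Python sketch of Bjorklund’s algorithm, the usual way these patterns are computed; the rotate helper and the offset of 2 used to reproduce the E(5,16) pattern above are my own additions for illustration.

```python
def euclid(hits, steps):
    """Bjorklund's algorithm: spread `hits` onsets as evenly as possible
    over `steps` steps. Assumes 0 < hits < steps."""
    counts, remainders = [], [hits]
    divisor = steps - hits
    level = 0
    while True:
        counts.append(divisor // remainders[level])
        remainders.append(divisor % remainders[level])
        divisor = remainders[level]
        level += 1
        if remainders[level] <= 1:
            break
    counts.append(divisor)

    pattern = []
    def build(lvl):
        if lvl == -1:
            pattern.append(0)
        elif lvl == -2:
            pattern.append(1)
        else:
            for _ in range(counts[lvl]):
                build(lvl - 1)
            if remainders[lvl] != 0:
                build(lvl - 2)
    build(level)
    first = pattern.index(1)
    return pattern[first:] + pattern[:first]  # start the pattern on a hit

def rotate(pattern, offset):
    """Offset the sequence by a number of steps."""
    offset %= len(pattern)
    return pattern[-offset:] + pattern[:-offset] if offset else pattern

print(euclid(2, 8))              # [1, 0, 0, 0, 1, 0, 0, 0]
print(euclid(5, 8))              # [1, 0, 1, 1, 0, 1, 1, 0]
print(rotate(euclid(5, 16), 2))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```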

Mixing at the Right Levels

The ear hears different frequencies with different sensitivity depending on how loud the sound is. The Fletcher-Munson curves, commonly known as equal-loudness contours, show the ear’s average sensitivity to different frequencies at various amplitude levels.

[Image: Fletcher-Munson equal-loudness contours]

Even if the tonal balance of the sound remains the same, mid-range frequencies sound more prominent at low volume, while at high listening volumes the lows and highs come forward and the mid range seems to back off.

In short, this explains why quieter music seems to sound less rich and full than louder music. Generally it’s better for the music to sound good as the volume increases.

As a consequence, you should edit, mix and work on your music at a high enough volume (not ridiculously loud), so you can make sure it doesn’t sound terrible when listened to at a higher level. As a music producer you want your music to sound best when the listener is paying full attention. But use caution, don’t damage your ears bla bla bla.

Vocal Delay Ducking

The normal way to treat a dry vocal is to put reverb and delay on it. But that can make the vocal a bit muddy.

To keep it in-your-face and preserve the clarity of the vocal, while still having an effect that makes it sound bigger, try ducking the volume of the delays whenever the dry vocal is active. To do so, side-chain the delay bus to the lead vocal track.

For example, put a delay device on a return bus, set a quarter-note delay with low feedback, and send the vocal track to it at a slightly lower level. On the same bus, insert a compressor and select the vocal track as the side-chain source. Set it up as you like, perhaps bringing down the wet parameter some.
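
In DSP terms the routing boils down to a delay whose output gets pushed down by an envelope follower on the dry vocal. Here’s a rough numpy sketch; the sample rate, delay time and ducking depth are made-up values, and the envelope-driven gain stands in for the compressor:

```python
import numpy as np

SR = 44100  # sample rate (assumed)

def envelope(x, attack_ms=5.0, release_ms=80.0, sr=SR):
    """One-pole envelope follower on the rectified signal."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = atk if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def ducked_delay(vocal, delay_s=0.5, feedback=0.3, duck_depth=0.8):
    """Simple feedback delay, attenuated while the dry vocal is active."""
    d = int(delay_s * SR)
    wet = np.zeros_like(vocal)
    for i in range(d, len(vocal)):
        wet[i] = vocal[i - d] + feedback * wet[i - d]
    env = envelope(vocal)
    # Side-chain: pull the wet signal down by up to duck_depth when the vocal is loud.
    gain = 1.0 - duck_depth * np.clip(env / (env.max() + 1e-12), 0.0, 1.0)
    return gain * wet  # blend this with the dry vocal to taste
```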

You can also try the same thing with a reverb.

Bedroom Studio Tips Revisited

Three years ago I posted a list of music production methods and tips on this blog, and it still gets some attention. Now, here’s some more good reading (I hope).

You should also check out the most popular post on this blog, with the best music production tips I can think of.

Translate Sub onto Smaller Speakers

There are a few things you can do to make your bass translate to smaller speakers like laptops, tablets and cellphones. First you need both fundamental and harmonic content in your bass. The fundamental frequency is the foundation note that represents the pitch being played, and the harmonics are the frequencies above it that support the fundamental. In short, it’s the higher-frequency harmonics that allow the sub to cut through the mix.

One idea is to create higher-frequency harmonics. The harmonics should be harmonically related to the fundamental frequency, even if the fundamental itself isn’t reproduced. (The harmonics trick your brain into hearing lower frequencies that aren’t really there.) Add a touch of harmonic saturation, a little warmth, a little fuzz, to help that sub cut through. The harmonic distortion adds high-frequency information that reveals presence on systems with poor bass response.

Also try copying the bass to a parallel channel, bitcrushing it, cutting all the lows and mixing it with the original bass.
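
Here’s a rough sketch of that parallel idea, assuming the bass is a numpy array of samples; the bit depth, crossover frequency and blend amount are made-up values:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # sample rate (assumed)

def parallel_crush(bass, bits=6, highpass_hz=300.0, mix=0.3):
    """Bitcrush a copy of the bass, cut its lows, and blend it back in."""
    q = 2 ** (bits - 1)
    crushed = np.round(bass * q) / q      # bit reduction adds harmonics
    b, a = butter(2, highpass_hz / (SR / 2), btype="highpass")
    crushed = lfilter(b, a, crushed)      # keep only the new top end
    return bass + mix * crushed           # parallel blend with the original
```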

If you’re beefing up your main bass by layering a separate, low-passed sine wave an octave below, perhaps try a square (or triangle) wave instead, to add some subtle higher frequencies that let the sub bass translate better than a pure sine would.

You can also try EQing the bass. Boost the harmonic multiples of the fundamental frequency to bring out some definition in the bass sound. Boosting above 300 Hz will bring out the bass’s unique timbral character. Actually, try around 1 kHz (but add a low-pass filter at around 2-3 kHz).

Use high-pass filtering on other tracks to clear out low-end frequencies and make room for the intended bass sound, and you can also side-chain your sub bass to the kick drum to keep them from fighting.

When it comes to kick drums, you can add a small click noise to help them translate onto smaller speakers.

P.S. There are also plugins that use psychoacoustics to calculate precise harmonics related to the fundamental tones of the bass.

Mixing with Pink Noise

Setting basic levels and pans is usually the first thing to do in the process of mixing. Choose a sound/channel, e.g. the kick drum, to act as your main level reference, and balance all the other instrument tracks against it. So: establish the initial gains, then refine with dynamics processing and stuff. That’s what I usually do.

But – here’s a neat trick to help you get the balance right: use pink noise as a level reference and balance each sound/channel against it.

Generate or play pink noise on the stereo bus. Calibrate the noise to a sensible reference level that allows ample headroom on your master bus when mixing. Use an averaging meter, i.e. an RMS-type meter, to establish the level of the noise.
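
If you want to roll your own, here’s a small Python sketch that generates pink noise by spectral shaping and calibrates it by RMS; the -18 dBFS target is just one example of a sensible reference level.

```python
import numpy as np

SR = 44100  # sample rate (assumed)

def pink_noise(n, sr=SR):
    """Pink noise via spectral shaping: a white spectrum scaled by 1/sqrt(f),
    so power falls off 3 dB per octave (equal energy per octave)."""
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    spectrum = np.random.randn(len(freqs)) + 1j * np.random.randn(len(freqs))
    spectrum[1:] /= np.sqrt(freqs[1:])
    spectrum[0] = 0.0  # no DC offset
    return np.fft.irfft(spectrum, n)

def calibrate_rms(x, target_dbfs=-18.0):
    """Scale a signal so its RMS level hits the target in dBFS."""
    rms = np.sqrt(np.mean(x ** 2))
    return x * (10.0 ** (target_dbfs / 20.0) / rms)

noise = calibrate_rms(pink_noise(10 * SR))  # ten seconds at -18 dBFS RMS
```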

Start by soloing the first instrument and playing it alongside the pink noise, balancing it directly against the noise by ear. That is, try to find the level at which the instrument is just audible above the noise, but not hidden. Now mute that instrument and solo the next one. Repeat. Kill the noise and voilà!

Mixing this way won’t make the mix perfect, but it’ll be accurate enough for a start and then some.

Another (general) tip is to listen to and learn from mixers who are much better than you, and whom you admire.

Note: Pink noise is a random signal, filtered to have equal energy per octave.

Compression Time Again

Compression is an invaluable tool that can be applied to almost any sound. So here’s a friendly reminder about compression, and about the attack and release settings on a compressor.

Most of the time, compression is used to control dynamics and tame peaks to get a smooth, consistent signal. Other times it’s used to add punch, impact or proximity, or for tonal control.

Four Settings

There are four main settings on most compressors. The threshold controls the point at which compression begins. The ratio sets how much compression is applied. (A so-called limiter is a compressor with a high ratio, e.g. inf:1, that stops the signal at the set threshold.) Then there are the attack and release settings. Attack sets how long it takes to reach maximum compression once the signal exceeds the threshold, and release sets how long it takes for compression to stop after the signal drops below the threshold. (Some compressors feature an auto release, which adjusts the release time based on the incoming signal.)
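
To make those four settings concrete, here’s a minimal digital compressor in Python – a simplified feed-forward sketch, not a model of any particular unit; the default values are arbitrary:

```python
import numpy as np

def compress(x, threshold_db=-18.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0, sr=44100):
    """Feed-forward compressor: gain computer plus attack/release smoothing."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)    # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    target = over * (1.0 - 1.0 / ratio)              # desired gain reduction (dB)
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gr = np.zeros_like(x)
    state = 0.0
    for i, g in enumerate(target):
        # Attack while reduction is increasing, release while it is decreasing.
        coeff = atk if g > state else rel
        state = coeff * state + (1.0 - coeff) * g
        gr[i] = state
    return x * 10.0 ** (-gr / 20.0)                  # apply the smoothed gain
```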

Attack

Attack controls how much initial impact gets through.

A fast attack time shaves off the initial transient impact, which can make the sound more consistent and controlled. But taken too far, the sound will lose vibrance and seem further away.

A slow attack time lets a lot of the transient information through. The initial impact comes through and the compressor starts to work after that. This can make the sound punchy, big and aggressive, but not very consistent dynamically.

Release

For release time there are again two options: fast and slow. In general, a fast release can render a more aggressive, gritty sound – the sustain is sort of brought up, meaning more perceived loudness. But when the release time is too fast, it can sound exaggerated, distorted and bad, and pumping artifacts can occur.

A slow release time gives more dynamic control and more smoothness, but can also sound a bit distant. And if overdone, the compressor won’t release in time for the next hit to come through, which can suck the life out of the initial impact and sound flat.

Stack Compressors

An effective way to stack compressors is to put the compressor with the fast attack time first and the compressor with the slow attack time second. The first compressor smooths out the transients and makes the initial hits more consistent, while the second, fed by the dynamically controlled signal, accentuates those hits.
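
In terms of the compress() sketch above, the stack could look like this (the settings are made up):

```python
# x: a drum bus as a numpy float array; compress() as sketched above.
controlled = compress(x, attack_ms=1.0, release_ms=80.0)         # fast attack: tame transients
punchy = compress(controlled, attack_ms=30.0, release_ms=150.0)  # slow attack: accentuate hits
```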

Add Life to Your Mix

Sometimes when I read about music production and audio engineering I come across ideas that I personally wouldn’t use in my music, but that could nevertheless be interesting – at least in theory – and perhaps someone else dares to try them.

Here’s one: record your “as is” mix from your monitor speakers, using a couple of microphones, and then blend the recording with your final mix.

This could add vibrance and “realism”. It could of course also clutter your mix if you overdo it. If needed, try nudging the recording around to play with the phase relationship.

Recording your mix like this can add some analog imperfection by revealing a little of the studio’s ambience, and the color of the mics, preamps and monitors will also print onto this sound layer. And you don’t need to record in the studio; you could put the monitors in a (non-acoustically treated) reverberant room, or record with an open window… You get the drift.

No Reason to Go Back

I started making electronic music on the Amiga 500, using music trackers, many, many years ago.

I then got a PC and used several shady Cubase versions. After that, I got a Mac and started using Logic for a while. At that time FruityLoops was weak and Reason’s sequencer wasn’t in a good place. But then something happened – Reason 6.5 introduced rack extensions and shit. And then came version 7, and I thought it was the greatest. Everything was fine – for a short while. When Propellerhead released version 8, focus had shifted to the surface, and community building seemed to be the new black. So I switched to Ableton Live.

Reason 9 was just revealed. It adds pitch edit, scales and chords, note echo and dual arpeggio. What do you guys think? Well, I for one am not going back.
