Holy Bot

Bedroom music production, gaming and random shit

About Euclidean Rhythms

Everybody seems to be talking about Euclidean rhythms, but here’s a short explanation on this blog anyway.

The Euclidean algorithm computes the greatest common divisor of two given integers. The same idea can be used to distribute events as evenly as possible, not only in music but in many fields of scientific research, e.g. string theory and particle acceleration.

Euclidean rhythms are derived from two values: one that gives the length of the pattern in steps, and one that gives the number of hits (notes) to distribute as evenly as possible across that pattern. Any remainder is also distributed as evenly as possible.

Here are two examples. First, E(2,8): a sequence of eight steps with two hits. Packed together it’s [11000000], where 1 is a hit and 0 represents a rest. Spread out evenly, it becomes [10001000].

A second example, E(5,8), with five hits: starting from [11111000], the remainders are distributed as evenly as possible (via an intermediate step like [10101011]), ending up as [10110110].

Any two numbers can be combined to generate a semi-repetitive pattern. It’s also possible to offset the rotation of the sequence by a number of steps, and by playing several patterns with different lengths and offsets, complex polyrhythms emerge. For example, try playing E(2,8), described above, together with E(5,16), like this: [0010010010010010]. A small code sketch of the idea follows below.
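For the curious, here’s a minimal Python sketch of one common way to generate these patterns (my own illustration, not from any particular sequencer). It uses the modulo formulation, which matches the classic Bjorklund/Euclidean algorithm up to rotation of the pattern; the function names and the rotate parameter are just my choices:

    def euclid(hits, steps, rotate=0):
        """Distribute `hits` onsets as evenly as possible over `steps` steps."""
        # Modulo (Bresenham-like) formulation; equals Bjorklund's output
        # up to rotation of the pattern.
        return [1 if ((i + rotate) * hits) % steps < hits else 0
                for i in range(steps)]

    def show(pattern):
        return "[" + "".join(str(x) for x in pattern) + "]"

    print(show(euclid(2, 8)))   # [10001000]
    print(show(euclid(5, 8)))   # [10101101], a rotation of [10110110]
    print(show(euclid(5, 16)))  # a rotation of [0010010010010010]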

Mixing at the Right Levels

The ear perceives different frequencies at different loudness levels. The Fletcher-Munson curves, commonly known as equal-loudness contours, show the ear’s average sensitivity to different frequencies at various amplitude levels.

[Figure: equal-loudness contours (Fletcher-Munson curves)]

Even if the tonal balance of the sound remains the same, at low volume the mid-range frequencies sound more prominent, while at high listening volumes the lows and highs come forward and the mid range seems to back off.

In short, this explains why quieter music seems to sound less rich and full than louder music. Generally it’s better for the music to sound good as the volume increases.

As a consequence, you should edit, mix and work on your music at a high enough volume (not ridiculously loud) to make sure it doesn’t sound terrible when played back at a higher level. As a music producer, you want your music to sound its best when the listener is paying full attention. But use caution – don’t damage your ears, bla bla bla.

Vocal Delay Ducking

The normal way to treat a dry vocal is to put reverb and delay on it, but that can make the vocal a bit muddy.

To keep it in-your-face and preserve the clarity of the vocal, while still having an effect that makes it sound bigger, try ducking the volume of the delays whenever the dry vocal is active. To do so, side-chain the delay bus to the lead vocal track.

For example, put a delay device on a return bus, set a quarter-note delay with low feedback, and send the vocal track to it at a slightly lower level. On the same bus, add a compressor and select the vocal track as the side-chain source. Set it up as you like; perhaps bring down the wet parameter some.

You can also try the same thing with a reverb.
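If you prefer to see the signal flow as code, here’s a rough offline sketch in Python of the same trick: a quarter-note delay whose wet signal is ducked by an envelope follower on the dry vocal. Everything here is an assumption for illustration; the sample rate, times and amounts are arbitrary, and the one-pole follower stands in for a real side-chained compressor.

    import numpy as np

    SR = 44100  # assumed sample rate

    def quarter_delay(dry, bpm=120.0, feedback=0.35, taps=4):
        # Feedback delay approximated as a few decaying quarter-note taps.
        d = int(SR * 60.0 / bpm)  # one quarter note in samples
        wet = np.zeros(len(dry) + d * taps)
        for tap in range(1, taps + 1):
            wet[tap * d : tap * d + len(dry)] += dry * feedback ** (tap - 1)
        return wet[:len(dry)]  # truncate the tail for simplicity

    def envelope(x, attack_ms=5.0, release_ms=150.0):
        # One-pole envelope follower, standing in for the compressor's detector.
        a_att = np.exp(-1.0 / (SR * attack_ms / 1000.0))
        a_rel = np.exp(-1.0 / (SR * release_ms / 1000.0))
        env = np.empty(len(x))
        level = 0.0
        for i, s in enumerate(np.abs(x)):
            a = a_att if s > level else a_rel
            level = a * level + (1.0 - a) * s
            env[i] = level
        return env

    def duck_delay(vocal, send=0.5, amount=0.8):
        # Duck the wet delay whenever the dry vocal is active.
        wet = quarter_delay(vocal)
        follower = envelope(vocal) / (np.max(np.abs(vocal)) + 1e-9)
        gain = 1.0 - amount * np.clip(follower, 0.0, 1.0)
        return vocal + send * wet * gain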

Bedroom Studio Tips Revisited

Three years ago I posted a list of music production methods and tips on this blog that still gets some attention. Now, here are some more good reads (I hope).

Moreover, you really should check out the most popular post on this blog about the best tips on music production that I can think of.

Headroom for MP3

I’ve written about the importance of headroom when submitting your track to a professional mastering engineer, but you should also pay attention to headroom when you master on your own and when you encode MP3s.

Okay, so when a track is mastered at 0 dB (the maximum level for digital audio files), many converters and encoders are prone to clip. Lossy compression formats use psychoacoustic models to remove audio information, and in doing so they introduce an approximation error, a noise which can increase peak levels and cause clipping in the audio signal – even if the uncompressed source file appears to peak under 0 dB.

In Practice

For example, SoundCloud transcodes uploaded audio to 128 kbps MP3 for streaming. In this scenario, use a true peak limiter and set the margin according to the source material. A ceiling of -1.0 or -1.5 dBFS should avoid distortion (sometimes -0.3, -0.5 or -0.7 will do, but it’s safer to keep a greater margin).
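To see why sample peaks alone aren’t enough, here’s a small Python experiment (my own illustration, not a standards-compliant ITU-R BS.1770 true peak meter). Oversampling approximates the reconstructed analog waveform and reveals inter-sample peaks above the highest stored sample value:

    import numpy as np
    from scipy.signal import resample_poly

    def dbfs(x):
        return 20 * np.log10(np.max(np.abs(x)))

    SR = 44100
    t = np.arange(SR) / SR
    # A sine near Nyquist whose samples straddle the waveform's true peaks.
    x = np.sin(2 * np.pi * 11025 * t + 0.6)
    x /= np.max(np.abs(x))  # sample peak is now exactly 0.0 dBFS

    x_os = resample_poly(x, 4, 1)  # 4x oversampling approximates the analog waveform

    print(f"sample peak: {dbfs(x):+.2f} dBFS")    # +0.00 dBFS
    print(f"true peak:   {dbfs(x_os):+.2f} dBFS")  # well above 0 dBFS

If the true peak already exceeds 0 dBFS before encoding, the MP3’s added approximation noise only makes things worse, hence the extra margin.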

Add Life to Your Mix

Sometimes when I read about music production and audio engineering I come across ideas that I personally wouldn’t use in my music, but that could nevertheless be interesting – at least in theory – and perhaps someone else dares to try.

Here’s one: record your “as is” mix from your monitor speakers, using a couple of microphones, and then blend the recording with your final mix.

This could add vibrance and “realism”. It could of course also clutter your mix if you overdo it. If needed, try nudging the recording in time to play with the phase relationship.

Recording your mix like this can add some analog imperfection by capturing a little of the studio’s ambience, and the character of the mics, preamps and monitors will also color this sound layer. And you don’t need to record in the studio: you could put the monitors in a reverberant room without acoustic treatment, or record with an open window… You get the drift.
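If you try it, it helps to time-align the room recording against the mix before blending, so the phase games are deliberate rather than accidental. Here’s a rough Python sketch of that step, assuming mono NumPy arrays at the same sample rate and using cross-correlation to estimate the mics’ delay:

    import numpy as np

    def align_and_blend(mix, room, blend=0.15, chunk=192000):
        # Estimate how late the room recording arrives via cross-correlation.
        n = min(len(mix), len(room), chunk)
        corr = np.correlate(room[:n], mix[:n], mode="full")
        lag = int(np.argmax(np.abs(corr))) - (n - 1)  # positive: room lags mix
        room = np.roll(room, -lag)  # np.roll wraps around; pad in real use
        m = min(len(mix), len(room))
        out = mix[:m] + blend * room[:m]
        return out / (np.max(np.abs(out)) + 1e-9)  # crude peak normalize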

No Reason to Go Back

I started making electronic music on the Amiga 500, using music trackers, many years ago.

I then got a PC and used several shady Cubase versions. After that, I got a Mac and used Logic for a while. At that time FruityLoops was weak and Reason’s sequencer wasn’t in a good place. But then something happened – Reason 6.5 introduced rack extensions and shit. Then came version 7, and I thought it was the greatest. Everything was fine – for a short while. When Propellerhead released version 8, the focus had shifted to the surface, and community building seemed to be the new black. So I switched to Ableton Live.

Reason 9 was just revealed. It adds pitch editing, scales and chords, note echo and dual arpeggios. What do you guys think? Well, I for one am not going back.

FM à la Analog Four

One of my favorite synths is the Analog Four, and with OS update 1.22 a while back, Elektron added new LFO synchronization modes and destinations and made this synth even more awesome. (If I could take only one of my synths to a deserted island, it would be the Analog Four.) In short, this means I can now get pitch-tracked LFO FM behavior.

Here’s one way to start (guidelines, not rules):

  1. Set a triangle waveform (as a substitute for a sine) on an oscillator.
  2. Open up both filters.
  3. Set the LFO speed to a multiple of 16.
  4. Set the LFO multiplier over 512 and synchronize it to the oscillator you’re working with.
  5. Set Trig Mode so the LFO restarts when a note is played.
  6. Choose sine as the LFO waveform.
  7. Set the LFO destination to the oscillator’s frequency or pitch modulation. (Also try different destinations later, like the filter frequency.)
  8. Set the depth of the LFO modulation (or, if you’re using the first oscillator, let the second assignable envelope control it).
  9. If you used Depth A in the step above, then try fading the modulation in or out.

Also, there are a few videos on YouTube describing these methods, like this one, which is a good walkthrough, even though it’s a bit unfocused and lengthy.
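To make “pitch-tracked LFO FM” concrete without the hardware, here’s a rough NumPy sketch of the underlying idea, with the modulator running at a fixed multiple of the note frequency, like an LFO synced to the oscillator with a high multiplier. It models nothing of Elektron’s actual engine, and the ratio and depth values are arbitrary:

    import numpy as np

    SR = 44100

    def fm_note(note_hz, ratio=2.0, depth=3.0, seconds=1.0):
        # Carrier sine phase-modulated by a sine "LFO" whose rate tracks the pitch.
        t = np.arange(int(SR * seconds)) / SR
        lfo = np.sin(2 * np.pi * note_hz * ratio * t)  # restarts at phase 0, like trig mode
        return np.sin(2 * np.pi * note_hz * t + depth * lfo)

    # Because the modulator tracks the carrier, these notes share the same
    # harmonic character, just transposed.
    a3 = fm_note(220.0)
    a4 = fm_note(440.0)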

Understanding Hip Hop Production

I imagine that most readers of this blog are aspiring producers of electronic music. Its subject matter isn’t limited to electronics, but I haven’t mentioned the words “flute” or “acoustic guitar” even once since the start of it all.

Anyway, electronic music is such a wide genre, and it has more to do with gear and techniques than with a certain style of music.

So I’m into music production and old analog hardware. Here’s the thing, I’m into hip hop, and mostly trapish contemporary shit for the time being. And as a vintage gear head, I feel kind of alone.

Of course I’m influenced by Aphex Twin, Kraftwerk, older Depeche Mode, Warp Records, Hyperdub, Skinny Puppy, Front 242 and so on, but equally so by hip hop producers, such as J Dilla, DJ Premier, Timbaland, RZA, The Neptunes, Dr. Dre, DJ Shadow, RJD2.

And right now, I think my favorite producers/songwriters are Noah 40 Shebib and PartyNextDoor. And Noisia, and Zomby, and Burial, and perhaps Datsik around 2011.

Many electronic producers don’t seem to appreciate or understand hip hop production. I’m not talking about the people mentioned above, but about the generic bedroom electronic music producer. They might think of hip hop as turntables and loops. But modern hip hop production uses the same gear and shares many points of contact with electronic music. (Of course, that’s kind of banal, because so do all popular styles of music.)

I don’t know, I don’t have an agenda here; I’ve just been thinking about this while browsing through different forums and groups. Most synthesizer connoisseurs and nerds make techno, electronica, house, synthpop, industrial, EDM or synthwave; not many make hip hop.
