# Holy Bot

### making music

There hasn’t been any time for composing “real” tracks lately. I’ve only had time to doodle on this Eurorack modular system at the dinner table. And drink coffee. But let me tell you a little (not all!) of what’s going on in this patch:

A triangle wave is modulated through linear FM by an oscillator with a variable shape. A digital phase-distortion oscillator is gently folded before going to the mixer. A sub oscillator also appears there.

A couple of cycling CV curves are generated in a semi-random fashion and fed into a quantizer set to a C harmonic minor scale. This is routed as 1 V/octave to the oscillators.

The filter is self-oscillating and tracks the same pitch.

The tempo is set to 90 BPM and triggers straight eighths on the quantizer and envelope.

Now this is only a snapshot of a patch in progress. All those patches will be lost in time, like tears in rain.

Everybody seems to be talking about Euclidean rhythms, but here’s a short explanation on this blog anyway.

The Euclidean algorithm computes the greatest common divisor of two given integers. The same principle can be used to distribute events as evenly as possible, not only in music, but in many fields of scientific research, e.g. string theory and particle accelerators.

Euclidean rhythms are derived from two values: one that represents the length of the pattern in steps, and the other that defines the hits or notes to be distributed evenly across that pattern. Any remainders are also to be distributed.

Here are two examples. First, E(2,8): a sequence of eight steps and two hits, [11000000], where 1 is a hit and 0 represents a rest. Spread out as evenly as possible, it looks like this: [10001000].

A second example, E(5,8), with five hits: starting from [11111000], the hits and rests are interleaved, via [10101011], until the remainders are distributed as evenly as possible: [10110110].

Any two numbers can be combined to generate a semi-repetitive pattern. It’s also possible to offset the rotation of the sequence by a certain number of steps. And by playing several patterns with different lengths and offsets, complex polyrhythms can occur. For example, try playing E(2,8), described above, together with E(5,16) like this: [0010010010010010].
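The distribution above can be sketched in a few lines of Python. This is a simple accumulator method rather than Euclid’s original subtraction scheme (the function names are my own); it produces the same patterns up to rotation, so a `rotate` helper lines the output up with the examples above:

```python
def euclid(hits, steps):
    """Distribute `hits` onsets as evenly as possible over `steps` steps.
    Accumulator method: add `hits` each step, emit a 1 on overflow."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += hits
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)
        else:
            pattern.append(0)
    return pattern

def rotate(pattern, offset):
    """Offset the rotation of the sequence by `offset` steps."""
    return pattern[offset:] + pattern[:offset]

print(rotate(euclid(2, 8), 3))  # [1, 0, 0, 0, 1, 0, 0, 0] = E(2,8)
print(rotate(euclid(5, 8), 1))  # [1, 0, 1, 1, 0, 1, 1, 0] = E(5,8)
```

Feed two of these patterns, with different lengths and rotations, to two trigger outputs and the polyrhythms appear by themselves.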

# Mixing at the Right Levels

There’s this property of the ear: it hears different frequencies at different levels. The Fletcher-Munson curves, commonly known as equal-loudness contours, indicate the ear’s average sensitivity to different frequencies at various amplitude levels.

Even if the tonal balance of the sound remains the same, midrange frequencies sound more prominent at low volume, while at high listening volumes the lows and highs sound more prominent and the midrange seems to back off.

In short, this explains why quieter music seems to sound less rich and full than louder music. Generally it’s better for the music to sound good as the volume increases.

As a consequence, you should edit, mix and work on your music at a high enough volume (not ridiculously loud), so that you can make sure it doesn’t sound terrible when it’s listened to at a higher level. Because as a music producer you want your music to sound its best when the listener is paying full attention. But use caution, don’t damage your ears bla bla bla.

That modular thing, that escalated quickly.

It’s like building a character in a role-playing video game. You distribute endurance, strength, dexterity and such to make the avatar/modular reflect your play style. Some builds will render an East Coast synth voice, while others are suited for a more experimental kind of noise.

My first iteration of modules was based on dedicated, no-frills core functionality, such as Doepfer’s essential modules. It was good to start with the basics. By doing this I was able to test different routings, patch them how I wanted and learn the signal path.

I didn’t want to build a complete system made entirely from one manufacturer’s modules, because part of the beauty of modular is putting together a rack of your own, made of different modules from different places and with different approaches.

From the beginning I decided on a quite small system, a limited case of 6U, 84 HP. But one or a few functions per module demand more space, so after a while I began to replace some with functionally dense modules; in other words, I levelled up. Still, I didn’t want to go too far; I don’t want a computer-like module that solves everything – I reckon that would be contra-modular.

For the time being, I run the sequencer/clock outside of the system. Maybe that’s a little bit of cheating, but this way I save space in the case. Anyway, I’m using my Analog Keys, and with it I can drive two separate sequences, process the modular signals through the synth’s filters, envelopes, effects and so on, and trigger my TR-606. And I can still use all four voices of the synth itself at the same time. The Rosie output module has send and return for external effects, so I have my BigSky plugged in there. All in all, it’s quite a powerful and portable little setup.

As for the case, I just cut up a cardboard box and gaffered it together to fit the Happy Ending Kit rails. It’s very slim, very light, maybe not so stylish though.

And the housing really is a project. It’s like a doll house being defragmented, partly out of interior design thinking. Well, I want it to look nice and neat. Then again, most time is spent researching which modules go in and out, based on functionality and compatibility with the ecosystem.

Never mind the patch in the picture, I just needed something to make sound and didn’t want to clutter the image too much. The photo is from the kitchen table.

Many of the vintage Roland synths – like Jupiter-8, MC-202 and Juno-106 – feature a delayed LFO which could sound like a vibrato that comes in after a while on longer notes.

It’s possible to do this effect with the Doepfer A-147-2 VCDLFO, but the module can be a little confusing. So here’s how to do it (involving a Make Noise FUNCTION, but don’t worry, it’s adaptable):

1. Apply a gate to Signal Input of Make Noise FUNCTION.
2. Send the EOR (End of Rise) output of FUNCTION to Delay Reset on Doepfer A-147-2.
3. CV pitch goes in CV (1 V/octave) input of a VCO.
4. Connect the signal from Out (not Delay Out) of A-147-2 to another CV input, preferably with an attenuator, on VCO. (If the VCO doesn’t have multiple CV inputs, then use a linear mixer.)
5. Set the Delay knob on A-147-2 between 1 and 2; the delay time actually runs the wrong way around.
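The patch above is hardware, but the behavior itself is easy to sketch in software. Here’s a minimal Python version (function name and parameter values are my own, not anything from the modules): a per-sample pitch curve where the vibrato ramps in only after a delay, just like those vintage Roland LFOs:

```python
import math

def delayed_vibrato(freq_hz, note_len_s, lfo_hz=5.0, depth_semitones=0.3,
                    delay_s=0.4, fade_s=0.3, sr=48000):
    """Pitch curve (in Hz) with vibrato that fades in after `delay_s` seconds."""
    curve = []
    for n in range(int(note_len_s * sr)):
        t = n / sr
        # vibrato amount ramps from 0 to 1 over `fade_s` once the delay has passed
        amount = min(max((t - delay_s) / fade_s, 0.0), 1.0)
        semis = depth_semitones * amount * math.sin(2 * math.pi * lfo_hz * t)
        curve.append(freq_hz * 2 ** (semis / 12))
    return curve
```

The gate in step 1 corresponds to the note starting; FUNCTION’s rise stage plays the role of `delay_s` here, and the attenuator in step 4 is `depth_semitones`.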

Now I’ve just sold my A-147-2 module. It was nice and full of functionality, but it overlapped with other modules, and every HP counts. That cool delayed LFO effect, though, I’m no longer able to patch. I think.

I didn’t want to dive into the ocean of modular synthesis. For many years I resisted. I thought the practice was all about experimenting and jamming – all about the live session in itself. And for me, the things that come first in all of this are songwriting, composition, arrangement, structure, mixing, and postproduction such as mastering.

And while I enjoy sound design very much, and regard it as an important part of making music, I thought Eurorack modular systems primarily made noises that were hard to integrate into more conventional tracks. And then it’s not possible to save presets.

But now I’m thinking: why not have both? I can still do my old routines, and at the same time care for a little ecosystem with an ephemeral nature on the side. I could set limits.

So I’m building a basic synth voice, something in that direction. The modular rack is made of dedicated modules (more or less) and has many modulation possibilities. It kind of goes like this: VCO > MIX > VCF > VCA > ENV > LFO.

I don’t want to use multifunctional toolboxes, such as the Expert Sleepers Disting, or advanced generators like Make Noise Maths, to begin with. I don’t want a computer to do everything – that would defeat the purpose of a modular system (although a couple of combined utility functions are alright, like Mutable Instruments Kinks or Intellijel Triatt). I’m not using a self-contained, semi-modular synth – like a Moog Mother-32 or an Arturia MiniBrute 2 – as a starting point, because I want building blocks; different, exchangeable modules. (I’m, however, using an Elektron Analog Keys to control everything and then some.) For the modular system will grow, evolve organically, and stuff will be supplemented or replaced.

From the get-go, the modular is mainly Doepfer, but it will be customized with other equivalent modules or upgrades. I’d like to say I’m expanding slowly to get a chance to thoroughly understand the modules and how they interact with each other, but to tell you the truth, this configuration has really exploded. But I guess, and hope, it will cool down. It takes time, and perhaps it’s the process per se that is the point.

Another agenda is to acquire used modules on the secondhand market, as far as possible. I want to be able to try modules out and then sell them if they don’t fit, without losing too much money. This approach has been working great, with the exception of a friend of mine who is building a uBraids for me.

P.S. Ableton Live 10 is officially released today.

The normal way to treat a dry vocal is to put reverb and delay on it. But that can make the vocal a bit muddy.

To keep it in-your-face and preserve the clarity of the vocal, while still having an effect that makes it sound bigger, try ducking the volume of the delays whenever the dry vocal is active. To do so, side-chain the delay bus to the lead vocal track.

For example, use a delay device on a return bus, set a quarter-note delay with low feedback, and send the vocal track to it at a slightly lower volume. On the same bus, put a compressor and select the vocal track as the side-chain source. Set it up as you like; perhaps bring down the wet parameter some.

You can also try the same thing with a reverb.
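As a rough sketch of what that compressor on the delay bus is doing, here’s a minimal Python version of the ducking (an envelope follower on the vocal plus downward gain on the delays; the function name and constants are illustrative, not from any particular plugin):

```python
def duck(delay_bus, key, threshold=0.1, ratio=4.0, attack=0.9, release=0.999):
    """Reduce `delay_bus` gain while the `key` (dry vocal) is active."""
    env, out = 0.0, []
    for d, k in zip(delay_bus, key):
        # one-pole envelope follower on the side-chain key
        coeff = attack if abs(k) > env else release
        env = coeff * env + (1 - coeff) * abs(k)
        gain = 1.0
        if env > threshold:
            # downward compression: the louder the vocal, the quieter the delays
            gain = (env / threshold) ** (1 / ratio - 1)
        out.append(d * gain)
    return out
```

When the vocal pauses, `env` decays slowly (the `release`), so the delay tails swell back up in the gaps, which is exactly the effect you hear.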

Shimmer is a feedback reverb pitch-shift effect made popular by Brian Eno and Daniel Lanois. The idea is to feed a reverb into a pitch shifter and back again, so each repetition gets shifted one octave up. In this case I’m using Ableton Live with stock effects, the Reverb and the Grain Delay, where the signal gets delayed and pitch shifted. You can use these guidelines in different environments (hardware/software), but here’s how I do it:

1. Insert two Return Tracks and put a Reverb on A.
2. Turn off Input Processing Hi Cut, set Global Quality to High, turn off Diffusion Network High, set a fairly long Decay Time and turn the Dry/Wet to 100 %.
3. Enable Send B on the Return Track A and set it to max.
4. Use the Grain Delay on Return Track B.
5. Set Frequency to 1.00 Hz and Pitch to 12.0.
6. Enable Send A on the Return Track B and set it to max.
7. Dial in Send A of the track with the signal source that you want to shimmer.

Also try to bring in Send B on the signal. And play with the Size and Diffuse controls of the Reverb.
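For the curious, the feedback idea can be sketched outside the DAW too. Here’s a deliberately crude Python version (the octave shift is naive 2x resampling with no time correction, nothing like the Grain Delay’s granular shifting, and all names and values are my own): each pass takes the previous layer, shifts it up an octave, attenuates it, and adds it back one delay later:

```python
def shimmer(dry, delay_samples=24000, feedback=0.5, num_passes=6):
    """Crude shimmer: octave-up feedback delay. Each repeat is an octave
    higher and `feedback` times quieter than the previous one."""
    out = list(dry) + [0.0] * delay_samples * num_passes
    layer, offset = list(dry), 0
    for _ in range(num_passes):
        # octave up by reading at double speed (drops every other sample)
        layer = [layer[i] * feedback for i in range(0, len(layer) - 1, 2)]
        offset += delay_samples
        for i, s in enumerate(layer):
            out[offset + i] += s
    return out
```

The rising cascade of ever-higher, ever-quieter copies is the shimmer; in the Live patch the Reverb smears each copy into a wash instead of leaving it as a discrete echo.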

There are a few things you can do to make your bass heard on smaller speakers like laptops, tablets and cellphones. First you need both fundamental and harmonic content in your bass. The fundamental frequency is the foundation, the pitch that is played, and the harmonics are the higher frequencies that support the fundamental. In short, it’s the higher-frequency harmonics that allow the sub to cut through the mix.

One idea is to create higher-frequency harmonics. The harmonics should be related to the fundamental frequency, even where the playback system doesn’t reproduce the fundamental itself. (The harmonics trick your brain into hearing lower frequencies that aren’t really there.) Add a touch of harmonic saturation, a little drive, a little warmth, a little fuzz, to help that sub cut through. The harmonic distortion adds high-frequency information that reveals presence on systems with poor bass response.
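A minimal sketch of the saturation idea in Python, assuming a plain tanh soft clipper (many saturation plugins do something fancier): the input is a pure 50 Hz sine sub, and the output has the same pitch but gains odd harmonics at 150 Hz, 250 Hz and so on:

```python
import math

def saturate(samples, drive=3.0):
    """Soft clipper: tanh saturation adds odd harmonics above the fundamental."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]

sr = 48000
# a pure 50 Hz sine sub, 0.1 s long
sub = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr // 10)]
warm = saturate(sub)  # same pitch, now with 150 Hz, 250 Hz, ... content
```

Turning up `drive` flattens the peaks harder and pushes more energy into the upper harmonics, which is the "cut through on a phone speaker" part.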

Also try copying the bass to a parallel channel, bitcrushing the higher harmonics, cutting all the lows and mixing it with the original bass.

If you’re beefing up your main bass by layering a separate, low-passed sine wave an octave below, perhaps try a square (or triangle) instead, to add some subtle higher frequencies that let the sub bass translate better than a pure sine wave.

You can also try to EQ the bass. Boost the harmonic multiples of the fundamental frequency to bring out some definition from the bass sound. Boosting above 300 Hz will bring out the bass’s unique timbral character; actually, try around 1 kHz (but add a low-pass filter at around 2-3 kHz).

Use high-pass filtering (to clear the low-end frequencies and make room for the intended bass sound), and you can also side-chain your sub bass to keep it from fighting with the kick drum.

When it comes to kick drums you can add a small click noise to help it translate onto smaller speakers.

P.S. There are also plugins that use psycho-acoustics to calculate precise harmonics that are related to the fundamental tones of bass.