There hasn’t been any time for composing “real” tracks lately. I’ve only had time to doodle on the Eurorack modular system at the dinner table. And drink coffee. But let me tell you a little (not all!) of what’s going on in this patch:
A triangle wave is modulated by a variable-shape wave through linear FM. A digital phase-distortion voice is gently folded before going to the mixer, where a sub oscillator also appears.
A couple of cycling CV curves are generated in a semi-random fashion and fed into a quantizer set to a C harmonic minor scale. This is routed as 1 V/octave to the oscillators.
The filter is self-oscillating and tracks the same pitch.
The tempo is set to 90 BPM and triggers straight eighths on the quantizer and envelope.
Now this is only a snapshot of a patch in progress. All those patches will be lost in time, like tears in rain.
Everybody seems to be talking about Euclidean rhythms, but here’s a short explanation on this blog anyway.
The Euclidean algorithm computes the greatest common divisor of two integers. The same idea can be used to distribute events as evenly as possible, not only in music but in many fields of scientific research, e.g. string theory and particle accelerator timing.
Euclidean rhythms are derived from two values: one that represents the length of the pattern in steps, and one that defines the number of hits (notes) to be distributed evenly across that pattern. Any remainder is also distributed as evenly as possible.
Here are two examples. First, E(2,8): a sequence of eight steps and two hits, where 1 is a hit and 0 is a rest. Spread out evenly, it looks like this: [1 0 0 0 1 0 0 0].
A second example, E(5,8), with five hits, looks like this after the remainders have been distributed as evenly as possible: [1 0 1 1 0 1 1 0].
Any two numbers can be combined to generate a semi-repetitive pattern. It’s possible to offset the rotation of a sequence by a certain number of steps, and by playing several patterns with different lengths and offsets, complex polyrhythms can occur. For example, try playing E(2,8), as described above, together with E(5,16).
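The even distribution can be computed in a few lines. Here’s a minimal Python sketch using a common modulo formulation; note that it produces the Euclidean pattern up to rotation, so a hardware sequencer may start the same pattern on a different step:

```python
def euclid(hits, steps):
    """Distribute `hits` as evenly as possible across `steps`.
    Returns a list where 1 is a hit and 0 is a rest (up to rotation)."""
    if hits <= 0 or steps <= 0:
        return [0] * max(steps, 0)
    return [1 if (i * hits) % steps < hits else 0 for i in range(steps)]

print(euclid(2, 8))  # [1, 0, 0, 0, 1, 0, 0, 0]
print(euclid(5, 8))  # [1, 0, 1, 0, 1, 1, 0, 1], a rotation of [1, 0, 1, 1, 0, 1, 1, 0]
```

To offset the rotation, slice and re-join: `pattern[k:] + pattern[:k]`.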
The ear doesn’t respond equally to all frequencies, and its response changes with level. The Fletcher-Munson curves, commonly known as equal-loudness contours, chart the ear’s average sensitivity to different frequencies at various amplitude levels.
Even if the tonal balance of the sound remains the same, mid-range frequencies sound more prominent at low volume, while at high listening volumes the lows and highs come forward and the mid range seems to back off.
In short, this explains why quieter music seems to sound less rich and full than louder music. Generally it’s better for the music to sound good as the volume increases.
As a consequence, you should edit, mix and work on your music at a high enough volume (not ridiculously loud), so you can make sure it doesn’t sound terrible when it’s listened to at a higher level. As a music producer you want your music to sound best when the listener is paying full attention. But use caution, don’t damage your ears, bla bla bla.
Shimmer is a feedback reverb/pitch-shift effect made popular by Brian Eno and Daniel Lanois. The idea is to feed a reverb into a pitch shifter and back again, so each repetition gets shifted one octave up. In this case I’m using Ableton Live with stock effects: the Reverb, and the Grain Delay, where the signal gets delayed and pitch shifted. You can use these guidelines in other environments (hardware/software), but here’s how I do it:
Insert two Return Tracks and put a Reverb on A.
Turn off Input Processing Hi Cut, set Global Quality to High, turn off Diffusion Network High, set a fairly long Decay Time and turn Dry/Wet to 100 %.
Enable Send B on the Return Track A and set it to max.
Use the Grain Delay on Return Track B.
Set Frequency to 1.00 Hz and Pitch to 12.0.
Enable Send A on the Return Track B and set it to max.
Dial in Send A on the track with the signal source that you want to shimmer.
Also try to bring in Send B on the signal. And play with the Size and Diffuse controls of the Reverb.
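The topology above can be sketched offline to get a feel for what the loop does. The following numpy sketch is a deliberately crude stand-in, not how Live’s devices work internally: the octave shift is a naive decimate-and-pad (real shifters preserve timing), and a moving average stands in for the reverb’s smearing. Each trip around the feedback loop pushes the layer another octave up and decays it:

```python
import numpy as np

def octave_up(x):
    # Crude +12 semitone shift: play back at double speed (drops every
    # other sample, halving the length), then zero-pad back to length.
    up = x[::2]
    return np.pad(up, (0, len(x) - len(up)))

def shimmer(dry, trips=4, feedback=0.5):
    # One trip around the loop = pitch shift up an octave (the Grain Delay
    # stage), then smear and decay (a stand-in for the Reverb stage).
    # After n trips the layer sits n octaves above the dry signal.
    wet = np.zeros_like(dry, dtype=float)
    layer = dry.astype(float)
    for _ in range(trips):
        layer = feedback * octave_up(layer)
        layer = np.convolve(layer, np.ones(16) / 16.0, mode="same")
        wet += layer
    return dry + wet
```

The `feedback` gain below 1.0 is what keeps the loop from running away, which is exactly the role of the send levels in the Live patch.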
Sometimes you want to process the low, mid and high frequencies of a sound differently. One way to do so is to use three copies of the sound. Or, as this post will show, you can split the signal into three frequency bands (high, mid and low) and apply different signal processing to each.
Now, I usually try to write about music production on a more abstract level, not about a specific DAW or instrument, but this time I’m going to illustrate with Ableton Live on Mac. The theory is the same though; you just need to figure out how it works in your particular environment.
So I’m using the stock Multiband Dynamics effect to split the frequency range. The device has a noticeable effect and coloration on the signal, even when the Amount is set to zero, but it should be transparent enough for now.
Drop a Multiband Dynamics in the Device View.
Set the Amount control to 0.0 % to neutralize compression or gain adjustments to the signal.
Group the Multiband Dynamics in an Audio Effect Rack (select the device and press CMD + G).
Show the Chain List of the rack.
Set the crossover points with High and Low (the Mid consists of what is left in between, so remember to also change the crossover points in the mid chain if you adjust the others), e.g. set the bottom of the high band’s frequency range to 1.00 kHz.
Duplicate the selected chain two times.
Rename all of the chains High, Mid and Low, from top to bottom.
Solo the corresponding band in each chain’s Multiband Dynamics, i.e. solo Low on the low chain.
Now process each band individually. Use a Utility device on the low chain and set Width to 0.0 % to sum the low frequencies to mono. Also, on this band, set up side-chain compression triggered by the kick drum. Try a stereo widening effect and some reverb on the mid chain. And perhaps a little saturation to add some crunch on the high chain, I dunno, it’s up to you.
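The idea of a three-way split can be demonstrated offline with a few lines of numpy. This is a hypothetical, brick-wall FFT version, not how Multiband Dynamics works internally (it uses crossover filters, which only sum approximately flat), but it makes the key property obvious: the three bands partition the spectrum, so they sum back to the original signal exactly:

```python
import numpy as np

def split_bands(x, sr, lo_hz=120.0, hi_hz=1000.0):
    # Split x into low/mid/high by masking FFT bins. The crossover
    # points (120 Hz and 1 kHz here) mirror the High/Low settings
    # in the rack; the mid band is whatever is left in between.
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    low = np.fft.irfft(np.where(freqs < lo_hz, spec, 0), n=len(x))
    mid = np.fft.irfft(np.where((freqs >= lo_hz) & (freqs < hi_hz), spec, 0), n=len(x))
    high = np.fft.irfft(np.where(freqs >= hi_hz, spec, 0), n=len(x))
    return low, mid, high
```

With each band as a separate array you can mono-sum the low, widen the mid and saturate the high independently, then add them back together.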
The Korg monotribe is a desktop analog monophonic synthesizer with an additional three preset drum sounds. Its sound is warm and rich but quite clicky and noisy – although I think I prefer this timbre over the newer volca series. The monotribe was released in 2011 and is now discontinued.
How to Silence the VCO When Processing External Audio
The synth has an audio in port for feeding external audio into its 12 dB/oct low-pass filter (which uses the same circuit as the classic MS-10/MS-20). The crux is that the synth engine must be triggered for the filter to run, meaning it’s not possible to process external audio on its own (without modding). But the LFO can modulate the oscillator so that it becomes nearly inaudible. The workaround below is not exactly neat, but should do the trick. On the monotribe, do as follows:
Press the PLAY button and then REC.
Set RANGE select switch to WIDE and press the highest key on the RIBBON keyboard during the whole sequence.
Set EG to GATE.
Switch TARGET to VCO.
Set MODE to 1SHOT.
Set WAVE to SQUARE WAVE.
Set LFO RATE knob to minimum speed and INT. to maximum depth.
Select TRIANGLE WAVE on modulation waveform WAVE.
How to CV Control the monotribe with the Analog Keys’ Keyboard
OS version 2.11 allows the SYNC IN connector to be used as a pitch CV/gate input. This makes it possible to control the monotribe with an external keyboard or sequencer (which is great, because the ribbon keyboard is almost impossible to play). There are many ways to do this, but the theory is the same: send CV and gate via a TRRS 4-pole mini jack, where gate is on the tip and CV on the second ring.
Now, I’ve got an Elektron Analog Keys which can send both tip and ring from the same CV output, but to do that with the monotribe I’d need a special cable (sort of TRS to TRRS) and I haven’t soldered one yet. So until then, I hacked together a workable cable from different pieces I found lying around (e.g. the composite video cable came with a TV I acquired last year). Again, you can build this patch cable in a more streamlined way, but here’s my solution:
Connect a composite video cable to SYNC IN on the monotribe, joining the RCA connectors white male to white female and red male to yellow female. On the other end, connect a pair of adaptors, RCA female to 3.5 mm mono mini jack male, then another pair, 3.5 mm mini jack female to 6.3 mm jack male, and plug white into CV AB and red/yellow into CV CD on the Analog Keys.
Since this setup only uses the tips, it demands both CV ports on the Analog Keys. Set CV A to Gate, V-Trig, 5.0 V, and CV C to Pitch V/oct, C 3, 1.000 V, C 6, 4.000 V. (CV B and D are not used.)
Prepare the monotribe as described in the documentation that came with the download package. Activate CV/GATE mode, set the Pitch CV curve to V/oct and GATE polarity to high.
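The V/oct calibration above is just arithmetic: one volt per octave means 1/12 V per semitone, anchored at C 3 = 1.000 V, which puts C 6 (three octaves up) at exactly 4.000 V. A tiny hypothetical helper makes the scaling explicit:

```python
def semitones_to_volts(semis_above_c3, c3_volts=1.0):
    # 1 V/octave: each semitone is 1/12 V. With C3 calibrated to
    # 1.000 V, C6 (36 semitones up) lands on 4.000 V, matching the
    # Analog Keys CV C settings described above.
    return c3_volts + semis_above_c3 / 12.0

print(semitones_to_volts(36))  # 4.0 (C6)
```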
P.S. It’s also possible to create a feedback loop by feeding the headphone output back into the monotribe’s audio in. This will render a mild thickening, and if you have some kind of attenuator on the feedback signal path, you can dial in some overdrive too.
Here’s something for you synth programmers to try out: Modulate certain aspects of an envelope with itself.
For example, set the modulation destination of the filter envelope to affect its own parameter, such as its (linear) attack or decay time, by a positive or negative amount. This should render a concave or convex shape, respectively.
This effect is referred to as recursive modulation.
Now try setting the filter envelope attack to 32 and the envelope amount to 48. Then go to the modulation matrix, select the filter envelope as source, and modulate the destination filter envelope attack by 48.
It’s also possible to use this method on an LFO. Modulating its own level will affect the shape of the LFO, and modulating its own rate will affect both the overall rate and the shape.
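The bending effect is easy to see in a simulation. This is a conceptual sketch, not any particular synth’s implementation: a linear attack whose per-step rate is scaled by the envelope’s own level. A positive amount slows the rise as the level grows, giving a concave curve; a negative amount speeds it up, giving a convex one:

```python
import numpy as np

def attack_curve(n=256, amount=0.0):
    # Linear attack self-modulated by its own output level.
    # amount > 0: rate shrinks as level rises -> concave shape
    # amount < 0: rate grows as level rises  -> convex shape
    # amount == 0: plain linear ramp
    level = 0.0
    out = np.zeros(n)
    base = 1.0 / n
    for i in range(n):
        level += base * (1.0 - amount * level)
        out[i] = level
    return out
```

Plot `attack_curve(amount=0.8)` against `attack_curve(amount=-0.8)` to see the two shapes diverge from the straight `amount=0.0` line.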
As a consequence of scaling down my home studio, I sold two audio interfaces, the Apogee Duet for iPad & Mac and the Propellerhead Balance, to acquire an Apogee Quartet instead. (Yes, I checked out the newer Element 46, and even if the Element series’ audio quality and mic pre technology are a step above, the Quartet’s specifications are good enough for me; more importantly, I wanted/needed 8 outputs and a convenient front panel control.)
I decided on a 4-channel audio interface because I didn’t need 20+ hardware synths and drum machines up and running all the time. All that stuff took up too much space and I didn’t really use them. They were connected to a mixer – functioning more or less as a patchbay – and now that mixer is redundant. Remember, limitations drive creativity and all.
With the current setup, I’m able to insert outboard gear, not only to use Minitaur and Mopho as analog instruments, but also as signal processors/external filters. That is, with a little bit of routing in Ableton Live, I can send hardware and softsynths to the Moog ladder and Curtis low-pass filters.
Right now I got three analog monosynths (Minitaur, Mopho and SH-101) connected, and Analog Keys operating as an analog polysynth, master keyboard, sequencer and MIDI to CV converter. I can record all synths mentioned on separate tracks at once.
The plan is to switch gear depending on the project. It’s a clean, minimal setup which seems to suit me.
Recently, most of my time has been spent tweaking the setup, experimenting with the gear, and programming and sound designing on the synths. I haven’t made any real compositions for a while though.
Next up could be a cassette tape recorder (to get some lo-fi tape compression/saturation). And I think I’ll get the Strymon Deco pedal and put it in an effect signal chain.