Advanced Mathematics 2: Microrhythms

As you probably know by now, I’ve been awestruck recently by the concept of microtime, through its main proponent, Malcolm Braff. Since the term itself is ambiguous, I’ll refer to it here as microrhythm. As a parallel to microtonality, microrhythm is like “the notes between the notes”, but on the time axis instead of the pitch axis. Since it’s often best to provide an example, let me show you the song that started me on this quest, Malcolm Braff Trio’s “Crimson Waves”, with Aurélie Emery.

Here is the main theme of the composition, taken from Malcolm’s official website.

As you can see, the theme that I used for this post’s banner takes heavy inspiration from this one.

So, here’s a little breakdown of the notation. The theme is in 7/8 time: that is the first thing we can establish. The brackets at the bottom of the notes represent the groupings present in it, and do not represent tuplet feel. You can see that the 28 notes are divided into groups of 5, 5, 5, 5, 4, and 4 notes respectively, with the first one being an anacrusis (a note that happens before the start of the measure). Up until now, we haven’t seen any outlandish notation, but that’s about to change.

If we look at the top of the notation, we see headless notes in groupings of 11 sixteenth notes. For these to fit within our 7/8 bar, they must be 11:14 tuplets (11 notes played in the span of 14 sixteenths). Since this is only the main theme and not the actual music notation for the song, there is one crucial detail missing, but you can spot it in this post’s banner: the percentage value. But, in order to understand what purpose this serves, we need to explain a bit how the hell we’re supposed to make sense of this.
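To convince yourself that the tuplet math works out, here is a quick sanity check. The variable names are my own illustration, not part of Braff’s notation; durations are counted in sixteenth notes.

```python
from fractions import Fraction

# A 7/8 bar holds 7 eighth notes, i.e. 14 sixteenth notes.
bar_in_sixteenths = 7 * 2

# In an 11:14 tuplet, each of the 11 notes lasts 14/11 of a sixteenth.
tuplet_note = Fraction(14, 11)

# The 11 tuplet notes together fill the bar exactly.
total = 11 * tuplet_note
print(total)  # → 14, one full 7/8 bar in sixteenths
```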

Let’s go back to the main notes, the headed ones that show the pitch of the notes that should be played. This is our natural, straight, or original motif. However, if you play it as such, it isn’t microrhythmic. The same is true if you play it according to the headless notation on top, the phrased motif. What’s missing is the percentage value. Taking the banner as an example, the 50% value means that what you should play is right between the lower and upper notations. As you can hear in “Crimson Waves”, although the feel of the theme varies throughout the song, it is never 100% on time. That is what microrhythm is all about.

That percentage value can vary within a song, as you can see in yet another example taken from Malcolm Braff’s website.

This makes for some utterly interesting rhythms and feels that we’re not used to in Western music. However, I’d like to point out that many traditional musics around the world make use of this kind of microrhythmic time that would be impossible to write down accurately using common notation techniques. I’d like to showcase just one such example from Bénin’s Gangbe Brass Band and their song “Miwa”.

Although simpler than Braff’s material, this shows that the technique is older and more common than some might think. The true innovation here is in the development of a standardized notation technique, which could allow more people to make use of it. My hope is that music notation software adds this functionality, so that we can hear the playback of a written microrhythmic piece in real time.

I hope that you liked our second class of Advanced Mathematics, and that you will start including microrhythm in your own compositions.

Update

I’ve put my money where my mouth is, and went ahead and modified a riff from Meshuggah’s vast riffotheque to include microrhythm. The chosen one is the opening riff of “This Spiteful Snake”, off the obZen album. I chose this one because it didn’t seem overly complex, and consists of basically two themes in 6/8 and 7/8 that alternate over an 8-bar 4/4 hypermeasure, so that the last theme is one eighth note shorter.

Since the original composition has a distinct rhythmic pattern, I put the phrased notes on the main staff and the straight ones overhead, contrary to tradition. It also gave rise to a few rather odd but inevitable phenomena. For one, the straight rhythm had to be in an uncommon tuplet feel, here either 5:6 or 6:7. I don’t think this is something we’d see in original microrhythm compositions, where the straight pattern is usually the main one and therefore includes a more “straightforward” number of notes. But it’s here, it’s interesting in its own right, and it could rather easily be used in original compositions as well.

Another interesting thing is how the backbeat is affected. In the original pattern, the drums play in sixteenth notes. That means they play two notes for every eighth note the guitar plays, and one note for every sixteenth note it plays. Since the microrhythm transformation is based on the guitar part, the drums will sound as if they speed up on longer guitar notes and slow down where the guitar picks up. This too could be a rather interesting feature to take advantage of when composing for microrhythm.

Overall, as someone said, it gives a very “drunken” feel to the riff. As someone else said, it doesn’t flow the same way the original riff does, but I think that’s exactly the point. Moreover, this is only one riff, and one part of the song. Later, the same riff comes back with a steady timekeeper cymbal, and it would be interesting to perhaps keep this one steady, with the elastic rhythm happening underneath it. One theoretical justification for doing so would be to keep the hypermeasure steady. After all, every original measure is the same duration as its altered version, so they don’t influence the next bar or the previous, or the hypermeasure for that matter. It would also be totally justified to mirror the elastic changes of time of each measure into the hypermeasure instead, and keep the accents on actual notes rather than on theoretical beats. The two ideas will give off very different feels, but both can be used and be interesting in their own right.

I find the concept utterly fascinating, and finding that it can be programmed with relative ease on a computer opens up many doors. For example, you could program patterns to practice this rhythmic concept. You could apply it to electronic music genres, like pop, synthwave, or chiptune music. I’m sure there are many other uses that I can’t see just yet, but its use in rhythm-based music genres, like djent, could bring a breath of fresh air onto the scene.

If you want me to turn your composition into microrhythm, feel free to contact me. Or if you have any questions or comments, just write them here or on our Facebook page or group!

Addendum

I thought I’d also leave some details about the transformation process I used in order to create microrhythmic MIDI files. It’s all summed up in this image, but I’ll explain it in more words right after it.

This is an amalgamation of screen captures taken in Reaper. I believe most digital audio workstations would look similar and have similar functionalities. The height of a note represents its pitch, based on the vertical piano roll on the left, and its length is displayed horizontally, based on the grid in the background. The top and bottom grids are set to thirty-second notes, and the middle one to sixteenths.

So, the top row shows the original phrasing of the riff. The note values are indicated, in number of beats. On the bottom row, you see the straight, equally divided notation, where each note has the same length value as the others. In regular microrhythm music, that part would be on the main staff, and the first row would be the phrasing overhead.

Now comes the interesting part. To calculate the microrhythm, you need to use this equation:

$$l_\mu^{50\%} = {(l_a - l_b)\over 2} + l_b$$

\(l_\mu\) represents the length of the microrhythm note, the one you want to find in the end. This can be in seconds or in beat values, as long as it’s consistent across all variables. \(l_a\) is the longer of the two notes; whether it is the phrased or the equally divided one doesn’t matter, the longer note will be \(a\). And, of course, \(l_b\) is the shorter note. You have to make extra sure that the number of notes in the phrased and the equally divided parts is exactly the same, and that their lengths add up to the same total. As you see in my example, the notes of the top and bottom rows both add up to a total of 3 beats. Therefore, the microrhythm version of it will be correct.
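The 50% case can be sketched as a small Python function. At 50%, \((l_a - l_b)/2 + l_b\) is just the midpoint \((l_a + l_b)/2\), so we don’t even need to sort the two notes. The function name and the six note values are my own illustration, not the actual riff; `Fraction` keeps the beat arithmetic exact.

```python
from fractions import Fraction

def microrhythm_50(phrased, straight):
    """Blend two note-length lists at 50%.

    Each output length is the midpoint (l_a - l_b)/2 + l_b, i.e.
    (l_a + l_b)/2. Lengths are in beats; both lists must have the
    same note count and the same total duration."""
    if len(phrased) != len(straight):
        raise ValueError("the two parts must have the same number of notes")
    if sum(phrased) != sum(straight):
        raise ValueError("the two parts must add up to the same total")
    return [(a + b) / 2 for a, b in zip(phrased, straight)]

# Illustrative values: six notes totalling 3 beats in both rows.
phrased  = [Fraction(3, 4), Fraction(1, 4), Fraction(3, 4),
            Fraction(1, 4), Fraction(1, 2), Fraction(1, 2)]
straight = [Fraction(1, 2)] * 6

result = microrhythm_50(phrased, straight)
print(result)  # midpoints: 5/8, 3/8, 5/8, 3/8, 1/2, 1/2
```

Note that the result still adds up to 3 beats, which is what keeps every altered measure the same duration as the original.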

Now, that equation only works if the ratio between the phrased and straight parts is \(1:1\), or 50%. However, if you want to merge them in another ratio or percentage \(x/y\), follow this equation:

$$l_\mu^{x/y} = {(l_a - l_b) \times x \over y} + l_b$$

For example, let’s say we want a 75% phrased feel. For the example, let’s say that the straight note is 0.5 beats long, and the phrased one is 1.0 beat. Let’s fill in the equation:

$$l_\mu^{75\%} = {(1.0 - 0.5) \times 75 \over 100} + 0.5$$

$$l_\mu^{75\%} = 0.875$$

It’s important to note that, by convention, the percentage always refers to how much the phrased pattern influences the straight notes, and never the other way around. And, if you use a ratio in the form \(A:B\), you can convert it into a percentage with \({A \over A+B} \times 100\).

Now go and have fun with that!

This entry was posted on May 3, 2018.
