## If it’s the fourth this must be Thursday

What’s the day of the week of [insert date here]? Do it in your head.

There’s some similarity between calculating the phase of the moon for a given date and calculating the day of the week for a given date. Also some differences.

Similarly, how one does it depends on whether one’s doing it in one’s head or on a computer. (And how one does it on a computer depends on the software environment. Modern software libraries tend to make calculating the day of the week a complete non-problem, for instance. Whereas if you were programming in FORTRAN in the 1970s, it was on you.)

Differently, though, in terms of accuracy. The phase of the moon varies continually and slowly enough that getting it right to within a day or so is probably good enough, at least if you’re doing a mental calculation; if more precision matters, you probably want to do it in software anyway. Whereas if you want to know if June 17 is a Monday, finding out it’s probably within a day or so of Monday isn’t likely to be good enough. For the day of the week, either it’s right or it isn’t.

Anyway, if you were to do it, how would you do it?

We’ll specify a date with century, year, month, and day numbers. The day number D is just the number from 1 to 31 denoting which day of the month it is. Now, for month numbers, M, because leap years make things so complicated, let’s use years that start on March 1 and end on February 28 or 29. Within a March-to-February year we can figure out the number of days between any two dates without having to worry about whether it’s a leap year or not. So instead of M running from 1 (January) through 12 (December) we’ll use 3 (March) through 14 (February). January and February will be regarded as belonging to the preceding calendar year. For March through December the year number Y is the last two digits of the year (AD) and the century number C is the preceding digits. For January and February, use the same Y and C as for the preceding March.

What about BC years? Forget ’em. We’re dealing with Gregorian calendar only here because otherwise, ow. You can develop formulas for Julian calendar and BC and whatever else on your own, if you insist.

Okay. For 10 Apr 1937 we have C = 19, Y = 37, M = 4, D = 10. For 2 Feb 1600 we have C = 15, Y = 99, M = 14, D = 2. Got it?
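In code, the convention looks like this (a sketch; the function name is mine):

```python
def cymd(year, month, day):
    """Split a Gregorian date into (C, Y, M, D), with January and
    February counted as months 13 and 14 of the preceding year."""
    if month < 3:
        month += 12
        year -= 1
    return year // 100, year % 100, month, day

print(cymd(1937, 4, 10))  # (19, 37, 4, 10)
print(cymd(1600, 2, 2))   # (15, 99, 14, 2)
```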

If you know x, where (x + 1) mod 7 is the weekday (encoded as an integer from 0, for Sunday, through 6, for Saturday) of the 1st of the month, then the formula for day D of that month is simple: (x + D) mod 7.

How do you get x? If every month were 30 days then it’d be (y + 30(M–3)) mod 7 = (y + 2M + 1) mod 7, and therefore the weekday of month M day D would be (y + 2M + 1 + D) mod 7, where (y + 1) mod 7 is the weekday of 1 Mar. But unfortunately it’s more complicated. “Thirty days hath September…” (Or do you know this trick? Make a fist with your left hand. You have four knuckles, and three valleys between the knuckles. Label the knuckles and valleys with the month names starting from January: January and March and May and July are knuckles, February and April and June are valleys. Continue on the right fist: August, October, and December are knuckles and September and November are valleys, with a knuckle and a valley left over. The knuckle months have 31 days, the valley months don’t. You can get “JMMJ AODx” tattooed on your knuckles, if you want a good conversation starter.)

1 Mar is day 1 of the year, or 0 days more than (3–3) x 30 + 1. 1 Apr is the 32nd day, or 1 day more than (4–3) x 30 + 1. 1 May is the 62nd day, or 1 day more than (5–3) x 30 + 1. And so on. The whole table for the number of days more than 30 x (M–3) + D days past the last day of the preceding year is:

March 0, April 1, May 1, June 2, July 2, August 3, September 4, October 4, November 5, December 5, January 6, February 7
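If you’d rather derive those numbers than memorize them, they’re just how far the 1st of each month runs ahead of a hypothetical all-30-day calendar (a sketch; the variable names are mine):

```python
# Derive the m_M table from the month lengths of a March-based year.
lengths = [31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 31]  # Mar, Apr, ..., Jan
m = {}
days_before = 0  # days elapsed before the 1st of month M
for M in range(3, 15):  # 3 = March ... 14 = February
    m[M] = days_before - 30 * (M - 3)
    if M < 14:
        days_before += lengths[M - 3]
print(m)
```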

So now if you know y, and you’ve memorized that table (call those numbers m_M), then the formula becomes:

(y + 2M + 1 + m_M + D) mod 7

Memorizing that table’s a pain, but what else can you do? (Well, see below.) Note we can absorb additive constant values into y, so let’s just use

(y + 2M + m_M + D) mod 7

and just define y as whatever works.

Now, if you know the weekday of a date in some century, and if there were no leap years, then you could get the weekday for any date in that century just by using

(z + Y + 2M + m_M + D) mod 7

because in any year the weekday for a given date is one later than it was the previous year. (365 mod 7 = 1.) Here z is, well, it’s what makes the formula come out right for the date you know.

But there are leap years, which add another 1 day shift from one year to the next. So we have to add 1 for each leap year from year 0 through Y–1, inclusive. Recall that Y = 0 is not a leap year, in any century, because by Y = 0 we mean 1 Mar of calendar year 0 through 28 Feb of calendar year 1. On the other hand Y = 3 is 1 Mar 03 through 29 Feb 04, and is a leap year! It won’t shift any days in that year but it’ll shift days starting in Y = 4. So — ignoring the century leap year rule for a moment — the number of leap years to take into account is just floor(Y/4). We have to add that number to our tally.
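That count is easy to sanity-check in code (century rule ignored, as above): a March-based year Y contains 29 Feb of calendar year Y + 1, so:

```python
def leap_days_before(Y):
    """Leap days falling in March-based years 0 .. Y-1
    (a leap day belongs to year Y when Y + 1 is divisible by 4)."""
    return sum(1 for y in range(Y) if (y + 1) % 4 == 0)

# floor(Y/4) counts them exactly
assert all(leap_days_before(Y) == Y // 4 for Y in range(200))
```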

So for dates in that century we use:

(z + Y + floor(Y/4) + 2M + m_M + D) mod 7

How about dates in the next century? Well, in most cases a century year (our Y = 99 year) is not a leap year, so there are 24 leap years and 76 standard years in a century, for a shift of (24 x 2 + 76 x 1) mod 7 = 5 weekdays. If all centuries were like that it’d be

(z + 5C + Y + floor(Y/4) + 2M + m_M + D) mod 7

I’m still calling the constant z, but it’s whatever makes things work.

But every 400 years the century year is a leap year after all. So we have to add 1 day back in for every four centuries:

(z + 5C + floor(C/4) + Y + floor(Y/4) + 2M + m_M + D) mod 7

Now we just need to determine z. Pick a day at random, oh let’s say Tuesday, 2 Jan 2018. Then it’s (remembering Y = 17 and M = 13 for January 2018)

(z + 5 x 20 + floor(20/4) + 17 + floor(17/4) + 2 x 13 + 6 + 2) mod 7

= (z + 2 + 5 + 3 + 4 + 5 + 6 + 2) mod 7

= (z + 6) mod 7

But this should equal 2 for Tuesday, so z = 3.

And now we can do any date. 17 Apr 2733 is:

(3 + 5 x 27 + floor(27/4) + 33 + floor(33/4) + 2 x 4 + 1 + 17) mod 7

= (3 + 2 + 6 + 5 + 1 + 1 + 1 + 3) mod 7 = 1 = Monday. Right, cal(1)?


```
$ cal 4 2733
     April 2733
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30
```


Right!
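The whole recipe also fits in a few lines of Python (a sketch; the names are mine, with Sunday = 0 as above):

```python
# Day of week via the formula derived above (Gregorian calendar only).
M_TABLE = {3: 0, 4: 1, 5: 1, 6: 2, 7: 2, 8: 3,
           9: 4, 10: 4, 11: 5, 12: 5, 13: 6, 14: 7}

def weekday(year, month, day):
    """Return 0 = Sunday ... 6 = Saturday."""
    if month < 3:  # January/February belong to the preceding year
        month += 12
        year -= 1
    C, Y = divmod(year, 100)
    return (3 + 5 * C + C // 4 + Y + Y // 4
            + 2 * month + M_TABLE[month] + day) % 7

print(weekday(2733, 4, 17))  # 1, Monday
print(weekday(2018, 1, 2))   # 2, Tuesday
```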

Oh, one more thing:

The red dots are the values of m_M plotted versus M. The blue dots are m_M + 1. The line is y = 0.6M – 1.3, and it passes between each pair of dots. So that means instead of memorizing m_M and using

(3 + 5C + floor(C/4) + Y + floor(Y/4) + 2M + m_M + D) mod 7

if you’d rather you could use

(3 + 5C + floor(C/4) + Y + floor(Y/4) + 2M + floor((6M – 13)/10) + D) mod 7

So just memorize the first formula and m_M, or memorize the second formula, and keep in mind you have to use the Y and C for the preceding year and add 12 to M if the month is January or February, and know your Gregorian calendar limits for the particular geographical location in question (remember, February 1712 had 30 days in Sweden), and you can figure out the weekday in your head!
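In fact floor((6M – 13)/10) lands exactly on m_M for every month, which is quick to confirm:

```python
# The floor expression reproduces the m_M table for M = 3..14.
M_TABLE = {3: 0, 4: 1, 5: 1, 6: 2, 7: 2, 8: 3,
           9: 4, 10: 4, 11: 5, 12: 5, 13: 6, 14: 7}
assert all((6 * M - 13) // 10 == M_TABLE[M] for M in range(3, 15))
print("floor((6M - 13)/10) matches m_M for M = 3..14")
```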

… Either way, the lunar phase is easier.

## Plausible phases

So why does that lunar phase method work anyway?

It’s mostly fairly obvious once you start thinking about it. For instance, suppose you know the age of the moon on a given day. Then its age a day later is, kind of obviously, one day more. Except when you hit new moon and it resets, every 29.53… days, and doing modulo 30 approximately accounts for that. So it makes sense for the formula to be something like (x+D) mod 30, where D is the day number and x doesn’t depend on the day (it’s approximately the age, minus 1, on the first of the month).

Now suppose you know the age of the moon on the first day of a given month. Then its age a month later is…? Well, in a 30 day month it’s about 0.47 days more, that is, 30–29.53; in a 31 day month it’s 1.47 days more. Or in an average month of about 365.25/12 = 30.52 days it’s very close to 1 day more. (Actually, since we’re working within the 2000 to 2018 Metonic cycle, the average year length that counts is 1/19 of five leap years and 14 normal years, which comes out to 365.263 days, not that the difference matters much.) If it weren’t for February it’d make sense for the formula to be something like (y+M+D) mod 30, where M is the month number and y doesn’t depend on the day or month (it’s the age at the start of the year, minus 2).

February’s short, though. How to deal with that? Hang on, we’ll get there.

Now suppose you know the age of the moon on the first day of a given year. Then its age a year later is… well, 365.25 days is 12.36 synodic months, so the age advance is 0.36 x 29.53 = 10.9 days. For a decent approximation, 11 days age advance per year, so a formula something like (z+(Y x 11)+M+D) mod 30, where Y is (last two digits of) the year number and z is a constant (the age on 1 Jan 2000, minus 2), would seem reasonable. If it weren’t for February.
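Those rates are quick to check numerically (constants approximate):

```python
SYNODIC = 29.530588  # mean synodic month, in days
YEAR = 365.25

print(YEAR / SYNODIC)  # about 12.37 synodic months per year
print(YEAR % SYNODIC)  # about 10.9 days of age advance per year
```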

It’s too bad February isn’t the last month of the year, then you wouldn’t have to deal with its shortness. Well, make it the last month of the year! Let’s pretend the year begins on March 1. Now within a month each day is a day greater age than the previous; within a (March-to-February) year each month is a day greater age than the previous; and each year is 11 days greater age than the previous. Looked at like this, we don’t have to worry especially about the age advance from February to March. So we could use a formula like the above, if we said March was month 1, April month 2, … January month 11, February month 12, and if we used the preceding year number in January and February. That is, for months March through December use

(z+(Y x 11)+(M–2)+D) mod 30

(here M is still 3 for March, et cetera) and for January and February use

(z+((Y–1) x 11)+(M+10)+D) mod 30 .

That gets a little complicated, though. But we haven’t defined z yet; we can absorb additive constants into it. That is, redefine z–2 as z so we have for March to December

(z+(Y x 11)+M+D) mod 30

and for January and February

(z+((Y–1) x 11)+M+12+D) mod 30 .

But now notice the latter is equal to

(z+(Y x 11)–11+M+12+D) mod 30

or

(z+(Y x 11)+M+1+D) mod 30

which means we don’t have to mess around with the year number or with month numbers like 13 and 14; just use the same formula for all months, except add 1 in January and February!

Nearly there. We just need to establish z. The age of the Moon at 00:00 UTC on 1 Jan 2000 was 23.882 days, so z is 20.882 or about 21. Or more conveniently, about 22, so we can replace z+(Y x 11) with ((Y+2)x 11), and that’s the formula. There are enough approximations here that one can justifiably worry about their accumulating to large errors, but in fact they tend to cancel out. (1 day per month is nearly exact, modulo 30 arithmetic gives you age increases that are too small, 11 days per year gives you age increases that are too large for non leap years and too small for leap years, z=22 means you start out the cycle with an age that’s too large. Sticking with z=21 would have made 1 Jan 2000 more accurate but would have led to too large errors later in the cycle.)

## Depends on the phase of the moon

[Edited for corrections and updates]

What’s the phase of the moon on [insert date here]? Do it in your head.

This is something I found on the Internet once, don’t know where; I learned it but eventually forgot it. Having put it back together again I’m posting it here for future reference. It’s more or less similar to the method, attributed to John Conway, described at much greater length at http://gmmentalgym.blogspot.com/2013/01/moon-phase-for-any-date.html. The two are not mathematically equivalent, though, giving slightly different results for some dates. See below.

To start with, assume the required date is in the range from 1 Jan 2000 to 31 Dec 2018. Take the last two digits of the year, add 2, and multiply by 11.

(Good thing multiplying by 11 is easy. You just add the two digits to the same two digits shifted one place left: e.g. 17 x 11 = 17 + 170 = 187. Or another way to think of it: the last digit of the result is the same as the last digit of the original number, the first digit of the result is the same as the first digit of the original number, and the middle digit of the result is the sum of the two digits of the original number. Of course if that sum exceeds 9, you have to add 1 to the first digit of the result.)

Now add the number of the month — 1 for January, 2 for February, and so on, 12 for December. If the month is January or February, add another 1.

Now add the day of the month.

Now divide that number by 30 and take the remainder. The result is, approximately, the age of the moon — the number of days since the new moon. 0 or 29 would be new moon, 14 or 15 would be full moon, about 7 would be first quarter, about 21 would be third quarter. It might be off a day or so (again, see below), but close enough for mental calculation, right?

What about dates outside that range? The moon returns to the same phase every 19 years exactly — or almost exactly; actually more like 19 years plus 2 hours. This is called a Metonic cycle. So add or subtract a multiple of 19 years to get into the 2000–2018 range and then proceed as above. Obviously for dates far in the past or future this breaks down, but for dates in the past or future few decades it’s fine.
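Here’s the whole method as a sketch in code (the function name is mine; the Metonic step is just the year shift described above):

```python
def moon_age(year, month, day):
    """Approximate age of the moon in days (0 = new moon)."""
    # Shift by whole Metonic cycles (19 years) into the 2000-2018 range.
    while year < 2000:
        year += 19
    while year > 2018:
        year -= 19
    total = (year % 100 + 2) * 11 + month + day
    if month <= 2:
        total += 1  # add another 1 in January and February
    return total % 30

print(moon_age(1969, 7, 20))  # 6, about first quarter
print(moon_age(2018, 1, 1))   # 13, nearly full
```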

Example: July 20, 1969. This is outside our 2000–2018 range but if you add 2×19=38 years you have July 20, 2007. So: for year 2007, (7+2)x11 = 99; for month 7 add 99+7=106; for day 20 add 106+20=126. To get the remainder when divided by 30, subtract the multiple of 30: 126-120=6. Age of the moon is 6 days, corresponding to just about first quarter. Of course July 20, 1969 is the date of the Apollo 11 landing. If you do the same for the other moon landing dates you get ages in the range 5 to 9. For reasons having to do with the angle of the Sun, NASA chose to do all the moon landings at about first quarter.

Example: January 1, 2018. (18+2)x11 + (1+1) + 1 = 223; 223-210=13. The moon is (about) 13 days old on New Year’s Day, so is close to full. In fact full moon occurs at about 2:00 UTC on January 2, or evening of January 1 in the United States. We get two full moons in January and March, none in February!

How well does this work? It depends on how you define “how well”. You’re getting an integer estimate of the age of the moon, which you can compare to the actual age of the moon… when? At 00:00 UTC? Noon?

Choosing to compare to the age at 00:00 UTC, here’s a plot of the error in this calculation for the years 2000 to 2018. There are just two dates where it’s more than 1.6 days off, and most dates — 6269 out of 6940 — are less than 1 day off.

For years outside this range but within the nearest 4 Metonic cycles each way (i.e. from 1 Jan 1924 to 31 Dec 2094) I find there are only two dates where the error is (slightly) larger than 2 days, and average errors in each cycle are under 0.35 days. Outside that 171 year range it gets worse.

And here’s a comparison to the results of the method from gmmentalgym. The blue line is the same as above and the red line is gmmentalgym; they coincide for the first 10 years so you see only the red there. After that they differ by 1 day.

You can see gmmentalgym is systematically high in the second half of the cycle, so does worse. Note, though, that if we compared to the age at noon UTC, it would shift this whole graph down by half a day and the red line would be better than the blue.

On the other hand, here’s the 1962–1980 cycle (comparing at 00:00):

Here it looks like the gmmentalgym method is a little better, but would be much worse if comparing at noon UTC. On balance, I wouldn’t say either method has the clear upper hand — they’re both decent approximations, easily done in one’s head.

## My brother the chimp

I’ve written about this before, over on my genealogy blog, but it’s only tangentially related to genealogy. It’s not really recreational mathematics, either, but I want to expand on it somewhere, and since it involves a logical proof, this seems like the least silly place to choose.

So: All humans descend from a common ancestor. Or so many people believe. Literalist believers in the Old Testament would say Adam and Eve are common ancestors of us all. Believers in evolution go further and believe all life on Earth descends from a common ancestor. I’m in the latter group.

When did the most recent common ancestor (MRCA) of all humans live? Simulations suggest it was more recently than you might expect: 2000 to 5000 years ago. (It’s been argued that all persons of European descent share a common ancestor only about 1000 years ago, and genetic analysis [paper, FAQ] seems to support that.) Furthermore, going only a few thousand more years back, you arrive at a time when literally every person then alive fell into one of two categories: Those who are ancestors of no one alive today, and those who are ancestors of everyone alive today. I don’t know if there’s a term for that state, so I’ll make one up: Universal common ancestry (UCA). Understand these are probabilistic claims; it would certainly be possible that 20,000 years ago there were people whose living descendants are some but not all of us. But the probability of such a thing is so small it is almost indistinguishable from zero: Something that improbable really cannot be expected to happen even once in the history of the universe. And even if it did, another few thousand years back a non-UCA state would be even more unlikely than that. The probability of UCA back then, and any time previous to that, is absolute certainty for all practical purposes.

Of course this doesn’t just apply to humans: We can talk about the MRCA of all chimpanzees alive today, or all hominins (humans and chimpanzees), or all ruby throated hummingbirds, or whatever.

Now, here’s the weird thing: Long ago, 5 million years or more, amongst members of the species Nakalipithecus or something like it, there lived an individual (call them Nico) who had (at least) two offspring, Chris and Harley. They were Nakalipithecus (or whatever) too, of course, and they mated with other Nakalipithecus and had Nakalipithecus children, who had more Nakalipithecus children, who had … well, you get it. But here’s the thing: All of Chris’s living descendants are chimpanzees. All of Harley’s are human.

That might surprise you. And think how surprised Chris and Harley would be.

But it’s true, and it’s provable (in, again, a certain-for-all-practical-purposes sense).

Nico, you see, was the MRCA of all living hominins. Nico has all living humans and all living chimps as descendants, and being the most recent such common ancestor, no one in any subsequent generation can make the same claim.

Now of course much more recent individuals are common ancestors of all living humans (and no chimps), or of all living chimps (and no humans). Could anyone be an ancestor of all living humans and some but not all living chimps, or vice versa? No. We have UCA for humans anytime before several thousand years ago, and I’m sure chimp UCA must be even more recent than that. More than a few thousand years ago, let alone a few million, anyone who was an ancestor of some living chimps was an ancestor of all living chimps. And likewise for humans, so any individual with any living humans and any living chimps among their descendants would have to be a common ancestor of all hominins. And Nico was the MRCA, so anyone born after Nico must have no living descendants, or humans but no chimps, or chimps but no humans.

We know Nico must have had two or more offspring. If they had only one, then it would have been a common ancestor of all hominins and Nico would not be the MRCA. One of them was Chris, and the other was Harley… and they were born after Nico (duh). The argument above applies to anyone born after Nico, and that includes Chris and Harley. One had to be an ancestor of all living chimps (but no living humans) — that’s Chris — and the other the ancestor of all living humans (but no living chimps) — that’s Harley.

A scenario in which that makes sense is one like this. Seven million years ago, a group of Nakalipithecus woke up one morning, checked Facebook, went to work, or went out to play, or whatever, not realizing that fifty miles away there was a great huge lake held back by a natural dam which had been slowly eroding. That day the dam broke. There was an enormous flood, many of the Nakalipithecus died, and the ones that survived found themselves on two sides of a new, permanent body of water. And the topography was such that none of them were able to cross the water or go around it, so the survivors on one side never again encountered the survivors on the other.

All right, this is not a particularly likely scenario, but it’s a thought experiment, okay?

Nico, alas, died in the flood, but not before saving the life of Chris. Harley, meanwhile, was swept off and ended up on the other side of the water. Within a few thousand years all the Nakalipithecus on one side were descendants of Chris (but not Harley) and all on the other side were descendants of Harley (but not Chris). The two groups evolved apart and eventually gave rise to chimps on one side, humans on the other.

Granted, speciation doesn’t usually work like that. Much of the time geographical separation is a factor in speciation, but not necessarily all the time, and in any case the geographic separation usually isn’t instantaneous. Mountains rise slowly, rivers change course over centuries, and so on.

But it doesn’t matter whether the two groups are sundered in hours or eons. Or even if they are geographically sundered at all. The point is, for whatever reason, two groups evolve away from one another. At the start they are one species and they interbreed frequently, but the frequency of that decreases more and more. And it may decrease over a very long time period, but: There’s a single, momentary last time. A last time a member of one of the groups breeds with a member of the other group. And not long before that, a last time one individual breeds with a member of one group while their sibling breeds with a member of the other, and they have fertile offspring leading to descendants alive today. Their parent is our common ancestor, and the common ancestor of all chimps too. Their children, though, may be your ancestor, may have been Washoe’s, but are not both.

## Why an almost integer?

Over on https://mathlesstraveled.com there’s a series of posts going on having to do with this:

$\displaystyle \begin{array}{cc} n & (1 + \sqrt 2)^n \\ \hline 1 & 2.414213562373095 \\ 2 & 5.82842712474619 \\ 3 & 14.071067811865474 \\ 4 & 33.970562748477136 \\ 5 & 82.01219330881975 \\ 6 & 197.99494936611663 \\ 7 & 478.002092041053 \\ 8 & 1153.9991334482227 \\ 9 & 2786.0003589374983 \\ 10 & 6725.9998513232185 \end{array}$

See that? As $n$ increases, $(1+\sqrt 2)^n$ approaches integer values. Odd, huh? Why does it do that?

Despite what should have been a dead giveaway hint, I didn’t figure out the approach revealed in this post. Embarrassing. But having failed on the insight front, what can I do on the obvious generalization front?

Let’s think about quantities of the form

$\displaystyle (j+k^{l/m})^n$

where $j$, $k$, $l$, $m$, and $n$ are nonzero integers; $l/m$ is in lowest terms and $l>0$, $m>1$, and $n>0$. For now let’s also restrict $m$ to primes.

To investigate that we’ll consider

$\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n$

The complex quantities $e^{i2\pi p/m}$ lie on the unit circle in the complex plane and are the vertices of an $m$-gon. Using the binomial expansion, the sum is

$\displaystyle \sum_{p=0}^{m-1} \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (e^{i2\pi p/m}k^{l/m} \right)^q$

or

$\displaystyle \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (\sum_{p=0}^{m-1} e^{i2\pi pq/m}\right) k^{lq/m}$

Now, for the terms where $q$ is a multiple of $m$, $e^{i2\pi pq/m}$ is equal to 1 and the sum over $p$ equals $m$.

Otherwise (and this is where restricting $m$ to primes matters), $e^{i2\pi q/m}$ is a primitive $m$-th root of unity, so the terms $e^{i2\pi pq/m}$ run through the same $m$-gon vertices in some order, and we’re summing over the points on the unit circle:

$\displaystyle \sum_{p=0}^{m-1} e^{i2\pi p/m}$

which is the sum of a geometric series so

$\displaystyle = \frac{1-e^{i2\pi}}{1-e^{i2\pi/m}} = 0$

For instance, when $m=2$, the sum is $e^0+e^{i\pi} = 1 - 1 = 0$. When $m = 3$, it’s $e^0 + e^{2i\pi/3} + e^{4i\pi/3} = 1 - \frac{1}{2} - \frac{1}{2} = 0$.

All right then. This means we keep only the terms where $q$ is a multiple of $m$:

$\displaystyle \sum_{q=\textrm{multiple of }m}^n \binom{n}{q} j^{n-q} m k^{lq/m}$

which is an integer. Call it $I$. Then

$\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = (j+k^{l/m})^n + \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = I$

or

$\displaystyle (j+k^{l/m})^n= I - \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n$

So for large $n$, $(j+k^{l/m})^n$ approaches an integer if and only if all $m-1$ quantities $j+e^{i2\pi p/m}k^{l/m}$, for $p$ from $1$ to $m-1$, have magnitude $<1$.

For instance: Let $j = 1$, $l = 1$. Then

$\displaystyle (1+k^{1/m})^n= I - \sum_{p=1}^{m-1} (1+e^{i2\pi p/m}k^{1/m})^n$

Now in the case $m = 2$,

$\displaystyle (1+k^{1/2})^n= I - (1+e^{i\pi}k^{1/2})^n = I - (1-k^{1/2})^n$

The magnitude of $1-k^{1/2}$ is $<1$ for $k<4$. In fact it’s zero for $k=1$, but then $1^{1/2}$ is an integer anyway; the point is, $k = 2$ or $3$ works: $(1+\sqrt 2)^n$ and $(1+\sqrt 3)^n$ both approach integers (the former much more quickly than the latter).
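It’s easy to watch this happen for $j=1$, $k=2$, $m=2$ (a numerical sketch):

```python
from math import sqrt

# (1 + sqrt(2))**n + (1 - sqrt(2))**n is an integer, and |1 - sqrt(2)| is
# about 0.414 < 1, so the second term dies away and the first term
# approaches an integer.
for n in range(1, 11):
    a = (1 + sqrt(2)) ** n
    b = (1 - sqrt(2)) ** n
    print(n, a, round(a + b, 9))
```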

How about $m = 3$? Then

$\displaystyle (1+k^{1/3})^n= I - (1+e^{i2\pi/3}k^{1/3})^n - (1+e^{i4\pi/3}k^{1/3})^n$

By symmetry the magnitudes of the two complex numbers are the same, so what we need is

$\displaystyle |1+e^{i2\pi/3}k^{1/3}|<1$

or, squaring the magnitude,

$\displaystyle 1 + k^{2/3} -k^{1/3}<1$

or

$\displaystyle k < 1$

So there are no integer values of $k$ that give convergence to an integer for $(1+k^{1/3})^n$. It seems evident the same is true for all $m>2$.
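A numeric check of the $m=3$ case (the function name is mine):

```python
import cmath

# |1 + e^{2 pi i/3} k^{1/3}|^2 = 1 + k^(2/3) - k^(1/3), which is >= 1
# for every integer k >= 1, so there is no convergence to an integer.
def off_axis_mag(k, m=3):
    return abs(1 + cmath.exp(2j * cmath.pi / m) * k ** (1 / m))

print([round(off_axis_mag(k), 3) for k in range(1, 6)])
```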

## Fifths get lucky (part 2)

I’ve been learning a little about pyplot, and I’ve drawn a diagram:

This shows as horizontal bars the range of “fifths” (or generators) for which n generators give an adequate major third. “Adequate” here means within 25 cents, and n is the number on the vertical axis; negative n means fifths downwards, positive n is fifths upwards. The black line is at 702 cents, a just perfect fifth. The bar colors just help distinguish different values of n (and make it a little less boring than if everything were blue).

So you can see that for just fifths, four upward fifths or eight downward fifths will work. The range where four upward fifths will work is larger, but still only 12.5 cents wide. The just fifths line is very close to the upper edge of the band, and once you go beyond it in the positive direction, there’s a small gap before you get to the range where nine upward fifths give a major third. In the other direction, ten downward fifths work, but only after a somewhat larger gap where nothing smaller than eleven fifths will do it.

Around 800 cents you see lots of bars. I wrote before about how near 720 cents the number of fifths needed for a major third is very large, because 720 cents generates a 5-equal scale with no adequate thirds. On the other hand 800 cents generates a 3-equal scale with an adequate third (400 cents), so n = -10, -7, -4, -1, 2, 5, 8 (and maybe beyond) are all solutions there, and nearby. (Similarly, near 700 cents, you have n = -8 and 4.)

## Fifths get lucky

I’ve been thinking more about musical tuning than CAs lately. For instance, four perfect fifths make (more or less) a major third. How crazy is that?

The diatonic scale most western music’s been based on for the past couple millennia comes from ancient Greece; they developed it by tuning their tetrachords using seven consecutive notes on a circle of fifths — F, C, G, D, A, E, B, for instance. Using a just perfect fifth, 701.96 cents (Pythagorean tuning), the A (four fifths up from the F) is 2807.82 cents above F, or, dropping it down two octaves, 407.82 cents. This is not a particularly great approximation to a just major third, 386.31 cents, but it’s fairly close. Close enough that when music written in octaves or fifths or fourths gave way to use of thirds, musicians developed other ways of tuning diatonic scales for better thirds rather than dumping the entire system. What we’ve ended up with is equal temperament, with 700.00 cent fifths, four of which make a 400.00 cent major third.

So for many centuries we’ve been using a scale that was based on octaves and fifths to make music that uses thirds, because it happens to contain thirds that are close enough to just. Now, it’s a little odd to be using probabilistic terms to talk about simple arithmetic — two and two isn’t likely to be four, it is four — but I think you know what I mean when I ask, how likely is that? How lucky did we get?

It’s a hard question to quantify. But let’s generously say a major third is adequate if it’s within 25 cents of just. That’s a 50 cent range you want four fifths to fall into, so a single fifth has to be within a range a quarter of that size, 12.5 cents. If you didn’t know beforehand the size of a just perfect fifth, but knew it was somewhere between 650 and 750 cents, you might guess the odds of four fifths making a major third would at most be 12.5/100, or one in eight. Though worse, maybe, because the 12.5 cent range where it works might not be entirely contained within 650 to 750. In fact it might not overlap that range at all. (Though in actuality the range is from 690.33 to 702.83 cents.)
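That window is just arithmetic: four stacked fifths, dropped two octaves, must land within 25 cents of 386.31.

```python
JUST_THIRD = 386.31  # just major third, cents
TOL = 25.0

# Solve 2400 + (JUST_THIRD +/- TOL) = 4 * fifth for the fifth.
lo = (2400 + JUST_THIRD - TOL) / 4
hi = (2400 + JUST_THIRD + TOL) / 4
print(lo, hi)  # about 690.33 to 702.83 cents
```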

On the other hand, maybe the 25 cent range where two “fifths” make a major third does overlap, or the 16.7 cent range where three “fifths” will work does. So the odds of four or fewer fifths making an adequate major third might be a little better. Still seems small though.

Oh… but you also want to consider four or fewer downward fifths, or equivalently, four or fewer upward fourths. That improves the odds.

So let’s do a little simulation. Pick a number in the range, say, from 650.0 to 750.0 and see how many fifths, up or down, it takes to make an adequate third. Repeat, and get the distribution. Then ask about things like the average number of fifths needed.

There’s a difficulty here, though: Sometimes the answers get very large. Think about 720.00 cents. The notes that “fifth” generates are 720.0, 240.0, 960.0, 480.0, 0.0, 720.0… and it just repeats those five notes over and over; 720.00 generates a 5-equal scale. None of those notes is an adequate third, so you can run forever looking for it.

Of course you have pretty much zero chance of picking 720.00 at random, but if you pick, say, 719.932771, you’ll have to add a lot of fifths before hitting an adequate third. (1099 of them, looks like.) You’ll get occasional large numbers, then, and they’ll have a big impact on the mean value. The answer you get will fluctuate a lot depending on which large numbers you end up with.

This is why medians were invented.

So I wrote a little Python script to do this. If you take the range of possible “fifths” as 650 to 750 cents, then there’s about a 22% chance four or fewer fifths, up or down, will produce an adequate third. The median number of fifths required to make an adequate third: 11.
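The original script isn’t shown; here’s a sketch along the same lines (my own code, so the exact numbers may differ a bit in detail):

```python
import random
import statistics

JUST_THIRD = 386.31  # cents
TOL = 25.0  # "adequate" = within 25 cents of just

def fifths_needed(fifth, max_n=100000):
    """Smallest number of generators, stacked up or down (mod the octave),
    landing within TOL cents of a just major third."""
    for n in range(1, max_n):
        for sign in (1, -1):
            if abs(sign * n * fifth % 1200 - JUST_THIRD) <= TOL:
                return n
    return None

random.seed(1)
counts = [fifths_needed(random.uniform(650.0, 750.0)) for _ in range(5000)]
print(statistics.median(counts))                  # median fifths needed
print(sum(c <= 4 for c in counts) / len(counts))  # fraction needing <= 4
```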

I think it’s safe to say if you needed 11 perfect fifths to make an adequate major third, the system upon which western music developed would have been entirely different. Different how, I have no idea, but different. Needing only four fifths was a “lucky” break… not win-the-lottery lucky, but definitely beating-the-odds lucky.