Three gaps, part 2

Previously we saw that if you build a musical scale by starting at F and adding notes, each a perfect fifth (702 cents, approximately) above the previous, modulo an octave, a pattern starts to emerge: Each scale has at most three kinds of intervals between consecutive notes, or what we call gaps. If there are three kinds, then the largest is equal to the sum of the two smaller ones. When you add a note to such a scale, it splits one of the large gaps into one of each of the two other kinds; eventually you split all the large gaps and are left with a scale having only two kinds of gaps.

At least that’s the pattern up through seven notes. Does it continue?

Adding an eighth note, F♯, splits the whole tone between F and G into two semitones — but, contrary to what you might expect, they’re not equal; one is a diatonic semitone and one is a slightly larger chromatic semitone, 114 cents. It’s yellow. So in the eight note scale there are three kinds of gaps again. (And the big one, the whole tone, is the sum of the two others.)

Add notes nine through twelve and you split the remaining whole tones, ending up with a twelve note scale with two kinds of gaps, diatonic and chromatic semitones.

And you can still go on. The thirteenth note splits a chromatic semitone into a diatonic semitone plus a Pythagorean comma (which is very small, about 24 cents, and grey), so we have three kinds of gaps again, with the chromatic semitone being the sum of the diatonic semitone and the Pythagorean comma. (In the diagram I’m running out of room, so I just show the sequence number of each note, not the letter name.)

And if you keep going, when you get to the 17th note it splits the last of the chromatic semitones and you have a scale with two kinds of gaps, diatonic semitones and Pythagorean commas. And so on. You can keep this up for weeks if you want to.
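
If you want to check this numerically rather than by drawing diagrams, here’s a minimal Python sketch of the construction: stack a generator interval modulo an octave and list the distinct gap sizes of each scale. (The function and variable names are just for illustration.)

# Stack a generator interval modulo an octave and report the distinct
# gap sizes (in cents) of the resulting scale.
import math

OCTAVE = 1200.0
FIFTH = 1200.0 * math.log2(3 / 2)     # pure perfect fifth, ~702 cents

def distinct_gaps(num_notes, generator=FIFTH, octave=OCTAVE, tol=1e-9):
    notes = sorted((k * generator) % octave for k in range(num_notes))
    gaps = [b - a for a, b in zip(notes, notes[1:])]
    gaps.append(octave - notes[-1] + notes[0])    # wrap-around gap
    sizes = []
    for g in sorted(gaps):                        # merge nearly equal gaps
        if not sizes or g - sizes[-1] > tol:
            sizes.append(g)
    return sizes

for n in range(2, 18):
    sizes = distinct_gaps(n)
    note = ""
    if len(sizes) == 3:
        note = "largest = sum of others: %s" % math.isclose(sizes[2], sizes[0] + sizes[1])
    print(n, [round(s, 1) for s in sizes], note)

For the chain of pure fifths it reports two gap sizes at 2, 3, 5, 7, 12, and 17 notes, and three everywhere else in that range; swapping a different interval in for the generator lets you explore the generalizations discussed below.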

What’s intriguing is that the 5-note scale, called a pentatonic scale, is widely used especially in folk music; the 7-note diatonic scale is the basis of most mainstream Western music; the 12-note chromatic scale (in a slightly different tuning) is what you find on a piano keyboard; and while the 17-note scale has no significant role in Western music, the 13th-century Islamic music theorist Safi al-Din al-Urmawi developed scales based on division of the octave into 17 notes. Note those are all 2-gap scales. Meanwhile the 3-gap scales with 4, 6, 8, 9, 10… notes don’t turn up much at all. Hm.

Now, all of this can be generalized. You can use tempered fifths (as opposed to pure), you can use other intervals like major or minor sixths, pure or not; heck, you can use any interval you want to generate scales. For that matter you can use a tempered octave as your circle, or a perfect twelfth or something else. And if you do you always find the same pattern. Every scale has one, two, or three kinds of gap, and if there are three kinds, the largest gap is the sum of the other two.

Yes, that’s what you observe, but is it always true? It is, and that’s the three-gap theorem.

Wikipedia states it as

if one places n points on a circle, at angles of θ, 2θ, 3θ … from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. When there are three distances, the larger of the three always equals the sum of the other two.

An article by Peter Shiu [paywall] gives it as

Let α > 0 be an irrational number and n > 1. For 1 ⩽ m ⩽ n, order the fractional parts of mα to form an increasing sequence (b_m):

\displaystyle b_1 < b_2 < \cdots < b_n

Then there are at most three distinct values in the set of gaps g_m defined by

\displaystyle g_m = b_{m+1} - b_m, \quad 1 \leqslant m \leqslant n, \quad \textrm{with } b_{n+1} = 1 + b_1

Moreover, if there are three values, then the largest one is the sum of the other two.
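
To tie that statement back to the musical picture, here’s a quick sketch that checks it for α = \log_2(3/2), which makes the fractional parts of mα the stacked fifths measured in octaves rather than cents.

# Check the statement for the fractional parts of m*alpha, m = 1..n.
import math

alpha = math.log2(3 / 2)      # stacking perfect fifths
n = 13                        # a scale size with three kinds of gaps

b = sorted((m * alpha) % 1.0 for m in range(1, n + 1))    # increasing b_1..b_n
gaps = [b[i + 1] - b[i] for i in range(n - 1)] + [1 + b[0] - b[-1]]

values = []
for g in sorted(gaps):        # collect the distinct gap values
    if not values or g - values[-1] > 1e-9:
        values.append(g)
print("distinct gaps:", [round(v, 5) for v in values])
if len(values) == 3:
    print("largest = sum of other two:",
          math.isclose(values[2], values[0] + values[1]))

With n = 13 it finds three gap values, the Pythagorean comma, the diatonic semitone, and the chromatic semitone (as fractions of an octave), and confirms the largest is the sum of the other two.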

Despite the musical roots going back to Pythagoras and Safi al-Din al-Urmawi, this theorem wasn’t proved until the late 1950s.

Shall we look at a proof? Sure. In the next part.


Three gaps, part 1

From David Eppstein’s blog I learned of a new Wikipedia article about the three-gap theorem. The who? I’d never heard of it myself, but was familiar with the general idea, having written about an application of it in a long article on scales for xenharmonic music.

In a more conventional music context, think about notes in the range from one note, say F, to the next F an octave higher. Scales, if you will. Here’s a representation of that. Ignore the black box for now and just focus on the white rectangle. That’s an octave, with our first note, F, at the left end.

(These diagrams may be rather small and hard to see; click on them to expand them.) Notes an octave apart are considered to be equivalent for our purposes, so you can think of the left end as being joined to the right.

One way to talk about the distances between musical pitches — intervals, as they’re called — is to use a unit called cents, where 1200 cents make an octave. We’re going to generate a scale based on an interval called a perfect fifth, which is X\equiv 1200\log_2 (3/2) cents — about 702 cents. So we’ll take our note F, shift it to the right by a distance X (see arrow below), and call that new note C.

We’ll call the interval between two consecutive notes a gap. Here there are two gaps, one from F to C which is a perfect fifth, 702 cents, and one from C to F which is a perfect fourth, 498 cents. These are shown in red and green respectively, and they add up to one octave, 1200 cents. This is a rather minimalist scale, two notes, with two kinds of gaps. (The number in the black box tells you how many notes are in the scale.)

Now go up a fifth again, starting from C. That puts you past the right end of the octave, but remember we identify the right end with the left end, so we can think of the arrow as going to the right end and then continuing from the left end to the new note, G.

G lands between F and C. It’s 2X above F, minus an octave, which works out to 204 cents, an interval called a whole tone. From G to C is 702 cents minus 204 cents or 498 cents, so G splits the perfect fifth from F to C into a whole tone (in blue) plus another perfect fourth (again in green). In this three note scale there again are two kinds of gaps.

Going up another perfect fifth gives us D, between C and F and splitting that perfect fourth into a whole tone and a minor third (294 cents). Now we have a four note scale with three kinds of gaps. The minor third is orange. Notice the big gap (perfect fourth) is equal to the sum of the other two.

The next perfect fifth gives us A, which splits the remaining perfect fourth into a whole tone and a minor third. This five note scale is back to having two kinds of gaps.

Add a sixth note, E, and one of the minor thirds gets split into a whole tone and a diatonic semitone (90 cents), shown here in purple. There are three kinds of gaps. The big one (the minor third this time) is again equal to the sum of the other two.

But adding a seventh note, B, splits the other minor third into a whole tone and a diatonic semitone, and once again there are two kinds of gaps.

At this point we have a diatonic scale, or in less pedantic terms, a C major scale if you start on C. (Or A minor if you start on A.)

You start to see a pattern. Each scale has at most three kinds of gaps. If there are three kinds, then the largest is equal to the sum of the two smaller ones. When you add a note to such a scale, it splits one of the large gaps into one of each of the two other kinds; eventually you split all the large gaps and are left with a scale having only two kinds of gaps. But maybe we’re getting ahead of ourselves? That’s the pattern so far, but does it continue?
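
If you’d rather let the computer keep track, here’s a small Python sketch that adds the notes in the same order and prints each scale’s gaps in cents.

# Add F, C, G, D, A, E, B one perfect fifth at a time (mod the octave)
# and print the gaps of each scale as it grows.
import math

octave = 1200.0
fifth = 1200.0 * math.log2(3 / 2)     # ~702 cents
names = ["F", "C", "G", "D", "A", "E", "B"]

pitches = {}
for i, name in enumerate(names):
    pitches[name] = (i * fifth) % octave
    if len(pitches) < 2:
        continue
    scale = sorted(pitches.items(), key=lambda kv: kv[1])
    gaps = []
    for (n1, p1), (n2, p2) in zip(scale, scale[1:] + scale[:1]):
        gaps.append("%s-%s %.0f" % (n1, n2, (p2 - p1) % octave))
    print("%d notes:" % len(scale), ", ".join(gaps))

The last line it prints is the diatonic scale above: five whole tones of 204 cents and two diatonic semitones of 90.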

We’ll see in part 2.

If it’s the fourth this must be Thursday

What’s the day of the week of [insert date here]? Do it in your head.

There’s some similarity between calculating the phase of the moon for a given date and calculating the day of the week for a given date. Also some differences.

Similarly, how one does it depends on whether one’s doing it in one’s head or on a computer. (And how one does it on a computer depends on the software environment. Modern software libraries tend to make calculating the day of the week a complete non-problem, for instance. Whereas if you were programming in FORTRAN in the 1970s, it was on you.)

Differently, though, in terms of accuracy. The phase of the moon varies continually and slowly enough that getting it right to within a day or so is probably good enough, at least if you’re doing a mental calculation; if more precision matters, you probably want to do it in software anyway. Whereas if you want to know if June 17 is a Monday, finding out it’s probably within a day or so of Monday isn’t likely to be good enough. For the day of the week, either it’s right or it isn’t.

Anyway, if you were to do it, how would you do it?

We’ll specify a date with century, year, month, and day numbers. The day number D is just the number from 1 to 31 denoting which day of the month it is. Now, for month numbers, M, because leap years make things so complicated, let’s use years that start on March 1 and end on February 28 or 29. Within a March-to-February year we can figure out the number of days between any two dates without having to worry about whether it’s a leap year or not. So instead of M running from 1 (January) through 12 (December) we’ll use 3 (March) through 14 (February). January and February will be regarded as belonging to the preceding calendar year. For March through December the year number Y is the last two digits of the year (AD) and the century number C is the preceding digits. For January and February, use the same Y and C as for the preceding March.

What about BC years? Forget ’em. We’re dealing with Gregorian calendar only here because otherwise, ow. You can develop formulas for Julian calendar and BC and whatever else on your own, if you insist.

Okay. For 10 Apr 1937 we have C = 19, Y = 37, M = 4, D = 10. For 2 Feb 1600 we have C = 15, Y = 99, M = 14, D = 2. Got it?

If you know x, where (x + 1) mod 7 is the weekday (encoded as an integer from 0, for Sunday, through 6, for Saturday) of the 1st of the month, then the formula for any other day in that month is simple: (x + D) mod 7.

How do you get x? If every month were 30 days then it’d be x = (y + 30(M–3)) mod 7 = (y + 2M + 1) mod 7, and therefore the weekday of month M, day D would be (y + 2M + 1 + D) mod 7, where (y + 1) mod 7 is the weekday of 1 Mar. But unfortunately it’s more complicated. “Thirty days hath September…” (Or do you know this trick? Make a fist with your left hand. You have four knuckles, and three valleys between the knuckles. Label the knuckles and valleys with the month names starting from January: January, March, May, and July are knuckles; February, April, and June are valleys. Continue on the right fist: August, October, and December are knuckles and September and November are valleys, with a knuckle and a valley left over. The knuckle months have 31 days, the valley months don’t. You can get “JMMJ AONx” tattooed on your knuckles, if you want a good conversation starter.)

1 Mar is day 1 of the year, or 0 days more than (3–3) x 30 + 1. 1 Apr is the 32nd day, or 1 day more than (4–3) x 30 + 1. 1 May is the 62nd day, or 1 day more than (5–3) x 30 + 1. And so on. The whole table of these offsets, the number of days by which day D of month M falls later than 30 x (M–3) + D days past the last day of the preceding year, is:

March 0
April 1
May 1
June 2
July 2
August 3
September 4
October 4
November 5
December 5
January 6
February 7
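
Rather than taking that table on faith, you can recompute it from the month lengths of a March-to-February year; here’s a quick sketch (any leap day falls at the very end of that year, so it doesn’t affect these offsets).

# Recompute the offsets m_M from the month lengths of a March-to-February year.
lengths = {3: 31, 4: 30, 5: 31, 6: 30, 7: 31, 8: 31, 9: 30,
           10: 31, 11: 30, 12: 31, 13: 31, 14: 28}    # 13 = January, 14 = February

day_of_year = 1                                       # 1 Mar is day 1
for M in range(3, 15):
    print(M, day_of_year - (30 * (M - 3) + 1))        # m_M for month M
    day_of_year += lengths[M]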

So now if you know y, and you’ve memorized that table (call those numbers m_M) then the formula becomes:

(y + 2M + 1 + m_M + D) mod 7

Memorizing that table’s a pain, but what else can you do? (Well, see below.) Note we can absorb additive constant values into y, so let’s just use

(y + 2M + m_M + D) mod 7

and just define y as whatever works.

Now, if you know the weekday of a date in some century, and if there were no leap years, then you could get the weekday for any date in that century just by using

(z + Y + 2M + m_M + D) mod 7

because in any year the weekday for a given date is one later than it was the previous year. (365 mod 7 = 1.) Here z is, well, it’s what makes the formula come out right for the date you know.

But there are leap years, which add another 1 day shift from one year to the next. So we have to add 1 for each leap year from year 0 through Y–1, inclusive. Recall that Y = 0 is not a leap year, in any century, because by Y = 0 we mean 1 Mar of calendar year 0 through 28 Feb of calendar year 1. On the other hand Y = 3 is 1 Mar 03 through 29 Feb 04, and is a leap year! It won’t shift any days in that year but it’ll shift days starting in Y = 4. So — ignoring the century leap year rule for a moment — the number of leap years to take into account is just floor(Y/4). We have to add that number to our tally.

So for dates in that century we use:

(z + Y + floor(Y/4) + 2M + m_M + D) mod 7

How about dates in the next century? Well, in most cases a century year (our Y = 99 year) is not a leap year so there are 24 leap years and 76 standard years in a century, or (24 x 2 + 76 x 1) mod 7 = 5 weekdays shift. If all centuries were like that it’d be

(z + 5C + Y + floor(Y/4) + 2M + m_M + D) mod 7

I’m still calling the constant z, but it’s whatever makes things work.

But every 400 years the century year is a leap year after all. So we have to add 1 day back in for every four centuries:

(z + 5C + floor(C/4) + Y + floor(Y/4) + 2M + m_M + D) mod 7

Now we just need to determine z. Pick a day at random, oh let’s say Tuesday, 2 Jan 2018. Then it’s (remembering Y = 17 and M = 13 for January 2018)

(z + 5 x 20 + floor(20/4) + 17 + floor(17/4) + 2 x 13 + 6 + 2) mod 7

= (z + 2 + 5 + 3 + 4 + 5 + 6 + 2) mod 7

= (z + 6) mod 7

But this should equal 2 for Tuesday, so z = 3.

And now we can do any date. 17 Apr 2733 is:

(3 + 5 x 27 + floor(27/4) + 33  + floor(33/4) +  2 x 4 + 1 + 17) mod 7

= (3 + 2 + 6 + 5 + 1 + 1 + 1 + 3) mod 7 = 1 = Monday. Right, cal(1)?


$ cal 4 2733
     April 2733
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30

Right!

Oh, one more thing:

The red dots are the values of m_M plotted versus M. The blue dots are m_M + 1. The line is y = 0.6M – 1.3, and it passes between each pair of dots. So that means instead of memorizing m_M and using

(3 + 5C + floor(C/4) + Y + floor(Y/4) + 2M + m_M + D) mod 7

if you’d rather you could use

(3 + 5C + floor(C/4) + Y + floor(Y/4) + 2M + floor((6M – 13)/10) + D) mod 7
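
Just as a sanity check, here’s that second formula as a Python function; it reproduces the two examples worked above.

# The second formula. Gregorian dates only; returns 0 = Sunday .. 6 = Saturday.
def weekday(year, month, day):
    M = month
    if M < 3:                 # January and February belong to the preceding March-year
        M += 12
        year -= 1
    C, Y = divmod(year, 100)
    return (3 + 5 * C + C // 4 + Y + Y // 4 + 2 * M + (6 * M - 13) // 10 + day) % 7

print(weekday(2018, 1, 2))    # 2 = Tuesday
print(weekday(2733, 4, 17))   # 1 = Monday, agreeing with cal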

So just memorize the first formula and m_M, or memorize the second formula, and keep in mind you have to use Y and C for the preceding year and add 12 to M if the month is January or February, and know your Gregorian calendar limits for the particular geographical location in question (remember, February 1712 had 30 days in Sweden), and you can figure out the weekday in your head!

… Either way, the lunar phase is easier.

 

Plausible phases

So why does that lunar phase method work anyway?

It’s mostly fairly obvious once you start thinking about it. For instance, suppose you know the age of the moon on a given day. Then its age a day later is, kind of obviously, one day more. Except when you hit new moon and it resets, every 29.53… days, and doing modulo 30 approximately accounts for that. So it makes sense for the formula to be something like (x+D) mod 30, where D is the day number and x doesn’t depend on the day (it’s approximately the age, minus 1, on the first of the month).

Now suppose you know the age of the moon on the first day of a given month. Then its age a month later is…? Well, in a 30 day month it’s about 0.47 days more, that is, 30–29.53; in a 31 day month it’s 1.47 days more. Or in an average month of about 365.25/12 ≈ 30.44 days it’s about 0.9 days more, close to 1 day. (Actually, since we’re working within the 2000 to 2018 Metonic cycle, the average year length that counts is 1/19 of five leap years and 14 normal years, which comes out to 365.263 days, not that the difference matters much.) If it weren’t for February it’d make sense for the formula to be something like (y+M+D) mod 30, where M is the month number and y doesn’t depend on the day or month (it’s the age at the start of the year, minus 2).

February’s short, though. How to deal with that? Hang on, we’ll get there.

Now suppose you know the age of the moon on the first day of a given year. Then its age a year later is… well, 365.25 days is 12.36 synodic months, so the age advance is 0.36 x 29.53 = 10.9 days. For a decent approximation, 11 days age advance per year, so a formula something like (z+(Y x 11)+M+D) mod 30, where Y is (last two digits of) the year number and z is a constant (the age on 1 Jan 2000, minus 2), would seem reasonable. If it weren’t for February.

It’s too bad February isn’t the last month of the year, then you wouldn’t have to deal with its shortness. Well, make it the last month of the year! Let’s pretend the year begins on March 1. Now within a month each day is a day greater age than the previous; within a (March-to-February) year each month is a day greater age than the previous; and each year is 11 days greater age than the previous. Looked at like this, we don’t have to worry especially about the age advance from February to March. So we could use a formula like the above, if we said March was month 1, April month 2, … January month 11, February month 12, and if we used the preceding year number in January and February. That is, for months March through December use

(z+(Y x 11)+(M–2)+D) mod 30

(here M is still 3 for March, et cetera) and for January and February use

(z+((Y–1) x 11)+(M+10)+D) mod 30 .

That gets a little complicated, though. But we haven’t defined z yet; we can absorb additive constants into it. That is, redefine z–2 as z so we have for March to December

(z+(Y x 11)+M+D) mod 30

and for January and February

(z+((Y–1) x 11)+M+12+D) mod 30 .

But now notice the latter is equal to

(z+(Y x 11)–11+M+12+D) mod 30

or

(z+(Y x 11)+M+1+D) mod 30

which means we don’t have to mess around with the year number or with month numbers like 13 and 14; just use the same formula for all months, except add 1 in January and February!

Nearly there. We just need to establish z. The age of the Moon at 00:00 UTC on 1 Jan 2000 was 23.882 days, so z is 20.882 or about 21. Or more conveniently, about 22, so we can replace z+(Y x 11) with ((Y+2)x 11), and that’s the formula. There are enough approximations here that one can justifiably worry about their accumulating to large errors, but in fact they tend to cancel out. (1 day per month is nearly exact, modulo 30 arithmetic gives you age increases that are too small, 11 days per year gives you age increases that are too large for non leap years and too small for leap years, z=22 means you start out the cycle with an age that’s too large. Sticking with z=21 would have made 1 Jan 2000 more accurate but would have led to too large errors later in the cycle.)

Depends on the phase of the moon

[Edited for corrections and updates]

What’s the phase of the moon on [insert date here]? Do it in your head.

This is something I found on the Internet once, don’t know where; I learned it but eventually forgot it. Having put it back together again I’m posting it here for future reference. It’s more or less similar to the method, attributed to John Conway, described at much greater length at http://gmmentalgym.blogspot.com/2013/01/moon-phase-for-any-date.html. The two are not mathematically equivalent, though, giving slightly different results for some dates. See below.

To start with, assume the required date is in the range from 1 Jan 2000 to 31 Dec 2018. Take the last two digits of the year, add 2, and multiply by 11.

(Good thing multiplying by 11 is easy. You just add the two digits to the same two digits shifted one place left: e.g. 17 x 11 = 17 + 170 = 187. Or another way to think of it: the last digit of the result is the same as the last digit of the original number, the first digit of the result is the same as the first digit of the original number, and the middle digit of the result is the sum of the two digits of the original number. Of course if that sum exceeds 9, you have to add 1 to the first digit of the result.)

Now add the number of the month — 1 for January, 2 for February, and so on, 12 for December. If the month is January or February, add another 1.

Now add the day of the month.

Now divide that number by 30 and take the remainder. The result is, approximately, the age of the moon — the number of days since the new moon. 0 or 29 would be new moon, 14 or 15 would be full moon, about 7 would be first quarter, about 21 would be third quarter. It might be off a day or so (again, see below), but close enough for mental calculation, right?

What about dates outside that range? The moon returns to the same phase every 19 years exactly — or almost exactly; actually more like 19 years plus 2 hours. This is called a Metonic cycle. So add or subtract a multiple of 19 years to get into the 2000–2018 range and then proceed as above. Obviously for dates far in the past or future this breaks down, but for dates a few decades into the past or future it’s fine.

Example: July 20, 1969. This is outside our 2000–2018 range but if you add 2×19=38 years you have July 20, 2007. So: for year 2007, (7+2)x11 = 99; for month 7 add 99+7=106; for day 20 add 106+20=126. To get the remainder when divided by 30, subtract the multiple of 30: 126-120=6. Age of the moon is 6 days, corresponding to just about first quarter. Of course July 20, 1969 is the date of the Apollo 11 landing. If you do the same for the other moon landing dates you get ages in the range 5 to 9. For reasons having to do with the angle of the Sun, NASA chose to do all the moon landings at about first quarter.

Example: January 1, 2018. (18+2)x11 + (1+1) + 1 = 223; 223-210=13. The moon is (about) 13 days old on New Year’s Day, so is close to full. In fact full moon occurs at about 2:00 UTC on January 2, or evening of January 1 in the United States. We get two full moons in January and March, none in February!
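
Here’s the whole recipe as a Python function, just to check it; it reproduces both examples.

# The mental method: shift into 2000-2018 by whole Metonic cycles, then
# compute ((Y + 2) * 11 + month + day (+1 for January or February)) mod 30.
def moon_age(year, month, day):
    while year < 2000:
        year += 19            # same phase 19 years later
    while year > 2018:
        year -= 19
    total = (year % 100 + 2) * 11 + month + day
    if month <= 2:
        total += 1            # the extra 1 for January and February
    return total % 30

print(moon_age(1969, 7, 20))  # 6  -> about first quarter (Apollo 11)
print(moon_age(2018, 1, 1))   # 13 -> nearly full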

How well does this work? It depends on how you define “how well”. You’re getting an integer estimate of the age of the moon, which you can compare to the actual age of the moon… when? At 00:00 UTC? Noon?

Choosing to compare to the age at 00:00 UTC, here’s a plot of the error in this calculation for the years 2000 to 2018. There are just two dates where it’s more than 1.6 days off and most dates — 6269 out of 6940 — are less than 1 day off.

For years outside this range but within the nearest 4 Metonic cycles each way (i.e. from 1 Jan 1924 to 31 Dec 2094) I find there are only two dates where the error is (slightly) larger than 2 days, and average errors in each cycle are under 0.35 days. Outside that 171 year range it gets worse.

And here’s a comparison to the results of the method from gmmentalgym. The blue line is the same as above and the red line is gmmentalgym; they coincide for the first 10 years so you see only the red there. After that they differ by 1 day.

You can see gmmentalgym is systematically high in the second half of the cycle, so does worse. Note, though, that if we compared to the age at noon UTC, it would shift this whole graph down by half a day and the red line would be better than the blue.

On the other hand, here’s the 1962–1980 cycle (comparing at 00:00):

Here it looks like the gmmentalgym method is a little better, but would be much worse if comparing at noon UTC. On balance, I wouldn’t say either method has the clear upper hand — they’re both decent approximations, easily done in one’s head.

My brother the chimp

I’ve written about this before, over on my genealogy blog, but it’s only tangentially related to genealogy. It’s not really recreational mathematics, either, but I want to expand on it somewhere, and since it involves a logical proof, this seems like the least silly place to choose.

So: All humans descend from a common ancestor. Or so many people believe. Literalist believers in the Old Testament would say Adam and Eve are common ancestors of us all. Believers in evolution go further and believe all life on Earth descends from a common ancestor. I’m in the latter group.

When did the most recent common ancestor (MRCA) of all humans live? Simulations suggest it was more recently than you might expect: 2000 to 5000 years ago. (It’s been argued that all persons of European descent share a common ancestor only about 1000 years ago, and genetic analysis [paper, FAQ] seems to support that.) Furthermore, going only a few thousand more years back, you arrive at a time when literally every person then alive fell into one of two categories: Those who are ancestors of no one alive today, and those who are ancestors of everyone alive today. I don’t know if there’s a term for that state, so I’ll make one up: Universal common ancestry (UCA). Understand these are probabilistic claims; it would certainly be possible that 20,000 years ago there were people whose living descendants are some but not all of us. But the probability of such a thing is so small it is almost indistinguishable from zero: Something that improbable really cannot be expected to happen even once in the history of the universe. And even if it did, another few thousand years back a non-UCA state would be even more unlikely than that. The probability of UCA back then, and any time previous to that, is absolute certainty for all practical purposes.

Of course this doesn’t just apply to humans: We can talk about the MRCA of all chimpanzees alive today, or all hominins (humans and chimpanzees), or all ruby throated hummingbirds, or whatever.

Now, here’s the weird thing: Long ago, 5 million years or more, amongst members of the species Nakalipithecus or something like it, there lived an individual (call them Nico) who had (at least) two offspring, Chris and Harley. They were Nakalipithecus (or whatever) too, of course, and they mated with other Nakalipithecus and had Nakalipithecus children, who had more Nakalipithecus children, who had … well, you get it. But here’s the thing: All of Chris’s living descendants are chimpanzees. All of Harley’s are human.

That might surprise you. And think how surprised Chris and Harley would be.

But it’s true, and it’s provable (in, again, a certain-for-all-practical-purposes sense).

Nico, you see, was the MRCA of all living hominins. Nico has all living humans and all living chimps as their descendants, and since they’re the most recent common ancestor, no one in subsequent generations can make the same claim.

Now of course much more recent individuals are common ancestors of all living humans (and no chimps), or of all living chimps (and no humans). How about ancestors of all living humans and some living chimps? Or vice versa? But no. We have UCA for humans anytime before several thousand years ago, and I’m sure chimp UCA must be even more recent than that. More than a few thousand years ago, let alone a few million, anyone who was an ancestor of some living chimps was an ancestor of all living chimps. And likewise humans, so any individual with any living humans and any living chimps in their descendants would have to be a common ancestor to all hominins. And Nico was the MRCA, so anyone born after Nico would have to have no living descendants, or humans but no chimps, or chimps but no humans.

We know Nico must have had two or more offspring. If they had only one, then it would have been a common ancestor of all hominins and Nico would not be the MRCA. One of them was Chris, and the other was Harley… and they were born after Nico (duh). The argument above applies to anyone born after Nico, and that includes Chris and Harley. One had to be an ancestor of all living chimps (but no living humans) — that’s Chris — and the other the ancestor of all living humans (but no living chimps) — that’s Harley.

A scenario in which that makes sense is one like this. Seven million years ago, a group of Nakalipithecus woke up one morning, checked Facebook, went to work, or went out to play, or whatever, not realizing that fifty miles away there was a great huge lake held back by a natural dam which had been slowly eroding. That day the dam broke. There was an enormous flood, many of the Nakalipithecus died, and the ones that survived found themselves on two sides of a new, permanent body of water. And the topography was such that none of them were able to cross the water or go around it, so the survivors on one side never again encountered the survivors on the other.

All right, this is not a particularly likely scenario, but it’s a thought experiment, okay?

Nico, alas, died in the flood, but not before saving the life of Chris. Harley, meanwhile, was swept off and ended up on the other side of the water. Within a few thousand years all the Nakalipithecus on one side were descendants of Chris (but not Harley) and all on the other side were descendants of Harley (but not Chris). The two groups evolved apart and eventually gave rise to chimps on one side, humans on the other.

Granted, speciation doesn’t usually work like that. Much of the time geographical separation is a factor in speciation, but not necessarily all the time, and in any case the geographic separation usually isn’t instantaneous. Mountains rise slowly, rivers change course over centuries, and so on.

But it doesn’t matter whether the two groups are sundered in hours or eons. Or even if they are geographically sundered at all. The point is, for whatever reason, two groups evolve away from one another. At the start they are one species and they interbreed frequently, but the frequency of that decreases more and more. And it may decrease over a very long time period, but: There’s a single, momentary last time. A last time a member of one of the groups breeds with a member of the other group. And not long before that, a last time one individual breeds with a member of one group while their sibling breeds with a member of the other, and they have fertile offspring leading to descendants alive today. Their parent is our common ancestor, and the common ancestor of all chimps too. Their children, though, may be your ancestors, or may have been Washoe’s, but not both.

 

Why an almost integer?

Over on https://mathlesstraveled.com there’s a series of posts going on having to do with this:

\displaystyle \begin{array}{cc} n & (1 + \sqrt 2)^n \\ \hline 1 & 2.414213562373095 \\ 2 & 5.82842712474619 \\ 3 & 14.071067811865474 \\ 4 & 33.970562748477136 \\ 5 & 82.01219330881975 \\ 6 & 197.99494936611663 \\ 7 & 478.002092041053 \\ 8 & 1153.9991334482227 \\ 9 & 2786.0003589374983 \\ 10 & 6725.9998513232185 \end{array}

See that? As n increases, (1+\sqrt 2)^n approaches integer values. Odd, huh? Why does it do that?

Despite what should have been a dead giveaway hint, I didn’t figure out the approach revealed in this post. Embarrassing. But having failed on the insight front, what can I do on the obvious generalization front?

Let’s think about quantities of the form

\displaystyle (j+k^{l/m})^n

where j, k, l, m, and n are nonzero integers; l/m is in lowest terms and l>0, m>1, and n>0. For now let’s also restrict m to primes.

To investigate that we’ll consider

\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n

The complex quantities e^{i2\pi p/m} lie on the unit circle in the complex plane and are the vertices of an m-gon. Using the binomial expansion, the sum is

\displaystyle \sum_{p=0}^{m-1} \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (e^{i2\pi p/m}k^{l/m} \right)^q

or

\displaystyle \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (\sum_{p=0}^{m-1} e^{i2\pi pq/m}\right) k^{lq/m}

Now, for the terms where q is a multiple of m, e^{i2\pi pq/m} is equal to 1 and the sum over p equals m.

Otherwise, since m is prime and q is not a multiple of m, the exponents pq mod m run through all the residues 0, 1, …, m–1, so we’re just summing over the points on the unit circle:

\displaystyle \sum_{p=0}^{m-1} e^{i2\pi p/m}

which is the sum of a geometric series so

\displaystyle = \frac{1-e^{i2\pi}}{1-e^{i2\pi/m}} = 0

For instance, when m=2, the sum is e^0+e^{i\pi} = 1 - 1 = 0. When m = 3, it’s e^0 + e^{2i\pi/3} + e^{4i\pi/3} = 1 - \frac{1}{2} - \frac{1}{2} = 0.

All right then. This means we keep only the terms where q is a multiple of m:

\displaystyle \sum_{q=\textrm{multiple of }m}^n \binom{n}{q} j^{n-q} m k^{lq/m}

which is an integer. Call it I. Then

\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = (j+k^{l/m})^n + \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = I

or

\displaystyle (j+k^{l/m})^n= I - \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n

So for large n, (j+k^{l/m})^n approaches an integer if and only if all m-1 of the quantities j+e^{i2\pi p/m}k^{l/m}, p = 1, …, m-1, have magnitude < 1.

For instance: Let j = 1, l = 1. Then

\displaystyle (1+k^{1/m})^n= I - \sum_{p=1}^{m-1} (1+e^{i2\pi p/m}k^{1/m})^n

Now in the case m = 2,

\displaystyle (1+k^{1/2})^n= I -  (1+e^{i\pi}k^{1/2})^n = I - (1-k^{1/2})^n

The magnitude |1-k^{1/2}| is less than 1 for k<4. In fact it’s zero for k=1, but then 1^{1/2} is an integer anyway; the point is, k = 2 or 3 works: (1+\sqrt 2)^n and (1+\sqrt 3)^n both approach integers (the former much more quickly than the latter).
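
A quick numerical look at those two cases: the distance from the nearest integer shrinks in both, much faster for \sqrt 2, since |1-\sqrt 2| \approx 0.41 is much smaller than |1-\sqrt 3| \approx 0.73.

# Distance from (1 + sqrt(2))**n and (1 + sqrt(3))**n to the nearest integer.
from math import sqrt

for n in range(1, 11):
    a = (1 + sqrt(2)) ** n
    b = (1 + sqrt(3)) ** n
    print(n, round(abs(a - round(a)), 6), round(abs(b - round(b)), 6))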

How about m = 3? Then

\displaystyle (1+k^{1/3})^n= I - (1+e^{i2\pi/3}k^{1/3})^n - (1+e^{i4\pi/3}k^{1/3})^n

By symmetry the magnitudes of the two complex numbers are the same, so what we need is

\displaystyle |1+e^{i2\pi/3}k^{1/3}|<1

which, on squaring the magnitude, means

\displaystyle 1 + k^{2/3} -k^{1/3}<1

or

\displaystyle k < 1

So there are no integer values of k that give convergence to an integer for (1+k^{1/3})^n. It seems evident the same is true for all m>2.
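
For contrast, a quick check of the cube root case with k = 2: the distance from (1+2^{1/3})^n to the nearest integer bounces around instead of shrinking toward zero.

# (1 + 2**(1/3))**n and its distance from the nearest integer.
for n in range(1, 11):
    x = (1 + 2 ** (1 / 3)) ** n
    print(n, round(x, 4), "off by", round(abs(x - round(x)), 3))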