Fifths get lucky (part 2)

I’ve been learning a little about pyplot, and I’ve drawn a diagram:

This shows as horizontal bars the range of “fifths” (or generators) for which n generators give an adequate major third. “Adequate” here means within 25 cents, and n is the number on the vertical axis; negative n means fifths downwards, positive n is fifths upwards. The black line is at 702 cents, a just perfect fifth. The bar colors just help distinguish different values of n (and make it a little less boring than if everything were blue).

So you can see that for just fifths, four upward fifths or eight downward fifths will work. The range where four upward fifths will work is larger, but still only 12.5 cents wide. The just fifths line is very close to the upper edge of the band, and once you go beyond it in the positive direction, there’s a small gap before you get to the range where nine upward fifths give a major third. In the other direction, ten downward fifths work, but only after a somewhat larger gap where nothing smaller than eleven fifths will do it.

Around 800 cents you see lots of bars. I wrote before about how near 720 cents the number of fifths needed for a major third is very large, because 720 cents generates a 5-equal scale with no adequate thirds. On the other hand 800 cents generates a 3-equal scale with an adequate third (400 cents), so n = -10, -7, -4, -1, 2, 5, 8 (and maybe beyond) are all solutions there, and nearby. (Similarly, near 700 cents, you have n = -8 and 4.)
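The bars can be reproduced computationally. Here’s a sketch (my own reconstruction, not the actual pyplot script; the 25-cent tolerance and the 386.31-cent just major third are taken from the text) that scans generator sizes and collects the intervals where n generators give an adequate third:

```python
# Sketch: for a given n, find the intervals of generator size ("fifth" size,
# in cents) for which n stacked generators land within TOL cents of a just
# major third. Reconstruction for illustration, not the original script.

JUST_THIRD = 386.31   # just major third, in cents
TOL = 25.0            # "adequate" = within 25 cents

def circle_dist(a, b):
    """Distance between two pitches on the 1200-cent octave circle."""
    d = abs(a - b) % 1200.0
    return min(d, 1200.0 - d)

def adequate_ranges(n, lo=650.0, hi=750.0, step=0.005):
    """Intervals (start, end) of generator size where n generators
    (negative n means downward) make an adequate major third."""
    ranges, start = [], None
    steps = int(round((hi - lo) / step))
    for i in range(steps + 1):
        f = lo + i * step
        if circle_dist(n * f, JUST_THIRD) <= TOL:
            if start is None:
                start = f
        elif start is not None:
            ranges.append((start, f - step))
            start = None
    if start is not None:
        ranges.append((start, hi))
    return ranges
```

For n = 4 this recovers a single 12.5-cent band from about 690.33 to 702.83 cents, and for n = -8 a band containing the 702-cent just fifth.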


Fifths get lucky

I’ve been thinking more about musical tuning than CAs lately. For instance, four perfect fifths make (more or less) a major third. How crazy is that?

The diatonic scale most western music’s been based on for the past couple millennia comes from ancient Greece; they developed it by tuning their tetrachords using seven consecutive notes on a circle of fifths — F, C, G, D, A, E, B, for instance. Using a just perfect fifth of 701.96 cents (Pythagorean tuning), the A (four fifths up from the F) is 2807.82 cents above F, or, dropping it down two octaves, 407.82 cents. This is not a particularly great approximation to a just major third, 386.31 cents, but it’s fairly close. Close enough that when music written in octaves or fifths or fourths gave way to use of thirds, musicians developed other ways of tuning diatonic scales for better thirds rather than dumping the entire system. What we’ve ended up with is equal temperament, with 700.00 cent fifths, four of which make a 400.00 cent major third.

So for many centuries we’ve been using a scale that was based on octaves and fifths to make music that uses thirds, because it happens to contain thirds that are close enough to just. Now, it’s a little odd to be using probabilistic terms to talk about simple arithmetic — two and two isn’t likely to be four, it is four — but I think you know what I mean when I ask, how likely is that? How lucky did we get?

It’s a hard question to quantify. But let’s generously say a major third is adequate if it’s within 25 cents of just. That’s a 50 cent range you want four fifths to fall into, so a single fifth has to be within a range a quarter of that size, 12.5 cents. If you didn’t know beforehand the size of a just perfect fifth, but knew it was somewhere between 650 and 750 cents, you might guess the odds of four fifths making a major third would be at most 12.5/100, or one in eight. Though worse, maybe, because the 12.5 cent range where it works might not be entirely contained within 650 to 750. In fact it might not overlap that range at all. (Though in actuality the range is from 690.33 to 702.83 cents.)

On the other hand, maybe the 25 cent range where two “fifths” make a major third does overlap, or the 16.7 cent range where three “fifths” will work does. So the odds of four or fewer fifths making an adequate major third might be a little better. Still seems small though.

Oh… but you also want to consider four or fewer downward fifths, or equivalently, four or fewer upward fourths. That improves the odds.

So let’s do a little simulation. Pick a number in the range, say, from 650.0 to 750.0 and see how many fifths, up or down, it takes to make an adequate third. Repeat, and get the distribution. Then ask about things like the average number of fifths needed.

There’s a difficulty here, though: Sometimes the answers get very large. Think about 720.00 cents. The notes that “fifth” generates are 720.0, 240.0, 960.0, 480.0, 0.0, 720.0… and it just repeats those five notes over and over; 720.00 generates a 5-equal scale. None of those notes is an adequate third, so you can run forever looking for it.

Of course you have pretty much zero chance of picking 720.00 at random, but if you pick, say, 719.932771, you’ll have to add a lot of fifths before hitting an adequate third. (1099 of them, looks like.) You’ll get occasional large numbers, then, and they’ll have a big impact on the mean value. The answer you get will fluctuate a lot depending on which large numbers you end up with.

This is why medians were invented.

So I wrote a little Python script to do this. If you take the range of possible “fifths” as 650 to 750 cents, then there’s about a 22% chance four or fewer fifths, up or down, will produce an adequate third. The median number of fifths required to make an adequate third: 11.
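A minimal sketch of what such a script might look like (the 25-cent tolerance and the 650 to 750 cent range are from the text; the counting details are my assumptions, and different choices will shift the statistics somewhat):

```python
import random

JUST_THIRD = 386.31   # just major third, in cents
TOL = 25.0            # "adequate" = within 25 cents

def circle_dist(a, b):
    """Distance between two pitches on the 1200-cent octave circle."""
    d = abs(a - b) % 1200.0
    return min(d, 1200.0 - d)

def fifths_needed(f, max_n=100000):
    """Smallest number of fifths of size f, stacked up or down, that gives
    an adequate major third; None if not found within max_n fifths."""
    for n in range(1, max_n):
        if circle_dist(n * f, JUST_THIRD) <= TOL:
            return n
        if circle_dist(-n * f, JUST_THIRD) <= TOL:
            return n
    return None

def median_fifths(trials=4000, seed=1):
    """Median of fifths_needed over random 'fifths' drawn from 650-750."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        n = fifths_needed(rng.uniform(650.0, 750.0))
        counts.append(n if n is not None else 10**6)  # sentinel for "never"
    counts.sort()
    return counts[len(counts) // 2]
```

With a just fifth of 701.96 cents this returns 4, as expected, while a 720.00-cent fifth (the 5-equal generator) never succeeds.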

I think it’s safe to say if you needed 11 perfect fifths to make an adequate major third, the system upon which western music developed would have been entirely different. Different how, I have no idea, but different. Needing only four fifths was a “lucky” break… not win-the-lottery lucky, but definitely beating-the-odds lucky.

Big and natural and (5,2)c/190

Generally speaking the larger a Life object is, the less likely it is to arise from a random soup. Going by the current Catagolue census, for instance, gliders arise in Life 684 times as often as lightweight spaceships, which are seen 3.8 times as often as middleweight spaceships, which turn up 5.8 times as often as heavyweight spaceships. Or look at the statistics page: All of the still lifes of size up to 13 have arisen, and 616 of the 619 size 14 still lifes, but only 1256 out of 1353 size 15, 2484 out of 3286 size 16, 4199 out of 7773 size 17 and so on… to only 7769 out of 4,051,711 still lifes of size 24.

Now, the smallest known Life spaceship that isn’t a glider, a *WSS, or a flotilla of *WSSs is the loafer, which has population 20 in a 9 by 9 bounding box. For comparison the HWSS is 13 cells in a 7 by 4 bounding box. There are 2^81 possible states for a 9 by 9 box versus 2^28 for a 7 by 4, or 2^53 times as many — about 9 quadrillion. From that point of view it’s not too surprising no loafer has evolved naturally from a soup so far. Only 111 trillion objects have been seen so far, after all.

So what are the odds of natural occurrence of a population 49 spaceship in a 47 by 17 bounding box? Incomprehensibly tiny, you would think — never in many times the lifetime of the universe would it happen.

Well, so you might think, anyway. Evidently that thinking’s not entirely correct: [screenshot] Because that pattern evolved, not in Life but in the Life-like B38/S23, from a random D2_+2 soup, on my computer in the past few hours. It may not look like much… but it’s a spaceship. A spaceship which in 190 generations travels obliquely, 5 cells up and 2 cells to the left.

I was pretty excited by this discovery, until I checked the census for B38/S23 C1 soups, and saw that a bunch of p190 ships have been found already, the first by David S. Miller last April. Then I found out, well, re-found out these ships had been discussed extensively in a forum thread shortly after that. A thread which I read. And forgot about.

All these ships are based on the same fundamental engine. Take a look at the part on the right of the above ship. Run just that for 190 generations and you get this: [screenshot] Three of the pieces of the original pattern come back, shifted by (5, 2). The fourth piece gets changed. So this is a near spaceship by itself.

Now if you look at the part on the left and run that 52 generations you get: [screenshot] The same thing as the right half at generation zero, minus the boat. So the ship consists basically of two out of phase copies of a single engine, plus a boat, evolving in such a way that the interaction between them makes up for the lack of a boat for the left engine, and changes the evolution of the 7-bit piece in both engines to make it recur in 190 generations.

Another way to look at it: Start with an R pentomino and a boat: [screenshot] After 192 generations you get this: [screenshot] And if you add a second R pentomino in just the right place at just the right phase, it’ll react with the first R and boat in just such a way as to make a spaceship. Seems kind of miraculous, but in fact there are several ways to accomplish it. According to David S. Miller, at least 692 ways. Of which, as of today, apparently 11 have turned up in soup searches. There are also another 120 combinations of two Rs and a boat that produce puffers, rakes, and so on.

So a 47 by 17 spaceship evolving naturally? Not quite as astronomically unlikely as it looks. A remarkable system, though, and there’s nothing like it known in Life. Yet.

Integer linear recurrence sequences of order 2 for dummies


The Fibonacci sequence is defined by:

F_0 = 0\\F_1 = 1\\F_n=F_{n-1}+F_{n-2}

It goes 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55…

So, what do you think? Kind of remarkable, with those irrational numbers, trigonometric functions, complex roots, and chaos, right? What? You don’t see them? Look more closely.

Take ratios of consecutive Fibonacci numbers and they approach \phi=(1+\sqrt{5})/2 = 1.61803..., the “Golden Ratio”. Weird, right? It’s all integers, and yet they point to an irrational number:

n F_n F_n/F_{n-1}
0 0
1 1
2 1 1
3 2 2
4 3 1.5
5 5 1.666666667
6 8 1.6
7 13 1.625
8 21 1.615384615
9 34 1.619047619
10 55 1.617647059
11 89 1.618181818
12 144 1.617977528
13 233 1.618055556
14 377 1.618025751
15 610 1.618037135

But the Fibonacci sequence is just one instance of a more general integer linear recurrence sequence of order 2, defined by:

L_0 = 0\\L_1 = 1\\L_n=pL_{n-1}+qL_{n-2}

where p, q are integer constants. p=1,q=1 gives the Fibonacci sequence. (One can get even more general by choosing other starting values;  p=1,q=1,L_0=2,L_1=1 gives the Lucas sequence, for instance. But for our purposes let’s stick with L_0=0, L_1=1.)
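These sequences are easy to generate; here’s a minimal helper (my own, for anyone who wants to reproduce the tables below):

```python
def lrs(p, q, n_terms, l0=0, l1=1):
    """First n_terms of L_n = p*L_{n-1} + q*L_{n-2}, starting from L_0, L_1."""
    seq = [l0, l1]
    while len(seq) < n_terms:
        seq.append(p * seq[-1] + q * seq[-2])
    return seq[:n_terms]
```

For example, lrs(1, 1, 10) gives the Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and lrs(1, 1, 10, 2, 1) gives the Lucas sequence.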

Keeping p=1 but using q=10 we get a sequence that increases much faster, not surprisingly. It looks like the ratio of consecutive terms converges to about 3.7, but the convergence is much slower:

n L_n L_n/L_{n-1}
0 0
1 1
2 1 1
3 11 11
4 21 1.909090909
5 131 6.238095238
6 341 2.603053435
7 1651 4.841642229
8 5061 3.0654149
9 21571 4.262201146
10 72181 3.346205554
11 287891 3.988459567
12 1009701 3.507233641
13 3888611 3.851250024
14 13985621 3.596559543
15 52871731 3.780434991

If someone asks you, “what’s the next number in this sequence? 0, 1, 2, 3, 4, 5…” you can say, “that’s easy, it’s an integer linear recurrence sequence of order 2 with p=2,q=-1,L_0=0,L_1=1:”

n L_n L_n/L_{n-1}
0 0
1 1
2 2 2
3 3 1.5
4 4 1.333333333
5 5 1.25
6 6 1.2
7 7 1.166666667
8 8 1.142857143
9 9 1.125
10 10 1.111111111
11 11 1.1
12 12 1.090909091
13 13 1.083333333
14 14 1.076923077
15 15 1.071428571

And there of course the ratios of consecutive terms converge to 1.0, but slowly.

It might take you a little longer to come to grips with 0, 1, 1, –1, –3, –1, 5, 7, –3… though. But that’s just p=1,q=-2:

n L_n L_n/L_{n-1}
0 0
1 1
2 1 1
3 -1 -1
4 -3 3
5 -1 0.3333333333
6 5 -5
7 7 1.4
8 -3 -0.4285714286
9 -17 5.666666667
10 -11 0.6470588235
11 23 -2.090909091
12 45 1.956521739
13 -1 -0.02222222222
14 -91 91
15 -89 0.978021978

Whoah, chaotic! Doesn’t look like that one’s consecutive term ratio is converging to anything. Or diverging. It’s just all over the place.

One more: p=2,q=-2:

n L_n L_n/L_{n-1}
0 0
1 1
2 2 2
3 2 1
4 0 0
5 -4
6 -8 2
7 -8 1
8 0 0
9 16
10 32 2
11 32 1
12 0 0
13 -64
14 -128 2
15 -128 1

Huh. Not converging, but not chaotic. So what’s going on?

Well, let’s go back to the Fibonacci sequence. It seems to be connected with \phi. And in fact, a closed formula is

F_n = \left[\phi^n/\sqrt{5}\right]

where [x] denotes the nearest integer to x. Or another formula:

F_n = \frac{\phi^n-\left(-\phi\right)^{-n}}{\sqrt{5}}

Wait, what? You’re taking differences of powers of irrational numbers, dividing by another irrational number, and getting integers? Well, yes. Expand \phi^n and \psi^n=\left(-1/\phi\right)^n by the binomial theorem: the terms with even powers of \sqrt{5} are identical in both and cancel in the difference, while the terms with odd powers of \sqrt{5} appear with opposite signs, so the difference is a rational multiple of \sqrt{5}; the factor of \sqrt{5} then cancels in the quotient, and the recurrence (see below) guarantees the result is not just rational but an integer. Still, seems kind of weird, this intimate relationship between \phi and the Fibonacci sequence.

But consider this quadratic equation:

x^2 = x + 1

whose solutions are \phi and \psi=(1-\sqrt{5})/2=-1/\phi. Then \phi and \psi also are solutions of

x^n = x^{n-1}+x^{n-2}

(which is just the previous equation multiplied by x^{n-2}). If we define U(n)=a\phi^n+b\psi^n, where a and b are constants, then

U(n)=a\phi^n+b\psi^n=a\left(\phi^{n-1}+\phi^{n-2}\right)+b\left(\psi^{n-1}+\psi^{n-2}\right)=U(n-1)+U(n-2)
so U(n) satisfies the Fibonacci recurrence relation, and if we solve the simultaneous equations

U(0)=a+b=0\\U(1)=a\phi+b\psi=1

we get a = 1/\sqrt{5} = -b, and with those values, U(n) = F_n. Ta daa.
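This is easy to check numerically (a quick sketch; rounding absorbs the floating-point error):

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio
PSI = (1 - math.sqrt(5)) / 2   # its conjugate, equal to -1/phi

def fib_closed(n):
    """F(n) from the closed formula (phi^n - psi^n)/sqrt(5)."""
    return round((PHI ** n - PSI ** n) / math.sqrt(5))

def fib_recurrence(n):
    """F(n) straight from the recurrence, for comparison."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The two agree for every n small enough that double-precision arithmetic holds up (several dozen terms, comfortably).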

Well, we can generalize this. Consider the sequence

S(0)=0\\S(1)=1\\S(n)=pS(n-1)+qS(n-2)
where p and q are integers; this is an integer linear recurrence sequence. To get a closed formula for S(n) start with the equation

x^2 = px + q
whose solutions are \lambda_{1,2}=(p\pm\sqrt{p^2+4q})/2. If p^2+4q\ne 0, then setting S(n) to a linear combination of the \lambdas gives you the general solution — I won’t prove that, but it’s true. You can find a proof elsewhere, e.g. here. Imposing S(0)=0, S(1)=1 you get the particular solution

S(n)=\frac{\lambda_1^n-\lambda_2^n}{\lambda_1-\lambda_2}=\frac{\lambda_1^n-\lambda_2^n}{\sqrt{p^2+4q}}
What if p^2+4q=0? Then \lambda_1=\lambda_2=\lambda=p/2 and the general solution is U(n)=a\lambda^n+bn\lambda^n. (Once again, follow the above link or do some Googling if you want the proof.) Imposing S(0)=0, S(1)=1 you get

S(n)=n\lambda^{n-1}=n\left(p/2\right)^{n-1}
So what’s the behavior as n increases? If p^2+4q>0 and |\lambda_1|>|\lambda_2| then as n gets large, S(n) approaches \lambda_1^n/\sqrt{p^2+4q} and the ratio S(n)/S(n-1) approaches \lambda_1. Similarly if p^2+4q>0 and |\lambda_2|>|\lambda_1|, the ratio approaches \lambda_2. Clearly the larger the difference in magnitude of the \lambdas the faster this convergence happens.

If p^2+4q=0, the ratio of consecutive terms approaches p/2.

And what if p^2+4q<0? Now it’s a whole new ball game; the \lambdas are complex, with |\lambda_1|=|\lambda_2|. Taking the case p>0, we have

\lambda_{1,2}=\frac{p}{2}\pm i\frac{\sqrt{-\left(p^2+4q\right)}}{2}\\=\sqrt{-q}e^{\pm i\theta}

where \theta=\arcsin{\frac{\sqrt{-(p^2/4+q)}}{\sqrt{-q}}}=\arcsin{\sqrt{1+\frac{p^2}{4q}}}. Now

S(n)=\frac{\lambda_1^n-\lambda_2^n}{\sqrt{p^2+4q}}\\=\frac{2\left(\sqrt{-q}\right)^n}{\sqrt{-\left(p^2+4q\right)}}\Im e^{in\theta}

(\Im denotes imaginary part)

S(n)=\frac{2\left(\sqrt{-q}\right)^n}{\sqrt{-\left(p^2+4q\right)}}\sin{(n\theta)}
When p<0 you get something similar, but multiplied by (-1)^{n-1}.

(So, uh, wait, we’re saying that right hand side is always an integer? Yes. You want proof? Heh.)

And that means the behavior as n increases is… well. It’s complicated, isn’t it? That \sin{(n\theta)} throws a monkey wrench into the whole business.

So let’s go back to our examples from above. If p=q=1 then we have the Fibonacci series. \lambda_{1,2}=\phi,\psi and |\phi| is about two and a half times larger than |\psi|, so the ratio of consecutive values converges quickly to \phi.

If p=1,q=10 then \lambda_{1,2}=3.70156...,-2.70156...; the ratio converges to 3.70156 but more slowly.

The case p=2,q=-1 gives p^2+4q=0, so the ratio approaches 2/2=1. Specifically it’s n/(n-1), so the convergence is pretty slow.

And for p=1,q=-2, p^2+4q<0 and

S(n)=\frac{1.414214^{n-1}}{.935414}\sin{1.209429 n}

and the ratios of consecutive terms don’t behave smoothly.

The p=2,q=-2 case also has p^2+4q<0, but in this case the angle is \theta=\arcsin{\sqrt{2}/2}=\pi/4, so the value of \sin{n\theta} repeats after four steps and the sequence behaves regularly, with the ratios of consecutive terms cycling through 2, 1, 0, (undefined).
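All three cases can be folded into one numerical check. The sketch below (my own illustration; cmath handles the complex-root case for free) computes S(n) from the closed formulas and compares against the recurrence:

```python
import cmath

def lrs_terms(p, q, count):
    """S(n) directly from S(n) = p*S(n-1) + q*S(n-2), with S(0)=0, S(1)=1."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(p * seq[-1] + q * seq[-2])
    return seq[:count]

def closed_form(p, q, n):
    """S(n) from the closed formulas: (l1^n - l2^n)/(l1 - l2) for distinct
    roots (real or complex), n*(p/2)^(n-1) for a repeated root. Rounded to
    absorb floating-point error."""
    disc = p * p + 4 * q
    if disc == 0:
        lam = p / 2
        return 0 if n == 0 else round(n * lam ** (n - 1))
    root = cmath.sqrt(disc)          # purely imaginary when disc < 0
    l1, l2 = (p + root) / 2, (p - root) / 2
    return round(((l1 ** n - l2 ** n) / root).real)
```

Running this over the examples above, (p, q) = (1, 1), (1, 10), (2, -1), (1, -2), (2, -2), the closed forms match the recurrences term for term.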

So you see? You thought when we started all this was just multiplying and adding integers. We ended up with irrationals and powers of complex numbers and trigonometric functions and chaos and all kinds of stuff. Amazing.


Iterating to negative one

I’ve been looking into reducing the volume of my paper files by scanning some of what’s there, or just downloading stuff that’s available online. I found this, though, which I wrote up ages ago, and rather than scanning it, I thought I’d write it up here. And then I can download it…

Hofstadter [I think this must have been in one of his Metamagical Themas columns] poses  the problem: Find a real-valued function f(x) operating on the real numbers such that f(f(x))=-x. (If f(x) could be complex, this would be trivial: f(x)=ix (or -ix).)

We stalk f(x) by learning some of its properties…

(i) Suppose f(x)=f(y). Then -x=f(f(x))=f(f(y))=-y and so x=y:

f(x)=f(y)\Leftrightarrow x=y

(ii) f(-x)=f(f(f(x)))=-f(x):

f(-x)=-f(x) — f(x) is an odd function.

(i) and (ii) together imply:

f(0)=0 and f(x\ne 0)\ne 0

(iii) Suppose f(x)>0 for all x>0. Then f(f(x))>0. But f(f(x))=-x<0 — contradiction. So for some x>0, f(x)<0. Similarly, for some x>0, f(x)>0:

f(x) has positive and negative values for positive x.

But since f(x\ne 0)\ne 0,

f(x) has at least one discontinuity for positive x.

(iv) Split the positive reals \mathbb{R}^+ into two sets, \mathbb{P} and \mathbb{N}, such that

f(x) > 0 if x\in\mathbb{P}
f(x) < 0 if x\in\mathbb{N}
By (ii),

f(x) < 0 if -x\in\mathbb{P}
f(x) > 0 if -x\in\mathbb{N}

If  x\in\mathbb{P}, f(x)>0; f(f(x))=-x<0\Rightarrow f(x)\in\mathbb{N}. Similarly, x\in\mathbb{N}\Rightarrow -f(x)\in\mathbb{P}:

x\in\mathbb{P}\Rightarrow f(x)\in\mathbb{N}\\x\in\mathbb{N}\Rightarrow -f(x)\in\mathbb{P}

So f(x) (sort of) maps half the real numbers into the other half, and the other half back into the first half. (Sort of because of that minus sign. Zero is the exception; it maps to itself.)

The task is to find a \mathbb{P} and an \mathbb{N} such that one can be mapped onto the other.

One solution is:

\mathbb{P}=(0,1]\cup(2,3]\cup(4,5]\cup\cdots\\\mathbb{N}=(1,2]\cup(3,4]\cup(5,6]\cup\cdots
f(x\in\mathbb{P})=x+1 and so f(x)\in\mathbb{N}
f(x\in\mathbb{N})=-(x-1) and so -f(x)\in\mathbb{P}

The function fully specified is

f(x) = \begin{cases} x+1& \mbox{if } x\in\mathbb{P}\\ -x+1& \mbox{if } x\in\mathbb{N}\\ 0& \mbox{if }x=0\\x-1& \mbox{if } -x\in\mathbb{P}\\ -x-1& \mbox{if } -x\in\mathbb{N}\end{cases}
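Here’s that function in code (a sketch; the half-open intervals (0,1], (2,3], … for \mathbb{P} and (1,2], (3,4], … for \mathbb{N} are one convenient choice of the two sets), using oddness to handle negative arguments:

```python
import math

def f(x):
    """A real function satisfying f(f(x)) = -x. Here P = (0,1] u (2,3] u ...
    (ceiling odd) and N = (1,2] u (3,4] u ... (ceiling even), one
    convenient choice of the two sets."""
    if x == 0:
        return 0.0
    if x < 0:
        return -f(-x)          # f is odd
    if math.ceil(x) % 2 == 1:  # x in P
        return x + 1           # lands in N
    return -(x - 1)            # x in N; -f(x) lands in P
```

For example f(0.5) = 1.5, f(1.5) = -0.5, and indeed f(f(0.5)) = -0.5.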

And here’s a plot: [plot of f(x)]

Season to taste

Yesterday was the vernal equinox, or as Phil Plait calls it, the “March and not Vernal because of the Southern Hemisphere not quite equinox so let’s say it’s the quasinox.” And about damn time, too.

In the older post he links to, he discusses the fact that we (officially) use the solstices and equinoxes to mark the boundaries between the seasons, and that in some ways it makes more sense to do what’s more common in some countries and traditions, to regard the solstices and equinoxes as the midpoints of the seasons. So, for example, June 21 (or 20) would be the middle of summer (i.e. Midsummer; here and elsewhere, I’m speaking from a Northern Hemisphere-centric point of view; adjust as needed for the Southern Hemisphere), not the start of it: summer would run from about six weeks before that — early May — to six weeks after, early August. Likewise December 21 (or 22) would be the middle of winter.

That’s even in our own cultural heritage. If the vernal equinox were the central day of spring, then spring would begin about six weeks earlier — around February 2. And in our folklore, it does… if the groundhog doesn’t see his shadow.

From a weather point of view (and Plait emphasizes he’s not considering weather, but then he says to him it makes more sense for the Sun to hit its peak at summer’s midpoint, but why that makes more sense if not for weather reasons, he doesn’t make clear) that seems, on the face of it, much more sensible. You think of summer as the hottest time of the year, and the reason it’s hot is because as the Sun’s path moves northwards, the angle of incidence gets smaller and smaller, and the amount of solar radiation per second striking a plane tangent to the Earth’s surface gets larger and larger. The angle of incidence is smallest, and the amount of solar radiation is the largest, at the summer solstice.

But, continuing from a weather point of view, the summer solstice is not the hottest day of the year (on average). Just as it takes a turkey several hours to cook after you start heating it, it takes time for the atmosphere, the water bodies, and the land to warm up. How long that takes, and therefore when peak temperature occurs, I would guess depends on a number of factors including latitude, altitude, and proximity to large water bodies and to water currents (e.g. the Gulf Stream). Just looking at a number of U.S. cities I find Phoenix, AZ seems to reach peak temperature about July 7, while Los Angeles apparently doesn’t start cooling off until around August 20. (But Los Angeles’s minimum temperature seems to be in mid-December, before the solstice! Same for Denver. Very weird, I don’t understand that.) Mid to late July, say around July 21 — a month after the summer solstice — seems to be fairly typical, though, and early to mid January for the coldest date. Center the seasons around those, rounded to a convenient nearby date, and you’d have spring from about March 1 to June 1, summer from June 1 to September 1, fall from September 1 to December 1, and winter from December 1 to March 1. Which in fact is the definition meteorologists use.