Why an almost integer?

Over on https://mathlesstraveled.com there’s an ongoing series of posts having to do with this:

\displaystyle \begin{array}{cc} n & (1 + \sqrt 2)^n \\ \hline 1 & 2.414213562373095 \\ 2 & 5.82842712474619 \\ 3 & 14.071067811865474 \\ 4 & 33.970562748477136 \\ 5 & 82.01219330881975 \\ 6 & 197.99494936611663 \\ 7 & 478.002092041053 \\ 8 & 1153.9991334482227 \\ 9 & 2786.0003589374983 \\ 10 & 6725.9998513232185 \end{array}

See that? As n increases, (1+\sqrt 2)^n approaches integer values. Odd, huh? Why does it do that?
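A quick numerical check (mine, not from the mathlesstraveled posts) makes the shrinking error visible:

```python
from math import sqrt

# Distance from (1 + sqrt 2)^n to the nearest integer shrinks as n grows.
for n in range(1, 11):
    x = (1 + sqrt(2)) ** n
    print(n, x, abs(x - round(x)))
```

By n = 10 the distance to the nearest integer is already down to about 1.5 × 10⁻⁴.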

Despite what should have been a dead giveaway hint, I didn’t figure out the approach revealed in this post. Embarrassing. But having failed on the insight front, what can I do on the obvious generalization front?

Let’s think about quantities of the form

\displaystyle (j+k^{l/m})^n

where j, k, l, m, and n are nonzero integers; l/m is in lowest terms and l>0, m>1, and n>0. For now let’s also restrict m to primes.

To investigate that we’ll consider

\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n

The complex quantities e^{i2\pi p/m} lie on the unit circle in the complex plane and are the vertices of an m-gon. Using the binomial expansion, the sum is

\displaystyle \sum_{p=0}^{m-1} \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (e^{i2\pi p/m}k^{l/m} \right)^q

Swapping the order of summation, that’s

\displaystyle \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (\sum_{p=0}^{m-1} e^{i2\pi pq/m}\right) k^{lq/m}

Now, for the terms where q is a multiple of m, e^{i2\pi pq/m} is equal to 1 and the sum over p equals m.

Otherwise (this is where the restriction to prime m matters), q is not a multiple of the prime m, so e^{i2\pi q/m} is a primitive m-th root of unity, and the terms e^{i2\pi pq/m} are just the m-th roots of unity in a permuted order. So we’re summing over the points on the unit circle:

\displaystyle \sum_{p=0}^{m-1} e^{i2\pi p/m}

which is the sum of a geometric series, so

\displaystyle = \frac{1-e^{i2\pi}}{1-e^{i2\pi/m}} = 0

For instance, when m=2, the sum is e^0+e^{i\pi} = 1 - 1 = 0. When m = 3, it’s e^0 + e^{i2\pi/3} + e^{i4\pi/3} = 1 + \left(-\frac{1}{2}+\frac{\sqrt 3}{2}i\right) + \left(-\frac{1}{2}-\frac{\sqrt 3}{2}i\right) = 0.
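You can sanity-check these vanishing root-of-unity sums numerically (again my snippet, not the post’s):

```python
import cmath

# Sum of the m-th roots of unity; numerically zero for every m >= 2.
def root_sum(m):
    return sum(cmath.exp(2j * cmath.pi * p / m) for p in range(m))

for m in (2, 3, 5, 7):
    print(m, abs(root_sum(m)))
```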

All right then. This means we keep only the terms where q is a multiple of m:

\displaystyle \sum_{q=\textrm{multiple of }m}^n \binom{n}{q} j^{n-q} m k^{lq/m}

which is an integer. Call it I. Then

\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = (j+k^{l/m})^n + \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = I

and so, rearranging,

\displaystyle (j+k^{l/m})^n= I - \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n

So for large n, (j+k^{l/m})^n approaches an integer if and only if all m-1 of the quantities j+e^{i2\pi p/m}k^{l/m} (for p from 1 to m-1) have magnitude less than 1.

For instance: Let j = 1, l = 1. Then

\displaystyle (1+k^{1/m})^n= I - \sum_{p=1}^{m-1} (1+e^{i2\pi p/m}k^{1/m})^n

Now in the case m = 2,

\displaystyle (1+k^{1/2})^n= I -  (1+e^{i\pi}k^{1/2})^n = I - (1-k^{1/2})^n

The magnitude of 1-k^{1/2} is less than 1 for 0<k<4. In fact it’s zero for k=1, but then 1^{1/2} is an integer anyway; the point is, k = 2 or 3 works: (1+\sqrt 2)^n and (1+\sqrt 3)^n both approach integers (the former much more quickly than the latter).
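For m = 2, j = 1, k = 2, the integer I is just the conjugate-pair sum, which is easy to verify numerically (my check, not the post’s):

```python
from math import sqrt

# (1 + sqrt 2)^n + (1 - sqrt 2)^n is an integer for every n, and the
# second term shrinks, so the first term approaches that integer.
for n in range(1, 8):
    total = (1 + sqrt(2)) ** n + (1 - sqrt(2)) ** n
    print(n, total)
```

At n = 4, for instance, 33.9706… + 0.0294… = 34 exactly (up to floating-point noise).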

How about m = 3? Then

\displaystyle (1+k^{1/3})^n= I - (1+e^{i2\pi/3}k^{1/3})^n - (1+e^{i4\pi/3}k^{1/3})^n

By symmetry the magnitudes of the two complex numbers are the same, so what we need is

\displaystyle |1+e^{i2\pi/3}k^{1/3}|<1

Squaring the magnitude (and using \cos(2\pi/3) = -1/2), that’s

\displaystyle 1 + k^{2/3} -k^{1/3}<1

which reduces to k^{2/3} < k^{1/3}, or

\displaystyle k < 1

So there are no integer values of k that give convergence to an integer for (1+k^{1/3})^n. It seems evident the same is true for all m>2.
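Numerically, |1 + e^{i2\pi/3} k^{1/3}| does indeed sit at or above 1 for every positive integer k (my own check):

```python
import cmath

# Magnitude of 1 + e^{i 2 pi/3} k^(1/3): exactly 1 at k = 1, growing after.
def mag(k):
    return abs(1 + cmath.exp(2j * cmath.pi / 3) * k ** (1 / 3))

for k in range(1, 6):
    print(k, mag(k))
```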


An honor just to be nominated

Rich’s p16 came in at 11th place in the 2016 Pattern of the Year awards. First place was never even a remote possibility, not in a year that produced the Caterloopillar and the Copperhead. (I actually thought the latter would win handily, but I guess that’s just my relative lack of interest in engineered spaceships showing.)

Fifths get lucky (part 2)

I’ve been learning a little about pyplot, and I’ve drawn a diagram:

This shows as horizontal bars the range of “fifths” (or generators) for which n generators give an adequate major third. “Adequate” here means within 25 cents, and n is the number on the vertical axis; negative n means fifths downwards, positive n is fifths upwards. The black line is at 702 cents, a just perfect fifth. The bar colors just help distinguish different values of n (and make it a little less boring than if everything were blue).

So you can see that for just fifths, four upward fifths or eight downward fifths will work. The range where four upward fifths will work is larger, but still only 12.5 cents wide. The just fifths line is very close to the upper edge of the band, and once you go beyond it in the positive direction, there’s a small gap before you get to the range where nine upward fifths give a major third. In the other direction, ten downward fifths work, but only after a somewhat larger gap where nothing smaller than eleven fifths will do it.

Around 800 cents you see lots of bars. I wrote before about how near 720 cents the number of fifths needed for a major third is very large, because 720 cents generates a 5-equal scale with no adequate thirds. On the other hand 800 cents generates a 3-equal scale with an adequate third (400 cents), so n = -10, -7, -4, -1, 2, 5, 8 (and maybe beyond) are all solutions there, and nearby. (Similarly, near 700 cents, you have n = -8 and 4.)
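The cluster of bars near 800 cents is easy to verify directly: since 3 × 800 = 2400 cents is a whole number of octaves, every n with n % 3 == 2 (counting downward fifths as negative n) lands on the same 400-cent pitch class. A quick check (mine, not from the post):

```python
# At f = 800 cents, each of these n values yields 400 cents, which is
# within 25 cents of a just major third (386.31 cents).
JUST_THIRD = 386.31
for n in (-10, -7, -4, -1, 2, 5, 8):
    third = (n * 800) % 1200
    print(n, third, abs(third - JUST_THIRD) <= 25)
```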


Fifths get lucky

I’ve been thinking more about musical tuning than CAs lately. For instance, four perfect fifths make (more or less) a major third. How crazy is that?

The diatonic scale most western music’s been based on for the past couple millennia comes from ancient Greece; the Greeks developed it by tuning their tetrachords using seven consecutive notes on a circle of fifths — F, C, G, D, A, E, B, for instance. Using a just perfect fifth of 701.96 cents (Pythagorean tuning), the A (four fifths up from the F) is 2807.82 cents above F, or, dropping it down two octaves, 407.82 cents. This is not a particularly great approximation to a just major third, 386.31 cents, but it’s fairly close. Close enough that when music written in octaves or fifths or fourths gave way to use of thirds, musicians developed other ways of tuning diatonic scales for better thirds rather than dumping the entire system. What we’ve ended up with is equal temperament, with 700.00 cent fifths, four of which make a 400.00 cent major third.

So for many centuries we’ve been using a scale that was based on octaves and fifths to make music that uses thirds, because it happens to contain thirds that are close enough to just. Now, it’s a little odd to be using probabilistic terms to talk about simple arithmetic — two and two isn’t likely to be four, it is four — but I think you know what I mean when I ask, how likely is that? How lucky did we get?

It’s a hard question to quantify. But let’s generously say a major third is adequate if it’s within 25 cents of just. That’s a 50 cent range you want four fifths to fall into, so a single fifth has to be within a range a quarter of that size, 12.5 cents. If you didn’t know beforehand the size of a just perfect fifth, but knew it was somewhere between 650 and 750 cents, you might guess the odds of four fifths making a major third would at most be 12.5/100, or one in eight. Though worse, maybe, because the 12.5 cent range where it works might not be entirely contained within 650 to 750. In fact it might not overlap that range at all. (Though in actuality the range is from 690.33 to 702.83 cents.)
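The 690.33-to-702.83 figure comes from solving 4f - 2400 = 386.31 ± 25 for f; a two-line check (my arithmetic, not the post’s script):

```python
# Band of fifth sizes f for which four upward fifths, dropped two
# octaves, land within 25 cents of a just major third (386.31 cents).
JUST_THIRD = 386.31
lo = (JUST_THIRD - 25 + 2400) / 4
hi = (JUST_THIRD + 25 + 2400) / 4
print(lo, hi)  # roughly 690.33 to 702.83
```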

On the other hand, maybe the 25 cent range where two “fifths” make a major third does overlap, or the 16.7 cent range where three “fifths” will work does. So the odds of four or fewer fifths making an adequate major third might be a little better. Still seems small though.

Oh… but you also want to consider four or fewer downward fifths, or equivalently, four or fewer upward fourths. That improves the odds.

So let’s do a little simulation. Pick a number in the range, say, from 650.0 to 750.0 and see how many fifths, up or down, it takes to make an adequate third. Repeat, and get the distribution. Then ask about things like the average number of fifths needed.

There’s a difficulty here, though: Sometimes the answers get very large. Think about 720.00 cents. The notes that “fifth” generates are 720.0, 240.0, 960.0, 480.0, 0.0, 720.0… and it just repeats those five notes over and over; 720.00 generates a 5-equal scale. None of those notes is an adequate third, so you can run forever looking for it.
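That five-note cycle is easy to see in code (my snippet):

```python
# A 720-cent "fifth" generates only five distinct pitch classes,
# i.e. the 5-equal scale.
notes = sorted({(n * 720) % 1200 for n in range(100)})
print(notes)  # [0, 240, 480, 720, 960]
```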

Of course you have pretty much zero chance of picking 720.00 at random, but if you pick, say, 719.932771, you’ll have to add a lot of fifths before hitting an adequate third. (1099 of them, looks like.) You’ll get occasional large numbers, then, and they’ll have a big impact on the mean value. The answer you get will fluctuate a lot depending on which large numbers you end up with.

This is why medians were invented.

So I wrote a little Python script to do this. If you take the range of possible “fifths” as 650 to 750 cents, then there’s about a 22% chance four or fewer fifths, up or down, will produce an adequate third. The median number of fifths required to make an adequate third: 11.
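The script itself isn’t reproduced in the post; here’s my own sketch of what such a simulation might look like (the search limit, sample size, and random seed are my assumptions):

```python
import random
import statistics

JUST_THIRD = 386.31  # cents
TOLERANCE = 25.0

def fifths_needed(f, limit=2000):
    """Smallest number of fifths of size f, up or down, whose pitch class
    lands within TOLERANCE cents of a just major third; None if not
    found within the limit (e.g. for a 720-cent 5-equal generator)."""
    for n in range(1, limit):
        for sign in (1, -1):
            if abs((sign * n * f) % 1200.0 - JUST_THIRD) <= TOLERANCE:
                return n
    return None

random.seed(1)
sample = [fifths_needed(random.uniform(650.0, 750.0)) for _ in range(2000)]
print(statistics.median(x for x in sample if x is not None))
```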

I think it’s safe to say if you needed 11 perfect fifths to make an adequate major third, the system upon which western music developed would have been entirely different. Different how, I have no idea, but different. Needing only four fifths was a “lucky” break… not win-the-lottery lucky, but definitely beating-the-odds lucky.

Changing the rules (part 3)

Take a look at a pre-loaf and a pi:

If you run them in B37c/S23, the pre-loaf stabilizes quickly, but the pi takes a while — and some space. It needs 110 generations to settle down.

If you run them in B37e/S23, though, the pre-loaf just becomes a loaf immediately and the pi stabilizes much more quickly, in only 23 generations, and without spreading out so much.

So based on that, maybe it’s plausible B37c/S23 could be explosive while B37e/S23 isn’t. And then that might help account for why B37/S23 is explosive while B36/S23 and B38/S23 aren’t.

But this is still rather hand-wavy. Plausible is one thing, proved is something else entirely. First, it’s not just what happens to these objects that matters, but also how likely they are to arise in the first place. No matter how long and over what area a pi evolves, it won’t cause anything if it never crops up in the first place.

Second, in addition to pre-loafs and pis, you also need to consider larger sub-patterns with pre-loaves and pis embedded in them. How likely are they and what do they do under the two rules?

Anyway, proof or no proof, B37e/S23 really isn’t explosive. That means — unlike B37/S23 — it can be explored with a soup search. But apgmera, the C++ version 3 of apgsearch, can’t handle non-totalistic rules. The Python version 1.0 can, or rather a hacked version of it can. Slowly! I’ve been running it and it’s done 2 or 3 soups per second, about 3 orders of magnitude slower than apgmera can run B3/S23. And for some reason it seemed not to be sending results to Catagolue.*

But anyway, 11c/52 diagonal puffers cropped up several times, laying trails of blocks and loaves:

This puffer evolves from this 7 cell object. And if there’s a suitably placed ship nearby, it becomes an 11c/52 diagonal spaceship. This has cropped up in several soups.

*Edit: My B37e/S23 hauls are on Catagolue, but under the enharmonic name B37-c/S23.

Changing the rules (part 2a)

That lame explanation seems even more lame when you consider this: The non-totalistic rule B37c/S23 (meaning birth occurs if there are 3 live neighbors, or if there are 7 live neighbors with the dead neighbor in the corner of the neighborhood) is explosive, but B37e/S23 (birth occurs if there are 3 live neighbors, or if there are 7 live neighbors with the dead neighbor on the edge of the neighborhood) isn’t.
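The c/e distinction comes down to where the lone dead neighbor sits. A minimal birth predicate (my own illustration, not apgsearch or Golly code) makes it concrete:

```python
# Birth predicate for B37c vs B37e. The 8 neighbor offsets split into
# corners and edges; with exactly 7 live neighbors, the position of the
# single dead neighbor decides whether a birth occurs.
CORNERS = {(-1, -1), (-1, 1), (1, -1), (1, 1)}
EDGES = {(-1, 0), (1, 0), (0, -1), (0, 1)}

def born(live_neighbors, rule):  # rule is "B37c" or "B37e"
    n = len(live_neighbors)
    if n == 3:
        return True
    if n == 7:
        (dead,) = (CORNERS | EDGES) - set(live_neighbors)
        return dead in (CORNERS if rule == "B37c" else EDGES)
    return False
```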

Changing the rules (part 2)

Here’s an even more perplexing (to me, at least) instance of different CA behavior under similar-but-different rules. Consider a random 32 x 32 soup.

B36/S23 is a Life-like rule sometimes called HighLife. Many objects behave the same way as in Life; in particular, blocks, loaves, boats, and beehives are still lifes; blinkers are p2 oscillators; gliders are c/4 diagonal spaceships. In B36/S23 that soup stabilizes after 378 generations.

B38/S23 has no nickname I know of. Under that rule, the same soup stabilizes in 483 generations.

And in B37/S23, after 10,000 generations it has reached population 17,298 and is still growing, presumably forever.

Fairly typical. I’ve seen some soups take several thousand generations to stabilize in B38/S23, and I’ve seen a few — very few — stabilize in B37/S23. But most soups stabilize in 1000 generations or so in B36/S23 and B38/S23… and almost all soups explode in B37/S23.
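For anyone wanting to reproduce these comparisons, a totalistic Life-like step fits in a few lines. This is a sketch of mine, not the author’s setup (Golly or apgsearch would be the real tools for soup runs):

```python
from collections import Counter

def step(cells, birth, survive):
    """One generation of a totalistic Life-like rule on a set of live cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n in (survive if c in cells else birth)}

# A blinker oscillates with period 2 under B36/S23, B37/S23, and B38/S23 alike.
blinker = {(0, 0), (1, 0), (2, 0)}
for b in ({3, 6}, {3, 7}, {3, 8}):
    assert step(step(blinker, b, {2, 3}), b, {2, 3}) == blinker
```

Feeding random 32 x 32 soups through `step` until the population pattern repeats is enough to see the B37/S23 blowups for yourself, just much more slowly than Golly would.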

Does that make any sense to you? Explain it to me, then.

I guess you can wave your hands and say “well, if you have few births you don’t get explosive behavior, and if you have many births in some small region you get momentary overpopulation which then crashes in the next generation, but somewhere in the middle there’s a point where you have not too few births but not enough crashes and it explodes”. But is that the best we can do at understanding this?

Edit: In fact, that lame explanation seems even more lame when you consider this: The non-totalistic rule B37c/S23 (meaning birth occurs if there are 3 live neighbors, or if there are 7 live neighbors with the dead neighbor in the corner of the neighborhood) is explosive, but B37e/S23 (birth occurs if there are 3 live neighbors, or if there are 7 live neighbors with the dead neighbor on the edge of the neighborhood) isn’t.