My brother the chimp

I’ve written about this before, over on my genealogy blog, but it’s only tangentially related to genealogy. It’s not really recreational mathematics, either, but I want to expand on it somewhere, and since it involves a logical proof, this seems like the least silly place to choose.

So: All humans descend from a common ancestor. Or so many people believe. Literalist believers in the Old Testament would say Adam and Eve are common ancestors of us all. Believers in evolution go further and believe all life on Earth descends from a common ancestor. I’m in the latter group.

When did the most recent common ancestor (MRCA) of all humans live? Simulations suggest it was more recently than you might expect: 2000 to 5000 years ago. (It’s been argued that all persons of European descent share a common ancestor only about 1000 years ago, and genetic analysis (paper, FAQ) seems to support that.) Furthermore, going only a few thousand more years back, you arrive at a time when literally every person then alive fell into one of two categories: Those who are ancestors of no one alive today, and those who are ancestors of everyone alive today. I don’t know if there’s a term for that state, so I’ll make one up: Universal common ancestry (UCA). Understand these are probabilistic claims; it would certainly be possible that 20,000 years ago there were people whose living descendants are some but not all of us. But the probability of such a thing is so small it is almost indistinguishable from zero: Something that improbable really cannot be expected to happen even once in the history of the universe. And even if it did, another few thousand years back a non-UCA state would be even more unlikely than that. The probability of UCA back then, and any time previous to that, is absolute certainty for all practical purposes.
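
If you want to see the flavor of such a simulation, here is a minimal sketch under the wildly simplified assumption that each person in every generation picks two parents at random from the previous generation (in the spirit of Chang’s random-mating model); the population size is arbitrary and nothing is calibrated to real demography. It reports how many generations pass before there is an MRCA and before the UCA state is reached.

import random
from collections import Counter

def mrca_and_uca_generations(N=500, seed=1):
    # Toy model: each individual in a new generation picks two parents
    # uniformly at random from the previous generation of N people.
    rng = random.Random(seed)
    # For each living individual, the set of generation-0 "founders"
    # among their ancestors (a founder counts as their own ancestor).
    ancestors = [{i} for i in range(N)]
    mrca_gen, gen = None, 0
    while True:
        counts = Counter()
        for s in ancestors:
            counts.update(s)
        if mrca_gen is None and any(c == N for c in counts.values()):
            mrca_gen = gen        # someone is now an ancestor of everyone
        if all(counts[i] in (0, N) for i in range(N)):
            return mrca_gen, gen  # every founder: ancestor of all, or of none
        gen += 1
        ancestors = [ancestors[rng.randrange(N)] | ancestors[rng.randrange(N)]
                     for _ in range(N)]

print(mrca_and_uca_generations())  # roughly log2(N) and 1.77*log2(N) generations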

Of course this doesn’t just apply to humans: We can talk about the MRCA of all chimpanzees alive today, or all hominins (humans and chimpanzees), or all ruby-throated hummingbirds, or whatever.

Now, here’s the weird thing: Long ago, 5 million years or more, amongst members of the species Nakalipithecus or something like it, there lived an individual (call them Nico) who had (at least) two offspring, Chris and Harley. They were Nakalipithecus (or whatever) too, of course, and they mated with other Nakalipithecus and had Nakalipithecus children, who had more Nakalipithecus children, who had … well, you get it. But here’s the thing: All of Chris’s living descendants are chimpanzees. All of Harley’s are human.

That might surprise you. And think how surprised Chris and Harley would be.

But it’s true, and it’s provable (in, again, a certain-for-all-practical-purposes sense).

Nico, you see, was the MRCA of all living hominins. Nico has all living humans and all living chimps as their descendants, and since they’re the most recent common ancestor, no one in any subsequent generation can make the same claim.

Now of course much more recent individuals are common ancestors of all living humans (and no chimps), or of all living chimps (and no humans). But could someone more recent be an ancestor of all living humans and some (but not all) living chimps, or vice versa? No. We have UCA for humans anytime before several thousand years ago, and I’m sure chimp UCA must be even more recent than that. More than a few thousand years ago, let alone a few million, anyone who was an ancestor of some living chimps was an ancestor of all living chimps. Likewise for humans, so any individual with both living humans and living chimps among their descendants would have to be a common ancestor of all hominins. And Nico was the MRCA, so anyone born after Nico must have either no living descendants, or humans but no chimps, or chimps but no humans.

We know Nico must have had two or more offspring. If they had only one, then it would have been a common ancestor of all hominins and Nico would not be the MRCA. One of them was Chris, and the other was Harley… and they were born after Nico (duh). The argument above applies to anyone born after Nico, and that includes Chris and Harley. One had to be an ancestor of all living chimps (but no living humans) — that’s Chris — and the other the ancestor of all living humans (but no living chimps) — that’s Harley.

A scenario in which that makes sense is one like this. Seven million years ago, a group of Nakalipithecus woke up one morning, checked Facebook, went to work, or went out to play, or whatever, not realizing that fifty miles away there was a great huge lake held back by a natural dam which had been slowly eroding. That day the dam broke. There was an enormous flood, many of the Nakalipithecus died, and the ones that survived found themselves on two sides of a new, permanent body of water. And the topography was such that none of them were able to cross the water or go around it, so the survivors on one side never again encountered the survivors on the other.

All right, this is not a particularly likely scenario, but it’s a thought experiment, okay?

Nico, alas, died in the flood, but not before saving the life of Chris. Harley, meanwhile, was swept off and ended up on the other side of the water. Within a few thousand years all the Nakalipithecus on one side were descendants of Chris (but not Harley) and all on the other side were descendants of Harley (but not Chris). The two groups evolved apart and eventually gave rise to chimps on one side, humans on the other.

Granted, speciation doesn’t usually work like that. Much of the time geographical separation is a factor in speciation, but not necessarily all the time, and in any case the geographic separation usually isn’t instantaneous. Mountains rise slowly, rivers change course over centuries, and so on.

But it doesn’t matter whether the two groups are sundered in hours or eons. Or even if they are geographically sundered at all. The point is, for whatever reason, two groups evolve away from one another. At the start they are one species and they interbreed frequently, but the frequency of that decreases more and more. And it may decrease over a very long time period, but: There’s a single, momentary last time. A last time a member of one of the groups breeds with a member of the other group. And not long before that, a last time one individual breeds with a member of one group while their sibling breeds with a member of the other, and they have fertile offspring leading to descendants alive today. Their parent is our common ancestor, and the common ancestor of all chimps too. Their children, though, may be your ancestor, may have been Washoe’s, but are not both.

 


Why an almost integer?

Over on https://mathlesstraveled.com there’s an ongoing series of posts having to do with this:

\displaystyle \begin{array}{cc} n & (1 + \sqrt 2)^n \\ \hline 1 & 2.414213562373095 \\ 2 & 5.82842712474619 \\ 3 & 14.071067811865474 \\ 4 & 33.970562748477136 \\ 5 & 82.01219330881975 \\ 6 & 197.99494936611663 \\ 7 & 478.002092041053 \\ 8 & 1153.9991334482227 \\ 9 & 2786.0003589374983 \\ 10 & 6725.9998513232185 \end{array}

See that? As n increases, (1+\sqrt 2)^n approaches integer values. Odd, huh? Why does it do that?
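
(A couple of lines will reproduce the table and show how fast the distance to the nearest integer shrinks; double precision is plenty for this range of n.)

import math

x = 1 + math.sqrt(2)
for n in range(1, 11):
    v = x ** n
    print(n, v, abs(v - round(v)))  # the gap shrinks roughly geometrically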

Despite what should have been a dead giveaway hint, I didn’t figure out the approach revealed in this post. Embarrassing. But having failed on the insight front, what can I do on the obvious generalization front?

Let’s think about quantities of the form

\displaystyle (j+k^{l/m})^n

where j, k, l, m, and n are nonzero integers; l/m is in lowest terms and l>0, m>1, and n>0. For now let’s also restrict m to primes.

To investigate that we’ll consider

\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n

The complex quantities e^{i2\pi p/m} lie on the unit circle in the complex plane and are the vertices of an m-gon. Using the binomial expansion, the sum is

\displaystyle \sum_{p=0}^{m-1} \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (e^{i2\pi p/m}k^{l/m} \right)^q

or

\displaystyle \sum_{q=0}^{n} \binom{n}{q} j^{n-q} \left (\sum_{p=0}^{m-1} e^{i2\pi pq/m}\right) k^{lq/m}

Now, for the terms where q is a multiple of m, e^{i2\pi pq/m} is equal to 1 and the sum over p equals m.

Otherwise q is not a multiple of m, and since m is prime the quantities e^{i2\pi pq/m}, for p = 0, \ldots, m-1, are just the m-th roots of unity in some order; so we’re summing over the points on the unit circle:

\displaystyle \sum_{p=0}^{m-1} e^{i2\pi p/m}

which is the sum of a geometric series so

\displaystyle = \frac{1-e^{i2\pi}}{1-e^{i2\pi/m}} = 0

For instance, when m=2, the sum is e^0+e^{i\pi} = 1 - 1 = 0. When m = 3, it’s e^0 + e^{2i\pi/3} + e^{4i\pi/3} = 1 - \frac{1}{2} - \frac{1}{2} = 0.

All right then. This means we keep only the terms where q is a multiple of m:

\displaystyle \sum_{q=\textrm{multiple of }m}^n \binom{n}{q} j^{n-q} m k^{lq/m}

which is an integer. Call it I. Then

\displaystyle \sum_{p=0}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = (j+k^{l/m})^n + \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n = I

or

\displaystyle (j+k^{l/m})^n= I - \sum_{p=1}^{m-1} (j+e^{i2\pi p/m}k^{l/m})^n

So for large n, (j+k^{l/m})^n approaches an integer if and only if each of the m-1 quantities j+e^{i2\pi p/m}k^{l/m}, for p = 1, \ldots, m-1, has magnitude <1.
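
Here’s a rough numerical check of that criterion, with sample values of j, k, l, m of my own choosing: it prints the largest of those magnitudes next to the distance of (j+k^{l/m})^n from the nearest integer at a moderate n.

import cmath

def check(j, k, l, m, n=15):
    root = k ** (l / m)
    # magnitudes of the m-1 quantities j + e^(2 pi i p/m) k^(l/m)
    mags = [abs(j + cmath.exp(2j * cmath.pi * p / m) * root)
            for p in range(1, m)]
    v = (j + root) ** n
    return max(mags), abs(v - round(v))

# (1+sqrt(2))^n and (1+sqrt(3))^n should drift toward integers;
# (1+sqrt(5))^n and (1+2^(1/3))^n should not.
for j, k, l, m in [(1, 2, 1, 2), (1, 3, 1, 2), (1, 5, 1, 2), (1, 2, 1, 3)]:
    print((j, k, l, m), check(j, k, l, m))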

For instance: Let j = 1, l = 1. Then

\displaystyle (1+k^{1/m})^n= I - \sum_{p=1}^{m-1} (1+e^{i2\pi p/m}k^{1/m})^n

Now in the case m = 2,

\displaystyle (1+k^{1/2})^n= I -  (1+e^{i\pi}k^{1/2})^n = I - (1-k^{1/2})^n

The magnitude of 1-k^{1/2} is less than 1 for k<4. In fact it’s zero for k=1, but then 1^{1/2} is an integer anyway; point is, k = 2 or 3 works: (1+\sqrt 2)^n and (1+\sqrt 3)^n both approach integers (the former much more quickly than the latter).

How about m = 3? Then

\displaystyle (1+k^{1/3})^n= I - (1+e^{i2\pi/3}k^{1/3})^n - (1+e^{i4\pi/3}k^{1/3})^n

By symmetry the magnitudes of the two complex numbers are the same, so what we need is

\displaystyle |1+e^{i2\pi/3}k^{1/3}|<1

or, squaring the magnitude,

\displaystyle 1 + k^{2/3} -k^{1/3}<1

or

\displaystyle k < 1

So there are no integer values of k that give convergence to an integer for (1+k^{1/3})^n. It seems evident the same is true for all m>2.

 

Fifths get lucky (part 2)

I’ve been learning a little about pyplot, and I’ve drawn a diagram:

This shows as horizontal bars the range of “fifths” (or generators) for which n generators give an adequate major third. “Adequate” here means within 25 cents, and n is the number on the vertical axis; negative n means fifths downwards, positive n is fifths upwards. The black line is at 702 cents, a just perfect fifth. The bar colors just help distinguish different values of n (and make it a little less boring than if everything were blue).
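
Something along these lines will draw essentially the same picture (the axis ranges, bar colors, and the cap on n here are arbitrary choices, not necessarily what produced the figure; the criterion is the same 25 cents around a 386.31 cent just third):

import matplotlib.pyplot as plt

JUST_THIRD, TOL = 386.31, 25.0
GMIN, GMAX = 650.0, 850.0                  # range of generators to show

fig, ax = plt.subplots()
for n in range(-12, 13):
    if n == 0:
        continue
    # n generators of size g give an adequate third when (n*g) mod 1200
    # is within TOL of JUST_THIRD; each octave multiple k contributes
    # one candidate interval of g.
    for k in range(-20, 21):
        lo, hi = sorted(((1200 * k + JUST_THIRD - TOL) / n,
                         (1200 * k + JUST_THIRD + TOL) / n))
        lo, hi = max(lo, GMIN), min(hi, GMAX)
        if lo < hi:
            ax.barh(n, hi - lo, left=lo, height=0.8, color=f'C{abs(n) % 10}')
ax.axvline(701.96, color='black')          # just perfect fifth
ax.set_xlabel('generator ("fifth") size, cents')
ax.set_ylabel('number of generators for a major third')
plt.show()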

So you can see that for just fifths, four upward fifths or eight downward fifths will work. The range where four upward fifths will work is larger, but still only 12.5 cents wide. The just fifths line is very close to the upper edge of the band, and once you go beyond it in the positive direction, there’s a small gap before you get to the range where nine upward fifths give a major third. In the other direction, ten downward fifths work, but only after a somewhat larger gap where nothing smaller than eleven fifths will do it.

Around 800 cents you see lots of bars. I wrote before about how near 720 cents the number of fifths needed for a major third is very large, because 720 cents generates a 5-equal scale with no adequate thirds. On the other hand 800 cents generates a 3-equal scale with an adequate third (400 cents), so n = -10, -7, -4, -1, 2, 5, 8 (and maybe beyond) are all solutions there, and nearby. (Similarly, near 700 cents, you have n = -8 and 4.)

 

Fifths get lucky

I’ve been thinking more about musical tuning than CAs lately. For instance, four perfect fifths make (more or less) a major third. How crazy is that?

The diatonic scale most western music’s been based on for the past couple millennia comes from ancient Greece; they developed it by tuning their tetrachords using seven consecutive notes on a circle of fifths — F, C, G, D, A, E, B, for instance. Using a just perfect fifth of 701.96 cents (Pythagorean tuning), the A (four fifths up from the F) is 2807.82 cents above F, or, dropping it down two octaves, 407.82 cents. This is not a particularly great approximation to a just major third, 386.31 cents, but it’s fairly close. Close enough that when music written in octaves or fifths or fourths gave way to use of thirds, musicians developed other ways of tuning diatonic scales for better thirds rather than dumping the entire system. What we’ve ended up with is equal temperament, with 700.00 cent fifths, four of which make a 400.00 cent major third.

So for many centuries we’ve been using a scale that was based on octaves and fifths to make music that uses thirds, because it happens to contain thirds that are close enough to just. Now, it’s a little odd to be using probabilistic terms to talk about simple arithmetic — two and two isn’t likely to be four, it is four — but I think you know what I mean when I ask, how likely is that? How lucky did we get?

It’s a hard question to quantify. But let’s generously say a major third is adequate if it’s within 25 cents of just. That’s a 50 cent range you want four fifths to fall into, so a single fifth has to be within a range a quarter of that size, 12.5 cents. If you didn’t know beforehand the size of a just perfect fifth, but knew it was somewhere between 650 and 750 cents, you might guess the odds of four fifths making a major third would at most be 12.5/100, or one in eight. Though worse, maybe, because the 12.5 cent range where it works might not be entirely contained within 650 to 750. In fact it might not overlap that range at all. (Though in actuality the range is from 690.33 to 702.83 cents.)
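
That last range is just arithmetic: four upward fifths have to total two octaves plus a third within 25 cents of just, so divide the window by four.

just_third, tol = 386.31, 25.0
print((2400 + just_third - tol) / 4, (2400 + just_third + tol) / 4)
# roughly 690.33 and 702.83, as above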

On the other hand, maybe the 25 cent range where two “fifths” make a major third does overlap, or the 16.7 cent range where three “fifths” will work does. So the odds of four or fewer fifths making an adequate major third might be a little better. Still seems small though.

Oh… but you also want to consider four or fewer downward fifths, or equivalently, four or fewer upward fourths. That improves the odds.

So let’s do a little simulation. Pick a number in the range, say, from 650.0 to 750.0 and see how many fifths, up or down, it takes to make an adequate third. Repeat, and get the distribution. Then ask about things like the average number of fifths needed.

There’s a difficulty here, though: Sometimes the answers get very large. Think about 720.00 cents. The notes that “fifth” generates are 720.0, 240.0, 960.0, 480.0, 0.0, 720.0… and it just repeats those five notes over and over; 720.00 generates a 5-equal scale. None of those notes is an adequate third, so you can run forever looking for it.

Of course you have pretty much zero chance of picking 720.00 at random, but if you pick, say, 719.932771, you’ll have to add a lot of fifths before hitting an adequate third. (1099 of them, looks like.) You’ll get occasional large numbers, then, and they’ll have a big impact on the mean value. The answer you get will fluctuate a lot depending on which large numbers you end up with.

This is why medians were invented.

So I wrote a little Python script to do this. If you take the range of possible “fifths” as 650 to 750 cents, then there’s about a 22% chance four or fewer fifths, up or down, will produce an adequate third. The median number of fifths required to make an adequate third: 11.
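
It was nothing fancy; a sketch of the idea (not the script verbatim, so the bookkeeping details are my choices here and the numbers it prints may come out a little different) looks like this:

import random
import statistics

JUST_THIRD, TOL = 386.31, 25.0

def fifths_needed(g, cap=10**6):
    # Smallest number of fifths of size g, stacked up or down, whose
    # total lands within TOL of a just major third (mod the octave).
    for count in range(1, cap):
        for n in (count, -count):
            if abs((n * g) % 1200 - JUST_THIRD) <= TOL:
                return count
    return None                        # e.g. exactly 720.0 never gets there

rng = random.Random(1)
trials = [fifths_needed(rng.uniform(650.0, 750.0)) for _ in range(10000)]
trials = [t for t in trials if t is not None]
print("median fifths needed:", statistics.median(trials))
print("fraction needing four or fewer:", sum(t <= 4 for t in trials) / len(trials))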

I think it’s safe to say if you needed 11 perfect fifths to make an adequate major third, the system upon which western music developed would have been entirely different. Different how, I have no idea, but different. Needing only four fifths was a “lucky” break… not win-the-lottery lucky, but definitely beating-the-odds lucky.

Big and natural and (5,2)c/190

Generally speaking the larger a Life object is, the less likely it is to arise from a random soup. Going by the current Catagolue census, for instance, gliders arise in Life 684 times as often as lightweight spaceships, which are seen 3.8 times as often as middleweight spaceships, which turn up 5.8 times as often as heavyweight spaceships. Or look at the statistics page: All of the still lifes of size up to 13 have arisen, and 616 of the 619 size 14 still lifes, but only 1256 out of 1353 size 15, 2484 out of 3286 size 16, 4199 out of 7773 size 17 and so on… to only 7769 out of 4,051,711 still lifes of size 24.

Now, the smallest known Life spaceship that isn’t a glider, a *WSS, or a flotilla of *WSSs is the loafer, which has population 20 in a 9 by 9 bounding box. For comparison the HWSS is 13 cells in a 7 by 4 bounding box. There are 2^81 possible states for a 9 by 9 box versus 2^28 for a 7 by 4, or 2^53 times as many — about 9 quadrillion. From that point of view it’s not too surprising no loafer has evolved naturally from a soup so far. Only 111 trillion objects have been seen so far, after all.

So what are the odds of natural occurrence of a population 49 spaceship in a 47 by 17 bounding box? Incomprehensibly tiny, you would think — never in many times the lifetime of the universe would it happen.

Well, so you might think, anyway. Evidently that thinking’s not entirely correct:

[screenshot: the (5,2)c/190 spaceship]

Because that pattern evolved, not in Life but in the Life-like B38/S23, from a random D2_+2 soup, on my computer in the past few hours. It may not look like much… but it’s a spaceship. A spaceship which in 190 generations travels obliquely, 5 cells up and 2 cells to the left.

I was pretty excited by this discovery, until I checked the census for B38/S23 C1 soups, and saw that a bunch of p190 ships have been found already, the first by David S. Miller last April. Then I found out, well, re-found out these ships had been discussed extensively in a forum thread shortly after that. A thread which I read. And forgot about.

All these ships are based on the same fundamental engine. Take a look at the part on the right of the above ship. Run just that for 190 generations and you get this:

[screenshot: the right-hand engine after 190 generations]

Three of the pieces of the original pattern come back, shifted by (5, 2). The fourth piece gets changed. So this is a near spaceship by itself.

Now if you look at the part on the left and run that for 52 generations you get:

[screenshot: the left-hand engine after 52 generations]

The same thing as the right half at generation zero, minus the boat. So the ship consists basically of two out-of-phase copies of a single engine, plus a boat, evolving in such a way that the interaction between them makes up for the lack of a boat for the left engine, and changes the evolution of the 7-bit piece in both engines to make it recur in 190 generations.

Another way to look at it: Start with an R pentomino and a boat:

[screenshot: an R pentomino and a boat]

After 192 generations you get this:

[screenshot: the same pattern 192 generations later]

And if you add a second R pentomino in just the right place at just the right phase, it’ll react with the first R and boat in just such a way as to make a spaceship. Seems kind of miraculous, but in fact there are several ways to accomplish it. According to David S. Miller, at least 692 ways. Of which, as of today, apparently 11 have turned up in soup searches. There are also another 120 combinations of two Rs and a boat that produce puffers, rakes, and so on.

So a 47 by 17 spaceship evolving naturally? Not quite as astronomically unlikely as it looks. A remarkable system, though, and there’s nothing like it known in Life. Yet.

Integer linear recurrence sequences of order 2 for dummies

Fibonacci!

F_0 = 0\\F_1 = 1\\F_n=F_{n-1}+F_{n-2}

It goes 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55…

So, what do you think? Kind of remarkable, with those irrational numbers, trigonometric functions, complex roots, and chaos, right? What? You don’t see them? Look more closely.

Take ratios of consecutive Fibonacci numbers and they approach \phi=(1+\sqrt{5})/2 = 1.61803..., the “Golden Ratio”. Weird, right? It’s all integers, and yet they point to an irrational number:

n Fn Fn/Fn-1
0 0
1 1
2 1 1
3 2 2
4 3 1.5
5 5 1.666666667
6 8 1.6
7 13 1.625
8 21 1.615384615
9 34 1.619047619
10 55 1.617647059
11 89 1.618181818
12 144 1.617977528
13 233 1.618055556
14 377 1.618025751
15 610 1.618037135

But the Fibonacci sequence is just one instance of a more general integer linear recurrence sequence of order 2, defined by:

L_0 = 0\\L_1 = 1\\L_n=pL_{n-1}+qL_{n-2}

where p, q are integer constants. p=1,q=1 gives the Fibonacci sequence. (One can get even more general by choosing other starting values;  p=1,q=1,L_0=2,L_1=1 gives the Lucas sequence, for instance. But for our purposes let’s stick with L_0=0, L_1=1.)
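
All the tables in this post can be cranked out with a few throwaway lines like these (just a sketch for playing along, not necessarily how the tables were actually made):

def recurrence(p, q, n_terms=16):
    # L(0) = 0, L(1) = 1, L(n) = p*L(n-1) + q*L(n-2)
    terms = [0, 1]
    while len(terms) < n_terms:
        terms.append(p * terms[-1] + q * terms[-2])
    return terms

for p, q in [(1, 1), (1, 10), (2, -1), (1, -2), (2, -2)]:
    t = recurrence(p, q)
    ratios = [t[i] / t[i - 1] if t[i - 1] != 0 else None
              for i in range(2, len(t))]
    print((p, q), t)
    print("  ratios:", ratios)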

Keeping p=1 but using q=10 we get a sequence that increases much faster, not surprisingly. It looks like the ratio of consecutive terms converges to about 3.7, but the convergence is much slower:

n Ln Ln/Ln-1
0 0
1 1
2 1 1
3 11 11
4 21 1.909090909
5 131 6.238095238
6 341 2.603053435
7 1651 4.841642229
8 5061 3.0654149
9 21571 4.262201146
10 72181 3.346205554
11 287891 3.988459567
12 1009701 3.507233641
13 3888611 3.851250024
14 13985621 3.596559543
15 52871731 3.780434991

If someone asks you, “what’s the next number in this sequence? 0, 1, 2, 3, 4, 5…” you can say, “that’s easy, it’s an integer linear recurrence sequence of order 2 with p=2,q=-1,L_0=0,L_1=1:”

n Ln Ln/Ln-1
0 0
1 1
2 2 2
3 3 1.5
4 4 1.333333333
5 5 1.25
6 6 1.2
7 7 1.166666667
8 8 1.142857143
9 9 1.125
10 10 1.111111111
11 11 1.1
12 12 1.090909091
13 13 1.083333333
14 14 1.076923077
15 15 1.071428571

And there of course the ratios of consecutive terms converge to 1.0, but slowly.

It might take you a little longer to come to grips with 0, 1, 1, –1, –3, –1, 5, 7, –3… though. But that’s just p=1,q=-2:

n Ln Ln/Ln-1
0 0
1 1
2 1 1
3 -1 -1
4 -3 3
5 -1 0.3333333333
6 5 -5
7 7 1.4
8 -3 -0.4285714286
9 -17 5.666666667
10 -11 0.6470588235
11 23 -2.090909091
12 45 1.956521739
13 -1 -0.02222222222
14 -91 91
15 -89 0.978021978

Whoah, chaotic! Doesn’t look like that one’s consecutive term ratio is converging to anything. Or diverging. It’s just all over the place.

One more: p=2,q=-2:

n Ln Ln/Ln-1
0 0
1 1
2 2 2
3 2 1
4 0 0
5 -4
6 -8 2
7 -8 1
8 0 0
9 16
10 32 2
11 32 1
12 0 0
13 -64
14 -128 2
15 -128 1

Huh. Not converging, but not chaotic. So what’s going on?

Well, let’s go back to the Fibonacci sequence. It seems to be connected with \phi. And in fact, a closed formula is

F_n = \left[\phi^n/\sqrt{5}\right]

where [x] denotes the nearest integer to x. Or another formula:

F_n = \frac{\phi^n-\left(-\phi\right)^{-n}}{\sqrt{5}}

Wait, what? You’re taking differences of powers of irrational numbers, dividing by another irrational number, and getting integers? Well, yes. Expand the powers of \phi and -1/\phi binomially: the terms with even powers of \sqrt{5} are rational and identical in both, so they cancel in the difference; the terms with odd powers of \sqrt{5} are rational multiples of \sqrt{5} with opposite signs in the two expansions, so they add in the difference, and then the factor of \sqrt{5} cancels in the ratio. You’re left with rational numbers that turn out to be integers. Still, seems kind of weird, this intimate relationship between \phi and the Fibonacci sequence.
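
If you don’t trust the hand-waving, here’s a quick numerical check of both formulas (ordinary double precision, so only good for modest n):

import math

phi = (1 + math.sqrt(5)) / 2
fib = [0, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

for n, F in enumerate(fib):
    nearest = round(phi ** n / math.sqrt(5))             # first formula
    binet = (phi ** n - (-phi) ** (-n)) / math.sqrt(5)   # second formula
    print(n, F, nearest, round(binet))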

But consider this quadratic equation:

x^2 = x + 1

whose solutions are \phi and \psi=(1-\sqrt{5})/2=-1/\phi. Then \phi and \psi also are solutions of

x^n = x^{n-1}+x^{n-2}

(which is just the previous equation multiplied by x^{n-2}). If we define U(n)=a\phi^n+b\psi^n, where a and b are constants, then

U(n)=a\phi^n+b\psi^n=a(\phi^{n-1}+\phi^{n-2})+b(\psi^{n-1}+\psi^{n-2})\\=a\phi^{n-1}+b\psi^{n-1}+a\phi^{n-2}+b\psi^{n-2}\\=U(n-1)+U(n-2)

so U(n) satisfies the Fibonacci recurrence relation, and if we solve the simultaneous equations

U(0)=F_0=0\\U(1)=F_1=1

we get a = 1/\sqrt{5} = -b, and with those values, U(n) = F_n. Ta daa.

Well, we can generalize this. Consider the sequence

S(0)=0\\S(1)=1\\S(n)=pS(n-1)+qS(n-2)

where p and q are integers; this is an integer linear recurrence sequence. To get a closed formula for S(n) start with the equation

x^2=px+q

whose solutions are \lambda_{1,2}=(p\pm\sqrt{p^2+4q})/2. If p^2+4q\ne 0, then setting S(n) to a linear combination of the \lambdas gives you the general solution — I won’t prove that, but it’s true. You can find a proof elsewhere, e.g. here. Imposing S(0)=0, S(1)=1 you get the particular solution

S(n)=\frac{\lambda_1^n-\lambda_2^n}{\sqrt{p^2+4q}}.

What if p^2+4q=0? Then \lambda_1=\lambda_2=\lambda=p/2 and the general solution is U(n)=a\lambda^n+bn\lambda^n. (Once again, follow the above link or do some Googling if you want the proof.) Imposing S(0)=0, S(1)=1 you get

S(n)=n\left(\frac{p}{2}\right)^{n-1}.
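
A quick numerical spot check of both closed forms, using complex arithmetic so the same code also covers the p^2+4q<0 case coming up next (the (p, q) pairs are just the examples from this post):

import cmath

def by_recurrence(p, q, n_terms=12):
    s = [0, 1]
    while len(s) < n_terms:
        s.append(p * s[-1] + q * s[-2])
    return s

def by_formula(p, q, n):
    d = p * p + 4 * q
    if d == 0:
        return n * (p / 2) ** (n - 1)          # repeated-root case
    l1 = (p + cmath.sqrt(d)) / 2
    l2 = (p - cmath.sqrt(d)) / 2
    return (l1 ** n - l2 ** n) / cmath.sqrt(d)

for p, q in [(1, 1), (1, 10), (2, -1), (1, -2), (2, -2)]:
    exact = by_recurrence(p, q)
    worst = max(abs(by_formula(p, q, n) - exact[n]) for n in range(len(exact)))
    print((p, q), "max discrepancy:", worst)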

So what’s the behavior as n increases? If p^2+4q>0 and |\lambda_1|>|\lambda_2| then as n gets large, S(n) approaches \lambda_1^n/\sqrt{p^2+4q} and the ratio S(n)/S(n-1) approaches \lambda_1. Similarly if p^2+4q>0 and |\lambda_2|>|\lambda_1|, the ratio approaches \lambda_2. Clearly the larger the difference in magnitude of the \lambdas the faster this convergence happens.

If p^2+4q=0, the ratio of consecutive terms approaches p/2.

And what if p^2+4q<0? Now it’s a whole new ball game; the \lambdas are complex, with |\lambda_1|=|\lambda_2|. Taking the case p>0, we have

\lambda_{1,2}=\frac{p}{2}\pm i\frac{\sqrt{-\left(p^2+4q\right)}}{2}\\=\sqrt{-q}e^{\pm i\theta}

where \theta=\arcsin{\frac{\sqrt{-(p^2/4+q)}}{\sqrt{-q}}}=\arcsin{\sqrt{1+\frac{p^2}{4q}}}. Now

S(n)=\frac{\lambda_1^n-\lambda_2^n}{\sqrt{p^2+4q}}\\=\frac{2\left(\sqrt{-q}\right)^n}{\sqrt{-\left(p^2+4q\right)}}\Im e^{in\theta}

(\Im denotes imaginary part)

S(n)=\frac{\left(\sqrt{-q}\right)^{n-1}}{\sqrt{1+p^2/4q}}\sin{\left(n\arcsin{\sqrt{1+p^2/4q}}\right)}

When p<0 you get something similar, but multiplied by (-1)^{n-1}.

(So, uh, wait, we’re saying that right hand side is always an integer? Yes. You want proof? Heh.)

And that means the behavior as n increases is… well. It’s complicated, isn’t it? That \sin{(n\theta)} throws a monkey wrench into the whole business.

So let’s go back to our examples from above. If p=q=1 then we have the Fibonacci series. \lambda_{1,2}=\phi,\psi and |\phi| is about two and a half times as large as |\psi|, so the ratio of consecutive values converges quickly to \phi.

If p=1,q=10 then \lambda_{1,2}=3.70156...,-2.70156...; the ratio converges to 3.70156 but more slowly.

The case p=2,q=-1 gives p^2+4q=0, so the ratio approaches 2/2=1. Specifically it’s n/(n-1), so the convergence is pretty slow.

And for p=1,q=-2, p^2+4q<0 and

S(n)=\frac{1.414214^{n-1}}{.935414}\sin{1.209429 n}

and the ratios of consecutive terms don’t behave smoothly.
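
Plug in some numbers and that expression really does spit back the integers from the p=1,q=-2 table above, up to floating-point fuzz:

import math

# theta = arcsin(sqrt(1 + p^2/(4q))) with p = 1, q = -2
theta = math.asin(math.sqrt(1 - 1 / 8))
for n in range(16):
    s = math.sqrt(2) ** (n - 1) / math.sqrt(7 / 8) * math.sin(n * theta)
    print(n, round(s, 6))   # 0, 1, 1, -1, -3, -1, 5, 7, -3, ...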

The p=2,q=-2 case also has p^2+4q<0, but in this case the angle is \theta=\arcsin{\sqrt{2}/2}=\pi/4, so the value of \sin{n\theta} repeats after four steps and the sequence behaves regularly, with the ratios of consecutive terms cycling through 2, 1, 0, (undefined).

So you see? You thought when we started all this was just multiplying and adding integers. We ended up with irrationals and powers of complex numbers and trigonometric functions and chaos and all kinds of stuff. Amazing.