So how is *that* proved? Lots of ways, some of which rely on the Jordan curve theorem, so then you need a proof of that. An outline of one that doesn’t use the Jordan curve theorem, Triangle Removal, is as follows.

First, transform the polyhedron into a planar graph. You can think of this as removing one face of the polyhedron, and then stretching the boundaries of that face and deforming the other edges until all the edges lie in a plane. Or you can think of it as putting a light source near one face and a plane on the opposite side: the shadow cast onto the plane is the planar graph. This graph has the same number of vertices and edges as the polyhedron, and apparently one fewer face — until you realize the entire plane exterior to the graph can be thought of as a face too.

Okay, we have our graph. Now do the following: Find any interior face that has more than three edges, and draw a diagonal across it from one vertex to another. Doing that increases the number of edges by 1, and divides one face into two, but leaves the vertices unchanged. That means $V - E + F$ is unchanged. Repeat that until all the interior faces are triangles.

Now do this:

1. Identify a triangle that has two edges on the border of the graph. If you can’t find one, go to step 2. If you can find one, remove those two edges. This eliminates two edges, one face, and one vertex, so again $V - E + F$ is unchanged. Now repeat step 1.
2. Identify a triangle that has one edge on the border of the graph. Remove that edge. This eliminates one edge, one face, and no vertices, so again $V - E + F$ is unchanged. Now repeat step 1.

Keep repeating as long as you can. Here I get hand-wavy, because I’m less than clear on the argument, but: You don’t want to do step 2 if you can do step 1, because that leads to dead ends — specifically, a graph broken up into two disconnected graphs. Always do step 1 if you can, step 2 only if you can’t do step 1, and the graph will stay connected.

For instance, start from a (kinda irregular) square pyramid. Here it’s turned into a planar graph with one quadrilateral face and three triangular faces (the fourth triangle mapping to the exterior of the graph). The quadrilateral is divided into two triangles, and then triangles are removed one at a time.

Since the graph stays connected, the process continues until there’s only one triangle left. At this point we stop and take a tally: There are 3 vertices, 3 edges, and 2 faces (the interior of the triangle, and the exterior). So $V - E + F = 3 - 3 + 2 = 2$, and since every step we took preserved the value of $V - E + F$, this proves $V - E + F = 2$ for the original polyhedron.
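The bookkeeping above is easy to machine-check. Here’s a tiny sketch (a sanity check, not part of the proof) confirming that each move preserves $V - E + F$, starting from the square pyramid’s counts:

```python
# (dV, dE, dF) for each move used in the proof
moves = {
    "diagonal across an interior face":      (0, +1, +1),
    "remove triangle with two border edges": (-1, -2, -1),
    "remove triangle with one border edge":  (0, -1, -1),
}

V, E, F = 5, 8, 5   # the square pyramid: 5 vertices, 8 edges, 5 faces
assert V - E + F == 2

for name, (dV, dE, dF) in moves.items():
    # every move leaves the alternating sum untouched
    assert dV - dE + dF == 0, name
```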

Something about that proof appeals to me, even though there’s that one part I’m hazy on.

The Sum of Angles proof I find clearer, but less cute. Here again you turn the polyhedron into a planar graph with straight edges. Now add up the interior angles of all the faces (including the exterior face, to which we assign the interior angles of the border polygon). For face $i$, which has $n_i$ edges, the sum of the interior angles is $(n_i - 2)\pi$. Summing over all the faces, and using the fact that each edge is shared by two faces (so $\sum_i n_i = 2E$),

$$\sum_i (n_i - 2)\pi = 2\pi E - 2\pi F.$$

But we get the same result if we sum the angles meeting at each vertex. For each interior vertex, the sum is $2\pi$, adding up to $2\pi(V - V_b)$, where $V_b$ is the number of vertices on the border. For the vertices on the border, it’s $2(\pi - \theta_k)$, where $\theta_k$ is the exterior angle at the $k$th vertex of the exterior polygon, and the 2 is because we include the (equal) contributions of the internal and external polygons. But the sum of the exterior angles of a polygon is $2\pi$, so the exterior vertices contribute a total of $2\pi V_b - 4\pi$. Altogether, then, the sum is $2\pi V - 4\pi$.

So $2\pi E - 2\pi F = 2\pi V - 4\pi$, or $V - E + F = 2$.

Either way, no Jordan curve theorem required! Yay.


The problem is, if it’s hexagons all the way around, it’s a geometric impossibility. You can’t have a convex polyhedron consisting solely of hexagons, regular or otherwise. In fact: If a convex polyhedron has only hexagons and/or pentagons as faces, then there must be at least 12 pentagons.

The proof is based on Euler’s polyhedron formula: If a convex polyhedron has $V$ vertices, $E$ edges, and $F$ faces, then

$$V - E + F = 2.$$

For our hex-and-pent-agon polyhedron, consisting of $h$ hexagons and $p$ pentagons, $F = h + p$, of course. What’s $E$? Well, each hexagon has 6 edges and each pentagon has 5, but that counts each edge twice, since each edge belongs to two faces. So $E = (6h + 5p)/2$.

Now, at each vertex, at least 3 edges meet. If you add up the number of edges at each vertex over all vertices you get at least $3V$; but since each edge contributes 2 to that total, $2E \ge 3V$.

But from Euler’s formula,

$$V = 2 - F + E,$$

so $2E \ge 3V$ becomes $2E \ge 6 - 3F + 3E$, or

$$3F - E \ge 6,$$

and substituting our formulas for $E$ and $F$,

$$3(h + p) - \frac{6h + 5p}{2} \ge 6,$$

or

$$6(h + p) - (6h + 5p) \ge 12,$$

or

$$6h + 6p - 6h - 5p \ge 12,$$

or

$$p \ge 12.$$

QED.

(The equality $p = 12$ holds if every vertex has exactly 3 edges — or equivalently, 3 faces — meeting at it. In particular, if you’re using only regular hexagons and pentagons, then there isn’t room at a vertex for more than 3 faces to meet there. So for a polyhedron made of regular hexagons and pentagons, or any polyhedron with exactly three hexagons and/or pentagons meeting at each vertex, $p = 12$.)
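As a quick numeric check, plug in the truncated icosahedron (the classic soccer ball), which does meet the three-faces-per-vertex condition:

```python
# Truncated icosahedron (soccer ball): 20 hexagons, 12 pentagons.
h, p = 20, 12
F = h + p                   # faces
E = (6 * h + 5 * p) // 2    # each edge is shared by two faces
V = 2 - F + E               # solve Euler's formula for V
assert V - E + F == 2
assert 2 * E == 3 * V       # equality case: exactly 3 edges at every vertex
print(V, E, F)              # prints: 60 90 32
```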

Hence the government must be petitioned. As The Aperiodical says, “Ban this hexagonal filth!”


So: All humans descend from a common ancestor. Or so many people believe. Literalist believers in the Old Testament would say Adam and Eve are common ancestors of us all. Believers in evolution go further and believe all life on Earth descends from a common ancestor. I’m in the latter group.

When did the most recent common ancestor (MRCA) of all humans live? Simulations suggest it was more recently than you might expect: 2000 to 5000 years ago. (It’s been argued that all persons of European descent share a common ancestor only about 1000 years ago, and genetic analysis [paper, FAQ] seems to support that.) Furthermore, going only a few thousand more years back, you arrive at a time when literally every person then alive fell into one of two categories: Those who are ancestors of *no one* alive today, and those who are ancestors of *everyone* alive today. I don’t know if there’s a term for that state, so I’ll make one up: Universal common ancestry (UCA). Understand these are probabilistic claims; it would certainly be *possible* that 20,000 years ago there were people whose living descendants are some but not all of us. But the probability of such a thing is so small it is almost indistinguishable from zero: Something that improbable really cannot be expected to happen even once in the history of the universe. And even if it did, another few thousand years back a non-UCA state would be even more unlikely than that. The probability of UCA back then, and any time previous to that, is absolute certainty for all practical purposes.
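The sort of simulation behind those numbers can be sketched in a toy model (fixed population size, discrete generations, and uniformly random mating are simplifying assumptions here, nothing like real demography — it’s in the spirit of Chang’s random-pedigree model): trace everyone’s two-parents-per-person pedigree backward and watch when a common ancestor of the whole present generation first appears, and when the all-or-nothing state kicks in.

```python
import random

def mrca_and_uca(pop_size, seed=0):
    """Trace ancestry backward: each individual in a fixed-size generation
    gets two parents drawn uniformly at random from the generation before.
    Returns (generations back to the first common ancestor of everyone,
             generations back to the all-or-nothing state)."""
    rng = random.Random(seed)
    # desc[i] = set of present-day individuals who have individual i
    # (in the generation currently being examined) as an ancestor
    desc = [{i} for i in range(pop_size)]
    gens, mrca = 0, None
    while True:
        gens += 1
        prev = [set() for _ in range(pop_size)]
        for child in range(pop_size):
            for _ in range(2):                      # two parents per child
                prev[rng.randrange(pop_size)] |= desc[child]
        desc = prev
        if mrca is None and any(len(d) == pop_size for d in desc):
            mrca = gens                             # someone is ancestor of everyone
        if all(len(d) in (0, pop_size) for d in desc):
            return mrca, gens                       # all-or-nothing reached

mrca, uca = mrca_and_uca(500, seed=1)
print("MRCA:", mrca, "generations back; all-or-nothing:", uca, "generations back")
```

For a population of a few hundred, both numbers come out on the order of the logarithm of the population size — a handful of generations, which is the model-scale analogue of “surprisingly recent.”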

Of course this doesn’t just apply to humans: We can talk about the MRCA of all chimpanzees alive today, or all hominins (humans and chimpanzees), or all ruby-throated hummingbirds, or whatever.

Now, here’s the weird thing: Long ago, 5 million years or more, amongst members of the species Nakalipithecus or something like it, there lived an individual (call them Nico) who had (at least) two offspring, Chris and Harley. They were Nakalipithecus (or whatever) too, of course, and they mated with other Nakalipithecus and had Nakalipithecus children, who had more Nakalipithecus children, who had … well, you get it. But here’s the thing: *All of Chris’s living descendants are chimpanzees. All of Harley’s are human.*

That might surprise you. And think how surprised Chris and Harley would be.

But it’s true, and it’s provable (in, again, a certain-for-all-practical-purposes sense).

Nico, you see, was the MRCA of all living hominins. Nico has all living humans and all living chimps among their descendants, and since Nico is the *most recent* common ancestor, no one in subsequent generations can make the same claim.

Now of course much more recent individuals are common ancestors of all living humans (and no chimps), or of all living chimps (and no humans). But could anyone be an ancestor of all living humans and only *some* living chimps, or vice versa? No. We have UCA for humans anytime before several thousand years ago, and I’m sure chimp UCA must be even more recent than that. More than a few thousand years ago, let alone a few million, anyone who was an ancestor of *some* living chimps was an ancestor of *all* living chimps. Likewise for humans, so any individual with *any* living humans and *any* living chimps among their descendants would have to be a common ancestor of *all* hominins. And Nico was the MRCA, so anyone born after Nico must have no living descendants, or humans but no chimps, or chimps but no humans.

We know Nico must have had two or more offspring. If they had only one, then that offspring would also be a common ancestor of all hominins, and Nico would not be the *most recent* one. One of them was Chris, and the other was Harley… and they were born after Nico (duh). The argument above applies to *anyone* born after Nico, and that includes Chris and Harley. One had to be an ancestor of all living chimps (but no living humans) — that’s Chris — and the other the ancestor of all living humans (but no living chimps) — that’s Harley.

A scenario in which that makes sense is one like this. Seven million years ago, a group of Nakalipithecus woke up one morning, checked Facebook, went to work, or went out to play, or whatever, not realizing that fifty miles away there was a great huge lake held back by a natural dam which had been slowly eroding. That day the dam broke. There was an enormous flood, many of the Nakalipithecus died, and the ones that survived found themselves on two sides of a new, permanent body of water. And the topography was such that none of them were able to cross the water or go around it, so the survivors on one side never again encountered the survivors on the other.

All right, this is not a particularly likely scenario, but it’s a thought experiment, okay?

Nico, alas, died in the flood, but not before saving the life of Chris. Harley, meanwhile, was swept off and ended up on the other side of the water. Within a few thousand years all the Nakalipithecus on one side were descendants of Chris (but not Harley) and all on the other side were descendants of Harley (but not Chris). The two groups evolved apart and eventually gave rise to chimps on one side, humans on the other.

Granted, speciation doesn’t usually work like that. Much of the time geographical separation is a factor in speciation, but not necessarily all the time, and in any case the geographic separation usually isn’t instantaneous. Mountains rise slowly, rivers change course over centuries, and so on.

But it doesn’t matter whether the two groups are sundered in hours or eons. Or even if they are geographically sundered at all. The point is, for whatever reason, two groups evolve away from one another. At the start they are one species and they interbreed frequently, but the frequency of that decreases more and more. And it may decrease over a very long time period, *but*: There’s a single, momentary last time. A last time a member of one of the groups breeds with a member of the other group. And not long before that, a last time one individual breeds with a member of one group while their sibling breeds with a member of the other, and they have fertile offspring leading to descendants alive today. Their parent is our common ancestor, and the common ancestor of all chimps too. Their children, though, may be your ancestors, or may have been Washoe’s, but not both.


See that? As $n$ increases, the quantity approaches integer values. Odd, huh? Why does it do that?

Despite what should have been a dead giveaway hint, I didn’t figure out the approach revealed in this post. Embarrassing. But having failed on the insight front, what can I do on the obvious generalization front?

Let’s think about quantities of the form

where $a$, $b$, $c$, $d$, and $e$ are nonzero integers; the fraction is in lowest terms. For now let’s also restrict to primes.

To investigate that we’ll consider

The complex quantities lie on the unit circle in the complex plane and are the vertices of a regular polygon. Using the binomial expansion, the sum is

or

Now, for the terms where is a multiple of , is equal to 1 and the sum over equals .

Otherwise, we’re summing over the points on the unit circle:

which is the sum of a geometric series so

For instance, when , the sum is . When , it’s .
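The two cases — the power is or isn’t a multiple of the number of roots — are easy to check numerically (the letters $n$ and $j$ below are my own labels for the number of roots of unity and the power):

```python
import cmath

def root_power_sum(n, j):
    """Sum of w**j over the n-th roots of unity w = exp(2*pi*i*k/n)."""
    return sum(cmath.exp(2j * cmath.pi * k * j / n) for k in range(n))

# n when n divides j, zero otherwise (up to floating-point fuzz):
assert abs(root_power_sum(7, 14) - 7) < 1e-9   # 7 | 14: every term is 1
assert abs(root_power_sum(7, 5)) < 1e-9        # geometric series sums to 0
assert abs(root_power_sum(5, 0) - 5) < 1e-9
```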

All right then. This means we keep only the terms where is a multiple of :

which is an integer. Call it . Then

or

So for large $n$, the expression approaches an integer if and only if all the other quantities in the sum have magnitude less than 1.
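For a concrete illustration of that criterion (my example, not necessarily the one used above): $2 + \sqrt{3}$ has conjugate $2 - \sqrt{3} \approx 0.268$, whose magnitude is less than 1. The sum $(2+\sqrt{3})^n + (2-\sqrt{3})^n$ is an integer, so $(2+\sqrt{3})^n$ sidles up to integers as $n$ grows:

```python
from math import sqrt

x, conj = 2 + sqrt(3), 2 - sqrt(3)
for n in (2, 5, 10):
    power = x ** n
    # the conjugate term is (up to float error) the whole gap to an integer
    print(n, power, conj ** n)
```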

For instance: Let , . Then

Now in the case ,

The magnitude of for . In fact it’s zero for , but then is an integer anyway; point is, or works: and both approach integers (the former much more quickly than the latter).

How about ? Then

By symmetry the magnitudes of the two complex numbers are the same, so what we need is

or

So there are no integer values of that give convergence to an integer for . It seems evident the same is true for all .


This shows as horizontal bars the range of “fifths” (or generators) for which *n* generators give an adequate major third. “Adequate” here means within 25 cents, and *n* is the number on the vertical axis; negative *n* means fifths downwards, positive *n* is fifths upwards. The black line is at 702 cents, a just perfect fifth. The bar colors just help distinguish different values of *n* (and make it a little less boring than if everything were blue).

So you can see that for just fifths, four upward fifths or eight downward fifths will work. The range where four upward fifths will work is larger, but still only 12.5 cents wide. The just fifths line is very close to the upper edge of the band, and once you go beyond it in the positive direction, there’s a small gap before you get to the range where nine upward fifths give a major third. In the other direction, ten downward fifths work, but only after a somewhat larger gap where nothing smaller than eleven fifths will do it.

Around 800 cents you see lots of bars. I wrote before about how near 720 cents the number of fifths needed for a major third is very large, because 720 cents generates a 5-equal scale with no adequate thirds. On the other hand 800 cents generates a 3-equal scale with an adequate third (400 cents), so *n* = -10, -7, -4, -1, 2, 5, 8 (and maybe beyond) are all solutions there, and nearby. (Similarly, near 700 cents, you have *n* = -8 and 4.)


The diatonic scale most western music’s been based on for the past couple millennia comes from ancient Greece; they developed it by tuning their tetrachords using seven consecutive notes on a circle of fifths — F, C, G, D, A, E, B, for instance. Using a just perfect fifth of 701.96 cents (Pythagorean tuning), the A (four fifths up from the F) is 2807.82 cents above F, or, dropping it down two octaves, 407.82 cents. This is not a particularly great approximation to a just major third, 386.31 cents, but it’s fairly close. Close enough that when music written in octaves or fifths or fourths gave way to use of thirds, musicians developed other ways of tuning diatonic scales for better thirds rather than dumping the entire system. What we’ve ended up with is equal temperament, with 700.00 cent fifths, four of which make a 400.00 cent major third.

So for many centuries we’ve been using a scale that was based on octaves and fifths to make music that uses thirds, because it happens to contain thirds that are close enough to just. Now, it’s a little odd to be using probabilistic terms to talk about simple arithmetic — two and two isn’t likely to be four, it *is* four — but I think you know what I mean when I ask, how likely is that? How lucky did we get?

It’s a hard question to quantify. But let’s generously say a major third is adequate if it’s within 25 cents of just. That’s a 50 cent range you want four fifths to fall into, so a single fifth has to be within a range a quarter of that size, 12.5 cents. If you didn’t know beforehand the size of a just perfect fifth, but knew it was somewhere between 650 and 750 cents, you might guess the odds of four fifths making a major third would be at most 12.5/100, or one in eight. Though worse, maybe, because the 12.5 cent range where it works might not be entirely contained within 650 to 750. In fact it might not overlap that range at all. (Though in actuality the range is from 690.33 to 702.83 cents.)
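That window is simple arithmetic; a few lines verify the endpoints quoted above (386.31 cents being the just major third):

```python
JUST_THIRD = 386.31   # cents
TOL = 25.0            # "adequate" = within 25 cents of just

# Four fifths less two octaves must land within TOL of JUST_THIRD,
# so the fifth itself has a window one quarter as wide:
low = (2400 + JUST_THIRD - TOL) / 4
high = (2400 + JUST_THIRD + TOL) / 4
print(low, high)   # about 690.33 and 702.83: a 12.5-cent window
```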

On the other hand, maybe the 25 cent range where two “fifths” make a major third does overlap, or the 16.7 cent range where three “fifths” will work does. So the odds of four *or fewer* fifths making an adequate major third might be a little better. Still seems small though.

Oh… but you also want to consider four or fewer downward fifths, or equivalently, four or fewer upward fourths. That improves the odds.

So let’s do a little simulation. Pick a number in the range, say, from 650.0 to 750.0 and see how many fifths, up or down, it takes to make an adequate third. Repeat, and get the distribution. Then ask about things like the average number of fifths needed.

There’s a difficulty here, though: Sometimes the answers get very large. Think about 720.00 cents. The notes that “fifth” generates are 720.0, 240.0, 960.0, 480.0, 0.0, 720.0… and it just repeats those five notes over and over; 720.00 generates a 5-equal scale. None of those notes is an adequate third, so you can run forever looking for it.

Of course you have pretty much zero chance of picking 720.00 at random, but if you pick, say, 719.932771, you’ll have to add a lot of fifths before hitting an adequate third. (1099 of them, looks like.) You’ll get occasional large numbers, then, and they’ll have a big impact on the mean value. The answer you get will fluctuate a lot depending on which large numbers you end up with.

This is why medians were invented.

So I wrote a little Python script to do this. If you take the range of possible “fifths” as 650 to 750 cents, then there’s about a 22% chance four or fewer fifths, up or down, will produce an adequate third. The median number of fifths required to make an adequate third: 11.
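A sketch of such a script (scanning a fine grid of hypothetical “fifths” instead of sampling at random, which is my own substitution to keep it deterministic) might look like this:

```python
import statistics

JUST_THIRD = 386.31   # cents, just major third
TOL = 25.0            # "adequate" = within 25 cents

def fifths_needed(fifth, max_n=10000):
    """Smallest n such that n fifths up *or* down, reduced to one octave,
    land within TOL cents of a just major third (None if never, as with
    generators of tiny equal divisions like 720.0)."""
    for n in range(1, max_n + 1):
        for note in ((n * fifth) % 1200, (-n * fifth) % 1200):
            if abs(note - JUST_THIRD) <= TOL:
                return n
    return None

# scan a grid of "fifths" between 650 and 750 cents
counts = [fifths_needed(650 + k * 0.01) for k in range(10001)]
counts = [c for c in counts if c is not None]
print("median fifths needed:", statistics.median(counts))
print("share needing 4 or fewer:", sum(c <= 4 for c in counts) / len(counts))
```

Spot checks: `fifths_needed(701.96)` and `fifths_needed(700.0)` both return 4 (four upward fifths), and `fifths_needed(670.0)` returns 3 (three downward fifths).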

I think it’s safe to say if you needed 11 perfect fifths to make an adequate major third, the system upon which western music developed would have been entirely different. Different how, I have no idea, but different. Needing only four fifths was a “lucky” break… not win-the-lottery lucky, but definitely beating-the-odds lucky.


If you run them in B37c/S23, the pre-loaf stabilizes quickly, but the pi takes a while — and some space. It needs 110 generations to settle down.

If you run them in B37e/S23, though, the pre-loaf just becomes a loaf immediately and the pi stabilizes much more quickly, in only 23 generations, and without spreading out so much.

So based on that, maybe it’s plausible B37c/S23 could be explosive while B37e/S23 isn’t. And then that might help account for why B37/S23 is explosive while B36/S23 and B38/S23 aren’t.

But this is still rather hand-wavy. Plausible is one thing, proved is something else entirely. First, it’s not just what happens to these objects that matters, but also how likely they are to arise in the first place. No matter how long and over what area a pi evolves, it won’t cause anything if it never crops up in the first place.

Second, in addition to pre-loafs and pis, you also need to consider larger sub-patterns with pre-loaves and pis embedded in them. How likely are they and what do they do under the two rules?

Anyway, proof or no proof, B37e/S23 really isn’t explosive. That means — unlike B37/S23 — it can be explored with a soup search. But apgmera, the C++ version 3 of apgsearch, can’t handle non-totalistic rules. The Python version 1.0 can, or rather a hacked version of it can. Slowly! I’ve been running it and it’s done 2 or 3 soups per second, about 3 orders of magnitude slower than apgmera can run B3/S23. ~~And for some reason it seemed not to be sending results to Catagolue.~~*

But anyway, 11c/52 diagonal puffers cropped up several times, laying trails of blocks and loaves:

This puffer evolves from this 7-cell object:

And if there’s a suitably placed ship nearby, it becomes an 11c/52 diagonal spaceship. This has cropped up in several soups.

\**Edit:* My B37e/S23 hauls *are* on Catagolue, but under the enharmonic name B37-c/S23.


Fairly typical. I’ve seen some soups take several thousand generations to stabilize in B38/S23, and I’ve seen a few — *very* few — stabilize in B37/S23. But most soups stabilize in 1000 generations or so in B36/S23 and B38/S23… and almost all soups explode in B37/S23.

Does that make any sense to you? Explain it to me, then.

I guess you can wave your hands and say “well, if you have few births you don’t get explosive behavior, and if you have many births in some small region you get momentary overpopulation which then crashes in the next generation, but somewhere in the middle there’s a point where you have not too few births but not enough crashes and it explodes”. But is that the best we can do at understanding this?

*Edit:* In fact, that lame explanation seems even more lame when you consider this: The non-totalistic rule B37c/S23 (meaning birth occurs if there are 3 live neighbors, or if there are 7 live neighbors with the lone dead neighbor at a corner of the neighborhood) is explosive, but B37e/S23 (birth occurs if there are 3 live neighbors, or if there are 7 live neighbors with the lone dead neighbor on an edge of the neighborhood) isn’t.
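The c/e distinction is easy to implement: with seven of the eight neighbors alive, exactly one is dead, and you just ask whether that dead cell sits at a corner or on an edge of the 3×3 neighborhood. A minimal sketch (my own toy stepper, not Golly or apgsearch code):

```python
from collections import Counter

def step(cells, corner):
    """One generation of B37c/S23 (corner=True) or B37e/S23 (corner=False).
    cells is a set of (x, y) live-cell coordinates."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if dx or dy)
    nxt = set()
    for cell, n in counts.items():
        if cell in cells:
            if n in (2, 3):                      # S23: survival on 2 or 3
                nxt.add(cell)
        elif n == 3:                             # B3
            nxt.add(cell)
        elif n == 7:                             # B7c or B7e
            x, y = cell
            # locate the single dead neighbor
            (dx, dy), = [(dx, dy)
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx or dy) and (x + dx, y + dy) not in cells]
            if (dx != 0 and dy != 0) == corner:  # corner vs edge of the 3x3 box
                nxt.add(cell)
    return nxt

block = {(0, 0), (0, 1), (1, 0), (1, 1)}
assert step(block, True) == block and step(block, False) == block  # still life in both
```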
