One rule of linking, though, is that links can’t cross. If you want to link two portals, and a straight line segment between them intersects some other link, then tough luck: you can’t link them.

(Technically, if the Earth is regarded as a sphere, presumably links are great circle segments and control fields are spherical triangles. But for our purposes we pretend we’re working in a plane. Links are line segments, control fields are Euclidean plane triangles.)

So a question you might ask is, for some collection of portals, is there an optimal way to link them? Can you make more links or more control fields if you do it one way than if you do it another?

Answer: No. If you start creating links, and keep adding links until no more links are possible, then regardless of how you decide which links to make, you always get the same number of links, and the same number of control fields.

And how many is that?

You get the answer using the Euler characteristic. For any convex polyhedron,

F − E + V = 2,

where F, E, and V are respectively the number of faces, edges, and vertices the polyhedron has. A cube, for instance, has 6 faces, 12 edges, and 8 vertices:

6 − 12 + 8 = 2.

What do polyhedra have to do with portals? Well, the same idea applies to any connected planar graph (a graph with non-crossing edges), which can be thought of as the shadow of a polyhedron, provided you realize every region of the plane bounded by edges is a face, *including the region extending to infinity.* So this graph

also has 6 faces, 12 edges, and 8 vertices, if you count as faces the central quadrilateral, the four quadrilaterals surrounding it, and the infinite region surrounding them.

Viewed as linked Ingress portals, though, it’s not maxed out; in fact, there are no control fields because none of the polygons are triangles. So you can add more links until you have, for instance

and now there are 11 faces, 17 edges, and 8 vertices; 11 − 17 + 8 = 2.

But to maximize links and control fields, all the polygons in the graph, except the external one, must be triangles. Each of the triangular faces has 3 edges. But E ≠ 3T, because where two triangles adjoin, a single edge is one of the bounding edges of both triangles. It serves as two edges, in a way. On the other hand, edges of triangles along the boundary are shared with the external face. So if we regard the external face as contributing zero edges, then each internal triangle (not along the boundary) contributes 3/2 edges, and each boundary triangle contributes 2 edges. Or

2E = 3T + B,

where T is the number of triangles and B is the number of boundary triangles, which is the same as the number of edges or vertices in the bounding polygon (which is the convex hull of all the vertices). Solving for E,

E = (3T + B)/2.

Then combining with the Euler characteristic (the faces being the T triangles plus the external face, so F = T + 1),

(T + 1) − (3T + B)/2 + V = 2,

or

E = 3V − B − 3.

In the above diagram, V = 8 and B = 4, so E = 3×8 − 4 − 3 = 17, which is what we see. This tells us no matter how we link portals, once we’ve made all possible links, the number of links we end up with is the same. And the number of control fields, T, is

T = E − V + 1 = 2V − B − 2.

And again in the above diagram, T = 2×8 − 4 − 2 = 10, which is correct. No matter how we link a given set of portals, we end up with a fixed number of control fields. All that matters is the number of portals, and the number on the convex hull.
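The two formulas, links E = 3V − B − 3 and fields T = 2V − B − 2, are easy to sanity-check in code; here’s a minimal sketch (the function names are mine):

```python
def max_links(V, B):
    """Links in a maxed-out layout = edges of a triangulation: 3V - B - 3."""
    return 3 * V - B - 3

def max_fields(V, B):
    """Control fields in a maxed-out layout = internal triangles: 2V - B - 2."""
    return 2 * V - B - 2

# The diagram above: 8 portals whose convex hull is a square (B = 4).
print(max_links(8, 4), max_fields(8, 4))   # 17 links, 10 fields
```

And the Euler characteristic is a built-in consistency check: (fields + 1) faces, minus links, plus portals, is always 2.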

One way is zeta function regularization. For this we start with the sum

S(s) = 1^{−s} + 2^{−s} + 3^{−s} + 4^{−s} + … .

For example, S(2) = 1 + 1/4 + 1/9 + 1/16 + … . This series converges to the limit π^{2}/6. In fact S(s) converges for any complex s where the real part Re s > 1.

For s = 1, S(1) = 1 + 1/2 + 1/3 + 1/4 + … is the harmonic series, and this diverges. If you approach s = 1 along the real axis you find S(s) increases without limit. Off the real axis, things are a little different. At s = 1 + i, for example, the sum fails to converge, but as you approach 1 + i from the right, S(s) approaches a finite limit. Similar behavior is found elsewhere on the line Re s = 1, other than s = 1.

That suggests there might be a way to construct a function that is equal to S(s) for Re s > 1 but which has well defined values elsewhere, except s = 1. And indeed there is: analytic continuation.

Imagine I give you the following function: f(x) = x for real 0 ≤ x ≤ 1. Outside that interval f is undefined. But you obviously could define another function, g(x) = x for all real x, which is defined on the whole real number line and has the property that g(x) = f(x) in the range where f is defined. Obviously g is continuous, and is differentiable everywhere.

On the other hand, you could instead define g as being quadratics grafted onto the line from (0, 0) to (1, 1):

g(x) = x + x^{2} for x ≤ 0,  g(x) = x for 0 ≤ x ≤ 1,  g(x) = x + (x − 1)^{2} for x ≥ 1,

which has the same properties. Or you could use cubics, or quartics, or, well, anything provided it has the right value and derivative at x = 0 and x = 1. There’s an infinite number of ways to continue f to the entire real line.

In the complex plane you can do something similar. I give you a function f(z) defined for z within some region of the complex plane. f is analytic, that is, it has a complex derivative everywhere it’s defined. Then you can give me an analytic function g(z) defined everywhere in the complex plane and equal to f(z) everywhere f is defined. (I’m being sloppy and informal here; there could be poles where neither function is defined, for example.)

Here’s the thing, though: Unlike on the real line, g is *unique*. There is exactly one analytic function that continues my analytic function to the entire complex plane.

So, getting back to our sum S(s) (which is analytic), we can define an analytic function ζ(s) equal to S(s) for Re s > 1, whose behavior elsewhere is given by analytic continuation. One can show

ζ(s) = 2^{s} π^{s−1} sin(πs/2) Γ(1 − s) ζ(1 − s),

where Γ is the usual gamma function. ζ(s) has a pole at s = 1 but is well defined everywhere else. ζ is known as the Riemann zeta function.

Now, we know ζ(s) is the value of S(s) wherever that sum converges. Zeta regularization just assigns the value of ζ(s) to that sum where it does not converge as well. For instance, when s = 0, we have S(0) = 1 + 1 + 1 + 1 + …, and ζ(0) = −1/2.

The somewhat notorious sum of the positive integers, 1 + 2 + 3 + 4 + …, is S(−1), to which is assigned the value ζ(−1) = −1/12. If you want to start an argument on the Internet, claiming that 1 + 2 + 3 + 4 + … = −1/12 is a good way to do it. Of course that claim glosses over a lot.

It turns out the negative even integers are (“trivial”) zeros of the zeta function, so 1 + 4 + 9 + 16 + … = ζ(−2) = 0 by this summation method. Generally, for positive integer exponents n,

1 + 2^{n} + 3^{n} + 4^{n} + … = ζ(−n) = −B_{n+1}/(n + 1),

where B_{n} is the *n*th Bernoulli number,

x/(e^{x} − 1) = B_{0} + B_{1}x + B_{2}x^{2}/2! + B_{3}x^{3}/3! + … .

So

1 + 2^{3} + 3^{3} + 4^{3} + … = ζ(−3) = −B_{4}/4 = 1/120,

and on from there.
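You can check those values with exact rational arithmetic, computing Bernoulli numbers from the recurrence implied by the generating function. A sketch (the (−1)^{n} factor handles the B_{1} sign convention; for n ≥ 1 it agrees with −B_{n+1}/(n + 1), since the odd Bernoulli numbers past B_{1} vanish):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n (B_1 = -1/2 convention), from the
    recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-s, m + 1))
    return B

def zeta_neg(n):
    """zeta(-n) = (-1)^n B_{n+1}/(n+1): the zeta-regularized value
    of 1 + 2^n + 3^n + ... (and of 1 + 1 + 1 + ... for n = 0)."""
    B = bernoulli(n + 1)
    return (-1) ** n * B[n + 1] / Fraction(n + 1)

print(zeta_neg(1))   # -1/12
```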

Given a series a_{0} + a_{1} + a_{2} + …, then, we can define

f(x) = a_{0} + a_{1}x + a_{2}x^{2} + a_{3}x^{3} + …

for 0 ≤ x < 1. Then if lim_{x→1⁻} f(x) exists and is finite, that limit is the Abel sum of a_{0} + a_{1} + a_{2} + … .

As a simple example, apply this to Grandi’s series, 1 − 1 + 1 − 1 + … . Here f(x) = 1 − x + x^{2} − x^{3} + … = 1/(1 + x) for 0 ≤ x < 1. The limit as x → 1⁻ is 1/2, the same result as we obtained using Cesàro summation. In fact it can be shown that Abel summation is stronger than Cesàro summation, i.e., for series that can be Cesàro summed, Abel summation gives the same result, but there are additional series which can be Abel summed but not Cesàro summed. Of course Cesàro summation is consistent with ordinary summation for convergent series, and therefore so is Abel summation: that is, Abel summation is regular.

Here’s another example. Consider

1 − 2 + 3 − 4 + … .

Not only does this series not converge, but the partial sum averages don’t converge either, so it is not Cesàro summable. But it is Abel summable. We make this sum into a function

f(x) = 1 − 2x + 3x^{2} − 4x^{3} + … .

But now notice: 1 = 1×1, 2 = 1×1 + 1×1, 3 = 1×1 + 1×1 + 1×1, and so on: the coefficients are those of the product

(1 − x + x^{2} − x^{3} + …)(1 − x + x^{2} − x^{3} + …)

and, again, for 0 ≤ x < 1,

f(x) = 1/(1 + x)^{2}.

Now the Abel sum is lim_{x→1⁻} 1/(1 + x)^{2} = 1/4.
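You can watch f(x) approach 1/4 numerically as x → 1⁻; a sketch, using brute-force partial sums rather than the closed form:

```python
def abel_f(x, terms=100_000):
    """Partial evaluation of f(x) = 1 - 2x + 3x^2 - 4x^3 + ... for 0 <= x < 1."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * (n + 1) * x ** n
    return total

for x in (0.9, 0.99, 0.999):
    print(x, abel_f(x), 1 / (1 + x) ** 2)   # the two columns agree
```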

A couple more properties (besides regularity) a summation method might have are **linearity** and **stability**. For the following let A(s) denote the result of applying summation method A to series s. By linearity is meant: if A(a_{0} + a_{1} + a_{2} + …) = α and A(b_{0} + b_{1} + b_{2} + …) = β then A((pa_{0} + qb_{0}) + (pa_{1} + qb_{1}) + …) = pα + qβ. By stability is meant: if A(a_{0} + a_{1} + a_{2} + …) = α then A(a_{1} + a_{2} + a_{3} + …) = α − a_{0}, and conversely. Cesàro summation and Abel summation both are linear and stable. So is classical summation, for that matter.

You can prove that for any linear and stable summation method A, the sum of the Grandi series is 1/2, *if that sum exists*:

A(1 − 1 + 1 − 1 + …) = 1 + A(−1 + 1 − 1 + 1 − …) (by stability)

= 1 − A(1 − 1 + 1 − 1 + …) (by linearity)

and so

2A(1 − 1 + 1 − 1 + …) = 1, or A(1 − 1 + 1 − 1 + …) = 1/2.

That “if that sum exists” provision is important. For instance, classical summation of the Grandi series is undefined, not 1/2, even though classical summation is linear and stable. You can come up with similar proofs about linear and stable sums of other series, that they must always have some particular value if they have a value at all. Showing that they do indeed have a value is another matter!

Conversely, you can prove some series do not have values for any summation method that is linear and/or stable. For example, suppose A is stable and A(1 + 1 + 1 + 1 + …) has value α. Then

α = 1 + A(1 + 1 + 1 + …) (by stability)

= 1 + α,

an impossibility. So 1 + 1 + 1 + 1 + … cannot be summed by any stable summation method. There are unstable methods, however, that can sum that series.

A divergent series has no limit, so we can’t assign that as a value. Conventionally we just say it has no value, its value is undefined. But in fact some divergent series can be associated with a definite quantity by means other than the limit of the sequence of partial sums. For some purposes, it can be useful to regard that quantity as the value, or *a* value, of the series.

There are lots of ways to associate a value with a divergent series; lots of summation methods, as they are called. It’s unfortunate terminology, in that it suggests they’re methods for finding the unfindable sum of the infinite number of numbers. But it’s the terminology we’re stuck with.

Summation methods are kind of like technical standards: the great thing about them is there’s so many to choose from. Generally a given summation method can be used with some series, but not with others. Some methods are stronger than others, in the sense that the one can be applied to any series the other can, with the same result, but it can also be applied to some series the other can’t handle.

Perhaps the simplest summation method applicable to a divergent series is Cesàro summation. In its simplest form, this is just finding the limit not of the partial sums of a series, but of the average of the partial sums. For example, Grandi’s series is

1 − 1 + 1 − 1 + 1 − 1 + … .

The first partial sum is 1, the second is 0, the third is 1, the fourth is 0, and so on — they alternate between 1 and 0. They don’t converge. But the average of the first one partial sum is 1, the average of the first two is 1/2, the average of the first three is 2/3, the average of the first four is 1/2, and so on, forming the sequence

1, 1/2, 2/3, 1/2, 3/5, 1/2, 4/7, 1/2, …

and that sequence *does* converge, to 1/2. This value is the Cesàro summation of Grandi’s series.
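Watching the averages settle is easy to do numerically; a sketch:

```python
def cesaro_averages(terms):
    """Running averages of the partial sums of the series given by `terms`."""
    averages = []
    partial = 0    # n-th partial sum
    running = 0    # sum of the first n partial sums
    for n, a in enumerate(terms, start=1):
        partial += a
        running += partial
        averages.append(running / n)
    return averages

grandi = [(-1) ** n for n in range(10_000)]   # 1 - 1 + 1 - 1 + ...
print(cesaro_averages(grandi)[:7])   # 1, 1/2, 2/3, 1/2, 3/5, 1/2, 4/7
print(cesaro_averages(grandi)[-1])   # close to 1/2
```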

Now, we know if we add together a finite number of integers, we get an integer, and it seems crazy to think you could sit down and add an infinite number of integers and get a fraction. Then again, it’s crazy to think you could sit down and add an infinite number of integers. And that’s not what we’re doing. But we know the partial sums alternate between 1 and 0, so the value halfway between, 1/2, in some sense does characterize the behavior of the infinite series.

A reasonable question to ask is, what’s the Cesàro summation of a *convergent* series? It doesn’t take too much thinking to realize intuitively that if a series converges, then the average of the partial sums also should converge, and to the same value. For instance, the partial sums of

1 + 1/2 + 1/4 + 1/8 + …

converge to 2, and so do the averages of the partial sums. Granted, the partial sums converge much faster: after just 12 terms the partial sum is about 1.9995, and it piles on correct digits exponentially from there, while after 10,000 terms the average of the partial sums is still only about 1.9998. But it’s getting there. A summation method that gives the conventional limit when applied to a convergent series is called **regular**. Cesàro summation is regular, and clearly that’s a nice attribute to have: it means Cesàro summation is consistent with ordinary summation, but is stronger in the sense that it also gives results for some series which have no classical value.
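To put numbers on that, take the convergent geometric series 1 + 1/2 + 1/4 + … (limit 2) as the concrete example; a sketch:

```python
# Partial sums vs. Cesàro averages for the geometric series with ratio 1/2.
N = 10_000
partials = []
s = 0.0
for n in range(N):
    s += 0.5 ** n
    partials.append(s)
average = sum(partials) / N

print(partials[11])   # partial sum of the first 12 terms: ~1.9995
print(average)        # average of the first 10,000 partial sums: ~1.9998
```

The partial sums close in on 2 exponentially; the averages only like 1/N.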

But does it make sense to say

1 − 1 + 1 − 1 + 1 − 1 + … = 1/2 ?

It’s an unconventional and probably misleading use of the equal sign, but if it’s understood you’re talking about a value assigned using a summation method, specifically Cesàro summation, you maybe can get away with it. But you can also make the sense more explicit. Hardy again:

We shall make systematic use of the following notations. If we define the sum of Σa_{n}, in some new sense, say the ‘Pickwickian’ sense, as s, we shall say that Σa_{n} is *summable* (P), call s the P *sum* of Σa_{n}, and write

Σa_{n} = s (P).

We shall also say that s is the P *limit* of the partial sum s_{n}, and write

s_{n} → s (P).

[I]t is broadly true to say that mathematicians before Cauchy asked not “How shall we *define* 1 – 1 + 1 – …?” but “What *is* 1 – 1 + 1 – …?”, and that this habit of mind led them into unnecessary perplexities and controversies which were often really verbal.

What is the value of an infinite series?

I mean, what does “value” mean here?

With a finite series, you can (in principle) just add all the numbers together. You take the result and call that the value of the series. The value of 1 + 2 + 3 + 4 is 10. No problem. But you can’t do that with an infinite series. You’d never complete the process — and so you can’t get its result.

You learn about infinite series in school, and what you learn is that some series converge to a limit. That is, if you have the infinite series

a_{1} + a_{2} + a_{3} + a_{4} + …

it converges if, loosely speaking, the sequence of partial sums

s_{1} = a_{1}, s_{2} = a_{1} + a_{2}, s_{3} = a_{1} + a_{2} + a_{3}, …

approaches some value S arbitrarily closely as n gets larger; that value is called the limit and we can write

lim_{n→∞} s_{n} = S.

And then there’s a bit of a leap as we regard S as not just the limit of the partial sums, but as the value of the infinite series:

a_{1} + a_{2} + a_{3} + a_{4} + … = S.

But not all series have limits, and if the limit doesn’t exist, then we regard the value of the infinite series as undefined.

That all can be learned in an intuitive way, though it can be made much more formal, and on the surface it makes sense. But there’s a sort of swindle going on here. You’re thinking of the “value” of the series as “what you would get if you added up all the infinite number of numbers”… but you can’t add them all up, so how can you assert what you would get if you did?

Here’s a demonstration of why that way of thinking about it is deceptive: the Riemann series theorem. Consider the series

1 − 1/2 + 1/3 − 1/4 + 1/5 − 1/6 + …

You can show this converges to the limit ln 2. But you can take the same series and rearrange the terms like this:

1 + 1/3 − 1/2 + 1/5 + 1/7 − 1/4 + 1/9 + 1/11 − 1/6 + …

and that converges to (3/2) ln 2. You’re adding up “the same” numbers in the second series as in the first, and addition is commutative, so you should get the same answer if you could add up all the numbers in the series (which you can’t)… but the limits are different! In fact, Riemann proved that any conditionally convergent series (one that has a limit, but the sum of the absolute values of the terms does not) can be rearranged to give you *any* limit, including +∞ or −∞, or no limit at all. It seems the “value” of a conditionally convergent series is a property not just of the numbers being summed, but the order they’re being summed in, and that’s not at all true of the value of a finite sum. Really the limit is a number that can be unambiguously associated with an infinite series, and we can *define* it as that series’s value, but “value” here means something different than in the case of a finite series where it’s just what you get if you add all the terms together. In fact, what the value means is just what it is: the limit of the sequence of the finite sums.
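Both limits are easy to watch numerically; a sketch, taking the rearrangement in groups of two positive terms and one negative:

```python
from math import log

def alt_harmonic(n_terms):
    """1 - 1/2 + 1/3 - 1/4 + ...: sum of the first n_terms terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_groups):
    """Same terms, two positives then one negative per group:
    1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ..."""
    total = 0.0
    for k in range(1, n_groups + 1):
        total += 1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
    return total

print(alt_harmonic(100_000), log(2))        # original order: -> ln 2
print(rearranged(100_000), 1.5 * log(2))    # rearranged: -> (3/2) ln 2
```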

That being said, what about a divergent series?

Stay tuned.

Define the zeroth ramp number R(0) to be 123456789. Then define the nth ramp number R(n) to be R(n-1) with a 1 prepended and a 9 appended: R(1) = 11234567899, R(2) = 1112345678999, and so on.

Likewise, the antiramp numbers R'(n) are R'(0) = 987654321, R'(1) = 99876543211, R'(2) = 9998765432111, and so on.
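The constructions are one-liners on strings; a sketch (my own, not the script that did the prime testing):

```python
def ramp(n):
    """R(n): 123456789 with n 1s prepended and n 9s appended."""
    return int("1" * n + "123456789" + "9" * n)

def antiramp(n):
    """R'(n): 987654321 with n 9s prepended and n 1s appended."""
    return int("9" * n + "987654321" + "1" * n)

print(ramp(1), antiramp(2))   # 11234567899 9998765432111
```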

Then R(17), R(19), and R'(38) are (if the Python primefac library is to be believed) primes. And they are the *only* ramp/antiramp primes for a good long time. I checked up through R(1000) and R'(1000)… and just when it seemed there were no more, R'(926) turns out to be prime:

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999999999999999999999999999999999999

9999999999999999999999999999999876543211111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

and my first thought was, “Let’s see, so if each destroys half of all life…”

Or am I getting my pop culture mixed up? Well. Suppose we did have a line of, say, 63 Thanoses. Thani? Thanoxen? Something. One by one they snap their fingers and each time, half of all life is destroyed. You end up with (1/2)^{63} of all life surviving. About 1.1E-19. Good luck surviving that.

But wait, that’s wrong. The first Thanos snaps and half of all life is destroyed… *including half of all Thanoses*. Of the 62 who haven’t snapped yet, now there are only 31 of them. After two snaps there are 15 who haven’t snapped. After three there are 7, after four there are 3, after five there is 1. That one snaps and we’re done. The surviving fraction is (1/2)^{6} = 1/64. Your odds still aren’t good but they’re enormously better.

6 is log_{2}(63 + 1); the surviving fraction is (1/2)^{log_{2}(N+1)}, or 1/(N + 1), for N initial Thanoses (when N + 1 is a power of 2).

Well, but not exactly. Presumably this is a random, probabilistic thing. On average half of all Thanoses die on each snap, but there’s some probability they all survive, and some probability they all die. If they all die on the first snap, 1/2 of us survive; if they all survive every snap, all but (1/2)^{63} of us die. The former is a lot more likely than the latter, though.

I wrote a little Python script and got this distribution of survival fractions with 64 initial Thanoses: *[edited after bug fix]*

Fraction | Number of times |
---|---|
1/4 or more | 0 |
1/8 | 671 |
1/16 | 54,214 |
1/32 | 343,104 |
1/64 | 430,491 |
1/128 | 152,643 |
1/256 | 18,035 |
1/512 | 832 |
1/1024 | 10 |

So, good news, in the worst case there still are several million people alive. Just probably not you.
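The script was quick and dirty; a sketch of the idea (my reconstruction here, so details may differ from what produced the table, and note it assumes each Thanos who survives long enough always gets his snap off):

```python
import random
from collections import Counter

def snap_chain(thanoses=64):
    """One line of Thanoses: each unsnapped one snaps in turn, and every
    snap halves all life, including the remaining unsnapped Thanoses
    (each survives with probability 1/2). Returns the fraction of all
    life that survives the whole chain."""
    unsnapped = thanoses
    snaps = 0
    while unsnapped > 0:
        unsnapped -= 1    # this one takes his turn and snaps
        snaps += 1
        # the snap halves everything, snappers-in-waiting included
        unsnapped = sum(1 for _ in range(unsnapped) if random.random() < 0.5)
    return 0.5 ** snaps

random.seed(1)   # for reproducibility
tally = Counter(snap_chain() for _ in range(100_000))
for fraction in sorted(tally, reverse=True):
    print(f"1/{round(1 / fraction)}: {tally[fraction]}")
```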

I talked about how, instead of checking in base 10 every number up to, say, 10^{40}, you can check only numbers with no 0 or 1 digits, that do not have both a 5 and an even digit, with digits in nondecreasing order, and that’s a vastly smaller number to check.

Similarly for other bases, but think about base 3. The only digits you have are 0, 1, and 2, so the only numbers that do not have 0 or 1 digits are all 2s: 2_{3}, 22_{3}, 222_{3}, 2222_{3}, …

There are 343,585,013,821,340,887,357,640,753,177,080,848,468,071,681,334 numbers that have 100 digits in base 3. Out of those only one needs to be checked (if you’ve already checked the 99 other numbers consisting of fewer 2s). Which is one heck of a speedup.

In base 4, any number with more than one digit equal to 2 has maximum persistence 2, so there are only two numbers per decade worth checking (23_{4}, 33_{4}, 233_{4}, 333_{4}, 2333_{4}, 3333_{4}, …)

So if we want to check numbers up to 100 digits, there are 100 base 3 numbers and 200 base 4 numbers. For base 5, there are 176,850 numbers. 181,900 in base 6, benefiting from the factors of 6, but 96,560,645 in base 7! Still a lot less than 7^{100}, but things are slowing down pretty hard.

In base 2, as I said, every number > 1 has persistence 1. (Either it contains a 0, so goes to 0, or it’s all 1s, so goes to 1.)

In base 3, only one number in each decade (of the form 222…222_{3}) is worth looking at. From 22222_{3} on most of the digit products seem to contain zeros so the persistence is 2. 22_{3} and 2222_{3} also have persistence 2. The only numbers up to 1000 digits with larger persistence seem to be 222_{3} and 222222222222222_{3} both with persistence 3.

For bases 3 through 16, as far as I’ve checked in each:

Base | Max persist | Min example |
---|---|---|
3 | 3 | 222_{3} |
4 | 3 | 333_{4} |
5 | 6 | 3344444444444444444444_{5} |
6 | 5 | 24445_{6} |
7 | 8 | 444555555555555666_{7} |
8 | 5 | 333555577_{8} |
9 | 7 | 2577777_{9} |
10 | 11 | 277777788888899_{10} |
11 | 12 | 399999aaaaaaaaaaaaaaaaaaaaaaa_{11} |
12 | 7 | 3577777799_{12} |
13 | 14 | 7777779aaaaaaaaabcccccc_{13} |
14 | 13 | 55599999999999999aaaabbbbbb_{14} |
15 | 11 | 2bbbbccccdddddde_{15} |
16 | 8 | 379bdd_{16} |

One thing that seems to be happening is that you tend to get larger maximum persistences in prime bases, smaller ones in composite bases. Presumably that’s because of the analog to the 5-and-even situation in base 10: in a prime base, the digit product cannot be a multiple of the base, while in a composite base it can, in which case the digit product ends in a zero and terminates the sequence. Notice how every prime base from 5 on up has larger maximum persistence than the subsequent base. Also, recall 12 and 16 have respectively four and three proper divisors larger than 1, and then notice how small the maximum persistence is in bases 12 and 16 compared to bases 10, 11, 13, 14, and 15.
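A digit-product iteration parameterized by base makes it easy to spot-check entries in the table; a sketch (numerals converted with `int(s, base)`):

```python
def persistence_in_base(n, base):
    """Multiplicative persistence of n in the given base: the number of
    digit-product steps needed to reach a single base-`base` digit."""
    steps = 0
    while n >= base:
        product = 1
        m = n
        while m:
            m, digit = divmod(m, base)
            product *= digit
        n = product
        steps += 1
    return steps

# Spot-check a few table entries (min examples given as base-b numerals):
print(persistence_in_base(int("333", 4), 4))      # 3
print(persistence_in_base(int("24445", 6), 6))    # 5
print(persistence_in_base(int("379bdd", 16), 16)) # 8
```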

But in written words, here it is: Take a number and multiply its digits (in base 10 unless otherwise noted) together. Then the product of the digits of that number. Then the product of the digits of that number. Keep going. Eventually you will reach a single digit number (the digit product for a multi-digit number is always less than that number, so it decreases at each step until a single digit is reached), and of course the digit product of a single digit number is itself, the end.

For instance: Starting from 28, we have 2×8 = 16, then 1×6 = 6 and you get to a single digit number in two steps. We say the multiplicative persistence of 28 is 2. From 88 it goes: 88, 64, 24, 8. Persistence is 3.
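In code, the whole procedure is just a digit-product loop; a minimal sketch:

```python
def digit_product(n):
    """Product of the base-10 digits of n."""
    product = 1
    while n:
        n, digit = divmod(n, 10)
        product *= digit
    return product

def persistence(n):
    """Multiplicative persistence: digit-product steps to a single digit."""
    steps = 0
    while n > 9:
        n = digit_product(n)
        steps += 1
    return steps

print(persistence(28), persistence(88))   # 2 3
```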

You might guess there’s no upper limit on persistence, that there can be numbers with persistence of 100 or 1000 or 1,000,000 or whatever; but actually the conjecture is that no number has persistence higher than 11. The smallest number with persistence 11 is 277,777,788,888,899:

277777788888899, 4996238671872, 438939648, 4478976, 338688, 27648, 2688, 768, 336, 54, 20, 0

Obviously the persistence doesn’t depend on the order in which the digits occur, so for instance 998,888,887,777,772 also has persistence 11. So does 27,777,772,228,888,899, where an 8 has been replaced by three 2s. Or by replacing all the 8s, and all the 9s by two 3s, you get numbers like 22,222,222,222,222,222,223,333,777,777, again with persistence 11. And of course adding a 1 digit doesn’t change anything, so 111,122,222,222,222,222,222,223,333,777,777 has persistence 11 too.

But no one has ever found a number with greater persistence. Why not?

Well, there’s a zero trap. Any number > 9 with a 0 digit in it has persistence 1. Any number that has no zero in it but does have a 5 and an even digit has persistence 2, because the product of its digits ends in 0. And even if a number has no zero and either no 5 or no even digit, the same must be true of the product of its digits, and of the product of the digits of that product, and so on, for eleven steps in order for there to be a twelfth step. And when you’re dealing with numbers up around 277,777,788,888,899, the likelihood of that gets vanishingly small.

But how would you check if it’s *really* vanishing? The naive thing to do is to check all numbers up through, say, 10^{40}, but that would take, ah, a while. (What’s your computer’s clock speed? The age of the Universe is 4.3 × 10^{26} nanoseconds…). Checking all the *n-*digit numbers would take around ten times longer than the *n–1-*digit ones, and that gets intractable pretty fast.

But as we just noticed, there’s no point in checking numbers containing a zero. And up around 277,777,788,888,899, a lot of numbers do. You can also skip any number containing a 5 and an even digit. Numbers containing a 1 are equivalent to a smaller number without a 1, so no point in bothering with them either. And of course, there’s no point in checking a number whose digits are a permutation of a number you’ve already checked.

In fact, suppose you don’t worry about the 5-and-even check. Then you can just look at numbers containing only digits > 1 in nondecreasing order. Like this:

2, 3, 4, 5, 6, 7, 8, 9, 22, 23, 24, 25, 26, 27, 28, 29, 33, 34, 35… , 77, 78, 79, 88, 89, 99, 222, 223, 224…

There are 8 valid single digit numbers. For 2-digit numbers there are 8 valid first digits, but the second digit must not be less than the first, so there are 8 numbers starting with 2, 7 starting with 3, 6 starting with 4, and so on: 8+7+6+5+4+3+2+1 = 36. That’s the 8th triangular number. For three digits, the number of valid numbers is similarly the 8th tetrahedral number, and generally for *n* digits, the 8th *n*-simplex number:

C(n + 7, 7) = (n + 1)(n + 2)⋯(n + 7)/7!

That increases pretty slowly with *n*. You have to check 8 out of 10 1-digit numbers, but only 36 out of 90 2-digit numbers and 120 out of 900 3-digit numbers, and by the time you’re up to 40-digit numbers, you only have to check 62,891,499 of them! If you do skip over numbers with a 5 and an even digit, that drops even further to only 9,378,299 numbers. For 41 digits the number is 10,749,914, so instead of ten times as many numbers to check in that decade, there’s only about 15% more. A Python script can blow pretty easily through 50 or more digits in a fairly short time.
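That candidate generation is exactly combinations with replacement of the digits 2 through 9, and the counts above follow from math.comb; a sketch:

```python
from itertools import combinations_with_replacement
from math import comb

def candidates(n_digits):
    """Nondecreasing digit strings over 2..9: one representative per
    multiset of digits, with 0s and 1s excluded."""
    return combinations_with_replacement("23456789", n_digits)

print(sum(1 for _ in candidates(2)))   # 36 of the 90 2-digit numbers
print(sum(1 for _ in candidates(3)))   # 120 of the 900 3-digit numbers
print(comb(40 + 7, 7))                 # 62,891,499 40-digit candidates
```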

What you find is this: For valid (nondecreasing, all digits > 1) 15 digit numbers only one, 277,777,788,888,899, has persistence 11. For 16 digits, there’s only 2,247,777,778,888,899. For 17 digits there are two: 22,227,777,778,888,899 and 27,777,789,999,999,999. The first of these is a trivial modification of the 15-digit number, with the same digit product; the second has a different digit product, but the digit product of its digit product is the same as for the 15-digit number; it goes 27777789999999999, 937638166841712, 438939648, 4478976, 338688, 27648, 2688, 768, 336, 54, 20, 0.

Up to 29 digits there are only two valid numbers per decade with persistence 11, based on those two 17 digit numbers. The longest valid variant of 277,777,788,888,899 is 22,222,222,222,222,222,223,333,777,777 as noted above, and the other also has variants up to 29 digits. From 30 digits on up the maximum persistence is 10, with variants of a single number that peter out at 36 digits. For 37 through 42 digits maximum persistence is 8; for 43 through 44, 6; for 45 through 58, 5, and that’s all I’ve got so far.

The sequence of the persistences of the counting numbers is A031346 in OEIS, and the smallest number with persistence *n* is A003001. Per comments there, Martin Gardner wrote about the subject (of course), and there are results implying there are no numbers with persistence greater than 11 through 10^{20585}.

That’s in base 10. In binary, every number > 1 has persistence 1. I’ll leave the other bases to you.
