## Emergence

One of the most miraculous phenomena in the universe is, unarguably, the existence of biological life. Whilst evolution nicely accounts for the gradual progression from the earliest archaebacteria to the vast array of multicellular creatures capable of astounding feats of creativity, it does not explain how life originated ab initio. The origin of life, or abiogenesis, remains something of a mystery, but it is definitely the most impressive example of emergence: how complex structures can form naturally from the uncoordinated interactions of simpler agents.

Of course, we cannot hope to simulate anything like abiogenesis on our primitive computers, which struggle with even determining how a single protein folds into its tertiary structure. However, a simple model seems to demonstrate the emergence of rather complicated structures and interactions (including logic gates and feedback loops) from random initial conditions over reasonably short timescales (about 24 hours on a modern computer).

### The environment

We operate in a cellular automaton developed by Mirek Wojtowicz in March 1999. The world is a two-dimensional square grid of cells (potentially infinite, although the experiments described here used a bounded 480-by-480 grid and a 4096-by-4096 flat torus), each of which can interact with its eight Moore neighbours.

Each cell has a state (either 0, 1, 2 or 3), which can vary over time (which is discrete) according to the following rules:

• If a cell is in state 0 at time t and has precisely two neighbours in state 1, then it changes to state 1 at time t + 1;
• a state-1 cell remains in state 1 if it has between three and five state-1 neighbours, and otherwise decays into state 2;
• a state-2 cell decays into state 3, irrespective of its surroundings;
• a state-3 cell decays into state 0, irrespective of its surroundings.
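These transition rules fit in a few lines of code. The sketch below is a minimal NumPy version of my own (the function name `step` and the toroidal wrap-around are assumptions, matching the torus used later in Experiment 2):

```python
import numpy as np

def step(grid):
    """One generation of the 4-state rule described above.

    States: 0 = empty, 1 = alive, 2 and 3 = decaying.
    Assumes a toroidal (wrap-around) grid.
    """
    # Count state-1 Moore neighbours by summing the eight shifted copies.
    live = (grid == 1).astype(int)
    neighbours = sum(np.roll(np.roll(live, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    new = np.zeros_like(grid)
    # Birth: an empty cell with exactly two live neighbours becomes live.
    new[(grid == 0) & (neighbours == 2)] = 1
    # Survival: a live cell with three to five live neighbours stays live...
    new[(grid == 1) & (neighbours >= 3) & (neighbours <= 5)] = 1
    # ...and otherwise decays to state 2.
    new[(grid == 1) & ((neighbours < 3) | (neighbours > 5))] = 2
    # Decaying cells march on to state 3 and then back to state 0.
    new[grid == 2] = 3
    new[grid == 3] = 0
    return new
```

Iterating `step` on a random initial grid reproduces the experiments below.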

### Experiment 1: Bounded 480-by-480 grid

I initially seeded a 480-by-480 grid with completely random noise. After several generations, slight order began to emerge from the chaos; specifically, most of the cells had subsided into state 0, leaving a few small localisations scattered sporadically in an otherwise empty universe. Some of these were able to grow rapidly, filling the space with a bedlam of colliding pulses. Eventually, after many thousands of generations, this settled into the periodic behaviour shown below:

The main features are static state-1 ‘islands’, many of which have oscillating active boundaries. A few of these boundaries produce periodic emissions (or ‘spaceships’). Where two streams of spaceships collide, varied interactions are possible; the animation above shows a naturally-occurring Rube Goldberg machine of interacting spaceship streams.

Compare this with similar experiments in Conway’s Game of Life, where no glider guns have been observed to occur naturally from random soup due to their complexity and fragility. As such, this 4-state cellular automaton (known as Star Wars) is a much better demonstration of emergence.

### Experiment 2: Toroidal 4096-by-4096 grid

I decided that a 480-by-480 bounded grid was too small to exhibit much complexity, so my next experiment was an overnight run on a 4096-by-4096 torus. I wanted to see richer structures than before, and indeed I was rewarded by the emergence of a couple of stable feedback loops, one of which is capable of storing information:

In particular, the chain of five pulses is capable of propagating itself on a passive background of uniformly-spaced isolated pulses; this forms a rudimentary delay-line memory. The other stable feedback loop I found behaves more like a pseudo-random number generator. See if you can find it in the picture below (and indeed, tell me if you find anything else exciting).

Observe that the islands are much larger, and some of them occasionally contain lakes, at least one of which also contains a sub-island. The archipelago is also punctuated by vast swathes of vacuum, sparsely scattered with crosses (some of which function as impulse reflectors).

Extrapolating, there appears to be a sufficient amount of interaction that a particularly large random grid could lead to a large, sprawling computer capable of modifying its own internal wiring, much like the B-type machines proposed by Turing. A relevant quotation is this one of Conway’s. Unfortunately, I can’t seem to remember where I found it, so I’ll have to paraphrase instead:

I can imagine that if you have a large room full of circuitry, and connect the inputs and outputs totally randomly and arbitrarily, then the resulting complex would probably be capable of universal computation.

### Other applications

Several years ago, a certain Tosik realised that this same cellular automaton could be used as the basis of an amazingly simple Atari-style ‘shooter’ game, where everything apart from the player and the goal is governed by the basic 4-state rule. Here’s a representative sample video provided by Tosik:

Considerably more recently, Katsunobu Imai (who, inter alia, was the second person to discover a glider in a cellular automaton on a Penrose tiling!) programmed an implementation of Panic Shooter in Codea, which allows it to be executed on platforms such as smartphones. More information can be found on his Google+ page.


This is another particularly satisfying one to solve.

XGJDGSZUIDSSDDDOURPMHDNOUZJMTBMVNAFQTVBISIBSUIDS
HOHNGHOUDHFQTHOUGFOTNAFSEJDMEFFODSZUFCCZHUEPSLBM
JOSFHQBKEPLBHOUGBUHTDVDKJEDBMCVSONUONSNDVBMJCFZO

Good luck. As usual, the password is in lower-case and contains no punctuation. Also, it is precisely twelve letters in length.

## Gears

If I have seen further it is by standing on the shoulders of giants.

This memorable quotation from Sir Isaac Newton, it has been speculated, was a subtle derogatory reference to the physical stature of his rival, Robert Hooke. Many derivative quotations have emerged from it, my favourite being this intensely humorous one of Hal Abelson:

If I have not seen as far as others, it is because there were giants standing on my shoulders.

Anyway, one of the more ubiquitous immortalisations of the original quotation is on the milled edge of a two-pound coin, which bears the inscription ‘standing on the shoulders of giants’. Incidentally, the milled edge was itself an invention of the same Sir Isaac Newton, which he incorporated into coins whilst Warden of the Royal Mint to prevent the entirely dishonourable practice of ‘clipping’, where fraudulent scoundrels would shave off a thin layer of gold from coins.

The innermost annulus, surrounding a chariot wheel and bounded by a design reminiscent of a printed circuit board, features a ring of interlocking gears of various types, all of which mesh together. Do you notice anything bizarre about them?

A quick enumeration confirms that there are precisely nineteen gears in this closed loop. If you were to rotate them, adjacent gears would turn in opposite directions, which is of course impossible for an odd number of gears in a planar loop. Several people complained about this, for instance here.
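The parity argument can be checked mechanically: assign alternating rotation directions around the loop and test whether the loop closes consistently. A throwaway sketch (the function name is my own):

```python
def planar_gear_loop_consistent(n):
    """In a planar closed loop of n meshing gears, adjacent gears must
    counter-rotate, so the directions alternate +1, -1, +1, ...
    The loop closes consistently iff gear n-1 and gear 0 also
    counter-rotate, i.e. iff n is even."""
    directions = [(-1) ** i for i in range(n)]
    return directions[-1] != directions[0]
```

In particular, `planar_gear_loop_consistent(19)` is false, confirming that the coin’s ring of gears cannot turn in the plane.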

At the Archimedeans’ annual dinner, Joseph Myers happened to mention in passing that this ‘would not work unless the gears were arranged in a Möbius strip’. If they were, then an odd number of gears is actually required: the lack of orientability means that a clockwise-rotating gear becomes an anticlockwise-rotating gear when moved around the strip, and we can indeed have nineteen perfectly rotating gears. Conveniently, I still have the gear-drawing code from my Antikythera mechanism, so we can see nineteen interlocking gears in action:

Michael Trott has gone two steps further. Firstly, he noticed that the gear chain can double-cover a Möbius strip such that the gears are Hopf-linked together to form the first iteration of Antoine’s necklace:

Secondly, Trott proceeded to bend the resulting chain into a trefoil knot:

### Earliest examples of gears?

Intricate mechanical gears have existed since antiquity and beyond. I already mentioned the Antikythera mechanism, which astounded archaeologists by predating any other known machine of comparable complexity by several centuries. An earlier example of gears is the Chinese south-pointing chariot, which uses a simple system of cogs to ensure that a pointer continually points south, even as the chariot itself turns. This facilitated navigation before the magnetic compass had been developed.

However, researchers at the Department of Zoology (University of Cambridge) recently discovered that the leaf-hopping insect Issus has a pair of interlocking gears used to synchronise leg motions to within 30 microseconds of each other, far beyond what is achievable by the nervous system. This, of course, is profoundly older than either the Antikythera mechanism or the south-pointing chariot, although admittedly far simpler than either.

Somewhat curiously, the gears are only present in the juvenile Issus, rather than its adult form.


## Schönhage-Strassen algorithm

Interestingly, the first article on cp4space was concerned with Fourier series and pathological functions. Assuming you haven’t read it (which is plausible, since it preceded the early-2013 popularity surge), I suppose I should start by re-introducing them.

Fourier reasoned that periodic functions such as the sound waves produced by musical instruments could be expressed as superpositions of simple sinusoidal waves of different frequencies. As a basic example, successive approximations to the square wave are summarised in this animated GIF:

As a slight aside, I’m always slightly wary about including animated GIFs in my articles, since the last one ended up on the website Tumblr (home to such delights as Sherlock Holmes doffing his scarf) and featuring some rather interesting tags:

What if we want to apply a similar treatment to aperiodic functions? It transpires that this is possible, as long as we allow a continuum of frequencies instead of a discrete series. In this case, the summation is replaced with integration, and we obtain the beautiful Fourier transform:

$\hat{f}(\xi) = \displaystyle\int_{-\infty}^{\infty} f(x) \, e^{-2 \pi i x \xi} \, \mathrm{d}x$

The inverse Fourier transform is obtained by deleting the minus sign in the above expression. Interestingly, the four-fold composition of the Fourier transform operator is the identity ($\mathcal{F}^4 = \mathrm{id}$), which led people to consider the concept of a fractional Fourier transform by an arbitrary angle (where the original Fourier transform corresponds to a 90° rotation in time-frequency space), a useful tool for noise reduction in signal processing.
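The four-fold identity is easy to verify numerically for the discrete transform, provided we use the unitary normalisation (NumPy’s `norm="ortho"` option); applying the transform twice reverses the input, and four times recovers it exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)

F = lambda v: np.fft.fft(v, norm="ortho")   # unitary DFT

# Two applications reverse the signal (index 0 stays fixed)...
assert np.allclose(F(F(x)), x[-np.arange(16)])
# ...and four applications give back the original: F^4 = id.
assert np.allclose(F(F(F(F(x)))), x)
```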

Anyway, the Fourier transform has some nice properties. For instance, if we view functions from $\mathbb{R}$ to $\mathbb{C}$ as elements of an infinite-dimensional complex vector space, Fourier transforms are linear operators. This suggests another idea: what if, instead of an infinite-dimensional complex vector space, we consider an n-dimensional one instead?

### Discrete Fourier transform

Since we’re now living in a finite-dimensional vector space $\mathbb{C}^n$, linear transformations are represented by matrices. The matrix representing the discrete Fourier transform has entries $DFT_{i j} = \dfrac{1}{\sqrt{n}} \zeta^{i j}$, where ζ is a principal nth root of unity.

We also have the beautiful relationship between pointwise multiplication and sequence convolution of vectors:

• $\mathrm{DFT}(x \ast y) = \sqrt{n} \, \mathrm{DFT}(x) \odot \mathrm{DFT}(y)$, where $\ast$ denotes (cyclic) convolution and $\odot$ pointwise multiplication; the factor of $\sqrt{n}$ is an artefact of the unitary $\frac{1}{\sqrt{n}}$ normalisation, and disappears if that is dropped.

This will become useful later. Note also that there is nothing special about the complex numbers in this context: we can replace them with another ring (such as the integers under modular arithmetic). If we do so, we obtain the number-theoretic transform (NTT), which has the nice property that it can be implemented on a computer with exact integer arithmetic instead of all of that tedious fiddling around with complex numbers.
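As a concrete sketch of the NTT and the convolution identity in exact arithmetic: modulo the prime 337, the element 85 is a primitive 8th root of unity, so length-8 integer sequences can be transformed with no floating point at all. (The naive O(n²) transform below is purely illustrative; real implementations use the FFT recursion, and all names here are my own.)

```python
P, N, OMEGA = 337, 8, 85   # 85 is a primitive 8th root of unity mod 337

def ntt(x, root):
    """Naive O(n^2) number-theoretic transform: X_k = sum_j x_j root^(jk) mod P."""
    return [sum(xj * pow(root, j * k, P) for j, xj in enumerate(x)) % P
            for k in range(N)]

def intt(X):
    """Inverse transform: use root^(-1) and divide by N (both mod P)."""
    inv_n = pow(N, -1, P)
    return [(inv_n * v) % P for v in ntt(X, pow(OMEGA, -1, P))]

def convolve(x, y):
    """Cyclic convolution via pointwise multiplication in the NTT domain."""
    return intt([a * b % P for a, b in zip(ntt(x, OMEGA), ntt(y, OMEGA))])

# Multiplying 1 + 2X + 3X^2 + 4X^3 by 5 + 6X + 7X^2 + 8X^3 (zero-padded so the
# cyclic wrap-around doesn't bite, and all coefficients stay below 337):
print(convolve([1, 2, 3, 4, 0, 0, 0, 0], [5, 6, 7, 8, 0, 0, 0, 0]))
# → [5, 16, 34, 60, 61, 52, 32, 0]
```

Since everything is done modulo a prime, the answer is exact, with no rounding error to worry about.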

Now, convolution is a rather common operation, which is used (amongst other things) for multiplying two polynomials:

$(a_0 + a_1 X + a_2 X^2 + \cdots)(b_0 + b_1 X + b_2 X^2 + \cdots) = a_0 b_0 + (a_1 b_0 + a_0 b_1) X + (a_2 b_0 + a_1 b_1 + a_0 b_2) X^2 + \cdots$

Strictly speaking, the convolution in the context of the discrete Fourier transform is cyclic convolution, since the sequences ‘wrap around’. However, this only produces minor annoyances that do not affect the asymptotic analysis of…

### …the Schönhage-Strassen algorithm

Basically, the ordinary algorithm for multiplying integers treats them as polynomials in some number (the base) and convolves them. But using the number-theoretic transform, we can reduce convolution (the expensive task) to pointwise multiplication, which in turn can be performed recursively using this algorithm. If we carefully choose the base at each stage (say by chopping the original N-bit numbers into about sqrt(N) blocks), then we get a recursion depth of log log N and an asymptotic running time of O(N log N log log N).
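The reduction itself can be sketched in a few lines: chop each integer into base-$2^{16}$ blocks, convolve the two block sequences, and propagate carries. The toy below convolves naively; Schönhage-Strassen would perform that step with a number-theoretic transform and recurse on the pointwise products. All names are my own:

```python
def multiply(a, b, block_bits=16):
    """Toy integer multiplication by convolving base-2^block_bits digits.
    The convolution here is naive O(n^2); replacing it with an NTT (and
    recursing on the pointwise multiplications) yields Schonhage-Strassen."""
    base = 1 << block_bits

    def blocks(n):
        out = []
        while n:
            out.append(n & (base - 1))
            n >>= block_bits
        return out or [0]

    xs, ys = blocks(a), blocks(b)
    # Acyclic convolution of the two digit sequences.
    conv = [0] * (len(xs) + len(ys) - 1)
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            conv[i + j] += x * y
    # Carry propagation turns the convolution back into an integer.
    result, carry = 0, 0
    for k, c in enumerate(conv):
        total = c + carry
        result += (total % base) << (k * block_bits)
        carry = total // base
    return result + (carry << (len(conv) * block_bits))

assert multiply(123456789, 987654321) == 123456789 * 987654321
```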

A slightly more sophisticated variant, Fürer’s algorithm, reduces the running time even further to O(N log N 2^log*(N)), where log* is the iterated logarithm function. However, for all practical applications, it is faster to use Schönhage-Strassen (with Fürer only overtaking for astronomically huge numbers).

Wait… did I say practical applications?

Surprisingly, multiplying huge numbers does indeed have practical applications. Although the numbers used in RSA cryptography are sufficiently small that the Karatsuba multiplication algorithm is better, testing Mersenne primes using the Lucas-Lehmer test requires multiplying integers with millions of digits (safely within the realm of optimality of Schönhage-Strassen).

## Cipher 58: Regarding knot theory

Sorry about the slight delay in posting this cipher. There’s plenty of background contextual information, all of which will be revealed once you’ve solved it:

TYFXTGXQYGTUYIIUHIFQDIXTDBQVITGEO
QVTYRLOEITLYQOINBULWJLYLQOBXQPUIT
GDEPDITIEITLYGTBXUVWTVDISUDGVTPUS

This cipher happened to have a nice property, which is a complete coincidence. The original title of the cipher was supposed to be ‘vexed’, hence the filename.

Hopefully you’ll have more luck with this one than its immediate predecessor, which remains unsolved after a week.


## Soddy’s hexlet

Suppose we have three semi-transparent light blue spheres, S1, S2 and S3, which are all externally tangent to each other. Then, we have a series of metallic green spheres, Γ0, Γ1, Γ2, Γ3, Γ4, Γ5, such that each one touches the previous one and the three light blue spheres. It is a remarkable fact that they complete a chain (Γ5 touches Γ0), irrespective of the original three light blue spheres. Of course, there is a one-parameter family of possible starting locations for Γ0, as visualised below:

There is quite a simple explanation for this fact. Recall that we can append a point at infinity to $\mathbb{R}^3$ to obtain a topological 3-sphere, and we can give an exact mapping to the unit 3-sphere in 4-space by means of stereographic projection. Then, the family of projective transformations of 4-space which fix the unit 3-sphere map planes to planes, and therefore 2-spheres on the unit 3-sphere (intersections of planes with the unit 3-sphere) to 2-spheres.

In other words, we obtain a group of conformal transformations mapping spheres to spheres (or planes). Now, if we send the tangency point of two of the blue spheres to infinity, those two blue spheres will become parallel planes sandwiching a third blue sphere, as in the following situation:

But all spheres that are mutually tangent to both planes must necessarily have the same radius, and taking a cross-section halfway between the two planes reduces this to a very simple problem: how many unit-radius circles can simultaneously touch another unit-radius circle? The answer, of course, is six, as epitomised by the hexagonal packing of circles.
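The cross-section claim is a one-line computation: place six unit circles with centres at multiples of 60° on a circle of radius 2 around the central unit circle; adjacent centres are then at distance $2 \cdot 2 \sin 30° = 2$, so each pair of neighbours is exactly tangent. A quick numerical check:

```python
import math

# Centres of six unit circles around a central unit circle at the origin.
centres = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]
for k in range(6):
    (x1, y1), (x2, y2) = centres[k], centres[(k + 1) % 6]
    d = math.hypot(x1 - x2, y1 - y2)
    # Unit circles are tangent exactly when their centres are distance 2 apart.
    assert abs(d - 2) < 1e-12
```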

At this point, I had a realisation: we can add extra layers of spheres!

And reapply the transformation of (real!) projective 4-space to give a less trivial example:

Note that for any orbit of metallic green spheres, the envelope is a transformed torus, or Dupin cyclide. The orbit can be classified as elliptic (if the envelope is bounded), parabolic (if it touches the point at infinity), or hyperbolic (if the envelope contains the point at infinity). Now, when we have more than six spheres (as in the example above), we have multiple orbits, which may be of different types.

The spheres on parabolic orbits grow infinitely large before shrinking again. The hyperbolic orbits are even worse, with spheres temporarily turning inside-out as they engulf the point at infinity! The following animation shows all three types in some crazy uncivilised ménage à trois: