Generalised Riemann Hypothesis

You are probably familiar with the Riemann hypothesis. This concerns the behaviour of the Riemann zeta function, which is defined on the complex plane by analytic continuation of the following series:

\zeta(s) := \sum\limits_{n=1}^{\infty} n^{-s}

The behaviour of the zeroes of the function outside the strip 0 < \Re(s) < 1 is well-known: they are precisely at the even negative integers. The Riemann hypothesis states that within this strip, the only zeroes have real part exactly equal to ½. It is customary to include, at this point, an entirely gratuitous plot of the real part of the zeta function near the origin:

[Plot: the real part of the Riemann zeta function near the origin]

Whilst the ordinary Riemann hypothesis is interesting enough for the Clay Mathematics Institute to offer a prize of 10^6 USD for its resolution, the generalised Riemann hypothesis has even more far-reaching implications. It states that for every Dirichlet L-function, all of the zeroes within the critical strip have real part exactly equal to ½. Since the Riemann zeta function is the simplest example of a Dirichlet L-function, the ordinary Riemann hypothesis is subsumed by the generalised Riemann hypothesis.

What is a Dirichlet L-function?

A Dirichlet L-function is specified by a positive integer k and a function \chi : \mathbb{Z} \rightarrow \mathbb{C} (a Dirichlet character modulo k) satisfying the following conditions:

  • χ(n) = 0 if and only if gcd(n, k) ≠ 1;
  • Periodicity: χ(n + k) = χ(n) for all integers n;
  • Complete multiplicativity: χ(m) χ(n) = χ(mn) for all integers m, n.

Given such a Dirichlet character χ (there are finitely many choices, namely φ(k), for any given k), the corresponding L-function is defined as:

L_{k, \chi}(s) := \sum\limits_{n=1}^{\infty} \chi(n) n^{-s}

When k = 1, the unique Dirichlet character is the constant function 1, whence we recover the original Riemann hypothesis.
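As a concrete example, here is a minimal Python sketch (my own illustration, not part of the original post; the function names are invented) which implements the non-principal character modulo k = 4 and truncates its L-series. At s = 1 the series is the Leibniz series, so the partial sums should approach π/4 ≈ 0.785398.

def chi_mod4(n):
    """The non-principal Dirichlet character modulo 4."""
    if n % 2 == 0:
        return 0                    # chi(n) = 0 whenever gcd(n, 4) != 1
    return 1 if n % 4 == 1 else -1  # periodic with period 4, completely multiplicative

def L_truncated(chi, s, terms=10**6):
    """Partial sum of the Dirichlet series sum_{n >= 1} chi(n) / n^s."""
    return sum(chi(n) * n ** (-s) for n in range(1, terms + 1))

print(L_truncated(chi_mod4, 1.0))   # ~ 0.785398, i.e. pi/4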

Knottedness is in NP, modulo GRH

This is the title of a very interesting paper by Greg Kuperberg. It has been proved that any diagram of the unknot can be disentangled into a simple closed loop using a number of Reidemeister moves polynomial in the number of crossings of the original diagram. It follows that unknottedness is in NP, as such a polynomially-short disentangling sequence is a polynomial-time-checkable certificate of unknottedness.

Kuperberg’s paper proves that unknottedness is in co-NP, i.e. that knottedness is in NP. However, the proof relies on the unproven generalised Riemann hypothesis in a very interesting way. Specifically, if the fundamental group of the complement of the knot is non-Abelian, as is the case for any non-trivial knot, then it has a non-Abelian representation into SU(2). If the generalised Riemann hypothesis holds, it is possible to find a reasonably small prime p for which there is an analogous non-Abelian representation by 2-by-2 matrices with entries modulo p; the images of the generators under this representation form a certificate of knottedness. It then suffices to check that these matrices do not all commute, and that they satisfy the relations of the knot group presentation.
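To give a flavour of what checking such a certificate involves, here is a toy Python sketch (entirely my own illustration, far simpler than the certificates in Kuperberg's paper): a non-Abelian representation of the trefoil knot group ⟨x, y | xyx = yxy⟩ by 2-by-2 matrices modulo p = 5, verified by checking the single relation and non-commutativity.

p = 5

def mat_mul(A, B):
    """Multiply two 2x2 matrices, reducing entries modulo p."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)] for i in range(2)]

x = [[1, 1], [0, 1]]
y = [[1, 0], [p - 1, 1]]   # i.e. [[1, 0], [-1, 1]] reduced modulo p

# The images of the generators must satisfy the relation of the presentation...
assert mat_mul(mat_mul(x, y), x) == mat_mul(mat_mul(y, x), y)
# ...and must not all commute, witnessing that the representation is non-Abelian.
assert mat_mul(x, y) != mat_mul(y, x)
print("valid non-Abelian representation modulo", p)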

ab + bc + ca

Which positive integers cannot be expressed in the form ab + bc + ca, where a, b, c are positive integers? There are either 18 or 19 such numbers, the first 18 of which are enumerated in A025052:

1, 2, 4, 6, 10, 18, 22, 30, 42, 58, 70, 78, 102, 130, 190, 210, 330, 462

There may be a 19th such number, but it must be greater than 10^11, and it can only exist if the generalised Riemann hypothesis is false. In particular, Jonathan Borwein proved that the existence of a 19th number would require some Dirichlet L-function to have a Siegel zero, which would contradict GRH.
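The finite search for small exceptions is easy to reproduce. The following Python sketch (my own; it rearranges n = ab + bc + ca as c = (n − ab)/(a + b) with a ≤ b ≤ c) recovers the eighteen known exceptions below 1000:

def representable(n):
    """Decide whether n = a*b + b*c + c*a for some positive integers a <= b <= c."""
    a = 1
    while 3 * a * a <= n:               # a <= b <= c forces n >= 3a^2
        b = a
        while 2 * a * b + b * b <= n:   # smallest value attainable for this (a, b) is at c = b
            rem = n - a * b             # need c = (n - ab)/(a + b) to be an integer with c >= b
            if rem % (a + b) == 0 and rem // (a + b) >= b:
                return True
            b += 1
        a += 1
    return False

print([n for n in range(1, 1000) if not representable(n)])
# [1, 2, 4, 6, 10, 18, 22, 30, 42, 58, 70, 78, 102, 130, 190, 210, 330, 462]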

Deterministic Miller-Rabin

The fastest known unconditional deterministic algorithm for determining whether a number n is prime is AKS, which takes time O(\log(n)^{6 + \varepsilon}). This is, unfortunately, rather slow.

The Miller-Rabin probabilistic test determines whether a number is prime in time O(\log(n)^{2 + \varepsilon}). If the number is prime, it always correctly reports that it is prime. If it is composite, however, a single run can incorrectly report 'prime' with probability at most 1/4. Running the test with many randomly-chosen bases therefore drives the error probability low enough for practical purposes (e.g. in cryptography).

Assuming GRH, it can be shown that running Miller-Rabin with every base up to 2 \log(n)^2 is sufficient to guarantee primality, and therefore that there exists a deterministic algorithm with running time O(\log(n)^{4 + \varepsilon}), noticeably faster than AKS. The Tonelli-Shanks algorithm for finding square roots modulo a prime can likewise be made deterministic and polynomial-time using the same idea.
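Here is a minimal Python sketch of the resulting conditional test (my own illustration; the function name is invented, and the base bound 2(ln n)^2 is the one quoted above):

import math

def is_prime_grh(n):
    """Miller-Rabin with every base a <= 2*(ln n)^2; deterministic if GRH holds."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:                   # write n - 1 = d * 2^r with d odd
        d //= 2
        r += 1
    limit = min(n - 1, int(2 * math.log(n) ** 2))
    for a in range(2, limit + 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                # a is a witness to compositeness
    return True

print([n for n in range(2, 60) if is_prime_grh(n)])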

Artin’s conjecture on primitive roots

This states that every integer a which is neither −1 nor a perfect square is a primitive root modulo infinitely many primes. In 1967, Hooley proved this conditionally on the generalised Riemann hypothesis, establishing moreover that the set of primes for which a is a primitive root has positive density (among the set of all primes).


AlphaGo Zero

Something amazing has happened. A couple of years ago, we closely followed the progress of AlphaGo, the distributed DeepMind algorithm which defeated Lee Sedol in four out of five games of the ancient board game Go. That algorithm has since been defeated by a variant, AlphaGo Zero, which is considerably stronger: after 20 days of training, it was able to win 100 out of 100 games against the version of AlphaGo which played against Sedol. After a further 20 days of training, it won 89 out of 100 games against a stronger instantiation of AlphaGo, namely the one which defeated the world champion Ke Jie.

However, its superiority over the previous algorithm isn’t the most interesting aspect. What makes it interesting is that, unlike AlphaGo which both trained on human games and made use of hardcoded features (such as ‘liberties’), AlphaGo Zero is remarkably simple:

  • The algorithm has no external inputs, learning only from games against itself;
  • The input to the neural network comprises just 17 binary 19 × 19 feature planes: one encoding the parity of the turn, the indicator functions of white stones for the last 8 positions, and the indicator functions of black stones for the last 8 positions. (Storing the immediate history is a necessity because of the ko rule.)
  • Instead of separate policy and value networks, the algorithm uses only one neural network;
  • Monte Carlo rollouts are ditched in favour of a feedback loop where the tree search evolves together with the network.

Read the Nature paper for more details. AlphaGo Zero was trained on just four tensor processing units (TPUs), which are fast hardware implementations of fixed-point limited-precision linear algebra. This is much more efficient (but less numerically precise) than a GPU, which is in turn much more efficient (but less flexible) than a CPU.
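The feedback loop mentioned above works by turning the tree search into the teacher: the visit counts π produced by the search become the policy target, and the eventual game outcome z becomes the value target. The network parameters are then adjusted to minimise the combined loss reported in the Nature paper; here is a minimal numpy transcription (the function name and the value of the regularisation constant are my own):

import numpy as np

def alphazero_loss(p, v, pi, z, theta, c=1e-4):
    """(z - v)^2  -  pi . log(p)  +  c * ||theta||^2  for a single training position.

    p     : the network's predicted move probabilities (1-D array summing to 1)
    v     : the network's scalar value prediction in [-1, 1]
    pi    : the MCTS visit-count distribution over moves (the training target)
    z     : the eventual game result from the current player's perspective (+1 or -1)
    theta : flattened network weights, used only for the L2 regularisation term
    """
    value_loss = (z - v) ** 2
    policy_loss = -np.dot(pi, np.log(p + 1e-12))   # cross-entropy against the search policy
    regularisation = c * np.dot(theta, theta)
    return value_loss + policy_loss + regularisation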


Undecidability of contractibility

In the last post, we discussed Voevodsky’s homotopy type theory. One of the important notions is whether a space is contractible, this being the base case for the inductive definition of homotopy levels. It turns out that the algorithmic undecidability of whether a finite simplicial complex is contractible has not been discussed much; the result is instead buried in Appendix A of an arXiv paper. The strongest version of the result is:

  • It is undecidable to determine whether a finite 4-dimensional simplicial complex is contractible or not.

An analogous result about fundamental groups, also worth mentioning, follows immediately from the undecidability of the triviality problem for finitely-presented groups (itself a consequence of the undecidability of the word problem):

  • It is undecidable to determine whether a finite 2-dimensional simplicial complex is simply-connected or not.

(Proof: consider the presentation complex of any finitely-presented group G: take a single point, attach a loop for every generator, and attach a disc for every relator. This complex has fundamental group isomorphic to G. After two successive applications of barycentric subdivision, we obtain an abstract simplicial complex homotopy-equivalent to the original presentation complex; since it is undecidable whether a finitely-presented group is trivial, it is undecidable whether the resulting 2-complex is simply-connected.)

Clearly, the ‘2’ in the second theorem cannot be lowered: a 1-dimensional simplicial complex is a graph, and it is easy to check whether or not it is a tree (which is equivalent to both contractibility and simply-connectedness). I believe it is unknown whether the ‘4’ in the first theorem can be lowered to 3 or even to 2.


Homotopy Type Theory

2017 has been an unfortunate year for Fields medallists. Maryam Mirzakhani, who won the Fields medal in 2014, passed away at the untimely age of 40. Two days ago, she was joined by Vladimir Voevodsky, 2002 Fields medallist, who was awarded the prize for introducing a homotopy theory on schemes, uniting algebraic topology and algebraic geometry.

He also defined the ‘univalence axiom’ which underpins the recent area of homotopy type theory. This is an especially elegant approach to the foundations of mathematics, with several particularly desirable properties.

Problems with set theory

The most common foundation of mathematics is first-order ZFC set theory, where the objects are sets in the von Neumann universe. It is extremely powerful and expressive: all naturally-occurring mathematics seems to be readily formalised in ZFC. It does, however, involve somewhat arbitrary and convoluted definitions of relatively simple concepts such as the reals. Let us, for instance, consider how the real numbers are encoded in set theory:

A real is encoded as a Dedekind cut (down-set) of rationals, which are themselves encoded as equivalence classes (under scalar multiplication) of ordered pairs of an integer and a non-zero natural number, where an integer is encoded as an equivalence class (under scalar addition) of ordered pairs of natural numbers, where a natural number is a hereditarily finite transitive well-ordered set, and an ordered pair (a, b) is encoded as the pair {{a, b}, {a}}.

Whilst this can be abstracted away, it does expose some awkward concepts: there is nothing to stop you from asking whether a particular real contains some other set, even though we like to be able to think of reals as atomic objects with no substructure.

Another issue is that when we use ZFC in practice, we also have to operate within first-order predicate logic. So, whilst ZFC is considered a one-sorted theory (by contrast with NBG set theory which is two-sorted, having both sets and classes), our language is two-sorted: we have both Booleans and sets, which is why we need both predicate-symbols and function-symbols.

Type theory

In type theory, we have a many-sorted world where every object must have a type. When we informally write \forall x \in \mathbb{R} . \phi(x), we are thinking type-theoretically, that ‘the proposition φ(x) holds for all reals x’. In set theory, this is actually shorthand for \forall x . ((x \in \mathbb{R}) \implies \phi(x)), namely ‘for every object x, if x is a real then the proposition φ(x) holds’.

This idea of types should be familiar from most programming languages. In C, for instance, ‘double’ and ‘unsigned int’ are types. It is syntactically impossible to declare a variable such as x without stating its type; the same applies in type theory. Functional programming languages such as Haskell have more sophisticated type systems in which functions are ‘first-class citizens’ of the theory: given types A and B, there is a type (A → B) of functions mapping objects of type A to objects of type B.

These parallels between type theory and computation are not superficial; they are part of a profound equivalence called the Curry-Howard isomorphism.

Types can themselves be objects of other types (in Python, this is familiar from the idea of a metaclass), and indeed every type belongs to some other type. Similar to the construction of the von Neumann hierarchy of sets, we have a sequence of nested universes, such that every type belongs to some universe. This principle, together with a handful of rules for building new types from old (dependent product types, dependent sum types, equality types, inductive types, etc.), is the basis for intuitionistic type theory. Voevodsky extended this with a further axiom, the univalence axiom, to obtain the richer homotopy type theory, which is sufficiently expressive to be used as a foundation for all mathematics.

Homotopy type theory

In type theory, it is convenient to think of types as sets and objects as elements. Homotopy type theory endows this with more structure, viewing types as spaces and objects as points within the space. Type equivalence is exactly the same notion as homotopy equivalence. More interestingly, given two terms a and b belonging to some type T, the equality type (a = b) is defined as the space of paths between a and b. From this, there is an inductive hierarchy of homotopy levels:

  • A type is homotopy level 0 if it is contractible.
  • A type is homotopy level n+1 if, for every pair of points a and b, the type (a = b) is homotopy level n.
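In the Σ/Π notation of the HoTT book, this inductive definition can be written out explicitly (my own transcription, using the numbering convention of the list above):

\mathrm{isContr}(A) :\equiv \sum\limits_{a : A} \prod\limits_{x : A} (a = x)

\mathrm{isLevel}(0, A) :\equiv \mathrm{isContr}(A), \qquad \mathrm{isLevel}(n+1, A) :\equiv \prod\limits_{a, b : A} \mathrm{isLevel}(n, a = b)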

They are nested, so anything belonging to one homotopy level automatically belongs to all greater homotopy levels (proof: the path spaces between two points in a contractible space are contractible, so level 1 contains level 0; the rest follows inductively). The first few homotopy levels have intuitive descriptions:

  • Homotopy level 0 consists of contractible spaces, which are all equivalent types;
  • Homotopy level 1 consists of empty and contractible spaces, called mere propositions, which can be thought of as ‘false’ and ‘true’ respectively;
  • Homotopy level 2 consists of disjoint unions of contractible spaces, which can be viewed as sets (the elements of which are identified with the connected components);
  • Homotopy level 3 consists of types with trivial homotopy in dimensions 2 and higher, which are groupoids;
  • Homotopy level 4 consists of types with trivial homotopy in dimensions 3 and higher, which are 2-groupoids, and so on.

More generally, n-groupoids correspond to types of homotopy level n+2. The ‘sets’ (types in homotopy level 2) are more akin to category-theoretic sets than set-theoretic sets; there is no notion of sets containing other sets. I find it particularly beautiful how the familiar concepts of Booleans, sets and groupoids just ‘fall out’ in the form of strata in this inductive definition.

Voevodsky’s univalence property states that for two types A and B, the type of equivalences between them is equivalent to their equality type: (A ≈ B) ≈ (A = B). Note that the equality type (A = B) is the space of paths between A and B in the ambient universe U, so this is actually a statement about the universe U. A universe with this property is described as univalent, and the univalence axiom states that all universes are univalent.
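Formally, there is always a canonical map sending paths in the universe to equivalences, and the univalence axiom asserts that this map is itself an equivalence (standard notation from the HoTT book, not a quotation from the post):

\mathrm{idtoeqv} : (A =_{\mathcal{U}} B) \rightarrow (A \simeq B) \text{ is an equivalence for all } A, B : \mathcal{U}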

Further reading

I definitely recommend reading the HoTT book — it is a well-written exposition of a beautiful theory.

[P. S. My partner and I are attending a debate this evening with Randall Munroe of xkcd fame. Watch this (cp4)space…]


Wallis Workshop

We’re pleased to announce a sequel to the Ada Lovelace Hackathon. Specifically, we’re organising a workshop on Sunday 27th November 2016 to commemorate the 400th birthday of John Wallis, Cambridge mathematician and Parliament’s chief cryptographer. His developments include early analysis (infinitesimals and convergence) and the eponymous Wallis’ Product which expresses π as an infinite product expansion:

\frac{\pi}{2} = \prod\limits_{n=1}^{\infty} \frac{2n}{2n-1} \cdot \frac{2n}{2n+1}
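(A throwaway Python check of the convergence, not part of the original post: doubling the partial product over the first hundred thousand factors gives a value close to π.)

from math import pi

def wallis_partial(m):
    """Partial Wallis product over the first m factors."""
    product = 1.0
    for n in range(1, m + 1):
        product *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)
    return product

print(2 * wallis_partial(10**5), pi)   # the product converges to pi/2, so its double approaches pi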

He also analysed a reciprocal structure designed by Leonardo da Vinci, resolving forces to show that it is stable without any external support. This is considered to be the first example of structural analysis, and involved solving linear equations in the 25 variables illustrated in the following diagram:

[Diagram: the 25 force variables in da Vinci's reciprocal frame]

He also coined the lemniscate symbol for infinity, ∞, and wrote a book on grammar. As such, broad themes of the event will include infinity, non-standard analysis, reciprocal structures, and cryptography. More suggestions are available on the website.

There will also be a baking competition, as before, and fancy dress is encouraged. If you need reminding, here is one of the entries from last year:

[Photo: an edible Tantrix set from last year's baking competition]

We’ll also endeavour to actually nominate a winner this time…


Is Craig Wright?

Today there has been an explosion of media interest in the claim by Craig Wright that he is the identity behind the pseudonymous creator of Bitcoin, Satoshi Nakamoto. However, his blog post is rather suspicious, as it contains various misconceptions that one would not expect from an expert in the field, let alone the originator of Bitcoin.

“A process known as combinatorics”

Several paragraphs into the post, he begins discussing technical details and making various errors:

Wright writes: The SHA256 algorithm provides for a maximum message size of 2^128 – 1 bits of information whilst returning 32 bytes or 256 bits as an output value.

No, the SHA-256 algorithm only supports inputs of length up to 2^{64} - 1 bits. Specifically, there is a preprocessing stage where extra bits are appended to the end of the input as follows:

  • Input data (n bits)
  • Padding: a single ‘1’ bit followed by zero bits, 512 − ((n + 64) mod 512) bits in total
  • Binary representation of n (64 bits)

This ensures that the prepared input has a length divisible by 512, which is necessary for the mixing algorithm which operates on blocks of 512 bits.
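For concreteness, here is a small Python sketch of that preprocessing step for byte-aligned messages (my own illustration of the rule described above, not code taken from the post):

def sha256_pad(message: bytes) -> bytes:
    """Append a '1' bit, then zero bits, then the 64-bit big-endian length of the message,
    so that the total length becomes a multiple of 512 bits (64 bytes)."""
    n = len(message) * 8                       # original length in bits
    padded = message + b'\x80'                 # the mandatory '1' bit, followed by seven zero bits
    while (len(padded) * 8 + 64) % 512 != 0:   # zero-pad until 64 bits short of a block boundary
        padded += b'\x00'
    return padded + n.to_bytes(8, 'big')       # append the 64-bit representation of n

assert len(sha256_pad(b'abc')) % 64 == 0       # 'abc' pads to exactly one 512-bit block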

Anyway, this error is just about excusable since it pertains to the obscured internal details of an algorithm which people often use simply as a ‘black box’ for generating cryptographically secure message digests. The next sentence was much more concerning, since it suggests a serious mathematical misconception:

Wright writes: The number of possible messages that can be input into the SHA256 hash function totals (2^128 – 1)! possible input values ranging in size from 0 bits through to the maximal acceptable range that we noted above.

This does not even remotely resemble the correct number of possible inputs, which is 2^{2^{64}} - 1. The use of a factorial to count the number of binary strings should immediately trigger alarm bells in anyone with a rudimentary undergraduate-level understanding of discrete mathematics.
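Indeed, the inputs are simply the binary strings of length at most 2^{64} - 1, and summing the geometric series gives:

\sum\limits_{k=0}^{2^{64}-1} 2^k = 2^{2^{64}} - 1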

Wright's post then continues with the rather amusing deviation:

Wright writes: In determining the possible range of collisions that would be available on average, we have a binomial coefficient (n choose k) that determines the permutations through a process known as combinatorics [1].

The reference is to a paper by Lovasz, a great mathematician who would be either amused or offended to hear the field of combinatorics described as ‘a process‘. Moreover, binomial coefficients count subsets, rather than ‘determine permutations’, and most professional cryptanalysts would struggle to decipher the phrase ‘possible range of collisions that would be available on average’.

Nitpicking the code

In one of the images on Craig Wright’s blog post, there is a screenshot of Notepad displaying a putative shell script for verifying an ECDSA signature. With the comments removed, the code reads as follows:

filename=$1
signature=$2
publickey=$3

if [[ $# -lt 3 ]] ; then
 echo "Usage: verify <file> <signature> <public_key>"
 exit 1
fi

base64 --decode $signiture > /tmp/$filename.sig
openssl dgst --verify $publickey -signature /tmp/$filename.sig $filename
rm /tmp/$filename.sig

Note that the antepenultimate line says ‘signiture’ instead of ‘signature’, so the script doesn’t do what is claimed: it reads the signature from the shell variable ‘signiture’ (most likely empty) rather than from the command-line argument. Hence, if you pre-populate that variable with a signature generated by your own key, and supply the matching public key rather than Satoshi’s, you can cause the test to pass!

Whether this was indeed a malicious trick to convince spectators (or economists, as the case may be) or simply an innocent typo is unclear. But in the latter case, the script clearly was never tested; otherwise, the error would have been quickly detected. Either way, this seems somewhat suspicious.

“I’m Satoshi, and so’s my wife”

This is by no means the first time someone has claimed to be Satoshi. However, on this occasion there is the added complication that two well-known Bitcoin developers, Jon Matonis and Gavin Andresen, attest that Wright is indeed right. This rules out the possibility that Wright is merely trying to seek attention, and instead suggests the following dichotomy:

  1. Matonis and Andresen genuinely believe that Satoshi is Wright.
  2. The triumvirate have ulterior motives for perpetuating a ruse.

Several explanations for (2) have been proposed. In particular, there is a rift amongst the Bitcoin developers between the ‘big-blockians’ and the ‘little-blockians’ (to parody Jonathan Swift), which I shall attempt to summarise here. Firstly, note that block size is essentially a measure of how many transactions can be handled in a 10-minute interval.

The little-blockians want the block sizes of Bitcoin to remain small, and thus for it to be a pure decentralised currency that can be used by anyone with a computer. This would maintain it as a peer-to-peer currency, but would limit its growth.

By comparison, the big-blockians believe Bitcoin should grow into a universal currency, expanding the block size to accommodate absolutely every transaction. The downside is that this is beyond the computational limits of domestic machines, meaning that Bitcoin could only be regulated by banks, governments, and other large organisations, moving it away from a libertarian idyll into something more akin to a regular currency.

Matonis, Andresen and Wright are all big-blockians. Having the esteemed creator Satoshi on their side would help their argument, and it is entirely plausible that there are several large organisations who would benefit from having more control over the regulation of Bitcoin.

Whether these motives are indeed the case, rather than mere speculation, will require further evidence. But as the evidence stands, I would not like to bet any money, cryptographic or otherwise, on the validity of Wright’s claim…


March miscellany

Several rather interesting developments have occurred this month. In inverse chronological order, they are summarised below:

E8 and Λ24 lattices actually are optimal

The lattices E8 and Λ24 give rise to lattice packings of congruent spheres in 8 and 24 dimensions, respectively. They possess many exceptional properties, including being highly symmetrical and efficient sphere packings. Indeed, it was shown that, among all lattice* packings of spheres in 8 and 24 dimensions, E8 and Λ24 have the highest density. Moreover, it was known that no packing whatsoever could be more than microscopically more efficient than either of these exceptional lattices.

*that is to say, ones where the group of translational symmetries acts transitively on the set of spheres.

Now, it has actually been proved that E8 and Λ24 have the highest density among all sphere packings, not just lattice packings. Maryna Viazovska made a breakthrough which allowed her to prove the optimality of the 8-dimensional packing; subsequently, she and a group of collaborators refined the approach to apply to the 24-dimensional Leech lattice.

More on this story here.

AlphaGo defeats Lee Sedol in a 4-1 victory

A team of researchers at Google DeepMind developed a program to play the board game Go at the professional level. It was pitted against a 9-dan professional Go player (that is to say, one of the very best Go players in the world), Lee Sedol, winning four of the five matches. Lee Sedol was able to obtain a single victory, his fourth game, by observing that AlphaGo was less strong when faced with unexpected moves (despite having made several of its own, described by spectators as ‘creative’!).

The algorithm ran on a distributed network of 1202 CPUs and 176 GPUs. It uses a combination of brute-force tree searching (like conventional chess algorithms), Monte Carlo simulations (where many plausible futures are simulated and evaluated) and pattern recognition (using convolutional neural networks). These convolutional neural networks are much larger and deeper than the one I implemented on the Analytical Engine, with 192 feature-maps in the first convolutional layer. In particular, AlphaGo used:

  • A value network to evaluate how favourable a position appears to be;
  • A supervised policy network to learn how a human would play;
  • A reinforcement policy network to decide how it should play;
  • A fast rollout policy network which is a simpler, cruder, and faster counterpart of the above (used for the Monte Carlo rollouts).

The supervised policy network was trained on a huge database of human matches, learning to recognise spatial patterns with its convolutional networks. After quickly exhausting the database, it had no option but to play lots of instances of itself via reinforcement learning, using competition and natural selection to evolve itself. In doing so, AlphaGo had effectively played more matches than any human had ever seen, growing increasingly powerful over a period of several months (from when it played the 2-dan European Go champion Fan Hui).

A much more detailed description of the algorithm is in the Nature paper.

Unexpected pattern discovered in Conway’s Game of Life

It transpires that, despite 46 years of interest and hundreds of thousands of CPU-hours of Monte Carlo searching, there was a piece of low-hanging fruit in an area no-one had thought to explore. This was the copperhead, a spaceship with the unusually slow speed of c/10:

[Image: the copperhead, a c/10 spaceship]

Alexey Nigin wrote an excellent article describing its discovery. Within days of the article’s publication, there were further breakthroughs: Simon Ekström discovered that the spaceship could be extended to produce a profusion of sparks:

[Image: the copperhead with Ekström's sparky tagalong]

This extension is capable of perturbing passing objects, such as reflecting gliders. A convoy of these creatures can consequently catalyse more complex interactions, such as duplicating and recirculating gliders to yield a never-ending output stream; the following example was constructed by Matthias Merzenich:

[Image: Matthias Merzenich's copperhead-based rake]

As you can see, there is a steady stream of NE-directed gliders escaping to the right.
