In 2012–13, the Institute’s School of Mathematics hosted a special year devoted to the topic “Univalent Foundations of Mathematics,” organized by Steve Awodey, Professor at Carnegie Mellon University, Thierry Coquand, Professor at the University of Gothenburg, and Vladimir Voevodsky, Professor in the School of Mathematics. This research program was centered on developing new foundations of mathematics that are well suited to the use of computerized proof assistants as an aid in formalizing mathematics. Such proof systems can be used to verify the correctness of individual mathematical proofs and can also allow a community of mathematicians to build shared, searchable libraries of formalized definitions, theorems, and proofs, facilitating the large-scale formalization of mathematics.
The possibility of such computational tools is based ultimately on the idea of logical foundations of mathematics, a philosophically fascinating development that, since its beginnings in the nineteenth century, has nevertheless had little practical influence on everyday mathematics. But advances in computer formalizations in the last decade have increased the practical utility of logical foundations. Univalent foundations is the next step in this development: a new foundation based on a logical system called type theory that is well suited both to human mathematical practice and to computer formalization. Moreover, it draws on new insights from homotopy theory, the branch of mathematics devoted to the study of continuous deformations in space. This is a particularly surprising source, since the field is generally seen as far removed from foundations.
For the special year, a team of thirty-two leading researchers in the areas of computer science, logic, and mathematics from around the world was assembled at IAS to develop this new foundation of mathematics. An ambitious program of weekly seminars, lectures, working groups, tutorials, and other activities led to lively interaction and a vigorous exchange of ideas, skills, and viewpoints, resulting in numerous collaborations among the participants. The program’s goals were realized beyond expectations, producing a powerful and flexible new foundational system called homotopy type theory, based on earlier systems of type theory that were originally intended for constructive mathematics and computer programming, and augmented by new principles motivated by homotopy theory. In addition to a body of theoretical results pertaining to the foundations, a substantial amount of mathematics was developed in this new system, including basic results in homotopy theory, higher category theory, set theory, and the beginnings of real analysis. In parallel, efforts were focused on the development of new and existing computer proof assistants for the formalization of these and future results. An extensive library of code was established on which future work can be built, and formalized proofs of significant results in homotopy theory were given, such as computing many homotopy groups of spheres. In a remarkable, collaborative effort, a textbook was also written by the special-year participants, developing both the foundations and various specialized areas of mathematics in the new logical system. This book serves not only as a record of the results of the special year but also as a useful introduction for future researchers entering the field.
_____________________
The idea of logical foundations of mathematics goes back at least to Gottlob Frege’s Begriffsschrift of 1879, which set out to show that arithmetic could be deduced entirely from logic in a way that was “free of gaps” and thus required no leaps of intuition. Frege’s system of logical deductions, which looked a bit like complicated wiring diagrams, was soon discovered by Bertrand Russell to contain a contradiction: a disastrous logical inconsistency, which had the effect that mathematicians otherwise unconcerned with logic began to pay increased attention to logical precision. Russell himself proposed a solution based on what he called the theory of types, and Ernst Zermelo proposed another based on axioms for Georg Cantor’s set theory. During the 1920s and ’30s, mathematicians as prominent as David Hilbert, Hermann Weyl (Professor, 1933–51), and John von Neumann (Professor, 1933–57) worked on the foundations of mathematics, culminating in the famous discoveries of Kurt Gödel (Member, beginning in 1933; Professor, 1953–76) about the limits of logical formalization. Gödel showed that a complete and consistent logical formalization of even arithmetic was mathematically impossible; moreover, this result agreed with the practical experience of many mathematicians, who found the formalization of even the most basic mathematical theories impractically complicated and irrelevantly elaborate. Russell’s system arrived at the result that 1 + 1 = 2 only after 362 pages of laborious formal deductions!
By the 1950s, a consensus had formed in mathematics that the program of logical foundations, while perhaps interesting in principle or as a branch of mathematics in its own right, was going to be of little use to the general practice of mathematics as a whole. This view was only reinforced by the results of Gödel and Paul Cohen (Member, 1959–61, 67) on the formal undecidability of the famous continuum hypothesis. Much subsequent research in logical theory was related instead to the new field of computation; indeed, it was the early work in logic that had led to the development of the modern computer, and subsequent advances in theoretical and practical computation were closely tied to logical research.
But with recent advances in the speed and capacity of modern computers and theoretical advances in their programming has come a remarkable and somewhat ironic possibility: the use of computers to aid in the nearly forgotten program of formalization of mathematics. For what was once too complicated or tedious to be done by a human could now become the job of a computer. With this advance comes the very real potential that logical foundations, in the form of computer formalizations, could finally become a practical aid to the mathematician’s everyday work, through verifying the correctness of definitions and proofs, organizing large-scale theories, making use of libraries of formalized results, and facilitating collaboration on the development of a unified system of formalized mathematics. Gödel may have shown that mathematics cannot be entirely formalized in principle, but in practice there are still great benefits to be had from formalization when sophisticated computer systems can be brought to bear.
This new conception of foundations of mathematics, so closely tied to the use of computers to provide the guarantee of formal rigor and to aid in handling the explosion of complexity, seems so natural that future historians of mathematics may well wonder how Frege and Russell could have invented the idea of formal systems of foundations before there were any computers to run them on. Nor is it a coincidence that foundations work so well in combination with computers; as already stated, the modern computer was essentially born from the early research in logic, and its modern programs and operating systems are still closely related to the logical systems from which they evolved. In a sense, modern computers can be said to “run on logic.”
This is the starting point of the univalent foundations program: the idea that the time is now ripe for the development of computer proof assistants based on new foundations of mathematics. But it is not only the advance in technology that has made this program feasible today; recent breakthroughs in logical theory also play an important role. One such advance was the discovery of a connection between the system of type theory used by some modern proof systems and the mathematical field of homotopy theory, which usually requires a high level of mathematical abstraction to even get off the ground. This connection permits direct, logical formalizations of some important concepts having broad application in various fields of mathematics. An important example is the fundamental notion of a set, which in univalent foundations turns out to be definable from more primitive concepts, as was recently discovered by Voevodsky. A related discovery, also due to Voevodsky, is the univalence axiom, which states, roughly, that isomorphic mathematical objects may be identified. This powerful new principle of reasoning, which agrees with everyday mathematical practice but is not part of the traditional set-theoretic foundation, is fully compatible with the homotopical view, and indeed strengthens it, while greatly simplifying the use of type theory as a system of foundations. Finally, the discovery of direct, logical descriptions of some basic mathematical spaces, such as the n-dimensional spheres Sⁿ, and various other fundamental constructions, has led to a system that is both comprehensive and powerful, while still being closely tied to implementation on a computer.
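To give the flavor of Voevodsky’s definition, a set can be characterized as a type in which any two identifications of the same pair of elements are themselves identified. The following minimal sketch, in the syntax of the Lean 4 proof assistant, transcribes this definition; the name isSet is ours, and Lean’s own proof-irrelevant equality makes the condition automatic, whereas in univalent foundations it is a substantive property singling out the “discrete” types.

    universe u

    -- A type is a set when any two identifications p and q of the same
    -- elements x and y are themselves identified. (Illustrative only: in
    -- Lean 4 this holds trivially by proof irrelevance; in univalent
    -- foundations it is a genuine restriction.)
    def isSet (A : Type u) : Prop :=
      ∀ (x y : A) (p q : x = y), p = q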
References
Steve Awodey, Alvaro Pelayo, and Michael A. Warren, “Voevodsky’s Univalence Axiom in Homotopy Type Theory,” Notices of the American Mathematical Society (forthcoming).
George Dyson, “Julian Bigelow: Bridging Abstract Logic and Practical Machines,” The Institute Letter (Spring 2013): 14–15.
Thomas C. Hales, “Formal Proof,” Notices of the American Mathematical Society 55, no. 11 (2008): 1370–80.
The Univalent Foundations Program at the Institute for Advanced Study, Homotopy Type Theory: Univalent Foundations of Mathematics (2013). http://homotopytypetheory.org/book/.
____________________
Homotopy Type Theory Permits “Logic of Homotopy Types”
The following text is excerpted from the introduction to the book Homotopy Type Theory: Univalent Foundations of Mathematics, written jointly by the participants of the special year on univalent foundations, which may be obtained at http://homotopytypetheory.org/book/.
Homotopy type theory is a new branch of mathematics that combines aspects of several different fields in a surprising way. It is based on a recently discovered connection between homotopy theory and type theory. Homotopy theory is an outgrowth of algebraic topology and homological algebra, with relationships to higher category theory, while type theory is a branch of mathematical logic and theoretical computer science. Although the connections between the two are currently the focus of intense investigation, it is increasingly clear that they are just the beginning of a subject that will take more time and more hard work to fully understand. It touches on topics as seemingly distant as the homotopy groups of spheres, the algorithms for type checking, and the definition of weak ∞-groupoids.
Homotopy type theory also brings new ideas into the very foundation of mathematics. On the one hand, there is Voevodsky’s subtle and beautiful univalence axiom. The univalence axiom implies, in particular, that isomorphic structures can be identified, a principle that mathematicians have been happily using on workdays, despite its incompatibility with the “official” doctrines of conventional foundations. On the other hand, we have higher inductive types, which provide direct, logical descriptions of some of the basic spaces and constructions of homotopy theory: spheres, cylinders, truncations, localizations, etc. When combined in homotopy type theory, these innovations permit an entirely new kind of “logic of homotopy types.”
This suggests a new conception of foundations of mathematics with intrinsic homotopical content, an “invariant” conception of the objects of mathematics, and convenient machine implementations, which can serve as a practical aid to the working mathematician. This is the univalent foundations program. The present book is intended as a first systematic exposition of the basics of univalent foundations and a collection of examples of this new style of reasoning—but without requiring the reader to know or learn any formal logic or to use any computer proof assistant.
We emphasize that homotopy type theory is a young field, and univalent foundations is very much a work in progress. This book should be regarded as a “snapshot” of the state of the field at the time it was written, rather than a polished exposition of an established edifice. As we will discuss briefly later, there are many aspects of homotopy type theory that are not yet fully understood—but as of this writing, its broad outlines seem clear enough. The eventual theory will probably not look exactly like the one described in this book, but it will certainly be at least as capable and powerful; given the results presented here, we therefore believe that univalent foundations could eventually replace set theory as the “implicit foundation” for the unformalized mathematics done by most mathematicians.
The basic idea of the univalence axiom can be explained as follows. In type theory, one can have a universe U, the terms of which are themselves types, A : U, etc. Those types that are terms of U are commonly called small types. Like any type, U has an identity type Id_U, which expresses the identity relation A = B between small types. Thinking of types as spaces, U is a space, the points of which are spaces; to understand its identity type, we must ask, what is a path between spaces in U? The univalence axiom says that such paths correspond to homotopy equivalences A ≃ B. At the risk of oversimplifying, we can state this succinctly as follows:
Univalence Axiom: (A = B) ≃ (A ≃ B).
In other words, identity is equivalent to equivalence. In particular, one may say that “equivalent types are identical.”
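As a rough transcription, the two maps involved can be written down in Lean 4 syntax. This is a sketch under explicit assumptions: Equiv' and idToEquiv are our own illustrative definitions, not Lean’s library, and Lean’s kernel does not actually validate univalence, so the final axiom records only the shape of the principle rather than a workable foundation.

    universe u

    -- An equivalence of types: a map with a two-sided inverse.
    structure Equiv' (A B : Type u) where
      toFun    : A → B
      invFun   : B → A
      leftInv  : ∀ a, invFun (toFun a) = a
      rightInv : ∀ b, toFun (invFun b) = b

    -- Every identification of types yields an equivalence, by transport.
    def idToEquiv {A B : Type u} : A = B → Equiv' A B
      | rfl => ⟨id, id, fun _ => rfl, fun _ => rfl⟩

    -- Univalence, in shape only: every equivalence conversely yields an
    -- identification. The full axiom says idToEquiv is itself an
    -- equivalence; Lean’s proof-irrelevant equality cannot express that
    -- faithfully, so this sketch is illustrative rather than usable.
    axiom ua {A B : Type u} : Equiv' A B → A = B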
From the homotopical point of view, univalence implies that spaces of the same homotopy type are connected by a path in the universe U, in accord with the intuition of a classifying space for (small) spaces. From the logical point of view, however, it is a radically new idea: it says that isomorphic things can be identified! Mathematicians are of course used to identifying isomorphic structures in practice, but they generally do so by “abuse of notation,” or some other informal device, knowing that the objects involved are not “really” identical. But in this new foundational scheme, such structures can be formally identified, in the logical sense that every property or construction involving one also applies to the other. Indeed, the identification is now made explicit and properties and constructions can be systematically transported along it. Moreover, the different ways in which such identifications may be made themselves form a structure that one can (and should!) take into account.
By using the identification of paths with identities in homotopy type theory, inductively defined spaces such as the circle and the spheres can be characterized in type theory by “induction principles,” entirely analogous to those for classical examples such as the natural numbers and the disjoint union. The resulting higher inductive types give a direct “logical” way to reason about familiar spaces such as spheres, which (in combination with univalence) can be used to perform familiar arguments from homotopy theory, such as calculating homotopy groups of spheres, in a purely formal way. The resulting proofs are a marriage of classical homotopy-theoretic ideas with classical type-theoretic ones, yielding new insight into both disciplines.
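For instance, the circle S¹ is generated by a single point together with a single loop at that point. The following Lean 4-style sketch axiomatizes this presentation; the names are ours, Lean has no native higher inductive types, and its proof-irrelevant equality would collapse the loop, so the axioms below only illustrate the shape of the constructors and the recursion principle.

    -- The circle: one point constructor and one path constructor.
    -- (Hypothetical axiomatization for illustration; not a faithful
    -- higher inductive type in Lean 4.)
    axiom S1 : Type
    axiom S1.base : S1
    axiom S1.loop : S1.base = S1.base

    -- Recursion principle: to map the circle into a type C, give a
    -- point c : C together with a loop l : c = c based at it.
    axiom S1.rec {C : Type} (c : C) (l : c = c) : S1 → C

    -- Computation rule on the point constructor; in homotopy type theory
    -- there is a further rule sending loop to l.
    axiom S1.rec_base {C : Type} (c : C) (l : c = c) : S1.rec c l S1.base = c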
Moreover, this is only the tip of the iceberg: many abstract constructions from homotopy theory, such as homotopy colimits, suspensions, Postnikov towers, localization, completion, and spectrification, can also be expressed as higher inductive types. . . . Thus, the combination of univalence and higher inductive types suggests the possibility of a revolution, of sorts, in the practice of homotopy theory.
One difficulty often encountered by the classical mathematician when faced with learning about type theory is that it is usually presented as a fully or partially formalized deductive system. This style, which is very useful for proof-theoretic investigations, is not particularly convenient for use in applied, informal reasoning. Nor is it even familiar to most working mathematicians, even those who might be interested in foundations of mathematics. One objective of the present work is to develop an informal style of doing mathematics in univalent foundations that is at once rigorous and precise, but is also closer to the language and style of presentation of everyday mathematics.
In present-day mathematics, one usually constructs and reasons about mathematical objects in a way that could in principle, one presumes, be formalized in a system of elementary set theory, such as ZFC (the Zermelo–Fraenkel axioms plus the axiom of choice)—at least given enough ingenuity and patience. For the most part, one does not even need to be aware of this possibility, since it largely coincides with the condition that a proof be “fully rigorous” (in the sense that all mathematicians have come to understand intuitively through education and experience). But one does need to learn to be careful about a few aspects of “informal set theory”: the use of collections too large or inchoate to be sets; the axiom of choice and its equivalents; even (for undergraduates) the method of proof by contradiction; and so on. Adopting a new foundational system such as homotopy type theory as the implicit formal basis of informal reasoning will require adjusting some of one’s instincts and practices. The present text is intended to serve as an example of this “new kind of mathematics,” which is still informal, but could now in principle be formalized in homotopy type theory, rather than ZFC, again given enough ingenuity and patience.
It is worth emphasizing that, in this new system, such formalization can have real practical benefits. The formal system of type theory is suited to computer systems and has been implemented in existing proof assistants. A proof assistant is a computer program that guides the user in the construction of a fully formal proof, allowing only valid steps of reasoning. It also provides some degree of automation, can search libraries for existing theorems, and can even extract numerical algorithms from the resulting (constructive) proofs.
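To make this concrete, here is a complete proof script in the syntax of Lean 4 (a proof assistant chosen here purely for illustration; the implementations discussed below are based on Coq). Russell’s 362-page derivation of 1 + 1 = 2 becomes a one-line statement that the kernel checks automatically:

    -- The claim 1 + 1 = 2 is verified by computation: both sides reduce
    -- to the same numeral, so reflexivity suffices as the proof.
    example : 1 + 1 = 2 := rfl

    -- An invalid step is rejected rather than accepted on faith;
    -- uncommenting the next line produces a type error at check time.
    -- example : 1 + 1 = 3 := rfl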
We believe that this aspect of the univalent foundations program distinguishes it from other approaches to foundations, potentially providing a new practical utility for the working mathematician. Indeed, proof assistants based on older type theories have already been used to formalize substantial mathematical proofs, such as the four-color theorem and the Feit–Thompson theorem. Computer implementations of univalent foundations are presently works in progress (like the theory itself). However, even its currently available implementations (which are mostly small modifications to existing proof assistants such as Coq) have already demonstrated their worth, not only in the formalization of known proofs but in the discovery of new ones. Indeed, many of the proofs described in this book were actually first done in a fully formalized form in a proof assistant and are only now being “unformalized” for the first time—a reversal of the usual relation between formal and informal mathematics.
One can imagine a not-too-distant future when it will be possible for mathematicians to verify the correctness of their own papers by working within the system of univalent foundations, formalized in a proof assistant, and that doing so will become as natural as typesetting their own papers in TeX. (Whether this proves to be the publishers’ dream or their nightmare remains to be seen.) In principle, this could be equally true for any other foundational system, but we believe it to be more practically attainable using univalent foundations, as witnessed by the present work and its formal counterpart.