[#25] Toward a Science of Computer Architecture
By Piet Hut
Steam engines use energy to power motion, and computers use energy to power calculations. The invention of steam engines gave rise to the theory of thermodynamics, in which entropy growth describes loss of information. What will the invention of computers give rise to? Whatever it will be, we should expect it to tell us at least something about the efficiency of computing.
Thermodynamics tells us, for a given heat engine, the maximum amount of work that can be extracted. And once we know that theoretical maximum, we can determine the efficiency as the ratio of the actual amount of work obtained to the theoretical maximum.
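In symbols, and using the standard textbook form of Carnot's result (not spelled out in this post, but it makes the ratio concrete):

    \[
      \eta \;=\; \frac{W_{\rm actual}}{W_{\rm max}},
      \qquad
      W_{\rm max} \;=\; Q_h \left( 1 - \frac{T_c}{T_h} \right),
    \]

where Q_h is the heat drawn from the hot reservoir at temperature T_h, and T_c is the temperature of the cold reservoir; an engine that actually reaches the Carnot limit has efficiency 1 by this definition.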
Interestingly, the historical order of discovery was the other way around: from the discovery of efficiency to the discovery of thermodynamics. By 1775, the Scottish engineer James Watt had made steam engines far more practical than earlier designs by adding a separate condenser. Half a century later, in 1824, Sadi Carnot, a French engineer, derived the theoretical maximum amount of work that such an engine could produce. This result was then used to develop thermodynamics proper, when Lord Kelvin in 1851 published an early formulation of the second law of thermodynamics, which in turn gave rise to the concept of entropy, introduced by the German physicist Rudolf Clausius in 1865.
Clearly, great ideas take a long time to develop, then as well as now. And equally clearly, great insight is not necessarily related to what is now called "metrics" in academic publishing, such as the number of publications one produces, or the number of citations those publications attract. Carnot's lifetime output was a single publication, but a very important one, and his "h-index" was therefore 0 or 1, depending on whether he received any citations at all; in any case far too low to get tenure these days. Sadly, Carnot died eight years after his publication, at the young age of 36, though not before receiving one obscure citation in the Revue Encyclopédique, so at least his h-index now stands at h=1.
Although computers have been around for well over half a century, we still don't have a general theory of computing that is anywhere near as universal as thermodynamics is for heat engines. This is perhaps not surprising, given that it took ninety years to get from James Watt's steam engine to Clausius' introduction of the notion of entropy. And if history is a guide, we are still waiting for the equivalent of a modern Carnot to give us a measure for the efficiency of computers, preparing the ground for a universal theory to be built.
One candidate for such a successor to Carnot is Jun Makino, a Japanese astrophysicist whose work has focused not only on theory and simulations in stellar dynamics, but also on broad aspects of software development and even the design of special-purpose hardware. Jun and I have collaborated on many projects since we first met in 1986, when he was still a Master's student.
The way we met was rather unusual. A year earlier I had become a professor at the Institute for Advanced Study, and that summer I organized a conference on "The Use of Supercomputers in Stellar Dynamics". I had asked Jun's supervisor, Daiichiro Sugimoto, to give one of the invited talks, but at the last moment it turned out that Sugimoto was unable to come. So instead he sent this beginning graduate student whom nobody had heard of. It was Jun's first scientific presentation outside Japan, and we found it hard to understand what he was trying to convey.
However, I got a sense that there was something potentially important hidden in his talk, and when I talked with him one-on-one, it quickly became clear that he was the first in the history of N-body calculations to give a detailed analytic derivation of the relative efficiency of different integration schemes, in terms of their cost/performance ratio, with cost measured in CPU cycles and performance in integration steps.
This was a real breakthrough in the field, and something not known even by people who had been writing N-body codes for decades. I immediately invited him to stay a few weeks longer, providing him with lodging, meals, and an extended-stay airplane ticket, and by the time he left, we had completed a jointly authored paper, the first of many to follow.
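To give a flavor of what such cost/performance bookkeeping looks like, here is a toy sketch in Python (not Jun's actual 1986 analysis; the two textbook integrators, the Kepler test problem, and the use of force evaluations as a stand-in for CPU cycles are all illustrative assumptions). Each scheme is run on a two-body orbit, and we record the number of force evaluations spent against the energy error incurred after one orbit.

    # Toy comparison of two integration schemes on a two-body Kepler orbit
    # (units with G*M = 1).  "Cost" is counted in force evaluations, as a
    # stand-in for CPU cycles; "performance" is the energy error after one orbit.

    import numpy as np

    def accel(r):
        # Gravitational acceleration toward the origin for G*M = 1.
        return -r / np.linalg.norm(r)**3

    def energy(r, v):
        # Specific orbital energy: kinetic plus potential.
        return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(r)

    def leapfrog(r, v, dt, nsteps):
        # Second-order kick-drift-kick leapfrog: one force evaluation per step.
        a = accel(r)
        for _ in range(nsteps):
            v = v + 0.5 * dt * a
            r = r + dt * v
            a = accel(r)
            v = v + 0.5 * dt * a
        return r, v, nsteps + 1

    def rk4(r, v, dt, nsteps):
        # Classical fourth-order Runge-Kutta: four force evaluations per step.
        for _ in range(nsteps):
            k1r, k1v = v, accel(r)
            k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
            k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
            k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
            r = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
            v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        return r, v, 4 * nsteps

    r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # circular orbit
    e0 = energy(r0, v0)

    for name, scheme in [("leapfrog", leapfrog), ("RK4", rk4)]:
        for nsteps in (100, 1000, 10000):
            dt = 2 * np.pi / nsteps                        # one full orbit
            r, v, cost = scheme(r0, v0, dt, nsteps)
            err = abs(energy(r, v) - e0)
            print(f"{name:8s}  steps={nsteps:6d}  force evals={cost:6d}  |dE|={err:.1e}")

The interesting quantity is not the error per step but the error achieved per unit of cost; it is that kind of ratio, worked out analytically rather than empirically, that Jun's derivation provided.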
“31 years later”
Fast-forward to half a month ago: I again attended a talk by Jun Makino in Tokyo, more or less by chance (we met nearby with Michiko Fujii, associate professor at the University of Tokyo, to talk about her work in stellar dynamics, in connection with the recent gravitational wave detections). The title of his talk was "The Streamline Computer — or a science of computer architecture". I wasn't sure what to expect, but it turned out to be similar to the very first talk I heard Jun give, 31 years ago: it was also about efficiency, but on a much wider and more fundamental scale.
In his talk, Jun proposed a new measure for the efficiency of computer architectures. He started with two analogies. One was the efficiency of heat engines, as I described above. The other was the efficiency of flight, whether by birds or airplanes, in terms of minimizing drag through improved streamlining. This led him to the conclusion that computers are still in the early days of improvement in efficiency, but that at least we are getting a sense of how to move forward. And besides economic advantages, such a move may lead to a deeper theory of computing.
[Some of Jun's slides followed here, with a link to the full slide deck.]
Piet Hut is President of YHouse (where this blog is hosted), Professor of Astrophysics and Head of the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a Principal Investigator and Councilor of the Earth-Life Science Institute at the Tokyo Institute of Technology.