Some milestones in the evolution of Theoretical Computer Science
Every description of this sort is necessarily biased and reflects the personal tastes of its authors. With this disclaimer, here are some concepts and ideas that we consider milestones in the development of Theoretical Computer Science.
- The focus of the field changed from the study of computability in finite (but unbounded) time to the more practical (but mathematically subtle) study of efficient computation. The fundamental notion of NP-completeness was formulated, and its near-universal impact was gradually understood. Long-term goals, such as the P vs. NP question, were set.
- The theory of algorithms was developed, with the fundamental focus on asymptotic and worst-case analysis. Numerous techniques, of increasing mathematical sophistication, were invented to efficiently solve major computational problems.
- A variety of computational models, designed to explain and sometimes anticipate existing computer systems, were developed and studied. Among them are parallel and distributed models, asynchronous and fault-tolerant computation, and on-line algorithms with competitive analysis (a toy example of competitive analysis is sketched after this list).
- Randomness was introduced as a key tool and resource, and it revolutionized the theory of algorithms. In many cases, probabilistic algorithms and protocols can achieve goals which are impossible deterministically. In other cases they enable much more efficient solutions than deterministic ones (see the sketch after this list). Following this, a series of derandomization techniques was developed to convert probabilistic algorithms into deterministic ones in increasingly general settings.
- The emergence of the notion of one-way functions (which are "easy to compute" but "hard to invert"), together with the use of randomness, has led to the development of modern cryptography (a standard candidate one-way function is sketched after this list). This theory has provided essential ideas for secure communication and computation over networks, and has played a crucial role in the evolution of electronic commerce.
- The notion of interactive proof systems, in which knowledge is gained through interaction with untrusted parties, has been pivotal to both computational complexity theory and cryptography (a toy protocol is sketched after this list). Probabilistic proof systems, with their many variants (zero knowledge, Arthur-Merlin, multi-prover, and probabilistically checkable proofs (PCPs)), have tremendously enriched our understanding of many basic computational problems, including ones which superficially have nothing to do with randomness or interaction. As a prime example, our understanding of PCPs has led to enormous strides in our knowledge of the difficulty of finding approximate solutions to many natural optimization problems.
- The theory of pseudo-randomness has revealed intimate connections between computational difficulty and the task of derandomizing algorithms, and has brought us closer to understanding the power of randomness in various computational settings. See also the topic page of the Special Year on Computational Complexity 2000-2001.
- Complexity theory, attempting to classify problems according to their computational difficulty, has integrated many of the ideas above, and has become much more mature and intricate. There were many nontrivial successes in proving lower bounds on restricted models of computation. The failure to do so for general models (towards the P vs. NP question) has motivated the introspective field of proof complexity, which tries to quantify the resource requirements of proofs and the intrinsic difficulty of proving various mathematical statements.
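As a toy illustration of competitive analysis (our own sketch, not part of the list above), consider the classic ski-rental problem: renting skis costs 1 per day, buying costs B, and the length of the season is not known in advance. The break-even strategy, rent until day B and then buy, never pays more than about twice what an all-knowing offline algorithm would pay, i.e. it is 2-competitive.

```python
# Competitive analysis of the ski-rental problem (illustrative sketch).
# Renting costs 1 per day, buying costs buy_price; the online algorithm
# does not know in advance how many days the season will last.

def break_even_cost(num_days: int, buy_price: int) -> int:
    """Cost paid by the break-even strategy: rent for buy_price - 1 days, then buy."""
    if num_days < buy_price:
        return num_days                      # season ended before we ever bought
    return (buy_price - 1) + buy_price       # rented buy_price - 1 days, then bought

def offline_optimum(num_days: int, buy_price: int) -> int:
    """Best possible cost with full knowledge of the season length."""
    return min(num_days, buy_price)

if __name__ == "__main__":
    B = 10
    worst = max(break_even_cost(d, B) / offline_optimum(d, B) for d in range(1, 200))
    print(f"worst-case ratio observed: {worst:.2f}")   # 2 - 1/B = 1.90
```

No deterministic online strategy can beat the ratio 2 - 1/B for this problem, while randomized strategies achieve roughly e/(e-1), about 1.58; this gap is one simple instance of the power of randomness discussed above.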
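To illustrate randomness as a computational resource, here is a sketch (again ours, not from the list above) of Freivalds' algorithm: it checks whether A*B = C by testing A(Bx) = Cx for random 0/1 vectors x, using O(n^2) arithmetic per trial instead of the O(n^3) of naively recomputing the product, and it catches a wrong C with probability at least 1/2 in each trial.

```python
# Freivalds' randomized check of a matrix product (illustrative sketch).
# Each trial costs O(n^2) arithmetic; a wrong C is detected with
# probability at least 1/2 per trial, so the error drops to 2**-trials.

import random

def freivalds(A, B, C, trials=20):
    n = len(A)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]            # random 0/1 vector
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False        # the vector x witnesses that A*B != C
    return True                 # A*B == C except with probability <= 2**-trials

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    C = [[19, 22], [43, 50]]    # the true product A*B
    print(freivalds(A, B, C))   # True
    C[0][0] += 1                # corrupt one entry
    print(freivalds(A, B, C))   # False with overwhelming probability
```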
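The "easy to compute, hard to invert" asymmetry behind one-way functions can be made concrete with a standard candidate, modular exponentiation. The sketch below uses deliberately small toy parameters of our own choosing; real systems rely on much larger, carefully selected groups.

```python
# Candidate one-way function: x -> g^x mod p (discrete-log based).
# The forward direction is fast; inverting it is the discrete logarithm
# problem, for which no efficient general algorithm is known on large,
# well-chosen groups. The parameters below are toy values for illustration.

p = 2**127 - 1             # a Mersenne prime (far too small for real security)
g = 3                      # base, assumed here to generate a large subgroup
x = 123456789101112131415  # the "secret" exponent

y = pow(g, x, p)           # easy direction: fast modular exponentiation
print(y)

# Recovering x from (g, p, y) is believed to be computationally hard for
# appropriately large parameters; that conjectured hardness is what
# cryptographic constructions build on.
```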
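Finally, a toy simulation (our illustration; the graphs below are made up) of the classic interactive proof for Graph Non-Isomorphism: the verifier secretly flips a coin b, sends a randomly relabelled copy of G_b, and asks the computationally unbounded prover which of the two graphs it came from. If the graphs really are non-isomorphic the prover always answers correctly; if they are isomorphic, the prover is caught with probability 1/2 in each round, so the verifier learns the truth without ever being handed an isomorphism.

```python
# Toy interactive proof for Graph Non-Isomorphism (illustrative sketch).
# Graphs are edge lists on vertex set {0, ..., n-1}; the unbounded "prover"
# is simulated by brute force over all vertex permutations.

import random
from itertools import permutations

def relabel(edges, perm):
    """Apply a vertex relabelling to an undirected edge list."""
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def is_isomorphic_to(challenge, edges, n):
    """Prover's subroutine: brute-force isomorphism test."""
    return any(relabel(edges, dict(zip(range(n), p))) == challenge
               for p in permutations(range(n)))

def run_protocol(G0, G1, n, rounds=20):
    """Return True iff the prover convinces the verifier that G0 and G1 are non-isomorphic."""
    for _ in range(rounds):
        b = random.randint(0, 1)                        # verifier's secret coin
        perm = dict(zip(range(n), random.sample(range(n), n)))
        challenge = relabel([G0, G1][b], perm)          # random copy of G_b
        answer = 0 if is_isomorphic_to(challenge, G0, n) else 1
        if answer != b:                                 # prover caught
            return False
    return True

if __name__ == "__main__":
    n = 4
    path = [(0, 1), (1, 2), (2, 3)]        # a path on 4 vertices
    star = [(0, 1), (0, 2), (0, 3)]        # a star on 4 vertices
    print(run_protocol(path, star, n))     # True: genuinely non-isomorphic
    path2 = [(0, 2), (2, 1), (1, 3)]       # another path, just relabelled
    print(run_protocol(path, path2, n))    # almost certainly False: prover gets caught
```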