According to the conventional account, American lawyers and judges from the 1870s through the 1920s believed in “legal formalism”—the view that law is a comprehensive and logically ordered body of rules and principles and that judges mechanically deduce the correct answer in cases. In the 1920s and 1930s, the long-assumed narrative goes, the legal realists destroyed the prevailing formalist view of judging by demonstrating that law is filled with gaps, uncertainties, and inconsistent precedents; they argued that judges decide cases based upon their personal preferences and work backwards to find legal justifications for their decisions.
My professors taught me this version of events and, in turn, I have taught my students the same. This narrative is not just a quaint historical account—it structures contemporary debates about judging among legal theorists as well as quantitative research on judging by political scientists.
Modern search technology helped me stumble onto a discovery that overturned this fundamental notion about judging in the United States. When fooling around one afternoon to familiarize myself with the search mechanism of a legal database, I input the phrase “judicial legislation,” limiting the search to documents published before 1900, not expecting to find much. According to the conventional account, it was an article of faith during the formalist age that judges do not legislate—they merely interpret and apply pre-existing law.
Nearly four hundred documents were flagged by the search, a startlingly large number. But the real surprise came next. “We all know judges legislate” jumped out of the second or third document I examined, published in the 1870s. Then I came across this stunning 1881 passage: “It is useless for judges to quote a score of cases from the digest to sustain almost every sentence, when everyone knows that another score might be collected to support the opposite ruling.” This consummately realistic observation was uttered in the heart of the formalist age, when everyone purportedly believed that judges mechanically deduce answers from a logically coherent body of law.
After thirty minutes of near frenzy, checking one document after another, I suspected that the conventional narrative was flawed. Two weeks of obsessive searching later, I knew it was flat wrong. It took me a year of research in residence at the Institute to fully comprehend the events that gave rise to this false story, how it took hold, and its distorting consequences for later generations.
Throughout the so-called formalist age, it turns out, many prominent judges and jurists acknowledged that there were gaps and uncertainties in the law and that judges must sometimes make choices. The period was marked by a severe economic depression and raging social and political conflict, especially between capital and labor, conflict that played out in courts. Progressive critics castigated judges for deciding cases in a logically blinkered fashion out of excessive fealty to formalism, erecting barriers against necessary legal reforms.
This charge of blind judicial formalism was embellished by the legal realists, who were critical of courts in the 1930s, and the image was repeatedly invoked by subsequent generations to serve as the exemplar of judicial folly. Reinforced by repetition over the course of decades, the political impetus behind the original charge faded from view and the story about the formalist age became a firmly entrenched verity within our legal culture. The legal formalists and legal realists, moreover, entered the standard textbook as contrasting extremes, a pairing of opposites that painted the legal realists, incorrectly, as radical skeptics of judging.
Debates about judging in the United States have been distorted for decades by this formalist-realist antithesis: either judging involves the objective application of legal rules with no discretion (formalism) or judicial decisions are determined by the subjective preferences of individual judges (realism).
The continuing impact of this antithesis is evident during Senate confirmation hearings when judicial nominees for the Supreme Court ritually intone that they decide cases purely based upon the law, denying that their personal views have an impact. This is false—a measurable proportion of Supreme Court cases are legally open to more than one answer—but prudent to assert. The admission that personal views sometimes (inevitably) come into play in legal decisions would expose a nominee to the accusation of improper politics. Stuck in this formalist-realist divide, we oscillate from one extreme to the other.
This now-dominant formalist-realist divide, in hindsight, appears shockingly lacking in substance. It was a politically inspired story repeated innumerable times, given credibility by a string of citations to authoritative figures, resting on a wobbly, unsupported set of thin legs.
This is an unsettling image for anyone who believes, as I do, that scholars must strive to produce histories and theories that fit the facts without distortion. This is not the naïve assertion that the political views of scholars do not matter, but an insistence that this political bent be disciplined by a commitment to be true to the evidence (the same is asked of judges with respect to their legal decisions). The enterprise of knowledge production depends upon adherence to these commitments. If the standard account of the formalists and the realists is as comprehensively flawed as I believe it is, in this instance our collective construction of knowledge went spectacularly amiss.