Scientific method should be used more selectively

Friedrich Hayek, Professor of Economic Science at LSE from 1931 to 1950, won the Nobel Prize for Economic Sciences in 1974. Photo courtesy of the Library of the London School of Economics and Political Science.

By Cole Timmerwilke, Columnist

To modern ears, there is perhaps no more hallowed word than “science.” In many quarters, science is regarded with a pious reverence that borders on religious awe. Science is logical, rational and objective: the creator of the modern world. Science is the bringer of light, the mighty torrent by which the Augean stables of the former dark ages were cleansed of all manner of superstition. To question something given the imprimatur of science is not merely to expose oneself as ignorant, but to mark oneself a heretic, a blasphemer against the “truth.”

Even if you don’t look to science to give meaning to your life, you are likely still deeply impressed whenever you hear the words, “studies show that…” In arguments, that magical invocation often acts as a one-punch knockout, the loser browbeaten into submission by the weight of unassailable evidence.

This is not to disparage science per se — in its proper sphere, the scientific method is the most reliable tool we have for understanding the world. However, when misapplied or poorly conducted, it can lend an unwarranted veneer of authority to questionable results. There is a great difference between the science that proved Newton’s laws of motion and the kind that “proves” eating chocolate causes cancer. We should be less credulous about the use of studies and statistics outside of the physical sciences, and we should be much more skeptical when we hear that something is ostensibly “proven” by a report.

Many of these problems stem from what the economist Friedrich Hayek labeled the “scientistic” attitude. In his Nobel Prize lecture, “The Pretence of Knowledge,” he defined it as a “mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.”

While economics and other social sciences have adopted wholesale the methods of the “brilliantly successful physical sciences,” the two are, according to Hayek, fundamentally different enterprises. In the physical sciences, such as chemistry or physics, the factors relevant to the question at hand are likely to be observable and measurable; by contrast, economics is one of those disciplines that deals with “essentially complex phenomena.”

By this, he means that it is impossible to capture all of the information relevant to determining economic relationships. While it may be possible to isolate all of the possible pathways of a chemical reaction, no one person could observe the infinitely varied social and economic causes that affect, for instance, the unemployment rate. Furthermore, researchers tend to focus exclusively on the variables that can be measured, even though these may not be the important determinants, a habit that calls to mind the story of the drunk who loses his keys somewhere in the dark but searches for them under the streetlamp, simply because the light is best there.

Hayek’s theoretical concerns have been buttressed in recent years by work from a number of fields that deal with just this kind of essential complexity. John Ioannidis, a Stanford professor of medicine, has written about problems with empirical research in several of them. His 2005 paper, “Why Most Published Research Findings Are False,” profiled in a 2010 piece in The Atlantic, showed how researcher bias toward finding significant results, combined with the common practices of empirical research, means that a substantial share of studies reporting significant results are in fact false.

Although relying on a paper to invalidate the results of other academic studies may seem paradoxical, Ioannidis’s work was widely regarded as sound. In The Atlantic profile, Doug Altman, the director of Oxford University’s Centre for Statistics in Medicine, said, “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct.”

The human body is similarly a system of essential complexity, and medicine has suffered many of the attendant problems. Several recent books have documented them: In their 2015 book “Ending Medical Reversal,” Vinayak Prasad and Adam Cifu detailed the problem of “medical reversal,” in which a recommended medical intervention is later found to be either useless or actually harmful.

Jacob Stegenga, a Cambridge academic, elaborated on the same theme in his book “Medical Nihilism,” in which he argues that apart from a short list of “magic bullets,” most pharmaceutical interventions are ineffective, and their negative side effects are substantially understated.

The field perhaps most beset by the problems of essential complexity is psychology, which is undergoing a so-called “replication crisis.” Replication is an essential part of the scientific method, yet when independent researchers have rerun prominent experiments, many long-cherished results upon which decades of research and theory have been built have failed to hold up. Results that are taught in introductory psychology classes as gospel truth no longer seem so reliable.

All of these fields suffer from “p-hacking,” in which researchers analyze their data in as many different ways as it takes to turn up a statistically significant result, because the incentives to find novel, publishable findings are so strong. Test enough variables, subgroups and model specifications, and something will cross the conventional significance threshold by chance alone.
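To see why that matters, consider a minimal, purely illustrative simulation (not drawn from Ioannidis’s analysis or from any particular study). It assumes that when there is no real effect, a p-value is equally likely to land anywhere between 0 and 1, and it asks how often a researcher who tries 20 different tests on the same no-effect data will stumble on a “significant” one anyway.

    # Illustrative sketch only: assumes p-values are uniform when there is no real effect.
    import random

    random.seed(42)
    TRIALS = 100_000        # simulated research projects with no real effect
    TESTS_PER_TRIAL = 20    # subgroups, outcomes or model specs tried per project
    ALPHA = 0.05            # conventional significance threshold

    false_positive_projects = 0
    for _ in range(TRIALS):
        # With no real effect, each test's p-value is a random draw from [0, 1].
        p_values = [random.random() for _ in range(TESTS_PER_TRIAL)]
        if min(p_values) < ALPHA:   # report only the "best" result found
            false_positive_projects += 1

    # Prints roughly 0.64: about two-thirds of no-effect projects can still
    # report a "significant" finding, versus the nominal 5% error rate.
    print(false_positive_projects / TRIALS)

The arithmetic behind the simulation is simply 1 − 0.95²⁰ ≈ 0.64: each individual test is honest, but the practice of running twenty and reporting only the best one is not.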

Science is an exceptionally powerful tool, but it has limits. When misapplied, it creates a vulgar pride in those who think it is infallible. We should be humbler about what we know, and what we are capable of knowing.

Cole is a junior in LAS. 

[email protected]