It was recently announced that a new supply of transplantable human organs may come from growing organs from human stem cells implanted in animals. If proven effective and safe, this advance could revolutionize the treatment of many serious diseases.
Certainly the scientists who accomplished this feat deserve credit, but also the recognition that they stood on the shoulders of countless previous researchers in chemistry and molecular and cellular biology. Many earlier discoveries, some of which were not considered applicable to anything at the time, helped pave the way for this new technology, including research on what aspects of a molecule’s structure lead it to bind to other molecules and why some molecules emit light (fluoresce) while others do not.
Every groundbreaking technology we have was derived in some way – maybe three steps back, maybe thirty steps back – from a discovery that was driven either by curiosity or by research into a completely different problem. This is the value of what we call basic science: research that seeks to uncover the fundamental truths of the universe, but does not necessarily aim to solve an immediate societal or technological problem.
Basic science is the foundation for all scientific advances, from microwave ovens and smartphones to cutting-edge medical treatments. Yet scholars are increasingly asked to defend research that has no immediate, obvious application. To obtain grant funding, scientists doing even the most fundamental work must connect their research with an eventual application or relevance to the stated priorities of the funder.
For nearly twenty years, even the most basic-science-focused programs within the most basic-science-focused government agency, the National Science Foundation, have required scientists to articulate the potential “broader impacts” and benefit to society of their work.
Academic journals have also moved to an emphasis on outcomes. Decades ago, the introduction sections of scientific manuscripts were devoted to explaining the question being investigated, with little or no reference to why the research might be relevant to anyone other than a scholar. Investigations were warranted simply because there was an unanswered question or a disagreement in the literature on a particular topic.
Today’s journal articles, in contrast, feature discussions about the potential implications and applications of the topic, even when the research is squarely within the realm of basic science. Comparing academic articles from fifty years ago to now, we can see a real shift in the way scientists justify the importance of their work, even when preaching to the choir of fellow scholars who read scientific journals.
It is understandable that funders and publishers want to devote limited resources, especially those derived from taxpayer money, to research with the potential for the greatest impact. Yet doing so threatens the contribution serendipity can make to scientific discovery and its potential to lead to unanticipated benefits.
As a Professor of Chemistry at a large research institution, I witness this process every day. For example, the multibillion-dollar drug Lyrica works to ease neuropathic pain and reduce the frequency of epileptic seizures for reasons that its original developer, my colleague Professor Richard Silverman, did not expect. His design of the drug made sense from a molecular standpoint, but there was no way for him to predict, a priori, how it would behave in humans.
Despite successes like Lyrica, many stakeholders, particularly those who control the distribution of funds at federal research agencies and private foundations, have trouble believing in connections between basic science and technology that they cannot plainly see. Scientists are not fortune tellers, however, and often cannot anticipate exactly where their discoveries will lead, much less provide concrete proof of these connections by outlining the many steps between a fundamental discovery and its eventual impact.
In the late 19th century, for instance, Sir William Crookes and Karl Ferdinand Braun began experimenting with cathode rays. These streams of electrons form when a voltage is applied across electrodes in an evacuated glass tube. The curiosity and undirected tinkering of Crookes, Braun, and other scientists resulted in the discovery of the electron and the atomic nucleus, without which modern physics would not exist.
Cathode rays also led to all sorts of technologies that the first scientists studying them could never have predicted: the cathode ray tubes used in televisions and early computer monitors, x-ray and CT scan machines for medical diagnostics, and the x-ray crystallography that was essential to the discovery of DNA. Would these technologies have been developed if the scientists studying cathode rays had to justify their open-ended exploration of a phenomenon whose significance was unknown at the time?
The argument over the relative value of “basic” versus “applied” research, and how the two should inform each other, has been going on for years.1 One attempt to clarify their relationship (and counteract the Cold-War-era linear model, in which basic science feeds applied research but the two do not otherwise intersect) is a classification scheme known as Pasteur’s quadrant.
This concept was outlined in a 1997 book of the same name by political scientist Donald Stokes.2 Stokes proposed a “third mode of research” – use-inspired basic research, driven by a quest for knowledge but also by considerations about the usefulness of the research – as a more realistic and helpful view of how productive science is often done.
Stokes named this model of thinking “Pasteur’s quadrant” in honor of the famous scientist’s ability to keep an eye toward technology while producing fundamental advances of great influence in chemistry and microbiology. In the course of investigating wine fermentation, Pasteur not only conceived his most famous invention, pasteurization, but also made a discovery that would influence all of pharmaceutical chemistry thereafter: that some molecules with identical chemical compositions can have different arrangements of their atoms in space and therefore interact with their environments in completely different ways.
Considering Pasteur’s quadrant, the prominent chemist George Whitesides offered one compelling argument for planting oneself in the quadrant of use-inspired basic research: “As scientists who get our money from the public purse, we have an obligation to spend some time producing science that helps to solve problems.” Whitesides offers a caveat, though: “There are, of course, differences in opinion on what strategies for research best serve the interest of society.”3
Therein lies the rub. Nearly every scientist wants to make a difference in the world, whether this motivation is self-serving or philanthropic; a scientist who sees their work purely as a means of self-indulgence is a very rare species indeed. But there is no formula for connecting a particular line of scientific inquiry to all of its eventual benefits for society, or for weighing the hypothetical future benefits of two research projects against one another.
Is forcing scientists to choose their problems based on societal need (or justify their research as relevant after the fact) a useful strategy to simultaneously increase our understanding of the universe and translate that understanding into a better quality of life?
One answer is that a scientist will be most productive when allowed to choose how to frame the problem she is working on. Some scientists make sense of the world in terms of the most basic mechanisms by which it operates, while others understand phenomena primarily in terms of the functions and applications they produce. Both are valid intellectual perspectives and should be supported. Some scientific problems lie directly in Pasteur’s quadrant and should be attacked by those scientists who, like Pasteur, have the ability to simultaneously adopt both modes of thinking.
A second and, in my mind, equally compelling answer is that a major part of our mission as academic scientists is to educate the next generation of researchers, and conducting basic research is essential to this education. While all academic scientists dream of a big breakthrough, the reality is that, for most of us, our most important product and our greatest chance of making an impact is the next generation of scientists we train. A big part of that training comes in the form of basic research in the lab.
Ultimately, there must be scientists who understand the world at its most fundamental level and push that understanding forward, just as there must be scientists who know how to translate this information into technologies and applications. If we lose the foundational knowledge in any scientific field by not asking its most basic questions, the whole house will crumble.