Helping science succeed

Is science elitist? Part 2: The dynamics from inside science

Part 1 of this article discussed how many external factors can influence the way we perceive science, including media-hyped partisanship, rising economic inequality, the Internet’s capacity to broadcast misinformation and disinformation, the threat science can pose to our worldviews, and the inconvenience of government regulation. How about internal factors? Does the way science itself operates have an impact on how we see science? Absolutely.


What exactly is a “scientist”? Google this question and you’ll get a hundred different answers. There are scientists who wear white lab coats and run experiments, scientists who use supercomputers to find relationships in huge mountains of data, and scientists who try to understand how stars are born. Rocket scientists are most likely not scientists at all but highly specialized mechanical or aerospace engineers. Scientists who work in academia typically pursue basic research questions (like trying to understand quantum particles), while their counterparts in industry focus on applied research (like trying to develop quantum computing technology). Psychology researchers use scientific principles to conduct and analyze research even though their neurophysiology brethren might be loath to call them “scientists.” Scientists who are principal investigators may not do any hands-on work at all but instead oversee the work of other scientists. And humanities researchers who travel the world piecing together cultural artifacts, or who read letters buried in Library of Congress vaults to learn more about our past, don’t even qualify for the “scientist” moniker, even though they are equally expert and meticulous in their own fields of research.

In short, scientists do many different things, and any one-size-fits-all description we try to come up with about what it means to be a scientist is going to be a bad fit for most scientists. Still, scientists have a lot in common. Most are elite, but in a good way, like elite athletes who are at the top of their games: very smart, highly capable, highly expert individuals, many of whom have spent years earning their PhDs, doing postdoc work, and contributing to our knowledge of the world. Critics of science and scientists aren’t handing out high-fives, though, when they use the e-word. In their usage, elite means untrustworthy, out of touch with reality, and arrogant (as in scientists think they know what’s best for everyone).

Are scientists actually this bad kind of elite? Well, no—not as a group anyway—but there are nuggets of truth in this accusation, which partly explains why so many people have been convinced that scientists are in fact the bad kind of elite. Pew Research Center polls show that in 2021, public trust in scientists dropped to its lowest level in the past 50 years, and trust in scientists among Republicans dropped precipitously from previous years. External factors like media-hyped partisanship and disinformation are huge drivers of this drop, but there are also dynamics from inside science that might help fuel these seemingly bizarro-world perceptions. We’ll look at four of these internal dynamics in this article: branding control (or lack thereof) in science, the language of science, publishing norms in science, and tensions within science and between science and public policy.

BRANDING

The consumer marketplace and political landscape are filled with hacks and quacks. Do we honestly, deep down, believe those infomercials that hawk memory supplements and joint pain medications? We wouldn’t if we read the “studies” that “prove” the effectiveness of these products. But we aren’t up in arms about these fake science claims because we’ve grown to expect that lots of people claim lots of things to make money—always have, always will. Similarly, do we really believe Joe Politician when he says climate change is a hoax? Many of us do because we don’t understand the science, but then many of us just don’t bother to correct Joe because politicians will say stupid stuff—always have, always will. We’re going to let it slide and hope others do as well. Also, anyone who corrects Joe will just sound shrill and be shouted down by the haters, so it’s almost more trouble than it’s worth to stand up for science.

As we wrote about a number of years ago in Supersize Science, science doesn’t have control of its own brand. Anyone can claim to be an expert, anyone can slap the “science” label on something and call it scientific, and no one runs to the defense of science when it is wronged or acts as a global clearinghouse of scientific truth. For the most part, this tornado of misinformation and disinformation just causes confusion and disappointment. People think that because Boston is getting more snow than ever, the earth can’t possibly be warming, or that because their science diet didn’t work or their clinically proven ingredients weren’t effective, something must be wrong with science.

All this confusion and lack of correction is a problem for science insofar as it weakens the science brand—it weakens public trust in science. But brand dilution isn’t the only problem caused by the mislabeling and misuse of science. Shoddy science can also result. Inside science itself, the humanities and social sciences have long strived to be more like the natural sciences and to use rigorous scientific methods to help discover fundamental truths about the human condition. But trying to be more scientific and actually being scientific are two entirely different things. As it turns out, human behavior is a lot harder to quantify than natural phenomena, and putting too fine a point on already shaky data is highly inadvisable. Today, researchers fear that the majority of psychology studies conducted over the last few decades are not reproducible because the “science” methods they used were simply not scientific enough.

Another damaging variety of science misuse has happened through the transfer of credibility—when we grant credibility and attention to a physicist who leads a decades-long campaign to debunk climate change science (see Fred Singer), or an osteopath who convinces millions that COVID vaccines don’t work (see Joseph Mercola). What we the public don’t always grasp in the face of these falsehoods is that research fields are so specialized that simply having a PhD in, say, biology, doesn’t make one an expert in all things biology. Having a PhD in physics certainly doesn’t make one an expert in climate science, and osteopaths are not virologists or even medical doctors.

Why do scientists do this? In the recent past, some (like Singer) have been paid by industries and special interest groups with economic or political motives to twist the truth. Other scientists may honestly disagree with policy decisions, but their opinions get amplified by media to a volume that doesn’t necessarily reflect their importance within science. And still others, like many media personalities, appear to have cognitive biases that cause them to overestimate their knowledge or ability, particularly in areas where they have little or no experience. This cognitive bias—called the Dunning-Kruger effect—may be more dangerous when it happens with scientists because there’s a greater potential to spread disinformation. That is, it’s easy to pick up on the fact that your drunk Uncle Bob is out of his gourd when he’s blathering at Thanksgiving dinner about how the moon landings were faked, but this blather may be harder to dismiss if your drunk uncle is also a rocket scientist.

Each of these loss-of-branding outcomes—quacks and hacks, science mislabeling and misuse, shoddy “science,” and credibility transfers—helps explain why some people think it’s absolutely fair to label scientists as “elitist.” There is no lack of consensus in real science about issues like climate change or the effectiveness of vaccines; there is only manufactured controversy, disinformation, fake science, and fake scientists that confuse the public and distort how we view science and scientists. And in the process of trying to correct the record, real experts become “elitists” for explaining that they are, in fact, the experts, which only makes them seem more elite. And patronizing. A real scientist, after all, should be willing to debate RFK Jr. and entertain his Dunning-Kruger delusions about vaccines; by not debating him, scientists are acting like they know what’s best.

LANGUAGE

Science can be complicated and difficult to explain, so when scientists speak to the public, they can sound quite elitist—if, for example, they talk down to questioners, or dismiss questioners’ ideas and concerns out of hand. So much is at play here—how to present evidence, how to speak clearly without sounding condescending, how to connect with audiences by empathizing with their understanding of an issue, and more. Most scientists have outreach training available to them, but it’s asking a lot to expect scientists to also be interested in doing this work, let alone to be as skilled at it as Carl Sagan or Neil deGrasse Tyson.

At the same time, scientists often worry about oversimplifying science at the expense of accuracy, making overly broad and sensational claims, or taking time and resources away from their research to do public outreach and “marketing” work. In other words, the skills and tools used for convincing us which car to buy are the same skills and tools scientists are loath to use. Therefore, generally speaking, scientists and science communicators are breeds apart (with notable exceptions, of course). Science communicators bring us Nova and Scientific American, and no one is accusing these resources of being the bad kind of elite. When scientists talk to each other, though—which happens by way of thousands of conferences and nearly four million journal articles every year—suffice it to say that many badly drawn charts with tiny print will be involved.

Why is this a problem? Well, maybe it is, maybe it isn’t. The answer depends on whom you ask. Some researchers and scholarly communication experts defend complicated language as necessary because complex science requires precise language to describe it: the more complex science becomes, the more precise our language needs to be. Other experts disagree, noting that wholly apart from the growing complexity of science, the general style of science writing has become much more stilted over time, to the point where even the simplest ideas are expressed in convoluted ways. To these experts, the goal of science writing today isn’t precision but establishing a certain cadence that other scholars in the field will recognize as authoritative, with a requisite density of citations, jargon, statistical analyses, and unintelligible charts. The goal of this style isn’t to inform the public but to impress peers and tenure committees.

There are other linguistic dynamics at work here as well. In addition to its complexity, some of the key rhetorical conventions of science differ from those of everyday language, which can further exacerbate misinterpretation and misunderstanding. For example, science relies on hedged claims—factor x may be responsible for outcome y, given the preponderance of evidence—but hedged claims in everyday language sound like guesses, even uncertainty. “I may go to the grocery store after work to get milk, given that we are out of milk.” Well, will you or won’t you? When scientific evidence expressed in hedges meets a public that thinks hedges are guesses, a whole world of elitism erupts: scientists try to explain that their evidence is valid, critics highlight the uncertainty, and the gap in interpreting actual meaning is attributed to liberal bias and elitist tendencies.

Hyperbole also punishes science—much more so than when, for example, a politician makes hyperbolic claims about an opponent or an issue. Sometimes this hyperbole is the product of sloppy research; sometimes it’s the result of inartful phrasing; sometimes it comes about through bad analysis or attempts to get press attention. The 1998 Wakefield study claiming a link between vaccines and autism was hyperbolic for all these reasons. Scientists are generally careful to avoid hyperbole because they are aware of the harm it can cause to their personal reputations, their institutions’ reputations, and research itself. There are also public health risks in many cases, as when concerned parents stop vaccinating their children, or patients sick with COVID start self-medicating with horse dewormer. To the non-expert public, hyperbole can be exciting on the one hand—there’s always a stir when a cure is on the horizon—but damaging as well, as desperate patients are deceived and harmed, confidence in science takes a hit because people don’t know whom to believe, and the elitism charge is raised as scientists with their feet on the ground are viewed as preventing progress and not being open to new ideas.

Even archaic language conventions can make scientists seem elitist, like referring to settled science as “Theory.” When something is promoted to “Theory” status in science, it has achieved the highest rung of factuality, like the Theory of Gravity or the Theory of Evolution. But in the plain-spoken world, this convention makes it sound like evolution is still just a theory—lowercase “t”—in science. Scientists must be some kind of elite to defend Evolution so vociferously when they don’t really know the answer. Their liberal elitist bias must be the reason why they are against creationism. Why else?

PUBLISHING

Along with science conferences, research journals are the mainstay of science communication, and they have been for over 350 years now. Today, somewhere around 90,000 journals are published (although no one knows exactly how many since not all of them are indexed), and within their pages, around four million research articles appear every year on a dizzying variety of subjects.

Journals are important to science, but they are written for scientists, not for policymakers and the general public. Part of the reason for this is purpose—journals are how scientists share ideas, and sharing ideas with the necessary accuracy and completeness requires a format that may be inaccessible to the non-expert. Still, the conventions of these articles can make them inaccessible even to other scientists. Harvard’s Steven Pinker has described the standard journal article style as “turgid, soggy, wooden, bloated, clumsy, obscure, unpleasant to read, and impossible to understand.”

Since the dawn of the Internet, there has been much ado in science communication about how digital technology would change journals completely, opening up a world of different publishing possibilities. So far, however, this change has been marginal. Journal publishing is changing for sure, but at the same time it is very much a business, with a vested interest in reinforcing conventions that have a long and proven history of working for science and that also work to the benefit of publishers (which should come as no surprise to anyone). This entrenchment is evident in two main ways: how hard it has been to shift away from the standard journal article format toward something more readable and accessible, and the lengths to which the international community has gone to keep paying for these articles as costs continue to climb.

With regard to the shift away from journals, an entire ecosystem of reformers has worked for decades now to lure scientists into other publishing formats, but the appeal of journals remains strong because university researchers are evaluated for tenure and grants in large part on the basis of their journal publishing records. There is no incentive for these researchers to try something else. Some shifts are happening in areas like better data sharing, more open access publishing, preprints, post-publication peer review, and so on, but none of these shifts makes science more readable or usable to policymakers and the general public—that is, the information is still written for the elites. Journals are peer-to-peer communication tools, not Scientific American. In fact, the increasing internationalization of science is probably making the readability of journal writing even worse. English has solidified its position as the lingua franca of science over the past 30 years or so, but over this same period, research has become much more international, meaning that many more authors who lack fluency in written English need to publish in English-language journals. The need to communicate in a common language can create barriers for authors and readers alike and exacerbate perceptions of elitism as already complex language gets even more convoluted (at the same time, of course, using a common language is helping more scientists communicate with each other).

This is very much a university researcher problem, though. These researchers, who primarily focus on basic research, produce the vast majority of journal articles. Researchers in government and industry—who account for far more research spending than their university counterparts, but in applied research settings (such as tech, energy, and defense)—primarily publish white papers. White papers have no standard format, no centralized (by subject matter) location, and are often but not always free to access (sometimes they are not released at all due to industry or government secrets). So, this should be great news, right? There’s still lots of science research out there for policymakers and the public to read that is maybe more readable, plus free to boot? Well…maybe. The world’s research publishing indexes don’t track white papers because they aren’t considered worthy by the conventions of journal publishers—conventions that are designed to promote rigorous scientific discourse, but which are also very lucrative to maintain. Some start-ups are trying to bridge this gap, but by and large, science research is either published in an expensive and unreadable journal, or it’s hiding somewhere in a government or industry archive. This isn’t an argument for a wholesale shift away from journals, just an observation that the system is exclusive and self-reinforcing—i.e., elite.

Maintaining the economic viability of the journal system has been a struggle. The number of journals being published doubles about every 20 years as the number of researchers grows, research spending increases, and new research specialties emerge. In the early 2000s, all these journals were available only by subscription, and the rising number and cost of these subscriptions were causing a “serials crisis” in academic libraries around the world. A small revolt ensued, and libraries launched a rhetorical war against academic publishers, especially Elsevier (the largest academic publisher). The first peace settlement involved bulk discounts—“Big Deals” that gave libraries discounts for subscribing to bundles of journals. But over time, pressure built for solutions that didn’t simply line the pockets of publishers but instead focused on making research cheaper to publish and free to read. By about 2019, momentum began building for a global flip from subscriptions to an author-pays model of publishing. The intent of this shift was to make research cheap and free, but the predictable outcome (one that SCI has warned about for years now) is that it has made the cost to publish in the best journals incredibly expensive—over $10,000 per article in many cases. The result is that scientists from lower-resourced regions and institutions are now excluded from even publishing their research.

This particular internal dynamic clearly crosses the Rubicon: it is not merely perceived by outsiders as elitist; it is, in fact, elitist. Author-pays policy reforms have been championed by wealthy governments, institutions, and funders—primarily in the EU and US—who knew that researchers in the Global South would be adversely affected by these policies but chose to push forward anyway, thinking that publishers would grant sufficient waivers and exceptions to balance access. In reality, this has been an elitist approach to achieving open science, where the economic upper classes of the world are willing to risk damaging the international collaborative framework of science in order to reap the hypothetical benefits of openness. SCI has been a strong proponent of open science for over a decade now and has worked to achieve it, but not through the author-pays strategy (see OSI’s work for more details at osiglobal.org).

Can all (or any) of these dynamics change? Maybe, hopefully, but the incentives in journal publishing are entirely misaligned. As noted, researchers are rewarded (in promotion, tenure, and grant-making decisions) for publishing more work, and for publishing in high-cost, high-prestige journals, which leads to weird outcomes like cash rewards for publishing in high-impact journals, the rise of fake journals, splitting work across multiple papers, including thousands of co-authors on a single paper, faking data in order to fake media-worthy discoveries, boosting citation counts by hyping work through social media, and eschewing the kind of writing, publishing, and outreach that is more accessible to the public. The high-impact, high-prestige article is all-important, and this importance only reinforces existing norms in publishing. So, researchers double down on existing publishing norms instead of looking seriously at ways to improve this form of communication. To the outsider, and to some on the inside, this signals a preference for maintaining the elitist status quo rather than working together on a better and more egalitarian system of communication. Indeed, when it comes to perceptions of elitism, you can hardly invent a better foil: liberal-leaning scientists who get government funding to do their work are writing about issues that have far-reaching public policy implications, but they are doing so in ways that are inaccessible and that policymakers and the public can’t understand.

TENSIONS

The final factor we’ll look at is tensions. There are four main kinds at work when it comes to fueling perceptions of elitism in science: public policy, administrative, funding, and hesitancy.

In terms of public policy, it’s obvious that when laws and regulations are being considered or rolled out, people who don’t like these laws and regulations are going to find a scapegoat. As we discussed in part one of this essay, science has been at the vanguard of lots of public policy issues over the last 40 years that Republicans in particular have taken exception to—environmental protections, health regulations, social protections, and more. Got a problem with clean water rules? Blame the scientists for their unrealistic (and probably flawed) assessment of risks. Smoking? The scientists are being alarmists. Climate change? Bad data. COVID masks? Anthony Fauci is out of his element. LGBTQ protections? Liberal bias and wholly unscientific. These are external tensions, though, which is to say that scientists aren’t bringing this controversy on themselves through their own actions. So, are there any internal tensions with regard to public policy? The answer is a little squishy because the intersection of science and policy is politically controlled, so there will absolutely be internal tensions, but they won’t be entirely scientific in nature. For example, when the EPA is run by a politically appointed non-scientist who thinks government regulation is overreach, then the kinds of policies put forward for public comment by the EPA (via political leaders) will likely grate on the sensibilities of the career scientists who work for the EPA. The same is true for all other government agencies. These disagreements can result in mass resignations, public disputes over how to weigh scientific evidence, and running battles over whether the scientists who defend truth and knowledge are being elitist because they sincerely think they know what the best policies for the country should look like. In truth, there is simply not a direct line between science and policy; science needs to work effectively within the political process and can’t do an end-run around it. Where tensions emerge in this process, charges of elitism are sure to be heard.

The administrative framework supporting science is also far from perfect. Maybe we expect this framework to be perfect because it’s related to science, but in fact, it has many flaws that fuel tensions over how science itself is perceived. Consider science funding, for example. Most basic research in the US is paid for by government agencies like the National Institutes of Health and National Science Foundation. Politicians looking for headlines (like Senator Proxmire in the 1970s) have often ridiculed this spending as being unnecessary since we have plenty of “real world” problems to address. These anti-elitism tensions are heightened when the spending in question is for science and regulation these politicians oppose anyway (like a climate change observatory or water quality monitoring station).

Administrative tensions also exist insofar as the business of science is not immune to the ills that plague the rest of society, from racism and misogyny to fraud, workplace abuse, and unemployment (outside of computer science, there just aren’t enough science jobs to go around, despite how we constantly trumpet the need for more STEM graduates). All of these topics are worthy of more discussion but are highlighted here simply to note that they exist and that they feed into internal perceptions of science that are often starkly at odds with what we might expect of such a noble profession.

Funding tensions happen inside science as well. The competition for grant funding continues to get more and more intense—only about one in five applications is approved today, and researchers spend a great deal of time writing grant applications that get rejected. This creates tension inside science about who and what gets funded, with funding agencies favoring principal investigators who have long publishing resumes and work at the most well-known universities—a Matthew Effect where the rich get richer, and the science ideas of the rich get the most research attention. This Matthew Effect also plays out globally, where first-world ideas and problems get published in the most prominent journals and end up pushing the ideas, problems, and policy priorities of the Global South to the bottom of the pile. This bias against the Global South is amplified as the article processing charge has taken hold and as English has increasingly become the lingua franca of science (see the previous section). Like the author-pays model, this elitism doesn’t factor into external criticisms of science, but it is clearly elitism nonetheless.

A final dynamic worth noting is that many tensions inside science work against complete and immediate transparency. For example, scientists may want to hang onto their data for a while before publishing it, lest another scientist scoop their discovery. Or they might fear their data will be misinterpreted or misused. This hesitancy can get spun by critics. For example, why can’t we see the public health data behind the EPA’s adoption of air quality rules? (The answer: the de-identified data is always available, but the patient-level data contains privacy-protected information that can’t be released.) Or why can’t all science be published for free so everyone can read it? (The answer: see the previous section about journal reforms.)

These and other internal tensions all point to an internal kind of elitism in science—not elitism in the sense that scientists are out of touch with reality and not to be trusted, but elitism in the sense that hierarchies, harmful conventions, biases, and other tensions exist inside science. Some of these tensions can taint the reputation of science, both internally and externally. Other tensions can be interpreted as signals of elitism—scientists who publicly disagree with science policies put forward by political leaders, scientists who hide their data, and scientists who lobby for higher spending on research when we should really be spending more money on roads and bridges.

 

Are scientists elitist then? No, not as a group. But clearly, science opens itself to plenty of criticism that can lead to charges of elitism. This isn’t a blanket condemnation, though. Like the definition of science, “elitism” means different things to different people. To the science communication policy analyst, it means science isn’t operating at its highest, best level and needs to make reforms. To the partisan, kept at arm’s length from scientific discourse by a thicket of impenetrable gobbledygook and feeling under assault by policies that don’t align with their own thinking, it means scientists are trying to force unpopular change, hide facts from the public, and obscure faulty reasoning behind a fog of rhetoric. All these criticisms have merit, but the internal charge of elitism is based in fact and experience, while the external charge is based in perception.

Is there anything we can do to fix the problems highlighted by these criticisms? There are many internal areas where we can start to focus, like improving branding, language, and publishing in science, and working to reduce internal tensions. The external issues identified in part one of this essay, like media-hyped partisanship and science misinformation, will be harder to tackle, at least for science all by itself; the internal fixes are more within the control of science. Whatever the focus points, though, leadership and resources will be an issue. No agencies or institutions have exhibited the interest or ability to help design and promote a more effective future for science communication. There is also no significant federal or foundation money to help pay for this work, nor is there likely to be in the future, because funders have consistently viewed science communication as an add-on to science and not integral to its success. Therefore, step one will be to convince the world that science communication is important enough to support. We think it is, obviously. If any funders are interested, let’s talk about how to get started.

Glenn Hampson

Glenn is Executive Director of the Science Communication Institute and Program Director for SCI’s global Open Scholarship Initiative. You can reach him at [email protected].