From the beginning of time, every human society has reconstructed its own history. Some of these reconstructions have been meticulous and factual. Most have been embellished to some degree, often including tall tales and mythologies that tie these societies to something spiritual (think Greek gods, for example). And some histories, like that of modern-day North Korea, have been outright fabrications.
The problem with historical fictions has less to do with their existence than with our belief in them. In the time before written languages were developed, tall tales gave people everywhere a way to pass down their oral histories in memorable ways, and to communicate the rules and values these societies believed were most important. As the Scientific Revolution got underway in earnest in the 1600s, particularly in Europe, societies began the process of trying to disentangle truth from the fictions that governed them. Who said kings were infallible? Why shouldn’t every citizen have basic rights? This birth of “liberalism”—now a four-letter word—was a turning point in human history, where people began emerging from the shadows of lies to question their environments and seek universal truths.
Our post-Scientific Revolution quest to separate truth from fiction is far from over, however. In the US from the late 1700s to the mid-1900s, we constructed a history in which African Americans were less than fully human in order to justify their persecution and enslavement. In 1930s Germany, Hitler constructed a history of the Jewish people that rationalized their extermination. In Russia today, Putin has reconstructed a history of the Ukrainian people that justifies the destruction of their country. Throughout the past and continuing into the present, history has been routinely weaponized to serve evil.
These alt-histories have also been misused to create bad public policies. We misrepresent truth in order to pass laws that align with our beliefs, and we intentionally confuse and misdirect voters with intellectually lazy narratives propped up by false equivalencies, biased interpretations and cherry-picked data. This alt-history is increasingly normal today, fashioned to construct fundraising narratives instead of accurate historical accounts, and to tell stories that move public opinion instead of deepening and broadening our understanding. Real history is hard, and in our microwave, push-button, Alexa-fed, AI-written society where the results on page two of our Google search are just too difficult to find, real history is once again in danger of becoming ancient history.
This trend is alarming in itself, but it’s also being fueled by a deluge of misinformation and disinformation on the Internet, soon to be greatly amplified by the growth of artificial intelligence. Polarization in general, and political polarization in particular, is also contributing to the growth of alt-histories. As a society, we have become increasingly tribal in our information consumption, and there are “news” channels to fit every belief system, a Facebook audience for every hate group, and a celebrity promoter for every conspiracy theory.
The disinformation that inflames these groups works because most of us don’t dig into the details. Rather—at least when it comes to information outside our comfort zone—we rely on trust, believability, and likability to figure out what is true and real, and what is false and fake. We like Rachel Maddow or Tucker Carlson, we believe our rabbi and our professor, and we align our thinking with coworkers and neighbors in order to get along. We trust these people in our networks to tell us the truth, just like we trust the products with the most 5-star reviews, the brands with the best reputations, and the services with the ads that connect best with our self-image and values.
At the same time, humanities programs that teach critical thinking and celebrate our global history and heritage are in precipitous decline, supplanted by programs like engineering and computer science that promise better returns on increasingly expensive college educations.
So, figuring out the truth—unless you’re a scientist working on science matters in your field—can require a trip through psychoanalysis land. The answers aren’t black and white, and the liability for spreading untruths and believing them is spread far and wide. When a famous actor tells her fans that vaccines cause autism, do we fault the actor for her irresponsible hubris, or do we fault her fans for taking medical advice from an actor? Or do we maybe fault our media outlets for giving this person a platform, or the medical establishment for not running a counter-education campaign? Maybe science is to blame for putting out bad information in the first place? Or that particular scientist? When a church leader tells his flock that evolution is a hoax, when a political leader tells her base that an election has been stolen, or when a television talk show host tells his fans that white nationalism is not a threat, we can—and we should—fault these talking heads for their horrifying lack of good judgment. But we can’t necessarily fault the people who parrot these lies and distortions, because this is what we do in society. Objectivity only works when the people we trust are trustworthy, when the issues at hand can be explained with facts rather than opinions, and when our channels for sharing and verifying these facts are fit for purpose.
Achieving all this is a near-impossible task, except in science. This is partly because science deals in verifiable data, of course, but also because, over the centuries, science has developed a pretty successful approach to truth wrangling. Because science has strict quality controls governing the production, publication and verification of information, scientists don’t believe something just because someone famous said it. Science needs to be verifiable and verified before it is considered knowledge. In theory, anyway. It can take time, for example, to correct errors that get introduced into the system. The 1998 Wakefield study purporting to show a link between vaccines and autism took years to debunk. Eugenics “science” in the early 1900s similarly took decades to wash out of the system, and in the meantime produced horrifying social and political outcomes. And certain incentives in academic publishing can cause the occasional researcher to commit research fraud.
Still, disinformation has affected science as well. Public confidence in science has remained somewhat high (at least relative to other professions) in modern times, but this is despite a decades-long disinformation campaign waged by big tobacco to discredit medical research and convince the public that smoking was harmless; an ongoing campaign funded by big oil to cast doubt on climate science; and countless other well-funded lies attacking everything from evolution to acid rain to medical care. Science has no central voice with which to defend itself, so one response of science communication advocates over the last several decades has been to train scientists themselves to become better communicators (a defense strategy that SCI has disagreed with; it makes more sense to put money into actual marketing work than to expect scientists to do this work effectively).
Another, much more vigorous response, motivated both by those who think science is hiding something and by those who want to get more science to policymakers more quickly, has been to require more science research to become publicly accessible. This movement toward more open science has been centuries in the making, spurred on by a variety of forces both internal and external to science. Even so, two of its key milestones are recent: what we recognize today as peer review emerged from the work of a US Congressional oversight committee in the mid-1970s to ensure that tax dollars in science were being properly spent, and the first government-funded repository of research literature emerged from the US National Institutes of Health in 2000 (PubMed Central), fulfilling both the growing promise of the Internet to share information and the growing need to communicate research more effectively.
A great many research communication changes and innovations have happened since then, not all of them good. Today, for example, most new research work is published in open access format (meaning that it’s free to read and reuse), but the cost for this benefit is borne by APCs, or article processing charges, rather than by subscriptions. The intent of this flip was to reduce the burden on university libraries of subscribing to a growing and increasingly expensive universe of research journals, but the result (so far) has been to make it far too expensive for many researchers around the world to publish their work, except in low-cost and usually disreputable journals.
Another questionable development has been the growing visibility of preprints. These are non-peer-reviewed science articles that are intended to help quickly spread data and ideas for review and feedback. Preprints have been wildly successful in some fields like physics and astronomy (indeed, the physics preprint server arXiv was launched at the very start of the Internet revolution in 1991), and many open advocates see preprints as an important way to reduce costs and improve access. But since preprints contain science in the raw, the research findings they contain—often not-ready-for-prime-time science that reputable journals will not publish (about two-thirds of preprint articles will eventually be published in journals, but only after improvements have been made)—can be taken out of context by the media and promoted as established fact rather than unsettled science, resulting in more backlash against science.
The very institution of science is not immune to alt-history either. Since the early 2000s, certain lobbyists and funders—many of them with a pronounced anti-publisher bias—have promoted a narrative that research needs to be free for everyone everywhere to read immediately after work has been completed, consequences to research and research publishing notwithstanding. Previous approaches to open science allowed researchers a few months to collect their thoughts before going public, and allowed publishers to recoup their investments by giving subscribers priority access to published work. This narrative of free and immediate is built on the historical myths that science as it is currently practiced is flawed and secretive, that publishers (who adjudicate and curate science knowledge) don’t add value to the system and in fact prevent the world from seeing science, and that by simply making science free to read and access, we can unlock the gates to new discovery and new cures.
While the promise of faster cures is alluring, the facts point toward different historical truths, and different solutions for improving the future of science. Indeed, the facts suggest that if we aren’t careful with our reforms, we may end up with policy solutions that make research and research communication worse than today, not better. But to justify our current open science policy direction, we are told by the alt-history funders and promoters that open science didn’t exist before the open science movement (although it did, in spades), that open science policies lead to more papers being cited (they don’t) and lower costs (they definitely don’t), that scientists everywhere support open science policies (scientists support the idea and goals of openness but often not the policy details of open science, although this varies by field, region, and policy), that open science was crucial to our victory over COVID (existing open science practices helped, but not new open science policies), and that open science will lead to faster discovery (it will not, unless we focus more on making data actually useful instead of just accessible). This policy debate, such as it is, is something of a tempest in a teapot for much of the world, but the communication reforms in progress will have major implications for the conduct of research in the coming years, as well as for the status of science as a pursuit open to all people everywhere, and not just those who can afford to participate.
SCI’s largest project over the last dozen years—OSI (the Open Scholarship Initiative)—has worked extensively with researchers and research leaders from around the world to identify the best path forward for open science. OSI’s 2023 report titled “Considering evidence-based approaches to open policy” provides more detail about the real history of open science, and describes how we are basing our current open policies on historical fallacy instead of fact. The report also describes how our global open science policies can be modified so we’re working together toward actually improving the world through open science, instead of working toward goals that only exist in an alt-history universe. To read the full report, please visit the OSI website at osiglobal.org.
Meanwhile, Jason Steinhauer’s History Communication Institute (HCI) is working on some of the broader issues with regard to history communication in modern society. Inspired by the Science Communication Institute and some of the work we’ve done, HCI is trying to develop a global community of historians who can work together on finding policy approaches to these and other issues. Steinhauer’s “History Disrupted” offers a more complete look at some of the many communication dynamics involved in how we see history today.
The following excerpt is from OSI’s “Considering evidence-based approaches to open policy” paper. For the sources noted in the in-text citations, please see the report’s references section.
Open access is a term that has gained much attention in research communication circles over the last twenty years. Generally speaking, it means making information easier to find and share, including but not limited to research information. Countries around the world have focused on open solutions reforms (including but not limited to open access, open data, open educational resources, and more; see Hampson 2021) as being essential to the future of research. The reason for this is not entirely clear, although effective advocacy and constant publicity about publisher profit margins have elevated open access into a sort of cause célèbre, with OA advocates cast as heroic Robin Hoods stealing from the rich and giving to the poor. Our passions appear to have inflated OA reform ideas into proxies for reforming the future of research. There is no actual international effort for this kind of work, of course (see Box 1), so open access policies, and to a lesser degree open science and open data policies, have become global research reform policies writ large, soaking up policymakers’ attention and creating changes that affect a broad swath of research and research communication practices well beyond just making information more open. In this open solutions race, open access policymakers have so far created the most policies with the most wide-ranging impacts on research.
Unfortunately, the evidence our policymakers have been relying on is inadequate, and the seriousness of our deliberations has not been commensurate with the significance of the policies in question. Our debates have instead been swirling in an anti-democratic eddy for decades, during which time we have not carefully listened to all parties involved—despite what in many cases are genuine efforts to listen and learn—and have allowed the policymaking process to be guided more by the opinions of interest groups and the biases of policymakers than by objective facts and evidence. This pattern is, perhaps unsurprisingly, consistent with the policymaking biases we have seen on many other high-profile science-related issues over the years. As a result, some people view the open access regulations we have created today as a significant and noble accomplishment while others see them as a complete failure unworthy of science. Is there a path toward open access policymaking that is more democratic and evidence-based? And if so, is it even possible to backtrack and think about new policy frameworks?
AGREEING ON DEFINITIONS
A first step might be to agree on what open access even means. As mentioned above, the term generally means making information easier to find and share. At its core, this means free to read. But the exact definition involves lots of caveats, depending on who is doing the defining. Some say information is only open access if it is free to read plus licensed in a way that permits unlimited reuse with attribution (a CC-BY license). Others say free plus CC-BY is not sufficient, and that additional conditions are also necessary, like zero embargo (no delay between publishing and accessibility). Still others pile on even more conditions, like metadata, repository, and data-sharing requirements. The same caveats apply to open data, open code, open educational resources, and more, where different kinds of information have different kinds of open definitions, conventions, options and outcomes.
In this report, we will use the terms open and open access interchangeably (along with the term open solutions, which is a blanket term describing all open approaches). This overlap is intentional. The world outside the confines of scholarly communication experts has conflated these terms and used them interchangeably, so much so that trying to make a distinction between them is now more confusing than helpful. At least in the policymaking world, OA and open now mean the same thing.
UNDERSTANDING HISTORY
Over the past 20 years, many valiant open access scholars have tried to organize the different ways in which “openness” is described.[1] Their efforts have ventured beyond just defining open, focusing also on trying to understand why we speak so many different languages when it comes to our open goals and methods. These scholars have offered a multitude of plausible explanations, all correct to some degree, including noting that many different philosophical, epistemological and economic motivations exist for open. But how did all these differences arise in the first place? The economic explanation may be the correct one to adopt (Mirowski 2018), but there’s also a simpler explanation. As it turns out, the concept and practice of openness has been evolving along at least a half-dozen distinct historical paths over a very long time, in some cases for centuries already. Over the years, these histories have led to the formation of entirely different branches of open, each with its own completely and legitimately different ideas about what open research looks like and how it should grow in the future.
The first historical branch of openness comes from within research itself. The need to share ideas and discoveries has always been a bedrock principle of scientific investigation (Poskett 2022). Over time, researchers have been adept at inventing the solutions they need (and that work) to communicate more openly and effectively with each other, including forming new scientific societies; attending conferences; creating new journals; creating a multitude of data catalogues and indexes; creating new standards; creating binding guidelines on the social and ethical need to share research data (see Box 2, for example); and creating highly successful data sharing and research collaboration partnerships and networks, particularly in the life sciences, high energy physics, astronomy, and genetics.
A closely related second branch of OA evolution centers around publishing practices. Research and the dissemination of research findings have always been closely tied.[2] Widespread use of the printing press, starting around the late 1500s, was a transformative event in human history that fundamentally changed our expectations for how knowledge could and should be shared (see Johns 1998), particularly for the practice of systematized research, which was just beginning to take root. By the mid-1800s, publications established explicitly to share ideas and discoveries were proliferating—more than 1,300 journals existed by then. It was crucial for scientists to be aware of what knowledge already existed in their field, but even then, doing so was becoming increasingly difficult. This need for more openness and increased awareness gradually led to standards and systems for what constituted clear and rapid sharing of knowledge, claims to discovery, proper citation methods and more (Csiszar 2018). These standards and systems continue to evolve today in response to the ever-increasing growth of research, the ever-changing needs of researchers, libraries, funders and governments, and the huge market opportunities available for creating the best new systems.
A philosophical offshoot of this second branch, technically distinct enough to be considered a third branch, is the growth of computer technology and the Internet starting around the mid-1980s. Once again, as with the advent of printing, these developments fundamentally changed our expectations about access to information, and paved the way for more open developments in research and society, such as the launch of GenBank in 1982 by the US Los Alamos National Laboratory, the world’s first public access repository of nucleotide sequences; creation of the world’s first preprint server, arXiv, in 1991 (originally for physics and astronomy research); publishing of the world’s first OA journals (through SciELO in 1997); formation of the Open Source Initiative in 1998 to help govern computer code; development of the first open educational resources (funded by the Hewlett Foundation beginning in 2001); and the world’s first OA megajournal (PLOS ONE in 2006). Today, it’s impossible to overestimate the influence that technology and the Internet have had on all things communication, from rapid download speeds to social media to the proliferation of publishing platforms. These developments continue to raise our expectations and increase the potential for what communication can become, not just in research but across society.
The fourth distinct branch in the evolution of open knowledge has centered around social development. Over time, the slow and steady march of the scientific method—valuing evidence, openness, transparency, accountability, and replicability—and its success at unlocking true knowledge has influenced everything from philosophy to politics, law and industry, which in turn has created more “norming” of this approach, particularly in the West.[3] For example, not long after the start of the Scientific Revolution in Europe, when natural philosophers such as Copernicus and Galileo successfully challenged prevailing explanations for how the world worked (as defined by Aristotle and the Catholic Church), social philosophers such as Locke, Hobbes and Rousseau (among others) were inspired to start questioning the world’s social order. This work led directly to revolutionary new political concepts, including France’s Declaration of the Rights of Man and the US Constitution (both of which took effect in 1789), which drew on the Scientific Revolution’s insight that man and society, too, were tied to the natural world through natural rights.
In parallel with this growing appreciation of and need for the scientific method, science and technology became driving forces of global development in the 1800s, with breakthroughs in physics, medicine and biology igniting massive change throughout the world. The public’s thirst for knowledge and enthusiasm for learning more about the natural world became a global phenomenon that continued into the first decades of the 1900s. In the aftermath of World War II, Karl Popper’s “The Open Society and Its Enemies” made the case that the open knowledge ethos of science needed to spread beyond science and into the fabric of societies—that it was important now more than ever to construct societies where truth is widespread and easily accessible, lest we backslide again into a world ruled by totalitarianism and fascism.
Popper’s work is generally acknowledged as the formal intellectual beginning of the open society movement. Today, many open advocacy groups travel along an offshoot of this branch, characterizing the need for open science as a social justice issue. The Internet has both shaped and enabled this ongoing work, raising our expectations for what technology can do for open knowledge and open society, with each wave of change raising those expectations further and spurring still more change.
A fifth historical branch of open has been accountability. Before the mid-1950s, accountability in research was largely internal, focused on ensuring that research was accurate, and that systems for reporting and writing about research were broadly accepted. In the post-WWII era, as government spending on research increased dramatically, the need for greater public accountability in research also developed, both financially and in terms of public access to what we were spending money on and why. Systems of accountability have now evolved to sophisticated heights, from grant evaluation procedures to modern research impact evaluation procedures and freedom of information laws, all from different government agencies and with different objectives. For example, the world’s first nationwide open access policy for scientific research was implemented by the US National Institutes of Health in 2008 (Suber 2008). What we now recognize as peer review was born out of US Congressional oversight into research in the mid-1970s (Baldwin 2018). And many countries now have their own research impact evaluation systems, perhaps none more carefully designed than the UK’s Research Excellence Framework (REF 2021).
A SIXTH BRANCH EMERGES
Amidst this centuries-long evolution of open thought and practices, participants at a 2002 conference in Budapest (the Budapest Open Access Initiative, or BOAI) advanced the idea that open access meant only one thing: that in addition to being free, research also needed to be licensed in a way that optimized the potential for its unrestricted reuse, unencumbered by typical copyright restrictions. The goals were simple: by making information free and easier to access and reuse, we could democratize research, lower publishing costs (by untethering publishing from publishers), and better serve the public good.
The language used in the BOAI declaration was lofty and Panglossian, reflecting the vision of the Internet circa 2002 that we were on the cusp of a world where information would soon flow freely across borders with little cost and enormous benefit for all mankind. Adding fuel to this declaration, several of the BOAI signatories would in the coming decade become the most prolific, eloquent and vocal opponents of high profits in commercial science publishing, including Stevan Harnad, Leslie Chan, Jean-Claude Guédon, Peter Suber, Michael Eisen, two representatives from the Open Society Institute, and one representative from SPARC (the Scholarly Publishing and Academic Resources Coalition; SPARC in particular would lead the anti-publisher march over the next 10-15 years).
Subsequent modifications to BOAI made at conferences in Berlin and Bethesda stipulated that research also needed to be made immediately available, with no delay allowed between publishing and free access to the public.
THE SIXTH BRANCH BECOMES ALL WE NOTICE
Over the next decade, promoted by the effective voices who helped craft this statement, supported by the money and organizing acumen of SPARC and the Open Society Institute,[4] and made timely by the spiraling cost of journals for academic libraries where, prophetically, commercial publishers played the role of the boogeyman to perfection,[5] the BOAI approach to open access became the bedrock philosophical foundation for most subsequent open access policies, and it continues to be so even today. All other historical branches of open access have been ignored.
This isn’t to say that BOAI’s policy recommendations were wrong. To many believers, they were exactly on target. Rather, the most vocal post-BOAI open advocates tended to portray open access as a contest between good and evil. Policy debates became urgent, polarized and confrontational—even personal. The policy space became a battlefield where there was no middle ground, and no willingness to understand issues from all sides, ignoring the different histories involved and the differing needs and points of view centuries in the making. Ideology was not only trumping the expert-driven democratic policymaking ideal, it was beating it into the ground with a hammer of righteous might (see Box 3 and Plutchak 2022). As one research leader remarked on the OSI listserv in 2018, we were going about reforming science in a very unscientific manner.
Today—and despite a large, meaningful and influential array of open tools, policies and efforts, from the Panton Principles and FAIR Principles governing open data (2009 and 2016, respectively) to a thick alphabet soup of important organizations and principles (DORA, GitHub, OSF, Lindau, PubMed Central, et al.)[6]—the BOAI approach has become an article of faith for most of the world’s significant open access policies,[7] from Europe’s Plan S to UNESCO’s open science policy to the University of California’s transformative agreement with Elsevier and the new US open access policy (the Nelson Memo).
The idea that open means free, immediate, and licensed for unlimited reuse is rarely challenged. Most major funders have also fully accepted this approach to open access.[8]
As our global open access policymaking efforts move forward, it’s important to remember there are many histories and forces still influencing open practices. Understanding this will help us better understand what needs to be done and where we might want to concentrate our efforts for maximum effect and sustainability. In this policy space, there is a tangle of history, actors, needs, motives, and objectives. We may want “open” to be a simple notion with a straightforward past and an obvious future, but as we shall continue to explore in this report, it is none of these things.
[1] Notable thinkers include Benedikt Fecher and Sascha Friesike (Fecher 2013), Jeroen Bosman and Bianca Kramer (Bosman & Kramer 2017), Samuel Moore (Moore 2017), Philip Mirowski (Mirowski 2018), Jon Tennant (Tennant 2019), and Rebecca Willen (Willen 2020). See OSI Policy Perspective 3 (Hampson 2020) for a more detailed overview of the philosophical underpinnings of open science.
[2] Vint Cerf (co-inventor of the Internet) and Keith Yamamoto (UC San Francisco Vice Chancellor for Science and Policy) both highlighted this point in their opening and closing remarks to OSI’s 2017 conference (see OSI 2017). For Cerf, increasing the reproducibility of published research is of paramount importance for the future of research, which requires increasing access, which in turn requires a much more serious focus on digital preservation—from hardware and operating systems to software and formats. Without this preservation and access, there can be no modern scientific record. For Yamamoto, the act of publishing cannot be separated from research. “If you don’t publish your experiment, it is exactly like not doing it.”
[3] Several excellent books on the history of science communication touch on this theme, including David Wootton’s “The Invention of Science” (Wootton 2019), Adrian Johns’ “The Nature of the Book” (Johns 1998) and James Poskett’s “Horizons” (Poskett 2022).
[4] At the time, SPARC was part of the Association of Research Libraries. It became a separately funded lobbying group in 2016.
[5] The number of scientific journal articles published doubles roughly every 17 years (Bornmann 2021) due to a steady increase in research spending, the emergence of new research disciplines, a splintering of existing disciplines into new specializations, and other factors (see Annex). Publishers trying to stay abreast of this growth note that their cost per article published has dropped over this doubling period, but the total cost of subscribing to all available content has still been too high for university libraries to bear. Richard Poynder’s 2019 essay, “Open access: Could defeat be snatched from the jaws of victory?” gives one of the most detailed accounts of this history (Poynder 2019). For an insider’s account of the politics at play, read “Public access policy in the United States” by T Scott Plutchak, Fred Dylla, Crispin Taylor and John Vaughn (Plutchak 2022).
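As a rough illustration of what this doubling rate implies (a back-of-envelope calculation of ours, not a figure from Bornmann): an annual growth rate $r$ that doubles output every 17 years must satisfy

$$(1 + r)^{17} = 2 \quad\Longrightarrow\quad r = 2^{1/17} - 1 \approx 0.042,$$

i.e., roughly 4 percent more articles published every year, compounding.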
[6] In the 20 years since 2002, various declarations have added nuance and complexity to the cri de coeur of BOAI. For example, in 2010, the Panton Principles qualified that publicly funded science should be in the public domain (CC-0) and that licenses that limit the reuse of data (like CC-BY) should be discouraged. DORA in 2012 and the Leiden Manifesto in 2015 both took aim at Journal Impact Factors, arguing that qualitative evaluations of research should matter more than quantitative evaluations. FAIR in 2016 argued that data must be easy to find, clearly accessible, interoperable with other systems, and optimized for reuse. The Lindau Guidelines of 2020 reiterated the need for scientific data and results to be made “openly available,” while adding that research and evaluation criteria must be transparent. Harkening back to the biomedical research declarations (see Box 1), Lindau also stated that science has a responsibility to society to communicate, educate and engage.
[7] The 16-person 2002 Budapest meeting was followed by a 24-person meeting in Bethesda in 2003. The Bethesda group built on the Budapest group’s work, adding provisions for how users would enact open access. A 2003 Berlin meeting that attracted around 100 representatives built on the Budapest and Bethesda definitions of open, culminating in the Berlin Declaration on Open Access, which is also a foundational philosophy in open access policy (Max-Planck 2003).
[8] Our acceptance has arguably even made us blind to conflicts of interest and hyperbole. For example, the CEO of open access publisher Frontiers was deeply involved in the development of Plan S (Schneider 2019). As for hyperbole, the new US open access policy promotes the merits of open access but lacks factual support for its recommendations (Clarke & Esposito 2022).
Glenn is Executive Director of the Science Communication Institute and Program Director for SCI’s global Open Scholarship Initiative. You can reach him at [email protected].