Helping science succeed

Considering evidence-based approaches to open policy

According to the world’s most influential open access policies, only certain types of information outputs are genuinely open. In practice, however, there are many types of open access outcomes and solutions. A more flexible, evidence-based approach to creating open access policy will better meet researchers’ requirements and reduce the unintended consequences of our current policies.


The most influential open access policies in the world today are founded on the belief from the early 2000s that only specific types of information outputs are truly open. In this ideology, all other types of outputs (such as free to read but still copyrighted) are unacceptable, particularly for academic journals.

After seven years of global, multi-stakeholder engagement and research—during which our understanding of the global information landscape has evolved a thousand-fold from the early 2000s—the Open Scholarship Initiative (OSI), in collaboration with the United Nations Educational, Scientific, and Cultural Organization (UNESCO), has reached the opposite conclusion: that openness has many definitions and outcomes, and that many different open solutions are working well in modern research communication. In fact, some of today’s most robust and promising open solutions would not even be considered open by the ideology of the early 2000s.

As policymakers around the globe move forward with the challenge of making research more accessible, it is crucial that these efforts be based on solid, democratic, fact-based foundations. Particularly, policymakers should pay close attention to what researchers need, what information sharing solutions are already working in the research world (including solutions that do not fit common definitions of open), and the negative unintended consequences of our current open policies.

UNESCO has long argued that equity should be a pillar of this next-generation open policy framework. OSI has proposed that doing something with open should be a second pillar, treating open as a tool to help research succeed rather than as an end in itself. OSI’s 2022 research communication surveys indicate that the majority of researchers agree with this perspective and methodology.

Reconsidering our open policies does not necessarily mean abandoning Plan S, the Nelson Memo, the UNESCO open science policy, or major transformative agreements. In any case, these policies are all evolving gradually in response to feedback and market pressures. Rather, it suggests that in the future, we should also develop broad, inclusive, flexible, evidence-based policies as part of a tapestry of open options and approaches. Doing so will benefit researchers and societies worldwide, improve global equity in research and research communication, allow the world to capitalize on the full potential of open research, and help prevent research from fracturing along regional and ideological lines.


The Open Scholarship Initiative (OSI) is an international group of leaders and experts in scholarly communication. Approximately 450 individuals have participated in group discussions since 2015, representing over 250 research institutions from 32 countries and 18 stakeholder groups. UNESCO, non-profit foundations such as Sloan, commercial publishers and publishing industry groups, scholarly societies, universities, scholarly communication experts, and participants themselves (by way of conference fees) have supported OSI’s work.1

OSI has served as an open policy observatory, facilitating direct communication between high-level leaders in scholarly communication, and synthesizing policy recommendations for UNESCO’s consideration based on our group’s extensive knowledge. Our policy recommendations have always been more a collection of perspectives than a consensus opinion, given that we represent so many diverse global regions and differing points of view. For example, when we issued a generally negative critique of Plan S in 2019 due to how it would widen the gap between high and low income countries by replacing paywalls with “playwalls” (i.e., by eliminating subscription access to published research and replacing it with a system where researchers are charged for publishing their findings), roughly a third of OSI participants still supported Plan S with minor or major modifications, while another third did not (Hampson 2019).

These honest differences of opinion exist not only within OSI but within the broader scholarly communication community as well, where some are cheering the current state of open access policies like Plan S, others are urging a more thoughtful and restrained approach, and still others are resigned to the fact that rapid change is happening and are just trying their best to adapt. On social media, these differences of opinion frequently manifest as a battle between good and evil, between those who support rapid reform for the greater good and those who support a status quo characterized by entrenched dysfunction and profiteering.

In reality, though, we are all working toward the same goal: A future where we can do more with research because more research is open and accessible. There are some who want to act now, and others who are committed to finding solutions that truly work for researchers everywhere, not just in the United States and the European Union. Our commitment to working for more inclusiveness and equity is what best defines OSI’s work. Generally speaking, this challenge has appealed to scholarly communication analysts (many of the world’s leading scholarly communication experts have contributed to OSI’s work) but has frustrated those on the far right and far left. Open access (OA) critics have not engaged much with OSI over the years, fearing reputational damage to their careers and institutions by speaking negatively about OA reform ideas and policies, while those who are morally outraged by publisher profits have not engaged much with OSI because OA reforms are, in their mind, a social justice issue with a clear cause and clear solution. For these OA supporters, debate equals appeasement.

To date, OSI’s authors have published six policy perspectives (including this one) that detail the group’s findings and recommendations. Our first policy perspective, published in March 2019, analyzed the pros and cons of Plan S and suggested that the organizers (cOAlition S) modify their plan to prevent the kinds of unintended consequences we’re currently witnessing with regard to growing inequity between researchers with large publishing budgets and those without. The second policy report from OSI, published in April 2020, examined the common ground for policymaking in this space. OSI argued, based on our internal discussions, conference proceedings, and original research, that there are numerous areas of common ground in this community, and that working together on these areas was the most rational approach to policymaking. OSI’s third report, published in June 2020, served as an introduction to OSI’s participation in UNESCO’s open science policy initiative. In this document, OSI provided UNESCO with extensive research on how open science is variously defined and what global open science policies should look like (OSI later participated in UNESCO’s regional consultative meetings and served as an official observer during the final passage of this policy). The fourth policy perspective from OSI, published in February 2021, investigated the technical and policy overlap between all open solutions, including open access, open data, open code, open government, open educational resources, open science, and open methods. This original research led us to the conclusion that the most effective framework for inclusive open solutions policies will be built on a foundation of achieving common goals, such as working together to cure cancer by creating open policies and resources that enable more information of all types to flow between cancer researchers, as opposed to thinking of open as a way to collect information in text, data, or code format. OSI’s fifth report, published concurrently with this report, summarizes the results of its 2022 researcher surveys.

This sixth OSI policy perspective is, in a sense, the culmination of our five previous OSI reports, bringing together their observations and recommendations. In this document, we will reiterate that APCs are harmful and that ideologically-based policies limit the potential of open (OSI Policy Perspective 1); that common ground is abundant and should be our primary policy focus (Policy Perspective 2); that open science policies are an obvious vector for change, but these policies must be grounded in evidence and make sense to researchers (Policy Perspective 3); and that developing broad, flexible, long-term, goal-oriented strategies is essential (Policy Perspectives 4 and 5). In building our case, we will also summarize the key recommendations of OSI participants since 2015 and note how the results of our global surveys of researchers in 2022 support these recommendations.

We believe the observations and recommendations in this report have withstood scrutiny and can serve as the foundation for a new generation of open research policies that are more effective and sustainable than current policies. This claim may appear overly confident. After all, current open access policymaking efforts are continuing unabated, and countries have an abundance of existing policies to choose from without considering new policy frameworks from OSI. As we will discuss in the introduction of this report, however, many of these policies, particularly those of global significance, are not based on evidence or vetted through democratic policymaking processes, as we would expect for sound public policy. A weak foundation is only part of the problem; failing to understand the needs and perspectives of researchers is also a major flaw.

The recommendations in this report describe a range of global open access policies that can be used as templates by researchers, institutions and countries around the world. If you have any questions or would like to provide feedback on this report, please email OSI program director Glenn Hampson at [email protected] by August 31, 2023.


Walt Whitman, the American bard of democracy, noted 150 years ago that democracy isn’t just about voting. It’s also about respecting different points of view in all walks of life.

Representative democracy was a nascent and revolutionary form of government in Whitman’s time. Today, about two-thirds of the world’s countries are democratic. In most of these countries, when it comes to making public policy, democratic principles are the ideal: Experts convene to study an issue, they invite broad and representative public comment to inform their deliberations, and they draft thoughtful recommendations for policymakers and elected politicians to consider. Some policies get codified into law; other policies are amended or disappear entirely over time.

The reality, of course, is that policymakers aren’t robots. Rather, they are individuals who enter the arena of public policy with their own opinions, biases, and motivations. In addition, policymakers are not independent of political leaders. Even though one of the most important characteristics of the modern administrative state is that public servants strive to be impartial and objective, politicians sometimes task administrators with “color by number” policymaking (using supplied “facts” or implementing predetermined partisan solutions) rather than building policy from the ground up. Even when expert policymaking processes are adhered to, political judgment frequently trumps expert recommendations, and interest groups focus more on demonizing opposing viewpoints and misrepresenting facts than on finding common ground and workable solutions.

In the US, this is the history of much high-profile public policymaking, from manifest destiny to slavery, women’s suffrage, civil rights, and immigration. In the area of science, public policy smear and disinformation campaigns have taken place over issues like DDT, tobacco, acid rain, the earth’s ozone layer, clean air and water, climate change, and COVID vaccines (Oreskes 2011). Anti-democratic dynamics in policymaking tax our time and patience, harden our positions, deepen our distrust in facts and government, and delay solutions to important problems. They can even lead us to adopt the wrong policies altogether.

What does any of this have to do with open access? Open access is a term that has gained much attention in research communication circles over the last twenty years. Generally speaking, it means making information easier to find and share, including but not limited to research information. Countries around the world have focused on open solutions reforms (including but not limited to open access, open data, open educational resources, and more; see Hampson 2021) as being essential to the future of research. The reason for this is not entirely clear, although effective advocacy and constant publicity about publisher profit margins have elevated open access into a sort of cause célèbre, with OA advocates cast as heroic Robin Hoods stealing from the rich and giving to the poor. Our passions appear to have inflamed OA reform ideas into being proxies for reforming the future of research. There is no actual international effort for this kind of work, of course (see Box 1), so open access policies, and to a lesser degree open science and open data policies, have become global research reform policies writ large, soaking up policymakers’ attention and creating changes that affect a broad swath of research and research communication practices well beyond just making information more open. In this open solutions race, open access policymakers have so far created the most policies with the most wide-ranging impacts on research.

Unfortunately, the evidence our policymakers have been relying on is inadequate, and the seriousness of our deliberations has not been commensurate with the significance of the policies in question. Our debates have instead been swirling in an anti-democratic eddy for decades, during which time we have not carefully listened to all parties involved—despite what in many cases are genuine efforts to listen and learn—and have instead allowed the policymaking process to be guided more by the opinions of interest groups and biases of policymakers than by objective facts and evidence. This pattern is, maybe unsurprisingly, consistent with the policymaking biases we have seen for many other high-profile science-related issues over the years. As a result, some people view the open access regulations we have created today as a significant and noble accomplishment while others see them as a complete failure unworthy of science. Is there a path toward open access policymaking that is more democratic and evidence-based? And if so, is it even possible to backtrack and think about new policy frameworks?


In democratic societies, our ideal is for policymaking bodies to operate within their own spheres of influence and expertise. We neither want nor expect our local health department to design electrical codes, or the US to design immigration laws for France. We also expect strong communication in the policymaking process between those who design policies and those who will be affected. If there is too much disconnect between policymakers and the governed, we end up with unreasonable, unjust, ineffective, and unsustainable laws. Ultimately, of course, public policy is going to be influenced by external factors like bias and politics. But to the extent possible, democratic societies always aspire to the ideal that policymaking is driven by expertise and evidence, and works for the greater good; it should not be ideologically driven, or inflict harm on society (although, of course, much of it has throughout history).

When it comes to developing research communication policy, who is the expert? No one really. There are enormous scholarly societies like the American Association for the Advancement of Science and American Geophysical Union who have participated in high level conversations on research communication topics for decades. Similarly, research institutions like the Howard Hughes Medical Institute, the Los Alamos National Laboratory, Max Planck Institute and Cold Spring Harbor Laboratory all have decades of experience thinking and publishing insights about these issues. Government agencies around the world have also been deeply engaged, from the US National Institutes of Health, National Science Foundation, and National Academies of Science, Engineering and Medicine, to China’s Association for Science and Technology, Brazil’s São Paulo Research Foundation, India’s Ministry of Science and Technology, UK Research & Innovation, Horizon Europe, the Australian Research Council, and many more. All have hosted and participated in conferences about issues in research communication for years. Publishers have also participated and helped fund these conversations, along with universities, libraries, philanthropies, and interest groups. The collective expertise of these groups regarding research communication policy is deep, but it is also dispersed throughout the world.

When it comes to creating global research policy, then, which single agency has the standing and expertise to do so? Here again, the answer is no one. There is no single voice that speaks for all research everywhere. The needs, perspectives, priorities, practices, and unique knowledge are far too widely dispersed. Added to this, creating a single policy for all research everywhere is akin to developing a single policy for all sports—saying that henceforth, all athletic contests shall involve a ball weighing 450 grams on a field measuring 50 by 100 meters. This kind of policy might be okay for soccer, but the impact on basketball would be interesting and the impact on swimming would be nonsensical. Even if we do hear from all voices and then attempt to create a single policy that averages out everyone’s ideas and concerns, the resulting policy still wouldn’t necessarily make sense in this case.

At the international level, certain agencies have been given the authority by the international community to develop global policies covering a range of issues, from human rights (UN OHCHR) to international monetary policy (IMF), economic development (IBRD), copyright (WIPO), health (WHO) and climate change (IPCC). UNESCO is the only international agency vested with the authority to develop science policy. Although it doesn’t have a team of dedicated science policy experts on staff, or a large team of scientists (which arguably makes it an unusual candidate for this job*), UNESCO does have a mandate to be “a laboratory of ideas” and to offer “a broad range of expertise in the fields of Education, the Sciences and Culture.” In service to its mission, UNESCO does amazing work, attempting to assemble international teams of individuals and organizations to research and consult on various policies.

It was with such commitment and vigor that UNESCO attempted in 2019 and 2020 to hear from the entire global research communication community and then craft a policy that fairly and accurately reflected the needs and perspectives of researchers from around the globe with regard to open science reforms. Some have hailed this effort as an historic success; others (including OSI) have noted that the final policy misses the broader point that one-size-fits-all approaches can’t work in research. In terms of effort, however, UNESCO’s final product—the UNESCO Recommendation on Open Science (see UNESCO 2021)—stands far above Plan S and the Nelson Memo in terms of commitment to our democratic public policy ideals, since both of these major policies involved no consultations of note with researchers or research communication groups. What’s doubly unfortunate is that even though both of these policies were ostensibly designed to only affect a subset of researchers within the borders of the EU and US, their impact is becoming global (Plan S, having a three-year head start on the Nelson Memo, is having the most impact at the moment). Plan S in particular, with its focus on the expensive article publishing charge method of paying for research publishing, is exacerbating existing global inequities in research due to a mismatch between the needs and resources of EU researchers and the needs and resources of the Global South (e.g., see Mwangi 2021 for a discussion of resource constraints in Kenya).

Is there another way to reform global research communication policy? More inclusive, evidence-based policies would be a good start, particularly to the degree that researchers themselves can be more involved in policy discussions and development. See the policy options discussion section later in this report for more detail.

*Contrast this with the World Bank (IBRD), for example, which is tasked with global economic investment and development but is staffed by thousands of economists and development specialists from 189 countries around the world and has field offices in 130 locations to keep close to projects and development issues. Similarly, WIPO has hundreds of patent attorneys, WHO’s team of 8000 professionals includes many of the world’s leading public health experts, including doctors, epidemiologists, scientists and managers, and the IPCC consists of the world’s top climate experts and support staff.


A first step might be to agree what open access even means. As mentioned above, this term generally means making information easier to find and share. At its core, this means free to read. But the exact definition involves lots of caveats, depending on who is doing the defining. Some say information is only open access if it is free to read plus licensed in a way that permits unlimited reuse with attribution (a CC-BY license). Others say free plus CC-BY is not sufficient, and that additional conditions are also necessary, like zero embargo (no delay between publishing and accessibility). Still others pile on even more conditions like metadata, repository requirements, and data sharing. The same caveats are true for open data, open code, open educational resources, and more, where different kinds of information have different kinds of open definitions, conventions, options and outcomes.

In this report, we will use the terms open and open access interchangeably (along with the term open solutions, which is a blanket term describing all open approaches). This overlap is intentional. The world outside the confines of scholarly communication experts has conflated these terms and used them interchangeably, so much so that trying to make a distinction between them is now more confusing than helpful. At least in the policymaking world, OA and open now mean the same thing.


Over the past 20 years, many valiant open access scholars have tried to organize the different ways in which “openness” is described.2 Their efforts have ventured beyond just defining open, and have instead focused on trying to understand why we speak so many different languages when it comes to our open goals and methods. These scholars have offered a multitude of plausible explanations, all correct to some degree, noting that many different philosophical, epistemological, and economic motivations exist for open. But how did all these differences arise in the first place? The economic explanation may be the correct one to adopt (Mirowski 2018), but there’s also a simpler explanation. As it turns out, the concept and practice of openness has been evolving along at least a half-dozen distinct historical paths over a very long time, in some cases for centuries already. Over the years, these histories have led to the formation of entirely different branches of open, each with its own completely and legitimately different ideas about what open research looks like and how it should grow in the future.

The first historical branch of openness comes from within research itself. The need to share ideas and discoveries has always been a bedrock principle of scientific investigation (Poskett 2022). Over time, researchers have been adept at inventing the solutions they need (and that work) to communicate more openly and effectively with each other, including forming new scientific societies; attending conferences; creating new journals; creating a multitude of data catalogues and indexes; creating new standards; creating binding guidelines on the social and ethical need to share research data (see Box 2, for example); and creating highly successful data sharing and research collaboration partnerships and networks, particularly in the life sciences, high energy physics, astronomy, and genetics.

A closely-related second branch of OA evolution centers around publishing practices. Research and the dissemination of research findings have always been closely tied.3 Widespread use of the printing press started around the late 1500s and was a transformative event in human history that fundamentally changed our expectations for how knowledge could and should be shared (see Johns 1998), particularly for the practice of systematized research, which was just beginning to take root. By the mid-1800s, publications established explicitly to share ideas and discoveries were proliferating—over 1300 journals now existed. It was crucial for scientists to be aware of what knowledge already existed in their field, but even then, doing so was becoming increasingly difficult. This need for more openness and increased awareness gradually led to standards and systems for what constituted clear and rapid sharing of knowledge, claims to discovery, proper citation methods and more (Csiszar 2018). These standards and systems have continued to evolve today in response to the ever increasing growth of research, in response to the ever changing needs of researchers, libraries, funders and governments, and in response to the huge market opportunities available for creating the best new systems.


Organized information sharing in health and medicine has existed in every society around the world and in every epoch of history, long before open access policies, computers or the Internet. Public health efforts have been a major driver of this need, typically focusing on priorities like malnutrition, infection, and sanitation. This vital knowledge was documented, taught, preserved, shared and improved upon across regions and generations (see Tulchinsky 2014 for a brief but rich overview). Over time, as the world became more connected through travel and trade, formal international health collaboration resources and organizations began to emerge. In 1892, for example, the International Sanitary Convention was formed to help control cholera. Following World War II, the newly-established United Nations created the World Health Organization, which began sharing information on malaria, tuberculosis, venereal diseases, maternal and child health, sanitary engineering, and nutrition (McCarthy 2002). By the 1960s and into modern times, information sharing across health and medicine was commonplace, highly valued and strongly encouraged (Fienberg 1985). This is a comically short recap of history; our point here is only to highlight, in this box, that information sharing in these fields is nothing new.

The rules and conventions governing how health and medical information is shared, as well as the conduct of medical research itself, largely evolved in an ethical vacuum until the mid-20th Century (there were codes like the Hippocratic Oath, but these were often incomplete or horribly biased). The first major modern rule on research ethics was the 1949 Nuremberg Code, written to ensure that Nazi crimes committed during the Holocaust would not occur again within the medical profession. The Nuremberg Code established that the informed and voluntary consent of research subjects was essential to any research trial; that proposed research must benefit society; that any proposed research study must safeguard the well-being of its subjects; and that the risks of any study must be calculated and justified. Most importantly, the Nuremberg Code imbued medical research with a responsibility to human dignity and human rights. This responsibility was expanded fifteen years later in the 1964 Declaration of Helsinki, issued by the World Medical Association, which not only reaffirmed the Nuremberg Code’s ideals, but also established that underrepresented peoples should be given access to studies and study results. Fifteen years after Helsinki, the Belmont Report of 1979 (issued by the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research) introduced the importance of using justice and fairness as guiding principles in medical ethics. The report also espoused that when research is supported by public funds, those who take the risk (i.e., the tax-paying public) must experience the advantages, not just those who can afford to access the data.

Many detailed policies governing how and why to share medical research have since been constructed on these ethical frameworks. In 1982, 1993, and 2002 the Council for International Organizations of Medical Sciences (CIOMS) issued a series of guidelines for clinical trials research, followed by similar guidelines from several other organizations during the 1990s and early 2000s (such as CONSORT for clinical trials data). Numerous medical privacy laws have also emerged, along with sophisticated policies and safeguards at the clinic and hospital level. Over the past 20 years in particular, governments and life sciences funders around the world have increasingly merged these old and new information sharing requirements—pushed for the most part by an increasingly complex array of funding and regulatory requirements, and recalling that information sharing has always been done (it’s mostly just the tools that are new)—with the ethical guidelines and imperatives for sharing data envisioned by Helsinki and Belmont. These resulting new guidelines are robust and expansive, establishing that researchers have a responsibility to protect not only patients and society, but research itself. Modern guidelines include strict compliance with complex clinical research protocols (often hundreds of pages long), clear transparency, proven lack of conflict, proven benefit, demonstrated replicability, advanced scientific and statistical rigor, robust data sharing plans, and more—every attribute of high quality and socially responsible research. Layers of regulatory review and approval by government and/or funding agencies are involved, plus alignment with patient privacy protection laws like HIPAA and GDPR, and oversight by and accountability to institutional review boards, scientific advisory boards, data safety monitoring boards, community advisory boards, and more.
And in the midst of all this, a seemingly infinite array of research collaboration programs and data sharing networks has evolved, all successfully abiding by their own sharing rules and requirements that supplement this new ethical sharing framework.

Until very recently, none of this history has had anything whatsoever to do with open access, open data or open science. Rather, history has simply unfolded organically over time with input from researchers in order to meet the ethical obligations of research and ensure that research is done right.

A philosophical offshoot of this second branch, technically distinct enough to be considered a third branch, is the growth of computer technology and the Internet starting around the mid-1980s. Once again, as with the advent of printing, these developments fundamentally changed our expectations about access to information, and paved the way for more open developments in research and society, such as the launch of GenBank in 1982 by the US Los Alamos National Laboratory, the world’s first public access repository of nucleotide sequences; creation of the world’s first preprint server, arXiv, in 1991 (originally for physics and astronomy research); publishing of the world’s first OA journals (through SciELO in 1997); formation of the Open Source Initiative in 1998 to help govern computer code; the world’s first OA megajournal (PLOS in 2000); and development of the first open educational resources (by the Hewlett Foundation in 2001). Today, it’s impossible to overstate the influence that technology and the Internet have had on all things communication, from rapid download speeds to social media to the proliferation of publishing platforms. These developments continue to raise our expectations and increase the potential for what communication can become, not just in research but across society.

The fourth distinct branch in the evolution of open knowledge has centered on social development. Over time, the slow and steady march of the scientific method—valuing evidence, openness, transparency, accountability, and replicability—and its success at unlocking true knowledge has influenced everything from philosophy to politics, law and industry, which in turn has created more “norming” of this approach, particularly in the West.4 For example, not long after the start of the Scientific Revolution in Europe, when natural philosophers such as Copernicus and Galileo successfully challenged prevailing explanations for how the world worked (as defined by Aristotle and the Catholic Church), social philosophers such as Locke, Hobbes and Rousseau (among others) were inspired to start questioning the world’s social order. This work led directly to revolutionary new political concepts, including France’s Declaration of the Rights of Man and the US Constitution (both passed in 1789), which drew on the Scientific Revolution’s insight that man and society, too, were tied to the natural world through natural rights.

In parallel with this growing appreciation of, and need for, the scientific method, science and technology became driving forces of global development in the 1800s, with breakthroughs in physics, medicine and biology igniting massive change throughout the world. The public’s thirst for knowledge and enthusiasm for learning more about the natural world became a global phenomenon that continued into the first decades of the 1900s. In the aftermath of World War II, Karl Popper’s “The Open Society and Its Enemies” made the case that the open knowledge ethos of science needed to spread beyond science and into the fabric of societies—that it was important now more than ever to construct societies where truth is widespread and easily accessible, lest we backslide again into a world ruled by totalitarianism and fascism.

Popper’s work is generally acknowledged as the formal intellectual beginning of the open society movement. Today, many open advocacy groups travel along an offshoot of this branch, characterizing the need for open science as a social justice issue. The Internet’s massive technological influence has both shaped and enabled this ongoing work, raising our expectations for what technology can do for open knowledge and open society, which in turn has led to higher expectations and even more change.

A fifth historical branch of open has been accountability. Before the mid-1950s, accountability in research was largely internal, focused on ensuring that research was accurate, and that systems for reporting and writing about research were broadly accepted. In the post-WWII era, as government spending on research increased dramatically, the need for greater public accountability in research also developed, both financially and in terms of public access to what we were spending money on and why. Systems of accountability have now evolved to sophisticated heights, from grant evaluation procedures to modern research impact evaluation procedures and freedom of information laws, all from different government agencies and with different objectives. For example, the world’s first nationwide open access policy for scientific research was implemented by the US National Institutes of Health in 2008 (Suber 2008). What we now recognize as peer review was born out of US Congressional oversight into research in the mid-1970s (Baldwin 2018). And many countries now have their own research impact evaluation systems, perhaps none more carefully designed than the UK’s Research Evaluation Framework (REF 2021).


Amidst this centuries-long evolution of open thought and practices, participants at a 2002 conference in Budapest (the Budapest Open Access Initiative, or BOAI) advanced the idea that open access meant only one thing: that in addition to being free, research also needed to be licensed in a way that optimized the potential for its unrestricted reuse, free of its typical copyright restrictions. The goals were simple: by making information free and easier to access and reuse, we could democratize research, lower publishing costs (by untethering publishing from publishers), and better serve the public good.

The language used in the BOAI declaration was lofty and Panglossian, reflecting the vision of the Internet circa 2002 that we were on the cusp of a world where information would soon flow freely across borders with little cost and enormous benefit for all mankind. Adding fuel to this declaration, several of the BOAI signatories would in the coming decade become the most prolific, eloquent and vocal opponents of high profits in commercial science publishing, including Stevan Harnad, Leslie Chan, Jean-Claude Guédon, Peter Suber, Michael Eisen, two representatives from the Open Society Institute, and one representative from SPARC (the Scholarly Publishing and Academic Resources Coalition; SPARC in particular would lead the anti-publisher march over the next 10-15 years).

Subsequent modifications to BOAI made at conferences in Berlin and Bethesda stipulated that research also needed to be made immediately available, with no delay allowed between publishing and free access to the public.


Over the next decade—promoted by the effective voices who helped craft this statement, supported by the money and organizing acumen of SPARC and the Open Society Institute,5 and made timely by the spiraling cost of journals for academic libraries, where commercial publishers played the role of boogeyman to perfection6—the BOAI approach to open access became the bedrock philosophical foundation for most subsequent open access policies, and it remains so today. All other historical branches of open access have been ignored.

This isn’t to say that BOAI’s policy recommendations were wrong. To many believers, they were exactly on target. Rather, the most vocal post-BOAI open advocates tended to portray open access as a contest between good and evil. Policy debates became urgent, polarized and confrontational—even personal. The policy space became a battlefield where there was no middle ground, and no willingness to understand issues from all sides, ignoring the different histories involved and the differing needs and points of view centuries in the making. Ideology was not only trumping the expert-driven democratic policymaking ideal, it was beating it into the ground with a hammer of righteous might (see Box 3 and Plutchak 2022). As one research leader remarked on the OSI listserv in 2018, we were going about reforming science in a very unscientific manner.


The open access debate is more civilized today than it was during its heyday—roughly the 15 years between 2003 and 2018. In the words of T Scott Plutchak, describing the legislative and political efforts of the Scholarly Publishing Roundtable between 2009 and 2012—work that eventually led to the 2013 Holdren Memo’s expansion of the US Public Access program—“One of the paralysing results of [the] pitched battle [between opposing camps at this time] was that individuals who might have been allies in other circumstances found themselves on the opposite sides of a very public rhetorical war” (Plutchak 2022). Still, although the public rhetoric may have died down, very strong differences of opinion remain. For example, to some in the open access community, publishers are not legitimate parts of the research ecosystem at all, but parasites who feed on it. This opinion is even reflected in the language of several major open access policy statements in use today, and factors into the philosophy of several leading groups in the open access funding world. In a sense, open access policymakers are being influenced not just by the facts and evidence about open access, but also by a belief that the effort to develop open access policies is a battle between good and evil. This is not an outlier opinion; even some in OSI support this philosophy and policymaking approach. From a democratic and evidence-based policymaking perspective, however, these opinions make it difficult to focus on and be led by facts. Rather than working together to help researchers improve research communication based on their needs and perspectives, the good versus evil approach puts a thumb on our scale of objectivity.


The pirate publisher Sci-Hub probably best exemplifies the good versus evil approach. Created in 2011, Sci-Hub has used stolen and donated university credentials and other hacking methods to download nearly 90 million copyrighted books and journal articles from research publishers, which it then distributes for free through the Sci-Hub website. Publishers have successfully obtained numerous copyright infringement injunctions against Sci-Hub, but the site keeps moving to new internet service providers and therefore keeps operating. Many researchers around the world see Sci-Hub’s work as heroic; and even though many universities block the site, researchers use it anyway because it fills an important need. Estimates vary, but more than 50 million articles per month are downloaded from this site (Owens 2022). Justifying its actions on the Sci-Hub home page and even soliciting funds for its legal defense, the website states “The position of Sci-Hub is: the project is legal, while restricting access to information and knowledge is not. The current operation of academic publishing industry is massive violation of human rights” (Sci-Hub 2023).


Europe’s Plan S is transforming the world of scholarly publishing. The plan’s coordinating body is cOAlition S. As stated on the cOAlition S website, “Publication paywalls are withholding a substantial amount of research results from a large fraction of the scientific community and from society as a whole. This constitutes an absolute anomaly, which hinders the scientific enterprise in its very foundations and hampers its uptake by society….[Our] collective duty of care is for the science system as a whole, and researchers must realise that they are doing a gross disservice to the institution of science if they continue to report their outcomes in publications that will be locked behind paywalls…. There is no valid reason to maintain any kind of subscription-based business model for scientific publishing in the digital world, where Open Access dissemination is maximising the impact, visibility, and efficiency of the whole research process” (Coalition-S 2023).


Charitable foundations like Gates, Mellon, and Arcadia are major players in open access reform, leading and contributing heavily to open access policy reform efforts around the world. For several of these foundations, their work is fueled by the philosophy that our current system of scholarly communication is unjust. These groups are not necessarily wrong, of course, nor are they required to act objectively like government policymaking bodies, but the influence of these groups in the open access space has been significant, and has created so much policymaking overlap between advocacy channels and official policy channels that it’s difficult to tell where ideology ends and objectivity begins. On the Arcadia Fund’s website, we read that “Access to knowledge is a fundamental human right. It advances research and innovation, improves decision-making, exposes misinformation and is vital to achieving greater equality and justice. The internet has transformed how we share, find and use information. But some materials that should legally and morally be free for anyone to access are still constrained by paywalls and restrictive copyright regimes…. Restrictive copyright laws are a significant barrier to open access. They benefit few, while denying many access to vital knowledge. We support efforts to challenge and improve existing laws, regulations, exceptions and limitations so that people have better access to knowledge they need” (Arcadia Fund 2023).

Today—and despite a large, meaningful and influential array of open tools, policies and efforts, from the Panton Principles and FAIR Principles governing open data (2009 and 2016 respectively) to a thick alphabet soup of important organizations and principles (DORA, GitHub, OSF, Lindau, PubMed Central, et al)7— the BOAI approach has become an article of faith for most of the world’s significant open access policies,8 from Europe’s Plan S to UNESCO’s open science policy to the University of California’s transformative agreement with Elsevier and the new US open access policy (the Nelson Memo).

In these policies, the idea that open means free, immediate and licensed for unlimited reuse goes unchallenged. Most major funders have also fully accepted this approach to open access.9

As our global open access policymaking efforts move forward, it’s important to remember there are many histories and forces still influencing open practices. Understanding this will help us better understand what needs to be done and where we might want to concentrate our efforts for maximum effect and sustainability. In this policy space, there is a tangle of history, actors, needs, motives, and objectives. We may want “open” to be a simple notion with a straightforward past and an obvious future, but as we shall continue to explore in this report, it is none of these things.


The Open Scholarship Initiative (OSI) was founded in late 2014 to listen to all sides in this debate, lower the temperature of the discourse, broaden understanding of various perspectives, and develop fact-based approaches to open access policy. There have been many other multi-stakeholder conversations happening as well, such as FORCE11, the Research Data Alliance (RDA), the Committee on Data of the International Council for Science (CODATA), and the Open Access Scholarly Publishers Association (OASPA). OSI’s unique value proposition has been to bring together high-level representatives from all key stakeholder groups and organizations and have them work directly together to find common ground on key issues in scholarly communication—not just open access, but tangential issues like impact factors, peer review and the culture of communication in academia. An important part of our mandate has also been to represent and protect the interests of all countries in this conversation, not just focus on what works for the EU and US.

Where does OSI stand on current OA policies? As noted in our 2020 Common Ground paper (Hampson 2020), it’s fair to say most participants in OSI agree that (1) research and society will benefit from open done right; (2) successful solutions will require global and inclusive collaboration; (3) connected issues (like peer review and impact factors) need to be addressed; and (4) open isn’t a single outcome, but a spectrum of outcomes.10 Beyond this, OSI participants have a wide variety of opinions, and our role isn’t to speak with one voice. There are some in OSI who are thrilled with these policies, and others who aren’t. It’s also probably fair to say that amongst the analyst community—and this is the community which has been most active in OSI conversations over the years—there has been a considerable amount of discussion regarding the pros and cons of various policy approaches, and a general understanding that we need to be on the lookout for unintended consequences.

Probably the most impactful transformation happening today involves flipping the subscription model for scholarly publishing to a model where authors pay for publishing via article processing charges (APCs). The APC model is mandated by Plan S, covering a large portion of the EU (and even though this affects only a small portion of global publishing, publishers have been transitioning to Plan S requirements for years now), and is strongly encouraged by the new Nelson Memo covering all federally funded US research (which will give a huge new push to the transition).11 The general idea is that authors can simply tap their research budgets to pay for publishing; in exchange, the publisher gets paid and makes the article free to read.12

There are many other transformations happening as well, of course, such as eliminating embargo periods, requiring a CC-BY license on all work in all disciplines, improving data availability, negotiated agreements at major universities whereby access to published work and APC charges are bundled together, and more. It is unfortunately well beyond the scope of this paper to dive into each of these policy prescriptions at length. For our purposes here, many in the OSI community have expressed four general concerns about the overall nature of these reforms: (1) ignoring the unintended consequences of APCs; (2) ignoring the evidence that in practice, openness exists along a broad spectrum of outcomes; (3) our tendency to overreach and design policies for which we lack the requisite expertise; and (4) forcing one-size-fits-all open solutions on researchers, even where these solutions don’t match researcher needs and resources.13 The following subsections describe each of these concerns in more detail.


The APC-funded approach to open (which is central to policies like Plan S) is not free. Indeed, APCs have risen to stratospheric levels for premium research journals over the last few years, now topping US$10,000 per article at top research journals. Even the average APC (around US$2,600 for OA mirror journals, although there is wide variation by field, publisher, and journal quality; see Smith 2022) is now far higher than most researchers around the world can afford unless they are based at a major institution in the US or EU or are well supported by a private funder.14

As noted in OSI’s official critique of Plan S, many worry that widespread use of APCs will widen the chasm between the haves and have-nots in research, substituting one equity imbalance for another: the inability to pay for access (paywalls) due to high subscription costs is replaced by the inability to pay to publish (“playwalls”) due to high APCs. Since this chasm roughly equates to a fracturing of the open access policy space along economic and regional boundaries, the US and EU will have their own rich open universe while other parts of the world have a less endowed one, and the gaps between these worlds may end up hurting research instead of helping it (for many reasons: technical difficulties with sharing, cost differences, protectionism, less collaboration, and more).

The APC approach to open access also doesn’t reduce the power of major commercial publishers to the degree BOAI and other OA ideologies originally intended. The power of major commercial publishers is increasing instead, because the APC model is proving to be quite financially robust (Pollock 2021 and Zhang 2022) and because society and university publishers need help navigating the rapidly changing regulatory landscape.15

This power will probably continue to grow in the coming years as publishers lock in their consolidation by offering value added tools to their customers (like advanced search and synthesis). Rich countries and institutions who can afford to climb the ladder of publisher-controlled offerings will partake in a buffet of new capabilities made possible by more open access, while lower income institutions and countries will have to make do with the bare minimum Author Accepted Manuscripts and Excel spreadsheets mandated by open policies. There is no money or incentive to make this kind of advanced access available for everyone everywhere. Rather, everyone will be required to contribute their information for free, and only the rich will be able to extract maximum value from it.


As discussed, our most influential open access policies are not grounded in a wealth of evidence, as sound public policy should be, nor have they been developed through the expert and impartial consultative process we expect to see in democratic societies. Instead, they are ideologically grounded. This ideological approach makes sense to those who consider the act of locking research behind subscription paywalls to be inherently immoral. For this group, trying to put commercial publishers out of business is a morally justified imperative. But not everyone feels this way. In practice, open access efforts are driven by a variety of motives, such as the desire to improve impact, efficiency, reproducibility, accountability, transparency, and collaboration. Many researchers also readily appreciate that publishing adds value to the research record through processes like gatekeeping, peer review and preservation, and note that without a reliable quality-control process akin to that of quality journals, the scientific record may become unreliable.

This difference between ideology and evidence is also apparent when it comes to defining what open means. Ideology says that open is, at minimum, CC-BY licensed information without embargo, but evidence clearly shows open comes in many different forms and looks different for different users in different fields and different parts of the world. OSI participants developed an information model called “DARTS” to describe this open spectrum, where the five letters of this acronym stand for discoverability, accessibility, reusability, transparency, and sustainability. On this spectrum, we allow for the fact that some kinds of open are free to read but still copyrighted; other kinds may be closed to the public but robustly open and interoperable within designated user groups (this solution is common in clinical research); and still other kinds are public domain licensed but not very discoverable, transparent or sustainable. So-called “green” open, which accounts for the vast majority of open resources, is exactly this: a hodge-podge of information that is free to read but whose discoverability, accessibility, reusability, transparency and sustainability vary widely. This diversity reflects the fact that different user groups have different resources, needs, incentives, motives, conventions, restrictions, and so on. It doesn’t mean they shouldn’t strive to improve their openness, but it also doesn’t presuppose that one type of open is necessarily superior for all users and circumstances. Movement toward better open solutions should continue, but this movement should be based on evidence and need, not assumptions.


DISCOVERABLE: Can this information be found online? Is it indexed by search engines and databases, and hosted on servers open to the public? Does it contain adequate identifiers (such as DOIs)?
ACCESSIBLE: Once discovered, can this information be read by anyone? Is it available free of charge? Is it available in a timely, complete, and easy-to-access manner (for instance, is it downloadable or machine-readable, with a dataset included)?
REUSABLE: Can this information be modified? Disseminated? What conditions (both legal and technical) prevent it from being repurposed or shared at will?
TRANSPARENT: What do we know about the provenance of this information? Is it peer reviewed? Do we know the funding source (are conflicts of interest identified)? What do we know about the study design and analysis?
SUSTAINABLE: Is the open solution for this information artifact sustainable? This may be hard to know—the sustainability of larger, more established solutions may evoke more confidence than new, small, or one-off solutions.

The DARTS framework is currently only a concept and not a measuring tool, although quantifying it might help make it useful to open research in other ways. For example, imagine a scale running from left to right, and then assigning a value for each DARTS attribute of a particular information artifact. Assigning a transparency score of zero means we know nothing about where this information came from, whereas a nine means we are very clear about this. Doing the same for each DARTS attribute, we could then assign a perfectly open object a DARTS score of 99999, and an absolutely closed object a score of 00000. Almost all information exists somewhere in between. This paper, for example, will have good discoverability and accessibility (although not as good as a commercially published report), limited reusability, acceptable transparency, and good faith sustainability (although not perfect, like commercial publishers). Therefore, its DARTS score might be 77586 or some such.
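To make the thought experiment concrete, here is a minimal sketch of how such a score might be encoded. The DARTS framework is only a concept, so the class name, the 0-9 scale, and the example ratings below are illustrative assumptions drawn from the text, not an official implementation:

```python
from dataclasses import dataclass

@dataclass
class DartsScore:
    """Hypothetical encoding of the DARTS spectrum.

    Each attribute is rated 0-9, where 0 means fully closed or
    unknown and 9 means fully open for that dimension.
    """
    discoverability: int
    accessibility: int
    reusability: int
    transparency: int
    sustainability: int

    def __post_init__(self):
        # Reject ratings outside the illustrative 0-9 scale.
        for name, value in vars(self).items():
            if not 0 <= value <= 9:
                raise ValueError(f"{name} must be between 0 and 9")

    def code(self) -> str:
        # Concatenate the five digits in D-A-R-T-S order, so a
        # perfectly open object scores "99999" and a fully closed
        # one scores "00000".
        return "".join(str(v) for v in (
            self.discoverability, self.accessibility,
            self.reusability, self.transparency,
            self.sustainability))

# The paper's own self-assessment from the text: good discoverability
# and accessibility, limited reusability, acceptable transparency,
# good-faith sustainability.
paper = DartsScore(7, 7, 5, 8, 6)
print(paper.code())  # -> 77586
```

Treating the score as an ordered five-digit string (rather than a single sum) preserves which dimension is weak, which is the point of the spectrum: two artifacts with the same total can be open in very different ways.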

Indeed, after 20 years of pushing for ideologically perfect open solutions, most of the world’s open information is still published in other formats (which isn’t to say closed, just imperfectly open). Estimating the exact distributions of open outcomes depends on which indexes are analyzed (different indexes skew toward different journal types and disciplines), which regions of the world are being measured, and the sampling methodology used (see Box 4), but according to a recent analysis of eight million journal articles listed in the Web of Science between 2015 and 2019, BOAI-compliant articles (for which Gold OA is a rough proxy) account for only a small fraction of the total (Table 1 and Simard 2022). What researchers want and need for open information, then, isn’t necessarily always the same as what’s being prescribed. Open access policy may require one outcome, but evidence shows many different outcomes are possible, even preferred.


Table 1. Open access shares by field (% of articles, Web of Science, 2015-2019)

Field                          Total OA    Of which Gold*    Of which Green
Natural Sciences                   45.4              19.9              36.3
Engineering and Technology         30.4              13.0              21.4
Medical and Health Sciences        50.0              20.8              40.4
Agricultural Sciences              35.9              17.1              22.0
Social Sciences                    35.5               7.9              29.8
All Fields                         42.9              18.1              33.8

Source: Simard 2022.
*Gold OA represents open access materials which have been made immediately and freely available to the public without embargo, typically in exchange for an APC. It does not necessarily mean the materials are CC-BY licensed, however. DeltaThink estimates that of all the open materials published during this same time period (2015-2019), about 55% in aggregate were CC-BY licensed, meaning that only about 10% of all published materials (55% of 18.1%) are strictly BOAI-compliant. See Pollock 2022. This figure is consistent with other global estimates of gold open (see, for example, Zhang 2022 and Piwowar 2019).
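The arithmetic behind the footnote's ~10% figure can be checked directly. The figures below are simply those quoted in the text (18.1% gold from Table 1, 55% CC-BY per Pollock 2022); this is an illustration of the calculation, not new data:

```python
# Share of all Web of Science articles (2015-2019) that are Gold OA,
# from Table 1 (Simard 2022).
gold_share_of_all = 18.1   # percent

# Share of open materials that are CC-BY licensed (DeltaThink
# estimate as quoted in the text; Pollock 2022).
cc_by_share = 55.0         # percent

# Strictly BOAI-compliant share of ALL published materials:
# 55% of 18.1% is roughly 10%.
boai_compliant = gold_share_of_all * cc_by_share / 100
print(round(boai_compliant, 1))  # -> 10.0
```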

How can we know for sure which of these outcomes are best? We can’t—we need more research, without which our policy conversations have been stymied and our positions hardened along ideological lines. This need for more research isn’t a red herring argument like the one the tobacco industry used in the 1970s and 80s to slow down anti-smoking policy. In truth, as a community we have conducted almost no solid research into fundamental questions, such as which researchers need open and why, what types of open work best in each field, how short embargo periods can be, the cost-benefit of replacing the subscription model, the open access citation advantage (see Box 8), and more.16


Scholarly communication research isn’t an exact science. Researchers will typically investigate open access in unique ways from one study to the next, segregating their findings into different types of open that aren’t necessarily comparable, or using the same category names like green and gold but defining these categories in slightly different ways, or sampling data from different publication indexes, over different time periods, focusing on different fields and countries, or using different sampling methodologies. As a result, these studies always arrive at different conclusions, often markedly so, about how fast open access is growing.*

Some of the most methodologically refined analyses to date have taken measurements across multiple indexes, regions, fields and time periods (like Piwowar 2018 and Archambault 2016), but even here, the aggregate data from these studies may be less meaningful than accurately understanding whether a particular field or country is currently making progress toward its open goals. And across all studies, none to our knowledge have fully captured the breadth and depth of green open, which is defined in a variety of ways by researchers but should arguably include all research-related information that is free to read and not clearly gold or bronze, from preprints to self-archived reports on university websites, to datasets available in repositories (and not otherwise CC-BY or CC-0), to most of the information in massive archives like PubMed Central, and copyrighted work that is now free to read everywhere, not just academic journal articles in OA repositories (see, for example, the wealth of information being catalogued by

This inconsistency and variation are important to understand for two reasons. First, because our analyses can vary so widely, OA policymakers can be tempted to use hand-picked statistics that support their favorite OA story line, such as rapid growth rates for their preferred form of open. This bias is not unusual in policymaking, of course, but we can help give policymakers a more solid foundation of evidence by firming up our research methods. Second, only two types of open are deemed “acceptable” by most of the world’s major OA policies: gold and diamond.** Therefore, it’s important to develop a better understanding of how these forms of open are actually faring, particularly with respect to their historic growth trends and to other forms of open. As noted in Table 1 and Figure 1, the marketplace has a wide variety of open outcomes.

How do we know which of these outcomes is working best for researchers? We don’t. Many studies over the years have tried to measure the growth of open solutions, but to our knowledge there have been no studies to date that investigate which of these different solutions researchers actually prefer. We do know from surveys (see, for example, Hampson 2023) that the APC solution is widely disliked, so we can infer from this that gold open is not an outcome most researchers look upon fondly (at least in aggregate) if given a choice. Some researchers also don’t like the idea of paying to publish their work, and others feel that pay-to-publish journals are lower quality, or have lower reputations or lower readership. University tenure committees typically feel the same way, although this attitude is gradually changing. These factors might help explain why gold open has been stuck below 20% of all open outcomes (and only around 5-10% of all published work; see OSI 2021) for the last 20 years.

Gold open will surely grow in the coming years, though, due to the influence of Plan S and the Nelson Memo, piled atop current growth trends (Piwowar 2019). This isn’t necessarily a bad development if the issue of APCs can be resolved. But given historical growth trends, it may be helpful to gather more evidence about what researchers actually want and need from open publishing solutions so we can make sure we focus on growing the right kinds of solutions. To date, we can only tell that the marketplace is producing a wide variety of outcomes, and that most of these outcomes are not the ones OA policymakers want (at least yet). A first step in this process might be to fund more carefully designed and definitive studies that can give us a clearer picture of the open environment so policymakers can develop better evidence-based policies moving forward.

*For example, older, higher-impact, STM-centric work will be listed in indexes like Scopus and WOS (Web of Science). Measuring the open content in these indexes (as in Simard 2022) will tend to show slightly lower growth. Newer journals are more likely to be picked up by newer indexes like Unpaywall, which detects the DOI of journal articles (older articles are less likely to have been assigned a DOI). Measurements done from this index (like Piwowar 2019) will tend to show higher growth.

**Gold open is where the final version of record of an academic journal article is made freely and immediately available for the public to read and reuse, usually in exchange for an article processing charge (APC), which is often (but not always) paid by the author, the author’s grant funds, or the author’s library or institution. Diamond or platinum open, which are far less common than gold but still acceptable to Plan S, are variations on this theme in which few or no author-side charges occur. In these cases, publishing is managed and subsidized by a research community, academic community, or nonprofit, and no publishing charges are drawn from the author’s pocket, grant funds, or research institution.

What we can clearly see from a number of researcher surveys over the years17 is that, at minimum, getting free and immediate access to journal articles isn't the only concern researchers have. Researchers also want lower publishing costs, improved connections with colleagues, and increased visibility and impact for their research work. This isn't to say that improving access isn't an important goal, just that it is one of many goals, and we may not want to reach it by trampling on other goals and creating a world of unintended consequences that end up being harmful to research on balance. If we follow the evidence, we may want to focus first on the highest-priority communication needs of researchers instead of on the priorities highlighted by current OA policies. Exactly how these priorities might rank is discussed later in this report.


Open access policymakers are generally guilty of at least two kinds of overreach. The first involves designing policies without possessing the needed expertise. Open access, open science, open data, and other open movements all have different perspectives and priorities. An open science-led effort makes no sense for humanities researchers; an open access-led effort makes no sense for open data. Today, however, we see a good deal of mission creep, where open access advocates are designing policies about the future of open data, and where plans for the future of journals are designed with STM disciplines in mind, not the humanities. While there is some overlap between these communities with regard to tools and basic principles, they are in fact very different. It is therefore ill-advised, from a policymaking perspective, for open access advocates alone to write such policies (as is the case with Plan S, the UNESCO open science policy, and the US Nelson Memo), since these policies do not even remotely capture the facts and nuances of all open practice communities. The reverse would never be tolerated: the open access community would not accept open data advocates working alone to decide what the future of open access should look like.

To elaborate on the case of open data: all major open access policies include open data requirements (usually provisions to make data FAIR, that is, findable, accessible, interoperable, and reusable, or to deposit data in specific repositories), but most lack any evidence-based operational details. In truth, open access policymakers (as distinct from open data experts) have very little grasp of what open data actually looks like, particularly in clinical research, where the OA community wants to see faster discovery. This data realm is awash with challenges, such as protecting patient privacy (conforming with existing data protection laws like HIPAA and GDPR), protecting proprietary data (owned by the drug companies who sponsor research work), preventing the misuse and misinterpretation of data, and struggling to make the sharing of datasets complete, timely, and compatible, even when the data is being generated by the same research group. OA policymakers have not even begun to understand the complexity, diversity, and best practices of this real-world sharing, yet they are designing one-size-fits-all policies that mandate sharing nonetheless, based on open access ideals. Table 2 describes some of this complexity, and shows how the type of data sharing envisioned by open access policies fits into this array of other data sharing models.


Governance structure | Number and linkage of parties | Degree of data availability | Degree of freedom to use data | Common governance challenges | Primary governance design pattern
Pairwise | One-to-one | Medium/High | Medium/High | Uneven status of parties, value of data | Informal or closed contract
Open Source | One/some-to-many | High | High | Rights permanently granted to user | License
Federated Query | Many-to-many, via platform | High | Medium/Low | Defection of creators | Contract and club rules
Trusted Research Environment | One/some-to-many | Medium/Low | Medium/Low | Users agree to be known, surveilled | Data transfer and use agreements
Model-to-Data | One-to-many | High | Low | Not all who apply can use data | Restricted analyses, data curation
Open Citizen Science | Many-to-many | High | High | Capacity for analysis is uneven | Contract or license
Clubs, Trusts | Some-to-some | Medium/Low | High | Easy to create things governed more liberally; trusteeship can be revoked | Club / Trust rules
Closed | Many (to none) | Low | High | Fundamental limits to collaboration | Public laws, security protocols
Closed and Restricted | Some (to none) | Low | Low | Fundamental limits to collaboration | Public laws, security protocols

Source: Mangravite 2020

In the meantime, there is a long list of promising work being done in open data and many success stories to share, but these experiences originate from efforts that have nothing whatsoever to do with global open access policies. For example, there are a number of highly successful research collaboration efforts that demonstrate what the cutting edge of open data development can accomplish (like DataSpace, Vivli, SDSS, CERN, GenBank, DataSphere, and Sage Bionetworks), and what evidence-based data sharing and collaboration challenges exist on the road ahead.

These data sharing networks are most often developed through private partnerships with strict and distinctly BOAI-unfriendly data sharing guidelines and eligibility, not through generalist OA policies and repositories. Open access policymakers have not studied, learned from, or even cited these examples. See Boxes 5-7 on the following pages for more detail.

A second and equally important type of overreach is that policymakers often grant special powers to open that aren’t merited. Take the widely touted claim, for example, that open access increases citation rates. The evidence for this phenomenon is actually not clear, nor is it clear that focusing on increasing citations instead of increasing research quality is the right approach to take (see Box 8).


OA policies requiring open data deposits are mostly silent on the details that make this data actually useful to researchers, such as data management, vetting, and curation. Leading repositories that already manage vast quantities of research data have designed intricate policies to add this kind of value to open data (which isn't always openly licensed) as well as to protect discovery, copyright, and patient privacy as needed. The data sharing agreements upon which these resources are built are legal documents that typically define (at minimum) how long data can be used, for what purpose(s), by what means, what constraints will apply, and what financial, confidentiality, and security requirements exist. Many of these agreements (especially in the life sciences) also protect against misuse by allowing only qualified participants to deposit and use data. None of these real-world data sharing practices are acceptable under the open data policies designed by open access policymakers, however. Instead, these OA-designed policies articulate only a vision for CC0-licensed data that is uncurated and available for anyone to use and reuse.


It is not practically possible to make the full raw data-set from the LHC [Large Hadron Collider] experiments usable in a meaningful way outside the collaborations. This is due to the complexity of the data, metadata and software, the required knowledge of the detector itself and the methods of reconstruction, the extensive computing resources necessary and the access issues for the enormous volume of data stored in archival media. It should be noted that, for these reasons, general direct access to the raw data is not even available to individuals within the collaboration, and that instead the production of reconstructed data (i.e. Level-3 data) is performed centrally. Access to representative subsets of raw data—useful for example for studies in the machine learning domain and beyond—can be released together with Level-3 formats, at the discretion of each experiment. (See CERN 2023)


The GenBank database is designed to provide and encourage access within the scientific community to the most up-to-date and comprehensive DNA sequence information. Therefore, NCBI places no restrictions on the use or distribution of the GenBank data. However, some submitters may claim patent, copyright, or other intellectual property rights in all or a portion of the data they have submitted. NCBI is not in a position to assess the validity of such claims, and therefore cannot provide comment or unrestricted permission concerning the use, copying, or distribution of the information contained in GenBank. (See NIH 2023a)


(A) Data Use Agreement—All Data Requestors requesting data must execute the Data Use Agreement (DUA). The DUA is the product of extensive negotiation with the organizations that contribute data to Vivli. This agreement is non-negotiable. If granted access to the data, it is for the express purpose outlined in the research proposal. Any changes to that proposal will require re-review and approval by the data contributors involved; (B) Qualified Statistician—All research teams submitting a Vivli data request must include a qualified statistician. The statistician must have a degree in statistics, or similar field, or publications relevant to the proposed research where the individual conducted the statistical analysis; (C) Publication Plan—The dissemination plan must include a definitive statement to publish and disseminate your findings to contribute to furthering scientific knowledge. (See Vivli 2023)

There are, of course, many other examples of how real data sharing is working in today's research environment (such as DataSphere, CAVD DataSpace, Yoda, and Sage Bionetworks), many high-profile success stories in sharing science data (like CERN, the Hubble Space Telescope, the Human Genome Project and the Sloan Digital Sky Survey), and many other open data research repositories in use (not even including institutional repositories). Owing to the experience and expertise of these groups, as well as to the efforts of the many outstanding organizations working to create best practices in data management and data repository function (such as the Research Data Alliance, COAR, CODATA, JISC, NISO and OpenAIRE), we have also learned a lot about the pros and cons of various data governance structures (see Table 2 on the previous page), and about the challenges involved in sharing more research data (see Box 6 on the following page). Our goal with open solutions policies, then, should be to learn from all this history and experience rather than trying to reinvent the wheel. What we may find is that the "imperfect" open approaches that have evolved in the marketplace are the ones that actually work. By learning from these, we can create better and more realistic policies; at the same time, we can help these existing systems operate even more efficiently by bringing them into the fold regarding best practices in open data.


Competition, collaboration, and data sharing are three key drivers in research. Each of these drivers has unique practices, outcomes and challenges, but they are also all closely linked and affect each other. Competition has always been fundamental to the very fabric of research, for example, but as research becomes increasingly complex, collaboration is also increasingly important, and along with this, data sharing as well. Still, relatively few researchers (around 15%) currently share their data outside a limited group of colleagues in any comprehensive and meaningful way (notable exceptions include astronomy, high-energy physics and genomics; see NASEM 2020) due to a variety of concerns and challenges. Similarly, the race to discover has always been a key part of science, but this race sometimes leads to a hyper-focus on secrecy, a temptation to commit fraud, hiding negative findings, and other behaviors that conflict with the needs of good science and open science.

Understanding how these three drivers operate and are evolving in the real world is important for understanding how to improve the research of tomorrow. For example, the needs and concerns of researchers with regard to data sharing generally fall into six main categories: impact, confusion, trust, effort, access, and equity.

  1. IMPACT: Will my research have greater benefit if I share my data? What benefit will I get from this personally? Will my open data efforts be well received by colleagues and tenure committees?
  2. CONFUSION: Where should I begin? What kind of license should be used? What data should be shared, in what format, with whom, and in what repository?
  3. TRUST: Will my open data be misinterpreted or misused? Will my potential discoveries be scooped?
  4. EFFORT: Will complying with data requirements take up too much time? Different publishers and repositories all have different compliance formats and requirements. Will I be responsible for maintaining my data over the long-term?
  5. ACCESS: Who needs access to my data anyway and for what reasons? Some datasets are so large that they can’t be uploaded via the Internet. For what purpose will my data be used? Would data summaries suffice instead?
  6. EQUITY: Overall, is this data sharing mandate even fair to me and my colleagues? For example, data processing capabilities vary widely by region, field and institution. Researchers from lower-resourced institutions often lack the huge support networks and processing facilities that more privileged researchers might take for granted. So why should these lower-resourced researchers share their hard-earned information and then not be able to extract any value from it?

There are also many challenges regarding the data itself. These include:
    • How can we fund and maintain the infrastructure necessary for data processing, curation, and preservation?
    • How do we protect against link rot, data decay, and data obsolescence over time?
    • Big data keeps getting bigger. Can our sharing tools keep pace?
    • What happens to data once a research facility is shut down and data needs to be preserved and curated for decades more?
    • What happens to long tail data, and the data that sits on laptops or personal websites with minimal or no attached metadata or documentation? Not being able to capture this contributes to issues like irreproducibility, duplicate research, and innovation loss.
    • Who pays long term for data care and maintenance?
    • How do we ensure the timely sharing of critical data (insofar as rapid sharing impinges on secrecy)?
    • How do we ensure better data quality, consistency and completeness?
    • How do we standardize data formats and collection processes (where necessary) to ensure data completeness and comparability?
    • How do we create internationally agreed-upon minimum standards for metadata (further complicated when metadata are not in English)?
    • How do we establish interoperability and searchability between data platforms (without which researchers need to search and make requests across multiple platforms)?
    • How do we create internationally agreed-upon standards for Data Availability Statements?
    • Can we streamline the governance structures used by different platforms?

Other challenges include the fact that very little funding support is available to facilitate data sharing and to improve data infrastructure systems; code sharing needs to be improved (for many kinds of research, sharing or reanalyzing data without the original code means just sharing and preserving a jumble of numbers); high-level data sharing policies often conflict (for example, the EU's GDPR conflicts with most global clinical trials data sharing policies, and this conflict has yet to be resolved; see Staunton 2019 and Hampson 2021 for more detail); and more (e.g., regarding which metrics are best for evaluating open data, and how to reward open data practices).

Source: Derived from Hampson 2021a
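To make the metadata and Data Availability Statement challenges listed above more concrete, here is a minimal, hypothetical dataset record of the kind a shared minimum standard might require. The field names loosely follow DataCite/Dublin Core conventions; this is an illustrative sketch, not a proposed schema.

```python
import json

# Hypothetical minimal metadata record; field names loosely follow
# DataCite/Dublin Core conventions and are illustrative only.
record = {
    "identifier": "doi:10.1234/example",  # placeholder, not a real DOI
    "title": "Example survey dataset",
    "creators": ["Doe, J.", "Roe, R."],
    "publication_year": 2023,
    "language": "en",
    "license": "CC-BY-4.0",
    "format": "text/csv",
    "availability_statement": "Data are openly available in repository X under the license above.",
}

# A minimum standard implies a checkable core: these fields must be present
# and non-empty before a repository accepts the deposit.
REQUIRED = {"identifier", "title", "creators", "publication_year", "license"}

def missing_fields(rec):
    """Return required fields that are absent or empty, sorted for stable output."""
    return sorted(f for f in REQUIRED if not rec.get(f))

print(json.dumps(record, indent=2))
print("missing required fields:", missing_fields(record) or "none")
```

Even a check this simple would catch many of the completeness problems noted above; the harder, and still unsolved, parts are reaching international agreement on the field list and handling metadata that is not in English.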


To date, none of the major global open solutions policies, or even the discussions leading to these policies, have focused on the importance of curation in making research information useful. What is curation? Essentially, it means organizing information. This organization is everywhere around us: imagine grocery stores where food is not organized into aisles, product catalogues without consistent ways of describing and displaying information, genealogy records without the metadata that enables different family trees to connect together, fields of study without a sophisticated understanding of the knowledge that already exists and how it's organized, or search engines that don't know how to crawl the web. Organizing information is a prerequisite to making it useful (at least for humans).

In an undertaking like research, organizing information has added dimensions like making sure units of measure are standardized across fields, making sure the data being collected across studies is consistent, filling in missing pieces of data, adding explanations, and otherwise properly cleaning, documenting, labeling, transposing, and formatting work for sharing. All this effort takes time and money (someone needs to do this, and not for free), and the time for doing this is limited because projects need to report data within a given window, grant funding eventually runs out, and principal investigators and researchers eventually move on to other projects.
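A toy example of the cleaning, standardizing, and documenting described above (the records, field names, and unit conversion here are hypothetical, chosen only for illustration):

```python
# Toy curation step: standardize a unit across merged study records and flag,
# rather than silently drop, missing values. All records here are hypothetical.
RAW_RECORDS = [
    {"subject": "A1", "weight": "154 lb"},
    {"subject": "A2", "weight": "70 kg"},
    {"subject": "A3", "weight": None},  # missing; needs follow-up, not deletion
]

LB_PER_KG = 2.20462

def curate(record):
    """Standardize weight to kilograms and document what was done."""
    curated = dict(record)
    raw = record["weight"]
    if raw is None:
        curated["weight_kg"] = None
        curated["note"] = "missing; flagged for follow-up with originating lab"
        return curated
    value, unit = raw.split()
    kg = float(value) / LB_PER_KG if unit == "lb" else float(value)
    curated["weight_kg"] = round(kg, 1)
    curated["note"] = f"converted from '{raw}'"
    return curated

curated_records = [curate(r) for r in RAW_RECORDS]
for r in curated_records:
    print(r["subject"], r["weight_kg"], "-", r["note"])
```

Multiply this by thousands of fields, dozens of collaborators, and layered privacy and licensing constraints, and the time and money real curation demands becomes clear.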

How much time and money is needed? No one knows for sure; data curation isn’t an activity that has been well investigated and documented, and the needs obviously vary widely based on factors like data volume, complexity, privacy considerations, intellectual property constraints, and the number of collaborators (see Perry 2022). For sure, the most widely used and long-lived curated resources require massive ongoing investments of time, money and attention. Also, while “openness” is important to some of these efforts, it’s irrelevant to others; the common denominator isn’t openness but finding reliable and relevant information, converting this into something useful, and then making the curated resource accessible to the world as part of a cohesive narrative. Here are a few good examples of freely available curated research resources:

  • WHO: Founded in 1948, the World Health Organization today manages and maintains about 75 different curated data collections related to global health and well-being, as mandated by UN Member States, covering everything from HIV/AIDS cases to malaria, COVID, nutrition, injury, mortality, maternal health, mental health, immunization status, and tobacco use. Data collections are typically CC-BY licensed, but a wide variety of copyrighted information is included in these collections. See
  • IHME: The Institute for Health Metrics and Evaluation (IHME) works with collaborators around the world to develop evidence that sheds light on the state of global health and provides policy makers with accessible and usable information tools and resources. Over 500 people work at IHME. Founded in 2007, funding support comes from the University of Washington, the National Science Foundation, the Gates Foundation, and elsewhere. Much of what IHME collects and curates is free, but much is included with permission. See
  • TAIR: The Arabidopsis Information Resource is a database of genetic and molecular biology data for Arabidopsis thaliana, a widely used model plant. Launched in 1999, data available from TAIR includes the complete genome sequence along with gene structure, genome maps, genetic and physical markers, DNA and seed stocks, related publications, and information about the research community. TAIR is managed by the nonprofit Phoenix Bioinformatics Corporation (which manages other bioinformatics resources as well) and is supported primarily through institutional, lab and personal subscription revenues. See
  • Allen Brain Map: The Allen Institute for Brain Science was established in 2003 to accelerate neuroscience research worldwide by sharing large-scale, publicly available maps of the brain. Research teams conduct investigations into the inner workings of the brain; the institute also publicly shares all the data, products, and findings from this work, including data, analysis tools, and lab resources. This information is copyright protected, not CC-BY licensed. See
  • UW Drug Interaction Database: The University of Washington's Drug Interaction Database (DIDB) uses a small army of PhDs to read thousands of variously-licensed peer reviewed studies every year (as well as drug labels and NDA studies), and then manually extracts qualitative and quantitative human and clinical information related to interacting medications, food products, herbals, genetics, and other factors that can affect drug exposure in humans. Launched in 2002, DIDB is today used by hundreds of pharmaceutical companies, regulatory agencies, CROs, academic institutions and clinical support organizations around the world. Sustainability is made possible through DIDB's licensing and subscription revenues. See
  • USAFacts: USAFacts curates US government data and provides users with a polished finished product. For example, search the site for climate databases and you get a long list of downloadable documents plus links to climate-related state, federal and nonprofit websites. Click the climate tab and you get a long page of easy-to-read graphs and graphics plus clear text summations and links to relevant resources. The mission of USAFacts is to provide authoritative, easy-to-use, nonpartisan data to US citizens. The site is privately funded by former Microsoft CEO Steve Ballmer; it accepts no outside donations or funding. See


How can we know for certain why some research articles are cited more often than others? Is the difference due to funding level? Promotion (through Twitter, radio interviews, or conference presentations)? Accessibility (whether through open access or some other means, like SciHub)? Salience (maybe an article deals with an issue of urgent importance to a particular group of researchers)? Uniqueness? Does publication language play a role? What about readability? Publication venue? Researcher seniority? Number of authors? Graphics? Metadata? Or maybe where the article is indexed? And how do we account for publication age (where older publications may have accumulated more citations than newer publications), field of study (different fields cite at different rates), strength of network, and negative citations (where work is being cited because it’s bad, like the Wakefield study linking vaccines to autism)?

No doubt all these factors (and others) exert at least some amount of influence on citation rates, but a number of studies over the past 20 years have tried to determine whether simply being published in open access format creates an open access citation advantage (OACA) by itself. Unfortunately, very few of these studies have attempted to account for even some of the above-mentioned factors that might be involved in this calculation. Colby Lewis (Lewis 2018) looked at a number of studies conducted between 2011 and 2017 and concluded that there was no definitive answer about the OACA and that more research was needed (other noteworthy studies since Lewis include Basson 2021 and Correa 2021). A 2021 study (Langham-Putrow 2021) looked at 134 studies published on the OACA since 2001 and found that about half confirmed the existence of the effect, a quarter found it did not exist, and another quarter found that the OACA exists but only in certain fields, indexes, publication types, or types of open. Here again, more research was called for. As the scholarly communication firm Clarke & Esposito noted in their June 2021 newsletter, "You might think that after 20 years of research and more than 130 studies on the subject, we'd have a clear picture of the effect that open access publishing has on an article's citation performance. Unfortunately, decades of poor studies and a mystifying unwillingness to perform experimental controls for confounding factors continues to muddy the waters around the alleged open access citation advantage…. [For example], of the 134 studies examined [in the Langham-Putrow study], only 40 acknowledged the possibility of confounding factors, which is fairly astounding. As early as 2007, it was clear that things like selection bias…were likely to invalidate any correlations that could be drawn. Yet even among the 40 studies that acknowledged potential confounders, only a subset made any effort to control for those factors [and these showed no OACA]."
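The selection-bias problem can be made concrete with a toy simulation (all numbers are hypothetical, chosen only for illustration): if authors of stronger work are more likely to make it open, and citations track quality alone, a naive open-versus-closed comparison still shows an apparent citation "advantage" even though openness has zero causal effect in the model.

```python
import random

random.seed(42)  # deterministic toy run

articles = []
for _ in range(10_000):
    quality = random.gauss(0, 1)  # latent article quality
    # Selection bias: stronger work is more likely to be made open access.
    p_open = 0.25 + 0.5 * (quality > 0)
    is_open = random.random() < p_open
    # Citations depend ONLY on quality; open status has no causal effect here.
    citations = max(0.0, 5 + 3 * quality + random.gauss(0, 1))
    articles.append((is_open, citations))

def mean_citations(flag):
    """Average citations for the open (True) or closed (False) group."""
    vals = [c for is_oa, c in articles if is_oa == flag]
    return sum(vals) / len(vals)

oa_mean, closed_mean = mean_citations(True), mean_citations(False)
print(f"open: {oa_mean:.2f}  closed: {closed_mean:.2f}")
print(f"apparent 'advantage': {(oa_mean - closed_mean) / closed_mean:.0%}")
```

A study that fails to control for quality (or any proxy for it) would read this gap as an OACA; controlling for quality here would make the gap vanish, which is exactly the concern raised above about studies that ignore confounders.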

Studying this issue is challenging, however, and many accomplished researchers have tried. Let's say for the sake of argument that we accept a good faith effort to find the answer as being good enough for policymaking purposes. One of the best and most cited studies to date was conducted by Heather Piwowar and Jason Priem. In their 2018 report, Piwowar and Priem (Piwowar 2018; see figure below) tried to determine how much open access literature existed in the world, how the growth rate of open was changing, and, as a relatively brief aside, what the citation rate of open access articles might be. Their findings, which aligned with several other well-done studies (e.g., Archambault 2016), showed that green open has the highest citation rate, and gold the lowest. On average, across all fields and all types of open, the OACA measured about 18 percent.

However, lest we celebrate too soon, it's important to note that Plan S- and BOAI-compliant open (gold open) was actually found to have a lower OACA than even subscription (closed) articles. The highest OACA was for green open, which includes a mix of copyrighted, embargoed, non-peer-reviewed and open materials. Hybrid and bronze open also performed well, but these formats are likewise not compliant with Plan S and BOAI. So even by this argument, if we accept the evidence showing there may be an open access citation advantage and that this advantage is completely independent of other factors, it is still for the wrong kind of open. For the kind of open our major OA policies require, OA citation outcomes are actually the worst of the bunch.

All this said, it stands to reason that making research easier to access is beneficial by itself. Citations needn’t be touted as a proxy for research impact since much great science throughout history has had a spotty citation record. Also, importantly, there is no correlation whatsoever between the quality of research and its citation record (Aksnes 2019), so citations are not a proxy for quality either. Indeed, focusing on citations may be a net negative, merely tempting researchers to game the system through dubious practices like fractional publishing (dividing a single study into multiple papers), and including large numbers of co-authors; both of these practices are on the rise. Rather than continuing to assert that open access is a route to more citations, and continuing to treat citations as evidence of superior scholarship, evidence and common sense suggest we should simply acknowledge that open access is a good way to share research findings more broadly. It may eventually become clearer that open access is indeed allowing research to be cited more easily and frequently, but this is not what the evidence currently shows, nor should we continue to assert that it matters one way or the other.

Or consider how excited the policy world is becoming about open science. Like the UNESCO policy before it, a major open science policy being developed by the US National Academies Roundtable (NAS 2022) considers open science to include transparency ("scientific process and results should be visible, accessible, and understandable"), inclusiveness ("process and participants should welcome participation by and collaboration with diverse people and organizations"), accessibility ("data, tools, software, documentation and publications should be accessible to all (FAIR)"), and reproducibility ("scientific process and results should be open such that they are reproducible by members of the community"). However, this policy and similar ones attempting to define open science are simply describing what good science looks like and has always looked like. Transparency and reproducibility are not new ideas (as mentioned earlier, they are the fundamental building blocks of research), research has always been global and deeply interconnected, and integrity is a function of good science, not open science.18

In other words, open science means good science, but the reverse isn’t necessarily true. Good science doesn’t necessarily mean open science. There is obviously some overlap between all these concepts, but in a Venn diagram, the “open” circle just intersects with lots of issues related to research reform, from improving access and equity to having a role in improving transparency, reproducibility and reliability, to also having a role in important issues like impact factors, tenure evaluation, and peer review (see Figure 2). Open science policies can play a role in addressing all these issues, but they won’t solve all of them. Policymakers often forget this and use the term open science when what we really mean is something else. As Jon Tennant noted in 2020 (Tennant 2020), “Rebecca Willen has … identified that there might be two, perhaps three, different sub-movements that intersect in different ways, involving ‘open science’, ‘replicable science’, and ‘justice-oriented science’…. [I]t could be the case that now, open research is diffused in such a wide variety of ways that there cannot plausibly be a single, cohesive community and set of practices that define it…. Instead, Open Scholarship, Open Research, and Open Science might best be thought of as overlapping/intersecting ‘boundary objects’ (Moore 2017) that represent this inherent diversity.”


Attributing special powers to open science is nowhere more apparent than in the claim that open science is why COVID vaccines were developed in record time. Both the UNESCO open science plan and the US Nelson Memo make this claim, and say this sharing represents the potential of open access (and even the victory of open access policies).19 This characterization is mostly inaccurate. While the rapid sharing of data was important in COVID research and is a key potential benefit of effective OA policies, the evidence suggests a murkier role for open data in COVID research (e.g., with lots of misinformation being published rapidly in preprint form, and none of the actual proprietary vaccine-related data being openly shared). The true heroes of our quest to rapidly develop COVID vaccines were policies that allowed researchers to conduct their due diligence rapidly, with ample funding, and in parallel with manufacturing instead of in sequence (see Box 9).


Popular media accounts of the COVID crisis, and even the story lines woven into several major open access policies, claim that open science is a major reason why COVID vaccines were developed in record time during 2020 and 2021. This claim is mostly fiction. While the rapid sharing of data was important for COVID research and is a key potential benefit of effective open access policies in general, the evidence suggests a murkier role for open data in COVID vaccine development.

Genetic sequencing data was quickly shared during the early stages of the crisis, but this kind of rapid sharing normally happens anyway in global health emergencies (such as AIDS, Zika and Ebola; the World Health Organization has well-established guidelines for this). Rapid sharing happened during the COVID crisis as well, along with a surge in open access preprints (preliminary research that hasn't yet been peer reviewed), but this much-hyped surge actually constituted only a small fraction of the total number of articles published on COVID (Brainard 2021). In addition, the total number of datasets made publicly available to researchers and the number of peer reviewed COVID articles authored in traditional format were not exceptional, especially counting only information of usable quality. A high percentage of papers published during this period were junk, focusing on topics like hydroxychloroquine, which misdirected both scientists and the public. This isn't surprising. The rapid and widespread sharing of actually useful and usable data happens far less frequently in medical research than open access proponents imagine, for a number of reasons. Chief among these reasons in medicine are intellectual property concerns (since a lot of research work is industry sponsored and has patents attached) and privacy protections for clinical trial participants.

What did make a significant difference in accelerating COVID vaccine development—apart from global sharing of the SARS-CoV-2 genome sequence—was cutting the evaluation and safety monitoring times for moving from phase 1 to phase 3 clinical trials (where phase 1 trials look at safety and efficacy in small groups, and phase 3 trials are widespread and involve thousands of volunteers), widespread dedicated funding for this work, and the parallel production of all possible vaccine candidates so that by the time the winning candidates crossed the line, vaccine doses were already manufactured, bottled, and distributed to staging areas around the world. In addition, established journals were able to squeeze efficiencies out of the system and do more with less over the short term, managing higher submission volumes and reviewing more papers faster.

This isn’t an example of open access, open data, or open science. It’s an example of accelerated research logistics, like the world saw with NASA’s Apollo program in the 1960s. In this case, these logistics cut years off the normal drug development timeline. Indeed, we’re burying the lede by claiming that this miracle of science was a victory for open. The evidence suggests that we should instead focus more on the sorts of process improvements used with COVID vaccines if our goal is to speed discovery, and not rely on open solutions alone to deliver the benefits we need.

Open access does have the potential to speed discovery, though, and to vastly improve the value and impact of research, which is why so many people believe in open and want it to succeed. But in order to make progress toward this future, and build the right kinds of tools and policies, we need to be honest about our facts and assessments. Our efforts must be guided by clear-eyed answers to questions like: why, specifically, do we need open solutions, and which solutions work best under which circumstances? We also need to develop, as a global community working together (in concert with other global policy efforts), a clearer understanding of our goals. Open isn’t that goal. It’s a tool that can help us reach our goals. And if our current open policies aren’t helping us reach these goals, or are even throwing sand into the gears of research, then we need to stop overreaching and either improve our knowledge of the space we’re regulating or allow for more flexibility to accommodate the expertise and experience of researchers, who have a better understanding of what they need.


One-size-fits-all open access policies like Plan S are a bad fit for the world of research because, at the risk of sounding too obvious, this world is very large and diverse, encompassing a wide variety of research needs and resources, differing interpretations of what open means, numerous concerns about open, and copious amounts of activity from many different governments, agencies and institutions (recalling our previous discussion about the historically different paths for open policies):

  • VARIETY OF NEEDS AND RESOURCES: It is well established that researchers in different fields and regions have different needs, resources, and applications for open solutions. Please see OSI’s other Policy Perspective reports for more detail and references.
  • DIFFERENT INTERPRETATIONS: Different stakeholder communities support open efforts based on very different interpretations of what we mean by “open.” Working to create a world where more information is free to read has very different policy implications than working toward a world where all publishing is paid for by authors and made immediately available for unrestricted reuse. Many researchers and institutions support open policies in a broad and generic sense, but far fewer support policies that prioritize strict licensing and liberal reuse (see Table 3, for example, for how our OSI2022 researcher survey participants define open). Misinterpretations also happen at the analysis level: policymakers and analysts often misconstrue the facts about open because they are, for example, looking at only one index (typically limited to high-impact STM journals), examining journal-level statistics instead of article-level statistics (a great many journals produce very few articles), categorizing all green articles as open (when in fact the majority of archived green material is still copyrighted), or assuming that all open is CC-BY licensed. One needs to read the fine print on studies to make sure we’re really comparing apples with apples.
  • NUMEROUS CONCERNS: It is well established that many researchers have a variety of reasons for not favoring blanket open solutions, including but not limited to affordability, reliability, sustainability, practicability, usability, privacy, and secrecy. Please see OSI’s other Policy Perspective reports for more information and references. The negative impacts of these policies are also a major concern, varying by field, region, type of open, and more. Our policy solutions need to work for all researchers everywhere and not just STM research in the US and EU.
  • A WORLD OF ACTIVITY: The open access policy reform space encompasses a diverse array of actors, definitions, methods, needs, barriers and goals, as well as an endless variety of motives, adaptations, and best practices. There is also a great deal of inspiration and innovation from all corners. No one is sitting still; everyone is listening, learning, and developing new tools and systems for the future. Some of this development is aligned with the idealism of Plan S; some is focused on national interests; some is more narrowly tailored to institutions or disciplines. This is all far more activity than any one organization can track, including OSI. Developments are everywhere and often well below our radar; debates in one corner of the open universe are often completely disconnected from and uninformed by debates in another corner; information issues of great relevance in one field are completely unheard of in another field; policy issues of great relevance in one region have no priority in another region.

    As an observatory, OSI hasn’t been able to keep pace with the totality of these developments and synthesize them for consideration by UNESCO. We have done what we can with our limited funding, but we are also aware of our limits and of the limits of any group making policy or advising on policy. Perhaps it is because of all this diversity and activity that policymaking bodies have seemingly given up trying to embrace it all and are instead simply buying into overly simplistic depictions of open that aren’t accurate or representative. Or, maybe OSI’s message about diversity just isn’t getting through to policymakers as well as open advocacy messages. Either way, as Jon Tennant observed in 2019, our lack of common understanding in this space has “impeded the widespread adoption of the strategic direction and goals behind Open Scholarship, prevented it from becoming a true social ‘movement’, and separated researchers into disintegrated groups with differing, and often contested, definitions and levels of adoption of openness” (Tennant et al. 2019).

    BOAI signatory Leslie Chan agrees. Long a powerful advocate for the development of open access policies, Chan notes in his latest work that “Far from a democratizing force, open science has become a practice of complying with standards and funders’ policies and mandates, further exacerbating deep-seated structural inequalities in knowledge production. Reflecting on our many failed attempts at reclaiming the knowledge commons and co-creating open infrastructure, I call for new imaginaries and narratives of what open scholarship may look like or aspire to be.” (Chan 2023)


Condition | % saying this is OFTEN or ALWAYS important
The work is published according to best practices (e.g., such that it is properly reviewed, indexed and archived) | 88%
The information must be free to read | 83%
The work is transparent as necessary for all good research (e.g., with regard to methods, sources, funders, and potential conflicts of interest) | 80%
Data is included | 73%
The information must be available to read immediately without any delay (e.g., subscription journals often impose a 12-month embargo for non-subscribers)* | 73%
Publishing costs are paid by authors (or their funders or institutions), not by subscribers | 44%
The publisher discloses their profit margins to the public | 44%
The protocol (if there is one) is pre-registered | 44%
The publisher avoids mixing free to read content with subscription content (as is currently the case with the journals published by most scholarly societies) | 34%
The information can be re-used in any way without your permission (including copying and pasting everything and selling it commercially) | 27%

Source: OSI2022 Global Researcher Congress, week 2, questions 8 and 9 (Hampson 2023).

*This response is analyzed in OSI Policy Perspective 5 (Hampson 2023). Concerns other than embargoes may also be reflected here, including publishing delays and library access delays. Future surveys will try to understand this concern more precisely.


What might Dr. Chan’s new and improved global open scholarship narratives look like? At their core, these ideas must be built on solid foundations of researcher input and support, fact-based assessments, and global equity. They must also embrace diversity. Given the many different paths, histories, ideas, perspectives, needs, methods and goals of open, layered atop a vast diversity of regional and institutional resources, global open policies cannot possibly prescribe how open is defined in all circumstances and how it must be addressed, nor should these policies aspire to do so. Rather, they should embody and empower an affirmation of our common goals and needs, and be built around flexible frameworks for addressing these goals and needs together so that no field, institution or region of the world is left behind. Research is and always has been a global enterprise. Preserving this inclusive and unifying aspect of research is essential.

There are many examples of policies that meet these requirements. Boxes 10-12 on the following pages describe a dozen such policy frameworks that have been mentioned at some point in OSI since 2015 (which doesn’t indicate an endorsement; we’re just noting that these ideas, among many others, have been noted over the years in reports or discussions). Table 4 illustrates the main differences between these frameworks (bearing in mind that hybrid policy frameworks could certainly be constructed). The most significant difference between these policy frameworks is that some are more complete and action-oriented than others. Only one (Plan A) is truly comprehensive, meaning it has all the features needed to fully reboot the global OA policy mindset. Six policies are action-oriented, meaning their main reason for being is to develop new solutions to open. The remaining five policies are passive, which isn’t to say they are ineffective, but that they lead from behind by laying the groundwork for and encouraging the development of new OA solutions, but stop short of investing the time and money needed to develop and pilot these new solutions. All frameworks are flexible and general, all embrace a diversity of approaches to open, and all lead to the same end point: A world where research is being shared more freely, and in a manner that maximizes collaboration, objectivity, and equity. Being general and flexible, all these policies can interact with each other at the margins in productive ways.


*Develop a better understanding of how open is working (or not), how it works in various information ecosystems, its diversity and interactions with respect to different types of open and different form (code, text, data, OER, etc.), case studies of best practices, economic analyses, and more.



OSI has long advocated an approach to open solutions that recognizes our common ground and common interests and requires us to work together. OSI’s Plan A describes one such approach (see the Annex section of this report). Plan A was launched by OSI in April 2020 as a trial balloon to see what kind of feedback this general concept would receive. About a dozen organizations and individuals signed on to the plan, but we didn’t make a concerted lobbying effort to collect signatures. In the final analysis, this approach may be too involved and too detailed for most. It makes sense as the framework for a years-long international effort to build a new open solutions policy from scratch, but the political and funding support for this kind of approach doesn’t appear to exist.

Plan A recommends that the international scholarly communication community begin immediate and significant action to:

  • Discover critical missing pieces of the open scholarship puzzle so we can design open reforms more effectively;
  • Design, build and deploy an array of much-needed open infrastructure tools to help accelerate the spread and adoption of open scholarship practices;
  • Work together on finding common ground solutions that address key issues and concerns (see OSI’s Common Ground policy paper for more detail, Hampson 2020); and
  • Redouble our collective efforts to educate and listen to the research community about open solutions, and, in doing so, design solutions that better meet the needs of research.

In pursuing these actions, our community should:

  • Work and contribute together (all stakeholders, including publishers);
  • Work on all pieces of the puzzle so we may forge a path for open to succeed;
  • Discover missing pieces of information to ensure our efforts are evidence-based;
  • Embrace diversity;
  • Develop big picture agreement on the goals ahead and common ground approaches to meet these goals; and
  • Help build UNESCO’s global open roadmap.

Plan A recommends that the community’s work in this space be:

  • Common-goal oriented;
  • Accountable;
  • Equitable;
  • Sustainable;
  • Transparent;
  • Understandable; and
  • Responsive to the research community.



OSI’s 2020 Common Ground policy paper (Hampson 2020) describes in detail what a common ground approach to open solutions policies might look like. This approach isn’t quite as extreme as the total policy reboot called for in Plan A. It might appeal to governments and institutions who want a foundational framework to justify open policies, but don’t want to invest the time and resources into rebuilding a framework from the ground up. The main focus of this approach is on controlling the direction of open so it makes sense for research and is producing the desired benefits. The core ideas of this approach are as follows:

  • Work together to get all research materials somewhere onto the DARTS open spectrum (see Fig 1). Around 70 percent of the world’s research is closed and off the spectrum entirely (according to Piwowar 2018, and not even counting research materials we’re excluding, like industry and government reports). If we can focus on getting more research somewhere onto the DARTS spectrum by valuing all types of open outcomes and not judging which outcomes are superior to others, then over time we can work together to improve all open outcomes.
  • Work together to immediately improve access where it’s most needed. What kinds of outcomes are wanted by researchers and where? Where are improvements needed and why? Where possible, we can solve these access gaps quickly through targeted reforms instead of through slow systemic changes.
  • Work together to improve and clarify standards and guidelines so researchers know exactly what is expected, why, how they will pay for open work, how they will benefit, what tools and resources are available to use, how their efforts will be evaluated, and so on.
  • Develop different open policies for different users and audiences. Learn about the unique needs and perspectives involved (especially researchers) so we can work together to build the best solutions to the most pressing problems. Involve researchers in this process. They are the key stakeholders in research communication but are usually not consulted about reform efforts in any meaningful way. Help them work together on relevant challenges in ways that fit the needs and norms of their fields.
  • Work together (whether this means by field, or by institution, network, government, region, or whatever makes sense) across all stakeholder groups to address urgent research needs and achieve common research goals. The community should also focus on grand research goals like climate change and cancer. These broad and ambitious goals are the most challenging to solve but they also bring the most ideas and resources into focus, and provide a vector for sustainable funding and the nurturing of large datasets (and from this focus, the development and sharing of highly effective best practices).
  • Discourage ideologically hardened solutions (e.g., we must do things this way because BOAI says so) that make it difficult to work together as a community. Value diverse perspectives and follow the evidence instead to address real needs with realistic and effective solutions. Also, set realistic expectations. Be wary of claims that open solutions are a panacea for all that ails research.
  • Integrate different areas of open advocacy. The open access community should not be developing open data policies, for example; we need more collaboration in order to create policies that make sense and will have the desired impact.
  • As they are encountered, fix existing open policies as needed to mitigate undesirable side effects, particularly those that are reducing equity. Also work to address the many issues related to open research, from impact metrics to peer review to the culture of communication in academia. Our open future will not reach its full potential without a substantial and sustained effort to reform these issues.
  • Pilot useful open solutions—not just solutions that make information open, but solutions that can combine, curate and standardize data, make new connections, bridge the gaps between disciplines, see new fields, and make new discoveries—in short, do work that proves open is the future.
  • Look beyond. As a community, look beyond the journal article and figure out what we really need. What tools and systems should we build? To what end (specifically)?



Rather than focusing only on open access, governments, funders and research institutions might instead choose to focus on improving research communication writ large. The need to improve research communication is well established and many of the aspirations for OA policy actually have more to do with research communication than OA. The ideal OA policy may therefore be a subcategory of policies designed to help address broad research communication needs and priorities. These needs and priorities will vary by field, institution and region. From a global perspective, if this approach is used, then a research communication-based OA policy framework might look something like this (from OSI’s 2022 researcher surveys; these priorities are discussed in more detail later in this report):

  • Improve policies and systems that help researchers stay up to date on the latest information in their fields.
  • Improve research data repository and processing systems so researchers can get the most benefit from sharing their data and being able to see and use other research data.
  • Improve policies to help lower the costs of research publishing and access.
  • Improve policies that help researchers receive equitable credit and recognition for publishing in open formats (instead of focusing primarily on the high impact journal record).
  • Improve systems that help researchers communicate with each other and with policymakers and the general public.


A more narrowly focused strategy is to facilitate the development of information and data sharing networks in specific fields. Doing this would leapfrog the need for getting everyone on board with specific policies. Instead, only the most motivated networks of researchers would seek support for this work and reap the benefits of more open engagement. Over time, best practices and lessons of experience will emerge from this genre of open engagement that will make future networks easier to start and more effective. These networks need:

  • Commitment to a fundable goal (like sharing the data from all research in one area of study).
  • Dedicated funding to support research data collection, curation, synthesis, long term maintenance, and outreach (to network members). Depending on the needs and goals involved, this could mean several million dollars annually. An overly broad goal like sharing all research from all studies is unlikely to attract a funding patron. Smaller (but still ambitious) networks centered around discrete research fields are more fundable.
  • Flexibility. Every network will have unique needs and requirements regarding data architecture, data formats, data gaps, privacy restrictions, usage restrictions and so on. A centralized effort to help these networks grow and thrive can help provide funding, infrastructure and best practices so that each new network isn’t tasked with reinventing the wheel.



OSI’s Common Ground and Open Solutions policy perspectives (Hampson 2020 and 2021) describe what kinds of reforms can happen and are currently happening at the individual stakeholder level. These types of reforms aren’t necessarily new. What would be new and helpful is if these groups could start coordinating their actions more broadly (to the extent possible, since publishers can’t collude), and also start building their efforts on evidence-based foundations rather than on the shaky precedent of BOAI (simply duplicating bad policy isn’t the ideal). If we pursue a more collaborative and evidence-based approach, we might end up with a framework of interlocking OA policies that makes sense. For example, at the OSI2017 meeting (see OSI 2017), representatives from different stakeholder groups identified a number of top priorities. Infrastructure groups agreed they could help push for more global standards and integration; journal editors wanted to work together to improve global journal standards through mentoring and networking; libraries discussed working together to improve the global capacity for open; open knowledge groups suggested reducing the jargon around open to make it better understood, and also establishing better financial sustainability for a diverse open environment; commercial publishers offered to improve the ability of coordinating groups like OSI to engage on this issue; research universities noted the need to think more critically and creatively about developing programs and platforms that meet the actual needs of researchers; and scholarly communication experts recognized the need to continue learning more about the open space and gather more input from researchers. Each of these stakeholder group actions is effective on its own; woven together in a coordinated fashion, they would create a powerful and sustainable force for change.


A tech-centric approach to open policy reform focuses on developing technical solutions for open without necessarily being wedded to any single open policy. In this sense, it is an important generalist approach to open solutions. Also, the advocates and audiences for various tech solutions already exist, as does the funding in many cases; this approach is not necessarily new. What is needed in this solution space is more funding and action, particularly with regard to improving the global infrastructure for research, including:

  • More XML formatting of journal articles, which allows for easier sharing and mining of articles
  • Better and more widespread use of research object identifiers like PIDs and DOIs, and researcher identifiers like ORCID
  • More infrastructure work: Better resources and services are needed for research communities around the world to conduct research and to archive, analyze, and share their findings. Different organizations are working on different aspects of these challenges. Many of the most needed and relied-upon management resources are private, however, which is problematic over the long term with regard to accessibility, accountability, and making needed reforms (from major indexes that are heavily weighted toward Western STM publishing, to impact factors that over-value citations and are therefore easily gamed, to much-needed lists of predatory journals that are privately maintained). Community ownership and operation may require public (or at least community-wide) funding. Data repository development and integration may be the most pressing infrastructure challenge, however. At present there are thousands of different data silos and data standards. Continued work on standards and interoperability is important, as is improving the infrastructure resources and data processing capabilities available to researchers everywhere.
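To make the first two bullets above concrete: when an article is marked up in XML and carries persistent identifiers, extracting its metadata becomes a few lines of standard-library code rather than a scraping exercise. The sketch below is a minimal, hypothetical illustration using a small JATS-style fragment (the element names follow the JATS convention, but the fragment and its DOI, title, and ORCID values are invented for this example; real JATS files are far richer).

```python
# Minimal sketch: machine-reading structured article metadata from a
# JATS-style XML fragment. The fragment below is hypothetical; the element
# names (article-id, article-title, contrib-id) follow the JATS convention.
import xml.etree.ElementTree as ET

JATS_FRAGMENT = """
<article>
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1234/example.5678</article-id>
      <title-group>
        <article-title>An Example Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">0000-0002-1825-0097</contrib-id>
          <name><surname>Doe</surname><given-names>Jane</given-names></name>
        </contrib>
      </contrib-group>
    </article-meta>
  </front>
</article>
"""

def extract_metadata(xml_text: str) -> dict:
    """Pull the DOI, title, and author ORCIDs out of a JATS-like fragment."""
    root = ET.fromstring(xml_text)
    return {
        "doi": root.findtext(".//article-id[@pub-id-type='doi']"),
        "title": root.findtext(".//article-title"),
        "orcids": [e.text for e in
                   root.findall(".//contrib-id[@contrib-id-type='orcid']")],
    }

meta = extract_metadata(JATS_FRAGMENT)
# meta["doi"] is "10.1234/example.5678"; meta["orcids"] lists the author ORCIDs
```

This is the sense in which XML formatting and identifiers like DOIs and ORCID enable mining at scale: the same few lines work across millions of articles, because the structure, not the page layout, carries the meaning.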


In American football, a Hail Mary happens when the quarterback throws the ball as far downfield as possible in an attempt to score quickly. There are a number of policies in research communication reform that fit this description—decidedly action-oriented but still considered too far on the fringes to be taken seriously (yet). These actions include (but are certainly not limited to):

  • Develop a global All-Scholarship Repository (ASR; see the Annex section for details), and as part of ASR, nurture the development of new kinds of science communication professionals who are funded by research grant overhead charges and whose job is to manage and improve research communication workflows and outputs.
  • Buy the entire backlist of research publications currently held by commercial publishers and make everything publicly available immediately. Assuming this could even be done, the cost might be many billions of dollars,* but this amount could be split among governments (based on research output, so the US and China would each contribute close to a quarter of this amount—an unlikely possibility if the price is actually this high).
  • Deploy AI tools to help root through, coalesce, and translate global research, looking for more connections and previously unknown work. Developing a better understanding of where to focus research will make our research spending more efficient and effective, and will also lead to more breakthroughs and faster discovery.

* The combined 2022 revenues of the largest commercial academic publishers were approximately US$5 billion. Publishers that have made large profits for many years (as is the case here) can be valued at around 20 times earnings. However, the goal wouldn’t be to buy the companies, just the backlists.



Similar to Plan A, a research-centric policy framework might appeal to governments and research institutions who want to do more but also want to base their policy designs on objective evidence. Unlike Plan A, however, there is no commitment with this policy approach to work together or to design and deploy new open solutions—just a focused commitment to find answers that better inform decisions about what to do next (if anything). The action items in this approach are to:

  1. Develop a clearer understanding of how open ecosystems work in text, data, code, government, and OER.
  2. Collect case studies highlighting the variety of open approaches used in each of these open environments. Focus on examples that constitute the most common and impactful use, not necessarily all outlier solutions.
  3. Evaluate the economics of each of these open approaches. Where (if anywhere) are we saving costs? Where are costs being shifted? What are the current and emerging financing impacts and sustainability concerns? (Note that most of this analysis already exists.)
  4. Identify the best practices and lessons of experience across fields and types of open.


An affirmative 4-point policy framework is a cheerleader approach, encouraging the adoption of open solutions without explaining the reasons why or specifying a course of action. This approach is good if the goal is to simply nudge research toward more openness but not necessarily toward specific solutions, and if adopters don’t want to do all the fact-finding, piloting, and collaboration needed in the common ground approach. The four affirmations of this policy approach are as follows:

  1. Knowledge is a public good. It is vital for the continued and equitable progress of knowledge that public knowledge be made freely available for everyone everywhere to see and use. Therefore, free to read should be our default mindset for all research communication. From there, we should expand as possible (e.g., for code, data, or unrestricted reuse), or restrict as needed (e.g., to protect patient privacy, patents, government or industry secrets, etc.).
  2. There are many stakeholders in the knowledge production process, and each stakeholder is part of this process, not the owner. Stakeholders must therefore work together to ensure all viewpoints are represented and that communication solutions are equitable, sustainable, and in the best interest of research.
  3. Open solutions will necessarily come in a variety of forms, including forms that may not be ideally licensed or that may limit access to more finely curated or processed information. While it is important for public knowledge to be freely available for everyone everywhere to see and use, we must at the same time acknowledge the incentives and needs for systems that collect, curate, publish and analyze information and whose outputs may not be publicly accessible. Research requires both: not just piles of raw, unprocessed information, but also value-added outputs that take time and resources to develop. Our focus should be on what we’re trying to accomplish with open rather than what open means and the methods we employ.
  4. Innovate and learn. Don’t lock into ideologically driven policies that are built on scant evidence and may in fact create negative consequences for access. Flexibility and innovation are key.



Manifestos like DORA aren’t policies per se, but they can read a lot like a 4-point policy or a common ground policy in that they help raise awareness and steer thinking. An open manifesto OSI proposed in 2018 reads as follows:

  • Recognizing the importance of research to the future of humankind,
  • Considering there are a wide variety of research fields in the world today, each with unique needs and perspectives,
  • Acknowledging that researchers, research institutions and global regions everywhere are not equal with regard to their ability to participate in or reap the benefits from research,
  • Committed to ensuring that the future of research communication is both effective and more equitable, and,
  • Building on the work of the Open Scholarship Initiative (OSI), which has been engaged in partnership with UNESCO since 2015 to build such an effective and equitable framework for the future of global research communication, and building as well on the numerous efforts with related goals, such as DORA, FAIR, BOAI, the Leiden Manifesto, and the Lindau Guidelines,
  • Together resolve that research should adopt these 10 policy goals for research communication—that researchers everywhere should:
  1. Follow and help improve established best practices regarding the ethical conduct of research (as outlined in existing legal frameworks, institutional guidance, and more). This is relevant to research communication insofar as faulty research that gets published and publicized, that uses forged data or fake analysis, or that plagiarizes other research poses a threat to the research ecosystem.
  2. Avoid publishing in fake and predatory journals, which lack adequate safeguards to ensure the work they publish is of sufficient integrity.
  3. Make research work readily available and discoverable to research peers worldwide to the extent possible (taking into account concerns such as competition and misuse). This goal is achievable through a variety of means, from publishing work in some type of “open” format (of which many varieties exist), to ensuring that data is included with work, ensuring that old work doesn’t sit in file cabinets, and ensuring that null-hypothesis outcomes get published.
  4. Support efforts to improve equity in research through improved access, through the recognition and reduction of funding and evaluation biases, and other means. As part of this effort, be aware that “one-size-fits-all” solutions crafted in the Global North (such as the article publishing charge) may harm equity and access for researchers in most parts of the world.
  5. Support efforts to make research work more accessible to policymakers and the public through journalism, outreach, plain language abstracts, and other means.
  6. Follow established best practices with regard to archiving and preserving the published research record.
  7. Participate in reviewing and critiquing the work of peers worldwide (however these processes continue to evolve).
  8. Evaluate research based on merit, not on its “impact”, or the “impact factor” of the journal in which research is published.
  9. Support continuous efforts to improve research replicability, reliability and transparency.
  10. Support efforts to improve the usability and reusability of research and research data (by means that include but are not limited to using more open licenses, sharing data through research networks and repositories, supporting data standards work, and more).



A longer form of the manifesto is the proclamation. OSI proposed an open solutions proclamation in 2021 (Hampson 2021; also, see the Annex section of this report). The principle of the proclamation is the same as that of the manifesto, but it has more of the trappings of diplomatic language. Our 2021 proclamation closely follows the logic and rationale of Plan A and is modeled after the opening language of the 2021 UNESCO Recommendation on Open Science.


Doctrines like FAIR and the Panton Principles are also effective ways of encouraging policy development (to the extent these doctrines form the basis for policies but aren’t complete policies by themselves). For this report, historian Jason Steinhauer reviewed the key research information doctrines developed over the last several decades (part of his work is contained in Box 1). We can convert Jason’s synthesis into a new doctrine for open communication in research—we’ll name it the Tennant Doctrine in honor of our late colleague Jon Tennant—built on our historical need for sharing research information and grounded in the reality of how sharing takes place today:

  1. All actors in the research ecosystem have a responsibility to engage, communicate, and distribute information widely and equitably across the world. No actors within this ecosystem are shielded from that responsibility, regardless of how much they benefit from current models.
  2. More must be done to meet researchers where they currently are, recognizing that they will not abandon current models and systems overnight if there are not proper incentives and rewards for them to do so.
  3. New investments in infrastructure and culture must be made to make the transition to open access smoother and more sustainable.
  4. Global conversations must be held that are diverse and multi-stakeholder, to ensure the knowledge sharing inequities of the past are not repeated in the future.


None of these policy frameworks are fully developed; this omission is intentional. Filling in the policy particulars is best left to governments, research networks and individual research institutions after they choose the most appropriate OA policy framework for their needs. This is an important point. Laws, codes, and statutes require specificity, whereas policy guidance is often best described briefly. For instance, the UN Declaration on Human Rights contains 1772 words (UN 1948) and describes these rights only in general terms.20 The United States Declaration of Independence contains only 1337 words (not including the names of the signers). In contrast,21 UNESCO’s 7400-word declaration on open science (UNESCO 2021) hits most of the same high notes about open as OSI’s Plan A, but then goes on to bury these overarching ideals under mountains of detail specifying acceptable outcomes. Would-be adopters are likely to miss the main points of this policy, or worse, ignore them because they agree with the overall tone but disagree with the specifics. Imagine if Thomas Jefferson had written an additional 5,000 words about his predictions for the future of the United States and his specific implementation requirements. Would the impact of his first 1337 words have been diminished?22

If we truly endeavor to build a world where open research reaches its full potential, then generalities that get us on the same page and empower us to work together toward common goals are the only way forward. We can’t change the future, after all, by first unilaterally deciding what this future will look like23 and then prescribing exactly how all of us must get there. Much like Jefferson’s goals for democracy and freedom, our community’s goals for scholarly communication can serve as guideposts and inspiration. To the extent our policies exclude researchers, fail to address their needs, or make research communication worse, we end up nipping this effort in the bud.

Writing in generalities can also lead to policy that is more inclusive, not only because we don’t actively foreclose options, but because we invite a broader range of input, interpretation, collaboration and innovation. A famous (and probably overused24) example from public policy history is how the world’s major urban centers were struggling in the late 1800s to dispose of increasingly large piles of horse manure. Horses were still the dominant form of transportation, and city streets were piling high with millions of pounds of horse manure every day. Architects in New York began building stoops on all new buildings, elevating front entrances a half-story from street level so they could stay above the mountains of waste and all the flies and rats it attracted (Paul 2016). New York city planners predicted that at the current rate of accumulation, city dwellers would be buried several stories deep in horse manure by the 1930s. Something had to be done.

When the world’s first international urban-planning conference was held in 1898, it was dominated by discussion of the manure situation. But the architects, public health officials and social workers who attended were unable to imagine cities without horses—industrialists and innovators were not invited because urban planning at the time was mostly about architecture (Erickson 2014)—so the conference adjourned after just three days (Morris 2017). Fortunately, a technological solution to this crisis emerged soon thereafter. Electricity had just started arriving in cities in the late 1800s, and the internal combustion engine was catching on. By the early twentieth century, cars outnumbered horses and electric trolleys replaced horse-drawn ones. The manure crisis was averted (albeit exchanged for the beginning of the climate change crisis).

This parable has been told many times with varying aims. Climate change deniers have used it to claim we shouldn’t worry about global warming because technology will come to the rescue; anti-regulation types have used it to suggest that all government policymaking efforts are comically flawed. In this case, the lesson we’re drawing out isn’t denialism or blind faith, but the power of inclusion. Experts who stay in their own silos make bad predictions about big picture issues, from the architects, public health officials, and social workers who met alone in New York to discuss the future of horse waste, to the many business, military and engineering tycoons around the world who never saw a practical use for what the Wright brothers had invented, to politicians who never saw the need for social safety nets, to tech wizards who thought the computer would never amount to more than an electronic recipe box, to techno utopianists who thought the Internet and social media would only lead to global peace and understanding. For complex and interconnected challenges, one group alone cannot see the full picture.

Making global research more open is one such complex and interconnected challenge, and it doesn’t lend itself to neat answers. If we’re trying to help researchers succeed, we first need to develop as broad and accurate an understanding as possible about how the global research environment works. In this environment, researchers are the key stakeholders. It is essential to secure their input and involvement.

It may be surprising to note, then, that suitably large numbers of researchers have never been consulted in any meaningful way on any of the major OA policies currently in use.25 The need for open has been pushed by governments, agencies and funders, and the solutions in place have been designed by governments, funders, open activists, libraries, and publishers.26 Researchers haven’t even been consulted after the fact; it sometimes seems the only time the OA community hears from researchers is when they sign petitions complaining about the OA policies that have suddenly intruded on their academic freedom (see, for example, Kamerlin 2018).

In everyone’s defense, researchers are not a monolithic group. From art historians to virologists, astronomers to sociologists, postdocs to emeritus professors, academicians to private industry experts, there is no typical profile of a researcher and no single group of researchers whose opinions can serve as a proxy for all researchers everywhere. Researchers are also busy, not just with their research work but in many cases also with grant-writing, teaching, mentoring, attending conferences, writing papers, and more. All this diversity and scarcity makes it difficult to truly understand the perspectives of all researchers everywhere when it comes to designing policy.

Still, given how important it is to include researchers in conversations about the future of research communication, this oversight is significant. OSI has tried to include as much input as possible from researchers in its deliberations over the years. A number of prominent researchers have participated in our conferences and online discussions, and we have made every effort to incorporate their viewpoints into our analyses and recommendations. Several large surveys have also been conducted in recent years that attempt to measure various facets of researcher opinions about open access. We have reviewed all this work, and also attempted to fill some remaining gaps in our understanding by conducting our own global surveys of researchers in early 2022.


Box 13, below, contains a summary of key findings from the major researcher surveys conducted over the last five years. These surveys give a consistent portrayal of researcher perspectives across many fields, institutions and countries: researchers generally dislike APCs, have concerns about sharing and reuse, and value academic freedom, journal quality, and impact. Researchers aren’t necessarily happy with current OA policies, but neither are they particularly aware of the policy details or the agencies involved.


Researcher attitudes about communication practices have been measured through a number of quality surveys in recent years. The surveys cited below are listed separately in the Annex section of this report. An overview of all these non-OSI surveys reveals a pattern consistent with the findings from OSI’s 2022 researcher surveys (see Box 14):

  • Most researchers believe there is value in anyone being able to access their research (Taylor & Francis 2019, Wiley 2019a).
  • Most who publish in open format are motivated by the desire to increase the impact of their work. Only about a third are motivated by the desire to increase transparency and reuse (Wiley 2019a). In open data, the reuse motivation is higher—maybe around a half (Wiley 2019b).
  • Most researchers know relatively little about the details of ongoing research communication reform efforts and policies (Taylor & Francis 2019).
  • Only a fraction (maybe as low as 1 in 5) believe funders have a right to control where to publish. For 84% of researchers, the single most important factor in research communication is allowing scholars the freedom to publish where they choose (Taylor & Francis 2019).
  • There are a host of concerns about data sharing and reuse. The most commonly cited problems are a lack of suitable infrastructure for data sharing, and a lack of incentives. There are also concerns about misuse and scooping, concerns about copyright and licensing, and the time and effort needed to make research data openly available (Perrier 2020, Davies 2019, Stuart 2018). Other concerns include fairness (where better resourced researchers with superior computing facilities mine open data), science deniers (where “requests for information are motivated by the desire to discredit their work and professional reputations”), a lack of oversight regarding compliance, and difficulty adapting FAIR requirements to datasets that are also constrained by sensitivity and privacy considerations (Hrynaszkiewicz 2021).
  • Designing new data sharing philosophies and systems that allow data and research to make more of an impact is preferable to doubling down on our current approach of simply enabling more sharing and reuse. Our current systems are filled with bad and incomplete data and fraught with peril—bad datasets, getting scooped, an imbalance between risk and reward, and so on (Hrynaszkiewicz 2021, NASEM 2020, Faniel 2020).
  • The top priorities for researchers when picking a journal are roughly as follows (with response percentages starting at around 90% and dropping to 65%): the journal has a good reputation in the field, it is well read, it focuses on the researcher’s specific area of research, it has a high impact factor, it is free to publish in, it belongs to a scholarly society in the researcher’s field, and it has short turnaround times. Whether the journal is fully open access ranks dead last at 30 percent (Taylor & Francis 2019).
  • CC-BY has historically been the least preferred type of license. About a third of researchers dislike this type of license the most, while only 10% like it the most. Conversely, CC-BY-NC-ND has been the most preferred type of license (Taylor & Francis 2019).
  • Opinions about APCs vary by wealth, region, career stage and field of study (Segado-Boj 2022). Time period is also a factor, since the negative effects of APCs are only now coming to light. In 2019, most researchers (particularly in the Global South) reported not having the funds to publish in open access journals (Wiley 2019a, Scaria 2018). Also in 2019, most researchers reported that if everything were published in APC format it would have a large negative effect on their ability to publish, with AAAS survey respondents reporting the need to make tradeoffs between research and publishing (Taylor & Francis 2019, AAAS 2022).
  • Overall, the top problems in academic publishing may rank something like this for many researchers: Pressure to publish in high-impact journals, publication delays, paywalls, lack of accurate measures of journal/paper quality, insufficient publishing-related resources, inadequate benefit of peer review in improving quality, irreproducibility, and tedious journal processes (Editage 2018).

These findings largely align with what we found from OSI’s 2022 researcher surveys, summarized in Box 14.27 From these surveys, we learned that most researchers want new communication solutions and are ready to embrace the ones that address their key needs. These needs are, most urgently, to lower the cost of journals for authors and institutions, and also to improve research infrastructure, narrow the global access equity gap, make more journal articles (plus accompanying data) free to read and quickly accessible, find the right research papers to read and stay up-to-date on the latest research, ensure free classroom use of journal articles while limiting misuse and commercial reuse, ensure the continuation of a high-quality publishing environment, retain the freedom to decide where to publish and avoid one-size-fits-all solutions, ensure proper credit and recognition (especially as it relates to advancement), make more of an impact on society, improve collaboration and communication with colleagues in the same field, reduce administrative workload, and improve funding sustainability.

Overall, this group recognizes that developing successful OA policies will require broad collaboration, and that researchers are a key stakeholder in this conversation. They also believe science and society will benefit from the right policies. However, these policies cannot be one-size-fits-all approaches anchored in a limited understanding of the broad spectrum of global research communication needs and perspectives, or in the idea that open is a narrow construct when there are in fact many different kinds of open.


OSI’s 2022 researcher surveys returned findings that are consistent with the other major surveys of researchers conducted over the last several years. The main conclusions from OSI’s surveys are as follows:

  • The overwhelming majority of researchers think there are better ways of structuring research communication, and would like to hear about and explore new ideas and policies. Indeed, most say there is an urgent need for many reforms in scholarly communication, led by lowering costs. However, only a few think these reforms should involve reinventing the wheel or creating one-size-fits-all policies for all researchers everywhere. In addition, most researchers want to retain the freedom to publish wherever they see fit.
  • Communication plays a significant role in research, particularly journals. However, the communication priorities of researchers are general in nature when it comes to OA (like being able to access research for free and being able to communicate effectively with colleagues). More granular communication concepts like reusability are a much lower priority.
  • The overwhelming majority of researchers recommend creating a system that makes sure the research world doesn’t divide into those with means and those without. Top reform ideas include improving repositories, simplifying licensing, and building new infrastructure capabilities.
  • Relatively few researchers say that current OA policies have helped their research. Others haven’t noticed any changes so far, have noticed changes that haven’t mattered to them, or have found that these changes have hurt their work.
  • Most researchers are familiar with key OA concepts but are not aware of OA agencies and their policies.
  • Most researchers define open as free-to-read material that is high quality and transparent and has data included. Most do not believe that the copyright license or the format of journals (hybrid, gold, etc.) are important components of open.
  • When it comes to licensing, most researchers are interested in free classroom use and are wary about poor quality reuse and commercialization.
  • Most researchers dislike APCs, and say that publishing has become too expensive for them.
  • Adoption and uptake issues include a mismatch between needs and solutions, a lack of viable options, quality concerns, academic freedom, doubts about the effectiveness of OA policies, and high costs.
  • There is widespread support amongst researchers for OSI’s conclusions: There are no one-size-fits-all solutions in OA, OA exists on a spectrum of outcomes, researchers are a key stakeholder in scholarly communication, and real solutions will require broad consultation and cooperation (see Table 5).

As noted in the last bullet point of Box 14, not only do the findings from these different surveys align with each other, they also align with the general points of agreement among OSI participants (as mentioned earlier in the “OSI” section of this report; see Table 5): that there are no one-size-fits-all solutions in OA, OA exists along a spectrum of outcomes, researchers are a key stakeholder group in scholarly communication, and real solutions will require broad consultation and cooperation. There is also strong overlap between the concerns researchers have about OA policies and the concerns many in OSI have noted about the overall nature of global OA reforms (also as described earlier in the “OSI” section)—to wit, how policymakers are: (1) ignoring the unintended consequences of APCs; (2) ignoring the evidence that in practice, openness exists along a broad spectrum of outcomes; (3) overreaching and designing policies for which we lack the requisite expertise; and (4) forcing one-size-fits-all open solutions on researchers, even where these solutions don’t match researcher needs and resources.

Going forward, then, and based on what we can tell from our analyses about what researchers want and what research communication experts think should happen, open access policymakers should turn their focus to the following:


OSI position | % of researchers who sort of or strongly agree
There are no one-size-fits-all solutions in scholarly communication. | 100%
Successful open solutions will require broad collaboration. It is important to hear from and work with all stakeholder groups in our efforts to reform the scholarly communication system. | 96%
Researchers are a key stakeholder in this conversation. Reforms need to be made in collaboration with researchers so we don’t end up damaging research in the process and/or making access issues worse. | 92%
Publishing is a critical part of the research process. | 88%
Science and society will benefit from open done right. | 88%
“Open” exists along a spectrum of outcomes. There are many different kinds of “open.” | 88%
The incentives for making more information open are not aligned—i.e., the rewards and benefits aren’t currently commensurate with the effort. | 80%
Connected issues need to be addressed. There are many parts of the scholarly communication system that need improving, not just making things more “open.” | 76%
There is much common ground in the research communication reform space, and we should build on this common ground. | 76%
The culture of communication in academia needs to be reformed. There is too much attention paid to things like impact factors and publishing record. | 72%
Making information more open is just a means to an end. It is not the end goal itself. | 72%
It might be worth thinking in terms of “open solutions” that are integrated instead of open access plus open data, open code, etc. | 68%
We need to learn more about the issues here before making global changes. | 64%
Source: Hampson 2023 (OSI2022 Global Researcher Congress, week 4, question 1)

  1. Give researchers the solutions they want and need. Researchers are looking for ways to lower costs, improve collaboration, improve impact, ensure quality, and generally make their research lives better. These needs are not the focus of our current global OA policies. At best, these policies focus primarily on much lower-priority concerns like reusability, embargoes, and CC-BY licensing (see Box 15 and Table 6). At worst, they have been sold as a magic elixir that will cure all that ails research, but they can’t and won’t. Some researchers will benefit from these policies, but others will not; some issues will be addressed, but the highest-priority issues will not; some regions of the world will be able to adopt these solutions, but most will not.

    As we consider designing new research communication policies, we should keep researcher needs and priorities squarely in mind. Rather than merely creating policies that satisfy the definition of BOAI, researchers and the research world would be better served if we focus on the communication solutions researchers actually want and need.

    Before we can take this more considered approach to OA policy reform, it will first be necessary to better understand exactly what these needs are—and these will differ greatly by region, discipline and field. In time, research will greatly benefit from solutions that are centered on meeting these specific objectives and that truly involve researchers in creating the best solutions. This strategy will also help improve the discourse around research communication reform, from one where we merely prescribe blanket solutions to challenging issues to one where we search for best practices and fact-based solutions that researchers actually want and need.
  2. Do something about APCs. The high cost of publishing figures prominently in researcher concerns. It may even be accurate to say that cost is the number one concern of researchers. APCs have been touted for years as the best possible solution for publishing, even though many groups (including OSI) have warned that the widespread use of APCs will widen the gap between the haves and have-nots in research, substituting one equity imbalance for another—replacing the inability to pay for access with the inability to publish. Indeed, as costs have shifted (in different ways for authors in different fields and institutions, with some authors relying on support from grants, foundations, or libraries to pay for APCs, others less so, and still others not at all), the cost burden for many authors in an APC-based world is now much heavier than it was in the subscription world it is trying to supplant.28

    All this said, it’s possible the disruption we’re witnessing today will be resolved over the next five to ten years as adjustments take hold, such as APC waivers and discounts (all major publishers offer waivers and discounts to researchers from certain countries), or the increased willingness of funders and governments to cover APC costs as part of grant funding. For now, however, the subscription-to-APC transition in scholarly publishing is not being greeted by many (or maybe even most) researchers with open arms.
  3. Respect the fact that research is a profession. Many individuals choose professions where making an impact is more important than earning a large salary. Research is one such profession. Nevertheless, these occupations are susceptible to the same challenges as all others, including recognition, retention, and promotion. In our 2022 surveys, as well as surveys undertaken by other organizations (see Annex), researchers place a limited amount of value on open research. They want to be able to connect effectively with their peers, read the work of other researchers, publish economically, and have an influence. We can score a victory for open access inasmuch as these research communication goals align with open policies, but the vast majority of researchers (globally and across disciplines) are not primarily motivated by the desire to make their work accessible. This is what we should expect.

    However, this incentive dichotomy is rarely respected in the world of policymaking. Researchers are told the quest for knowledge belongs to all humanity and that they should be entirely motivated by participating in this pursuit, disregarding incentives that better align with their career demands and objectives. This is the essential premise of our current OA policymaking environment: that open outcomes are the highest priority, valued more than quality, reputation, and cost. In the meantime, the majority of academics face the career-driven reality that quality, prestige, and cost are more important than open. The challenge of our future OA policymaking efforts is that we must achieve both goals, collaborating with researchers to develop solutions that align with their career incentives while also meeting the needs of a more open research environment.

OSI has long maintained that researchers are key stakeholders in the OA policymaking process, or at least that they should be. Over the years, our group has closely tracked survey research on this subject to gain a deeper understanding of researcher viewpoints on open access. Numerous researchers who have participated in OSI’s conferences and online discussions have also provided us with guidance, information, and perspectives.


All major global OA policies specify a CC-BY license for publishing because this is what aligns best with the BOAI definition of open on which these policies are based. There are three problems with this approach. The first problem is that there are many different kinds of open information, created by different researchers with different needs and different motives. As a result, many different solutions and licensing options for open have come about over time (recall Figure 1 about the DARTS open spectrum).

The second problem is popularity. We know from previous researcher surveys that CC-BY may in fact be the least popular copyright license made available to researchers (Taylor & Francis 2019 and the table below).* CC-BY-NC-ND is the most popular Creative Commons license, allowing reuse with attribution but preventing commercial (NC) and derivative (ND) use. Across all options, granting an exclusive license to publish is currently the most popular arrangement among researchers surveyed, wherein authors retain copyright and publishers manage reuse requests. We know from OSI’s and other surveys that researchers are concerned about commercial and derivative use, so the fact that they like CC-BY-NC-ND better than CC-BY is not surprising.

License type | Most preferred | Second most preferred | Least preferred
Exclusive license to publish | 23% | 21% | 21%
Copyright license | 18% | 24% | 16%
Source: Taylor & Francis 2019

The third problem is utility. Is CC-BY even the right tool for the job? Researchers want to be able to cite and excerpt work and use academic papers for classroom instruction. CC-BY grants these rights, but so do existing Fair Use and Fair Dealing copyright laws (in the US and UK, respectively). CC-BY also provides an easy path to free access, but it isn’t the only path (as noted, more restrictive variations of CC-BY also work, as does regular copyright). The unique benefit of CC-BY envisioned by BOAI is a world where researchers can reuse and remix journal articles at will, but do they even need or want this capability? We learned from our 2022 surveys (see Hampson 2023) that only somewhere around 14% of researchers are looking for the ability to copy and paste large chunks of text (other stakeholders may be interested in this ability, but not researchers; see Table 6). Indeed, most simply seem interested in the free-to-read nature of open (apart from open data and code, which are governed by CC0 rather than CC-BY). Added to this, having one’s work misused is an outcome no one wants, but it is a very real possibility under CC-BY.

Given all this, what compelling reason exists for sticking with CC-BY as the default license type for OA? Coming at this question from a different angle, what features do researchers actually want and need in a copyright license for their work? Such a license should, at minimum (based on what we learned in our surveys), include rights like free classroom use and the right to immediately share finished products within a peer community. It might also include a prohibition on commercial and derivative reuse without permission from the author. Maybe this new kind of license (let’s call it CC-EDU) should be the new standard? Maybe copyright retention will become the new standard? Taking either approach would show respect for researcher concerns and might also open the floodgates to a much broader, faster, and more productive transition to open content.

* This said, Pollock 2022 shows that CC-BY accounts for about 55% of all open licenses as counted in Crossref. The disparity between these two data points might be due to the fact that Crossref measures listings by their DOIs, and these identifiers are more likely to be associated with open access articles.


Concern | %* | Communications related?
Tier 1 concerns (66%+ of researchers say this is ALWAYS important)
Stay up-to-date on all the latest research in my field | 76% | X
Get funding for my research work | 66% |
Tier 2 concerns (33-65% of researchers say this is ALWAYS important)
Infrastructure support from my institution (good facilities, etc.) | 64% |
Find, hire and keep good staff | 60% |
Design good research studies | 60% |
Make an impact in my field | 60% |
Find the right research papers to read | 59% | X
Publish in a journal | 57% | X
Collaborate with other researchers | 56% |
Read research papers for free | 54% | X
Get proper credit and recognition for my work | 52% | X
Effectively communicate my findings to fellow researchers | 50% | X
Publish in a prestigious journal | 48% | X
Advance in my field | 48% |
Make an impact on society | 48% |
Figure out what to read—there’s so much information out there | 47% | X
Job security | 47% |
Publish affordably | 47% | X
Freely and rapidly share my research work with other researchers around the world | 41% | X
Effectively communicate my findings to the general public | 41% | X
Effectively communicate my findings to policymakers | 41% | X
Tier 3 concerns (0-32% of researchers say this is ALWAYS important)
IMMEDIATELY (without waiting for embargo periods) read what other researchers have published in a subscription journal | 32% | X
Publish in the right journals | 32% | X
Publish enough—the pressure to “publish or perish” | 28% | X
Make my data available in a format that others can see and use | 28% | X
See the data generated by other researchers | 25% | X
Protect my research from getting “scooped” before I can publish it | 24% | X
“Register” my discovery (publish quickly so the world will recognize I was the first to discover something) | 24% | X
Publish quickly | 20% | X
Reuse the data generated by other researchers | 18% | X
Protect my research from misuse | 16% | X
Protect my research from theft | 8% | X
Copy and paste large chunks of text from other research papers or otherwise reuse these works (beyond what is already permitted by copyright under Fair Use and Fair Dealing) | 6% | X

Source: OSI Research Communication Survey, question 5 and OSI2022 Global Researcher Congress, week 3, question 2
*% column is averaged across the two source questions

Even after all this work, we cannot say for certain what all researchers everywhere think about OA policies, but we can say with certainty that policymakers must do a better job of engaging with and listening to the global research community. A policymaking strategy that listens to researchers and addresses their top-priority demands is necessary because our current policies leave a great deal of need unmet, misalign incentives, and carry real potential for harm, while new and better policies stand to deliver a great deal of benefit.


So far, we’ve made the case that the many global stakeholders in research communication (researchers in particular) haven’t been working together, and that they should be, to develop OA policies that work for all researchers everywhere. For perspective, it’s important to recognize that this lack of collaboration extends far beyond OA policy. Research communication is a broad, incohesive field that typically doesn’t work across its own boundaries at all.

Research communication is less a field, in fact, than a collection of unrelated activities with dissimilar needs and goals, including specializations such as science writing and outreach for public audiences, journal editing, grant writing, technical writing, research administration, tech transfer, project management, data management, education (with different specializations for different grade levels), public relations, policy analysis, and informatics. The diversity of needs being addressed includes sharing, access, affordability, visibility, transparency, clarity, multilingualism (e.g., see Helsinki Initiative 2019), precision, privacy, secrecy, and persuasion. And the philosophies, approaches, skill sets, goals, and best practices involved vary by region, field, study, institution (including industry), audience, purpose, and budget. We talk in this report about research communication reform as though everyone in research takes this to mean open access reform, but in conversations across the entire spectrum of research communication professions, open access barely rates a mention in many official reports, and even then only in the most general and generic way.29 Should broad research communication reform policies therefore be written even more broadly than proposed in this report, so they give direction and unity to what more of the world considers to be research communication?

This may already be happening at some level. For example, in June 2022 the General Secretariat of the Council of the European Union issued a proclamation on “Research assessment and implementation of Open Science” touching on many of these broader issues, like research assessment, innovation, education, multilingualism, public policy, and economic development. The details of Plan S don’t even rate a mention in this statement. In this broad policy proclamation, EU member states are simply encouraged to develop research that is high quality, impactful, and shared for the benefit of all.30

Similarly, the US Nelson Memo sets up new open requirements for researchers but is otherwise solution agnostic. It’s likely that the net effect of this policy will be to push the US (and world) toward more gold OA solutions (that is, research published using APCs and carrying a CC-BY license), but at the same time this directive doesn’t insist on CC-BY licensing or APC funding, so in this sense it is a general policy directive pushing US research communication reform in a general direction without getting entangled in specifics (see Crotty 2022).

The open policy strategy currently being developed through a US National Academies Roundtable process is another example of an emerging generalist policy, which is a bit surprising given that the effort is being chaired by reformers from the more ideological side of the OA policy divide (leaders of the Open Research Funders Group effort), and because the early organizing work around this effort was decidedly activist-sounding.31 Now, after several rounds of input from agency and research leaders, the emerging policy sounds general in nature, offering university leaders a soft-pedaled vision of what an open future should look like (see Box 16, below).


Open scholarship entails a culture shift in how research is conducted in universities. It requires action on the part of university administration, working in concert with faculty, sponsors and disciplinary communities. Universities should consider steps in three areas:

  • Policies: Language and guidance should be reviewed for alignment with open scholarship, in particular: (1) academic hiring, review, tenure and promotion (valuing diverse types of research products; metrics that incentivize the open dissemination of articles, data, and other research outputs; and valuing collaborative research); (2) intellectual property (ownership, licensing and distribution of data, software, materials and publications); (3) research data protection (for data to be stored and shared through repositories); (4) attribution (recognizing full range of contributions); and (5) privacy (insuring that privacy obligations are met).
  • Services and Training: Researchers need support to assure that data and other research objects are managed according to FAIR Principles: findable, accessible, interoperable and reusable. While the specific solution must be tailored to the discipline and research, common standards, including Digital Object Identifiers (DOIs), must be followed.
  • Infrastructure: Archival storage is required for data, materials, specimens and publications to permit reuse. Searchable portals are needed to register research products where they can be located and accessed. Universities can recognize efficiencies by utilizing external resources (including existing disciplinary repositories) and by developing shared resources that span the institution when external resources do not exist. Presidents and provosts are encouraged to work with their academic senates to create an open scholarship initiative that promotes institution-wide actions supporting open scholarship practices, while remaining sufficiently flexible to accommodate disciplinary differences and norms.

Source: NASEM 2022a.

Granted, the NAS Roundtable is not technically centered on open science or open scholarship. Rather, it is an attempt to align incentives across sectors so efforts to support more open and equitable practices in one sector are not met with counter efforts in another just because of the way policies are constructed.32 Still, this policy effort could have easily steered into the weeds like Plan S, but it didn’t (or at least hasn’t so far), and this is significant. We see the same dynamic with Plan S itself, whose policy evolution process so far has been to set steep requirements (with regard to funding disclosure, data deposits, transition time frames and more) and then soften these over time as compliance falls short of targets.33

The underlying reasons for this policy broadening are anyone’s guess. Maybe this is what common ground looks like when taking many stakeholder perspectives into account. Or maybe what we’re seeing here is some middle ground between activism and market forces. It’s also possible we’re seeing a fall from grace—a grudging admission that overly detailed OA policies don’t work, or a realization that our techno-utopian visions of 20 years ago need updating.

In this broader context, one thing is certain: we need to keep in mind that research has always been both enormously diverse and globally interconnected. The development and expansion of research has always depended on communication, sharing, and building on the work of others. As a community, we must refrain from presenting open access and open science policies as the creators of this dynamic, or as the single most significant reforms research communication requires. They are not. They are potentially very significant forces that, if implemented properly, might have a revolutionary influence (hence the attention and debate), but they are only one of many crucial, and often interconnected, reforms that must take place. If we recalibrate our expectations for open solutions and at the same time refocus on solving researchers’ highest-priority needs and these other crucial, connected issues, we can carve out an effective path for open reforms: one that makes sense and fits logically within the broader needs and priorities of researchers and the broader research community.


It’s possible, and maybe even likely, that as the world continues down our current APC-based path toward OA reform, we’re going to see more unintended consequences from our reform actions. Some of the major possible consequences are discussed below.


We’ve mentioned this issue already in this report, but it bears repeating: the APC solution simply replaces paywalls with playwalls, and arguably the latter is more damaging. APCs have risen to stratospheric levels at premium research journals over the last few years, now topping US$10,000 per article at prestige journals. Even the average APC (around US$2,600 for OA mirror journals, although there is wide variation by field, publisher, and journal quality; see Smith 2022) is now far higher than most researchers around the world can afford unless they are based at a major institution in the US or EU or are well funded by a private funder (see Scaria 2018, Kwon 2022, and Nwagwu 2018 for discussions of the cost burden on Global South researchers, who are much more likely than their Northern counterparts to pay APCs out of their personal budgets).

While most publishers do offer APC waivers and discounts to researchers from lower-income countries, and around 200 publishers also support the Research4Life UN-publisher partnership providing free and low-cost access to research publications, a recent study of this waiver system confirmed that very few Global South authors publish in high-APC mirror journals, and that “the authorship of OA mirror articles is overwhelmingly concentrated in high-income countries” (Smith 2022). The researchers note that “Despite being based in countries nominally eligible for APC waivers, authors from middle-income countries published proportionately few OA articles, but authors in low-income countries published almost entirely subscription-only articles in Parent journals. Taken together, these results strongly suggest that APCs are a barrier to OA publication by scientists from the low-income countries of the Global South.” The authors speculated there might be three reasons for this outcome: (1) in practice, waiver requirements can be stringent (e.g., publishers might only waive APCs when every coauthor of an article is based in a waiver-eligible country); (2) authors may be unaware these waivers exist (a hypothesis supported by other research); and (3) even a partial waiver on a very high APC can still leave an unaffordable balance (see Smith 2022). See Box 17 for additional discussion of APC waivers and discounts.


APC discount and waiver programs are probably the most talked-about way to reduce the impact of APCs on researchers from countries and institutions with limited budgets. How effective are these programs? Unfortunately, not very, for the three reasons noted on the previous page: in practice, waiver requirements can be stringent (e.g., publishers might only fully waive APCs when every coauthor of an article is based in a waiver-eligible country); there is a lack of awareness about the existence of these programs; and even generous discounts on very high APCs still leave amounts many researchers cannot afford (see Smith 2022, AAAS 2022, Beard 2021, Mwangi 2021, Scaria 2018, Editage 2018, and others). Some have also suggested that an overly complicated discount and waiver application process is a fourth barrier to the success of these programs (Rouhi 2021).

Can we do better? For example, can we rethink the co-authorship residency requirements so authors from the Global South won’t be discouraged from collaborating with colleagues outside their countries? Obviously, publishers don’t want a dynamic to develop where researchers from the Global South are included as co-authors on papers for the express purpose of qualifying for an APC discount, but surely there are thoughtful formulas that could be implemented that can provide better financial support where needed without driving undesirable outcomes.

The second barrier—improving awareness—might be the easiest to overcome. Many researchers simply aren’t aware these discount programs exist. There isn’t a centralized tool out there that tells researchers everything they need to know about APCs: how much they are, who is willing to pay for them, what kinds of discounts exist, and so on (scattered resources exist, but nothing major). OSI has discussed developing such a tool since the market has yet to come up with a solution like this.

Overcoming the third barrier—inadequate APC discounts—might be more complicated. Publishers decide who gets APC discounts and under what conditions. Generally, they use country income rankings from the World Bank to guide their decisions. Researchers from countries in the lowest income threshold (tier one countries) typically receive a complete waiver of APC costs; researchers from countries on the next highest income rung (tier two countries) are eligible to receive 50% discounts on APCs. On a case-by-case basis, publishers will also grant discounts or waivers to researchers who are unable to cover these costs. Several major publishers also use guidelines established by the Research4Life (R4L) program, incorporating a mix of factors and not just World Bank country income rankings.

One way to expand discounts is to increase the number of countries eligible to receive APC discounts and waivers. Many researchers from tier two countries (like Brazil) are unable to afford even the discounted price of expensive APCs (averaging US$2,600, but typically much higher for higher-impact specialty and prestige journals). Granting these authors complete waivers instead of discounts would help. A second approach might be to increase the discounts offered to researchers from tier two countries from 50 percent to 75 percent. This would lower their average APC to about US$650, which is much closer to what authors typically pay to publish in regional journals. This approach still prices authors out of more expensive specialty and prestige journals, but at least it’s a step in the right direction; variations might involve, for example, asking these higher-priced journals to offer even steeper discounts to tier two countries (whatever percentage is necessary to bring the costs down to around US$500). Some have argued that increasing discounts would force many regional publishers out of business, since they can least afford to offer steep discounts, or that higher discounts will be offset by higher APCs for researchers who can afford to pay more. To the first concern, we can exempt regional publishers from discount and waiver policies, since regional journals are already priced somewhat affordably. Added to this, most researchers aren’t concerned about whether they can afford to publish in regional journals (which account for only about 12% of all articles; see OSI 2019), but whether they can afford to publish in a higher-impact specialty or prestige journal, which is priced much higher than regionals. Other approaches might be to trigger automatic discounts for any APC above a certain level (in qualifying countries), or to make sure hybrids aren’t excluded (for example, see Elsevier’s hybrid discount policy at Elsevier 2022).
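The tiered waiver-and-discount arithmetic described above can be sketched in a few lines of code. This is an illustrative model only: the tier labels, the current 50% and proposed 75% tier-two discounts, and the US$2,600 average APC are the figures discussed in the text, not any publisher's actual policy, and the function name is our own.

```python
# Illustrative sketch of the tiered APC discount scheme discussed above.
# Tiers and rates are the report's hypothetical figures, not publisher policy:
# tier 1 (lowest-income countries) = full waiver; tier 2 = percentage discount.

def discounted_apc(list_price, tier, tier2_discount=0.50):
    """Return what an author would pay given a country income tier."""
    if tier == 1:           # lowest-income countries: complete waiver
        return 0.0
    if tier == 2:           # next income tier: partial discount
        return round(list_price * (1 - tier2_discount), 2)
    return float(list_price)  # all other authors pay the list price

AVERAGE_APC = 2600  # approximate average APC cited in the text (US$)

# Current 50% discount vs. the proposed 75% discount for a tier two author:
current = discounted_apc(AVERAGE_APC, tier=2)                        # 1300.0
proposed = discounted_apc(AVERAGE_APC, tier=2, tier2_discount=0.75)  # 650.0
```

Under these assumptions, raising the tier-two discount from 50% to 75% cuts the average out-of-pocket cost from US$1,300 to US$650, the figure the text notes is closer to typical regional-journal pricing.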

The fourth barrier—an overly complex discount and waiver process—might be hurdled by redesigning discount systems so authors face less of an administrative burden than they do now. At present, authors need to prove their eligibility prior to each submission, even though their article may not be accepted: a long, involved, and undignified process. A simplified system might have authors complete eligibility paperwork once, through a separate and centralized system, and thereafter remain registered as eligible for discounts (subject to periodic eligibility verifications).

Finally, there are those who argue that the entire discount and waiver system is fatally flawed since it is based on a fatally flawed APC payment system. If this turns out to be the case, there are still other solutions available to help provide equitable access to research articles. For example, since 2002, R4L has been working in partnership with publishers, societies and professional associations, universities and UN agencies to provide researchers from 11,000 institutions in lower income countries with free or discounted access to over 200,000 academic books, journals and databases. Most of R4L’s support comes from the World Health Organization (WHO) and about 200 publishers, together accounting for about 78 percent of the organization’s funding (see Research4Life 2022). Content collections available through the R4L Discovery Portal include Hinari (providing access to biomedical research), GOALI (legal), AGORA (agriculture), OARE (environment) and ARDI (development and innovation).


Closely related to our concern about equity is our concern that the research world is being split into different camps: wealthy researchers versus everyone else. The fault lines are already clear. While the US and Europe are careening toward APC-based policies, China is developing its own internal publishing system, India will have a national subscription system (recently announced; see Niazi 2022), Japan will have a light-touch hybrid system (Salter 2022), and the remaining 40 percent of the world’s researchers will need to decide whether to pay to play or create their own more affordable systems. More research findings will be read across borders as a result of US and EU open access policies, but researchers from outside the EU and US will publish in other venues, likely at lower cost, and these venues’ visibility and significance, along with the research they publish, may diminish over time as the higher-priced venues become even more prestigious. Moreover, it is primarily researchers from wealthy countries and institutions who will have access to the high-speed computing resources required to exploit global databases (to which Global South researchers are still expected to contribute).

Will these dynamics also result in wealthy researchers seeking out other wealthy researchers to help their publishing budgets go further (see Matthew Effect, below)? Will they result in Global South researchers being reluctant to collaborate with US and EU colleagues because only wealthier researchers will have the resources to process data and publish new findings and discoveries? Will they mean that issues of importance to Global South researchers slide even farther down the global priority list? Combined with the way APCs prevent Global South researchers from publishing in the first place, enlarging the gaps between these worlds may end up hurting research instead of helping it, for these and probably other reasons as well.


The damage to research caused by a world with such fragmented OA policies could also affect international relations. Suppose we continue along our current path and there is no relief from high APC prices (although there may be relief in time, as mentioned earlier). In this case, we might see not only impacts on research but socioeconomic and even political impacts as well. Research and development is a major economic driver in every nation, and universities are pillars of policy consultation, education, and social progress. What happens to the cohesion, promise, and growth that research brings to each country and the world when rich countries co-opt the tools and benefits of research? Are we creating a system where research will cease being a global force for collaboration and unity?


Scholars everywhere, but especially in the Global South, are on the lookout for lower-priced APC options. This need has spawned a huge predatory research publishing industry over the last decade. Predatory publishers pose as legitimate publishers, but they aren’t: they claim to offer peer review, expert editorial boards, high impact factors, indexing, professional affiliations, and physical locations in big cities, but most or all of these claims are false. Combined with promises of ultra-fast turnarounds and low prices, these fake journals can seem like reasonable alternatives to unsuspecting buyers, and even for savvy consumers, they are often the only affordable option.

Experts worry that articles from predatory publishers pose a risk to the larger body of legitimate scientific work. Future researchers might end up basing their work on poor-quality or even fabricated findings; funders might invest scarce dollars and euros in the wrong areas of research based on bogus findings; and governments and health officials might develop policies based on wholly fictitious information and expertise. The emergence of the APC model has fueled this rise, but other factors have contributed too, such as the spread of DIY desktop and web publishing technology, which lets almost anyone with a computer become a “publisher” and make lots of money. Also to blame are academic evaluation practices that value publishing volume, easily faked journal reputation metrics that lure authors like moths to a flame, a lack of industry oversight, and, with very few exceptions, almost no punishment for the owners of these paper mills. Although there are no exact figures available, somewhere around 9% of the 3.5 million journal articles published every year, and nearly one-third of the world’s 50,000 scholarly journals, might be classified as predatory (see OSI 2021 and Anderson 2019).


The Matthew Effect is the principle of accumulated advantage: essentially, an explanation for why the rich tend to get richer. The continued use of APCs in scholarly publishing is going to have profound Matthew effects on researchers, journals, and research itself. How? With regard to researchers, we already know that most are concerned about the cost of publishing. At the same time, most don’t work at a university that has a transformative agreement with a major publisher (under which it is free for researchers to read and publish), and many don’t live in a country or work at an institution where APC funding is readily available. Therefore, we’re going to see an uptick in authors shopping around for co-authors to help pay their publishing costs (even among authors who do have APC funding, so they can make their publishing budgets go farther). These sought-after authors are already privileged, working at wealthy institutions in the US or EU. Added as co-authors for their deep pockets, they will get even longer publishing resumes and more promotion and grant opportunities as a result (Farley 2021).

With regard to journals, previous research has established that APC prices are largely driven by market power and market concentration, meaning that the largest publishers with the best stables of journals in a particular discipline can charge higher prices for their products (prestige also matters, but to a lesser degree; see Budzinski 2020). Since only wealthy researchers will have the resources to publish in these journals (whether through ample grant funds or access to transformative agreements), it stands to reason that only the most prominent researchers (see the first paragraph of this section) will be publishing in them. The end result is that these journals will become even more important and will be able to charge even higher prices; by extension, all other journals, and the researchers who publish in them, will become increasingly less important.

Combining these two effects, it’s clear that research itself will also be affected. What the privileged researchers focus on will be where the research attention goes and funding gets spent; and what the major journals publish will be what other researchers read about and what gets the highest consideration from grant evaluators and policymakers.


Economist Philip Mirowski has studied and written extensively about how platform capitalism is re-engineering research from the bottom up and is now in the early stages of monetizing the entire process of science. The big five publishers, says Mirowski, have long seen the writing on the wall with open access, so they have essentially taken control of open science, open access, and open data efforts in order to make more money from these movements. They’re doing this by breaking research up into smaller component parts, and then using tech platforms to monetize each part, extracting all manner of value from the process in real time. So, instead of destroying commercial publishers, the open movement has made them important loci of control within this newly engineered system (with publishers rebranding themselves as “information analytics” companies in the process). We’ve seen Mirowski’s dystopia begin to take shape in recent years as the big publishers have each acquired key companies that give them beginning-to-end coverage of the entire publication cycle. Even if Mirowski is totally wrong and what we’re seeing is more benign maneuvering for advantage in a competitive and rapidly changing marketplace, there’s no denying that publisher power has been increasing. What does this increase mean for the future of open solutions? How about the future of research? We’re investing an awful lot in the judgment and expertise of privately held publishing companies to tell us what research we should and shouldn’t be reading, and what the future of open should look like. There’s an argument to be made that maybe the relationship between research and publishing was better before we started tearing it down and putting it back together Frankenstein-style.


There are many ways our current APC trajectory might create unwanted downstream changes to OA institutions. For example, when university systems negotiate transformative deals with publishers (whereby university authors basically pay a bulk publishing-plus-access fee instead of one-off APC charges), what does this mean for universities that don’t have the market power to negotiate such deals? Will library consortia unravel as the larger institutions (which publish more) are in effect subsidized by the smaller institutions that publish less (Hinchliffe 2019)?

We might also see legacy open access publishers come under pressure. For example, what happens to PLOS, Frontiers, and Hindawi once authors realize they can publish in almost any journal for a fee? That is, if authors are going to pay US$3,000 to publish an article, why not see what else that money can buy? (In general, these legacy OA journals have lower Journal Impact Factors than the specialty journals that will be flipping from subscription to APC models.)

What about society journals and university publishers? Will they fold completely under the pressure of converting to an APC model? For generations, subscriptions to society flagship journals formed the backbone of these organizations’ revenue streams. Or will they subcontract the management of their product lines to the major commercial publishers (a shift that Plan S has accelerated because of the tremendous administrative burden involved; see Clarke 2018)? What impact will these changes have on the unique scholarship these institutions have supported over the years?


Over the short term we’re going to see lots of debate around who should foot the APC bill. Researchers? Universities? Research grants (which for most research is the same as saying “government”)? If the debate pushes too hard on this last option, government will push back.

Asking governments to foot the entire publishing bill (through grant supplements) means either reducing the amount of money we actually spend on research (for example, if $5k of a $100k research grant needs to be earmarked for APCs, that means 5% less is available for research), or encouraging lawmakers to question what on earth we’re doing, meaning a lot more oversight will be in store (which, admittedly, might not be a bad outcome). It definitely doesn’t mean just getting a five percent boost across the board to cover publishing. Even during the best of times, politicians have a hard time boosting research spending. And in the US, it’s almost comical to believe that the current Congress will boost spending for research so American researchers can give away more of their American-funded work to the rest of the world for free.34 The US is just not in that kind of financial or political mood at the moment. Instead, it wouldn’t be at all surprising to see a Republican-controlled US Congress demand explanations for why US researchers are paying European publishers with US taxpayer dollars, or clamp down on sharing until better solutions are developed.
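The trade-off in the parenthetical above is simple arithmetic, and it scales linearly with the earmark. A minimal sketch, using only the text's illustrative $100k grant and $5k APC figures (the function name is ours):

```python
# Illustrative sketch of the grant-earmark trade-off described above.
# The $100k grant and $5k APC earmark are the text's hypothetical figures.

def research_share_after_apc(grant_total, apc_earmark):
    """Fraction of a grant left for actual research after earmarking APCs."""
    return (grant_total - apc_earmark) / grant_total

share = research_share_after_apc(100_000, 5_000)
# 0.95: earmarking $5k of a $100k grant leaves 5% less for the research itself
```

The point the text makes follows directly: unless appropriations grow by the same fraction, every dollar earmarked for publishing is a dollar subtracted from research.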

One such solution? A highly placed industry insider told OSI in 2019 that, pushed far enough, we might see huge government funders like NIH simply create their own in-house publishing capacity, bypassing commercial publishers altogether. If APC charges become burdensome for US researchers and Congress catches wind of this, you can be sure there will be serious talk about redirecting how this money is spent, whether toward an NIH publishing type of solution or some kind of national market power solution where the US treats publishers like drug companies and negotiates for lower bulk prices. The US is in the right position to do this; although China publishes slightly more papers per year than the US, the US is, by far, the world’s most prolific publisher of high-quality natural sciences research, so the participation of US-funded researchers in the global publishing ecosystem is tremendously important.35

If not APCs, are there any viable alternatives on the horizon? Time will tell. The APC landscape may eventually stabilize, or different open models like subscribe to open (S2O) or even India’s national subscription plan may become the wave of the future. Or, harkening back to the horse manure crisis of the 1890s discussed earlier, developments in artificial intelligence (think ChatGPT), big data, blockchain, quantum computing, and other technologies may end up completely altering the way we digest and communicate research information.

There is also the possibility that external forces could altogether derail our carefully crafted plans and Humpty Dumpty will be difficult to put back together again. It is not implausible that East-West political tensions could eventually result in information geowalls that undermine our OA policy objectives. Or that politicians not only in the US but throughout the civilized world withdraw their support for building high-functioning global databases through which research data can flow more freely across international borders to countries whose leaders are bent on death and destruction. Or that predatory publishing becomes such a problem that the researchers and the legitimate publishing industry will need to join forces to figure out completely new models for sharing research information because the old model has simply become too corrupted.

We might also see some pushback against Plan S from within the EU, or pushback against the Nelson Memo from publishers (by way of, for example, stepping away from participation agreements with PubMedCentral (PMC), the world’s largest green repository). Since it was first launched in 2000, publishers from around the world have been depositing historical data plus AAM versions of their research papers (with links to VORs) into this repository. Since around 2010, publishers have supplied almost 90% of the content in PMC (Williamson 2019). SpringerNature announced in early 2023 that it will no longer deposit the AAMs from its subscription journals into PMC but will instead leave this for authors to do (Crotty 2023 and Clarke & Esposito 2023). To be clear, SpringerNature is a leader in open access publishing and doesn’t have many purely subscription journals left anyway (almost all are either open or hybrid). So the unintended consequence here isn’t what impact SpringerNature’s move will have on the world’s largest green open repository, but what might happen if other publishers follow suit.


The most influential open access policies in the world today are rooted in the belief that open means one thing and that open solutions must fall within a limited range of possibilities. OSI’s open access policy framework ideas, developed over years of research and global, multi-stakeholder consultations, clearly demonstrate that instead, open definitions and solutions are diverse, and the needs, perspectives, and resources of researchers from around the world cannot be captured by policies that are one-size-fits-all. As a result, OSI’s policy framework ideas are broad and adaptable, with the primary goal of helping researchers everywhere succeed, and built on a deep, shared commitment to enhancing equity, following the evidence about what works, and ensuring that research can continue to be a force for good for all of humanity.

From this foundation of equity, objectivity, and researcher success, we can and should collaborate to do something with open access, rather than viewing open as an end in and of itself. OSI’s 2022 research communication surveys suggest that the vast majority of researchers agree with this perspective and approach, but it is crucial to test this assumption by redoing our survey (or a similar one) with a much larger sample size.

The potential of OSI’s approach to open reform is significantly greater than that of our current approach. We are currently constructing a world where everyone has free access to Author Accepted Manuscripts but not official Versions of Record, where “open data” means spreadsheets deposited on GitHub, where only researchers from wealthy countries publish their findings in the highest quality journals, and where the incentives for openness are not aligned with the careers and needs of researchers, all of which leads to a costly and cumbersome global information hodgepodge. This approach should at minimum be supplemented with broad new policies that encourage innovation, generate new value for research and researchers, and contribute to achieving equity between research communities. Doing so will enrich not only the global research dialogue, but global societies and economies as well.

Adopting these types of solutions does not necessarily mean abandoning Plan S, the Nelson Memo, the UNESCO open science policy, or university transformative agreements. Rather, it suggests that in the future, broader and more flexible policies should be woven into a tapestry of open options and approaches that will make open research more feasible for researchers everywhere. Instead of continuing to implement policies that are one-size-fits-all based on rigid criteria, we can create policies that work well for everyone and better meet the original objectives and aspirations of the open access movement.

What if the policy world has spoken, however, and APCs are the future for everyone? Is this policy approach still worthwhile? Absolutely. Practically, the majority of the world outside of the United States and Europe still requires viable open access policies. The concepts presented in this report can assist in laying this foundation and integrating policies from around the world. Philosophically, the world of scholarly communication is squeezing itself into a narrow approach to open access without understanding why, where this will lead, or the repercussions on the developing world. It is crucial to develop this understanding as soon as possible, before we are too far along in this shift, so we can still adjust our policies as needed. Otherwise, we risk missing out on the full potential of open access, and also fragmenting the research communication space which is so integral to research itself.

As noted early in this report, in our quest to develop a better future for research and research communication, it will surely help inform our discussions if we remember where this journey began, with so many different paths and histories. The concept of open scholarship wasn’t suddenly invented with BOAI. Openness is what scholarship is and has always been about. The concept and practice of openness have evolved over centuries, influenced by politics, philosophy, research requirements, technology, and the market. Using BOAI as the cornerstone of contemporary OA policy is inaccurate and unjust, as this approach is neither democratic nor objective.

The correct approach to the future of open policy is to avoid getting entangled in ideological specifics and instead recognize that our original hopes for open access are all the same—born from a shared vision that open methods can help improve the visibility, transparency, and accountability of research, improve public education and participation, level the knowledge playing field, reduce publishing costs, and harness the Internet’s potential to truly demystify information. This is the path originally envisioned by open access, not one in which costs rise, inequity increases, and the rich get richer, but one in which barriers fall, innovation accelerates, and all researchers everywhere benefit. The narrow ideological path to open is a dead end; the true promise of open lies down a road less traveled.

This road less traveled is, oddly enough, a six-lane highway of diversity and innovation compared to the ideological path. On this road, we know where we’re going, we avoid ideological detours, we work together to make the journey rich and meaningful, and along the information highway’s hubs and spokes, development sprouts everywhere. Possibly, this journey takes place without many rules or structures, without lanes or speed limits. Or perhaps at least basic rules and structures will be useful. We know that better communication and collaboration are essential to the development of new and more effective policies, so structures that facilitate this communication and collaboration would undoubtedly be beneficial—perhaps even the same types of policy structures the international community uses for trade, security, intellectual property, and environmental policy, if the UN can support this effort.

It goes without saying that our current efforts to implement global open access policies are in no way comparable to these other global policy efforts. But in order to modify international research communication rules, one must at the very least take a more comprehensive, objective, democratic, international, and research-focused approach. Adopting this new and improved strategy will create a much higher return on investment for research and society than sticking with our current strategy.

The next step in this process is for a body with global authority, such as UNESCO, to convene a large, representative, international gathering of researchers to discuss the policy recommendations presented in this report (a virtual summit would work). Based on what we know through OSI’s work and through the researcher surveys that have been conducted so far, researchers are likely to support the approach put forward in this report, but we should first confirm this recommendation by consulting a larger body of researchers than we have been able to convene. UNESCO can then propose a framework that governments, universities, and research institutions everywhere can utilize to meet their open scholarship goals.

OSI’s observations and recommendations are described in more detail in the many reports and presentations this group has produced since 2015. These reports and presentations are available from the OSI website at and are also listed in the Annex section of this report.


1. OSI has received funding from and worked directly with UNESCO as part of the agency’s Network for Open Access to Scientific Information and Research (NOASIR). OSI is a unique voice in this group, focused solely on delivering evidence-based assessments of open solutions, rather than advocating for particular open ideologies or outcomes.

2. Notable thinkers include Benedikt Fecher and Sascha Friesike (Fecher 2013), Jeroen Bosman and Bianca Kramer (Bosman & Kramer 2017), Samuel Moore (Moore 2017), Philip Mirowski (Mirowski 2018), Jon Tennant (Tennant 2019), and Rebecca Willen (Willen 2020). See OSI Policy Perspective 3 (Hampson 2020) for a more detailed overview of the philosophical underpinnings of open science.

3. Vint Cerf (co-inventor of the Internet) and Keith Yamamoto (UC San Francisco Vice Chancellor for Science and Policy) both highlighted this point in their opening and closing remarks to OSI’s 2017 conference (see OSI 2017). For Cerf, increasing the reproducibility of published research is of paramount importance for the future of research, which requires increasing access, which in turn requires a much more serious focus on digital preservation—from hardware and operating systems to software and formats. Without this preservation and access, there can be no modern scientific record. For Yamamoto, the act of publishing cannot be separated from research. “If you don’t publish your experiment, it is exactly like not doing it.”

4. Several excellent books on the history of science communication touch on this theme, including David Wootton’s “The Invention of Science” (Wootton 2019), Adrian Johns’ “The Nature of the Book” (Johns 1998) and James Poskett’s “Horizons” (Poskett 2022).

5. At the time, SPARC was part of the American Library Association. It became a separately funded lobbying group in 2016.

6. The number of scientific journal articles published doubles roughly every 17 years (Bornmann 2021) due to a steady increase in research spending, the emergence of new research disciplines, a splintering of existing disciplines into new specializations, and other factors (see Annex). Trying to stay abreast of these changes, publishers note that their cost per article published has dropped over this doubling period, but the total costs of subscribing to all available content has still been too high for university libraries to bear. Richard Poynder’s 2019 essay, “Open access: Could defeat be snatched from the jaws of victory?” gives one of the most detailed accounts of this history (Poynder 2019). For an insider’s account of the politics at play, read “Public access policy in the United States” by T Scott Plutchak, Fred Dylla, Crispin Taylor and John Vaughn (Plutchak 2022).

7. In the 20 years since 2002, various declarations have added nuance and complexity to the cri de coeur of BOAI. For example, in 2010, the Panton Principles qualified that publicly funded science should be in the public domain (CC-0) and that licenses that limit the reuse of data (like CC-BY) should be discouraged. DORA in 2012 and the Leiden Manifesto in 2015 both took aim at Journal Impact Factors, arguing that qualitative evaluations of research should matter more than quantitative evaluations. FAIR in 2016 argued that data must be easy to find, clearly accessible, interoperable with other systems, and optimized for reuse. The Lindau Guidelines of 2020 reiterated the need for scientific data and results to be made “openly available,” while adding that research and evaluation criteria must be transparent. Harkening back to the biomedical research declarations (see Box 1), Lindau also stated that science has a responsibility to society to communicate, educate and engage.

8. The 16-person 2002 Budapest meeting was followed by a 24-person meeting in Bethesda in 2003. The Bethesda group built on the Budapest group’s work, adding provisions for how users will enact open access. A 2003 Berlin meeting that attracted around 100 representatives built on the Budapest and Bethesda definitions of open, culminating in the Berlin Declaration on Open Access, which is also a foundational philosophy in open access policy (Max-Planck 2003).

9. Our acceptance has arguably even made us blind to conflicts of interest and hyperbole. For example, the CEO of open access publisher Frontiers was deeply involved in the development of Plan S (Schneider 2019). As for hyperbole, the new US open access policy promotes the merits of open access but lacks factual support for its recommendations (Clarke & Esposito 2022).

10. To help better understand the differences between different types of open and make sure we’re all talking about the same things when it comes to analysis and policymaking, OSI constructed a model called DARTS. This model illustrates how different types of open differ with regard to their discoverability, accessibility, reusability, transparency, and sustainability. The DARTS model is described in more detail in Figure 1.

11. To the Nelson Memo’s credit, it does not mandate specific actions, just specific outcomes, so it remains to be seen whether this latitude will create better outcomes for open than Plan S. The smart money at the moment says that this policy will force the widespread adoption of gold open—APC-funded journal articles where the publishing costs are paid by authors.

12. Or even better, these publishing charges will be paid by a third party like a research foundation. This ideal approach, although still rare, is called “diamond OA.”

13. These points and more are also discussed at length in OSI’s other policy perspectives (available from the OSI website). Again, these are just points raised in OSI, not points everyone in OSI necessarily endorses.

14. See Scaria 2018, Kwon 2022, and Nwagwu 2018 for discussions about the cost burden on Global South researchers, who are much more likely than their Northern counterparts to pay APC costs out of their personal budgets. See Zhang 2022 for the global cost of OA. See the DeltaThink website for ongoing news and analysis of the OA market.

15. Plan S and other OA mandates have set timetables for doing away with the subscription model, and these smaller publishers need help navigating this change and complying with all the new and complex reporting requirements.

16. OSI has tried to raise funding for this work but hasn’t been successful.

17. A separate list of these surveys is included in the References section of this report.

18. The US National Institutes of Health defines “research integrity” as “the use of honest and verifiable methods in proposing, performing, and evaluating research; reporting research results with particular attention to adherence to rules, regulations, guidelines, and following commonly accepted professional codes or norms” (including espousing shared values such as honesty, accuracy, efficiency and objectivity). See NIH 2023. So, for example, high integrity research includes ensuring proper analyses and objective conclusions, while avoiding bias, conflicts of interest, plagiarism, p-hacking or fake data.

19. See the three paragraphs of section 2 of the Nelson memo (Nelson 2022), for example, which describe how rapid sharing of science information led the charge against COVID: “Immediate public access to COVID-19 research is a powerful case study on the benefits of delivering research results and data rapidly to the people.”

20. As an aside, article 27 of this declaration states that “Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.” It’s possible that open access mandates conflict with this article to the extent a researcher’s copyright interests are taken away unwillingly, or a researcher’s data is made public before they have a chance to analyze it.

21. Not to pick on anyone here, but just by way of relevant example.

22. In the case of the US, the exact details on how to operate a democracy have been added over hundreds of years as the needs and structures have become clearer, beginning with the Articles of Confederation in 1781, followed next by the Constitution in 1788 after years of heated debate, and followed thereafter by 200-plus years of amendments, laws, regulations, and agency policies. The Declaration provided the initial vision guidance; the Constitution provided a framework for operating policy.

23. In this case, especially given that we haven’t even begun to wrestle with how developments such as artificial intelligence, big data, blockchain, quantum computing and other technological innovations will affect research and research communication in the coming decades.

24. But this example is memorable so we’ll use it.

25. UNESCO did consult a variety of science groups and institutions as part of its open science policy effort, but these groups did not represent a variety of views and interests with regard to open, and researchers themselves were not consulted.

26. Of course, libraries and funders serve researchers, so they endeavor to craft policies in the best interest of the individual researchers they serve (as well as students, administrators, and others). However, in any large scale and representative sense, researchers are not now nor have they ever been directly involved in the global OA policymaking process. As a group, they are not driving the conversations about need, or creating OA tools and processes. Researchers are also likely to be working on open solutions separate from official open access policy efforts. For example, they may primarily rely on a wide array of open data tools and processes (such as data sharing networks) which are not typically represented in open data policy conversations.

27. See Hampson 2023 for a detailed description and analysis of these survey findings.

28. To the extent this burden even existed before, since subscription costs were covered by libraries and publishing costs were mainly limited to page and color surcharges. Comparing overall system costs is more difficult. A proxy for this determination might be the profit margin of major publishers, and these margins have not decreased during the shift to APCs, so the system costs have probably not come down overall. Indeed, DeltaThink estimates that the OA market is currently much more financially robust than the subscription market (Pollock 2021). Zhang 2022 estimates that the overall costs for our increasingly APC-funded model of scholcomm may now be higher than they were in a subscription-based model.

29. This lack of visibility and focus has meant that funding for research communication reform efforts has long been hit and miss. For example, the National Science Foundation has its own ideas of what research communication looks like, and only funds communication work that fits its thinking. The same is true with other government and nongovernment funders. There hasn’t been any big picture agreement of what this field looks like and what it needs, especially when trying to include industry research as well. By contrast, the good versus evil approach open advocacy groups adopted years ago (stirring up anger over publisher profit margins and directing this anger into effective rallying cries for funding and action) has been a wildly effective way to increase visibility and focus. When your rallying cry is “kill all the publishers!” you’re going to have a more effective fundraising campaign than “let’s find common ground!” (see Lozada 2022 for an interesting overview of the rhetoric of advocacy). This appearance of polarization masks the moderation in this space, however. There are many stakeholders in the scholarly communication reform space who don’t object to change, but at the same time would rather not blindly accept the promises of open activism without more evidence and a more solid plan of action.

30. The conclusions put forward in the 2019 14th Annual Berlin Science Communication Debate sponsored by Bosch are another example of a broad policy recommendation. See Bosch Stiftung 2019.

31. See, for example, Crow and Tananbaum 2020. Greg Tananbaum is a former SPARC consultant, the founder and director of ORFG, and the Head of Secretariat for the NAS Roundtable on Aligning Incentives for Open Scholarship.

32. Quote from Dr. Susan Fitzpatrick, an OSI participant who is part of the NAS roundtable and the president of a foundation that is part of ORFG.

33. In November 2022, Plan S coordinating body Coalition-S reported that 27 publishers had complied with the price transparency requirements of Plan S, covering slightly more than 2,000 journals (of which Wiley journals account for about 75%; see European Science Foundation 2022). There are no firm figures for how many active journals and journal publishers exist in academia, but a ballpark number is somewhere north of 2,000 publishers and 40,000 journals (Hampson 2019b).

34. The nationalism here is just being described, not advocated.

35. This data point can be derived from a number of different angles. See, for example, the Scimago country rankings, the Nature Index, or the highly cited article index (see the US Science & Technology Indicators website).


Hampson, G, and J Steinhauer. 2023 (April). OSI Policy Perspective 6: Considering evidence-based open access policies. Open Scholarship Initiative. doi 10.13021/osi2023.3553


Aksnes, DW, L Langfeldt, and P Wouters. 2019. Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories. SAGE Open, 9(1).

Anderson, R. 2019. OSI Issue Brief 3: Deceptive Publishing. Open Scholarship Initiative. doi:10.13021/osi2019.2419

Arcadia Fund. 2023. Arcadia Fund website.

Archambault, E, G Côté, B Struck, and M Voorons. 2016. “Research impact of paywalled versus open access papers.” Copyright, Fair Use, Scholarly Communication, etc. 29.

Baldwin, M. 2018. Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States. Isis, volume 109, number 3

Basson, I, JP Blanckenberg, and H Prozesky. 2021. Do open access journal articles experience a citation advantage? Results and methodological reflections of an application of multiple measures to an analysis by WoS subject areas. Scientometrics 126, 459–484 (2021). DOI: 10.1007/s11192-020-03734-9

Beard, R. 2021. EIFL Agreements Result in Increased OA Publishing. EIFL website post.

BOAI. 2002. Budapest Open Access Initiative.

Bornmann, L, R Haunschild, and R Mutz. 2021. Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanit Soc Sci Commun 8, 224.

Bosch Stiftung. 2019. 14th Berlin Debate on Science and Science Policy Who Owns Science? Reshaping the Scientific Value Chain in the 21st Century. Conference report. en/project/berlin-science-debate/berlin-debate-2019

Bosman, J, and B Kramer. 2017. Defining Open Science Definitions. Blog post.

Brainard, J. 2021 (Sep 6). No revolution: COVID-19 boosted open access, but preprints are only a fraction of pandemic papers. Science.

Budzinski, O, T Grebel, J Wolling, et al. 2020. Drivers of article processing charges in open access. Scientometrics 124, 2185–2206.

Cabrerizo, F. 2022. Open access in low-income countries — open letter on equity. Nature correspondence.

CERN. 2023. CERN Open Data Policy for the LHC Experiments.

Chan, L. 2023. Platforms and Knowledge Production in the Age of A.I. Improving Discovery and Collaboration in Open Science – The final TRIPLE conference, Virtual presentation. Zenodo.

Clarke & Esposito. 2021 (June). The Brief (Clarke & Esposito monthly newsletter), Issue 35.

Clarke & Esposito. 2022 (Aug). The Brief (special issue).

Clarke & Esposito. 2023 (Feb). The Brief (Clarke & Esposito monthly newsletter), Issue 50.

Coalition-S. 2023. Coalition-S website.

Correa, JC, H Laverde-Rojas, J Tejada, and F Marmolejo-Ramos. 2021. The Sci-Hub effect on papers’ citations. Scientometrics (2022) 127:99-126. DOI: 10.1007/s11192-020-03806-w

Crotty, D. 2022 (Oct 27). Speculation on the Most Likely OSTP Nelson Memo Implementation Scenario and the Resulting Publisher Strategies. The Scholarly Kitchen.

Crotty, D. 2023. Email announcement from SpringerNature to academic authors, sent in February 2023.

Crow, M, and G Tananbaum. 2020 (Dec 18). We Must Tear Down the Barriers That Impede Scientific Progress. Scientific American.

Csiszar, A. 2018. The Scientific Journal. Chicago University Press. ISBN 978-0-226-55323-8

Elsevier. 2022. Choice: Select the publishing model that’s right for you. Elsevier website.

Erickson, A. 2014 (Aug 24). A Brief History of the Birth of Urban Planning. Bloomberg.

European Science Foundation. 2022 (Nov). “More than 2000 journals share price and service data through Plan S’s Journal Comparison Service” (blog post). Coalition S website.

Gadd, E, and CD Troll. 2019. What does ‘green’ open access mean? Tracking twelve years of changes to journal publisher self-archiving policies. Journal of Librarianship and Information Science, 51(1), 106–122.

Hampson, G, M DeSart, J Steinhauer, EA Gadd, LJ Hinchliffe, M Vandegrift, C Erdmann, and R Johnson. 2020. OSI Policy Perspective 3: Open science roadmap recommendations to UNESCO. Open Scholarship Initiative. doi: 10.13021/osi2020.2735

Hampson, G, M DeSart, L Kamerlin, R Johnson, H Hanahoe, A Nurnberger and C Graf. 2021. OSI Policy Perspective 4: Open Solutions: Unifying the meaning of open and designing a new global open solutions policy framework. Open Scholarship Initiative. January 2021 edition. doi: 10.13021/osi2020.2930

Hampson, G. 2019. OSI Policy Perspective 1: Plan S & the quest for global open access. Open Scholarship Initiative. doi: 10.13021/osi2019.2450

Hampson, G. 2019b. OSI Issue Brief 2 (V. 1.1): How fast is open growing? Open Scholarship Initiative.

Hampson, G. 2020. OSI Policy Perspective 2: Common ground in the global quest for open research. Open Scholarship Initiative. doi: 10.13021/osi2020.2725

Hampson, G. 2021a. Competition, Collaboration & Data Sharing in Science: An Overview of Ongoing Challenges. VI BRISPE conference.

Hampson, G. 2023. OSI Policy Perspective 5: Summary of OSI 2022 Research Communication Surveys. Open Scholarship Initiative. doi 10.13021/osi2023.3552

Helsinki Initiative on Multilingualism in Scholarly Communication. 2019. Helsinki: Federation of Finnish Learned Societies, Committee for Public Information, Finnish Association for Scholarly Publishing, Universities Norway & European Network for Research Evaluation in the Social Sciences and the Humanities.

Johns, A. 1998. The Nature of the Book: Print and Knowledge in the Making. University of Chicago Press

Joi, P. 2022 (April 5). Data-sharing in a pandemic: even though scientists shared more than ever, it still wasn’t enough. Blog post on GAVI website (Global Alliance for Vaccines and Immunisation).

Kowaltowski, A, JRF Arruda, PA Nussenzveig, and AM Silber. 2023 (Mar). Guest Post — Article Processing Charges are a Heavy Burden for Middle-Income Countries. The Scholarly Kitchen.

Kwon, D. 2022 (Feb 22). Open-access publishing fees deter researchers in the global south. Nature (news article).

Langham-Putrow A, C Bakker, and A Riegelman. 2021. Is the open access citation advantage real? A systematic review of the citation of open access and subscription-based articles. PLOS ONE 16(6): e0253129.

Lewis, CL. 2018. The Open Access Citation Advantage: Does It Exist and What Does It Mean for Libraries?. Information Technology and Libraries, 37(3), 50-65. doi: 10.6017/ital.v37i3.1

Lozada, C. 2022 (Oct 10). How to Strangle Democracy While Pretending to Engage in It. New York Times.

Lucas-Dominguez, R, A Alonso-Arroyo, A Vidal-Infer, et al. 2021. The sharing of research data facing the COVID-19 pandemic. Scientometrics 126, 4975–4990. DOI: 10.1007/s11192-021-03971-6

Makoni, M and W Sawahel. 2023 (Jan). Open access publishing deal for low-, middle-income countries. University World News.

Mallapaty, S. 2020 (Sept). India pushes bold ‘one nation, one subscription’ journal-access plan. Nature news.

Mangravite, L., A Sen, JT Wilbanks. 2020. Mechanisms to Govern Responsible Sharing of Open Data: A Progress Report

McCarthy, M. 2002 (Oct). A brief history of the World Health Organization. The Lancet, vol. 360.

Max-Planck. 2003. Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities.

Mirowski, P. 2018. The future(s) of open science. Social Studies of Science 2018, Vol. 48(2) 171–203. DOI: 10.1177/0306312718772086

Moore, Samuel A. 2017. A Genealogy of Open Access: Negotiations between Openness and Access to Research. Revue Française Des Sciences de l’information et de La Communication, no. 11 (August). doi: 10.4000/rfsic.3220

Morris, EA. 2007 (Spring). From Horse Power to Horsepower. ACCESS Magazine.

Mwangi, KW, N Mainye, DO Ouso, K Esoh, AW Muraya, CK Mwangi, et al. 2021. Open Science in Kenya: Where Are We? Front. Res. Metr. Anal. 6:669675. doi: 10.3389/frma.2021.669675

NASEM. 2020. Reflections on Sharing Clinical Trial Data: Challenges and a Way Forward: Proceedings of a Workshop. Washington, DC: The National Academies Press.

NASEM. 2022. Roundtable on Aligning Incentives for Open Science. US National Academies of Science, Engineering and Medicine.

NASEM. 2022a. Guide to Supporting Open Scholarship for University Presidents and Provosts. US National Academies of Science, Engineering and Medicine.

Nelson, A. 2022. MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES. US White House Office of Science and Technology Policy (OSTP).

Niazi, S. 2022 (Nov 25). Ministry sets ‘One Nation, One Subscription’ deal deadline. University World News.

NIH. 2023. “What is Research Integrity?” (web page). US National Institutes of Health.

NIH. 2023a. GenBank Overview.

Nwagwu, Williams Ezinwa. “Knowledge Production Ethos and Open Access Publishing: Africa in Focus / L’éthos de la production de connaissances et l’édition en livre accès : Regard sur l’Afrique.” Canadian Journal of Information and Library Science, vol. 42 no. 3, 2018, p. 249-277. Project MUSE 743055

OECD. 2006. Background Document on Public Consultation. Organisation for Economic Cooperation and Development.

OECD. 2011. Regulatory Consultation: A MENA-OECD Practitioners’ Guide for Engaging Stakeholders in the Rule-making Process. Organisation for Economic Cooperation and Development.

Owens, B. 2022. Sci-Hub downloads show countries where pirate paper site is most used. Nature news.

Oreskes, N, and EM Conway. 2011. Merchants of Doubt. Bloomsbury Publishing, ISBN 978-1608193943

OSI. 2017. Report on the 2nd annual conference of the global Open Scholarship Initiative. Open Scholarship Initiative

OSI. 2021. OSI Scholarly Communication Infographic 2: Publishing STM Research. Open Scholarship Initiative

Paul, R. 2016 (Mar 23). How horse poop inspired the New York City stoop. 6sqft.

Perry, A, and S Netscher. 2022. Measuring the time spent on data curation. Journal of Documentation, Vol. 78 No. 7, 2022 pp. 282-304. Emerald Publishing Limited 0022-0418. DOI 10.1108/JD-08-2021-0167

Piwowar, H, J Priem, V Larivière, JP Alperin, L Matthias, B Norlander, A Farley, J West, and S Haustein. 2018. The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles. PeerJ 6:e4375. doi: 10.7717/peerj.4375

Piwowar, H, J Priem, and R Orr. 2019. The Future of OA: A large-scale analysis projecting Open Access publication and readership. bioRxiv (preprint). doi:

Plutchak, TS, HF Dylla, C Taylor, and J Vaughn. 2022. Public access policy in the United States: Impact of the Scholarly Publishing Roundtable. Learned Publishing, 35: 650-657.

Pollock, D and A Michael. 2021 (Oct). “Open Access Market Sizing Update 2021” (blog post). DeltaThink.

Pollock, D and A Michael. 2022 (Jan). “Breaking Out Open Access License Types” (blog post). DeltaThink.

Poskett, J. 2022. Horizons: The Global Origins of Modern Science. Mariner Books

Poynder, R. 2019. Open access: Could defeat be snatched from the jaws of victory?

REF. 2021. UK Research Evaluation Framework.

Research4Life. 2022. Our Vision for 2030.

Rouhi, S, R Beard, and C Brundy. 2021. Left in the Cold: The Failure of APC Waiver Programs to Provide Author Equity. Sci Ed. 2022;45:5-13. DOI: 10.36591/SE-D-4501-5

Salter, M. 2022 (May 4). Guest Post – Open Access in Japan: Tapping the Stone Bridge. The Scholarly Kitchen.

Scaria, AG, and R Shreyashi. 2018. Open Science India Report. OSF Preprints. doi: 10.31219/

Schneider, L. 2019. For Better Science. “Frontiers and Robert-Jan Smits emails reveal how Plan S was conceived” (blog post).

Sci-Hub. 2023. SciHub website.

Sharan, M, C MacCallum, R Anderson, S Heffner, and K Wulf. 2023. Resolved: Open practices make science better – Debate Materials (v1.0). Zenodo.

Simard, MA, G Ghiasi, P Mongeon, and V Larivière. 2022. National differences in dissemination and use of open access literature. PLOS ONE 17(8): e0272730. doi: 10.1371/journal.pone.0272730

Smith, AC, L Merz, JB Borden, CK Gulick, AR Kshirsagar, and EM Bruna. 2022. Assessing the effect of article processing charges on the geographic diversity of authors using Elsevier’s “Mirror Journal” system. Quantitative Science Studies 2022; 2(4): 1123–1143. doi: 10.1162/qss_a_00157

Suber P. 2008. An open access mandate for the National Institutes of Health. Open Med. 2008;2(2):e39-41. Epub 2008 Apr 17. PMID: 21602938; PMCID: PMC3090178

Tennant, J, JE Beamer, J Bosman, B Brembs, NC Chung, G Clement, T Crick, et al. 2019 (January). Foundations for Open Scholarship Strategy Development. doi: 10.31222/

Tennant, J, R Agarwal, K Baždarić, D Brassard, T Crick, DJ Dunleavy, and T Yarkoni. 2020 (March 6). A tale of two ‘opens’: intersections between Free and Open Source Software and Open Scholarship. doi: 10.31235/

Tulchinsky TH, and EA Varavikova. 2014. A History of Public Health. The New Public Health. 2014:1–42. doi: 10.1016/B978-0-12-415766-8.00001-X. Epub 2014 Oct 10. PMCID: PMC7170188

UN. 1948. Universal Declaration of Human Rights.

UNESCO. 2021. UNESCO Recommendation on Open Science.

Vivli. 2023. Vivli Policies in Brief.

Waltman, L, S Pinfield et al. 2021. Scholarly Communication in Times of Crisis: The response of the scholarly communication system to the COVID-19 pandemic. RoRI. DOI: 10.6084/m9.figshare.17125394

Whitman, W. 1871. Democratic Vistas. Republished in 2016 by CreateSpace Independent Publishing Platform, ISBN 978-1530539475

Willen, R. 2020. Justice oriented science, open science, and replicable science are overlapping, but they are not the same. Blog post

Williamson, OP, and CIJ Minter. 2019. Exploring PubMed as a reliable resource for scholarly communications services. J Med Libr Assoc. 2019 Jan;107(1):16-29. doi: 10.5195/jmla.2019.433. Epub 2019 Jan 1. PMID: 30598645; PMCID: PMC6300231

Wootton, D. 2015. The Invention of Science: A New History of the Scientific Revolution. HarperCollins

Zhang, L, Y Wei, Y Huang, et al. 2022. Should open access lead to closed research? The trends towards paying to perform research. Scientometrics 127, 7653–7679.

Researcher surveys cited in Box 1

AAAS. 2022. Exploring the Hidden Impacts of Open Access Financing Mechanisms: AAAS Survey on Scholarly Publication Experiences & Perspectives. American Association for the Advancement of Science.

Davies, T, SB Walker, M Rubinstein, and F Perini (eds). 2019. The State of Open Data: Histories and Horizons. African Minds, IDRC. ISBN 9781928331957. Book pdf from state-open-data-histories-and-horizons.

Editage. 2018. Author Perspectives on Academic Publishing: Global Survey Report 2018.

Faniel, I. 2020. What researchers need when deciding to reuse data: Experiences from three disciplines. NIH Workshop, Session 3: Enabling Data Reuse.

Graf, Chris. 2019 (Nov 4). “Open Research and Data Sharing: Are We Hearing What Researchers Are Telling Us?” Wiley website.

Hrynaszkiewicz, I, J Harney and L Cadwallader. 2021. A Survey of Researchers’ Needs and Priorities for Data Sharing. Data Science Journal, 20(1), 31. DOI:

National Academies of Sciences, Engineering, and Medicine (NASEM). 2020. Reflections on Sharing Clinical Trial Data: Challenges and a Way Forward. Workshop proceedings. Washington, DC: The National Academies Press

Perrier, L, E Blondal, and H MacDonald. 2020. The views, perspectives, and experiences of academic researchers with data sharing and reuse: A meta-synthesis. PLOS ONE 15(2): e0229182. DOI: 10.1371/journal.pone.0229182

Scaria, AG, and R Shreyashi. 2018. Open Science India Report. OSF Preprints. doi:10.31219/

Segado-Boj, F, JJ Prieto-Gutierrez, and J Martin-Quevedo. 2022. Attitudes, willingness, and resources to cover article publishing charges: The influence of age, position, income level country, discipline and open access habits. Association of Learned and Professional Society Publishers. doi: 10.1002/leap.1455

Staunton, C., S Slokenberga, and D Mascalzoni. 2019. The GDPR and the research exemption: considerations on the necessary safeguards for research biobanks. Eur J Hum Genet 27, 1159–1167. doi: 10.1038/s41431-019-0386-5

Stuart, D, G Baynes, I Hrynaszkiewicz, K Allin, D Penny, M Lucraft, et al. 2018. Whitepaper: Practical challenges for researchers in data sharing. figshare. Journal contribution.

Taylor & Francis. 2019. Taylor & Francis Researcher Survey.

UNESCO. 2021. UNESCO Recommendation on Open Science.

Wiley. 2019a. Wiley Open Research Survey.

Wiley. 2019b. Wiley Open Data Survey.

Other researcher surveys

Digital Science. 2019. The State of Open Data. doi: 10.6084/m9.figshare.9980783

Funk, C, and M Hefferon. 2018. “As the need for highly trained scientists grows, a look at why people choose these careers” (blog post). Pew Research Center.

Graf, C, D Flanagan, L Wylie, et al. 2019. The open data challenge: An analysis of 124,000 data availability statements, and an ironic lesson about data management plans. Authorea. October 31, 2019. doi: 10.22541/au.157253515.58528497

Nature editorial. 2021. “Industry scores higher than academia for job satisfaction” (news item). Nature. doi: 10.1038/d41586-021-03567-3

Taylor & Francis. 2014 (June). Taylor & Francis Open Access Survey. Oxford.

Tenopir, C, E Dalton, L Christian, M Jones, M McCabe, M Smith, and A Fish. 2017. Imagining a Gold Open Access Future: Attitudes, Behaviors, and Funding Scenarios among Authors of Academic Scholarship. College & Research Libraries, 78(6), 824. doi: 10.5860/crl.78.6.82

WAME Survey for OSI2017 conference—selected results. 2017. World Association of Medical Editors. As used in Barret, K, P Baskin, S Murray, A Packer and M Winker. 2017. OSI Journal Editors Stakeholder Report. OSI2017 conference. doi: 10.13021/G8osi.1.2017.1908
