Helping science succeed

OSI debates Plan A

Since late 2019, OSI participants have been discussing what our “Plan A” should look like (this is a working title—the final name may change). Plan A represents OSI’s first draft of a framework for global, inclusive, sustainable, and achievable action on scholarly communication reform. Feedback from outside the OSI community is welcome. Please email [email protected].

Plan A (v. 3.0, December 29, 2019): An inclusive, rapidly achievable, sustainable approach to global scholarly communication reform


The Open Scholarship Initiative (OSI) is the world’s only large-scale, high-level, multi-stakeholder effort focused on developing an inclusive, rapidly achievable, sustainable approach to global scholarly communication reform. OSI comprises top leaders in scholarly communication from over 250 institutions around the world, representing 27 countries and 18 stakeholder groups. OSI’s initial plan, presented here as Plan A, is a starting point for discussion on developing a global roadmap for reform. Partners in Plan A are needed for funding, development, and implementation; feedback from the global stakeholder community is also welcome. This plan will be revised over time in collaboration and consultation with the open research roadmap effort currently underway at the United Nations (of which OSI is also a part).


Plan A proposes that, beginning in 2020 and continuing for a period of five years, the global scholarly communication community will cooperate and collaborate on three main categories of action, in order of priority:

  1. Studies: We need to develop a better understanding of the scholarly communication landscape. Our community’s lack of understanding about key issues has, for the last 20-plus years, made it difficult to create effective reforms. To this end, we propose working collaboratively to support and conduct studies that will help us understand the scope of predatory publishing, create a viable alternative to the impact factor, test whether embargoes can be reduced or eliminated, measure the impacts of open research, model how to change the culture of communication in academia, understand definitively whether a global flip to APCs will work, and more. OSI has identified 12 priority studies that need to be conducted, has already mapped out protocols for some of these studies, and has lined up world-class researchers to help manage some of this work. See the annex section for details—additional recommendations are welcome.
  2. Infrastructure development: The global scholarly communication community needs new products, services, tools, websites, and other innovative resources to help encourage, achieve, sustain and monitor reforms in this space. Some of these items include a common infrastructure solution (possibly an all-scholarship repository built using CERN’s Invenio; the precise details of this solution need to be more thoroughly investigated), an APC discount/subsidy database, an open index of scholarly publications (along with an open impact factor), an APC price comparison tool, a Yelp site for scholarly publishing, repository upgrades, publisher standards, an annual “state of open” survey and more. We propose working together to develop these and other needed items so reforms can be more quickly and easily adopted, and so the scholarly communication landscape can be more quickly and easily improved and maintained. Seven priority projects have been identified, as detailed in the annex section. Additional recommendations are welcome.
  3. Education/outreach: The scholarly communication community needs to be better informed with regard to opportunities, impacts, processes, options, and so on, and also needs to have better systems in place to listen to stakeholder feedback and create/adjust solutions accordingly. Of particular focus on the listening side, we need a much clearer and more detailed understanding of exactly what we hope to accomplish with reforms so we can make sure to answer the right questions, collect the right data, and build the right systems. New international meetings are part of the needed approach here; so too is greater alignment between various existing roadmap efforts (which OSI has been working on; this is called out below as a separate action item since it is a distinct subset of education and outreach). The education and outreach needs in this space are vast and the actors are numerous. Specific recommendations for capacity building, collaborative action, new initiatives and so on are welcome.

In addition to these three main action items, Plan A also proposes that together, we:

  1. Pilot open solutions in one area of urgent need, such as climate change research.
  2. Develop sustainable solutions for meeting urgent needs, such as (but not limited to) zero-embargo compassionate use programs for patient families, and a more robust R4L program for lower-resourced regions and institutions.
  3. Hold meetings where all stakeholders can discuss the outlines of a new global roadmap for open scholarship.
  4. Continue to advise and collaborate in UNESCO’s global roadmap effort (including hosting and participating in meetings).
  5. Combat predatory publishing through education, improved standards, and other means (some but not all of which are covered in the first three action items).
  6. Work to better understand the needs, goals and concerns of researchers in different disciplines, fields, labs, regions and institutions, and at different career stages (researcher perspectives vary widely, meaning that one-size-fits-all solutions are unlikely beyond establishing some fundamental common-ground agreement).
  7. Plan for and begin building a future that meets these varied needs and goals and integrates open in such a way that it is embraced by researchers, advances research, and increases the value of research to society.

This work will be guided by 12 principles that represent a global, multi-stakeholder, common ground perspective on the future of scholarly communication. Plan A’s work and work products will be:

  1. Researcher-focused. Research communication tools, services and options need to be developed with heavy input from the research community, with solutions/approaches driven by researcher needs and concerns.
  2. Collaborative. Successful and sustainable solutions will require broad collaboration, not just to ensure that all perspectives are considered, but also to ensure there is broad ownership of ideas.
  3. Connected. There are a great many interconnected issues in scholarly communication. We can’t just improve “open,” for instance, without also addressing impact factors, peer review, and predatory publishing. Reforming scholarly communication will require a systemic approach.
  4. Diverse and flexible. There are no one-size-fits-all solutions to scholarly communication reform. Instead, there are many different pathways to reform, including many pathways that have not yet been conceived/deployed. Diversity, creativity and flexibility in this solution space should be encouraged so we can focus on our community’s common goal of improving scholarly communication instead of insisting on common strategies or philosophies for improvement.
  5. Informed. We need a better understanding of key issues in scholarly communication before moving forward. For instance, what is the impact of open research? The more accurate and honest our assessments, the more accurate and honest our reform efforts can be, the easier these efforts will be to promote, and the more successful they will be.
  6. Ethical and accountable. We need enforceable, community-developed/driven standards to ensure the integrity of journal publishing, repositories, and other related activities/products, and to ensure that unethical approaches are not embraced.
  7. Directed. We must discuss and plan for what the future of scholarly communication means, beyond just having access. For instance, we need to identify precisely what we plan to do with open information, where we will need data interoperability, what tools and procedures we will need to achieve this interoperability, and so on.
  8. Equitable. Researchers everywhere need to be able to access and contribute information to the global research corpus with minimal barriers. To the extent practicable, research information—particularly information central to life and health—should not be unreasonably constrained by issues such as high access costs, poor journal indexing, and a lack of capacity-building programs.
  9. Sustainable. Scholarly communication reform approaches need to be sustainable, which flows from all the other elements in this list. That is, the reform solutions we design need to be achievable, affordable, popular, effective, and so on.
  10. Transparent. This community needs to maintain as much transparency as possible in this effort (with regard to pricing, usage, ownership, and so on) in order to address the trust issues that have plagued this space for so long.
  11. Understandable and simple. This community needs to agree on a few simple, high-level, common-ground goals for scholarly communication reform—not anything terribly specific with regard to gold this or CC that, but a general set of goals that are understandable, achievable, and adaptable. By setting out general goals that can be easily achieved, participation can be made simple and easy, with low barriers to entry.
  12. Beneficial. In the end, these reforms need to benefit research first and foremost. While the argument for improving benefits to society is compelling, these benefits need to be matured carefully, deliberately, and realistically in order to ensure that societal benefits are indeed being conveyed as intended, and that research is not being harmed in the process.

It is proposed that the international research stakeholder community jointly manage Plan A through OSI. A detailed governance structure for this plan will be developed over time in consultation with participants and funders. Our hope is that this plan will be fully launched by mid-2020, continuing for as long as funding and support persist.

By working together on realistic, robust, collaborative solutions that improve the capacity of research for all researchers everywhere, Plan A envisions that within the next 20 years we will arrive at an “Open Renaissance”: the research ecosystem, enriched with more data, more connections, and more tools, will grow exponentially more powerful, further catalyzing innovation and improvement in research. New fields and directions will emerge as data and repositories make it possible to “connect the dots”; funding efficiency will improve and discovery will accelerate; the social impact of research will surpass today’s (including improved literacy, public engagement, and public policy impact); and knowledge will become more of a global public good, with society reaping the benefits.


The Open Scholarship Initiative is a global, multi-stakeholder effort that has been working in the scholarly communication space since 2015. OSI’s overarching goals are to improve the openness of research and scholarly outputs, lower the barriers for researchers and scholars everywhere to engage in the global research community, and increase opportunities for all countries and people everywhere to benefit from this engagement. OSI is managed by the Science Communication Institute, a US-based 501(c)(3) nonprofit public charity.

OSI fills the “NOASIR” role for UNESCO, serving as this agency’s Network for Open Access to Scientific Information and Research. What this means is that UNESCO is relying on OSI to support and cultivate the international open environment and connect stakeholders, support research and development in open technologies, policies and practices, defend access to scientific journals for developing countries, and serve as a laboratory for innovation and a catalyst for international cooperation. OSI is also consulting with UNESCO’s Natural Sciences Directorate, assisting the directorate in its effort to develop a UN-wide approach to the future of open science at the ministerial level.

OSI currently includes around 400 high-level representatives from 27 countries, 250 institutions, and 20 stakeholder groups in research and scholarly communication.



OSI will begin conducting studies that target key issues in scholarly communication where a lack of firm understanding is making it difficult to create effective policy reforms. These studies will be “leveraged” through OSI, not outsourced. That is, OSI has enough internal and volunteer capacity to do all the study design, oversight, writing and analysis in-house. Grant funds will be used mostly for data-gathering and statistical analyses. The OSI team will identify and hire researchers (some of whom may already be OSI participants) to conduct original research work as needed, and hire statisticians to crunch numbers and perhaps take a first pass at analysis, but the final writing and analysis will be done in-house by OSI participants. In this way, we can get the most studies possible with the smallest outlay of time and money. The studies we will conduct are as follows:

  • DECEPTIVE/PREDATORY PUBLISHING: Exactly how fast is deceptive/predatory publishing growing, how much of it exists, and what are its dimensions (by region, discipline and so on)? Little is known definitively about this phenomenon, and yet it is perhaps the single most disruptive influence in publishing today (Anderson 2019; Strinzel 2019). As more emphasis is placed by libraries and funders on open access publishing, more open access publishing options are becoming available to authors. Some of these options are legitimate, some are not. This study will describe what we already know about predatory publishing, and will also enlist the aid of leading researchers who are part of OSI to suss out long-term data about the growth of predatory titles over time. A rough outline of this study is as follows:

Title: Using new and improved data to assess the academic journal landscape

Section | Description | Pages | New or novel? | Notes | Lead author
Intro | Overview | 0.5 | No | Why can’t we just do a count in Google? For one, they won’t let us. Second, there’s no accounting for quality. The future needs to be built on systems that are reliable and accountable. | Glenn Hampson
What is a journal? | Essay | 1 | No | | Rick Anderson
The growth of journals and journal articles | Statistics | 2 | Yes | This is a known concept but will use new/better data from 1findr. | Eric Archambault
Breaking down the nature of this growth | Statistics | 3 | Yes | Same data as above. Focus on regions, disciplines, rates, and types (open, subscription, hybrid, other; predatory, indexed, non-indexed), plus—from other studies—how this compares to growth rates for “other” types of science communication like white papers, blog posts, and preprints; who is publishing and why; etc. | Eric Archambault for new material, Glenn Hampson for the rest
Discerning legitimacy | Overview | 0.5 | No | A quick case for how we define real science publishing and how evolving publishing norms are making it easier to push these boundaries. | Rick Anderson
The statistics of legitimacy | Statistics | 4 | Yes | A detailed look at what Cabell’s is doing, plus a detailed breakdown of the predatory landscape (rates, regions, disciplines, etc.) and of what kinds of “violations” exist. How much of this “predatory” work is mixed in with real work, and how does this change the growth estimates above? This will need to be broken down by region and discipline; the aggregate numbers won’t be revealing. | Simon Linacre
Testing assumptions | Statistics | 4 | Yes | Randomly sample Google search results on various topics from different parts of the world to see if what comes up matches what “should” come up in terms of significance and legitimacy. (This is important insofar as GS is the primary search mechanism for a majority of the world’s researchers.) For instance, does searching for “cancer vaccine research” return real work more often than not, or lots of predatory work? Understanding this will help us understand how worried we should be about fake science corrupting our knowledge base. | To be determined
Re-thinking the landscape | Informatics | 2 | Yes | How else can we visualize what’s happening in scholarly publishing? For instance, would it make more sense to group journals into “read” and “not read” (and/or relevant and not relevant, compliant and noncompliant, etc.)? By audience saturation? In other words, is it necessary to think in terms of the growth of articles and journals if what’s actually being used/read is remaining essentially unchanged (save for new journals covering new fields), or if journals are born and quickly die? | Glenn Hampson et al.
Issues and recommendations | Policy | 3 | Yes | What issues are important in this landscape (like inclusion and preservation)? What issues are preventing us from tracking academic scholarship more closely (ISSN errors, naming differences, indexing problems, completeness issues like poor inclusion of SciELO journals, etc.), how prevalent are these, and what can/should we do to remedy them? Is a global open index a solution (plus a global open impact factor)? These ideas will be explored more fully in a forthcoming OSI project. | Glenn Hampson et al.
  • IMPACT FACTORS: Impact factors are one of the most destructive, most corrosive measures used in science today (OSI 2016a, Bosman 2013). They are also one of the most important and widely used. How can both of these statements be true? Because impact factors are the statistic we love and hate—we know they are more or less meaningless (Lozano 2012), but we also know that high impact factor work translates into promotions and grants. And so we turn a blind eye to their shortcomings and keep using them. Much has been written about the use and misuse of impact factors (i.e., explaining what they were intended to measure versus how they are promoted), alternatives to the impact factor, and calls for broadening the metrics we use in assessments (particularly RPT). But nothing has ever been written about the statistical validity of this measure. In fact, the impact factor isn’t mathematically valid at all for the purposes of measuring “impact,” for several reasons, the most significant of which are that it is an aggregate journal-level metric and not an article-level metric, and that citation counts are simply totals, neither positive nor negative, so a bad article could be highly cited as an example of what not to do. After dismantling the mathematical foundation of impact factors, this study will propose how to remake the impact factor to improve its use. It will also rethink policies regarding how we use future impact factors in order to avoid perpetuating the “arms race” we have now, where publishing in high-impact-factor journals is seen (incorrectly) as a proxy for quality, relevance and impact (dismantling this narrative will require evidence). Finally, this study will review the existing literature for an explanation of why we use these measures in the first place (plus an overview of who uses them and how), and review other proposed means of measuring impacts (existing tools, new tools, etc.).
One final approach that may also be explored as part of this paper, depending on how far along the development of a proposed product has progressed (see “open impact factor + open index”) is a new “open impact factor” measure (built on the new math but using a global index) that everyone can have/use and that doesn’t discriminate against small/new publishers. Currently, only journals indexed by Clarivate (representing a narrow and elite set of journals) can have an actual impact factor calculated; everyone else needs to use a fake impact factor (like the Global Impact Factor) or invent one out of thin air. Creating an open impact factor will first require creating a global index, which is described in more detail in the open impact factor + open index product proposal.
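The journal-level-versus-article-level problem described above can be made concrete with a few lines of code. The sketch below computes the classic two-year impact factor for a hypothetical journal and compares it with the median article’s citation count; all figures are invented for illustration, and the skewed distribution shown (a few highly cited articles, many rarely cited ones) is the pattern typically observed in real citation data.

```python
# Illustration of why a journal-level mean (the classic 2-year impact factor)
# says little about individual articles. All numbers below are invented.

from statistics import median

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Classic 2-year JIF: citations in year Y to items published in years
    Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Per-article citation counts for a hypothetical journal: heavily skewed,
# with one blockbuster article dominating the total.
per_article_citations = [310, 45, 12, 6, 4, 3, 2, 1, 1, 0, 0, 0]

jif = impact_factor(sum(per_article_citations), len(per_article_citations))
print(f"Journal impact factor: {jif:.1f}")                 # mean-based: 32.0
print(f"Median article citations: {median(per_article_citations)}")  # 2.5
```

The gap between the mean-based figure (32.0) and the median (2.5) is exactly the statistical objection raised above: the journal-level number is driven by outliers and is not a valid predictor of any single article’s impact.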
  • EMBARGOES: How necessary are embargoes? Publishers insist that a 6-12 month delay is necessary between publication and free public access in order to protect subscription revenues. Critics contend that this time could be shortened—that there are other ways to protect revenue streams that don’t involve long paywalls. To date, the only estimates of ideal embargo length have come from citation half-life studies. In order to generate more “real” data on this matter that directly answers the question of how long is too long (instead of inferring this from half-lives), we will conduct a blind study with the cooperation of publishers (Elsevier volunteered to participate in this study in 2016; we will revisit this offer and see if we can also include other publishers). This study will reduce or eliminate embargoes for a select number of publications and will monitor the impact of this action on revenues. If the impact is negligible, the evidence may suggest that embargoes can be shortened (or that revenue loss can be offset through other value-added access means—e.g., increasing access to the article but not the dataset, which will lead to more purchases of the dataset). The need for embargoes remains a major sticking point in open debates. Figuring out how to make progress on this issue is important to the future of open.
  • IMPACTS: Not to be confused with “impact factor,” understanding the actual impacts of open in research, education and society is vitally important. This is more of a meta study than anything, but it’s needed to better “sell” the advantages of open (or to better understand why open is not selling and what we really need in open—more standardization of data, for instance). The OA citation advantage is the most visible attempt so far to quantify open impact, but studies trying to measure even this one statistic have reached different conclusions to date. Eric Archambault’s most recent study (Science-Metrix 2018) is the most authoritative, but even this study didn’t look at the full spectrum of open products, just “gratis” (which crosses several categories of open). What we need to know is much more granular: what kinds of green open are the most effective (for instance, green in institutional repositories, on preprint servers, or elsewhere?), how well is gold received by researchers (and what type), and what about bronze, public access, and so on? In other words, exactly what kind of open is needed to improve visibility and reuse? What kind of open works best and why (what factors are most important—readability, findability, reusability, all of these, or none of the above)? What measures other than citation might we use to triangulate on actual impact (since citations can be influenced by press coverage, topic salience, etc.)? What correlates can we note between open and research uptake, R&D investment, and more? The entire corpus of open work to date has taken it as an article of faith that all open is created equal and that open itself—vaguely defined as it is—is meritorious. We need to get a clearer idea of what we’re working to achieve and why, beginning with understanding how the current constellation of open outcomes is being received in the marketplace.
(Possible OSI research leads: Rob Johnson, Caroline Wagner, Eric Olson; Rob’s possible time frame for working on this is June-Aug 2020)
  • PUBLISHER PROFIT MARGINS: A major point of contention in this space is how much profit Elsevier makes. Critics say 37 percent. The company (in correspondence with the OSI list) says much less—that Elsevier’s income and expenses are entangled with those of its parent company RELX and that revenues come from many sources not related to academic publishing. A clearer picture is simple enough to arrive at by hiring auditors to examine the books (not just of Elsevier but of other major publishers as well) and issue an authoritative analysis, and also by reviewing the scholarship on how to properly interpret profit margins within and across industries. We will also review the landscape of funding and costs for universities to see how publishing fits into all of this. Charges of profit-mongering and double-dipping have fueled attacks on commercial publishers for at least 15 years now, and these attacks have been used as an excuse to keep publishers from participating equally in global conversations about the future of open. To the extent we can help shed more light on these numbers, it will help provide a firmer foundation of transparency and realistic expectations for open reforms. In order to develop a fuller understanding of the underlying tensions in this debate—it’s largely just a push and pull between libraries and publishers, with each accusing the other of financial misdeeds—we may also find merit in expanding this study to include a look at library finances as well. The publishers with whom we have spoken are willing to participate in this study insofar as providing requested data.
  • CONNECTEDNESS/STANDARDS/ROADMAP: How related are different concepts and applications of open (across coding, books, journals, etc.), and where can we merge these concepts, applications and even open efforts? As we (not just OSI, but the United Nations, scholarly societies and others) begin developing new roadmaps for the future of open, it behooves all of us to collaborate not just within scholarly publishing, but between journal publishing, book publishing, data science, and so on. OSI is actively pursuing partnerships in the roadmap effort on several fronts but needs to have a roadmap of its own showing who is working on what, what concepts overlap, what concepts differ, and how this landscape of interests and perspectives fits together. From this work, it should be possible to create a new global conversation around global open standards and a global open roadmap built on common ground and connectedness and that applies broadly to all fields and all open efforts. From this position, we can establish policies that are flexible and adaptable and that all pull in the same direction toward more open. A study like this hasn’t been conducted before—this would be a first attempt to define the full landscape of open.
  • NEEDS: Tying in closely to our impact study, the scholarly communication community also needs a study that looks at how much open is needed by field (for instance, is CC-BY licensing always necessary everywhere?). As noted in the impact study description, open efforts have long proceeded from the assumption that we know what works and what the market needs, but in fact we have no idea. This study would first survey existing literature to get a fuller picture of what we already know with regard to researcher wants (primarily various author surveys conducted over the years by publishers and universities). Information gaps would then be filled via new, global surveys, facilitated with the assistance of Editage/CACTUS and others in OSI who have volunteered to help. Getting a broad sense of this demand across regions and institutions, as well as across disciplines and faculty types (as is usually done), is critical insofar as trying to ascertain global needs and perspectives and not just Northern/Western needs. Getting a better sense of what kind of open we should be working toward is also critical. The impact study will look at this from a market perspective, assessing what’s being used. The needs study will look at this from an aspirational perspective—what needs are present that are not being met? Do current solutions align with marketplace options? Is there alignment between what researchers are asking for and what the marketplace looks like?
  • PUBLISHING IN RPT: Publish or perish has been the norm in academia for decades now. This dynamic is not abating; indeed, it’s accelerating (Plume 2014). Around the world, we see a wide variety of influences that are keeping the number of research articles high, including requiring publishing for a PhD (India), awarding cash bonuses for publishing in high-impact journals (in China; Montgomery 2018), having journal articles ghost-written to improve resumes (Russia), and, everywhere, having more opportunities available to publish (faster, at lower cost, as part of large multi-author teams, as part of grant requirements regardless of whether study findings are complete or meritorious, as salami-sliced articles, as a consequence of increased specialization, and more). Concurrent with this avalanche of paper, there is also increasing sloppiness in the system wherein tenure committees aren’t necessarily valuing the quality of publications—that is, publishing in predatory journals may not always be noticed or questioned (Shamseer 2016). OSI has debated this issue at length and there aren’t any good answers. Do we expand the scope of what “counts” in publishing to include blog posts, videos, press interviews and more? Do we lower the bar and allow preprints to count for more? Do we create professional standards such that publishing in a non-indexed journal (see tech project on indexing) is disallowed? Or, even more aggressively, do we create standards that say publishing in such journals is unethical? OSI isn’t the only group that has debated this issue. What is needed is a landscape analysis of RPT practices worldwide with regard to publishing. From this analysis, we will develop a set of best-practices recommendations for UNESCO and national departments of education. Once we lower the pressure to publish in academia, it will become easier to rationally discuss and implement solutions aimed at improving the quality and quantity of research publishing.
Until then, and without addressing this systemic issue, reform measures will simply be reactive.
  • PEER REVIEW: Peer review is what separates vetted science from non-vetted science. It’s a critical part of the current scholarly publishing ecosystem. Peer review is also unpaid labor and an incredible burden to many in academia. To this end, different methods of peer review are evolving and being tested—for instance, post-publication peer review, which allows articles to be quickly shared and then refined via broad feedback in real time online. Peer review is also being faked—deceptive journals promise peer review but deliver only a cursory editorial review instead, if that. OSI has debated this issue at length and is well-positioned to author a landscape analysis of the current state of peer review, along with best practices recommendations for UNESCO and national departments of education. Without figuring out the right way forward for peer review, our open efforts will flounder—we can’t create more open without ensuring the scientific integrity of these articles. We also need to develop and share best practices with the global community in an authoritative way, which this landscape analysis will facilitate. This effort will be focused on settling the highest priority concerns in peer review (Tennant 2019): what is peer review anyway, what value does it add, how do we define expertise, how do we protect diversity and more. These questions will be answered through broad stakeholder polling and consensus. This study will be part fact-finding, part survey, part consensus cultivating, and will involve meetings, email discussions, proposal drafts floated to institution heads, and collaboration with standards agencies like NISO and editorial agencies like WAME (which all participate in OSI).
  • GLOBAL FLIP: California’s library system, cOAlition S, MPDL’s OA2020 Initiative, and other influencers in the global scholarly communication system all believe quite firmly that a global “flip” to open is economically feasible, wherein closed subscription publications convert to APC-funded open publications. This belief is grounded at least in part in a 2015 study from the Max Planck Digital Library (Schimmer 2015) suggesting that the world has enough capacity to make this flip possible and that costs will come down as a result of APC competition. These data have never been examined closely in another research piece (they have been challenged in numerous blog posts since then), but they need to be so the global community can assess this strategy more objectively. Mounting evidence suggests that authors do not comparison shop for APCs (Tenopir 2017), so there is no downward pressure on prices. What we have instead are escalating prices, and a shifting of the cost burden from institutions to authors, all of which is only widening the gap between haves and have-nots. Are APCs the way to go? Maybe, maybe not. The fact is we don’t know. More research is needed. This study will go back to square one and re-examine the data and assumptions of the original global flip study, updating data points and re-examining assumptions such as price competition based on new studies. It will then look at the variety of pricing models that have emerged in the global publishing system over the last 10 years (such as PAR) and estimate what may actually be possible—that is, estimate what the market may actually be looking for and what reforms may be achievable. Based on this analysis, this study will search for the “sweet spot”—maybe, for instance, a global flip to PAR in 10 years, bracketed on the high and low end by layers of subscriptions and preprints, or whatever the case may be. This analysis is important insofar as trying to visualize the end zone for reforms.
We know what problems exist and what changes need to be made. What we don’t know is where the market is headed. Having a better idea of this will allow the global community to start pulling in the same direction and improve collaboration on measures that aim for the same goal.
  • GLOBAL RESEARCH PUBLISHING STANDARDS: Figuring out how much deceptive/predatory publishing exists, what it looks like, and who is using it and why (see the previous study proposal on deceptive/predatory publishing) is just part of the effort to improve global research publishing. Another critical part is figuring out what research publishing standards we need. Several organizations in scholarly communication have discussed best practices over the years (most notably editorial and umbrella groups like NISO, WAME, COPE, and OASPA), but these discussions have stopped short of creating and issuing internationally backed recommendations for publishing standards and methods for enforcing them. This study will first gather the best-practices recommendations discussed to date, update these with input from the organizations represented in OSI (which include editorial and umbrella groups plus over 200 other organizations), and then evaluate realistic measures for creating and enforcing standards for the global research publishing community—standards that will be observed not just by publishers but by others as well, most notably funders and universities. The goal of these standards will not be to erect barriers to publishing, but to map out the boundaries of what we mean by “open,” “publishing,” “peer review,” and other terms that lack a clear definition. They will also define the minimum expectations we should have for publisher competency, so that the global research publishing enterprise, as utilized by universities in particular, is consistent and well defined. Since this study will rely on findings from several other OSI studies, it will need to wait until those studies are complete before beginning.
Creating thoughtful, fact-based, widely-adopted standards for global research publishing is critical to ensuring that research publishing grows in a way that represents the needs of researchers and not just market forces (e.g., less deceptive publishing, less pressure to publish in journals, etc.).
  • REPLICATING THE SCIELO MODEL: SciELO is one of the most distinctive organizations in the world of scholarly communication. It is a soup-to-nuts provider of everything from publisher training to editorial services to data and repository management, serving as a pioneering open access network and hub for dozens of journals across Latin America. It is a model for how the publishing industry could evolve in the global south to ensure improved focus and better access. We will undertake a study to determine the feasibility of expanding the SciELO model from Latin America to CAMENA (Central Asia, the Middle East and North Africa), Sub-Saharan Africa, and Southeast Asia. Is there a need in these regions? Interest? Potential financial support? Should these new SciELOs operate independently or in cooperation with one another? Based on the outcome of this study, we will approach UNESCO and other possible funders and partners with financing and development proposals. (An initial version of this plan was raised last year at SciELO-20 with the heads of SciELO and its parent body FAPESP, as well as UNESCO.)
  • IMPROVING SCHOLARLY PUBLISHING RESEARCH: The majority of research into scholarly publishing issues and reforms isn’t adequate. This is an impossible statement to corroborate—it’s an observation based on the volumes of research the OSI group has reviewed over the past four years—but too much of this research exhibits a fundamental misunderstanding of the nuances of this field. In an effort to promote better research, we will write and publish a paper describing the conditions researchers need to keep in mind when doing research on open. For instance, when researching predatory journals, Beall’s List should not be used as a starting point, since this list is not transparent and is no longer supported (the criteria for inclusion were always taken on faith—Beall never made them public—which is not how science should be done). Also, we cannot assume “open” means the same thing as open access: too much research tracks “open” without understanding that it exists in many variations, and gold/green CC-BY open is just one such variation. Nor can we treat databases like Scopus as representative of all journals; this database is, in fact, narrow and highly selective. There are many more observations about scholarly publishing research we have noted over the years; publishing these as guidance will help improve the quality of future research in this area.
  • OTHER: The OSI group is constantly talking. It’s quite likely that other study ideas will be raised. If some of these ideas are meritorious, they will be added to this grant proposal with permission and pursued if possible.


OSI will also begin developing tech products and solutions that fill key needs in the scholarly communication ecosystem where a lack of government and/or private sector action has hindered the progress of open reforms. As with OSI studies, these products and solutions will be “leveraged” through OSI, not outsourced. That is, OSI will design and oversee development in-house, and NSF funds will be used for certain programming and other work that cannot be handled in-house. The OSI team will identify and hire personnel to conduct this work as needed (some may already be OSI participants), but final design decisions and assessments will be made in-house by OSI participants. All of these products and solutions will fully deploy before 2025. Grant funds (if available) will be used to maintain them over the grant period, but all solutions will become self-supporting through various combinations of advertising, sponsor fees, and member fees for content providers (none will have user fees for basic access, although premium access models may emerge as a means of support). The products/solutions OSI will consider building are:

  • APC DISCOUNT/SUBSIDY DATABASE: There is no database of article processing charges (APCs) or of subscription discounts or subsidies; researchers looking for charges, discounts, or subsidies must search for these one at a time. Research4Life leaders (who are part of OSI) have noted that building such a resource would be immensely helpful to authors, particularly those from the global south, where discounts and subsidies are most needed and price comparisons matter most. OSI researchers will collect and input initial APC and discount/subsidy data over a period of six months, after which publishers and discount/subsidy providers will be given instructions on how to keep their data current. Data from this system will feed into other systems we develop (see, for instance, the Yelp product).
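To make the proposal concrete, here is a minimal sketch of how such a database might be structured and queried. All table names, fields, and sample figures below are illustrative assumptions, not a settled design:

```python
import sqlite3

# Minimal sketch of the proposed APC/discount database.
# Table and field names are assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE apc (
    journal_issn TEXT PRIMARY KEY,
    journal_name TEXT NOT NULL,
    publisher    TEXT NOT NULL,
    apc_usd      REAL NOT NULL          -- list price; 0 = no APC
);
CREATE TABLE discount (
    journal_issn TEXT REFERENCES apc(journal_issn),
    country_code TEXT NOT NULL,         -- ISO 3166-1 alpha-2
    percent_off  REAL NOT NULL,         -- e.g. 100 = full waiver
    provider     TEXT NOT NULL          -- publisher, Research4Life, etc.
);
""")

# Hypothetical sample rows (not real journals or prices).
conn.execute("INSERT INTO apc VALUES ('1234-5678', 'Journal X', 'Publisher Y', 2000.0)")
conn.execute("INSERT INTO discount VALUES ('1234-5678', 'KE', 100.0, 'Research4Life')")

def effective_apc(issn: str, country: str) -> float:
    """List APC minus the best available discount for the author's country."""
    apc, off = conn.execute("""
        SELECT a.apc_usd, COALESCE(MAX(d.percent_off), 0)
        FROM apc a LEFT JOIN discount d
          ON d.journal_issn = a.journal_issn AND d.country_code = ?
        WHERE a.journal_issn = ?""", (country, issn)).fetchone()
    return apc * (1 - off / 100)
```

A journal’s effective price for an author in a given country is then a single query—the kind of lookup that other OSI tools drawing on this data could reuse.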
  • OPEN IMPACT FACTOR + OPEN INDEXES: Our uneven progress toward open is having unintended consequences. Among these are the unavailability of legitimate impact factors for all journals (because not all journals are indexed), uncertainty about the number and growth of so-called deceptive/predatory journals (see the deceptive/predatory study proposal), and the growing incidence of citations from non-indexed journals. Regarding the first problem: thousands of journals need some sort of legitimate impact factor (whether this uses the same math as the current impact factor is a separate question—see the impact factor study, which will precede the development of this tool), most journals will never earn one through Clarivate (since they don’t pass rigorous tests for index inclusion), and the alternatives (such as the “global impact factor” or “universal impact factor”) aren’t legitimate—so there is a need in the marketplace for new, credible solutions. OSI has discussed developing three possible solutions to these challenges: (1) creating an open impact factor measure (described below), (2) creating an all-inclusive open index, and (3) creating an index of indexes. All three products/services have unique audiences, and all three will be developed and piloted together. The first solution—the open impact factor—simply decouples Garfield’s impact factor calculation from its private management and ownership by Clarivate, separating the algorithm from the data source so we can have as many lowercase “impact factors,” with as many algorithms, as we want. (Clarivate has trademarked “impact factor” and “journal impact factor” in the US but does not own the mathematical concept. This move is not about wresting control of the impact factor away from Clarivate, since the product they provide has substantial independent merit.
Rather, it is simply about providing legitimate alternatives to the “universal impact factor” and “global impact factor” for journals that do not qualify for a Clarivate-issued impact factor.) Doing this will first require developing a global index of journals, which is proposed solution number two. Current indexes are limited in scope and focus primarily on English-language journals. In order to improve the identification of deceptive journals, we need a universal indexing system that overcomes the natural or operational exclusions of current indexes; today, such indexing is provided only by Google Scholar. Idea number three is to create an automated journal whitelist look-up, whereby a program makes an API call and returns a list of the whitelists on which a given journal appears (with cooperation from Cabell’s, this call could also cover blacklists). The system would return a finding like: “Journal X is indexed by WoS, JCR, Scopus, DOAJ, and MEDLINE.” The look-up would also include subject lists (like EconLit, PsycINFO, MLA, and so forth) as well as regional titles. This system will be used to help dissuade the citing of non-indexed and possibly suspect work: journals will be encouraged to adopt an editorial policy whereby, if a referenced journal does not appear on a whitelist, authors must justify the citation. This approach does not require much in the way of new infrastructure or the creation of new lists. It will, however, require the various whitelist publishers to agree to allow such an API look-up (akin to Indeed or Monster scraping various job boards to provide one meta job board). The look-up would not contain any additional information from the whitelists—only an indication of whether a journal appears on each.
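As a rough illustration of the look-up idea, the sketch below stubs each index as an in-memory set keyed by ISSN. In practice, each membership check would be an API call to the list’s publisher; the list names are real indexes mentioned above, but every ISSN here is invented for illustration:

```python
# Sketch of the proposed whitelist look-up service.
# Each set stands in for an API call to that index's publisher.
# All ISSNs below are fabricated sample data.
WHITELISTS = {
    "WoS":     {"1234-5678", "2222-3333"},
    "Scopus":  {"1234-5678"},
    "DOAJ":    {"1234-5678", "4444-5555"},
    "MEDLINE": {"1234-5678"},
}

def indexed_in(issn: str) -> list[str]:
    """Return the names of the whitelists on which a journal appears."""
    return [name for name, issns in WHITELISTS.items() if issn in issns]

def report(journal: str, issn: str) -> str:
    """Produce the one-line finding described in the proposal."""
    lists = indexed_in(issn)
    if not lists:
        return f"{journal} does not appear on any consulted whitelist; justify the citation."
    return f"{journal} is indexed by {', '.join(lists)}."
```

Because the service returns only list membership, no content from the underlying whitelists needs to be redistributed—consistent with the proposal’s note that participating list publishers need only permit the look-up.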
  • APC PRICE COMPARISON TOOL: As noted earlier, several recent studies have confirmed (Tenopir 2017) that scholars do not shop around for the best prices on APCs. And yet price-shopping behavior is assumed to exist and is fundamentally important to the success of the University of California’s position with regard to cancelling access to Elsevier journals—hoping that alternative publishing options will not only take hold but save the system money (as enunciated by UC’s lead negotiator Jeff Mackie-Mason; see Mackie-Mason 2016)—and also to MPDL’s OA2020 effort (which underpins the EU’s Plan S initiative). APC price shopping may not exist yet simply because there is no tool to facilitate it (to be clear, price is a factor, but surveys have shown that authors care more about quality and impact than price; the argument here is that if it were easier to compare prices, then maybe price would factor more into decisions). Although many in OSI are opposed to the carelessness of Plan S, we are not opposed to the idea of helping contain costs in publishing; developing an APC price comparison tool would therefore be of great service to the global scholarly communication community. No such tool currently exists. The development and deployment of this tool would need to proceed with care: while providing price information is valuable, we don’t want to promote fake journals either. Therefore, with help from Cabell’s, DOAJ, SSP, and other relevant organizations in OSI, we will begin by creating a self-populating database of APCs from currently indexed journals only (seeded with initial data as available, at which point publishers will be emailed and instructed how to self-update their information). Non-indexed journals with egregiously bad behavior (plagiarism, fake peer review, etc.) will not be listed in this database; non-indexed journals with smaller question marks (new, no street address, broad subject coverage, regional interest, etc.) may be listed with asterisks indicating that authors should seek input from their library officials before publishing in them.
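The listing rules just described can be sketched as a simple decision function; the category labels and the specific red/yellow flags below are illustrative assumptions, not settled policy:

```python
# Sketch of the listing policy for the APC comparison database.
# Flag vocabularies are assumptions drawn from the examples in the text.
EGREGIOUS = {"plagiarism", "fake peer review"}      # never listed
QUESTIONABLE = {"new", "no street address",
                "broad subject coverage", "regional interest"}

def listing_status(indexed: bool, flags: set[str]) -> str:
    """Decide how a journal appears in the APC comparison database."""
    if indexed:
        return "listed"                  # indexed journals are listed outright
    if flags & EGREGIOUS:
        return "excluded"                # egregious behavior: not listed at all
    if flags & QUESTIONABLE:
        return "listed with asterisk"    # author should consult a librarian
    return "excluded"                    # unknown non-indexed journals held back
```

The conservative default for unknown non-indexed journals reflects the proposal’s decision to seed the database from indexed journals only.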
  • YELP SITE FOR SCHOLARLY PUBLISHING: OSI will build a few tools that have wide “category-killer” appeal and real paradigm-shifting potential for scholarly communication. A Yelp site for publishers is one such tool (an All-Scholarship Repository is another). Both of these tools will have significant overlap with other tools we build and that exist on the market today—that is, they will incorporate some of the same data, but they will have broader audiences and fill more needs at once. The core purpose of the Yelp site for scholarly publishing is to provide an easy-to-use, familiar-looking interface where customers (authors, editors, reviewers, funders and more) can rate scholarly publishers (not just commercial journals but university presses, scholarly society journals and more) and where publishers can provide important contact and product information—a link to their website, a summary of their products and services, links and credentialing badges that verify data such as indexing and impact factors, and much more. Customers will be able to search this database for publishers in their field, price range, region and more—like the actual Yelp site, searches can be filtered in a wide variety of ways. Customers will also be able to provide reviews regarding their experiences with publishers, which will help round out the data provided by Cabell’s blacklist and other information sources. For instance, customers might report that their peer review experience with a particular blacklisted publisher was perfectly acceptable, or conversely, that it was entirely inadequate with a highly-ranked publisher. The reviews that get posted on this website will take a few years to become accurate. 
At first they will be dominated by people who are either trying to mask bad products or punish good ones, but over time we suspect that this will become the go-to resource for all authors looking to publish their research and funders looking to identify reliable open access publishing options. As such, it will be heavily trafficked (at least relative to other products in the scholarly communication space) and a good revenue-generator. Ad revenue will help support the upkeep and sustainability of this product, with excess revenues accruing to OSI toward the development of OSI’s other products (and studies); sponsorship support will also be important. This will be a complicated product to develop, launch and fine-tune, and very labor intensive as well. If we are able to begin product development in early 2020, it will take six months to work out the architecture, six more to populate with starter data, and six months after that to beta test and refine—a total of 18 months before the first iteration of this site is up and running. Due to its complexity, the vast majority of this product will be hired out—very little of the programming work will be conducted in-house.
  • ALL SCHOLARSHIP REPOSITORY: The All-Scholarship Repository (ASR) is the ultimate game changer in scholarly communication. Rather than continuing to rely on (and expand) our global network of institutional and national repositories, and then exerting herculean and ultimately inadequate efforts to connect the metadata in these repositories (which ends up providing only a glimpse into the contents of each repository, not full access to the contents themselves—at least at the moment), the ASR jumps over this step and instead creates a single warehouse for all scholarly research content. The advantages of this global preprint server concept are multifaceted: full-text searches across all articles, the potential for widescale database standardization and integration, the potential for vastly expanded cross-discipline integration, the potential to implement widescale online peer review solutions, real-time and transparent impact measurement (via downloads, views, comments, and reader scores), instant open for all content, and more. The ASR, in essence, solves a hundred pressing issues in scholarly communication in one fell swoop. It’s a leap, though, and will require widespread buy-in to succeed, including from publishers, whose content is needed for this system. Where would publishers end up? The same place as now: publishers would identify the best and most promising research and publish these articles in their journals. They would also put their own interfaces on the ASR (a public resource) and curate its contents as they see fit, adding value by analyzing trends, highlighting significant new discoveries in fields of interest, and more. The only difference would be that the preprint world would be “unshackled” from the print world, free to grow at its own pace and in its own direction.
This may eventually mean fewer print journals and more reliance on the ASR, but a possible decline in publisher subscription revenues would be offset by an increase in value-added revenues. In terms of architecture, the ASR would be a single database with many spokes—many independent owner/operator channels through which data can be added and outputs can be customized. The Digital Public Library of America is the best example of how this system would operate. The central ASR database would be replicated and archived continuously; it would also be cloned by owner/operators. A fuller description of the ASR concept and operation is available in the appendix of OSI’s February 2015 report (OSIWG 2015). The time frame for developing and launching the ASR is longer than for our Yelp site, since we will need about a year to discuss and arrange collaborations with major preprint and government servers regarding data scraping and integration (we aren’t expecting the ASR to replace any existing services until it is well populated, although the prospect of replacement will be promoted; US government agencies in particular, if directed by OSTP, might be keen to explore repository replacement instead of long-term and costly upkeep and modernization). If funding for the ASR is secured by early 2020, our goal is to have an initial version of this repository running by end-2022. Like the Yelp site, this site will have revenue-generating potential, but on a much more massive scale—not only advertising and sponsor revenue channels, but also percentage revenue arrangements with publishers who provide data for the site and resell data from it. Excess revenues will be directed to OSI to ensure the continued full funding of OSI operations, in accord with the NSF’s guidelines on this matter.
  • PREDATORY PUBLISHER BLACKLIST: In collaboration with other organizations in this space, OSI will create a free, publicly available list of the largest, most prolific predatory publishers. Curating and maintaining the full list is a labor-intensive endeavor and will remain a retail product of Cabell’s, but the OSI list will serve as an initial “quick check” for potential authors, highlighting the most egregious and prolific predatory publishers—those that account for most of this kind of output and/or the most blatantly fake outputs (like OMICS). This site will also provide background information on predatory publishing, links to resources like Think-Check-Submit and Cabell’s (for the full list of predatory publishers), and case studies on why this kind of publishing should be avoided (due to the risks it poses to careers and science). There is no other resource like this on the market.
  • ITUNES SINGLE ARTICLE DOWNLOAD: The idea of an iTunes-type tool for single-article downloads has been kicked around for years in publishing but never pursued. Various experts have dismissed it out of hand, with criticisms like “we shouldn’t have to pay anything for these articles” and “customers won’t pay when they can find them for free with a little digging” (interlibrary loans, etc.). These criticisms have never been tested, though. Our hypothesis is that creating a model where consumers can legally access the latest work (or close to it—downloads from this system might be embargoed briefly, but not for as long as free articles) would be extremely well received by both publishers and the marketplace, creating new revenue pathways for publishers and cheaper access for customers. As with some of the other tech solutions we’re proposing, this one may end up being a “module” of the ASR, so it will be developed with this in mind. That is, the ASR may eventually feature access to various categories of articles and products—free, cheap, pay-per-view, and subscription, for instance—and as such, the architecture of this iTunes site should integrate seamlessly with the ASR. Ultimately, we view the iTunes site as a transitional tool—a way to let publishers daylight a hundred years of backlisted articles now, but in such a way as to still generate revenue from these assets. Careful modeling will need to take place first to determine price points, catalog, frontlist integration, and more. Over time, as the ASR becomes richer and more populated, it may become advantageous to de-monetize more and more of this backlist. Like the ASR and Yelp sites, the iTunes site will have significant revenues accruing from ads and sponsors. It will also accrue revenues from percentage sales. As with the ASR, excess revenues from this site will be directed to OSI.
Development and deployment will be on the same schedule as the ASR site, with full operation by end-2022.


In addition to studies and tech products, OSI’s existing work/priorities will also be supported by this grant. This includes:

  • Consolidation and implementation of OSI recommendations: OSI has accumulated a wealth of knowledge over its four years of operation. We are in the early stages of publishing materials that consolidate this knowledge into issue briefs and policy perspectives. A few of these have been published to date; many more are planned (around 50 have been identified), to be written by OSI participants. In terms of priorities, the next most needed publication is OSI’s “Plan A” for open—a summary paper that captures the general sense of the OSI group with regard to what steps the global community should take next to ensure the rapid, collaborative, and sustainable development of global open science. We expect this Plan A document to be issued by year-end 2019. Plan A will, in essence, be OSI’s roadmap for the future of open science. A number of different stakeholder groups (including IGOs, led by UNESCO; scholarly societies, led by the NAS; the AAU, representing university provosts; and others) also realize that broad, collaborative action is needed now. What we are seeing as a result are parallel, high-level efforts happening around the world to create a new roadmap for the future of open—but with no convergence of activity and no central point. OSI will fill this role and communicate this convergence perspective in Plan A, serving as an observatory that keeps these similar and important efforts connected, aware of each other’s existence and activities, and coordinated so actions and policies can have more impact. We need this central hub to ensure reasonable, sustainable, global, inclusive action—a group to inform, coordinate, and share policies that will lay the groundwork for the future of open research/data and open science in particular.
  • Annual global survey of the state of open: How is open changing? The fact is, we just don’t know. Studies measuring open aren’t conducted at regular intervals and don’t use the same methodology. In order to measure global progress toward open, we need a baseline and consistent, comprehensive, global measurements. Several OSI participants have volunteered to help develop and implement this product. The Center for Open Science is one such partner; Editage/CACTUS is another (which will help translate the survey and disseminate it to global audiences). This annual survey will be an important tool in helping us better understand current needs and perspectives, understand where we need to focus our open efforts, and track our progress toward achieving our objectives.
  • One of OSI’s goals is to help countries understand open and understand how this issue (and current global proposals) impacts their equity, education, and development goals. Our issue briefs (which UNESCO has promised to help co-brand and promote) are one tool in our education arsenal. Our studies and tech products are other tools. In addition to these, we will improve and enrich the OSI website with the goal of making it more of a hub/resource for open and a more useful teaching tool.
  • There are many ways to learn about open; there are far fewer ways to collaborate on global actions to improve open that aren’t biased toward set end-points (e.g., “let’s do a global flip,” or “let’s remove publishers from the process”). There are a great many groups looking for constructive ways to engage in realistic measures. An important approach OSI will cultivate beginning in 2020 is bringing organizations together to help pick the low-hanging fruit—to create a global environment of cooperation for solving the most urgent problems together and, in doing so, build a track record of success. We don’t need a Plan S that changes everything for everyone tomorrow without regard for the consequences. We do need a Plan A that describes what needs to be addressed and lays out realistic, sustainable ways to begin tackling these issues together—ways that are easy and make sense for everyone and, importantly, that have incentives aligned such that partners join this effort out of self-interest and not due to threat or obligation.
  • Events: OSI has hosted two full-group meetings to date (in 2016 and 2017), one executive team meeting (in 2018), and helped sponsor several other meetings in this space (such as SciELO-20 in 2018). We will need to hold and sponsor a number of other meetings in the coming years. There is no better way to get solid input from a diverse range of participants than to hold meetings: email works well enough to continue the conversation, but there is simply no substitute for in-person meetings when it comes to breaking down walls and making progress. OSI participants will also participate as speakers and panelists in other global meetings, communicating OSI’s lessons of experience and forging partnerships with universities, publishers, research institutions, governments, funders, societies, and policy groups interested in moving forward with workable, global solutions to open research. By November 2019, OSI will have marked four such efforts: (1) a presentation about OSI on the opening panel of the SciELO 20th anniversary conference; (2) a presentation about OSI in the keynote portion of this year’s Charleston conference; (3) inclusion of OSI and key OSI outputs (such as the DARTS open spectrum) in the 50th anniversary edition of the STM Report, a key resource for the scholarly publishing community; and (4) inclusion of OSI in a debate at the 2019 Falling Walls conference about the future direction of open science.


AAP. 2018 (Nov 8). AAP, Researchers, Deeply Concerned About Plan S. Association of American Publishers

AHA. 2019 (Feb 4). AHA Expresses Concerns about Potential Impact of Plan S on the Humanities. American Historical Association

AIP. 2019 (April 30). An Interview with OSTP Director Kelvin Droegemeier. American Institute of Physics, newsletter 42.

Anderson, R. 2019. OSI Issue Brief 3: Deceptive Publishing. Open Scholarship Initiative. doi:10.13021/osi2019.2419

Archambault, É, D Amyot, P Deschamps, AF Nicol, F Provencher, L Rebout, and G Roberge. 2014. Proportion of open access papers published in peer-reviewed journals at the European and world levels–1996–2013. European Commission

Bosman, J. 2013. Nine reasons why Impact Factors fail and using them may be harmful to science. March 11, 2013 blog post.

Budapest Open Access Initiative (BOAI). 2002.

cOAlition S. 2018. Plan S website.

Davis, PM, and WH Walters. 2011. The impact of free access to the scientific literature: a review of recent research. Journal of the Medical Library Association: JMLA 99(3), 208-217.

Force11.

Hampson, G. 2018a. Finding common ground. SciELO conference presentation

Hampson, G. 2019a (2nd ed.). OSI Policy Perspective 1: Plan S & the quest for global open access. Open Scholarship Initiative. doi: 10.13021/osi2019.2450

Herb, U, and J Schöpfel. 2018. Open Divide Emerges as Open Access Unfolds. In J. Schöpfel & U. Herb (Eds.), Open Divide? Critical Studies on Open Access (pp. 7-13). Sacramento, USA: Litwin Books.

Himmelstein, DS, et al. 2018. Sci-Hub provides access to nearly all scholarly literature. eLife 2018;7:e32822. doi: 10.7554/eLife.32822

Lozano, G., et al. 2012. The weakening relationship between the Impact Factor and papers’ citations in the digital age. arXiv. May 19, 2012.

Mackie-Mason, J. 2016 (April 26). Economic thoughts about “gold” open access. madLibbing (blog).

McKenzie, L. 2019 (August 16). Linking Liability. Inside Higher Ed.

Moniz, EJ. 2019 (June 14). Innovating a Green Real Deal. Science. Vol. 364, Issue 6445, pp. 1013. doi: 10.1126/science.aay3140

Montgomery, L and R Xiang. 2018. Understanding open knowledge in China: A Chinese approach to openness? Cultural Science Journal 10(1), 17–26. doi: 10.5334/csci.106

Open Science Initiative Working Group (OSIWG). 2015. Mapping the Future of Scholarly Publishing. Science Communication Institute

OSI. 2016a. Report from the impact factors workgroup. Open Scholarship Initiative. doi: 10.13021/G88304

OSI. 2016b. Report from the embargo workgroup. Open Scholarship Initiative. doi: 10.13021/G8S014

Plume, A, and D van Weijen. 2014 (September). Publish or perish? The rise of the fractional author. Research Trends 38.

Plutchak, TS. 2018. OSI Issue Brief 1: What do we mean by open? Open Scholarship Initiative. doi:10.13021/osi.v3i0.2367

Salmon, J. 2016. The Grand Compromise of US Public Access Programs: Going Green. US Department of Energy (OSTI).

Schimmer, R, KK Geschuhn, and A Vogler. 2015. Disrupting the subscription journals’ business model for the necessary large-scale transformation to open access. doi:10.17617/1.3.


Science-Metrix. 2018. Analytical support for bibliometrics indicators: Open access availability of scientific publications. Science-Metrix.

Sci-Hub.

Shamseer, L, D Moher, O Maduekwe, L Turner, V Barbour, R Burch, J Clark, J Galipeau, J Roberts, and BJ Shea. 2017. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine 15:28. doi: 10.1186/s12916-017-0785-9

STM. 2018. The STM Report: An overview of scientific and scholarly publishing (50th edition). Edited by Johnson, R, A Wilkinson, and M Mabe. International Association of Scientific, Technical and Medical Publishers.

Strinzel, M, A Severin, K Milzow, and M Egger. 2019. Blacklists and whitelists to tackle predatory publishing: a cross-sectional comparison and thematic analysis. mBio 10:e00411-19.

Taylor & Francis. 2014. Taylor & Francis Open Access Survey.

Tennant, J, and T Ross-Hellauer. 2019. The limitations to our understanding of peer review. SocArXiv. doi: 10.31235/

Tenopir, C, E Dalton, L Christian, M Jones, M McCabe, M Smith, and A Fish. 2017. Imagining a Gold Open Access Future: Attitudes, Behaviors, and Funding Scenarios among Authors of Academic Scholarship. College & Research Libraries, 78(6), 824. doi:10.5860/crl.78.6.824

Unpaywall.