Helping science succeed

Deceptive publishing: Summarizing the OSI conversation

Predatory publishing is a term coined in 2008 by Jeffrey Beall, a scholarly communications librarian at the University of Colorado Denver. Between 2010 and early 2017, Beall published a list of thousands of journals and publishers that were suspect in their motives, methods and quality. Unfortunately, Beall’s criteria, methodology and rankings were never made transparent; publishers who complained were unable to remove themselves from the list; and Beall himself was unwilling to open up this enterprise to create a lasting, open, and sustainable resource. Under pressure from his university (which in turn was under pressure from affected publishers), Beall stopped publishing his list in January 2017.

The demise of Beall’s list has left a void in scholarly communications. Contested as his original lists were, they brought focus to an important issue in research. The need for a resource like this still exists—predatory publishers aren’t going away.

Defining predatory

The term “predatory” has been falling away in favor of “deceptive,” which better describes the tactics these journals use. Deceptive journals are those that lie about their peer review practices, editorial board composition, impact factors, indexing, affiliation, or other processes essential to evaluating research and protecting research integrity. The issue of deceptive practices is fairly black-and-white: what matters isn’t marketing tactics or street address, but whether publishers are misrepresenting themselves and their journals in order to drum up business and, in addition, are uncritically publishing everything they receive.

Why is this important?

Deceptive journals are harmful to research on several levels:

  1. When peer review is faked, and even cursory reviews don’t occur to weed out plagiarism, fraud, and quackery, information gets introduced into the research ecosystem that has no business being there. Therefore, deceptive journals threaten the integrity of research and the foundations of knowledge.
  2. Deceptive journals divert funder money (much of which comes from governments) away from legitimate research publication efforts and toward organizations that defraud the research community by selling counterfeit publication credentials.
  3. These journals make the work of researchers and students harder by requiring them to decide how and whether to value research published in fraudulent venues.
  4. These journals can be consciously and deliberately used as a way of deceiving colleagues, particularly those serving on tenure and promotion committees. While it may be relatively easy to detect a scam publisher by watching for telltale signs and then doing a little bit of digging, it’s harder to detect a citation to a scam publisher when it’s nestled among citations to legitimate publishers. The websites of scam and legitimate publishers may look meaningfully different from each other, but article citations pretty much all look the same. Higher education systems that place a great deal of emphasis on quantitative measures in promotion and tenure decisions only make this problem worse.
  5. Finally, deceptive journals exacerbate the already wide gap between developed and developing countries, in part because researchers from developing countries often have fewer resources available to publish in the more expensive tier-one journals; these lower-cost, and often much lower-quality or even fraudulent, journals can be the only affordable venue available for publishing research. There is also an information literacy gap at work here: new investigators and those in the developing world can be less aware of what constitutes deceptive publishing and more likely to make use of and/or get taken in by publishing scams.

How big is the problem?

The short answer to this question is that no one really knows. Shen and Bjork (2015) attempted to put a number on it, but their methodology was fundamentally flawed: it purported to show the growth of “predatory” publishing but only showed the growth of articles from journals on Beall’s list (about 8,000 of which were identified for this study). Since Beall’s list is just a black box, with methodology and rankings that have never been transparent, we cannot use it in any scientific way as a baseline for examining the growth of predatory publishing.

What Shen and Bjork’s study does show, however, is rapid growth in “non-indexed” journal articles. Non-indexed journals can look much different from indexed journals in terms of publisher size, the type of content published, atypical (and often cheaper) fee structures, faster turnaround, and less rigorous review processes—none of which is faked, just different from the Western publishing norm. We have no way of knowing how many of the thousands of publishers on Beall’s list were simply guilty of having a bad street address, how many were added to the list for sending annoying spam emails, or how many were guilty of more substantive offenses like lying about their peer review processes and editorial boards. Therefore, Shen and Bjork’s numbers probably more accurately reflect the failure of the current publishing system to deal with the global variety found in non-indexed research journals.

What kind of volume are these journals adding to the scientific research corpus? According to this study, about 420,000 articles were published in 2014 by the 8,000 Beall’s list journals, up from 53,000 only four years earlier. Only about eight percent of these journals are listed in DOAJ, which doesn’t necessarily mean the rest are predatory—just that they aren’t indexed yet. If accurate, this is a significant fraction of the 2.5 million articles that Mark Ware’s annual STM report estimates were generated in 2014.

Another shortcoming of the Shen and Bjork study is that it looked only at APC-funded journals, and not all gold OA is APC gold. Archambault (2014) estimates that around 210,000 gold OA papers in total were published in 2014 (around 12.1% of the total market), and Elsevier (2013) estimates that an additional 5.5% of the market is gold OA via APC. And then there are hybrid, green and other variants. So after all is said and done, Archambault estimates that about 54% of the total global article output—over a million articles annually—is some form of open at the moment, but it’s hard to tell how much of the so-called predatory output is included in this total. For one, it’s possible that since only about 8% of Beall’s list journals are in DOAJ, only 8% of the 420,000 predatory papers found by Shen and Bjork are being counted in the current global total of about 2.5 million articles in 2014 (according to Mark Ware’s latest STM report). It’s also possible that there’s just so much noise in the input signal that the output signal is meaningless—the majority of these 420,000 could already be included in the “legitimate” total. This is probably the more likely explanation. We’re seeing rapid growth in a small segment of the journal market. What this means needs to be better understood, particularly with regard to what might be an emerging market preference for quick, cheap and cursory turnaround.
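To put these figures side by side, here is a rough back-of-the-envelope calculation using only the numbers cited above (illustrative, not definitive):

    420,000 / 2,500,000 ≈ 17%   (Beall’s list output as a share of all 2014 articles)
    0.08 × 420,000 ≈ 34,000     (the portion of that output plausibly captured in DOAJ-indexed totals)

If the Shen and Bjork figure were taken at face value and fell largely outside the indexed total, deceptive journals would account for nearly a fifth of the world’s research output, which is part of why the reading discussed next is treated as implausible.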

What we do know is that, according to the STM report, OA journals make up about 26-29% of the 28,100 journals published globally (Plume and van Weijen 2014). DOAJ lists 10,100 of these, although not all are peer reviewed. About 13% of Scopus journals (of 22,000 total) are OA, as are 9% of WOS journals. So claiming there are 8,000 predatory journals out there pumping out 20% of the total global volume of research papers just isn’t plausible. What is more likely is that there is a lot of flux and growth in the system that is difficult to measure accurately, and that with certainty (because there’s quite a big body of research on this), there are a number of publishers—exactly how many we don’t know—who are scamming the system and polluting the research stream.

So what do we do about it?

OSI has been debating this issue at length. Exactly what should the scholarly communication community do about it? On the one hand, there are those who ask “what should we do about what?” The growth of journals? Journals have been growing at a rate of 3% per year for the last hundred years, spurred on by R&D spending and the rapid growth of new disciplines. Added to this, the same pressures pushing users toward “rogue” access solutions like SciHub are also encouraging the growth of “new and innovative” publishing platforms. OSI2017’s “Rogue Solutions” workgroup identified several pressure points in the research community that are making non-traditional publishing venues more attractive, including the cost of publishing in tier-one journals and restrictive licensing agreements. Faster turnaround times and much higher (or even universal) acceptance rates also fit the needs of some academics better than traditional publishing models.

So again, what we are actually more concerned about is not small startups that may be focusing on narrow issues in imperfect and unorthodox ways, but journals that lie about who they are and what they’re doing, will print anything for a price, and in doing so threaten research and the trust on which scholarly publishing depends. What do we do about this? Ideas range from creating a new blacklist, to creating a whitelist, to creating a Yelp-like list (where the community gets to decide who is legitimate and who isn’t), to taking more proactive measures like creating standards and reaching out to the community.

OSI participants summarize the current state of affairs like this:

  1. Let’s forget about Beall’s list and consider coming up with something new that does a more open, transparent and accountable job of shining a spotlight on the bad actors in this system. (And we say this with respect for the heavy lifting that Jeffrey has done over the years to bring this issue to the fore. This would just be the next iteration of what he started.)
  2. Let’s clearly define what a “bad actor” is in a way that doesn’t discriminate against legitimate LMIC journals and authors, or against new publishers, OA publishers, or any other group, on the basis of anything other than whether they meet a mutually agreed-upon set of standards.
  3. Let’s gradually figure out what to do next—whitelist, blacklist, hit list (a worst-of-the-worst list), accreditation, or whatever. A hit list approach would probably need to be carried out by a government or IGO in order to have the appropriate gravitas—not that this couldn’t happen. The blacklist approach is simply a “best value” shopping list. And the whitelist approach is probably the gold standard but would take a long time to create. An accreditation regime would also probably take a few years to get off the ground. So we’re probably looking at short-term and long-term projects here if we’re interested in everything—something to fill the gap soon (like info literacy projects), followed asap by a first-cut blacklist of the worst of the worst (one that gets heavy promotion in universities), followed asap by accreditation and whitelist projects.

Blacklists & whitelists

Would a new blacklist to replace Beall’s list be helpful—or at least a “hit list” of the worst offenders, all backed up with clear and convincing evidence contributed by a variety of sources? Probably. Consider the colossal waste of time accruing right now as researchers, students, editors, funding agencies, and tenure committees hunt to find out whether a suspected deceptive publisher has real or fictitious editors, indexing status, and impact factors; actual content or mostly plagiarized and junk work; actual peer review or just fake and cursory review; and so on. A new Beall’s list wouldn’t be a list that penalizes journals for bad addresses or low APCs, but one that calls out journals with blatantly fraudulent practices, and the inclusion criteria, evaluation, and adjudication process would be completely transparent. There might be legal risks associated with this approach, as Jeffrey Beall discovered—risks which can be mitigated if this approach is open and objective (like a Consumer Reports ranking)—but even with a blacklist we’re only scaring people away from the worst actors and not keeping pace with the rapid entry of new actors. Plus, there’s nothing to stop a blacklisted publisher from changing the name of its journal(s) and popping up again.

Old versions of Beall’s list are still used, and Cabell’s launched a subscription-access version of a blacklist in mid-2017. In the Cabell’s version, journals can be penalized for poor grammar and/or spelling, for an editor who publishes in his/her own journal, for member benefits deemed inadequate (e.g., calling yourself an “academy” without offering any benefits to members), for bad web design (overemphasizing fees), for having no stated digital preservation policies, for having no email opt-out mechanism, and more—see http://www.cabells.com/blacklist-criteria for the full description. Although this list is paywalled, and evaluations are made for only 11,000 journals, it is an important work in progress and is also transparent. However, the “60+ behavioral indicators” used by Cabell’s are a far more expansive test of deceptive behavior than OSI recommends, sweeping small, legitimate startup publishers with limited resources into the same dustbin as blatant offenders like OMICS.

How about a whitelist? While certainly helpful (and Cabell’s publishes one of these, too) such a list would never be fully inclusive and would only serve as a dictionary, not as a vetting mechanism.

The OSI2017 “Rogue Solutions” workgroup proposed developing a Journal Master List—a combination blacklist plus whitelist—in order to improve our understanding of what journals exist. Using this master list as a starting point, it might then be possible to develop full ranking or evaluation systems—even an automated scholarly content quality assessment tool that could help libraries and readers avoid fake journals.

Accreditation

How about an accreditation list—a whitelist with teeth, if you will? If you’re not on it, you need to work to get on it, and if you don’t, you’re not going to be a legitimate destination for legitimate research. In an accreditation regime, researchers could continue to publish in non-accredited journals, but these journals would not be indexed by OpenAIRE, Chorus, DOAJ, or other credible resources, and none of the work published in them would count in citations. An industry, discipline or society accreditation mechanism might serve as a self-managed tool to help keep the junk out of research. But to make this effective, universities and other stakeholders would need to sign onto this idea and agree that, going forward, any work published in a non-accredited journal would be looked upon unfavorably, even with malice. The most apt analogy here might be fake degree-granting institutions. Inside academia, at least, people simply don’t get credit for degrees earned at non-accredited institutions. Similarly, “credit” shouldn’t be granted in academia for research articles published in non-accredited journals.

Most Latin American countries already have public systems that control, accredit and rank journals. These systems are maintained by public research agencies with the support of research communities. They vary from country to country and aim to support national research evaluation systems.

Here are 10 criteria that a journal might need to meet in order to become accredited:

  1. Does this journal cover an actual field of research (not pseudo-science, not fabricated science, etc.)?
  2. Is this journal selective or does it publish anything for a fee (note, though, that proving the actual rejection rate might be too difficult; also, we wouldn’t want to discriminate against another Science or Nature wannabe that covers a broad variety of subjects)?
  3. Has this journal ever published fake or demonstrably biased content (e.g., pushing a clearly unscientific special interest perspective) or taken previously published content from other journals and presented it as new and original?  If so, how did the journal or publisher respond to this error and have satisfactory measures been put in place to prevent this from happening again?
  4. Does this journal have an editorial board appropriate for its subject matter?
  5. Has the membership of this editorial board been verified (do these people actually exist and know they’re on this board)?
  6. Has this board signed a statement agreeing to their responsibilities as members of the board (provide a boilerplate statement that all boards need to sign that makes it clear board members have a professional responsibility here and can’t just add their name to a board and call it good)?
  7. Does this journal provide (or otherwise arrange for) peer review? If no, why not—what alternative to traditional peer review is it using?
  8. Does this journal have a reliable process for archiving (PubMed or some other system; we can offer a plate of alternatives, as well as tools to help with compliance)?
  9. Does this journal have truthful advertising (as far as the accrediting body can tell—no deliberate falsehoods on its website or in email approaches about impact factors or DOAJ indexing, clear disclosure of APC fees and policies, no false claims of society affiliations, etc.)?
  10. Has the journal satisfactorily addressed any valid complaints (if any) not covered by the above nine points that have been sent by or received through the accrediting body?

New and emerging journals wouldn’t be discriminated against under these criteria—although accreditation status could be reviewed by request and would be subject to reversal.

Standards

Another approach discussed in OSI is to come to a consensus on and then somehow enforce global publishing standards:

  1. At present, there are no mandatory, enforceable international standards for journal publishing (there are best practices guidelines, ethics guidelines, internal standards, etc., but no list that says a journal must do x, y and z).
  2. If journal standards were to be developed, there shouldn’t be a single standard of excellence. Would minimum standards be okay?
  3. These standards should apply to journals, not publishers (to the end products, not the producers).
  4. These standards should be voluntary for now—not accreditation standards.
  5. We (OSI) should create (and promote through the OSI website) an accepted definition of what constitutes deceptive publishing. Maybe this is followed up with a blacklist, maybe not—but at minimum we agree on the definition.
  6. We (OSI) should join NIH in discouraging publishing in journals that are deceptive (as defined by OSI).
  7. We (OSI) should begin the process of improving the capacity of journals that aren’t meeting standards, and over time, discourage publishing in these journals if they don’t improve.
  8. We (OSI) should assist the FTC and other experts where possible by offering input on the language used in future rulings on this issue.

To clarify, the journal world already has lots of standards. What we don’t have is a rule book that says “in order to be considered a journal, this is what you have to do,” beyond the specific requirements requested by libraries. There are also loads of industry standards, such as DOIs and metadata creation, that journals follow in order to participate in services or ensure they are discoverable. But this is all voluntary, and expensive. Not following these kinds of rules may be counterproductive in terms of discoverability, but it doesn’t make a journal predatory.

This is where the recommendation for more standards gets tricky. If the market wants to allow more innovative platforms and those platforms literally don’t want to play by the rules, then setting up a new set of rules will be meaningless. Further, if our goal is to reduce the cost of publishing, then adding new required standards may be counterproductive. And it’s not as though this issue has been neglected to date. COPE, ICMJE, CSE and WAME all have a very solid history of documenting expectations for what exemplary journal publishing entails; we know what good publishing practices are. Enforcement has always been the problem: there isn’t any entity empowered to impose sanctions. Publishing is a diverse global enterprise; we can refer to “the industry,” but it doesn’t exist as a monolithic entity with the power to reward and punish. What entity has the reach and muscle to sanction “predatory” publishers, even if they could be identified? The FTC is going after OMICS. That’s one very big dog going after one publisher in a case that will likely drag on for years unless it is settled in some mutually less-than-satisfying way. A number of possible enforcers exist, but it’s all ad hoc and piecemeal at the moment.

Standards need enforcement, and enforcement needs enforcers and penalties. Since there is no global governing body with such power, the only solution is agreement between all major stakeholder groups—particularly universities—on what legitimate publishing and publishing incentives and practices should look like and what the penalties for noncompliance should be.

Outreach

Greatly improving outreach to the global stakeholder community is another approach discussed at length in OSI. This outreach might include:

  1. Education/awareness efforts aimed at researchers who have fewer publishing options and possibly less awareness of the distinction between scam publishers and legitimate ones
  2. Education/awareness efforts aimed at researchers from fully served regions, but with a focus on the damage that scam publishing does to science
  3. A Yelp-like journal review resource system

To aid with point one, Rick Anderson created a set of red-light/yellow-light criteria that might work alongside resources like ThinkCheckSubmit. Rick’s criteria are divided into two broad categories: red-light criteria are practices that indicate fraudulent or deceptive intent, and yellow-light criteria are practices that raise serious suspicion about a journal’s intentions and business practices. (A simple, purely illustrative screening sketch based on these criteria follows the two lists below.)

RED LIGHT

  • Falsely claiming to have an impact factor
  • Claiming a higher IF than it actually has
  • Referencing a non-existent impact measurement
  • Naming individuals to the editorial board without their permission
  • Refusing to remove people from the editorial board when they object
  • Claiming falsely to provide peer review
  • Claiming falsely to provide meaningful editorial review
  • Claiming falsely to be selective (in fact publishing any article for which the APC is paid)
  • Falsely claiming affiliation with a society or other scholarly/scientific organization
  • Falsely claiming to be indexed
  • Taking previously published content from other journals and presenting it as new and original
  • Publishing only research results that favor the interests of some group or organization
  • Pervasive or systematic plagiarism

YELLOW LIGHT

  • Lack of transparency about APC charges
  • Misleading journal title
  • Excessively rapid publication turnaround
  • False addresses
  • One editor is listed as EIC for a large number of titles
  • No EIC is identified
  • One editorial board is listed for a large number of titles
  • Publishing articles far outside of journal scope
  • Excessively broad journal scope
  • Publishing obvious pseudo-science
  • Lack of a retraction policy and/or practice of “stealth retraction”
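
As a purely illustrative sketch (not an OSI- or Anderson-endorsed tool), the red-light and yellow-light criteria above could be encoded as a simple screening checklist. The flag names and the “any red flag means avoid” scoring rule below are hypothetical assumptions introduced here for demonstration, not part of the original criteria:

    # Illustrative sketch only: encodes the red-light / yellow-light criteria
    # above as a simple checklist. Flag names and the scoring rule are
    # hypothetical, chosen here for demonstration.

    RED_FLAGS = {
        "falsely_claims_impact_factor",
        "inflates_impact_factor",
        "cites_nonexistent_impact_measure",
        "names_editors_without_permission",
        "refuses_to_remove_editors",
        "fake_peer_review",
        "fake_editorial_review",
        "falsely_claims_selectivity",
        "false_society_affiliation",
        "falsely_claims_indexing",
        "republishes_content_as_original",
        "publishes_only_sponsor_friendly_results",
        "systematic_plagiarism",
    }

    YELLOW_FLAGS = {
        "opaque_apc_charges",
        "misleading_title",
        "excessively_rapid_turnaround",
        "false_address",
        "one_eic_for_many_titles",
        "no_eic_identified",
        "one_board_for_many_titles",
        "articles_far_outside_scope",
        "excessively_broad_scope",
        "obvious_pseudoscience",
        "no_retraction_policy_or_stealth_retraction",
    }

    def screen_journal(observed_flags: set) -> str:
        """Return a rough verdict: any red flag suggests avoiding the journal,
        two or more yellow flags suggest further investigation."""
        if observed_flags & RED_FLAGS:
            return "avoid"
        if len(observed_flags & YELLOW_FLAGS) >= 2:
            return "investigate further"
        return "no obvious concerns"

    # Hypothetical example: a journal with a false indexing claim and vague APC fees.
    print(screen_journal({"falsely_claims_indexing", "opaque_apc_charges"}))  # -> "avoid"

Any such checklist would only be as good as the evidence behind each flag, which is why the blacklist, whitelist, and accreditation discussions above emphasize transparent evaluation.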

ThinkCheckSubmit is another helpful tool, as are other existing resources such as AuthorAID (www.authoraid.info). The aim of AuthorAID is to help researchers in developing countries write about and publish their work. One way the group achieves this is by developing a global network of researchers. Through the network, researchers can find long-term mentors or short-term advice to help them through the process of research design, writing and publication. The network also enables researchers to find others in their field for collaboration, discussion and information sharing.

New Services

With regard to establishing new and innovative services to address the demand for faster and cheaper publishing, there may also be a market opportunity here for legitimate actors to step up and help. For instance, what if existing legitimate publishers started offering “entry-level brand” alternatives—stripped-down, minimally formatted, minimally edited, online-only versions that simply check articles to make sure they’re not plagiarized and spot-check to make sure the researchers are legitimate? Peer review and other “features” could be offered as part of an a-la-carte services menu, but at minimum, these venues would provide a legitimate and affordable way for researchers to get their work published. It’s a half-measure to be sure, but this new class of publications, combined with education measures and improved lists, might create enough pressure on the scammers to drive them out of business.

New efforts (what? how?) to improve the scholarly publishing system’s capacity for capturing and listing legitimate science being published in all regions of the world should also be explored. The All Scholarship Repository is one such innovation, as are the cutting-edge meta-search tools now under development that will help researchers find information much more easily than they can now.

And then

Once a framework for action is constructed, it will be easy for institutions and agencies to move forward as they see appropriate. This framework has four pillars:

  1. Clarity and precision: Once the scholarly communication community clearly defines “deceptive publishing” and creates a list of deceptive publishers, it will then be possible for degree-granting institutions and funding agencies like NIH to decide what to do with this information. Rather than being forced to issue appeals to avoid such publications, for instance, NIH can have the option of being more specific—publishing in x will lead to penalty y (demerits on the next grant application, for instance).
  2. Legal action: The FTC recently moved against OMICS, one of the world’s most prolific deceptive publishers. However, when there’s money to be made quickly, the potential threat of eventual legal action is hardly a widespread deterrent to small and persistent offenders who can take the money and run. It should be an effective deterrent, though, against deceptive publishers that grow to Amazon-esque proportions.
  3. Outreach and education: This will continue to be important. More use of user tools like ThinkCheckSubmit will help at the individual level, and OSI can help coordinate and conduct broader outreach on this issue, including creating new tools and resources.
  4. Collaboration: Adopting some of the other measures described herein—from standards to master lists to simple conversations that push the ball forward—is an important fourth pillar of the global approach to deceptive publishing. Global, multi