Predatory Open Access: Part 1 – A Sting Op and Indictment of the OA Model

In the last couple of days, an article from Science has gone viral in scientific circles. It is yet another indictment of what Jeffrey Beall has termed Predatory Open Access. In a series of posts, I shall comment on this issue. In this first post of the series, I talk briefly about the Science article that is making such waves. In the subsequent posts, I shall explore the issue in more depth.

Been there, done that!

I shall begin with a short narrative of my own experience in this matter.

I had authored a paper along with some colleagues on Health 2.0-related issues and had been looking for a suitable avenue to publish it. Being an open access activist, I always try my best to get whatever I write into OA journals, sometimes preferring to submit to lesser-known but niche journals rather than bigger, closed, paywalled ones. (Of course, there is no guarantee that the latter would accept the articles, but that is not the point – the point is supporting OA from a moral and principled standpoint.) Anyway, this journal accepted our article after a rapid review session in which we were asked to make a few editorial adjustments. And once it was accepted, they asked for processing charges! We scanned through their site and looked through everything they had put up, but did not find any mention of author processing charges. Seeing this, I brought the issue to the notice of Beall, who has become a bit of a lone crusader in the field of identifying predatory open access outlets. He strongly recommended withdrawal of the article from the said journal and mentioned that he would take a look at it and, if needed, incorporate it into his list. [Details slightly changed to protect the identity of my co-authors, who were queasy about publicly admitting to this event.] We followed his advice, wrote a strongly worded withdrawal mail, and subsequently published the paper in a journal with a decent standing, which also happened to be open access!

The Science Saga:

Coming back to the article at hand. In it, the author generated “spoof papers” – false, scientific-sounding papers with no real substance in them. He went to great lengths to ensure the papers were comparable to one another and that the science in them was demonstrably bad. Here is an excerpt from the methods section:

The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.

The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.

The papers describe a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a molecule. In a second experiment, the cells were also treated with increasing doses of radiation to simulate cancer radiotherapy. The data are the same across papers, and so are the conclusions: The molecule is a powerful inhibitor of cancer cell growth, and it increases the sensitivity of cancer cells to radiotherapy.

There are numerous red flags in the papers, with the most obvious in the first data plot. The graph’s caption claims that it shows a “dose-dependent” effect on cell growth—the paper’s linchpin result—but the data clearly show the opposite. The molecule is tested across a staggering five orders of magnitude of concentrations, all the way down to picomolar levels. And yet, the effect on the cells is modest and identical at every concentration.

One glance at the paper’s Materials & Methods section reveals the obvious explanation for this outlandish result. The molecule was dissolved in a buffer containing an unusually large amount of ethanol. The control group of cells should have been treated with the same buffer, but they were not. Thus, the molecule’s observed “effect” on cell growth is nothing more than the well-known cytotoxic effect of alcohol.
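The template-permutation procedure described in the excerpt can be sketched in a few lines of Python. Everything below – the word lists, names, and title template – is a made-up illustration; the actual study drew on real online databases of molecules, lichens, cancer cell lines, and African names:

```python
import itertools
import random

# Hypothetical stand-ins for the harvested databases used in the study.
molecules = ["anziaic acid", "usnic acid", "vulpinic acid"]
lichens = ["Anzia colpodes", "Usnea longissima"]
cell_lines = ["HeLa", "MCF-7", "A549"]

first_names = ["Ocorrafoo", "Makelele", "Bensah"]
last_names = ["Cobange", "Nkemelu", "Afolabi"]

# The fixed "Molecule X from lichen Y inhibits cancer cell Z" form.
TEMPLATE = ("{molecule} from the lichen {lichen} inhibits "
            "the growth of the cancer cell line {cells}")

def make_author(rng):
    """Randomly permute harvested names and add a random middle initial."""
    initial = rng.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    return f"{rng.choice(first_names)} {initial}. {rng.choice(last_names)}"

def generate_papers():
    """Yield one unique title per (molecule, lichen, cell line) combination;
    the scientific content of each paper stays identical."""
    rng = random.Random(0)  # seeded so the sketch is reproducible
    for molecule, lichen, cells in itertools.product(molecules, lichens, cell_lines):
        yield {
            "title": TEMPLATE.format(molecule=molecule, lichen=lichen, cells=cells),
            "author": make_author(rng),
        }

papers = list(generate_papers())
print(len(papers))  # 3 * 2 * 3 = 18 variants
```

Scaled up to the real databases, the same Cartesian-product idea yields the hundreds of unique-looking but scientifically identical submissions the sting required.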

Well, you should just read their whole paper here. It makes for fantastic reading!

What was damning was that a majority of the open access journals targeted accepted the spoof paper. The author has laid bare a tangled web of “deceit”. He followed the money and the journals and came up with this rather self-explanatory infographic:

Tangled web. The location of a journal’s publisher, editor, and bank account are often continents apart. Explore an interactive version of this map at

What is especially disheartening is the number of dots that land in India. It seems India has become quite the hotbed for unethical, predatory open access players. The author of the Science article also seems to think so, as he comments:

About one-third of the journals targeted in this sting are based in India—overtly or as revealed by the location of editors and bank accounts—making it the world’s largest base for open-access publishing; and among the India-based journals in my sample, 64 accepted the fatally flawed papers and only 15 rejected it.

He goes on to mention that it is not just a localized problem. A host of journals that are hosted by, or share brands with, reputed publishing houses were also caught in the sting operation. He comments on the ludicrous nature of the spoof’s widespread acceptance:

Acceptance was the norm, not the exception. The paper was accepted by journals hosted by industry titans Sage and Elsevier. The paper was accepted by journals published by prestigious academic institutions such as Kobe University in Japan. It was accepted by scholarly society journals. It was even accepted by journals for which the paper’s topic was utterly inappropriate, such as the Journal of Experimental & Clinical Assisted Reproduction.

It is indeed ridiculous!

Predatory Open Access: Giving OA A Bad Name!

Since Beall’s list came out, there have been many mutterings and whisperings about it in academic circles, since it is difficult to envision a strict set of criteria for judging whether or not a journal is predatory. A large segment of his work builds on his own experience, and hence there is a chance that a few journals that merely appear unconventional have been labeled predatory without actually being so. However, although there can be debate about what goes into calling a journal “predatory”, there is no debate about the fact that his endeavor was sorely needed by the academic fraternity. In an environment where such a travesty is gaining momentum every day, it needed a crusader like Beall to step in and raise the battle cry!

However, not all open access journals are predatory. The bogus article got shot down by PLoS ONE, the largest OA journal in the world. It was also rejected by journals from the Hindawi Group, which was once included in Beall’s list of predatory publishers. Good for them! Proving their worth for once, at least. PLoS ONE, owing to its size and stature, had come under a lot of fire from the international community ever since it launched with the policy of publishing any study that was scientifically and ethically sound. However, the author of the spoof found that it was PLoS ONE that provided the most solid peer review of the article. It not only rejected the article within two weeks for poor scientific quality, but was also the only journal to point out its potential ethical breaches.

So, the impulse to label all open access journals as predatory and money-seeking is strong, but one has to keep in mind that, just as in life, there are shades of black and white, with fifty shades of grey in between, in the medical publishing industry as well. The only way to protect oneself from such fraudulent scammers while, at the same time, promoting open access is to push for quality control in journal recognition. One can stay safe from such predators by sticking to journals that have adopted a “free” open access model, where the running costs of the journal are borne by a recognised and respected scientific body, and authors are exempt from paying to make their articles open access.

Developing World Scientists: The Naive Targets

The worst part is that the targets of these fraudulent publishers are researchers in the developing world, which helps explain why so many predatory open access journals come out of India and Africa. The easy publication model is attractive to researchers who need to bolster the number of papers under their belt in order to achieve academic success and recognition in the form of a loaded curriculum vitae. Although they may not be able to cough up significantly large amounts (some journals ask fees in the thousands of dollars, clearly beyond the range of developing world scientists, who usually work under massive resource restrictions), the predatory journals are happy to make whatever they can off them. The Science article speaks of journals that have accepted peanut payments of $90 in author charges as well.

This is a really worrisome trend, because not only does it show a lack of scientific integrity on the part of the journal publishers, it also betrays a certain affinity for graft and con artistry that is a strict no-no in the world of science.

Dealing with Predatory OA:

The best defence is to be aware of what is going on. While not everyone may have the guts that Jeffrey Beall has to name and shame big publication houses and label them “predatory” on a publicly available resource, we can all fight to protect ourselves and our colleagues. This is more about the need to spread awareness of the phenomenon than anything else. And to that effect, the Science article has been phenomenally successful. While Beall’s list has been known to academia for a while now, and everyone appreciates the amount of work he has put into it, this article has come out with irrefutable proof that some players are in it just for the money. And the fact that Beall’s list was shown to be over 80% accurate in identifying the miscreants speaks volumes about the rigor with which he went about the task.

In conclusion, the onus is on us. Like in the real world, in the academic publication world as well, there are hyenas and wolves waiting to scavenge on our research work and meager resources. It is in our OWN interest to be aware of these miscreants and protect our rights.

In the next segment I shall deal with the issue of peer review in open access journals. Does the payment of author fees taint the process of peer review and lower acceptance thresholds? Hang on till the next time then…

6 thoughts on “Predatory Open Access: Part 1 – A Sting Op and Indictment of the OA Model”

  1. The experiment should be reproduced in some form with conventional journals as well, and then compare rates of acceptance, instead of solely targeting OA ones (Oh God, I hope I read it right and they haven’t already done that :P). At the end of the day it comes down to individuals and having a team of reviewers strong enough to nit-pick well enough, ain’t it? Things may/could slip by any group – the Lancet and the vaccine–autism paper, etc. Though the concentration of burgeoning OA journos and other fee processes is disconcerting.


    1. Thanks for caring to read and leave behind a well thought out comment. I agree it would possibly pass by many a conventional journal as well, but that would prove the much criticised weaknesses of the peer review system and do nothing to remove the “blemishes” on the open access model. The only way out of this is complete disclosure of all data and raw information so that one can analyse and come to their own conclusions. Free the data! Vive le revolucion!



    To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

    But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based “Gold OA”) is premature, as are plans by universities and research funders to pay its costs:

    Funds are short and 80% of journals (including virtually all the top, “must-have” journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

    What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online (“Green OA”).

    That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

    The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

    That post-Green, no-fault Gold will be Fair Gold. Today’s pre-Green (fee-based) Gold is Fool’s Gold.

    None of this applies to no-fee Gold.

    Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.

    But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

    Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.

    But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.

    For some peer-review stings of non-OA journals, see below:

    Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

    Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

    Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000): and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowland & Littlefield. Pp. 235-242.

    Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).


  3. Very useful comments from Stevan Harnad for an excellent write-up by Pranab. Thank you both for being generous in your shadings.

