Yesterday, the National Contact Point Open Access (Germany) sent an email to several mailing lists pointing to a list of highly cited Open Access journals. According to the email, this list could help motivate scientists to publish in Open Access journals [translated by the author of this posting]: “From now on, an overview is available under the resources category with approx. 700 highly cited Open Access journals from all scientific disciplines. The aim of this overview is to make established and relevant Open Access journals visible to researchers and to increase their findability. In this way we are helping researchers to identify suitable and influential Open Access journals from their discipline to submit their manuscripts to.” The list relies on the journal metric Source Normalized Impact per Paper (SNIP), based on Elsevier’s database Scopus, which differs from the Journal Impact Factor (JIF) mainly in that it takes the average citation rate of a discipline into account when calculating the impact (or SNIP score) of a specific journal.
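To make the normalization idea concrete, here is a minimal sketch of the intuition behind SNIP. The journals, numbers, and the simplified one-line formula are all hypothetical illustrations, not the actual SNIP methodology (the real calculation by CWTS is considerably more involved):

```python
# Illustrative sketch of the idea behind SNIP: a journal's raw citations
# per paper are divided by the typical citation rate of its field, so
# journals in low-citation disciplines are not automatically penalized.
# All numbers below are made up; this is not the official CWTS formula.

def snip_like_score(citations: int, papers: int, field_citation_rate: float) -> float:
    """Raw impact per paper, normalized by the field's average citation rate."""
    raw_impact = citations / papers
    return raw_impact / field_citation_rate

# A hypothetical qualitative-research journal: few citations,
# but the field as a whole cites sparsely.
humanities = snip_like_score(citations=150, papers=100, field_citation_rate=1.5)

# A hypothetical STM journal: many citations,
# but the field cites heavily anyway.
stm = snip_like_score(citations=800, papers=100, field_citation_rate=8.0)

print(humanities)  # 1.0
print(stm)         # 1.0
```

Under this toy normalization both journals score identically, even though their raw citation counts differ by a factor of five or more; that is the property that distinguishes SNIP-style metrics from the JIF.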
In a reply, Katja Mruck, editor of the Open Access journal Forum Qualitative Sozialforschung (FQS; not on the list mentioned above), commented critically on this list. For example, she pointed out that in order to be indexed in the relevant citation databases (such as Scopus or the Web of Science, the latter being used to calculate the JIF), FQS would have to adapt its characteristics to the specifications of those databases, e.g. by
- publishing fewer articles and issues per year
- covering a narrower thematic spectrum
- asking FQS-affiliated authors to place their publications strategically, namely in journals that are already indexed in these databases, and to cite FQS articles.
None of this has been done, because the journal evidently has attractive characteristics that, unfortunately, are not reflected in citation counts. Katja Mruck answers the questions about the quality, visibility, and relevance of FQS (the characteristics attributed to the journals on the list, see the screenshot above) [translated by the author of this posting]: “Quality? FQS articles are reviewed double-blind and, in the case of a recommendation to publish, are published only after they have been editorially proofread in the author’s native language. Visibility? We have over 20,000 registered readers worldwide, plus those who use FQS without being registered. Relevance? A look at the contents and the authors could answer this question.”
She concludes with a pessimistic statement: “So from the heart of the OA movement we are now challenged to do things we would rather not do in order to remain in the heart of the OA movement.”
Unfortunately, I agree with her. The arguments used to promote Open Access are sometimes (or even often) ambivalent. Take the impact argument, according to which Open Access boosts impact and which, reciprocally, is intended to strengthen support for Open Access: on the one hand, everyone complains about impact measures; on the other hand, scientists are lured with them. Yes, I know that SNIP has advantages over the JIF, but it (just like any other metric, including altmetrics, which are no better than citation measures) cannot extrapolate quality from quantity – especially since the quantity measured is itself the result of multiple selections. I have the same problem with the argument that Open Access fosters the economy: if we follow this argument, it is not a big step to assess a publication’s quality by its economic value – and then journals like FQS are worse off than STM journals and may easily be considered worthless. But quality has nothing to do with SNIP, the JIF, or return on investment. The argument that Open Access has advantages intrinsic to science (acceleration of research, transparency, dissemination and participation, science as a public good) is, at least in my opinion, becoming more and more marginalized.
There may be reasons for this: large-scale attempts to promote Open Access often come from institutions, e.g. the European Union (I recommend in this context this publication by Jutta Haider), that are concerned with impact and economic exploitability, not least in order to document the economically efficient use of public funds. And I have to admit this, too: these institutions may really have the power to promote Open Access (albeit possibly under conditions that I ultimately dislike), but unfortunately their commitment to Open Access could not be won with the science-intrinsic arguments mentioned above, only within the impact-and-economy framework.
What remains? Perhaps the observation that good Open Access journals which have no measurable citation impact (however questionable such measures may be) and no economic exploitability are wrongly given little weight in supposedly qualitative evaluations outside their communities – and are therefore not recommended to scientists as publication venues. In case of doubt, must an Open Access journal thus decide against a format acknowledged in its community and adapt to the market mechanisms of scholarly publishing and the economic sphere if it wants to be credited with characteristics such as “quality, visibility, and relevance”?
Copyright note: Icon available under MIT License from https://www.iconfinder.com/icons/2561489/unlock_icon