Best journal? Depends on whom you ask

Which journals are the most influential? The answer obviously depends on whom you ask, and is sensitive to variables like one’s field, subfield, position, and recent professional history. But that doesn’t stop people from compiling lists of the most important publications within broad disciplines like “biology”.

The methodology by which one makes the comparison is also important. If one proceeds by “impact factor” (a complex, proprietary, and increasingly challenged function of how frequently papers in a given journal are cited), the answers tend to converge on a few very high-profile journals. If one takes a poll of librarians and experts chosen by librarians, the resulting list overlaps somewhat with the one derived from impact factor, but with far greater diversity (possibly as a result of honoring journals that were once great but whose readership has fallen off somewhat in the past few decades). Neither list mentions open-access journals at all.
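For the unfamiliar: the published definition of the two-year impact factor is simple arithmetic; the controversy lies in the inputs (which items count as “citable”, and which citations get counted). A minimal sketch, with made-up numbers:

```python
# Two-year impact factor, simplified. The calculation is "proprietary"
# mainly in how the denominator is chosen: deciding which published
# items count as "citable".

def impact_factor(citations, citable_items):
    """citations: citations received this year to items published in
    the previous two years; citable_items: number of citable items
    (articles, reviews) the journal published in those two years."""
    return citations / citable_items

# Hypothetical numbers, for illustration only:
print(impact_factor(citations=12000, citable_items=400))  # 30.0
```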

But citation is just one means to measure the importance of a paper. Speaking personally: there are lots of papers that change the way I think, and contribute tremendously to the intellectual “backstory” of my projects, but never end up getting referenced in a primary paper. Beyond that, citations take a long time to accumulate; they generally don’t even start appearing until more than a year after a paper is published.

What if we were able to measure the actual use of papers by scientists, irrespective of whether those papers are eventually cited? One could measure the rate at which papers are downloaded from journal websites, and indeed this is already being proposed as an alternative metric of journal impact.
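If per-journal download counts were available, the analogous “usage factor” would be just as easy to compute: downloads per published article over some window. A minimal sketch, assuming per-journal totals (the journal names and numbers below are invented):

```python
# A download-based "usage factor": downloads per published article.
# All journal names and numbers below are made up for illustration.
journals = {
    "Journal A": {"downloads": 500000, "articles": 800},
    "Journal B": {"downloads": 90000, "articles": 150},
}

def usage_factor(stats):
    # Crude usage analogue of the impact factor.
    return stats["downloads"] / stats["articles"]

# Rank journals by downloads per article:
for name, stats in sorted(journals.items(),
                          key=lambda kv: usage_factor(kv[1]),
                          reverse=True):
    print(name, round(usage_factor(stats), 1))
```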

There’s also a lot of free information floating around on social bookmarking sites like Connotea and Mendeley. One could ask which journals are publishing the articles most likely to be bookmarked and shared by users of those sites. Doing so reveals that open-access journals may be a good deal more influential — in the sense of actually being read by a large number of working scientists — than predicted by conventional metrics.
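The counting itself is trivial. Suppose one had an export of bookmarks, each labeled with the journal of the bookmarked article (neither Connotea nor Mendeley is assumed to provide data in exactly this form; this is just a sketch):

```python
from collections import Counter

# Hypothetical export from a bookmarking service: one journal name
# per bookmarked article.
bookmarks = [
    "PLoS ONE", "Nature", "PLoS ONE", "PLoS Biology",
    "Science", "PLoS ONE", "Nature", "PLoS Biology",
]

# Rank journals by how often their articles are bookmarked.
for journal, count in Counter(bookmarks).most_common():
    print(journal, count)
```

The hard part, of course, is getting clean data out of the services and matching articles to journals, not the counting.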

Of course, all of this discussion presupposes that there is some reason why we need to pick a “best” journal at all. Good search engines, in conjunction with rapid indexing of the primary literature, have greatly flattened out the landscape; at the same time, however, the proliferation of journals has caused that same landscape to expand greatly. We need some filter on the literature, but it’s increasingly unclear to me whether selecting papers to read based on the brand name on the journal’s cover (which I never see anyway) is a good solution to that problem.


Comments

  1. Well said.

    I was aiming at this idea — less concisely — with the points about search and the flattening out of the landscape. If all articles were equally findable and equally accessible, there would in a sense be only one (big) journal.

  2. Re: “What if we were able to measure the actual use of a paper by scientists, irrespective of whether they eventually got cited? One could measure the rate at which papers were downloaded from journal websites . . . .”

    I agree that we need a better criterion than the current “impact factor” system, but wouldn’t measuring the number/rate of article downloads simply be a measure of the number of people in a particular field? I suspect that a majority of scientists rarely read articles far outside their particular field of expertise or research. So, almost by definition, a research paper on HIV will get more downloads than one on lymphatic filariasis. Then again, you could say that this is precisely the intended measure of “impact” we’re looking for: more people will read the HIV paper because more people research HIV. Therefore, a journal that includes many HIV papers has a wider and larger impact than journals that include lots of articles on lesser-known and lesser-studied topics.

  3. @Renee: Good points.

    Regarding the “measuring the size of the field” idea:

    1. That’s what normalization factors are for: one might want to know how a given paper on a given subject compares to other papers on that same subject, and that would be an easy comparison to do (see the sketch after this list).

    2. This effect already exists in the status quo: impact factor is higher for less specialized journals, because the effective “fields” for these journals are larger. For example, “Aging Cell” will never have a higher impact factor than “Science”, but it might still be nice to know how the papers about aging in AC were used/read, compared to the papers about aging in Science.
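    Concretely, once papers are labeled by field, the normalization is a one-liner: divide each paper’s count by the mean for its field. A minimal sketch, with made-up titles, fields, and numbers:

    ```python
    from statistics import mean

    # Field-normalized score: a paper's count (citations, downloads,
    # bookmarks...) divided by the mean count for its own field.
    # All titles, fields, and counts below are hypothetical.
    papers = [
        {"title": "Aging paper 1", "field": "aging", "count": 40},
        {"title": "Aging paper 2", "field": "aging", "count": 10},
        {"title": "HIV paper 1", "field": "HIV", "count": 400},
        {"title": "HIV paper 2", "field": "HIV", "count": 100},
    ]

    field_means = {
        field: mean(p["count"] for p in papers if p["field"] == field)
        for field in {p["field"] for p in papers}
    }

    # A score of 1.6 means "1.6x the average for its own field",
    # whether the field is large (HIV) or small (aging).
    for p in papers:
        print(p["title"], round(p["count"] / field_means[p["field"]], 2))
    ```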

  4. A search may find thousands of papers on a topic, but we need to identify the most important ones. Often, these will be published in highly cited journals. Citation is a more objective measure of use by other scientists than number of downloads, even if we didn’t have to worry about downloads by nonscientists, robot downloads to puff up “impact”, etc. A measure of how broadly citations are distributed among fields would be interesting, however. Is a paper cited only by others working on C. elegans biochemistry, or also by molecular biologists, cancer researchers, wildlife biologists, etc.?
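    One way to put a number on that breadth (a sketch only; the field labels and counts are hypothetical) is the Shannon entropy of a paper’s citations across citing fields:

    ```python
    import math

    def citation_breadth(cites_by_field):
        """Shannon entropy (bits) of citations across citing fields:
        0.0 = cited by a single field; higher = broader reach."""
        total = sum(cites_by_field.values())
        return sum((n / total) * math.log2(total / n)
                   for n in cites_by_field.values() if n > 0)

    # Hypothetical citation profiles for two papers:
    narrow = {"C. elegans biochemistry": 50}
    broad = {"C. elegans biochemistry": 20, "molecular biology": 15,
             "cancer research": 10, "wildlife biology": 5}
    print(citation_breadth(narrow))  # 0.0
    print(citation_breadth(broad))   # ~1.85
    ```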

  5. What we would really like to know about a paper is its value, which is more a qualitative than quantitative measurement. The quality of a paper is a function of its real (not perceived) contribution to science and its integrity. It would be difficult to measure quality, especially in the short term, but I suspect the correlation with journal impact factors would be low. If that’s true, it’s unfortunate that a bad paper in Science or Nature can convey more recognition and rewards than a great paper in a less visible disciplinary journal.
