San Francisco Declaration on Research Assessment

The impact factor of an academic journal is the average number of citations to its recently published articles. For example, a journal with an impact factor of 10 receives, on average, 10 citations per article it publishes. Of course, a few very highly cited articles greatly boost the factor, and by its nature, the impact factor cannot tell anything about the merits of individual articles published in the journal.
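To make the definition concrete, here is a minimal sketch of the standard two-year impact factor calculation in Python. The function name and figures are illustrative, not taken from any official source:

```python
# Sketch of the standard two-year Journal Impact Factor.
# A journal's impact factor for year Y is the number of citations
# received in year Y to items it published in years Y-1 and Y-2,
# divided by the number of citable articles published in those years.

def impact_factor(citations_to_prior_two_years: int,
                  citable_articles_prior_two_years: int) -> float:
    """Return average citations per article over the two-year window."""
    if citable_articles_prior_two_years == 0:
        raise ValueError("journal published no citable articles in the window")
    return citations_to_prior_two_years / citable_articles_prior_two_years

# A hypothetical journal whose articles from the previous two years
# drew 5000 citations across 500 citable articles:
print(impact_factor(5000, 500))  # 10.0
```

Because this is a simple mean, a handful of blockbuster papers can carry the whole journal, which is exactly why the factor says nothing about any individual article.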

However, this is exactly what the impact factor is mostly used for. Time-stressed tenure and funding committees routinely use the impact factors of the journals an applicant has published in (recognizable from their names) as a quick way of gauging the applicant's publication record. Hence the self-perpetuating competition to publish in the “top” journals that is the mainstay of academic life. Incidentally, this practice has obvious implications for the attractiveness of new open access journals.

As the editorial of Science puts it:

The impact factor, a number calculated annually for each scientific journal based on the average number of times its articles have been referenced in other articles, was never intended to be used to evaluate individual scientists, but rather as a measure of journal quality. However, it has been increasingly misused in this way, with scientists now being ranked by weighting each of their publications according to the impact factor of the journal in which it appeared.

Much (virtual) ink has been spilled to decry this rampant misuse of the impact factor. However, concrete actions to remedy the situation have been few and far between. Thus it may not be a surprise that the San Francisco Declaration on Research Assessment (DORA) has gotten so much attention. As the description on their site states:

The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, recognizes the need to improve the ways in which the outputs of scientific research are evaluated. The group met in December 2012 during the ASCB Annual Meeting in San Francisco and subsequently circulated a draft declaration among various stakeholders. DORA as it now stands has benefited from input by many of the original signers listed below. It is a worldwide initiative covering all scholarly disciplines. We encourage individuals and organizations who are concerned about the appropriate assessment of scientific research to sign DORA.

The Nature News report includes comments from the DORA chairman:

“We, the scientific community, are to blame — we created this mess, this perception that if you don’t publish in Cell, Nature or Science, you won’t get a job,” says Stefano Bertuzzi, executive director of the American Society for Cell Biology (ASCB), who coordinated DORA after talks at the ASCB’s annual meeting last year. “The time is right for the scientific community to take control of this issue,” he says.

Their first and main recommendation is clear and striking – impact factors are declared unfit for duty:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

Interestingly, Nature is not amongst the signatories:

Nature Publishing Group, which publishes this blog, has not signed DORA: Nature’s editor-in-chief, Philip Campbell, said that the group’s journals had published many editorials critical of excesses in the use of JIFs, “but the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to”.

If journal impact factors are not to be used anymore, the way forward is seen to lie with more modern article-level metrics. Altmetric indicators are developing rapidly, and new research on their relationships to traditional metrics appears at an increasing pace.


3 thoughts on “San Francisco Declaration on Research Assessment”

  1. science does not progress when one promotes people who specialize in generating repetitious least-publishable units and/or in knowing best how to game the bureaucracy and how to please failed-ph.d.-kingmakers at major journals, the mafiosos at NIH/NSF study sessions, and their peers by citing them or giving them openings for further fassade publications.

    science progresses when important scientific breakthroughs are made.

    therefore whatever contributes to increasing the probability of such breakthroughs is an important contribution to science (this includes onerous teaching beyond the textbook).

    i propose to evaluate scientists and their output according to how many established theories (or “scientific” fads) they have refuted, how many seminal hypotheses and crucial new questions they have proposed, how many breakthrough new methods they have developed, etc.

    5, 10, and 15 years after the ph.d., the scientist would write his/her own explicitly argued and heavily footnoted evaluation describing his vision and merits as breakthrough thinker and scientist by commenting explicitly on his established-theory refutations, novel hypotheses and questions proposed, breakthrough new methods developed, etc., and by contrasting everything to how things were before his work.

    the factuality of the listed results and presented context and the relevance that the evaluated person attributes to the topics and results mentioned in the self-evaluation would then be critiqued by

    a) a group of experts recommended and justified by the evaluated person and

    b) a group of international experts chosen by a panel of national experts themselves chosen by the country’s professional organization.

    (this could be refined of course; the most important thing is to avoid both invidious and crony reviewing).

    the two reviews would then be exchanged between groups and contradictions would be eliminated.

    those who would come up empty-handed would start performing more and more work for others who have delivered breakthrough work in the past (and the technical training of such “support specialists” would be augmented and everybody would get paid the same to avoid careerists).

    these “specialists” would also carry out work for starting postdocs.

    say one would start working 25%, 50%, 75%, 100% for others, after coming up empty-handed after 5, 10, 15, and 20 years….

    of course, after delivering an important breakthrough (“hard work” and a “steady output” do not qualify as such), one would regain all of the “lost” ground (quotation signs because it must be a nightmare to have to feign that one is a creative scientist when one is not, especially if one is paid the same either way).
