Different types of peer review
Last week, I was at the ARL fall membership meeting in Arlington. As always, the range of topics was terrific, and the discussion among those who attended was first-rate. During a session hosted by the Scholarly Communications Steering Committee, James J. O’Donnell, Provost at Georgetown University, gave a talk entitled “Monocles, Monographs, and Monomania: A Look Ahead”. For those of you who haven’t heard Jim speak before, the depth, range, and scope of his presentations are amazing.
In this particular presentation, he described his experience in scholarly communication as a scholar, an editor, and a university administrator. One of the topics he touched on was the state of peer review, and what he described as its four current forms in scholarly publishing. The first and most obvious is pre-publication peer review, the process most commonly thought of when one mentions peer review: the editorial review that takes place prior to publication, often as an iterative improvement process. The second form is post-publication review in the form of published reviews of a work, where other scholars comment directly on the relative strengths and weaknesses of their colleagues’ work. The third form is the longer-term citation ranking of a work in the published literature; despite its flaws, it does capture the relative impact of a work (generally a journal) over the time since publication. The final form that Jim covered is the less appreciated, less discussed, and largely confidential process of tenure and promotion review. Through the tenure process, outside reviewers consider the corpus of a scholar’s work when making tenure and promotion decisions, combining many aspects of the first and second forms of review in a private setting.
During his discussion, Jim described what he saw as some of the potential flaws in this system and the dangers of the diminishing quality of these review processes. Two points in the discussion stood out for me. The first dealt with how conservative these processes are, and how they might resist the changes that a more open, collaborative approach to research would bring. Unfortunately I couldn’t stay for Friday’s ARL/CNI Forum on “Reinventing Science Librarianship“, which looked like a great program. One thing is certain, however: the openness and sharing of data will surely be among the topics discussed there. Until the P&T system at universities changes, I fear that science will be locked up by the conservative doctrine of the sole scholar toiling away at his work, completely missing the proven value of collaboration and joint projects. It will be interesting to see over the coming years how these new methods of scholarly communication and publication affect promotion systems that have developed over centuries. I expect the process won’t change until the senior faculty at universities are slowly replaced by junior faculty more accustomed to, and attuned to, the strengths of networked workflows.
The second point that Jim made ties to the increasing quantity of materials published. Jim argued that it is too easy to get published and too hard to know what is really good. In a way, he postulated, this owes to the decreasing quality of peer review and to publishers’ (broadly speaking) willingness to publish scholars’ works, which adds to the problem of information overload. While I question the premise of his first point, there is a great deal to consider in his second. There is data that refutes the first point, particularly a report published by the STM Association that correlates the number of papers and journals published with the growing number of scholars. His second point, about knowing what is really good, is the real crux of information distribution. What is key is not what is available, but what is important and relevant to me. One of the most promising areas of information science is providing context and additional information about an item so that it can be more easily discovered and assessed. There is lots of ongoing work in ontologies and semantics (NB – links are illustrative, not comprehensive). I believe there is also a lot that the scholarly community can learn from usage analysis and from the work of the Los Alamos library team on MESUR, led by Johan Bollen. There are also other metrics of quality, such as COUNTER’s usage-based Usage Factor and the citation-based Eigenfactor.
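To make the flavor of these metrics concrete, here is a toy sketch of the sort of network-based ranking that underlies both MESUR’s usage measures and the Eigenfactor: a PageRank-style iteration over a small, made-up citation graph. This is my own illustration, not the actual MESUR or Eigenfactor algorithm, both of which involve considerably more machinery (citation windows, self-citation handling, normalization).

```python
# Toy, Eigenfactor-flavored ranking: score journals by iterating a
# PageRank-style recurrence over a small citation graph.
# Purely illustrative data and parameters.
citations = {      # citations[j] = journals that j cites
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
journals = list(citations)
score = {j: 1.0 / len(journals) for j in journals}
damping = 0.85

for _ in range(50):  # iterate until scores settle
    new = {}
    for j in journals:
        # Each citing journal passes its score along, split across
        # everything it cites.
        incoming = sum(score[k] / len(citations[k])
                       for k in journals if j in citations[k])
        new[j] = (1 - damping) / len(journals) + damping * incoming
    score = new

for j, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(j, round(s, 3))
```

Simpler usage measures amount to little more than total downloads divided by articles published; the network measures instead try to weight where the attention comes from, not just how much of it there is.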
My feeling is that lessons could also be drawn from the Web 2.0 world and from user interactions in quality assessment. I don’t know whether any publisher is using something like “rate this article” in the way that Amazon does for books. I tend to doubt it, given the results of the Nature trial of “Open Peer Review“ a few years back. Open peer review wasn’t a success, in part because of the quality of the reviews and the low take-up. Perhaps the issue of greatest concern (and the one most limiting participation) was the fear of having one’s ideas scooped. Post-publication commenting, however, doesn’t present the same challenge. I am sure, though, that publicly airing one’s opinions in our community has its downsides, which could limit its success.
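If a publisher did try a “rate this article” feature, the interesting problem is aggregation: a single five-star vote shouldn’t outrank fifty steady four-star votes. Here is a minimal sketch of one standard fix, a Bayesian average that damps small samples toward a prior; the prior values are arbitrary illustrative choices, not anything Amazon or any publisher actually uses.

```python
def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Damp small-sample article ratings toward a prior so that one
    five-star vote doesn't outrank fifty four-star votes.
    prior_mean and prior_weight are illustrative choices only."""
    n = len(ratings)
    if n == 0:
        return prior_mean
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

print(bayesian_average([5]))       # ~3.18: one rave, little evidence
print(bayesian_average([4] * 50))  # ~3.83: many ratings, strong evidence
```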
Recently, Elsevier launched a contest on new service models for articles, the Article 2.0 Contest. It is being billed as an opportunity to become a publisher and provide new services based on Elsevier’s ScienceDirect journal database. From the site:
“Each contestant will be provided online access to approximately 7,500 full-text XML articles from Elsevier journals, including the associated images, and the Elsevier Article 2.0 API to develop a unique yet useful web-based journal article rendering application.” Perhaps some innovator will find a way to improve interaction and discovery through user engagement. If nothing else, Elsevier will get some good ideas and someone will win $4,000.
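For anyone tempted to enter, the shape of such an application is straightforward: pull an article’s XML through the API, then layer rendering, commenting, or rating on top. A hedged sketch of the data layer follows, in which the endpoint URL, path scheme, and authentication parameter are all invented placeholders; the real Article 2.0 API details live in Elsevier’s contest documentation.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint -- a stand-in, not the real Article 2.0 API URL.
BASE = "https://example.com/article2/v1"

def fetch_article(doi: str, api_key: str) -> ET.Element:
    """Retrieve one full-text XML article and parse it."""
    url = (f"{BASE}/articles/{urllib.parse.quote(doi, safe='')}"
           f"?apikey={api_key}")
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())

# e.g. root = fetch_article("10.1016/j.example.2008.01.001", "MY_KEY"),
# then render root into HTML and add commenting, ratings, and so on.
```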