Law Review Impact Factor as RIPS: Some Initial Concerns
As I discussed earlier, I am skeptical of this recent focus on law review citation counts, both as an author and as a journal board member. Although I agree that there is certainly a need for an RIPS measure, I'm still not convinced that law review citation counts or impact factors would fit the bill.
My concerns about this proposal all involve the natural reactions that law review editors (and the law school administrators who influence them) would have if law review citation counts or impact factors became widely recognized as the best objective indicators of law school quality. Jim believes that law review citation counts are "incredibly hard to influence, let alone to 'fake,' absent an extraordinary (and therefore laudable) scholarly effort by a faculty," but I really don't think that's the case at all, since faculty wouldn't be the ones doing the faking:
- As David Zaring points out in the comments to this post, a journal only needs to average six citations per article to get into the top 25. What's to stop the editor-in-chief of a flagship law review at a third-tier school (perhaps acting on his dean's "suggestion") from instituting a blanket policy that every single article must contain at least three cites to articles the journal has recently published, whether the author wants those cites in there or not?
- You know those legal disciplines that traditionally get fewer citations? Well, let's just say I wouldn't expect to see any legal history articles in any flagship law review if citation counts became a generally accepted measure of law school quality.
- Oh, and I would expect book reviews to become a thing of the past. In fact, one commenter is already blaming Michigan Law Review's poor showing in the W&L rankings on its book review issue.
- Student notes and comments would also be on the chopping block. They're often counted against the journal as "articles" for measures such as citations per article, but they receive far fewer citations on average than the actual articles published, so journals would benefit by eliminating student notes outright or replacing them with more articles (see the arithmetic sketch after this list). Now, normally I wouldn't be opposed to this, since I believe student notes are one of the worst instances of rent-seeking in the law journal industry. But...
- ... this would also increase the impact of the letterhead effect. After all, what better predictor of who is likely to be cited heavily in the future than who has been cited heavily in the past? So, expect students, recent graduates, new professors at lower-ranked schools, and others without a strong publication history to be largely shut out of student-edited law reviews.
- In a similar vein, I'd expect a large increase in the number of solicited pieces published. And it should go without saying that those solicited pieces would be written by big-name authors with solid publication histories, rather than young Turks or energetic upstarts.
- Finally, exploding offers would probably become the norm rather than the exception.
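To make the citations-per-article arithmetic above concrete, here is a minimal sketch in Python. Every number in it is hypothetical (no actual journal data), and it simply illustrates how the note-cutting and forced-self-citation policies would mechanically inflate a journal's average:

```python
# All figures are hypothetical; this just illustrates the arithmetic.

def cites_per_article(groups):
    """groups: list of (number_of_pieces, avg_cites_per_piece) pairs.
    Returns average citations across everything counted as an 'article'."""
    total_pieces = sum(n for n, _ in groups)
    total_cites = sum(n * c for n, c in groups)
    return total_cites / total_pieces

# Status quo: 20 faculty articles averaging 8 cites, 10 student notes averaging 1.
print(cites_per_article([(20, 8), (10, 1)]))       # ~5.67

# Cut the student notes: the average jumps with no change in scholarship.
print(cites_per_article([(20, 8)]))                # 8.0

# Or mandate 3 self-cites per published article: each of the 20 articles
# funnels 3 extra cites back to the journal's own recent pieces.
print(cites_per_article([(20, 8 + 3), (10, 1)]))   # ~7.67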
Some of these issues could be remedied by tweaking the ranking system in various ways... but law reviews and administrators would find ways to work around those tweaks as well. And some issues, such as the letterhead effect and the rise of solicited pieces, could not be easily fixed without significant collective action.
Of course, this doesn't even address the question of whether law review citation counts are a good independent measure of quality! Yes, citation counts are an objective measure of law review quality, but the Cooley rankings are also an "objective measure" of law school quality, and I don't see anyone lobbying U.S. News to adopt that methodology...
So, I must remain skeptical. But feel free to explain why I'm wrong here.
Labels: academia, Anthony Ciolli, scholarship