How to Rate Research: A Strange Little Proposal
This weekend, I got to thinking about how scholars in the humanities might rate their research productivity and quality. This line of thinking was prompted both by a new university mandate that scholars in the humanities and social sciences figure out how to rank or evaluate our publications and by a reading of I. Stengers’s Another Science is Possible: A Manifesto for Slow Science (2018).

The problem is a long-standing one. First, as scholars, we already carry a pecking order, largely in our own minds, for what constitutes a good publication and what does not. In general, this follows the basic contours of journal and publisher quality, but it is also riddled with meaningful exceptions and ambiguities that become significant when evaluating research for a relatively small sample like a department. I think most of us would agree that top tier journals (e.g. the American Historical Review or the American Journal of Archaeology) generally publish better articles, especially over time, but this does not exclude the possibility of good articles appearing in less well regarded journals. The latter becomes particularly important when, say, reviewing the research of a smaller department, like ours, whose annual output might not map neatly onto long-standing patterns of quality. Moreover, having a nuanced system that goes beyond the typical lists of journal ratings makes it possible to rate the quality of highly specialized work that might not fit into the broad purview of many top tier journals but is nevertheless significant. Decisions to publish specialized work destined for specialized audiences are often shaped by considerations of “fit” rather than overall ratings of journal quality. Finally, in a small department with a rather irregular output, an ideal system will allow for the occasional misfired article as well as for publications whose quality exceeds the ranking of their venue. Over time, such exceptions will become outliers as most of the best articles appear in most of the best journals, but in an annual review based on a small sample, these exceptions might have a meaningful impact on efforts to evaluate a department.

Here’s my proposal.

Each publication receives five scores provided by the scholar (who, in a department like ours, is the only person really able to judge the character of the field). A rough sketch of the arithmetic follows the list below.

1. Rank. 25 points. This is the most standard evaluation of publication quality. Better quality journals and publishing houses get higher scores, with the standard gaggle of top tier journals and publishing houses (Oxford, Cambridge, Princeton, Harvard, etc.) scoring in the top quintile and so on. This should be the category most susceptible to the “smell test”; that is, we should be able to smell an overrated or underrated journal or publishing outfit.

2. Type. 25 points. Generally speaking, the gold standard in my corner of the humanities is the peer-reviewed book or article, so these would occupy the top quintile here. The next quintile would be peer-reviewed book chapters or edited peer-reviewed works, with the third quintile representing book chapters and other solicited articles. I would rank review essays and non-peer-reviewed popular pieces or editorials next, and then book reviews and shorter conference proceedings in the bottom quintile. By allowing for some wiggle room in each quintile, we can distinguish between, say, a peer-reviewed article in a top tier journal and one in a small regional outfit. We can also allow for various exceptions. Obviously, a 6,000-word review essay in a top tier journal might be more significant and impactful than a short review essay for an online publisher. Again, most of this should follow the smell test.

3. Fit. 25 points. One issue with the general journal rankings is that they tend to be biased toward traditional fields of study and research accessible to a large audience (and therefore sweeping and generalizable). In some ways, this is a good thing, but it also tends to overlook the daily grind of folks working to produce significant specialized knowledge, to explore overlooked periods and places in the past (cough… North Dakota or Cyprus), to chart new subfields (the archaeology of the contemporary world, for example), and to develop methods or theory of most use to specialists. A fit score allows us to reward articles that appear in places where they are likely to find a receptive audience rather than simply appearing in the top ranked journals. Again, for a small department like ours, this rewards, on an annual basis, work that might not find a home in a top tier journal but has an obvious and interested audience. This would reward, say, an article in an edited volume dedicated to a narrow topic, specialized research that tends to appear in less prestigious regional or specialist journals, or even books that appear in a series developed by a regional press.

4. Quality. 25 points. This will likely be the most controversial category in my ranking system, but I contend that most of us can be honest about the quality of our own work. In other words, we know when we write a good piece or a mediocre one, but we also know that there are times when an article or chapter simply isn’t as good as we wanted it to be (but still good enough for publication). This self-awareness also serves as the counter-balance to poor fit. For example, one of my favorite and best articles ever appeared in a rather obscure Hesperia Supplement. While my fit score would be pretty low and the type and ranking of the publication would be middling, the quality of the article is high. The same could be said for my “Slow Archaeology” article in North Dakota Quarterly. Some other articles of mine trend the other way, of course.

5. Other Considerations. 10 points. Like any system, this one needs some flexibility to take into account considerations that the existing publishing and academic system does not cover: for example, a book that makes innovative use of published data, an open access publication, or even a more conventional work, like the publication of a series of lectures, that might not correlate neatly with our established categories. It would also allow us to mark particularly involved pieces of research or to denote research that has won awards or other distinctions. These considerations will have to be spelled out.
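To make the arithmetic concrete, here is a minimal sketch, in Python, of how a composite score might be computed under this rubric. The five category caps come from the proposal above; the field names, the validation, and the simple sum out of a possible 110 points are my own illustrative assumptions rather than part of the proposal itself.

    # A rough sketch of the rubric above. The five category caps come from
    # the proposal; everything else (names, validation, the simple sum) is
    # an illustrative assumption.
    from dataclasses import dataclass

    MAX_POINTS = {
        "rank": 25,     # quality of the journal or press
        "type": 25,     # peer-reviewed book/article down to book review
        "fit": 25,      # match between venue and intended audience
        "quality": 25,  # the scholar's honest self-assessment
        "other": 10,    # open access, awards, innovative use of data, etc.
    }

    @dataclass
    class Publication:
        title: str
        rank: int
        type: int
        fit: int
        quality: int
        other: int = 0

        def __post_init__(self):
            # Keep each score within its category's cap.
            for name, cap in MAX_POINTS.items():
                score = getattr(self, name)
                if not 0 <= score <= cap:
                    raise ValueError(f"{name} must fall between 0 and {cap}")

        def total(self) -> int:
            # Composite score out of 110 (4 x 25 + 10 for other considerations).
            return sum(getattr(self, name) for name in MAX_POINTS)

    # Purely invented scores: a strong article in a middling but well-fitting venue.
    article = Publication("Example Article", rank=13, type=22, fit=24, quality=23)
    print(article.total())  # 82 of a possible 110

How a department would aggregate these totals across a year’s output, whether by sum, average, or something else, is left open here.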

~

To be honest, I’m not sure that a system like this will satisfy my colleagues or the powers that have requested this kind of ranking, but to my mind a system of this kind, which takes into account rankings, types of scholarly output, fit, and quality, allows for nuance while at the same time offering an easy to read “quantitative” score that fits the limited attention span of the assessocracy. Finally, including “fit” and “quality” responds to some of the concerns raised by critics like Stengers or Gary Hall in The Uberfication of the University (2016), who critique our tendency to conform to ranking systems imposed on us from outside of our programs, disciplines, and departments. It seems to me, at least, that a system like this, which reflects both our own values (as individual scholars) and larger trends in academia (which, despite what we say, do matter), offers another path toward understanding what makes us good scholars.

