Recently, the CNBR network carried a message announcing a conference in Australia, and one of the recipients responded to point out that this particular conference had been selected as an "A"-rated conference under the Australian Excellence in Research for Australia (ERA) scheme. This scheme involved ranking all journals and conferences and assigning each a rating. It provides an administrative route for deciding whether to appoint or promote academics in Australia, based on how many research outputs they have in the top-rated outlets on the list. It also means that Australian universities would probably not fund their staff to attend conferences that are not A-rated.

This is symptomatic of a much wider movement in society to avoid the need for professional judgement at all levels. Some commentators have connected this to the decline of trust in society (e.g. O'Neill, O. (2002) A Question of Trust: The BBC Reith Lectures 2002. Cambridge: Cambridge University Press). Of course, although there is a general feeling that trust is in decline, and that we need more of it, it does not necessarily follow that we do indeed need more trust. It is interesting to read Cook, Hardin and Levi (2007), Cooperation Without Trust?, who argue that there are better things than trust to account for social order.
When I was working on the Professional Futures study, it was clear that people were finding it increasingly difficult to make judgements, and I think this is getting worse. People are so frightened of making judgements of any kind that they would rather have objective lists of publications than actually read things and figure out whether they are any good. This prompted me to respond to the list as follows:
I think it flags up the lunacy of the whole process of ranking conferences and journals in this way. Are we to suppose that just because a paper has been published in an A*-rated journal or an A-rated conference, it is definitely better than a paper in a B-rated forum? And are we to suppose that anything published outside of these A-list outlets is somehow sub-standard?
I sincerely hope that most of us are clever enough not to accept administrative views of what constitutes quality when it comes to research outputs! Even if we are told where to publish, I would hope that academics keep an eye on their peer groups, and on their own careers beyond their current employer. What senses do we lack that we cannot detect good conferences, journals and papers for ourselves?
Various colleagues responded privately, agreeing and offering observations about how the list was compiled in a very narrow and parochial way. One colleague from the Netherlands observed that although she agreed wholeheartedly with my views, if you find yourself in this situation, there is not much you can do. My rejoinder was:
It places you in a difficult position, for sure. If it is not your peers who are deciding which are your most important outputs, what chance have you got?
I think that in such a situation, it may be helpful to acknowledge that there are two different games being played, and they are somewhat, but not wholly, incompatible. First, our bosses want us to perform in a strictly quantifiable way, measured by numbers of papers in certain places. Second, for our own careers, we need to figure out a publication strategy that places our work where it reaches the peer group we seek to access. Admittedly, these two activities overlap, hopefully by a lot. But we have to acknowledge that sometimes it is necessary to publish outside the manufactured lists. In the long run, it is important that academics keep an independent sense of what their field looks like. I would also expect that this whole discussion may look different in other fields.
So, I agree that it is not possible to ignore the lists when you are subjected to that kind of regime. But we also have to keep a sense of perspective, and ensure that each of us develops a publication profile that our peers can appreciate not just for quantity, but for the quality of the research. And that requires subjective, professional judgement.
It is worrying when intelligent people like academics collude in the erosion of their own judgement.