How papers killed academia

Posted on Wed 12 July 2023 in personal

Research is made of dead ends. Of wandering, broad and wide. Of peeks into different topics, of interest and study without the expectation of a payoff. Of ideas, of connections.

Papers don't allow any of this. They want results, meaningful or not. They feed the academic beast without generating any knowledge. When the world became globalized, the system needed not just a simple and objective metric to evaluate people in academia, but any metric at all. With hundreds of applicants for each academic position, 95% of whom are total strangers to the committee, you need a way to sift through them. And anyway, a half-hour chat with a stranger is about as helpful and effective as looking at a number. So we decided to fight quantity with quantity. Modernity doesn't allow for subjectivity: the old "I know him, he's a good candidate" is no longer acceptable. We need numbers and objectivity to validate our choice – and also to shield us in case our assessment turns out wrong.

The problem is that any metric breaks the moment you start optimizing for it: it's the basic lesson of AI. When they tried to fix the issue by introducing citation counts into the equation, they made it even worse, confining people to their research fields forever. The only way to accumulate citations is to cross-reference your own work, become renowned in your niche, and get your friends to cite your stuff. If you change niche, if you look into different problems, you are dead.

When you take a PhD, you take it in a subject, not in a topic. It's mathematics, not Finite Element Methods for ice modeling. You are gonna learn so much more by looking at different problems and fields, but that's actively discouraged. Richard Hamming recommended switching fields every seven years at most.

The unfitness of papers is apparent even in the wording: you are assigned one PhD project, but somehow expected to make multiple papers out of it (however many you manage to, really). If you wrote Shannon's "A Mathematical Theory of Communication" today – to this day the "paper" I mention when people ask me what papers should look like – you'd get a nod, a pat on the back, and smiley remarks like "looking forward to seeing how you are gonna expand this in the next 3 papers!" (plus frowns because it's not in 3 columns and it contains examples). Nobody in their right mind who wants any career in academia would ever write such a paper, and no editor would ever publish it these days. You'd break it into meaningless chunks the beast can be fed with. It swallows papers whole, regardless of length, quality, and clarity, and it demands one per year, no matter what. Extraordinarily good papers (i.e. published in renowned journals, like Nature) may satiate the beast a little longer, just like your overweight uncle skipping lunch after Thanksgiving, but its hunger is relentless – that is, until you get tenure. Then the beast accepts that you are untouchable and you are allowed to stop playing. The consequence of not playing, though, is to slowly rot.

Papers say "you may entertain yourself with any problem you wish, as long as it is fruitful". Which is a guarantee that nobody will do any science. It's just too dangerous: imagine if, at the end of the year, you don't have anything for the beast. It's gonna happily take one of your limbs instead – and you only have so many limbs before you are academically dead. Proper intellectual exploration probably survives only in large research groups where everybody is a co-author and your name gets magically dropped into papers, so that you can dismiss the whole paper-counting bullshit. There, maybe, real exploration can still happen, but it's one of those fine balances that works only as long as the beast is happy, as long as you keep pleasing it.

As much as everybody agrees that this metric is flawed, it sticks around because nobody has a good replacement. After all, how do you quantify a researcher's quality? It's a trick question: you don't. How is it different from "how do you quantify a good mother?" Quality is hard to pin down in a metric, and AI has taught us that, as soon as you do, players will invariably start gaming it to achieve a better score. In fact, we should not even try to quantify qualitative assets. Good science, like good parenting, is made of a thousand different elements that only other humans can evaluate. There are no hard criteria, but we recognize it when we see it (read "Zen and the Art of Motorcycle Maintenance"). It takes time, though, and our modern consumerist world has none to waste, even in academia. You'd have to invite the candidate over for a while, sit in on his lectures, talk to his students, involve him in trial projects. Who's got time for all this? Compare it to a selection based on paper count, number of years abroad, and a 45-minute trial lecture (which is already a generous hiring process).

These days academia is just sustaining itself, and few dare to do anything definable as intellectual exploration. Those who do are guaranteed not to get an inch ahead, unless they are very special – there's a whole army of people out there just pleasing the beast, and the beast will favor them.