Academic Reviewing Is Not Fun Anymore
Academic reviewing used to be more fun. (As fun as it can be, of course.)
You used to be able to tell a bad paper just from its looks. Scientific communication is all about expressing your thoughts clearly, motivating your research, and presenting the results in an easy-to-understand fashion. Chances are, if you can’t do that, you need another round of revisions and just… doing the work.
Don’t get me wrong: behind sloppy papers there can be interesting approaches, challenging hypotheses, and useful results. But more often than not, Occam’s razor applies: if it looks bad, it probably is bad. No need to dive into the methodological madness when there’s no way anyone can comprehend what was done. Definite Reject.
This has changed. Ever since the advent of LLMs and AI-assisted grammar/spell checking, paper manuscripts have become better — on the outside. Everything looks plausible. Nice word choices everywhere. So. Many. Different. Words. Easy to digest and very approachable.
The apparent competence makes it all the more difficult to properly assess the underlying method. I used to think that bad writing detracted from the core idea and that it was unfair to judge the contribution primarily on its looks. I don’t think that’s true anymore. You now have to spend far more time separating the signal from the noise in a pool of plausibility.
I don’t have a good solution either. AI detectors exist, and we should use them, but they are easy to fool, and the real issue is not the use of AI per se. I’ve used it, too. Ultimately, it’s about accountability and whether you can communicate your ideas properly. Just switch off the AI and let your brain do the writing. It really helps.