Papers that are more difficult to read might be worth it if AI increased the amount of good science being produced. But this doesn’t seem to be the case. Organization Science is desk-rejecting (i.e., rejecting a paper before even sending it to peer reviewers) nearly 70% of manuscripts that made heavy use of AI. That rate drops to 44% for papers written without AI.

Easy Is Overrated

Can we do this kind of analysis on grant submissions?