Shortwave: When institutions stop thinking for themselves
The erosion of institutional norms in the age of AI
This week’s source is a paper (“How AI Destroys Institutions”) that reads less like a forecast and more like a systems warning.
The authors are not debating whether generative AI is impressive. They take that as a given. Their concern is structural: what happens to institutions when judgment, interpretation, and discretion are replaced by statistically confident outputs?
It is not a technology question. It is an institutional one.
Signal boost
The paper’s core argument is straightforward. Institutions do not earn legitimacy by producing answers. They earn it by how decisions are made.
Law, education, journalism, and science rely on processes such as judgment, peer review, professional norms, and deliberation. These are not inefficiencies. They are the mechanisms that allow institutions to reason about ambiguity, values, and edge cases.
Generative AI disrupts this by collapsing process into mass-produced, banal output.
It flattens expertise by treating all inputs as equivalent data. It encourages cognitive offloading, reducing opportunities for skill formation. It produces confident, plausible results without making the underlying value judgments visible or contestable.
The danger is not that AI makes mistakes. Institutions already know how to handle mistakes. The danger is that AI bypasses the processes that once made mistakes detectable and correctable.
Over time, authority becomes performative and trust erodes.
Crossfade
This matters because AI adoption rarely looks reckless. It appears, on the surface, to be helpful.
Write faster. Decide sooner. Summarize more. Each use case seems reasonable on its own. But institutions do not fail all at once. They fail through accumulation.
When reasoning is outsourced, education becomes credentialing. When legal argument is automated, law becomes pattern matching. When investigation is replaced, journalism becomes content generation.
The paper warns of a feedback loop that accelerates the decline. AI-generated outputs increasingly feed back into training data and workflows, degrading quality while maintaining the appearance of productivity.
The system looks busy. It looks authoritative. And it gradually loses the ability to evaluate itself.
What disappears is not just intelligence but individual and institutional agency.
Final note
Judgment is what gives institutions power and legitimacy.
When it is outsourced, an institution becomes a shell.