Andrew Critch Calls for End to Non-Expert AI Safety Activism
Why It Matters
The shift from 'awareness' to 'action' in AI safety is creating friction between technical experts and grassroots activists, potentially influencing how future safety policy is crafted.
Key Points
- Critch argues that the marginal value of non-expert AI safety activism is now low because industry experts and think tanks are already vocal about risks.
- He claims that current activism predictably incites violence by fostering a subculture that vilifies AI developers as 'murderers.'
- The critique suggests that people 'awakened' to AI risks by non-expert activists are likely to make poor, rushed judgments on complex policy matters.
- Critch distinguishes between the strategic necessity of activism in the past (pre-2023) and its current counter-productive nature in 2026.
AI researcher Andrew Critch has issued a public critique of non-expert AI safety activism, arguing that the movement has reached a point of negative marginal utility. In a detailed statement, Critch contends that while public awareness campaigns were necessary prior to 2023, the current landscape of 2026 is already saturated with expert warnings. He argues that non-expert 'thought leaders' are now primarily contributing to a culture of extremism, citing instances where online subcultures label AI developers as 'murderers.' Critch aligns with commentator Dean Ball, suggesting that merely disavowing violence is insufficient when activism predictably incites it. He posits that individuals who still require non-expert jolts to recognize AI risks are poorly positioned to make nuanced policy decisions, and that such activism ultimately degrades collective intelligence and public safety.
Imagine someone screaming at you at 3 AM to do your taxes: stressed and half-awake, you'd probably make a mess of the paperwork. That's how Andrew Critch sees non-expert AI safety activists today. Years ago, he says, shouting about AI risks was the only way to get attention, but now that the experts are already sounding the alarm, the shouting just causes trouble. Instead of helpful warnings, we're seeing angry online mobs calling developers 'murderers.' Critch argues this forced urgency produces bad laws and more violence rather than safer AI.
Sides
Critics
Argues that non-expert activism in 2026 is counter-productive, incites violence, and degrades the quality of public policy discourse.
Contends that activists must do more than just disavow violence when their rhetoric predictably leads to inflammatory or violent outcomes.
Defenders
Maintain that high-pressure activism and public awareness are necessary to force legislative action against AI risks.
Forecast
Tensions between technical AI safety researchers and grassroots 'doomer' movements will likely escalate, leading to a fracturing of the safety community. We can expect more formal 'disavowals' from technical organizations to distance themselves from radicalized online subcultures.
Timeline
Activism Phase One
Critch acknowledges that activism was once necessary to get AI developers to speak publicly about safety concerns.
Expert Mainstreaming
AI experts and reputable think tanks begin warning about risks, reaching a point where the 'world is awake' to the issue.
Critch Issues Critique
Andrew Critch posts a lengthy argument stating that the marginal costs of non-expert activism now outweigh the benefits.