Anthropic’s Jack Clark Warns of Imminent Automated AI Research
Why It Matters
The shift to recursive self-improvement marks a transition where AI progress is limited by compute rather than human talent, potentially leading to an intelligence explosion. This creates profound challenges for safety, as development could outpace the ability to implement guardrails.
Key Points
- Jack Clark estimates a 60% probability that AI research will be automated by the end of 2028.
- Current models are already demonstrating the ability to reproduce research papers and to speed up training code by as much as 52 times.
- The shift toward automated R&D suggests that AI does not need creative genius to effectively iterate on its own architecture.
- Concerns are mounting regarding the recursive self-improvement loop, which could make future AI development unpredictable.
Anthropic co-founder Jack Clark has projected that artificial intelligence is approaching a critical threshold where it can automate its own research and development. Writing in his "Import AI" newsletter, Clark estimated a 60% probability that AI research will be largely automated by the end of 2028. He cited recent evidence including models successfully reproducing academic papers, optimizing kernels, and improving training code efficiency by up to 52 times. Clark argues that AI does not require human-level genius to contribute to its own evolution; rather, the ability to perform iterative technical tasks is sufficient for self-improvement. The primary risk associated with this milestone is the loss of predictability, as models could begin accelerating their own capabilities at speeds that outpace human oversight and existing regulatory frameworks.
Imagine if a car could not only drive itself but also go into the garage to build a faster engine for its successor. That is what Jack Clark, a co-founder of Anthropic, says is coming for AI. He thinks there is a very good chance that by 2028, AI will be doing the heavy lifting of designing the next generation of AI. Right now, models are already helping write code and fix bugs, but soon they might be inventing entirely new ways to learn. If AI starts building AI, development could move so fast that it becomes impossible for humans to keep it safe or predictable.
Sides
Critics
Clark argues that AI is nearing the point of automating its own research, which could lead to development cycles that are impossible to predict or control.
Defenders
No defenders identified
Neutral
Anthropic, the safety-focused organization Clark co-founded, is currently observing the rapid transition from coding assistants to research agents.
Forecast
Expect a surge in specialized 'AI for AI' tools and autonomous agents designed specifically for machine learning engineering. As these tools mature, the timeline for AGI may compress, likely forcing regulators to shift their focus toward monitoring compute resources rather than just software outputs.
Based on current signals. Events may develop differently.
Timeline
60% Probability Milestone
End of 2028: the date by which Clark believes it is more likely than not that AI research will be automated.
30% Probability Milestone
Clark's predicted date for a significant chance of AI research becoming automated.
Clark Publishes Automation Projections
In his Import AI newsletter, Clark details the 52x speedup in training code and sets probabilities for automated research.