
Global Rift Over AI Prohibitions Emerges One Year After EU Ban

AI-Analyzed: Analysis generated by Gemini, reviewed editorially.

Why It Matters

The divergence between the EU's strict prohibitions and the lighter-touch approaches in markets like South Africa creates a fragmented global compliance landscape for AI developers.

Key Points

  • The EU has officially completed one year of enforcing bans on AI systems deemed an 'unacceptable risk' to fundamental rights.
  • EU market surveillance authorities can now levy fines of up to €35 million or 7% of global revenue for violations of these bans.
  • South Africa is opting for a 'middle-of-the-road' approach using existing laws rather than a single comprehensive AI Act like Europe's.
  • Significant transparency requirements for chatbots and deepfakes are set to become mandatory in the EU by August 2026.
  • A major global regulatory gap exists as South Africa lacks mandatory audits or disclosure duties for AI in high-stakes sectors like hiring and lending.

The European Union has marked one year since the implementation of the AI Act’s most stringent prohibitions, which ban systems deemed to pose an 'unacceptable risk.' These regulations, legally binding since February 2025, prohibit real-time biometric surveillance, social scoring, and manipulative AI that targets vulnerabilities. The European AI Office has shifted its focus to active market surveillance and enforcement, with non-compliant firms facing fines of up to €35 million or 7% of global turnover.

Concurrently, other jurisdictions like South Africa continue to rely on sector-specific guidelines and existing laws rather than comprehensive AI legislation. South Africa's Department of Communications and Digital Technologies is currently drafting a National AI Policy, but enforceable regulations are not expected until at least 2027, highlighting a significant regulatory gap in the global governance of sensitive AI applications such as predictive policing and emotion recognition.

It has been over a year since the EU officially banned 'dangerous' AI, like facial recognition in public and systems that score your social behavior. While the EU can now impose massive fines to keep tech companies in line, other parts of the world, like South Africa, are taking a much slower 'wait and see' approach. In those regions, there are still no hard laws against things like using AI to guess your emotions at work or to predict crimes. It's like having one neighborhood with strict traffic laws while the next town over has no speed limits at all.

Sides

Critics

Pierre Murray

Expresses skepticism over predictive policing bans and criticizes South Africa's decision to split oversight among multiple existing regulators.

Defenders

European AI Office

Enforcing strict bans on AI that threatens fundamental rights through active market checks and heavy penalties.

Neutral

Department of Communications and Digital Technologies (DCDT)

Developing a National AI Policy for South Africa that favors high-level principles over immediate, sweeping bans.


Noise Level

Quiet (2). Noise Score (0–100) measures how loud a controversy is: a composite of reach, engagement, star power, cross-platform spread, polarity, duration, and industry impact, with 7-day decay. Decay: 5%

  • Reach: 41
  • Engagement: 7
  • Star Power: 15
  • Duration: 100
  • Cross-Platform: 20
  • Polarity: 65
  • Industry Impact: 85

Forecast

AI Analysis — Possible Scenarios

The EU will likely issue its first major fine under the 'unacceptable risk' category in late 2026 to set a precedent. Meanwhile, South Africa is expected to face domestic pressure to accelerate its National AI Policy as public concern over unregulated biometric surveillance grows.

Based on current signals. Events may develop differently.

Timeline

  1. South Africa Regulation Target

    Earliest expected window for the finalization of South Africa's enforceable AI regulations, not anticipated before 2027.

  2. Transparency Rules Kick In

    The EU deadline (August 2026) for mandatory disclosure when users interact with chatbots or deepfakes.

  3. High-Risk AI Deadline

    Strict requirements for AI in critical infrastructure and hiring become mandatory in the EU.

  4. One-Year Enforcement Milestone

    One year after the bans took effect, reports indicate the EU is transitioning from policy setup to active market surveillance and inspections.

  5. EU AI Act Bans Take Effect

    The first set of prohibitions on 'unacceptable risk' AI systems becomes legally binding across the EU's 27 member states in February 2025.