The compliance emails are arriving. If you're building AI products for European customers, you know exactly what I'm talking about.
The EU AI Act isn't coming. It's here. And it's the first binding regulation with real teeth to reach production AI systems.
Whether you're building in Brussels or Boston, this matters. The EU sets the template others follow. What starts in Europe rarely stays in Europe.
Here's what you need to know.
This Week in AI
EU AI Act First Enforcement Actions
What's new: The EU AI Office issued formal warnings to three companies for non-compliance with high-risk system requirements.
The issues cited:
- Insufficient human oversight mechanisms
- Inadequate risk documentation
- Missing technical documentation for regulatory review
The possible fines: Up to 7% of global annual revenue for the most serious violations. These warnings are the step before formal enforcement.
The signal: This isn't paper regulation. Enforcement has teeth.
US Federal Trade Commission Expands AI Oversight
What's new: The FTC announced it will use its existing consumer protection authority to police AI harms - no new legislation required.
The focus:
- Deceptive AI marketing claims
- AI-enabled fraud and scams
- Discrimination in AI-powered services
- Unfair data practices in model training
The mechanism: Section 5 of the FTC Act prohibits "unfair or deceptive acts or practices." AI harms can fit this framework.
The implication: Federal AI regulation through enforcement, not legislation.
China Updates Generative AI Service Rules
What's new: China's Cyberspace Administration (CAC) expanded its rules requiring pre-deployment security assessments for generative AI services.
The requirements:
- Training data must be "lawful" (politically as well as legally)
- Outputs must not undermine "social stability"
- Services must be registered and approved before launch
- User identity verification required
The effect: More friction for launching AI services in China. But also: clear rules, unlike the US patchwork.
Deep Dive: What the EU AI Act Actually Requires
The Risk-Based Framework
Prohibited AI (banned entirely):
- Social scoring systems
- Real-time biometric surveillance (with narrow exceptions)
- Manipulation and exploitation of vulnerable groups
- Emotion recognition in the workplace and education
High-Risk AI (heavy regulation):
- Employment and recruiting systems
- Educational access decisions
- Credit and financial services
- Law enforcement applications
- Migration and asylum decisions
Limited Risk AI (transparency requirements):
- Chatbots (must disclose AI nature)
- Deepfakes (must be labeled)
- Emotion recognition (must inform users)
Minimal Risk AI (no specific requirements):
- Most consumer AI applications
- Creative tools
- General productivity software
What High-Risk Means in Practice
Before deployment:
- Conformity assessment (self or third-party, depending on application)
- Technical documentation covering training, data, and architecture
- Quality management system
- Risk management system
- Logging and transparency measures
During operation:
- Human oversight mechanisms
- Accuracy, robustness, and cybersecurity requirements
- Record-keeping for monitoring
- Incident reporting to authorities
The cost: Significant compliance overhead. Estimates range from €50K to €500K+ depending on system complexity.
The Foundation Model Provisions
General-purpose AI models (GPAI):
- Technical documentation requirements
- Training data transparency
- Copyright compliance mechanisms
Systemic risk models (high-capability GPAI):
- Model evaluation and adversarial testing
- Serious incident tracking and reporting
- Cybersecurity measures
- Energy consumption disclosure
Who qualifies as systemic risk? Models trained with more than 10^25 floating-point operations (FLOPs) of cumulative compute. Currently: GPT-4 class and above.
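For a rough sense of where that threshold sits, the widely used approximation of about 6 × parameters × training tokens for dense transformer training compute gives a quick back-of-the-envelope check. A minimal sketch in Python, with illustrative model sizes that are assumptions, not disclosed figures:

```python
# Rough back-of-the-envelope check against the 10^25 FLOP threshold.
# Uses the common ~6 * parameters * training-tokens approximation for
# dense transformer training compute. Model sizes below are illustrative
# assumptions, not figures from the Act or from any provider.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

examples = {
    "7B params, 2T tokens": estimate_training_flops(7e9, 2e12),      # ~8.4e22
    "70B params, 15T tokens": estimate_training_flops(70e9, 15e12),  # ~6.3e24
    "400B params, 15T tokens": estimate_training_flops(400e9, 15e12),# ~3.6e25
}

for name, flops in examples.items():
    flag = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} threshold)")
```

None of this replaces real compute accounting, but it makes the point: the threshold currently captures only the largest frontier-scale training runs.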
The Trend: Regulatory Fragmentation
What's happening: Different regions are taking different approaches.
- EU: Comprehensive, risk-based, legally binding
- US: Sector-specific, enforcement-driven, no comprehensive federal law
- UK: Principles-based, sector regulators interpret for their domains
- China: Control-focused, emphasizing content and stability
- Others: Many countries developing their own frameworks
The challenge for builders: Compliance is becoming a multi-jurisdiction puzzle. What's fine in the US may violate EU rules. What's permitted in Europe may be banned in China.
What Builders Should Do
Immediate Actions
If you serve EU customers:
- Determine your risk classification
- Document your AI systems thoroughly
- Implement required transparency measures
- Establish human oversight mechanisms
If you serve US customers:
- Watch FTC enforcement patterns
- Don't make claims you can't substantiate
- Document your testing and safety measures
- Prepare for state-level requirements
Everywhere:
- Build compliance infrastructure now
- Document training data provenance
- Create audit trails for AI decisions (see the sketch below)
- Design for transparency from the start
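On audit trails specifically, here is a minimal sketch of a per-decision log record. The helper name, field names, and JSON-lines storage are my own illustrative assumptions, not a prescribed EU AI Act schema; the point is the kind of record-keeping a regulator would expect to be able to review.

```python
# Minimal sketch of a per-decision audit record for an AI system.
# Field names and the JSON-lines storage choice are illustrative
# assumptions, not a mandated schema.
import json
import hashlib
from datetime import datetime, timezone

def record_decision(log_path: str, *, model_id: str, model_version: str,
                    input_payload: dict, output_payload: dict,
                    human_reviewer: str | None = None) -> dict:
    """Append one auditable decision record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash inputs/outputs so the trail is tamper-evident without
        # necessarily copying raw personal data into the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
        "output_hash": hashlib.sha256(
            json.dumps(output_payload, sort_keys=True).encode()).hexdigest(),
        "human_reviewer": human_reviewer,  # evidence of human oversight, if any
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example usage with made-up values
record_decision(
    "decisions.jsonl",
    model_id="credit-scoring",
    model_version="2025-01-14",
    input_payload={"applicant_id": "A-123", "features": {"income": 52000}},
    output_payload={"score": 0.72, "decision": "approve"},
    human_reviewer="analyst-7",
)
```

Whether you also need to retain the raw payloads, and for how long, depends on your risk classification and your data protection obligations.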
Strategic Considerations
Build flexibility in:
- Regional feature variations may be necessary (see the config sketch below)
- Opt-out mechanisms for regulated uses
- Granular logging for compliance demonstration
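One cheap way to build in that flexibility is to gate regulated behavior behind a per-region policy table rather than hard-coding it. A minimal sketch, with made-up region codes and feature names:

```python
# Minimal sketch of per-region feature gating. Region codes, feature
# names, and the policy table are illustrative assumptions.
REGION_POLICY = {
    "EU":      {"emotion_recognition": False, "ai_disclosure_banner": True, "decision_logging": True},
    "US":      {"emotion_recognition": True,  "ai_disclosure_banner": True, "decision_logging": True},
    "default": {"emotion_recognition": False, "ai_disclosure_banner": True, "decision_logging": True},
}

def feature_enabled(region: str, feature: str) -> bool:
    """Look up a feature flag for a region, falling back to the default policy."""
    policy = REGION_POLICY.get(region, REGION_POLICY["default"])
    # Fail closed: unknown features are treated as disabled.
    return policy.get(feature, False)

if __name__ == "__main__":
    print(feature_enabled("EU", "emotion_recognition"))  # False
    print(feature_enabled("BR", "decision_logging"))     # True (falls back to default)
```

The fail-closed default matters: when a region or feature is unknown, the safer behavior is to keep the regulated capability off until someone explicitly enables it.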
Consider timing:
- Early compliance is easier than retrofit
- Regulatory requirements tend to expand, not contract
- Building compliant by default saves rework
Tool of the Week: AI Compliance Tracker
Open-source dashboard for tracking AI regulatory requirements across jurisdictions.
What it does:
- Maps regulations to system features
- Tracks compliance status
- Generates documentation templates
- Alerts for regulatory updates
Who it's for: Teams building AI products for international markets.
The limitation: Regulation is complex. This is a starting point, not legal advice.
What We're Reading
"The Brussels Effect in AI" - Analysis of how EU regulation shapes global AI development, even for non-European companies.
"Compliance as Competitive Advantage" - Counter-intuitive argument that early compliance creates market position.
"The Regulatory Fragmentation Problem" - Survey of global AI regulation and its implications for development.
One More Thing
Regulation is coming whether we like it or not. The question is what kind.
The best case: regulation that prevents genuine harms while preserving innovation. Difficult to achieve but possible.
The worst case: patchwork rules that add friction without preventing harm, or overly broad restrictions that push development to less regulated jurisdictions.
Builders have a stake in this. Engaging with regulatory processes - providing technical input, identifying unintended consequences, proposing workable alternatives - isn't just good citizenship. It's self-interest.
The rules being written now will shape AI development for decades. Showing up matters.
See you next week.
Zero to One: Tech Frontiers: Understanding AI technology and building the future.
Enjoyed this issue?
Subscribe to Zero to One: Tech Frontiers for AI insights delivered to your inbox.
Subscribe on LinkedIn