Welcome to Your AI Governance Journey
Navigate EU AI Act compliance and Microsoft Cloud Adoption Framework for building trustworthy AI systems
EU AI Act Overview
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes rules for AI systems based on their risk level and applies to anyone who develops, deploys, or distributes AI systems in the European Union, regardless of where they're based.
Global Reach
Applies to any AI used in or affecting the EU, regardless of provider location
Risk-Based
Four risk tiers with proportionate requirements
Significant Penalties
Up to €35M or 7% of global annual turnover
What Counts as an "AI System"?
A machine-based system that operates with varying levels of autonomy, may adapt after deployment, and infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Why Developers & IT Leaders Must Care
If You Build AI Systems:
- You may be a "Provider" with documentation and conformity assessment obligations
- High-risk systems require CE marking before EU market placement
- GPAI models have specific transparency requirements
- Fine-tuning models may make you a Provider
If You Deploy AI Systems:
- You're a "Deployer" with human oversight duties
- Certain deployers (e.g., public bodies and providers of essential services) must conduct Fundamental Rights Impact Assessments
- Required to ensure AI literacy among staff
- Substantial modifications may make you a Provider
Risk-Based Classification
- Unacceptable risk: banned outright
- High risk: strict requirements and conformity assessment
- Limited risk: transparency obligations
- Minimal risk: no specific requirements
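The four tiers can be read as a triage order: check the prohibited list first, then the high-risk categories, then transparency cases; everything else is minimal risk. A minimal sketch of that idea follows — the example use cases and the lookup table are illustrative assumptions, not a legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least regulated."""
    UNACCEPTABLE = "prohibited"   # banned outright (Article 5)
    HIGH = "high-risk"            # conformity assessment required (Annex III)
    LIMITED = "limited-risk"      # transparency obligations only
    MINIMAL = "minimal-risk"      # no specific requirements

# Illustrative examples only; real classification needs legal analysis.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case; default to MINIMAL when unlisted."""
    return EXAMPLE_USES.get(use_case, RiskTier.MINIMAL)
```

The ordering of the enum mirrors the triage: a system matching an earlier tier never falls through to a later one.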
Prohibited AI Practices (Article 5)
Banned as of February 2, 2025:
- Manipulation: subliminal or deceptive techniques causing harm
- Exploiting vulnerabilities: targeting age, disability, or social/economic situation
- Social scoring: behavioral scoring leading to detrimental treatment
- Predictive policing: assessing crime risk based on profiling alone
- Facial recognition databases: untargeted scraping of facial images
- Emotion recognition: at the workplace or in schools
- Biometric categorization: inferring race, political views, or religion
- Real-time remote biometric identification: in public spaces for law enforcement (narrow exceptions)
High-Risk AI Systems (Annex III)
AI used in sensitive areas such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and the administration of justice.
Limited Risk
Transparency obligations: inform users when they are interacting with AI (e.g., chatbots) and label AI-generated or manipulated content such as deepfakes.
Minimal Risk
No specific requirements. Most AI falls here: spam filters, games, inventory management.
High-Risk AI Requirements (Articles 9-15)
Risk Management
- Continuous risk identification
- Risk estimation & evaluation
- Mitigation measures
Data Governance
- Representative training data
- Bias examination
- Data gap analysis
Technical Documentation
- System description
- Design specifications
- Development process
Record-Keeping
- Automatic logging
- Decision traceability
- Post-market monitoring
Transparency
- Clear instructions for use
- Intended purpose
- Accuracy & limitations
Human Oversight
- Enable intervention
- Prevent automation bias
- Override capability
Accuracy & Security
- Appropriate accuracy
- Adversarial robustness
- Cybersecurity
Quality Management
- Written policies
- Design controls
- Testing procedures
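The record-keeping and human-oversight requirements above translate directly into engineering practice: log every automated decision with enough context to reconstruct it later, and keep an override path open for reviewers. A minimal sketch — the field names, model ID, and confidence threshold are illustrative assumptions, not values prescribed by the Act:

```python
import uuid
from datetime import datetime, timezone

def log_decision(model_id: str, inputs: dict, output: str,
                 confidence: float, log: list) -> dict:
    """Record an AI decision with the context needed to trace it later."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
        "human_override": None,   # filled in if a reviewer intervenes
    }
    log.append(record)
    return record

def apply_override(record: dict, reviewer: str, new_output: str) -> None:
    """Human oversight: a reviewer can replace the automated output."""
    record["human_override"] = {"reviewer": reviewer, "output": new_output}

# Usage: low-confidence decisions are routed to a human reviewer.
audit_log: list = []
rec = log_decision("credit-scorer-v2", {"income": 40000}, "deny", 0.62, audit_log)
if rec["confidence"] < 0.7:
    apply_override(rec, "analyst@example.com", "refer to manual review")
```

In production the log would go to an append-only store, but the shape of each record — inputs, output, timestamp, and any human intervention — is what makes post-market monitoring and decision traceability possible.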
Conformity Assessment & CE Marking
Self-Assessment (Most)
1. Implement QMS
2. Prepare documentation
3. Conduct assessment
4. EU Declaration of Conformity
5. Affix CE marking
6. Register in EU database
Third-Party (Some)
Required for biometric ID systems and AI in regulated products. Notified Bodies assess QMS, documentation, and processes.
General-Purpose AI (GPAI) Models
What is a GPAI Model?
An AI model with significant generality capable of performing a wide range of distinct tasks. Examples: GPT-4, Claude, Gemini, Llama, Mistral.
Standard GPAI
- Technical documentation
- Information to downstream providers
- Training content summary
- Copyright compliance
GPAI with Systemic Risk
- Model evaluations + adversarial testing
- Systemic risk assessment
- Incident reporting
- Cybersecurity protections
Fine-Tuning Warning
Substantially modifying a GPAI model through retraining or fine-tuning makes you a Provider with all provider obligations.
Implementation Timeline
- August 1, 2024: the Act enters into force
- February 2, 2025: prohibitions and AI literacy obligations apply
- August 2, 2025: GPAI model obligations and governance rules apply
- August 2, 2026: most remaining provisions, including Annex III high-risk requirements
- August 2, 2027: high-risk requirements for AI in regulated products (Annex I)
Penalties
- Up to €35M or 7% of global annual turnover: prohibited practices
- Up to €15M or 3%: most other violations
- Up to €7.5M or 1%: supplying incorrect or misleading information
Exemptions & Special Cases
Military & Defense
AI used exclusively for military/defense/national security is exempt.
R&D
AI for scientific research before market placement is exempt. Once deployed, exemption ends.
Personal Use
Purely personal, non-professional use is exempt.
Open Source
Generally exempt, unless prohibited, high-risk, transparency-required, or GPAI with systemic risk.
Case Study: Defense AI
| Scenario | Status |
|---|---|
| AI exclusively for armed forces | EXEMPT |
| AI sold to defense AND commercial | REGULATED |
| Commercial AI adapted for military | PARTIAL |
Related defense AI frameworks:
- NATO (6 principles): Lawfulness, Accountability, Explainability, Reliability, Governability, Bias Mitigation
- European Defence Fund: funded projects must ensure "meaningful human control"
- US Department of Defense (5 principles): Responsible, Equitable, Traceable, Reliable, Governable
Microsoft Cloud Adoption Framework
AI Agent Adoption Guidance
Microsoft's Cloud Adoption Framework provides structured guidance to help organizations successfully adopt AI agents. It addresses the unique considerations that AI agents introduce, from planning through governance, building, and operations.
The 4-Phase AI Agent Adoption Process
1. Plan
- Business strategy
- Technology selection
- Org readiness
- Data architecture
2. Govern
- Responsible AI policies
- Security controls
- Compliance
- Data governance
3. Build
- Environment setup
- Development process
- Testing & validation
- Security controls
4. Operate
- Integration
- Monitoring
- Lifecycle management
- Continuous improvement
Agent Types by Autonomy Level
Knowledge Agents
Retrieve and synthesize information. Answer questions using RAG patterns.
Low autonomy • Low risk
Action Agents
Perform specific tasks: update records, trigger processes, create tickets.
Medium autonomy • Medium risk
Automation Agents
Manage complex multi-step processes with minimal oversight.
High autonomy • High risk
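The knowledge-agent pattern above (retrieve, then synthesize) can be sketched without committing to any framework. In this sketch `search_index` is a toy word-overlap retriever and `llm_complete` is a stand-in for a model endpoint; both are assumptions, not real APIs:

```python
def search_index(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def llm_complete(prompt: str) -> str:
    """Stand-in for a model call; a real agent would hit an LLM endpoint."""
    return "[answer grounded in provided context] " + prompt[:60]

def knowledge_agent(question: str, documents: list[str]) -> str:
    """RAG pattern: retrieve relevant passages, then answer from them only."""
    context = "\n".join(search_index(question, documents))
    prompt = (f"Answer using only this context:\n{context}\n"
              f"Question: {question}")
    return llm_complete(prompt)

docs = ["Our refund policy allows returns within 30 days.",
        "Shipping takes 3-5 business days."]
answer = knowledge_agent("What is the refund policy window?", docs)
```

Constraining the model to retrieved context is what keeps this agent at the low-autonomy, low-risk end of the spectrum: it reads and summarizes, but never acts.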
Technology Decision Tree
Choose the right platform based on your use case:
| Platform | Best For | Control | Skill Level |
|---|---|---|---|
| M365 Copilot | Productivity within Microsoft 365 | Low | End user |
| Copilot Studio | Custom agents, workflow automation | Medium | Low-code maker |
| Microsoft Foundry | Custom AI apps, multi-agent systems | High | Pro developer |
| Custom IaaS | Full-stack control, specialized needs | Full | ML engineer |
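The decision table reads naturally as a selection rule: match the control level you need against the skills you have. A rough sketch — the rows mirror the table above, and the exact-match logic is an illustrative simplification of what is really a judgment call:

```python
# Rows mirror the decision table: (control, skill) -> platform.
PLATFORMS = [
    ("low", "end user", "M365 Copilot"),
    ("medium", "low-code maker", "Copilot Studio"),
    ("high", "pro developer", "Microsoft Foundry"),
    ("full", "ml engineer", "Custom IaaS"),
]

def choose_platform(control: str, skill: str) -> str:
    """Return the first platform row matching control level and skill set."""
    for ctrl, sk, platform in PLATFORMS:
        if ctrl == control.lower() and sk == skill.lower():
            return platform
    raise ValueError("No matching platform; revisit requirements")
```

In practice a mismatch (e.g., high control needed but only low-code skills available) is itself useful output: it flags an organizational readiness gap before any build starts.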
Organizational Readiness
AI Center of Excellence (CoE)
Centralized advisory body. Drives strategy, prevents fragmented adoption, develops standards.
Platform Team
Manages foundation and guardrails. Enforces responsible AI policies and governance.
Workload Teams
Own end-to-end agent lifecycle. Define requirements, curate data, integrate with business.
Responsible AI & Governance
Microsoft's Responsible AI Principles
Key Protocols
MCP (Model Context Protocol)
Structured, secure access to tools, APIs, and data. Enforces boundaries and prevents unauthorized actions.
A2A (Agent-to-Agent Protocol)
Consistent communication between agents. Supports task delegation and context sharing in multi-agent systems.
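The value of a protocol like MCP is that tool access becomes declarative: an agent can only invoke tools that are explicitly exposed to it, with arguments checked against a contract. The sketch below illustrates that boundary idea only — it is a generic allowlist, not the actual MCP wire format or SDK, and the tool names are hypothetical:

```python
# Illustrative allowlist of tools with argument checks, in the spirit of
# MCP-style boundaries (not the real protocol).
ALLOWED_TOOLS = {
    "create_ticket": {"required_args": {"title", "priority"},
                      "allowed_priorities": {"low", "medium", "high"}},
    "lookup_order": {"required_args": {"order_id"}},
}

def invoke_tool(name: str, args: dict) -> dict:
    """Reject any call outside the declared tool boundary."""
    spec = ALLOWED_TOOLS.get(name)
    if spec is None:
        raise PermissionError(f"Tool {name!r} is not exposed to this agent")
    missing = spec["required_args"] - args.keys()
    if missing:
        raise ValueError(f"Missing arguments: {sorted(missing)}")
    if name == "create_ticket" and args["priority"] not in spec["allowed_priorities"]:
        raise ValueError("Priority outside allowed values")
    return {"tool": name, "status": "accepted"}
```

Because the boundary is data rather than code scattered through the agent, the platform team can audit and tighten it centrally — which is exactly the governance property these protocols are meant to provide.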
Additional Resources
GitHub Resources
Combined Compliance Checklist
EU AI Act Compliance
Microsoft CAF AI Agent Adoption
About This Guide
Purpose
This guide provides developers, IT leaders, and organizations with practical information about AI governance, trustworthy AI practices, and compliance requirements. It combines insights on the EU AI Act with the Microsoft Cloud Adoption Framework for AI Agents, helping you build secure and responsible AI applications in the cloud, particularly on Microsoft Azure.
About the Author
Jonah Andersson
Professional Role
Senior Cloud Engineer & Architect Consultant
Author
Learning Microsoft Azure (O'Reilly Media)
Achievements
Microsoft MVP, Microsoft Certified Trainer
Public Speaker
International speaker on Azure and cloud technologies
Jonah is a recognized expert in Microsoft Azure and cloud development, specializing in .NET technologies, cloud-native development, serverless architecture, and cloud security. Based in Sweden, Jonah is the founder and leader of the Azure User Group Sweden and an advocate for gender equality and diversity in tech. As a frequent public speaker, mentor, and podcast host (Extend Women In Tech Podcast), Jonah is passionate about solving complex problems, developing modern applications, and inspiring others to thrive in technology careers.
Why This Guide Was Created
As AI technologies rapidly evolve and become increasingly integrated into business and society, the need for responsible AI governance has never been more critical. This guide was developed to:
- Create awareness about AI governance requirements and best practices
- Guide organizations in building trustworthy and secure AI applications
- Demystify compliance with the EU AI Act and related regulations
- Promote responsible AI development integrated with cloud platforms like Microsoft Azure
- Empower developers and IT leaders to make informed decisions about AI adoption
- Bridge the gap between regulatory requirements and practical implementation
By combining regulatory guidance with practical cloud adoption frameworks, this guide aims to help you navigate the complex landscape of AI governance while building innovative, secure, and compliant AI solutions.
Important Note
This guide is provided for educational and informational purposes only. It does not constitute legal advice. For specific compliance requirements and legal guidance, please consult with qualified legal counsel familiar with AI regulations in your jurisdiction.