Your investors forwarded you a compliance newsletter. Your legal team flagged something about "high-risk AI." A consultant quoted you six figures for an "AI Act readiness program." Before you sign anything, read the actual regulation. Most mid-market SaaS companies are not in the crosshairs the way the compliance industry wants you to believe.
That is not the same as saying the EU AI Act does not apply to you. It probably does — just not in the way you have been told. The Act classifies AI systems into risk tiers, and the obligations differ enormously between tiers. Most SaaS companies offering AI-powered features — customer support chatbots, recommendation engines, analytics dashboards, content generation — land in the "limited risk" tier, which carries transparency obligations but not the conformity assessments, quality management systems, and post-market monitoring regimes that dominate the fear-driven marketing.[1]
The classification question
Everything turns on whether your AI system qualifies as "high-risk" under the Act. There are two paths to that classification.[3]
Path one (Article 6(1)) applies when your AI system is a safety component of a product already covered by EU harmonization legislation — medical devices, machinery, toys, aviation systems. If your SaaS is a safety-critical component embedded in one of these regulated products, you are high-risk by default. Obligations under this path apply from August 2, 2027.[3]
Path two (Article 6(2)) applies when your AI system falls into one of the eight domains listed in Annex III. This is where the confusion lives, because people read the domain headers without reading the specific use cases underneath them.
What is actually in Annex III
Eight domains, each scoped to specific use cases rather than to entire industries.[4]
- Biometrics. Remote biometric identification systems, biometric categorization based on sensitive attributes, and emotion recognition systems. (Note that emotion recognition in workplaces and educational institutions is not merely high-risk; it is prohibited outright under Article 5.)
- Critical infrastructure. AI used as safety components in road traffic management and in the supply of water, gas, heating, or electricity. Also covers critical digital infrastructure safety components.
- Education and vocational training. AI that determines access to or admission into educational institutions, evaluates learning outcomes when those outcomes determine educational paths, or monitors students during exams to detect prohibited behavior.
- Employment, workers management, and access to self-employment. AI used for recruitment and selection — filtering applications, evaluating candidates, targeting job advertisements — and for decisions about promotion, termination, task allocation, and performance monitoring.
- Essential private and public services. AI that evaluates creditworthiness, sets risk premiums for life and health insurance, evaluates eligibility for public benefits, or is used to dispatch emergency services.
- Law enforcement. Polygraph-equivalent tools, evidence reliability assessment, profiling in criminal investigations, crime analytics.
- Migration, asylum, and border control. Risk assessment tools, document authenticity verification, asylum application processing.
- Administration of justice and democratic processes. AI used to research and interpret facts or law, or to apply the law to concrete facts, plus AI intended to influence the outcome of an election or referendum.
If your product does not do any of those specific things, you are not high-risk under Annex III. Building an AI-powered project management tool? Not high-risk. Offering AI-generated marketing copy? Not high-risk. Running a recommendation engine for e-commerce? Not high-risk. Summarizing customer support tickets? Not high-risk.
The exception most people miss
Even if your system technically falls within an Annex III domain, Article 6(3) provides a carve-out. An Annex III system is not considered high-risk when it does not pose a significant risk of harm to health, safety, or fundamental rights — specifically when it does any of the following:[3]
- Performs a narrow procedural task — converting unstructured data into structured data, classifying documents into categories, or detecting duplicates.
- Improves the result of a previously completed human activity — the human already did the work; the AI refines it.
- Detects decision-making patterns or deviations from prior patterns, without replacing or influencing the human assessment that follows.
- Performs a preparatory task for an assessment — the AI output feeds into a human decision process but does not drive the outcome itself.
This exception matters, with one caveat that trips people up: an Annex III system that performs profiling of natural persons is always high-risk, regardless of the conditions above. With that in mind, a recruitment SaaS that uses AI to sort resumes into categories for a human recruiter to review might qualify. A credit-scoring system that makes autonomous lending decisions would not.
Providers who believe their system qualifies for the exception must document that assessment before placing the system on the market, and must register it in the EU database.[3]
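For engineering teams that want to turn this into something checkable, here is a minimal sketch of the Article 6(2)/6(3) decision logic in TypeScript. It is illustrative only, not legal advice: the type names, the boolean flags, and the flattening of the legal tests into fields are assumptions made for the sketch.

```typescript
// A minimal sketch of the Article 6(2)/6(3) classification logic.
// Type names and flags are illustrative simplifications, not the Act's wording.

type Article63Condition =
  | "narrow-procedural-task"         // e.g. structuring data, classifying documents
  | "improves-completed-human-work"  // the human already did the work; AI refines it
  | "detects-patterns-only"          // no replacing or influencing the human assessment
  | "preparatory-task";              // feeds a human decision, does not drive it

interface AiFeatureAssessment {
  featureName: string;
  matchesAnnexIIIUseCase: boolean;   // a listed use case, not just a domain header
  performsProfilingOfPersons: boolean;
  applicableExceptions: Article63Condition[];
  rationale: string;                 // keep this: Article 6(4) requires a documented assessment
}

function classify(a: AiFeatureAssessment): "high-risk" | "not-high-risk" {
  // Outside every Annex III use case: not high-risk under this path.
  if (!a.matchesAnnexIIIUseCase) return "not-high-risk";

  // Profiling of natural persons always defeats the Article 6(3) exception.
  if (a.performsProfilingOfPersons) return "high-risk";

  // At least one Article 6(3) condition met: the exception may apply,
  // provided the assessment is documented and the system registered.
  if (a.applicableExceptions.length > 0) return "not-high-risk";

  return "high-risk";
}
```

The point of the sketch is the ordering: the use-case match comes first, the profiling check overrides everything, and the exception only rescues you if you can document why it applies.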
What this means for a typical Series B SaaS team
Most mid-market SaaS companies with AI features will land in one of two buckets.
Bucket one: transparency obligations only (Article 50). If your product uses a chatbot, generates synthetic content, or produces outputs that could be mistaken for human-created, you owe your users disclosure. Specifically:[5]
- AI systems that interact directly with people must clearly indicate that the interaction is with an AI — unless that is already obvious from the context.
- AI systems generating synthetic audio, image, video, or text must mark the output in a machine-readable format as artificially generated.
- Deployers of deepfake-generating systems must disclose that the content was AI-generated.
These obligations are real but scoped. For most products they are an engineering task — not a six-figure compliance program. The technical standards for synthetic content marking under Article 50(2) are still being developed, so factor in adjustment as those standards mature.
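To give a sense of scale, here is what the Article 50(1) disclosure can look like at the API layer: a minimal sketch in which every assistant reply carries both a machine-readable flag and a human-readable notice. The field names are assumptions, not a prescribed schema; the Act requires clear disclosure, not a particular format.

```typescript
// A minimal disclosure sketch. The Act mandates clear disclosure to the user,
// not this particular response shape; field names here are assumptions.

interface ChatResponse {
  message: string;
  aiGenerated: true;   // machine-readable flag for downstream clients
  disclosure: string;  // human-readable notice to surface in the UI
}

function wrapAssistantReply(message: string): ChatResponse {
  return {
    message,
    aiGenerated: true,
    disclosure: "You are chatting with an AI assistant.",
  };
}
```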
Bucket two: high-risk obligations (Articles 8-27). If your product genuinely makes or materially influences decisions in employment, credit, education, or one of the other Annex III domains — and the Article 6(3) exception does not apply — you face the full regime: risk management systems, data governance, technical documentation, record-keeping, transparency to deployers and users, human oversight measures, accuracy and robustness requirements, quality management systems, conformity assessments, and post-market monitoring. These are substantial, and the timeline is real: August 2, 2026 for Annex III systems.[1]
What to do this quarter
Run a classification exercise. Map every AI feature in your product against Annex III categories. For each one, determine whether it falls within a listed use case, and if so, whether the Article 6(3) exception applies. Document your reasoning — the assessment itself is required under Article 6(4) if you claim the exception.
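Reusing the classify sketch from earlier, a documented inventory might look like the following. The features and the judgments attached to them are hypothetical; the point is that every feature gets an explicit Annex III answer and a written rationale.

```typescript
// A hypothetical inventory run through the classify() sketch above.
const inventory: AiFeatureAssessment[] = [
  {
    featureName: "support-ticket-summarizer",
    matchesAnnexIIIUseCase: false,    // no listed use case covers summarization
    performsProfilingOfPersons: false,
    applicableExceptions: [],
    rationale: "Summarizing support tickets falls outside every Annex III use case.",
  },
  {
    featureName: "resume-screener",
    matchesAnnexIIIUseCase: true,     // employment: recruitment and selection
    performsProfilingOfPersons: true, // ranks candidates on personal attributes
    applicableExceptions: [],
    rationale: "Filters applications; profiling defeats the Article 6(3) exception.",
  },
];

for (const feature of inventory) {
  console.log(`${feature.featureName}: ${classify(feature)}`);
}
```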
Implement transparency disclosures. If you have customer-facing AI — chatbots, AI assistants, content generation — add clear disclosure that users are interacting with AI. This is the obligation most likely to apply to you, and it is the easiest to implement.
Check your synthetic content pipeline. If you generate text, images, audio, or video, implement machine-readable labeling. The technical standards are still being developed, but the obligation is already in law.
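Pending harmonized standards, one plausible interim approach is to attach a provenance label to every generated asset at the point of creation, so the flag travels with the content through your pipeline. The shape below is an assumption for illustration; for images and video, C2PA-style content credentials are one emerging option worth tracking.

```typescript
// An interim labeling sketch. Harmonized standards under Article 50(2) are still
// maturing; this shape is an assumption, not a standard.

interface SyntheticContentLabel {
  artificiallyGenerated: true;
  generator: string;    // identifies the system that produced the output
  generatedAt: string;  // ISO 8601 timestamp
}

function labelOutput<T>(payload: T, generator: string) {
  const label: SyntheticContentLabel = {
    artificiallyGenerated: true,
    generator,
    generatedAt: new Date().toISOString(),
  };
  return { payload, label };
}

// Attach the label wherever the asset travels: API responses, file metadata,
// or HTTP headers, so downstream systems can detect it programmatically.
const marked = labelOutput({ text: "Draft marketing copy..." }, "acme-copywriter-v2");
```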
If you are genuinely high-risk, start now. The conformity assessment and quality management system requirements under Articles 8-27 are not a quarter's work. They require sustained effort. If your classification exercise puts you in this bucket, budget accordingly.
The regulation does not require you to stop using AI. It requires you to know what kind of AI you are using, tell your users about it, and — if your system makes high-stakes decisions about people — prove that you built it responsibly.
Where the panic is misplaced
Three things the compliance industry is selling that most mid-market SaaS companies do not need.
Full AI management system implementations for limited-risk systems. An ISO/IEC 42001-aligned AI management system is excellent practice and may become table stakes for enterprise sales. But the EU AI Act does not require it for systems that only carry transparency obligations. If a consultant tells you that you need a full AIMS to comply with the AI Act and your systems are not high-risk, they are selling you insurance against a risk you do not carry.
Panic about the February 2025 prohibitions. The prohibited practices under Article 5 — social scoring, real-time remote biometric identification in public spaces, subliminal manipulation — are things no legitimate SaaS company was doing in the first place. If you need to check whether you are doing social scoring, you probably have larger problems.[2]
Treating general-purpose AI model obligations as your problem. If you are building on top of a foundation model (GPT, Claude, Gemini, Llama), the GPAI obligations under Chapter V fall on the model provider, not on you as the deployer. You have deployment-side obligations — transparency, and potentially high-risk compliance if your use case qualifies — but you are not responsible for the provider's systemic risk assessment or model evaluation.[6] One caution: if you fine-tune or otherwise substantially modify the model, you can become a provider in your own right, so keep customizations in view during your classification exercise.
The way to tell whether you are actually exposed: look at what your AI system does to people, not what technology it uses. A rules-based system that autonomously rejects loan applications is higher-risk than a large language model that drafts marketing emails. The Act regulates use cases, not architectures.
The EU AI Act is real regulation with real enforcement. It is not, for most SaaS companies, the existential compliance event it has been marketed as. Know your classification. Implement your transparency obligations. And if your product does make high-stakes decisions about people's employment, credit, education, or access to services — take the time to build it right.
Full disclosure: Assay Cyber implements ISO/IEC 42001 management systems for companies that need them. Some readers of this post will. Most will not. The classification exercise is the first conversation — not a readiness program. If that exercise surfaces that you are genuinely high-risk, the implementation work that follows is substantial and worth doing properly. If it surfaces that you carry only transparency obligations, build those and move on.
If you want a 30-minute walkthrough of how the classification exercise applies to your specific product, I'm happy to have that conversation.
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the EU AI Act). EUR-Lex, OJ L, 2024/1689.
2. Article 5: Prohibited AI Practices. artificialintelligenceact.eu/article/5
3. Article 6: Classification Rules for High-Risk AI Systems. artificialintelligenceact.eu/article/6
4. Annex III: High-Risk AI Systems Referred to in Article 6(2). artificialintelligenceact.eu/annex/3
5. Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems. artificialintelligenceact.eu/article/50
6. Articles 51-56: General-Purpose AI Models (Chapter V). artificialintelligenceact.eu/chapter/5
7. EU AI Act Implementation Timeline. artificialintelligenceact.eu/implementation-timeline