How the assessment works
A structured, deterministic interpretation of the EU AI Act risk classification framework — translated into an operational decision model.
No generative AI is used to determine your risk classification.
From regulation to decision logic
The EU AI Act introduces a risk-based classification system for AI systems, but its application can be complex in practice.
This tool translates the regulatory framework into a structured set of decision rules, enabling fast, consistent preliminary assessments.
Methodology in 3 steps
Regulatory mapping
We map the EU AI Act provisions into a structured set of classification criteria.
- Articles and definitions are analyzed and grouped into decision-relevant elements
- Key conditions (use case, context, impact) are identified
- Ambiguities are handled with conservative interpretation logic
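One way to picture a decision-relevant criterion produced by this mapping step is as a small structured record. This is an illustrative sketch only; the field names and schema are assumptions, not the tool's internal representation.

```python
# Illustrative mapping of one provision to decision-relevant criteria.
# Hypothetical schema; not the tool's actual internal format.

criterion = {
    "article": "Article 5",                      # EU AI Act provision
    "topic": "prohibited practices",
    "conditions": ["use_case", "context", "impact"],
    "on_ambiguity": "classify_conservatively",   # conservative default
}

print(criterion["topic"])  # → prohibited practices
```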
Decision model
The criteria are translated into a deterministic decision framework.
- Questions are designed to capture legally relevant factors
- Each answer activates specific logical branches
- The system follows a predefined classification path
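The branching described above can be sketched as a small deterministic rule tree. The criteria and answer keys below are hypothetical examples, not the tool's actual question set; the point is only that identical answers always follow the identical path.

```python
# Minimal sketch of a deterministic classification path.
# Hypothetical criteria; the real question set and branches differ.

def classify(answers: dict) -> str:
    """Walk a fixed decision path; same inputs, same output."""
    if answers.get("social_scoring") or answers.get("subliminal_manipulation"):
        return "prohibited"
    if answers.get("annex_iii_use_case"):     # e.g. employment, credit scoring
        return "high-risk"
    if answers.get("interacts_with_people"):  # transparency obligations apply
        return "limited risk"
    return "minimal risk"

print(classify({"annex_iii_use_case": True}))  # → high-risk
```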
Risk indication
The output provides a preliminary risk classification with contextual explanation.
- Likely risk category (e.g. prohibited, high-risk, limited risk)
- Key drivers behind the classification
- Directional indication for next steps
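An output of this shape could be represented roughly as follows. The field names and sample values are illustrative assumptions, not the tool's actual result format.

```python
# Illustrative result structure (hypothetical fields and values).
from dataclasses import dataclass

@dataclass
class RiskIndication:
    category: str         # likely risk category, e.g. "high-risk"
    drivers: list         # factors that triggered the classification
    next_steps: list      # directional guidance, not legal advice

result = RiskIndication(
    category="high-risk",
    drivers=["Annex III use case: employment screening"],
    next_steps=["Review conformity obligations with legal counsel"],
)

print(result.category)  # → high-risk
```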
What this assessment is designed for
- ✓ Early-stage compliance screening
- ✓ Internal decision support
- ✓ Understanding potential regulatory exposure before legal review
What this tool is — and is not
What it is
- ✓ A structured pre-assessment based on the EU AI Act
- ✓ A consistent and repeatable classification approach
- ✓ A support tool for compliance and product teams
What it is not
- × Legal advice
- × A formal compliance certification
- × A substitute for professional legal analysis
Design principles
Clarity over complexity
The regulatory logic is simplified without losing essential meaning.
Determinism over opacity
The same inputs always produce the same output.
Conservative interpretation
When ambiguity exists, the model prioritizes safer classification paths.
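This tie-breaking principle can be sketched as picking the strictest of the plausible categories. The severity ordering below is an assumption for illustration, not the tool's actual logic.

```python
# Conservative tie-breaking sketch (hypothetical severity ordering):
# when answers are ambiguous, the stricter plausible category wins.

SEVERITY = ["minimal risk", "limited risk", "high-risk", "prohibited"]

def conservative(candidates: list) -> str:
    """Return the most severe of the plausible categories."""
    return max(candidates, key=SEVERITY.index)

print(conservative(["limited risk", "high-risk"]))  # → high-risk
```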
Limitations
The assessment is based solely on the information provided by the user and does not account for:
- Full system architecture or technical implementation details
- Jurisdiction-specific interpretations or future regulatory updates
- Contextual factors not explicitly captured in the questionnaire
Part of a broader compliance toolkit
This tool is the first component of a broader set of AI Act compliance tools designed to support organizations across different stages:
- Initial classification
- Risk management and documentation
- Ongoing compliance monitoring
Start your assessment
Get a structured preliminary classification in minutes and understand where your AI system may fall under the EU AI Act.
Start Free Assessment
Free assessment • 2–3 minutes • No signup required