Visionify & The EU AI Act

Our Compliance Position and Governance Framework

Executive Summary

Visionify is committed to building and deploying AI systems that are safe, transparent, privacy-first, and aligned with European regulatory requirements, including the EU Artificial Intelligence Act (EU AI Act).

We have conducted an internal assessment of our platform and deployment models and classified them under the EU AI Act's risk-based framework. Based on our current product architecture and use cases, Visionify systems qualify as Limited Risk AI systems, not High-Risk AI systems.

This page outlines our position, assessment, and compliance controls.

1. Understanding the EU AI Act

The EU AI Act establishes a risk-based regulatory framework for artificial intelligence systems deployed in the European Union.

AI systems are categorized into:

  • Unacceptable Risk – prohibited systems
  • High Risk – subject to strict regulatory obligations
  • Limited Risk – subject primarily to transparency obligations
  • Minimal Risk – largely unregulated

Visionify has formally assessed its product suite against Annex III of the EU AI Act.

2. Visionify Risk Classification Assessment

2.1 What Visionify Does

Visionify provides computer vision–based workplace safety analytics using existing CCTV infrastructure. Core use cases include:

  • PPE Compliance Detection
  • Forklift & Pedestrian Near-Miss Detection
  • Smoke & Fire Detection
  • Slip & Fall Detection
  • Exclusion Zone Monitoring
  • Mobile Phone Policy Compliance

The system:

  • Does not perform facial recognition
  • Does not identify individuals
  • Does not use biometric identification
  • Does not score, profile, or rank individuals
  • Does not make automated employment decisions

2.2 Why Visionify Is Not High-Risk AI

Under Annex III of the EU AI Act, high-risk systems include AI used for:

  • Biometric identification
  • Employment decision-making
  • Worker performance evaluation
  • Law enforcement profiling
  • Critical infrastructure control

Visionify does not:

  • Identify specific individuals
  • Evaluate worker performance
  • Make hiring, firing, promotion, or disciplinary decisions
  • Conduct social scoring
  • Perform remote biometric identification

Visionify systems detect safety conditions and environmental compliance events, not individual identity.

Conclusion: Visionify qualifies as a Limited Risk AI system under the EU AI Act.

3. Transparency Obligations (Limited Risk AI)

Under the EU AI Act, Limited Risk AI systems must meet transparency requirements.

Visionify complies as follows:

3.1 AI Usage Disclosure

Customers are informed contractually that:

  • AI is used to detect safety events.
  • Computer vision algorithms process CCTV footage.
  • Detection is automated.

3.2 Worker Notification

We recommend and support customers in:

  • Posting workplace signage indicating AI-based safety monitoring.
  • Informing works councils and unions.
  • Including system descriptions in internal policy documentation.

3.3 No Deceptive AI Interaction

Visionify does not simulate human interaction, impersonate individuals, or generate synthetic personas.

4. Data Protection & Privacy Safeguards

Visionify's system is architected to minimize privacy risk:

4.1 No Biometric Identification

  • No facial recognition
  • No identity matching
  • No database cross-referencing

4.2 Edge-Based Processing

  • Video processing occurs on local edge servers.
  • Raw footage remains under customer control.
  • Visionify does not maintain centralized raw video archives.

4.3 Data Minimization

  • Only short safety event clips are stored.
  • Timestamps and camera IDs are recorded.
  • No GPS tracking.
  • No employee database integration.

4.4 Anonymization

  • Face and body blurring options available.
  • Privacy masking configurable per deployment.
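Per-deployment privacy masking of this kind reduces to a simple frame-level operation. The sketch below is illustrative only, not Visionify's implementation; the function name and the `(x, y, width, height)` region format are assumptions for the example:

```python
import numpy as np

def apply_privacy_masks(frame: np.ndarray, regions: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Black out configured (x, y, width, height) regions of a video frame.

    A deployment-specific mask list keeps excluded areas (break rooms,
    desks, neighboring property) out of any stored event clip.
    """
    masked = frame.copy()
    for x, y, w, h in regions:
        masked[y:y + h, x:x + w] = 0  # zero all pixels inside the masked region
    return masked

# Example: mask a 20x10 region of a synthetic 100x100 grayscale frame.
frame = np.full((100, 100), 255, dtype=np.uint8)
result = apply_privacy_masks(frame, [(10, 30, 20, 10)])
```

Because masking is applied before an event clip is written, the excluded pixels never leave the edge server.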

4.5 Customer Data Ownership

Customers retain full control over:

  • Data retention policies
  • Data deletion
  • Access control

5. Governance & Risk Management Framework

Even though Visionify is classified as Limited Risk, we voluntarily implement structured governance controls consistent with High-Risk best practices.

5.1 AI Risk Review Process

Each new feature undergoes:

  • Legal impact review
  • Privacy assessment
  • Bias and fairness evaluation
  • Technical validation testing

5.2 Human Oversight

  • Visionify does not operate as a fully autonomous system.
  • All alerts are reviewed by designated safety personnel.
  • No automatic penalties or disciplinary actions are triggered.
  • Final decisions remain human-controlled.

5.3 Model Validation & Performance Monitoring

  • Accuracy benchmarking before deployment
  • Continuous performance monitoring
  • False positive/negative rate tracking
  • Customer feedback loops
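The false positive/negative tracking above reduces to simple ratios over labeled review outcomes. As a minimal sketch (the field names are illustrative, not Visionify's actual schema):

```python
from dataclasses import dataclass

@dataclass
class DetectionCounts:
    true_pos: int   # real events correctly alerted
    false_pos: int  # alerts with no real event
    true_neg: int   # non-events correctly ignored
    false_neg: int  # real events missed

    @property
    def false_positive_rate(self) -> float:
        # Share of non-events that wrongly triggered an alert.
        return self.false_pos / (self.false_pos + self.true_neg)

    @property
    def false_negative_rate(self) -> float:
        # Share of real events the system failed to flag.
        return self.false_neg / (self.false_neg + self.true_pos)

# Example: one review period for a single camera.
counts = DetectionCounts(true_pos=90, false_pos=5, true_neg=895, false_neg=10)
```

Tracking these rates per camera and per detection type over time is what allows drift to surface in continuous performance monitoring.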

5.4 Security Controls

  • SOC-2 Type II compliance
  • Encrypted data transmission (TLS)
  • Role-based access control
  • Audit logging
  • Secure cloud and on-prem deployment options

6. Prohibited Practices Statement

Visionify does not engage in:

  • Social scoring of workers
  • Real-time remote biometric identification
  • Emotion recognition for employment decisions
  • Manipulative or exploitative AI practices
  • Worker profiling for behavioral prediction

7. Customer Responsibilities Under the EU AI Act

Under the EU AI Act, deployers (our customers) carry obligations of their own.

Customers must:

  • Ensure proper workplace notification
  • Use the system for safety purposes only
  • Avoid repurposing the system for performance surveillance
  • Maintain documented internal governance policies

Visionify provides documentation to support customer compliance.

8. Ongoing Regulatory Monitoring

Visionify actively monitors:

  • EU AI Act delegated acts and updates
  • European Commission guidance
  • National enforcement developments
  • Intersection with GDPR and works council requirements

We will update this page as regulations evolve.

9. Downloadable Resources

Download our comprehensive EU AI Act Compliance Packet, which includes:

  • EU AI Act Compliance Position Statement - Our formal compliance position and intended use limitations
  • Formal AI System Classification Report - Detailed Annex III assessment and risk classification matrix
  • Works Council & Worker Protection Summary - Worker privacy commitments, human oversight controls, and GDPR alignment

Download Visionify EU AI Act Compliance Packet v2.0 (PDF)

For additional documentation requests or compliance questionnaires, please contact us at compliance@visionify.ai.

10. Contact

For EU AI Act documentation requests, risk classification letters, or compliance questionnaires:

Email: compliance@visionify.ai
