AIPIA Europe

Regulatory brief

The EU AI Act and the role of professional associations.

Regulation (EU) 2024/1689 — signed on 13 June 2024, published in the Official Journal on 12 July 2024, and in force since 1 August 2024 — is the world's first comprehensive horizontal legal framework for artificial intelligence. This brief sets out what the Act covers, when its provisions apply, and what it expects of professional AI associations.

Timeline

Application is staged from August 2024 to August 2027, in five milestones.

  1. 1 August 2024

    Regulation enters into force across the Union.

  2. 2 February 2025

Prohibited practices under Article 5 begin to apply. The AI literacy duty in Article 4 binds every provider and deployer of AI systems, regardless of size or sector.

  3. 2 August 2025

    Governance rules and obligations for providers of general-purpose AI models apply.

  4. 2 August 2026

Bulk application: high-risk AI systems under Annex III (including employment-related AI), transparency obligations and the rest of the Act apply in full.

  5. 2 August 2027

    High-risk systems embedded in already-regulated products (Annex I) reach full applicability. Legacy general-purpose AI models must reach compliance.

Risk categories

Four levels of risk, four corresponding regulatory tracks.

Unacceptable

Prohibited practices (Article 5)

Eight categories are prohibited outright: subliminal manipulation that causes harm, exploitation of vulnerabilities linked to age, disability or social and economic situation, social scoring (by public and private actors alike), predictive policing based solely on profiling, untargeted scraping of facial images, emotion recognition in workplaces and education, biometric categorisation on sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with three narrowly drawn exceptions).

High

High-risk systems (Annex I and Annex III)

The bulk of the Act sits here. Annex I covers AI as a safety component of products already covered by EU harmonisation law — medical devices, machinery, vehicles. Annex III adds eight standalone domains: biometrics, critical infrastructure, education, employment, essential public and private services, law enforcement, migration, and the administration of justice and democratic processes.

Providers of high-risk AI must run a risk management system, ensure data quality and governance, prepare technical documentation, log operations automatically, design for human oversight, and meet thresholds on accuracy, robustness and cybersecurity. Deployers carry complementary duties, including using the system as instructed and assigning competent human oversight.

Limited

Transparency obligations

Lighter requirements: users must be told when they are interacting with an AI system, and AI-generated or manipulated content must be disclosed as such. Chatbots, deepfakes and synthetic-content generators fall here.

Minimal

No mandatory requirements beyond Article 4

The majority of AI applications on the EU single market — spam filters, recommendation systems for non-sensitive content, AI-assisted productivity tools — fall outside the regulated tiers. Article 4 AI literacy still applies.

Implications

What this means for AI professionals.

The Act formalises three things that until now lived in industry expectations: documented practice, professional accountability, and continuing competence. Practitioners who develop, deploy or audit AI systems will increasingly need to evidence what they know, how they decide, and how they record their decisions.

Article 4 — the AI literacy duty — now obliges every organisation that uses AI to ensure its workforce can understand, deploy and supervise those systems competently. That is a curriculum problem, not a compliance checkbox. Verifiable credentials, professional codes of conduct and accredited training pathways become load-bearing infrastructure.

Article 95 directs the AI Office and Member States to encourage and facilitate voluntary codes of conduct, which may be drawn up by providers, deployers, or the organisations representing them — a role tailor-made for professional associations.

AIPIA's contribution

A professional association sized for the Act.

AIPIA was constituted under Italian Law 4/2013 specifically to occupy this role: registering professional practitioners, issuing the European AI Credential through Europass and eIDAS, publishing a code of ethics, and representing members at the European AI Alliance.

The European-facing presence at aipia.eu makes the credential, the directory and the community accessible to practitioners across the Union, regardless of which regulator they answer to at home.

  • Member of the European AI Alliance (European Commission)
  • Official endorser of the Rome Call for AI Ethics
  • Accredited issuer of European Digital Credentials
  • Aligned with OECD AI Principles since 2019

Frequently asked

Five questions practitioners ask first.

When does the EU AI Act fully apply?
The Regulation entered into force on 1 August 2024. Prohibited practices and AI literacy duties applied from 2 February 2025. Governance rules and obligations for general-purpose AI models applied from 2 August 2025. The bulk of obligations on high-risk AI systems applies from 2 August 2026, with an extended transition for systems embedded in regulated products until 2 August 2027.
Which AI systems are prohibited?
Article 5 lists eight categories: subliminal manipulation causing harm, exploitation of vulnerabilities of specific groups, social scoring by public and private actors, predictive policing based solely on profiling, untargeted facial-image scraping, emotion recognition in workplaces and educational settings, biometric categorisation on sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes (with three narrow exceptions).
What counts as a high-risk AI system?
Two routes apply. Annex I covers AI as a safety component of products already regulated by EU harmonisation legislation — medical devices, machinery, vehicles. Annex III lists eight standalone domains: biometrics, critical infrastructure, education, employment, essential public and private services, law enforcement, migration, and the administration of justice and democratic processes.
Does the AI Act apply to small organisations?
Yes, with some proportionality measures. The AI literacy duty in Article 4 applies to every organisation that uses AI systems professionally, regardless of size. Obligations on high-risk AI providers apply to small and medium enterprises that fall within scope, although the Act provides for support measures and lighter procedural requirements where allowed.
Why join a professional AI association now?
Professional associations such as AIPIA are formally recognised as channels through which the Commission and national authorities consult industry, as venues for the development of voluntary codes of conduct under Article 95, and as accredited issuers of professional credentials. Membership puts a practitioner inside the regulatory conversation rather than on the receiving end of it.