
Why do I need to know about the EU AI Act?
When did the AI Act come into force and when do rules apply?
What do I need to do to comply with the EU AI Act in my organisation?
Is my organisation affected by the EU AI Act?
Who supervises AI under the EU AI Act?
What is Ireland's approach to the EU AI Act?
What is an AI system and why does it matter for my organisation?
How does the EU AI Act categorise AI systems that I use in my organisation?
Do prohibited AI practices affect my organisation?
Where can I get official guidance and support for my organisation?
Why do I need to know about the EU AI Act?
The EU Artificial Intelligence Act directly affects any organisation using or providing AI within the EU, and Ireland is fully part of this regulatory framework.
The EU AI Act is a set of rules that apply across the European Union to make sure artificial intelligence is used safely.
Its goal is to support the development of AI that is trustworthy and focused on people, while also protecting things like health, safety, basic rights, democracy, the rule of law, and the environment.
The EU AI Act groups AI systems into different risk levels. Most AI systems are considered low risk, so they don’t have to meet any special requirements under the Act.
Even if you’re just using AI tools in Ireland, understanding the Act protects your organisation, reduces legal risk, and positions you as a responsible player in the AI economy.
Why it matters
It applies to organisations operating in Ireland
Any Irish company that develops, sells, or uses AI—even internally—must comply with the Act. Non-compliance could lead to fines, legal risks, or reputational damage.
High-risk AI comes with strict obligations
If your work involves AI in hiring, credit scoring, health, safety, or critical services, you must meet documentation, transparency, and oversight requirements.
Prohibited practices are banned everywhere in the EU
Certain AI uses (like manipulative or exploitative systems) are illegal. Using them, even unknowingly, can have serious consequences.
Ireland is implementing the AI Act locally
Supervision and guidance in Ireland are handled through a distributed model of regulatory oversight, coordinated by the planned AI Office of Ireland. Being aware now helps you stay ahead of enforcement and guidance.
It’s about trust and business opportunity
Complying with the AI Act demonstrates responsible AI use, which can improve client trust, competitiveness and open doors to EU and global markets.
When did the AI Act come into force and when do rules apply?
The AI Act entered into force on 2 August 2024.
Its obligations are being phased in over 36 months: certain rules have already started, and others will take effect through to 2027 depending on the risk category of the AI system.
What do I need to do to comply with the EU AI Act in my organisation?
- Identify where AI is used in your organisation
Start by mapping any tools, software, or systems that use artificial intelligence. This includes AI you build internally as well as AI provided by third-party vendors.
- Determine the risk category of the AI system
The Act classifies AI systems into risk levels (for example: minimal, limited, high-risk, and prohibited). Your compliance obligations depend on the category your AI system falls into.
- Check whether any AI practices are prohibited
Ensure your organisation is not using AI systems that are banned under the Act, such as certain forms of manipulative or exploitative AI.
- Put governance and oversight in place
Create internal policies for responsible AI use. This may include assigning responsibility for AI oversight, maintaining documentation, and establishing procedures for monitoring AI systems.
- Ensure transparency and documentation
For some AI systems, organisations must clearly inform users when they are interacting with AI and keep technical documentation explaining how the system works and how risks are managed.
- Build AI literacy within the organisation
Staff involved in developing, deploying, or managing AI should have appropriate training so they understand both the technology and the legal obligations; the Act requires organisations to ensure a sufficient level of AI literacy among relevant staff.
- Work with trusted suppliers
If you are using AI systems from external providers, confirm that the vendor complies with the AI Act and can provide the necessary documentation.
- Monitor updates and guidance
Implementation of the Act will continue over the coming years. Organisations should follow guidance from national authorities and EU bodies as the rules become fully operational.
Please visit the AI Act Compliance Checker to support your organisation's self-assessment.
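The first two steps above, mapping where AI is used and recording each system's risk tier, amount to building an internal inventory. The sketch below is purely illustrative (the record fields, tier names, and follow-up actions are our own shorthand, not requirements taken from the Act) and shows one way such an inventory entry might be modelled:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The Act's risk categories, in our own shorthand labels."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    vendor: str                      # "internal" for systems built in-house
    purpose: str
    risk_tier: RiskTier
    has_technical_docs: bool = False
    human_oversight_owner: str = ""  # empty means nobody assigned yet


def open_actions(record: AISystemRecord) -> list[str]:
    """Flag obvious follow-ups; real obligations depend on the Act's detailed rules."""
    actions = []
    if record.risk_tier is RiskTier.PROHIBITED:
        actions.append("stop use immediately and seek legal advice")
    if record.risk_tier is RiskTier.HIGH and not record.has_technical_docs:
        actions.append("prepare technical documentation")
    if record.risk_tier is RiskTier.HIGH and not record.human_oversight_owner:
        actions.append("assign a human oversight owner")
    return actions


# Example: a hypothetical third-party hiring tool, which the Act treats as high-risk.
cv_screener = AISystemRecord(
    name="CV screening tool",
    vendor="ExampleVendor Ltd",
    purpose="shortlisting job applicants",
    risk_tier=RiskTier.HIGH,
)
print(open_actions(cv_screener))
```

A spreadsheet serves the same purpose for small organisations; the point is simply to record each system, its risk tier, and who is responsible for it.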
Is my organisation affected by the EU AI Act?
Most organisations will be impacted in some way by the EU AI Act if they develop, use, supply, or distribute AI systems within the European Union.
Your organisation is likely affected if you are one or more of the following actors involved in creating, distributing, or using AI technologies:
- Providers
Companies or organisations that develop AI systems or place them on the EU market under their name or brand
- Deployers
Organisations that use AI systems in their operations or services (for example businesses using AI for hiring, healthcare diagnostics, or customer service)
- Importers
Entities that bring AI systems developed outside the EU into the EU market
- Distributors
Businesses that make AI systems available in the EU market without modifying them (for example resellers or technology distributors)
The Act applies to AI use in both the public and private sectors.
AI systems used exclusively for military, defence, or national security purposes, or for scientific research, are excluded.
Extraterritorial effect: the Act can also apply to providers and deployers established outside the EU where the output of their AI system is used in the EU.
The level of impact depends on how the AI is used. The Act applies a risk-based approach.
Who supervises AI under the EU AI Act?
These organisations oversee compliance, enforcement, and coordination of the AI Act at both EU and national level.
- EU AI Office
A body at EU level responsible for coordinating implementation of the AI Act, particularly for advanced AI models and ensuring consistent application across EU member states.
- Department of Enterprise, Tourism and Employment
The Irish government department responsible for national policy related to AI regulation and implementation of the AI Act in Ireland.
- Market Surveillance Authorities
National authorities responsible for monitoring AI systems placed on the market and ensuring they comply with the requirements of the AI Act.
- Notified Bodies
Independent organisations designated by Member States to assess and certify whether high-risk AI systems comply with the AI Act before they are deployed.
- Notifying Authorities
National bodies responsible for designating and supervising notified bodies and ensuring they meet required standards.
- Fundamental Rights Bodies
Authorities or organisations responsible for ensuring AI systems respect fundamental rights such as privacy, non-discrimination, and human dignity.
- AI Office of Ireland
A national coordination structure responsible for implementing AI governance, regulatory coordination, and engagement with EU AI governance frameworks.
What is Ireland's approach to the EU AI Act?
Ireland is implementing the Act through central coordination combined with sectoral supervision: a distributed model of regulatory oversight in which multiple existing sector regulators (for example, the Central Bank, the Data Protection Commission, and the Health and Safety Authority) are designated as competent authorities.
Market Surveillance Authority (MSA)
This is the group that checks AI systems already on the Irish market to make sure they follow the rules.
- They inspect, monitor, and enforce compliance
- If something goes wrong, they can investigate and take action
Notifying Authority (NA)
This is the body that approves and oversees organisations (called “notified bodies”) that test or certify high‑risk AI systems.
- They make sure the testing organisations are qualified and do their job properly.
- They help maintain quality and trust in how AI is assessed.
Fundamental Rights Body (FRB)
This group checks that AI systems do not harm people’s rights.
- They look at risks to privacy, fairness, discrimination, democracy, and freedoms.
- They raise concerns if an AI system could negatively affect people or society.
Ireland plans to establish the AI Office of Ireland to act as the central coordinating authority and 'single point of contact' for the AI Act’s implementation and enforcement across sectors. It will also support innovation (for example, regulatory sandboxes). Email: aiinfo@enterprise.gov.ie
What is an AI system and why does it matter for my organisation?
The Act applies equally to AI use in the public service and the private sector. However, it provides exemptions for certain applications of AI: those relating to national defence and national security; scientific R&D; R&D on AI systems and models; certain open-source models; and purely personal use.
Machine-based system: must be computationally driven and based on machine operations
Adaptiveness after deployment: Adaptiveness is not mandatory for a system to qualify as an AI system (“may exhibit”)
Explicit or implicit objectives: An AI system must be designed to achieve specific objectives
Autonomy: All systems that are designed to operate with some reasonable degree of independence of actions fulfil the condition of autonomy
- The inference capacity of an AI system is key to bringing about its autonomy
Outputs influencing physical or virtual environments: this includes influencing human decision-making
Infers from input how to generate outputs: Systems that only execute human-defined rules without drawing conclusions from data are excluded
- Recital 12: the capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling
- This element of the definition delineates an AI system from simpler software
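The definitional elements above can be read as a checklist. The sketch below is purely illustrative: the element names and the "mandatory" set are our own shorthand for the points listed, not terms or tests taken from the Act itself.

```python
# Shorthand for the definitional elements discussed above (our own labels).
DEFINITION_ELEMENTS = {
    "machine_based": "computationally driven, based on machine operations",
    "autonomy": "operates with some degree of independence of actions",
    "adaptiveness": "may adapt after deployment ('may exhibit', so optional)",
    "objectives": "designed to achieve explicit or implicit objectives",
    "inference": "infers from input how to generate outputs",
    "output_influence": "outputs influence physical or virtual environments",
}

# Adaptiveness is not mandatory; the other elements are part of the definition.
MANDATORY = {"machine_based", "autonomy", "objectives", "inference", "output_influence"}


def likely_ai_system(observed: set[str]) -> bool:
    """True if every mandatory element is observed in the system under review."""
    return MANDATORY <= observed


# A rules engine that only executes fixed human-defined rules lacks inference:
print(likely_ai_system({"machine_based", "autonomy", "objectives", "output_influence"}))  # False
```

This mirrors the exclusion noted above: software that only executes human-defined rules, without drawing conclusions from data, falls outside the definition.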
High-risk AI systems require conformity assessments and technical documentation. Providers must also classify their systems against the Act's criteria and register high-risk systems in the EU database before placing them on the market.
How does the EU AI Act categorise AI systems that I use in my organisation?
It uses a risk-based approach:
- Prohibited AI practices – banned outright because they pose unacceptable risk.
- High-risk AI systems – subject to strict compliance requirements.
- Limited-risk systems – subject to certain transparency duties (for example, chatbots).
- Minimal-risk systems – no additional obligations beyond existing law.
Do prohibited AI practices affect my organisation?
- Practices such as social scoring, emotion recognition in workplaces and education, predictive criminal profiling, and untargeted face-image scraping are among those prohibited because of their risk to rights and freedoms.
- Providers of high-risk AI must comply with quality management, risk assessment, documentation, transparency, human oversight and robustness requirements before placing systems on the market.
More information: Article 5: Prohibited AI practices - AI Act Service Desk
Where can I get official guidance and support for my organisation?
The AI Act Service Desk and its Single Information Platform offer online tools such as the AI Act Explorer and Compliance Checker, plus more answers to detailed questions on obligations and compliance.
Useful links
Information on the EU AI Act and list of Competent Authorities
AI Act Single Information Platform - AI Act Service Desk
EU AI Act Compliance Checker - AI Act Service Desk
An Introduction to the Code of Practice for General-Purpose AI - EU Artificial Intelligence Act
EDIH - CeADAR
General Scheme of the Regulation of AI Bill