Experts Across Tech Sector Share Their Views On EU AI Act Changes Coming Into Force

Preparatory obligations related to high-risk AI systems take effect today as the EU AI Act moves closer to full application. These rules apply to organisations inside and outside the EU that place AI systems on the EU market or use them within the bloc.

The European Commission says the law responds to safety and rights risks tied to certain AI uses. In its guidance, it says, “The AI Act is the world’s first comprehensive law for AI. It aims to address risks to health, safety and fundamental rights.”

High-risk systems cover AI used in areas such as recruitment screening, credit scoring, access to healthcare, education assessment and law enforcement. These are uses where automated outputs can influence decisions about individuals in direct and lasting ways.

The Commission links the law to trust. It says uneven national rules and legal uncertainty slowed uptake of AI across the EU, creating the need for a single framework.

What Do Providers Need To Do Before Placing High-Risk AI On The Market?

Providers must complete a conformity assessment before a high-risk AI system is placed on the market or put into service. This assessment checks risk management, data governance, technical documentation, transparency, human oversight, accuracy and cybersecurity.

A quality management system must also be in place across the system’s lifecycle. According to the Commission, “Providers of high-risk AI systems remain responsible for the safety and compliance of the system throughout its lifecycle.”

Each high-risk system must be entered into a public EU database. Authorities can review this information as part of market surveillance.

If the system or its intended use changes in a meaningful way, the assessment must be carried out again. For AI systems used as safety components in regulated products, Article 6 links these duties directly to third-party product conformity checks.

What New Duties Fall On Deployers And Public Authorities?

Deployers must follow the instructions for use and monitor how systems operate in practice. Human oversight must be assigned to staff with the authority to intervene when risks appear.

Public authorities and organisations delivering public services must complete a fundamental rights impact assessment before first use. The assessment looks at effects on rights protected under EU law and applies alongside existing data protection duties.

People affected by AI-supported decisions must be informed. Where a decision has legal effects, individuals can request an explanation. The Act says deployers must give “a clear and meaningful explanation.”

Workplace use brings added notice duties. Employees and workers’ representatives must be informed before high-risk systems are deployed.

Why Are Classification And Guidance Still Needed At This Stage?

The Act classifies high-risk AI by intended purpose. Annex III lists sensitive uses in employment, education, migration, justice and biometric identification.

Providers may conclude that an Annex III system is not high risk if it performs a narrow or preparatory task and does not influence outcomes. That assessment must be documented and shared with authorities on request.

The Commission says it will issue guidance with practical examples to support classification. It says the aim is to give businesses clarity while keeping protection for health, safety and fundamental rights in place.

Penalties reinforce the rules, with fines of up to €35m or 7% of global annual turnover, whichever is higher, for banned practices, and lower thresholds for other breaches.
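
For a rough sense of how the tiered caps combine, here is a minimal Python sketch; the lower-tier figures (€15m or 3% for most other breaches, €7.5m or 1% for supplying incorrect information to authorities) are assumptions based on the Act’s commonly cited penalty provisions rather than a definitive reading of the law:

    # Illustrative sketch of the EU AI Act's tiered fine caps.
    # Tier figures are assumptions drawn from commonly cited penalty
    # provisions, not legal advice.
    BANNED_PRACTICE = (35_000_000, 0.07)   # prohibited AI practices
    OTHER_BREACH = (15_000_000, 0.03)      # most other obligations
    INCORRECT_INFO = (7_500_000, 0.01)     # misleading information to authorities

    def fine_cap(global_annual_turnover_eur: float, tier: tuple) -> float:
        """Return the maximum fine: the higher of a fixed amount and a
        percentage of worldwide annual turnover."""
        fixed_cap, turnover_share = tier
        return max(fixed_cap, turnover_share * global_annual_turnover_eur)

    # Example: a company with EUR 2bn global turnover faces a cap of
    # max(EUR 35m, 7% of EUR 2bn) = EUR 140m for a banned practice.
    print(fine_cap(2_000_000_000, BANNED_PRACTICE))  # 140000000.0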

Tech experts react to the updates:

  • Ian Jeffs, UK&I Country General Manager at Lenovo Infrastructure Solutions Group, on operationalising compliance as AI scales
  • Adam Spearing, VP of AI GTM, ServiceNow EMEA, on Europe leading responsible innovation and ‘governed acceleration’
  • Christian Kleinerman, EVP Product, Snowflake, on designing systems with regulatory requirements built-in and AI leadership

Ian Jeffs, UK&I Country General Manager, Lenovo Infrastructure Solutions Group (ISG)

“This milestone in the EU AI Act is an important step in giving businesses greater clarity as AI moves from experimentation to large-scale deployment across Europe. The Commission’s forthcoming guidance on high-risk classification and post-market monitoring will be critical in helping organizations operationalize compliance in a consistent and practical way.

“Our latest CIO research shows Europe is at an AI inflection point: while 57% of organizations in Europe and the Middle East are already approaching or in late-stage AI adoption, only 27% have a comprehensive AI governance framework in place. The AI Act helps close that gap by reinforcing trust, accountability and transparency as enablers of innovation rather than barriers.

“As AI scales across hybrid and edge environments, risk-based and implementation-focused regulation will be essential to sustaining Europe’s competitiveness. This moment should be seen not just as a compliance deadline, but as an opportunity to embed responsible AI practices into enterprise AI strategies from day one.”

Adam Spearing, VP of AI GTM, ServiceNow EMEA

“February 2026 marks a pivotal moment as the EU AI Act moves from intent to execution. The Commission’s work on post-market monitoring and clearer guidance on high-risk AI systems will give organisations the clarity they need to adopt AI responsibly and at scale.

“Crucially, this milestone reinforces Europe’s ambition to lead through responsible innovation. Clear rules don’t slow innovation – they prevent the technical debt that comes from bolting on governance after the fact. As with GDPR, Europe has the opportunity to shape global standards while driving real economic value.

“As I see it: reactive AI governance is a hindrance; proactive AI governance is an accelerator to business value. The next challenge is what I call ‘governed acceleration’ – operationalising these rules by embedding governance directly into everyday workflows. Organisations must balance speed with accountability. Those that succeed will turn compliance into a competitive advantage and help ensure AI becomes a long-term growth engine for Europe.”

Christian Kleinerman, EVP Product, Snowflake

“The EU AI Act’s new guidelines are a defining moment for AI in Europe, in advocating for the safe and responsible use of AI. The AI Act’s risk-based approach is especially important because it focuses regulation on how AI is deployed and used, rather than treating all AI systems equally. By elevating AI literacy as a core requirement, the EU is putting people at the center of AI’s discourse. As AI evolves to become more autonomous and drive business decision-making, trust and safety are essential foundations to the skills and platforms we rely on.

“With the regulatory landscape taking shape, businesses must place a greater focus on transparency, traceability, and auditability. This will ensure they have the right frameworks in place to meet their respective obligations. For many organizations, the biggest challenge isn’t the models themselves, but understanding and governing how AI systems interact with sensitive data and business processes.

“This shift creates a real opportunity for European organizations. Balancing innovation with compliance doesn’t mean slowing down, it means designing systems where regulatory requirements are built in by default, not managed manually or bolted on later. When governance, security, and monitoring are native to a platform, teams can innovate faster because they’re not constantly reinventing guardrails. Companies that move fast and earn trust in AI will be the ones that unlock a competitive advantage.

“Responsible innovation demands partnership — across industries, with regulators, and with society at large. Leadership in AI starts with transparency, strong governance, and human oversight, ensuring that systems are explainable and not treating AI as an opaque black box. AI itself isn’t inherently good or bad, it reflects how responsibly we choose to wield it. Responsibility doesn’t stop at model design, it extends to how we prepare people and institutions for an AI-driven economy.”