Part 2: Experts Comment As The EU Artificial Intelligence Act Enters Force

With the EU’s AI Act now coming into force, it is worth looking at the different categories of risk the Act uses. The EU AI Act sorts artificial intelligence systems into categories based on their risk levels, which determine the degree of regulatory attention each receives.


Prohibited AI Systems


These are systems considered too dangerous to be allowed at all. They include technologies that manipulate decision-making or profile people in ways that lead to harmful consequences, such as the unauthorised use of biometric data in public places.


High-Risk AI Systems


These require careful oversight because of their profound effects on individual rights and public safety. The group includes AI used in sensitive areas such as healthcare, law enforcement, and employment, where AI decisions have a major impact. The Act requires that these systems are transparent and that their operations are well-documented to prevent abuses.


Limited Risk AI Systems


These systems are subject to less stringent regulation, focused primarily on transparency. Technologies in this category, such as chatbots and AI-generated media, must clearly inform users that they are interacting with AI to prevent deception.


Minimal Risk AI Systems


These systems are subject to minimal regulatory oversight. Generally safe applications, such as AI in video games or spam filters, have little direct effect on users’ rights or safety.
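
To make the tiers concrete, the sketch below shows how a business might encode the Act’s four risk levels in a simple internal inventory. It is a minimal Python illustration; the use cases and the tiers assigned to them are examples drawn from the descriptions above, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels used by the EU AI Act."""
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # strict oversight and documentation duties
    LIMITED = "limited"        # transparency obligations (disclose AI use)
    MINIMAL = "minimal"        # little to no regulatory oversight

# Illustrative mapping only -- real classification requires legal review
# of each system against the Act's definitions and annexes.
EXAMPLE_USE_CASES = {
    "untargeted biometric identification in public": RiskTier.PROHIBITED,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "diagnostic-support system in healthcare": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{tier.value:>10}: {use_case}")
```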


How Can Businesses Check Compliance?


Businesses need to make sure that their AI systems comply with the Act, and this involves taking a few measures…

Identifying AI Applications is the first step. Companies must review their technologies to see which ones fit the AI definitions under the Act. This includes systems that autonomously analyse data to make decisions or create outputs.

Using the Compliance Checker is an effective way for businesses to evaluate their AI systems against legal standards. This tool asks pertinent questions about the AI’s functionality and usage, helping to highlight any areas of non-compliance.

Reviewing Compliance Measures is another key step, especially for AI systems identified as high-risk. Businesses must check that these systems have effective risk management and data governance frameworks in place. Maintaining detailed records that demonstrate compliance is also essential.

Finally, Keeping Up to Date and Adhering to Local Regulations is essential. Staying informed of changes and updates, consulting regularly with legal experts, and participating in relevant forums are all ways to make sure you remain compliant with AI regulations.
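
As a rough illustration of the record-keeping these steps imply, the sketch below models one entry in a hypothetical AI-system inventory. The field and helper names are invented for the example; they simply mirror the checks described above: identifying systems, assigning a risk tier, and documenting risk management and data governance for high-risk systems.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    purpose: str
    risk_tier: str                # "prohibited" | "high" | "limited" | "minimal"
    deployed_in_eu: bool
    risk_management_doc: Optional[str] = None  # link to risk-management records
    data_governance_doc: Optional[str] = None  # link to data-governance records

    def gaps(self) -> list:
        """Flag missing documentation for high-risk systems in scope of the Act."""
        issues = []
        if self.deployed_in_eu and self.risk_tier == "high":
            if not self.risk_management_doc:
                issues.append("missing risk-management documentation")
            if not self.data_governance_doc:
                issues.append("missing data-governance documentation")
        return issues

# Audit the inventory and surface compliance gaps.
inventory = [
    AISystemRecord("cv-screener", "rank job applicants", "high", deployed_in_eu=True),
    AISystemRecord("spam-filter", "filter inbound email", "minimal", deployed_in_eu=True),
]
for record in inventory:
    for issue in record.gaps():
        print(f"{record.name}: {issue}")
```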


Experts Weigh In On The Act


More experts have shared their views on the Act and its impact on different industries:


Our Experts:


Jad Jebara, President & CEO, Hyperview
Denas Grybauskas, Head of Legal, Oxylabs
Erin Nicholson, Global Head of Data Protection, Privacy, and AI Compliance, Thoughtworks
Jacob Beswick, Director of AI Governance, Dataiku
Vasagi Kothandapani, President of TrainAI, RWS
Karthik Sj, General Manager of AI, LogicMonitor
Yohan Lobo, Industry Solutions Manager, Financial Services, M-Files
Paul Cardno, Global Digital Automation & Innovation Senior Manager, 3M
Pieter Arntz, Senior Threat Researcher, Malwarebytes
Curtis Wilson, Staff Data Engineer, Synopsys Software Integrity Group
Eleanor Lightbody, CEO, Luminance


Jad Jebara, President & CEO, Hyperview


“We are in the early stages of the AI journey, and as with every technological advancement, legislation tends to lag behind. The initial steps taken by the EU AI Act are commendable, but we must recognise that the rapid advancements in AI require equally swift and iterative evolution of our laws.

“Ideally, this should be done through global collaboration, similar to the recent OECD initiative that established a 15% global minimum corporate tax rate.

“As Uncle Ben from ‘Spider-Man’ famously said, ‘With great power comes great responsibility.’

“Consequently, laws and regulations must address the following key areas:

Policy compliance: Implementing and auditing regulations efficiently and seamlessly. The necessity for smooth and efficient processes in this area cannot be overstated.
Risk management: Proactively detecting and mitigating risks, including fairness issues, bias, drift, hallucinations, IP and copyright infringement, jurisdictional requirements, and data privacy regulations. The potential risks in AI are vast, underscoring the complexity of this challenge.
Lifecycle governance: Effectively managing, monitoring, and governing AI models throughout their lifespan.
Accessibility and fair competition: Ensuring AI development isn’t limited to large corporations and multinationals but remains accessible to a diverse range of entities.”


Denas Grybauskas, Head of Legal, Oxylabs


“As the AI Act comes into force, the main business challenge will be uncertainty in its first years. Various institutions, including the AI Office, courts, and other regulatory bodies, will need time to adjust their positions and interpret the letter of the law. During this period, businesses will have to operate in a partial unknown, lacking clear answers as to whether the compliance measures they put in place are solid enough.

“One business compliance risk that is not being discussed lies in the fact that the AI Act will affect not only firms that directly deal with AI technologies but the wider tech community as well.

“Currently, the AI Act lays down explicit requirements and limitations that target providers (i.e., developers), deployers (i.e., users), importers, and distributors of artificial intelligence systems and applications. However, some of these provisions might also bring indirect liability to the third parties participating in the AI supply chain, such as data collection companies.

“Most AI systems today are based on machine learning models that require an abundance of data for training to ensure that the model has an adequate contextual understanding, is not outrightly biased, and does not hallucinate its outputs.

“Today, AI developers are looking for ways to scrape as much publicly available web data as possible. Although the AI Act does not target data-as-a-service (DaaS) companies and web scraping providers, these firms might indirectly inherit certain ethical and legal obligations.

“A prime example is web scraping companies based in the EU, which will have to ensure they do not supply data to firms developing prohibited AI systems. If a company willingly cooperates with an AI firm that, under EU regulation, is breaking the law, such cooperation might bring legal liability.

“Moreover, web scraping providers will need to install robust know-your-customer (KYC) procedures to ensure their infrastructure is used ethically and lawfully, ensuring an AI firm is collecting only the data they are allowed to collect, not copyright-protected information.

“Another broad compliance-related risk that I can foresee comes from the decision to grant some exemptions under the AI Act for systems based on free and open-source licences.

“There is no consolidated, single definition of “open-source AI”, and it is unclear how the widely defined open-source model might be applied to AI. This situation has already resulted in companies falsely branding their systems as “open-source AI” for marketing purposes. Without clear definitions, even bigger risks will manifest if businesses start abusing the term to win legal exemptions.

“The AI Act has the potential to establish trust across the industry but may also be detrimental to innovation across the technology industry. Organisations must be on their toes, as they may face penalties in the millions for severe violations involving high-risk AI systems.”


Erin Nicholson, Global Head of Data Protection, Privacy, and AI Compliance, Thoughtworks


“The EU AI Act places human oversight, explainability, and data governance centre stage. This is critical, both in terms of mitigating bias and discrimination in AI outputs and decision making, and also as a cornerstone of public trust.

“The Global AI Index shows the UK lagging when it comes to its AI ‘operating environment’ – including the public’s opinion on artificial intelligence. There is much work to be done to educate the public on AI and its usage, and further regulation can help build public trust.

“Transparency and ethics in AI also matter hugely. People should be aware when they’re interacting with artificial, not human, intelligence, for example.

“Abuses like manipulating user preferences or utilising AI for harmful purposes like social mobilisation or creating illegal content must be actively discouraged.

“A strong example here is the way companies frequently fall into a data protection blind spot by mistakenly assuming ‘personal data’ only applies to their customers – leaving employee data dangerously exposed to damaging data breaches.

“Important employee data, from home addresses, bank account details, and IP addresses to even more sensitive details like political opinions or genetic data, could potentially be accessed by unauthorised bad actors.”


Jacob Beswick, Director of AI Governance, Dataiku


“Today marks the EU AI Act officially coming into force, and given its extraterritorial application, many UK businesses will be preparing to comply with the new rules in order to continue operations within the EU.

“As one of the most comprehensive pieces of AI regulation passed to date, preparing for compliance is both a step into the unknown and an interesting bellwether for what might come in terms of AI-specific regulatory obligations across the globe.

“With the countdown now starting until the regulation fully applies, there are a number of steps UK businesses should be taking over the next 18 or so months to ensure they are prepared.

“First, UK businesses should take stock of their AI assets and review what AI systems are operationalised within Europe. Once businesses have a full overview of where their AI assets are and where they are operating, they should move on to qualifying these assets.

“As a step towards EU AI Act compliance readiness, businesses should extend their understanding of where they are deploying AI systems to the intended purpose of these systems, the technologies used (e.g. generative AI), and where these systems fall in terms of the risk tiering established in the EU AI Act.

“Determining exposure to future compliance obligations will enable businesses to begin taking action to mitigate the risk of non-compliance and avoid disruptions to business operations whether through fines or pulling operational systems from the market.”


Vasagi Kothandapani, President of TrainAI, RWS


“The EU AI Act will significantly impact how AI is used right now within the financial sector, but to realise positive impacts, organisations first need to overcome some hurdles. The financial sector is already using AI extensively, which is positive, as organisations can explore different applications of AI within their business.

“However, there are major concerns that the Act may stifle innovation. With the Act in place, it’s important we ensure that there is still a level playing field for global firms to invest in AI and maintain the momentum within the sector.

“Additionally, the EU AI Act poses some challenging questions relating to what data should be collected and made available to the supply chain. Within this debate, organisations must consider how they can demonstrate transparency in the way data is used to train and inform AI models.

“For example, where financial institutions are developing additional AI-based creditworthiness processes, which are classified as high-risk AI use cases under the EU AI Act, they will need to not only address heightened requirements but also focus on the communication of privacy and safety to consumers.

“On the other end of the spectrum, the Act could also make a great positive impact. For instance, it can help firms manage risk in critical areas such as fraud and financial crime, the avoidance of discrimination and in delivering transparency to stakeholders.

“It can also positively impact the advancement of AI literacy among financial services workers by helping to develop a code of practice. It’s important that we have regulation within the industry, and as long as negative impacts are negotiated carefully, the positives will far outweigh them.”

Karthik Sj, General Manager of AI, LogicMonitor


“My concern about the EU AI Act is with the ‘high’ risk category. By saying all high-risk AI systems need to be assessed before going to market, as well as throughout their lifecycle, we create a number of additional hoops for companies in the AI ecosystem to jump through. High-risk AI systems will also be subject to public complaints to national authorities.

“Regulating AI is no mean feat, and I applaud all global efforts to encourage the governance of AI that will ultimately support public buy-in. But I fear this regulation, in practice, will unintentionally stifle innovation and hinder AI deployment and adoption.

“Of course we need the correct safeguards in place when dealing with any technology, but by taking an overly stringent approach and applying unnecessary red tape while the AI technology is in its nascent stage, we will collectively lose out on the transformational capabilities of AI that stand to benefit all corners of society.

“I have no doubt that other regions will shortly follow in the EU’s footsteps, and the global impact of this legislation should not be overlooked. I’m also intrigued to see if other regions, particularly the UK, will be bold enough to challenge this regulation in favour of a more pro-innovation approach.

“I wonder too whether the Act will lead to the creation of low-regulation hotspots where companies will put down roots, thereby creating ‘AI hubs’.”


Yohan Lobo, Industry Solutions Manager, Financial Services, M-Files


“Now that GenAI has broken into the mainstream, businesses across industries are rushing to implement these solutions and get ahead of the competition. However, firms can only implement a GenAI tool if they are sure it is safe and reliable.

“The EU’s AI Act adds another layer of complexity for business leaders with concerns about the safety of their models. The majority of GenAI solutions will fall into the first tier of regulation, where organisations must prove that the data the model is grounded in can be relied upon.

“The impact of the bill is likely to permeate beyond the EU, with other nations and governing bodies sure to follow suit if the legislation is a success. As a result, it’s crucial that companies developing GenAI models consider how they can better align with any upcoming regulatory changes.

“Satisfying the requirements of the EU AI Act is dependent upon three key pillars: trust, security, and accuracy. The easiest way to comply with the legislation is by deploying a solution that operates on reliable internal data.

“A question all companies should ask themselves is: do they trust their data? If so, they can count on the results their GenAI tool produces.

“Correctly implementing an approach driven by internal data means businesses can act with certainty on their AI outputs, improving productivity by equipping knowledge workers with tools to quickly search for, access and analyse the information they need.

“When a model is given time to adapt to the data it is built upon and learn more about the requirements of the individual employees it services, it will grow in intelligence and intuition and further support the needs of its users.

“Trust, security, and accuracy are all intrinsically linked, and companies looking to embed a GenAI strategy that complies with the EU AI Act should begin by organising data across all operations.

“In doing so, they can lay the foundation for a GenAI tool that protects their customers while delivering vital work automation that will increase efficiency and streamline processes for employees.”


Paul Cardno, Global Digital Automation & Innovation Senior Manager, 3M


“With nearly 80% of UK adults now believing AI needs to be heavily regulated, the introduction of the EU’s AI Act is something that businesses have long been waiting for. We know that AI is shaping the future, but companies will only be able to reap the rewards if they have the confidence to rethink existing processes and break away from entrenched structures.

“Like any new technology, AI could potentially cause more problems, faster, if used in the wrong way. While the EU Act isn’t perfect and needs to be assessed in relation to other global regulation, having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution, ensuring it has a safe, positive ongoing influence for all organisations operating across the EU. That can only be a promising step forwards for the industry.”


Pieter Arntz, Senior Threat Researcher, Malwarebytes


“Looking at the EU AI Act, I am immediately reminded of NIS2. They are very much alike. And that actually makes sense, because, as always, laws are running behind technological developments.

“Even though the law provides some guidelines, it is mostly about classifying AI models based on the risk they pose. This means that a lot of the legislation will require an explanation of terms that are not very well known in the judicial system.

“For example, systems considered a threat to people will be banned. This may seem like a clear directive to follow, but it is immediately obfuscated by examples that talk about privacy, discrimination, and the use of biometrics, as there are many cases where exceptions for law enforcement exist in these areas.

“Many of the guidelines are based on old-fashioned product safety regulations which are hard to translate into regulations for something that’s evolving. A screwdriver does not turn into a chainsaw overnight, whereas a friendly AI-driven chatbot turned into a bad-tempered racist in just a few hours.

“This means it is hard to judge a book by its cover, or in this case, even by the early versions. And that’s fine when we’re talking about AI models that are specifically designed for one goal. But the much more general-purpose Large Language Models (LLMs) are a lot harder to classify, let alone the open-source models that can be adapted by users to fit their own purposes.

“Personally, I think it’s good that legislators have thought about the issue, as it provides law enforcement with some tools to keep it under control, but it will always be subject to changes that develop along with the new trends and features that become available.”


Curtis Wilson, Staff Data Engineer, Synopsys Software Integrity Group


“The greatest problem facing AI developers is not regulation, but a lack of trust in AI. For an AI system to reach its full potential, it needs to be trusted by the people who use it. Internally, we have worked hard to build this trust using rigorous testing regimes, continuous monitoring of live systems, and thorough knowledge-sharing sessions with end users to ensure they understand where, when, and to what extent each system can be trusted.

“Externally, though, I see regulatory frameworks like the EU AI Act as an essential component to building trust in AI. The strict rules and punishing fines will deter careless developers and help customers feel more confident in trusting and using AI systems.

“The Act itself is mostly concerned with regulating high-risk systems and foundational models. However, many of the requirements already align with data science best practices, such as risk management, testing procedures, and thorough documentation. Ensuring that all AI developers adhere to these standards is to everyone’s benefit.

“As with GDPR, any UK business that sells into the EU market will need to concern itself with the EU AI Act. However, even those that don’t can’t ignore it. Certain parts of the AI Act, particularly those relating to AI as a safety component in consumer goods, might also apply in Northern Ireland automatically as a consequence of the Windsor Framework.

“The UK government is moving to regulate AI as well, and a whitepaper released by the government last month highlighted the importance of interoperability with EU (and US) AI regulation. UK companies aligning themselves with the EU AI Act will not only maintain access to the EU market but hopefully get ahead of the curve for the upcoming UK regulation.

“From software licensing to data privacy regulations, UK businesses are already used to dealing with EU regulatory frameworks. Many of the obligations laid out in the Act are simply data science best practices and things companies should already be doing. There are some additional obligations around registration and certification, which will probably lead to some friction.

“Small companies and start-ups will experience issues more strongly; the regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses. However, these sandboxes are to be set up on the national level by individual member states, and so UK businesses may not have access.”


Eleanor Lightbody, CEO, Luminance


“The EU AI Act is a historic piece of legislation for responsible AI regulation, balancing the prevention of harmful AI use with clear guidelines for acceptable applications.

“Indeed, there is such a breadth of AI technology and varying applications of Large Language Models – from general purpose chatbots to highly specialised domain-specific AI. A one-size-fits-all approach to AI regulation risks being rigid, and given the pace of AI development, quickly outdated.

“With the passing of the Act, all eyes are now on the new Labour government to signpost the UK’s intentions for regulation in this crucial sector. Implementing a flexible, adaptive regulatory system will be key, and this involves close collaboration with leading AI companies of all sizes.

“Only by striking the right balance between innovation, regulation and collaboration can the UK maintain its long heritage of technological brilliance and achieve the type of AI-driven growth that the Labour Party is promising.”